Theory


The Genetic Algorithm (GA) is a search metaheuristic inspired by Charles Darwin's theory of evolution. The algorithm mimics the process of natural selection, where the fittest individuals are selected for reproduction in order to produce the offspring of the next generation [1]. The Genetic Algorithm was developed by J. Holland [2].

According to Darwin's theory of evolution, individuals whose characteristics are superior to those of the rest of the population have a greater chance of survival, and therefore these superior characteristics are passed on to the next generations. The second part of Darwin's theory states that, during reproduction, events occur that change the characteristics of the offspring; when these changes are beneficial, they increase the offspring's probability of survival [3,4].

In the GA, new populations are produced by the iterative application of genetic operators to the agents of the current population. The genetic operators considered in this algorithm are described below [5]:

Genetic operators

Selection

The selection function specifies how the Genetic Algorithm chooses parents for the next generation. Below, you can see some selection operators.

Roulette Wheel

Roulette wheel selection, also known as fitness-proportional selection, selects parental solutions randomly, with a probability that depends on the fitness of each solution. To this end, the fitness of each solution is normalized by the sum of all fitness values in the population, usually by division. This fraction of the total fitness can be understood as the probability of a solution being selected [9].

Table 1 shows the proportional fitness \( \left(fit_i/\sum_{i=1}^n fit_i\right) \) of a population. Figure 1 shows a graphical schematic of the roulette wheel algorithm.

Population id    Fitness    % of total
0                6.31       31
1                1.11       5
2                8.48       38
3                2.57       12
4                3.08       14

Table 1. Population fitness [10].

Figure 1. Roulette wheel schema [10].
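
As an illustration of how this proportional fitness is used in practice, a minimal Python sketch of roulette wheel selection is shown below. The function name and the use of Python's random module are assumptions for illustration, not the METApy implementation.

import random

def roulette_wheel(fitness):
    # Fitness-proportional selection: each agent is picked with a
    # probability equal to its share of the total fitness (see Table 1)
    total = sum(fitness)
    r = random.random()  # uniform random number in [0, 1)
    cumulative = 0.0
    for i, fit in enumerate(fitness):
        cumulative += fit / total
        if r <= cumulative:
            return i
    return len(fitness) - 1  # guard against floating-point round-off

# Example with the fitness values of Table 1
print(roulette_wheel([6.31, 1.11, 8.48, 2.57, 3.08]))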

Crossover

The crossover function specifies how the Genetic Algorithm combines individuals or parents to form a crossover child for the next generation. Below, you can see some crossover operators.

Linear crossover [6]

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), three new points (offspring) are generated. See equations (1) to (3). \(k\) is the \(k\)th component of the design variable vector and \(t\) is the current iteration.

\[ ch_{a,k} = 0.50 \cdot p_{0,k}^{t} + 0.50 \cdot p_{1,k}^{t}\]

(1)

\[ ch_{b,k} = 1.50 \cdot p_{0,k}^{t} - 0.50 \cdot p_{1,k}^{t}\]

(2)

\[ ch_{c,k} = -0.50 \cdot p_{0,k}^{t} + 1.50 \cdot p_{1,k}^{t}\]

(3)

The best one of the three points (offspring a \(=\;\mathbf{ch}_{a}\), offspring b \(=\;\mathbf{ch}_{b}\) and offspring c \(=\;\mathbf{ch}_{c}\)) is selected. See equation (4).

\[min(of_{ch_{a}}, of_{ch_{b}}, of_{ch_{c}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b, \symbf{ch}_c)\]

(4)
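
A minimal Python sketch of equations (1) to (4) follows; the 2D sphere objective and the helper names are illustrative assumptions, not the METApy implementation.

def linear_crossover(p0, p1, of_function):
    # Equations (1) to (3): build the three offspring component by component
    ch_a = [0.50 * a + 0.50 * b for a, b in zip(p0, p1)]
    ch_b = [1.50 * a - 0.50 * b for a, b in zip(p0, p1)]
    ch_c = [-0.50 * a + 1.50 * b for a, b in zip(p0, p1)]
    # Equation (4): keep the offspring with the lowest objective value
    return min((ch_a, ch_b, ch_c), key=of_function)

# Example with the 2D sphere function and the first two parents of Example 1
sphere = lambda x: sum(v * v for v in x)
print(linear_crossover([-0.74, 1.25], [3.58, -3.33], sphere))  # offspring a ≈ [1.42, -1.04]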

Blend crossover (BLX-\(\alpha\)) [7]

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), two new points (offspring) are generated. See equations (5) to (7). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration.

\[ ch_{a,k} = min( p_{0,k}^{t}, p_{1,k}^{t} ) - \alpha \cdot d_{k}^{t}\]

(5)

\[ ch_{b,k} = max( p_{0,k}^{t}, p_{1,k}^{t} ) + \alpha \cdot d_{k}^{t}\]

(6)

\[ d_{k}^{t} = \left| p_{0,k}^{t} - p_{1,k}^{t} \right| \]

(7)

The best one of the two points (offspring a \(=\;\mathbf{ch}_{a}\) and offspring b \(=\;\mathbf{ch}_{b}\)) is selected. See equation (8).

\[min(of_{ch_{a}}, of_{ch_{b}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b)\]

(8)
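
Equations (5) to (8) can be sketched in the same style (names are again illustrative assumptions):

import random

def blend_crossover(p0, p1, of_function, alpha=None):
    if alpha is None:
        alpha = random.random()  # uniform random number in [0, 1)
    d = [abs(a - b) for a, b in zip(p0, p1)]                        # equation (7)
    ch_a = [min(a, b) - alpha * dk for a, b, dk in zip(p0, p1, d)]  # equation (5)
    ch_b = [max(a, b) + alpha * dk for a, b, dk in zip(p0, p1, d)]  # equation (6)
    # Equation (8): keep the offspring with the lowest objective value
    return min((ch_a, ch_b), key=of_function)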

Heuristic crossover [11]

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), two new points (offspring) are generated. See equations (9) and (10). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration.

\[ ch_{a,k} = p_{0,k}^t + \alpha \cdot \left( p_{0,k}^{t} - p_{1,k}^{t} \right) \]

(9)

\[ ch_{b,k} = p_{1,k}^t + \alpha \cdot \left( p_{1,k}^{t} - p_{0,k}^{t} \right) \]

(10)

The best one of the two points (offspring a \(=\;\mathbf{ch}_{a}\) and offspring b \(=\;\mathbf{ch}_{b}\)) is selected. See equation (11).

\[min(of_{ch_{a}}, of_{ch_{b}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b)\]

(11)
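
A similar sketch of equations (9) to (11), under the same assumptions:

import random

def heuristic_crossover(p0, p1, of_function, alpha=None):
    if alpha is None:
        alpha = random.random()  # uniform random number in [0, 1)
    ch_a = [a + alpha * (a - b) for a, b in zip(p0, p1)]  # equation (9)
    ch_b = [b + alpha * (b - a) for a, b in zip(p0, p1)]  # equation (10)
    # Equation (11): keep the offspring with the lowest objective value
    return min((ch_a, ch_b), key=of_function)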

Simulated Binary crossover [12]

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), two new points (offspring) are generated. See equations (12) to (14). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration.

\[ ch_{a,k} = 0.50 \cdot \left( \left(1 + \beta\right) \cdot p_{0,k}^{t} + \left(1 - \beta\right) \cdot p_{1,k}^{t} \right) \]

(12)

\[ ch_{b,k} = 0.50 \cdot \left( \left(1 - \beta\right) \cdot p_{0,k}^{t} + \left(1 + \beta\right) \cdot p_{1,k}^{t} \right) \]

(13)

\[ \beta = \left\{\begin{matrix} \left(2 \cdot \alpha \right)^\frac{1}{1+\eta_c} \;\; if \;\alpha \leq 0.50 \\ \left(\frac{1}{2 \cdot (1-\alpha)}\right)^\frac{1}{1+\eta_c} \; \; otherwise \end{matrix}\right. \]

(14)

The parameter \(\beta(\alpha)\) depends on the random number \(\alpha\). \(\beta\) is called the spread factor and is defined as the ratio between the spread of the children's values and that of the parents' values. The index \(\eta_c\) (a user-defined control parameter) alters the exploration capability of the crossover operator: a small index induces a higher probability of creating children's values distant from the parents' values, whereas a high index tends to create solutions very similar to the parents [12]. \(\eta_c\) is any nonnegative real number; moderate values of \(\eta_c\) are 2 to 5 [13].

The best one of the two points (offspring a \(=\;\mathbf{ch}_{a}\) and offspring b \(=\;\mathbf{ch}_{b}\)) is selected. See equation (15).

\[min(of_{ch_{a}}, of_{ch_{b}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b)\]

(15)
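
A sketch of equations (12) to (15); the default \(\eta_c = 2\) follows the moderate range quoted above, and the names remain illustrative assumptions:

import random

def simulated_binary_crossover(p0, p1, of_function, eta_c=2.0):
    alpha = random.random()  # uniform random number in [0, 1)
    # Equation (14): spread factor beta as a function of alpha and eta_c
    if alpha <= 0.50:
        beta = (2.0 * alpha) ** (1.0 / (1.0 + eta_c))
    else:
        beta = (1.0 / (2.0 * (1.0 - alpha))) ** (1.0 / (1.0 + eta_c))
    ch_a = [0.50 * ((1 + beta) * a + (1 - beta) * b) for a, b in zip(p0, p1)]  # equation (12)
    ch_b = [0.50 * ((1 - beta) * a + (1 + beta) * b) for a, b in zip(p0, p1)]  # equation (13)
    # Equation (15): keep the offspring with the lowest objective value
    return min((ch_a, ch_b), key=of_function)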

Arithmetic crossover

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), two new points (offspring) are generated. See equations (16) and (17). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) (weighting factor) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration.

\[ ch_{a,k} = \alpha \cdot p_{0,k}^t + \left( 1 - \alpha \right) \cdot p_{1,k}^{t} \]

(16)

\[ ch_{b,k} = \alpha \cdot p_{1,k}^t + \left( 1 - \alpha \right) \cdot p_{0,k}^{t} \]

(17)

The best one of the two points (offspring a \(=\;\mathbf{ch}_{a}\) and offspring b \(=\;\mathbf{ch}_{b}\)) is selected. See equation (18).

\[min(of_{ch_{a}}, of_{ch_{b}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b)\]

(18)
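
Equations (16) to (18) sketched under the same assumptions:

import random

def arithmetic_crossover(p0, p1, of_function, alpha=None):
    if alpha is None:
        alpha = random.random()  # weighting factor, uniform in [0, 1)
    ch_a = [alpha * a + (1 - alpha) * b for a, b in zip(p0, p1)]  # equation (16)
    ch_b = [alpha * b + (1 - alpha) * a for a, b in zip(p0, p1)]  # equation (17)
    # Equation (18): keep the offspring with the lowest objective value
    return min((ch_a, ch_b), key=of_function)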

Laplace crossover [14]

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), two new points (offspring) are generated. See equations (19) to (22). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration. \(\mu \in \mathbb{R} \) is called the location parameter and \(\sigma > 0\) is the scale parameter.

\[ ch_{a,k} = p_{0,k}^{t} + \beta \cdot d_{k}^{t}\]

(19)

\[ ch_{b,k} = p_{1,k}^{t} + \beta \cdot d_{k}^{t}\]

(20)

\[ d_{k}^{t} = \left| p_{0,k}^{t} - p_{1,k}^{t} \right| \]

(21)

\[ \beta = \left\{\begin{matrix} \mu - \sigma \cdot \ln (\alpha) \;\; if \;\alpha \leq 0.50 \\ \mu + \sigma \cdot \ln (\alpha) \; \; otherwise \end{matrix}\right. \]

(22)

For smaller values of \(\sigma\), the offspring are likely to be produced near the parents, while for larger values of \(\sigma\) the offspring are expected to be produced far from the parents. The value of \(\sigma\) is usually set to 1 or 0.5 and \(\mu\) is set to 0 [15].

The best one of the two points (offspring a \(=\;\mathbf{ch}_{a}\) and offspring b \(=\;\mathbf{ch}_{b}\)) is selected. See equation (23).

\[min(of_{ch_{a}}, of_{ch_{b}}) \; \Rightarrow \; \symbf{x}^{t+1} = best(\symbf{ch}_a, \symbf{ch}_b)\]

(23)
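
A sketch of equations (19) to (23); the defaults \(\mu = 0\) and \(\sigma = 0.5\) follow the values quoted above, and the small guard on \(\alpha\) only avoids \(\ln(0)\):

import math
import random

def laplace_crossover(p0, p1, of_function, mu=0.0, sigma=0.5):
    alpha = max(random.random(), 1e-12)  # uniform in [0, 1), kept away from 0 for the log
    # Equation (22): beta generated from a Laplace distribution with parameters mu and sigma
    if alpha <= 0.50:
        beta = mu - sigma * math.log(alpha)
    else:
        beta = mu + sigma * math.log(alpha)
    d = [abs(a - b) for a, b in zip(p0, p1)]        # equation (21)
    ch_a = [a + beta * dk for a, dk in zip(p0, d)]  # equation (19)
    ch_b = [b + beta * dk for b, dk in zip(p1, d)]  # equation (20)
    # Equation (23): keep the offspring with the lowest objective value
    return min((ch_a, ch_b), key=of_function)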

Binomial crossover

From the two parent points \(\symbf{p_0}\) and \(\symbf{p_1}\), one new point (offspring) is generated. See equation (24). \(k\) is the \(k\)th component of the design variable vector, \(\alpha\) is a uniformly distributed random number such that \(\alpha \in \left[0, 1 \right]\), and \(t\) is the current iteration.

\[ x_{i,k}^{t+1} = \left\{\begin{matrix} p_{1,k}^{t} \;\; if \;\alpha\leq p_c\\ p_{0,k}^t \; \; otherwise \end{matrix}\right. \]

(24)

\(p_c\) is the crossover rate \(\left(p_c \in \left[0,1\right] \right)\) selected by the user.
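
Equation (24) sketched under the same assumptions:

import random

def binomial_crossover(p0, p1, p_c):
    # Equation (24): each component is inherited from p1 with probability p_c
    # and from p0 otherwise
    child = []
    for a, b in zip(p0, p1):
        alpha = random.random()  # uniform random number in [0, 1)
        child.append(b if alpha <= p_c else a)
    return child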

Mutation

Mutation specifies how the Genetic Algorithm makes small random changes to the individuals in the population in order to create mutated children. Mutation provides genetic diversity and enables the Genetic Algorithm to search a broader space. All formats of the mutation procedure are available in the GA framework.

Hill Climbing mutation

A mutation example is given in the HC algorithm theory.
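
As an illustration, the Gaussian neighbor generation seen in Example 1 can be sketched as follows; the per-dimension standard deviation sigma = cov · |x_k| mirrors the sigmas reported there and is an assumption about the HC mutation, not a definitive description of the METApy implementation.

import random

def hill_climbing_mutation(x, cov=0.15):
    # Each dimension is resampled from a normal distribution centred on the
    # current value, with standard deviation proportional to its magnitude
    neighbor = []
    for xk in x:
        sigma = cov * abs(xk)
        neighbor.append(random.gauss(xk, sigma))
    return neighbor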

Algorithm

1:  Input initial parameters (p_c, p_m, n_population, n_iterations, x_lower, x_upper, fit_function, obj_function, n_dimensions)
2:  Input initial guess (x_pop)
3:  Calculate of and fit (initial population)
4:  for iter in range(n_iterations):
5:      Apply selection operator
6:      r = random number [0,1]
7:      if r <= p_c:
8:         Apply crossover operator
9:      r = random number [0,1]
10:     if r <= p_m:
11:        Apply mutation operator
12:     if fit(x_temp) > fit(x_pop):
13:        x_pop(iter+1) = x_temp
14:     else:
15:        x_pop(iter+1) = x_pop(iter)

See GA algorithm in METApy Framework.
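
For readers who want to experiment outside the framework, a compact sketch of the loop above is given below. It reuses the linear_crossover and hill_climbing_mutation functions sketched earlier and the 2D sphere function; all names are illustrative assumptions rather than the METApy API, and clipping to the bounds x_lower and x_upper is omitted for brevity.

import random

def sphere(x):
    return sum(v * v for v in x)

def fitness(of):
    return 1.0 / (1.0 + of)  # fitness definition used in the report of Example 1

def ga_sketch(pop, p_c=0.82, p_m=0.12, cov=0.15, n_iterations=2):
    fits = [fitness(sphere(x)) for x in pop]
    for _ in range(n_iterations):
        for i, x in enumerate(pop):
            # Selection: roulette wheel over the other agents
            others = [j for j in range(len(pop)) if j != i]
            mate = random.choices(others, weights=[fits[j] for j in others])[0]
            x_temp = list(x)
            if random.random() <= p_c:                 # crossover
                x_temp = linear_crossover(x_temp, pop[mate], sphere)
            if random.random() <= p_m:                 # mutation
                x_temp = hill_climbing_mutation(x_temp, cov)
            fit_temp = fitness(sphere(x_temp))
            if fit_temp > fits[i]:                     # greedy acceptance
                pop[i], fits[i] = x_temp, fit_temp
    return pop, fits

pop, fits = ga_sketch([[-0.74, 1.25], [3.58, -3.33], [1.50, 1.50]])
print(pop, fits)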

Example 1

Use the Genetic Algorithm optimization method to optimize the 2D sphere function. Use a total of 2 iterations to perform the optimization. Consider the limits \(\mathbf{x}_L = [-5.0, -5.0]\) and \(\mathbf{x}_U = [5.0, 5.0]\) for the problem design variables. Consider the initial guess (three agents) \(\mathbf{pop}_0 = [-0.74, 1.25]\), \(\mathbf{pop}_1 = [3.58, -3.33]\) and \(\mathbf{pop}_2 = [1.50, 1.50]\). Use the roulette wheel for the selection procedure, linear crossover for the crossover (82% rate) and hill climbing mutation (12% rate, \(cov=15\%\) and Gaussian generator).

Solution
Genetic Algorithm 01- report 

Initial population
x0 = [-0.74, 1.25], of_pop 2.1101, fit 0.3215330696762162 - best solution
x1 = [3.58, -3.33], of_pop 23.9053, fit 0.040152096140179 
x2 = [1.5, 1.5], of_pop 4.5, fit 0.18181818181818182 

Iterations

Iteration: 1
Pop id: 0 - particle movement
    Selection operator
    sum(fit) = 0.22197027795836083
    probs(fit) = [0.0, 0.18088951597254424, 0.8191104840274557]
    selected agent id = 1
    Crossover operator - Linear crossover
    offspring a = [1.42, -1.04], of_a 3.098
    offspring b = [-2.9, 3.54], of_b 20.9416
    offspring c = [5.0, -5.0], of_c 50.0
    update x = [1.42, -1.04], of = 3.098, fit = 0.2440214738897023
    Mutation operator
    Dimension 0: mean = 1.42, sigma = 0.21299999999999997, neighbor = 1.4985129991098818
    Dimension 1: mean = -1.04, sigma = 0.15600000000000003, neighbor = -1.0535911109506693
    update x = [1.4985129991098818, -1.0535911109506693], of = 3.355595437575558, fit = 0.22958973447649375
    fit_i_temp < fit_pop[pop] - not accept this solution
Pop id: 1 - particle movement
    Selection operator
    sum(fit) = 0.5033512514943981
    probs(fit) = [0.6387846831047258, 0.0, 0.36121531689527414]
    selected agent id = 2
    Crossover operator - Linear crossover
    offspring a = [2.54, -0.915], of_a 7.288825
    offspring b = [4.62, -5.0], of_b 46.3444
    offspring c = [0.45999999999999996, 3.915], of_c 15.538825000000001
    update x = [2.54, -0.915], of = 7.288825, fit = 0.12064436153495822
    No mutation 0.9041777148613984 > p_m = 0.12 
    fit_i_temp > fit_pop[pop] - accept this solution
Pop id: 2 - particle movement
    Selection operator
    sum(fit) = 0.4421774312111744
    probs(fit) = [0.7271584820498423, 0.2728415179501576, 0.0]
    selected agent id = 0
    Crossover operator - Linear crossover
    offspring a = [0.38, 1.375], of_a 2.035025
    offspring b = [2.62, 1.625], of_b 9.505025
    offspring c = [-1.8599999999999999, 1.125], of_c 4.725225
    update x = [0.38, 1.375], of = 2.035025, fit = 0.32948657754054744
    No mutation 0.6805018349694458 > p_m = 0.12 
    fit_i_temp > fit_pop[pop] - accept this solution
update solutions
x0 = [-0.74, 1.25], of_pop 2.1101, fit 0.3215330696762162 
x1 = [2.54, -0.915], of_pop 7.288825, fit 0.12064436153495822 
x2 = [0.38, 1.375], of_pop 2.035025, fit 0.32948657754054744 - best solution

Iteration: 2
Pop id: 0 - particle movement
    Selection operator
    sum(fit) = 0.45013093907550566
    probs(fit) = [0.0, 0.2680205937026718, 0.7319794062973282]
    selected agent id = 2
    No crossover 0.8211070296158199 > p_c = 0.82 
    No mutation 0.2091585539994938 > p_m = 0.12 
    fit_i_temp < fit_pop[pop] - not accept this solution
Pop id: 1 - particle movement
    Selection operator
    sum(fit) = 0.6510196472167636
    probs(fit) = [0.4938914993592482, 0.0, 0.5061085006407519]
    selected agent id = 2
    No crossover 0.931387636116316 > p_c = 0.82 
    Mutation operator
    Dimension 0: mean = 2.54, sigma = 0.381, neighbor = 1.8273474406304766
    Dimension 1: mean = -0.915, sigma = 0.13725, neighbor = -1.071939271796121
    update x = [1.8273474406304766, -1.071939271796121], of = 4.488252471197551, fit = 0.18220736113143812
    fit_i_temp > fit_pop[pop] - accept this solution
Pop id: 2 - particle movement
    Selection operator
    sum(fit) = 0.5037404308076543
    probs(fit) = [0.6382911714287011, 0.361708828571299, 0.0]
    selected agent id = 0
    Crossover operator - Linear crossover
    offspring a = [-0.18, 1.3125], of_a 1.75505625
    offspring b = [0.9400000000000001, 1.4375], of_b 2.9500062500000004
    offspring c = [-1.2999999999999998, 1.1875], of_c 3.1001562499999995
    update x = [-0.18, 1.3125], of = 1.75505625, fit = 0.3629689956421035
    No mutation 0.964612378290656 > p_m = 0.12 
    fit_i_temp > fit_pop[pop] - accept this solution
update solutions
x0 = [-0.74, 1.25], of_pop 2.1101, fit 0.3215330696762162 
x1 = [1.8273474406304766, -1.071939271796121], of_pop 4.488252471197551, fit 0.18220736113143812 
x2 = [-0.18, 1.3125], of_pop 1.75505625, fit 0.3629689956421035 - best solution

Reference list

[1] Katoch, S., Chauhan, S. S., Kumar, V. (2021). A review on genetic algorithm: past, present, and future. Multimedia Tools and Applications, 80, 8091–8126.
[2] Holland, J. H. (1992). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. The MIT Press.
[3] Masoumi, A. P., Tavakolpour-Saleh, A. R. (2020). Experimental assessment of damping and heat transfer coefficients in an active free piston Stirling engine using genetic algorithm. Energy.
[4] Tavakolpour, A. R., Mat Darus, I. Z., Tokhi, O., Mailah, M. (2010). Genetic algorithm-based identification of transfer function parameters for a rectangular flexible plate system. 23(8), 1388–1397.
[5] Alhaddad, W., Halabi, Y., Meree, H., Yu, Z. (2020). Optimum design method for simplified model of outrigger and ladder systems in tall buildings using genetic algorithm. Structures, 28, 2467–2487.
[6] Wright, A. H. (1991). Genetic Algorithms for Real Parameter Optimization. Foundations of Genetic Algorithms, Volume 1, 205–218.
[7] Eshelman, L. J., Schaffer, J. D. (1993). Real-Coded Genetic Algorithms and Interval-Schemata. Foundations of Genetic Algorithms, 187–202.
[8] Takahashi, M., Kita, H. (2001). A crossover operator using independent component analysis for real-coded genetic algorithms. Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, South Korea, 27–30 May 2001, 1, 643–649.
[9] Kramer, O. (2017). Genetic Algorithm Essentials. Studies in Computational Intelligence, Volume 679.
[10] Dalton, J. (2024). Newcastle Engineering Design Centre, Merz Court, Newcastle University.
[11] Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P. (1996). Parallel Problem Solving from Nature – PPSN IV: Proceedings of the 4th International Conference on Parallel Problem Solving from Nature, Berlin, Germany, September 22–26, 1996, 336–345.
[12] Chacón, J., Segura, C. (2018). Analysis and Enhancement of Simulated Binary Crossover. 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 08–13 July 2018.
[13] Deb, K., Agrawal, R. B. (1994). Simulated Binary Crossover for Continuous Search Space. Complex Systems, 9, 115–148.
[14] Deep, K., Thakur, M. (2007). A new crossover operator for real coded genetic algorithms. Applied Mathematics and Computation, 188(1), 895–911.
[15] Yu, H., Qiao, S., Heidari, A. A., El-Saleh, A. A., Bi, C., Mafarja, M., Cai, Z., Chen, H. (2022). Laplace crossover and random replacement strategy boosted Harris Hawks optimization: performance optimization and analysis. Journal of Computational Design and Engineering, 9(5), 1879–1916.