Theory


Hill Climbing is one of the earliest probabilistic optimization algorithms in the literature, and it is also known as a local search method [1].

The iterative procedure improves the current solution step by step until the stopping criterion is met. The process consists of generating random neighbors of the current solution \(\mathbf{x}_i\) according to equation (1), where \(\symbf{N}\) denotes a Gaussian or Uniform distribution whose mean is the current solution \(x_{i,k}^{t}\) and whose standard deviation \(\sigma\) is obtained from the coefficient of variation \(cov\) supplied by the user, according to equation (2). \(k\) is the \(k\)th component of the design variable vector \(\mathbf{x}_i\) and \(t\) is the current iteration.

\[x_{i,k}^{t+1} \sim \symbf{N}(x_{i,k}^{t}, \sigma)\]

(1)

\[\sigma = \left| x_{i,k}^{t} \right| \cdot \frac{cov}{100}\]

(2)

A small value of \(\sigma\) gives a higher probability of creating ‘near-parent’ solutions, while a large value of \(\sigma\) allows more distant solutions to be generated.
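As a minimal sketch of equations (1) and (2) (assuming NumPy; the function name and arguments are illustrative, not part of the METApy API), one component of a neighbor could be sampled as follows:

```python
import numpy as np

def neighbor_component(x_current, cov, rng=None):
    """Sample the k-th component of a neighbor solution, equations (1)-(2).

    x_current: current value x_{i,k}^t (mean of the distribution)
    cov: coefficient of variation in percent (e.g. 20 for 20%)
    """
    if rng is None:
        rng = np.random.default_rng()
    sigma = abs(x_current) * cov / 100   # equation (2); |.| keeps sigma non-negative
    return rng.normal(x_current, sigma)  # equation (1), Gaussian case
```

For example, `neighbor_component(-0.74, 20)` uses \(\sigma = 0.148\), which matches the first move of \(\mathbf{x}_0\) in the report of Example 1 below.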

Algorithm

1:  Input initial parameters (n_iterations, cov, n_population, x_lower, x_upper, obj_function, n_dimensions)
2:  Input initial guess (x_pop)
3:  Calculate of and fit of the initial population
4:  for iter in range(n_iterations):
5:      x_temp = neighbor of x_pop(iter), generated by equation (1)
6:      if fit(x_temp) > fit(x_pop(iter)):
7:          x_pop(iter+1) = x_temp
8:      else:
9:          x_pop(iter+1) = x_pop(iter)

The Hill Climbing algorithm only saves a new solution when the candidate improves the fitness of the current solution.
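For illustration, here is a minimal, self-contained Python sketch of the loop above for a single agent. This is not the METApy implementation; handling the bounds by clipping and using the objective value directly in the acceptance test (which, for minimization, is equivalent to comparing fitness) are assumptions.

```python
import numpy as np

def hill_climbing(obj_function, x_init, cov, n_iterations, x_lower, x_upper, seed=None):
    """Minimal Hill Climbing sketch for one agent, following the pseudocode above."""
    rng = np.random.default_rng(seed)
    x_pop = np.asarray(x_init, dtype=float)
    of_pop = obj_function(x_pop)
    for _ in range(n_iterations):
        # Neighbor generation, equations (1)-(2): sigma = |x| * cov / 100
        sigma = np.abs(x_pop) * cov / 100
        x_temp = rng.normal(x_pop, sigma)
        # Keep the candidate inside the design variable bounds
        x_temp = np.clip(x_temp, x_lower, x_upper)
        of_temp = obj_function(x_temp)
        # Accept the candidate only if it improves the current solution
        if of_temp < of_pop:
            x_pop, of_pop = x_temp, of_temp
    return x_pop, of_pop

# Example usage with the 2D sphere function from Example 1 below
sphere = lambda x: float(np.sum(x ** 2))
best_x, best_of = hill_climbing(sphere, [-0.74, 1.25], cov=20,
                                n_iterations=2, x_lower=-5.0, x_upper=5.0, seed=1)
```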

See HC algorithm in METApy Framework.

Example 1

Use the Hill Climbing optimization method to optimize the 2D sphere function. Use a total of 2 iterations to perform the optimization. Consider the limits \(\mathbf{x}_L = [-5.0, -5.0]\) and \(\mathbf{x}_U = [5.0, 5.0]\) for the problem design variables. Consider the initial guess (two agents) \(\mathbf{pop}_0 = [-0.74, 1.25]\) and \(\mathbf{pop}_1 = [3.58, -3.33]\). Use \(cov = 20\%\) and a Gaussian random generator.
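For reference, the 2D sphere objective used in this example and the fitness mapping that appears in the report below can be written as follows. The mapping \(\text{fit} = 1/(1+\text{of})\) is inferred from the reported values and is an assumption, not something stated in the problem definition.

```python
import numpy as np

def sphere(x):
    """2D sphere objective: of = x0**2 + x1**2 (minimum 0 at the origin)."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def fitness(of):
    """Fitness mapping consistent with the report below: fit = 1 / (1 + of) for of >= 0."""
    return 1.0 / (1.0 + of)

print(sphere([-0.74, 1.25]))   # ~2.1101  (of for pop_0)
print(sphere([3.58, -3.33]))   # ~23.9053 (of for pop_1)
```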

Solution - (Report)

  • \( \mathbf{x}_0 = (-0.74, 1.25) \), \( \text{of}_{\text{pop}} = 2.1101 \)
  • \( \mathbf{x}_1 = (3.58, -3.33) \), \( \text{of}_{\text{pop}} = 23.9053 \)


ITERATION: 1

Particle Movement - \( \mathbf{x}_0 \)

Current \( \mathbf{x}_0 \) = [-0.74, 1.25]

  • Dimension 0:
    • Mean = -0.74
    • Sigma = 0.14800000000000002
    • Neighbor = -0.9023477757469192
  • Dimension 1:
    • Mean = 1.25
    • Sigma = 0.25
    • Neighbor = 1.2619927898303909
Update \( \mathbf{x}_0 \)
  • \( \mathbf{x}_0 \) = [-0.9023477757469192, 1.2619927898303909]
  • \( \text{of}_{\text{pop}}\) = 2.4068573099793054
  • \( \text{fit}\) = 0.29352564813055654
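These reported values can be checked directly with the sphere objective and the fitness mapping \( \text{fit} = 1/(1+\text{of}) \) inferred from the report:

\[\text{of} = (-0.9023477757469192)^2 + (1.2619927898303909)^2 \approx 0.8142 + 1.5926 \approx 2.4069\]

\[\text{fit} = \frac{1}{1 + 2.4069} \approx 0.2935\]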


Particle Movement - \( \mathbf{x}_1 \)

Current \( \mathbf{x}_1 \) = [3.58, -3.33]

  • Dimension 0:
    • Mean = 3.58
    • Sigma = 0.716
    • Neighbor = 3.6241680241318215
  • Dimension 1:
    • Mean = -3.33
    • Sigma = 0.6659999999999999
    • Neighbor = -3.2320736729816724
Update \( \mathbf{x}_1 \)
  • \( \mathbf{x}_1 \) = [3.6241680241318215, -3.2320736729816724]
  • \( \text{of}_{\text{pop}}\) = 23.58089409472079
  • \( \text{fit}\) = 0.040682002702854034

Update Solution
  • \( \mathbf{x}_0 \) = [-0.74, 1.25] , \( \text{of}_{\text{pop}}\) = 2.1101 - Best Solution
  • \( \mathbf{x}_1 \) = [3.6241680241318215, -3.2320736729816724] , \( \text{of}_{\text{pop}}\) = 23.58089409472079
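These updates follow the acceptance rule of the algorithm (fitness values computed with the mapping \( \text{fit} = 1/(1+\text{of}) \) inferred above):

  • \( \mathbf{x}_0 \): \( \text{fit}(\text{candidate}) \approx 0.2935 < \text{fit}(\text{current}) = 1/(1+2.1101) \approx 0.3215 \), so the candidate is rejected and \( \mathbf{x}_0 \) keeps [-0.74, 1.25].
  • \( \mathbf{x}_1 \): \( \text{fit}(\text{candidate}) \approx 0.0407 > \text{fit}(\text{current}) = 1/(1+23.9053) \approx 0.0402 \), so the candidate is accepted.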

ITERATION: 2

Particle Movement - \( \mathbf{x}_0 \)

Current \( \mathbf{x}_0 \) = [-0.74, 1.25]

  • Dimension 0:
    • Mean = -0.74
    • Sigma = 0.14800000000000002
    • Neighbor = -0.6839602767380419
  • Dimension 1:
    • Mean = 1.25
    • Sigma = 0.25
    • Neighbor = 0.9611420858210487
Update \( \mathbf{x}_0 \)
  • \( \mathbf{x}_0 \) = [-0.6839602767380419, 0.9611420858210487]
  • \( \text{of}_{\text{pop}}\) = 1.391595769292015
  • \( \text{fit}\) = 0.41813086176182296


Particle Movement - \( \mathbf{x}_1 \)

Current \( \mathbf{x}_1 \) = [3.6241680241318215, -3.2320736729816724]

  • Dimension 0:
    • Mean = 3.6241680241318215
    • Sigma = 0.7248336048263643
    • Neighbor = 2.9304941843959953
  • Dimension 1:
    • Mean = -3.2320736729816724
    • Sigma = 0.6464147345963345
    • Neighbor = -3.1041458794450945
Update \( \mathbf{x}_1 \)
  • \( \mathbf{x}_1 \) = [2.9304941843959953, -3.1041458794450945]
  • \( \text{of}_{\text{pop}}\) = 18.22351780565471
  • \( \text{fit}\) = 0.05201961524991249

Update Solution
  • \( \mathbf{x}_0 \) = [-0.6839602767380419, 0.9611420858210487] , \( \text{of}_{\text{pop}}\) = 1.391595769292015 - Best Solution
  • \( \mathbf{x}_1 \) = [2.9304941843959953, -3.1041458794450945] , \( \text{of}_{\text{pop}}\) = 18.22351780565471

Guide to Manual Calculation - Hill Climbing

Initial Population

The initial solution is defined based on the provided design vector. Below, we present the initial population along with the corresponding objective function value (\( \text{of} \)):

  • \( \mathbf{x}_0 = [88.25395326699595], \ \text{of}_{\text{pop}} = -7.5079 \) - Best Initial Solution

Exploration of the Neighborhood

The Hill Climbing algorithm explores the neighborhood of the current solution to generate a new candidate. This process is carried out using a normal distribution \( \text{N}(\mu, \sigma) \), where:

  • \( \mu \) is the current value of the variable (\( \mathbf{x}_i^t \)).
  • \( \sigma \) is defined by \( \sigma = |\mu| \cdot \frac{cov}{100} \), according to equation (2).

The detailed calculations for generating the new candidate are presented below:

Calculation of \( \sigma \):

\( \sigma = 88.25395326699595 \cdot \frac{20}{100} = 17.65079065339919 \)

Sampling of the new solution (\( \mathbf{x}_i^{t+1} \)):

\( \mathbf{x}_i^{t+1} \sim \text{N}(88.25395326699595, 17.65079065339919) \)

Generated value: \( \mathbf{x}' = [145.67723926735943] \)


Evaluation of the New Solution

The new generated solution, \( \mathbf{x}' = [145.67723926735943] \), is evaluated based on the objective function (\( \text{of} \)). We also calculate the fitness to compare the quality of the solutions:

| Parameter | Value |
| --- | --- |
| Objective function for the mutated solution (\( \mathbf{x}' \)) | \( \text{of}_{\text{mutated}} = -6.3297 \) |
| Fitness for the mutated solution | \( \text{fit}_{\text{mutated}} = 6.3297 \) |
| Fitness for the current solution (\( \mathbf{x}_i^t \)) | \( \text{fit}_{\text{current}} = 7.5079 \) |

Solution Update

The Hill Climbing algorithm only accepts solutions that improve the fitness of the current solution. In this case:

  • \( \text{fit}_{\text{mutated}} = 6.3297 \) is less than \( \text{fit}_{\text{current}} = 7.5079 \).
  • Therefore, the mutated solution \( \mathbf{x}' \) is not accepted, and the current solution remains unchanged.

Update After the Iteration

After the completion of the first iteration, the best solution remains as:

  • \( \mathbf{x}_0 = [88.25395326699595], \ \text{of}_{\text{pop}} = -7.5079 \)

Reference list