Metaheuristics in Optimization: Algorithmic Perspective

Srinivasan Balan
North Carolina State University

The key motivation of metaheuristics is to find a good-enough solution to an optimization problem. In this article, I demonstrate some state-of-the-art metaheuristic algorithms along with illustrative examples.


The computer science and operations research communities have been striving to solve complex, large-scale real-world problems. In the previous article (Fall 2020), I described the fundamental algorithms that changed scientific computing in mathematics, computer science, and operations research. When solving a large-scale problem, it is necessary to find a feasible solution and then improve it until it converges to the global optimum. However, finding exact solutions is difficult because resources are constrained and most optimization problems are complex. Metaheuristics address this difficulty by offering approximate solutions.
Comprehending computational complexity is crucial to reap the benefits of metaheuristics. In computational complexity theory, we know of only a few algorithms that are guaranteed to converge to optimality in reasonable run time (polynomial-time algorithms). However, many real-world problems are NP (non-deterministic polynomial time), i.e., given a candidate solution to the problem, we can verify it in a reasonable amount of time. Is P = NP? The search for polynomial-time algorithms for NP-complete problems is still in progress. NP-complete problems are the hardest in NP: if any NP-complete problem is solvable in polynomial time, then all problems in NP are solvable in polynomial time.
This article is inspired by the previous article, "Algorithms that changed the world," in the Fall 2020 issue. Continuing that theme, I share the top five metaheuristic algorithms (Genetic Algorithm, Simulated Annealing, Tabu Search, Swarm Intelligence, Variable Neighborhood Search) for solving complex optimization problems that are difficult to solve to optimality using traditional optimization techniques.

Metaheuristic algorithms

A metaheuristic algorithm is a search procedure designed to find a good solution to an optimization problem that is complex and difficult to solve to optimality. In a real world of limited resources (e.g., computational power and time), it is imperative to find a near-optimal solution based on imperfect or incomplete information. The emergence of metaheuristics for solving such optimization problems is one of the most notable achievements of the last two decades in operations research.
There are challenges that call for better solutions than existing traditional approaches provide. Metaheuristic algorithms have been described extensively in the literature and applied to a wide variety of non-linear, non-convex optimization problems. In combinatorial optimization, certain NP-hard problems cannot be solved to optimality in reasonable run time. Thus, metaheuristics can often find good solutions with less computational effort than exact optimization algorithms, iterative methods, or simple greedy heuristics. Many varieties of problems are impractical to solve to global optimality with an exact optimization algorithm. For example, an optimization problem becomes complex when stochastic random variables appear in the objective or constraints; hence, it is not easy to solve large-scale stochastic programs using stochastic programming or robust optimization techniques.
Metaheuristics can play a key role in different domains. In essence, many optimization problems have multiple objective functions with non-linear constraints. For instance, most engineering optimization problems are highly non-linear and demand solutions to multi-objective formulations. On the other hand, artificial intelligence and machine learning problems rely heavily on large datasets, where it is difficult to formulate an optimization problem that can be solved to optimality. Therefore, metaheuristics play a significant role in solving practical problems that are difficult to handle with conventional optimization methods.
Metaheuristic algorithms are classified based on how they operate over the search space [3]: nature-inspired vs. non-nature-inspired, population-based vs. single-point search, dynamic vs. static objective functions, one vs. various neighborhood structures, and memory-based vs. memory-less methods. The purpose of this article is not to compare search and optimization techniques. Nevertheless, it is critical to question whether conventional search methods meet robustness requirements. Metaheuristics are suitable for both exploitation and exploration of the solution space.
Now, let us explore some metaheuristic algorithms. In addition to their algorithmic perspective, a real-life scenario, the allocation of emergency response resources for marine accidents, will exemplify the application of metaheuristics.

Genetic Algorithm

The genetic algorithm (GA) is a metaheuristic motivated by the evolutionary processes of natural selection and natural genetics. The algorithm [9] combines survival of the fittest among string structures with a structured yet randomized information exchange to form a search algorithm. The key aspect of genetic algorithms is the balance between efficiency and efficacy necessary for survival in different harsh, competitive environments.
Genetic algorithms are theoretically and empirically proven to provide robust searches in complex spaces. In GA, a pool of initial solution sequences is generated and stored as a population. These sequences are the genetic representation of the solution domain. A fitness function describing the objective value of a solution is defined. Using a selection operator, two parent solutions from the population undergo genetic operators such as crossover and mutation, each with an associated probability. The parent chromosomes generate offspring chromosomes via the crossover operator to represent child solutions, namely child1 and child2. These offspring chromosomes then undergo a mutation operator (swapping of alleles). The fitness function is evaluated for the child solutions, and the population is updated based on improvement of the fitness function (elitism). The algorithm repeats until a maximum number of generations or another stopping condition is reached. Generally, the solution sequences are encoded as binary or bit strings.
Consider an example of allocating response vessels (e.g., rv1, rv2, rv3, rv4, and rv5) to a few accident scenes, say s1, s2, s3, s4, and s5. The chromosome representation of a sequence would be [s1 − s3 − s4 − s2 − s5]. The "Child" represents the proposed "Accident Scenes" to assign to "Response Vessels". Several types of crossover operators are used in GA, such as single-point, two-point, k-point, priority-based encoding, and uniform crossover. A sample one-point crossover, which splits the parent chromosome at a chosen point, is shown below.

Response Vessels           rv1  rv2  rv3  rv4  rv5
Parent (Accident Scenes)   s1   s2   s3   s4   s5
Child (after crossover)    s4   s5   s1   s2   s3
Child (after mutation)     s4   s5   s3   s2   s1

In mutation, one of the child sequences is altered by a swap; here, s1 is swapped with s3. The stopping condition depends on the type of combinatorial optimization problem. Generally, a maximum number of generations and a maximum run time are used as stopping criteria, although stopping conditions can also be based on stall time and maximum stall generations. A generic process flow of the Genetic Algorithm is shown in Algorithm 1.
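As a concrete sketch of these operators: because the chromosomes here are permutations, a plain one-point crossover could duplicate accident scenes, so the snippet below uses an order crossover (OX), a standard permutation-preserving variant. The cut points and swap positions are illustrative choices, not taken from [14].

```python
def order_crossover(p1, p2, cut1, cut2):
    """Order crossover (OX): copy p1[cut1:cut2] into the child, then fill the
    remaining positions with the missing genes in the order they appear in p2."""
    child = [None] * len(p1)
    child[cut1:cut2] = p1[cut1:cut2]
    fill = [g for g in p2 if g not in child]
    for i, gene in enumerate(child):
        if gene is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(chrom, i, j):
    """Swap two alleles, as in the mutation step of the example above."""
    mutated = chrom[:]
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

parent1 = ["s1", "s2", "s3", "s4", "s5"]
parent2 = ["s4", "s5", "s1", "s2", "s3"]
child = order_crossover(parent1, parent2, 1, 3)  # keeps s2, s3 from parent1
mutant = swap_mutation(child, 0, 2)              # swap alleles at positions 0 and 2
```

Both operators return a valid permutation, so every accident scene is still assigned exactly once after crossover and mutation.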
Applications of GA include feature selection and hyperparameter tuning in machine learning, game-theoretic applications, scheduling problems, molecular structure optimization, non-linear programming, and financial mathematics. A response vessel allocation model using GA is proposed in [14] to support oil spill contingency planning.

Algorithm 1: Genetic algorithm


Simulated Annealing

Simulated annealing (SA) [12] is primarily inspired by annealing in metallurgy: a heating and controlled cooling operation.
A generic process flow of the SA algorithm is shown in Algorithm 2. A neighborhood solution (s) is generated by a local search procedure from an initial seed (s0) at a given temperature. The fitness value of the objective function is evaluated for both the seed and the neighborhood solution. Let δ be the difference between the two fitness values. If δ < 0, the neighbor solution is accepted; otherwise, it is accepted with Boltzmann probability $e^{(\frac{-\delta}{T})}$. Thus, both exploitation (intensive search in a previously known region) and exploration (searching new, promising, wider regions) are deployed over the search space. As the temperature cools, the algorithm converges to an approximate solution, escaping local optima to find a near-optimal solution. The slow cooling implemented in SA can be interpreted as a gradual decrease in the probability of accepting worse solutions as the solution space is explored. In summary, the SA algorithm behaves like a hill-climbing heuristic but with the added ability to escape local optima [19].
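The acceptance rule just described can be sketched as follows. This is a minimal illustration with an assumed geometric cooling schedule (T ← αT) and a toy objective; it is not the implementation from [12] or [13].

```python
import math
import random

def simulated_annealing(objective, neighbor, s0, t0=100.0, alpha=0.95, iters=500):
    """Minimize `objective` starting from seed s0, with geometric cooling."""
    current, best = s0, s0
    temp = t0
    for _ in range(iters):
        candidate = neighbor(current)
        delta = objective(candidate) - objective(current)
        # accept improvements outright; accept worse moves with
        # Boltzmann probability exp(-delta / T)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if objective(current) < objective(best):
                best = current
        temp *= alpha  # slow cooling: worse moves become less likely over time
    return best

# toy usage: minimize f(x) = x^2 with Gaussian neighborhood moves
random.seed(0)
sol = simulated_annealing(lambda x: x * x,
                          lambda x: x + random.gauss(0, 1),
                          s0=10.0)
```

Early on, the high temperature makes `math.exp(-delta / temp)` close to 1, so even worse candidates are usually accepted (exploration); as `temp` decays, the loop degenerates into plain hill climbing (exploitation).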

Algorithm 2: Simulated Annealing algorithm


Although SA usually achieves only an approximation of the global minimum, that can be enough for many practical problems and implementations. Applications of SA include traveling salesman problems, graph coloring, vehicle routing and its extensions, very-large-scale integration and computer design, and scheduling problems. Variants of SA proposed in the literature include quantum annealing, dual-phase evolution, stochastic tunneling, and parallel tempering. Optimal ship routing using SA is proposed in [13] to minimize voyage time and maximize voyage safety.

Tabu Search

Tabu search (TS), another metaheuristic algorithm, is based on memory structures and uses local search methods to find a better solution by examining the neighbors of a potential solution [8]. Generally, local search methods get stuck in suboptimal regions. TS enhances the search process by forbidding (hence the term tabu) the search from returning to previously visited solutions. TS has many similarities with SA; both involve hill climbing and exploration. The memory structure defines a set of rules used to filter solutions from the neighborhood search. Most commonly, a tabu list holds solutions for a tabu tenure (a number of iterations) during which they remain forbidden. The memory structures used in TS can be divided into three categories [7]:

  • Short-term memory: if a potential solution appears in the tabu list, it cannot be revisited until its tabu tenure expires

  • Intermediate-term memory: intensification rules are used to focus the search on promising regions

  • Long-term memory: diversification rules drive the search into new regions; a reset is applied if the search gets stuck in a plateau or suboptimal region

A simple local search algorithm is shown in Algorithm 3. TS uses a local search or nearest-neighborhood (greedy) search procedure to move iteratively from one potential solution s0 to an improved solution s by a simple operation σ in the neighborhood of s0, until some stopping criterion is satisfied. Many local search heuristics are available in practice, including steepest descent, 2-opt, 3-opt, r-opt, nearest neighbor, and random descent.

Algorithm 3: Local Search algorithm [19]

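Since 2-opt is among the local search heuristics listed above, here is a minimal steepest-descent 2-opt sketch for a tour; the four-city instance is a purely illustrative assumption.

```python
def tour_length(tour, dist):
    """Total length of a closed tour under the distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """2-opt local search: reverse tour segments while any reversal shortens
    the tour; stops at a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# toy usage: 4 cities on a unit square; the optimal tour follows the perimeter
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 for b in pts] for a in pts]
best = two_opt([0, 2, 1, 3], dist)  # start from a tour whose edges cross
```

The starting tour crosses itself (length 2 + 2√2 ≈ 4.83); one segment reversal untangles it to the perimeter tour of length 4, after which no improving move exists.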

The algorithm is initiated with random solutions. In each iteration, a new solution is found by making local moves over the current solution. The neighbor solution is the best among all (or a subset of) possible solutions in the neighborhood generated by the operator N(σ). During exploration, recently visited solutions should not be revisited; a tabu list is therefore maintained to restrict solution moves in the local search process. This yields a dynamic neighborhood, in contrast to the static neighborhood used by other local search algorithms. Typically, three kinds of tabu lists are maintained using memory structures: 1) a long-term memory that holds the history of the entire exploration process and resets the search when it gets stuck in local optima, 2) an intermediate-term memory to intensify the search, and 3) a short-term memory that keeps the most recently visited solutions as tabu moves.
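The short-term memory mechanism can be sketched as below, assuming a fixed tabu tenure and a simple integer neighborhood (both illustrative choices, not from [7] or [8]):

```python
from collections import deque

def tabu_search(objective, neighbors, s0, tenure=5, iters=100):
    """Minimize `objective`; a fixed-tenure tabu list blocks recently visited
    solutions from being revisited."""
    current, best = s0, s0
    tabu = deque(maxlen=tenure)  # short-term memory: the last `tenure` solutions
    for _ in range(iters):
        admissible = [n for n in neighbors(current) if n not in tabu]
        if not admissible:
            break
        # move to the best admissible neighbor, even if it is worse than the
        # current solution -- this is how TS climbs out of local optima
        current = min(admissible, key=objective)
        tabu.append(current)
        if objective(current) < objective(best):
            best = current
    return best

# toy usage: minimize f(x) = (x - 7)^2 over the integers, unit-step neighborhood
sol = tabu_search(lambda x: (x - 7) ** 2, lambda x: [x - 1, x + 1], s0=0)
```

Note that once the search reaches the optimum, the tabu list forces it to keep moving to (worse) unvisited neighbors, while `best` retains the best solution seen, which is exactly the dynamic-neighborhood behavior described above.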
Tabu search is very efficient and escapes local optima to find near-optimal solutions. Applications of TS to combinatorial problems include scheduling, communication path assignment, graph coloring, graph partitioning, job shop scheduling, neural network pattern recognition, and routing problems [7]. The optimal allocation of berths (similar to car parking) for ships is proposed in [4] using tabu search.

Algorithm 4: Tabu Search algorithm


Swarm Intelligence Algorithms

Swarm intelligence algorithms are inspired by the social behavior of bird flocking, animal herding, bacterial growth, and fish schooling. Swarm intelligence systems, initially developed within cellular robotic systems [1], are typically made up of a population of simple agents interacting locally with one another and with their environment. Researchers have proposed several nature-inspired swarm intelligence algorithms for solving combinatorial problems to near-optimality. Ant colony optimization [5], particle swarm optimization (PSO) [11], bee colony optimization, and cuckoo search are some of the well-known algorithms in this family.
PSO [11] is a population-based evolutionary algorithm in which a candidate solution is represented as a vector (particle) in an n-dimensional space. In every iteration, the velocity (vij) and position (xi) of each particle are adjusted to converge toward a near-optimal solution that minimizes or maximizes the objective function. Particles move through the solution space and are evaluated with a fitness function. In each iteration, particles are updated using two values, pBest and gBest. pBest (yij) is the best solution particle i has achieved so far, whereas gBest (ŷj) is the best value obtained so far by any particle in the population. A schematic representation of the pBest and gBest components is shown in Figure 1. Exploration thus happens through neighborhood search, which reduces the susceptibility of PSO to falling into local optima but slows down convergence.


Figure 1: Velocity component construction of a particle swarm

The position of the particle is changed by adding a velocity, vi(t + 1), to the current position, as in the equation below, with xi(0) ∼ U(xmin, xmax).

xi(t + 1) = xi(t) + vi(t + 1)

For gBest PSO, the velocity of particle i is calculated as in the equation below, where c1 and c2 are acceleration constants and r1j(t), r2j(t) ∼ U(0, 1) are random numbers. The PSO algorithm is shown in Algorithm 5.

vij(t + 1) = vij(t) + c1r1j(t)[yij(t) − xij(t)] + c2r2j(t)[ŷj(t) − xij(t)]
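The two update equations can be sketched as follows. Note that the inertia weight `w` is a common later refinement of PSO that is not present in the equation above; it and all parameter values here are illustrative assumptions.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=200,
        c1=1.5, c2=1.5, w=0.7, xmin=-5.0, xmax=5.0):
    """gBest PSO minimizing `objective`; w is an inertia weight (a common variant)."""
    x = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]              # y_i: each particle's personal best
    gbest = min(pbest, key=objective)[:]   # y-hat: best position in the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive (pBest) + social (gBest)
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pbest[i][j] - x[i][j])
                           + c2 * r2 * (gbest[j] - x[i][j]))
                x[i][j] += v[i][j]         # position update: x(t+1) = x(t) + v(t+1)
            if objective(x[i]) < objective(pbest[i]):
                pbest[i] = x[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# toy usage: minimize the sphere function f(p) = sum(p_j^2)
random.seed(1)
swarm_best = pso(lambda p: sum(t * t for t in p))
```

The cognitive term pulls each particle back toward its own best position, while the social term pulls it toward the swarm's best, matching the two bracketed differences in the velocity equation.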

Algorithm 5: Particle Swarm Optimization algorithm


PSO has been applied to various optimization problems [20], including civil infrastructure systems, traffic accident forecasting, structural engineering, multi-thresholding for suspicious region detection in mammograms, photovoltaic array reconfiguration, power systems, and robotic assembly in rail transit. Other exciting applications include NASA's investigation of swarm technology for planetary mapping, the movie 'Batman Returns', which used swarm technology to render the penguins' movements, and the 'Lord of the Rings' film trilogy, which used similar technology for battle scenes.
To illustrate, in the case of an emergency response system for a marine oil spill, the allocation and deployment of response vessels (e.g., rescue ships) can be optimized using PSO, as proposed in [21]. In an oil recovery operation, the key goal is to collect as much oil as possible, and the operation is time-contingent. The optimal response plan proposed in that research (considering the allocation of resources, recovery behavior of skimmers, vessel space for carrying equipment, and resource utilization from response centers) recovered approximately 80.28% of the spilled oil within the first 48 response hours.


Figure 2: Emergency Response of Marine Accidents, using PSO [21]

Variable Neighborhood Search

The variable neighborhood search (VNS) [16] algorithm explores increasingly distant neighborhoods of the current solution and moves to a new solution only if it is an improvement. Similar to tabu search, a local search method is applied repeatedly to descend from solutions in the neighborhood to local optima and to escape the valleys that contain them. This characteristic helps VNS solve linear, non-linear, integer, and mixed-integer programs. VNS uses a simple local search coupled with a first-improvement heuristic and a neighborhood-change step; it features both exploitation and exploration, like other metaheuristics.
The details of the VNS algorithm are shown in Algorithm 8. First, a first-improvement heuristic is invoked to find a local optimum within the solution's feasible region, as in Algorithm 6. Then, the neighborhood-change heuristic is invoked to escape the local optimum and find a better solution in a new feasible area, as in Algorithm 7. The algorithm repeats until a preset maximum number of iterations or CPU run time is reached. In addition to the first-improvement heuristic, a simple local search can be used for the local search operation, as in Algorithm 3.
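These steps can be sketched as a basic VNS loop that combines a first-improvement local search with a neighborhood-change (shaking) step. The shake operator and toy objective are illustrative assumptions, not the algorithms from [10].

```python
import random

def first_improvement(objective, neighbors, s):
    """Return the first neighbor that improves on s, or s itself if none does."""
    for n in neighbors(s):
        if objective(n) < objective(s):
            return n
    return s

def vns(objective, shake, neighbors, s0, k_max=3, iters=100):
    """Basic VNS: shake in the k-th neighborhood, local search, then change k."""
    s = s0
    for _ in range(iters):
        k = 1
        while k <= k_max:
            shaken = shake(s, k)                   # random point in N_k(s)
            improved = first_improvement(objective, neighbors, shaken)
            if objective(improved) < objective(s):
                s, k = improved, 1                 # accept the move, restart at k = 1
            else:
                k += 1                             # try a wider neighborhood
    return s

# toy usage: minimize f(x) = (x - 3)^2 over the integers
random.seed(2)
vns_sol = vns(lambda x: (x - 3) ** 2,
              lambda x, k: x + random.randint(-k, k),  # wider shakes as k grows
              lambda x: [x - 1, x + 1],
              s0=-20)
```

Resetting k to 1 after every improvement keeps the search local while it is making progress; only when all of N_1, ..., N_k_max fail to improve does the outer loop restart the shaking cycle, which is the escape mechanism described above.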

Algorithm 6: First improvement heuristic [10]


Algorithm 7: Neighborhood change heuristic [10]


Algorithm 8: Variable Neighborhood Search algorithm [10]


A variant of VNS [6] uses a deterministic neighborhood search rather than a single neighborhood structure. 'Shaking' is an additional step that picks an alternative solution from the kth neighborhood at random. Similarly, reduced VNS, skewed VNS, variable neighborhood decomposition search, and parallel VNS are widely used techniques for solving mathematical programming problems [10]. Overall, VNS is simple and broadly applicable, and it efficiently produces good-quality solutions in reasonable CPU time. Applications of VNS include location theory, network design, lot-sizing, pooling problems, reliability, geometry, and telecommunication design.


To summarize, metaheuristics are used to find good-enough solutions to optimization problems, and they are simpler to design and implement than exact methods [17]. A few well-established metaheuristic algorithms that can solve optimization problems in a reasonable time frame are described in this article. Effective algorithm development is a continuous improvement process, and several search procedures and nature-inspired algorithms are being developed to solve a variety of complex optimization problems.
Among their advantages, metaheuristics are strategies that guide the search process, ranging from simple local search techniques to complex learning processes, implemented as iterative algorithms. Metaheuristics are approximate solution methods and usually non-deterministic [3]. Critically, metaheuristics are not problem-specific, so almost any optimization problem can be tackled with relative ease. Nevertheless, despite their popularity and practical efficacy, metaheuristic methods can yield solutions of inadequate quality. Their performance relies heavily on proper implementation and hyperparameter tuning, which is itself another optimization landscape [2].
Considering implementation platforms for metaheuristics, MATLAB [15] has built-in GA, SA, and particle swarm algorithms. Similarly, many packages have been developed in programming languages such as C++, Java, and Python (e.g., [18]) to solve optimization problems.



  1. Beni, G. and Wang, J. (1993). Swarm intelligence in cellular robotic systems. In Robots and biological systems: towards a new bionics?, pages 703–712. Springer.
  2. Bennett, K. P. and Parrado-Hernández, E. (2006). The interplay of optimization and machine learning research. The Journal of Machine Learning Research, 7:1265–1281.
  3. Blum, C. and Roli, A. (2003). Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM computing surveys (CSUR), 35(3):268–308.
  4. Cordeau, J.-F., Laporte, G., Legato, P., and Moccia, L. (2005). Models and tabu search heuristics for the berth-allocation problem. Transportation science, 39(4):526–538.
  5. Dorigo, M. and Di Caro, G. (1999). Ant colony optimization: a new meta-heuristic. In Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406), volume 2, pages 1470–1477. IEEE.
  6. Garcia, C. G., Pérez-Brito, D., Campos, V., and Martí, R. (2006). Variable neighborhood search for the linear ordering problem. Computers & operations research, 33(12):3549–3565.
  7. Glover, F. (1990). Tabu search: A tutorial. Interfaces, 20(4):74–94.
  8. Glover, F. and Taillard, E. (1993). A user’s guide to tabu search. Annals of operations research, 41(1):1–28.
  9. Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.
  10. Hansen, P. and Mladenovic, N. (2001). Variable neighborhood search: Principles and applications. European journal of operational research, 130(3):449–467.
  11. Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN’95-international conference on neural networks, volume 4, pages 1942–1948. IEEE.
  12. Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. science, 220(4598):671–680.
  13. Kosmas, O. and Vlachos, D. (2012). Simulated annealing for optimal ship routing. Computers & Operations Research, 39(3):576–581.
  14. Łazuga, K., Gucma, L., and Perkovic, M. (2018). The model of optimal allocation of maritime oil spill combat ships. Sustainability, 10(7):2321.
  15. MATLAB (2010). version 7.10.0 (R2010a). The MathWorks Inc., Natick, Massachusetts.
  16. Mladenovic, N. and Hansen, P. (1997). Variable neighborhood search. Computers & operations research, 24(11):1097–1100.
  17. Moreno-Armendáriz, M. A., Hagan, M., Alba, E., Rubio, J. d. J., Cruz-Villar, C. A., and Leguizamón, G. (2016). Advances in neural networks and hybrid-metaheuristics: Theory, algorithms, and novel engineering applications.
  18. Olson, R. S. and Moore, J. H. (2016). Tpot: A tree-based pipeline optimization tool for automating machine learning. In Workshop on automatic machine learning, pages 66–74. PMLR.
  19. Reeves, C. (1993). Modern heuristic techniques for combinatorial problems. Blackwell scientific publications.
  20. Wang, D., Tan, D., and Liu, L. (2018). Particle swarm optimization algorithm: an overview. Soft Computing, 22(2):387–408.
  21. Ye, X., Chen, B., Lee, K., Storesund, R., Li, P., Kang, Q., and Zhang, B. (2021). An emergency response system by dynamic simulation and enhanced particle swarm optimization and application for a marine oil spill accident. Journal of Cleaner Production, 297:126591.