Agent-based evolutionary algorithms: Emerging paradigm or buzzwords?

The strengths and weaknesses of evolutionary algorithms and multi-agent systems and the advantages of coupling them.

Embarrassing question: What is an agent-based evolutionary algorithm?

By Ruhul A. Sarker and Tapabrata Ray

Evolutionary algorithms (EAs) and multi-agent systems (MAS) are two independent, well-known problem-solving paradigms with different characteristics that are usually applied to slightly different problem domains. Recently, the two paradigms have been combined, resulting in agent-based evolutionary algorithms (AEAs) that have improved problem-solving in both domains. However, the characteristics of this new paradigm are not as transparent as those of its parent paradigms.

Wooldridge and Jennings [1] wrote: “Carl Hewitt recently remarked that the question what is an agent? is embarrassing for the agent-based computing community in just the same way that the question what is intelligence? is embarrassing for the mainstream AI community.” We believe we are facing a similarly embarrassing question: What is an agent-based evolutionary algorithm?

The concept of agents was introduced in the 1990s, and the term is now as attractive as object-oriented was in the 1980s or artificial intelligence was in the 1970s. While the concept of agents and agent systems is useful in solving many complex problems, there is a risk of overusing the term without transparency. Is AEA one such example?

In Science magazine, Grimm et al. [2] stated that, “What makes James Bond an agent? He has a clear goal, he is autonomous in his decisions to achieve the goal, and he adapts these decisions to his rapidly changing situation.” In a MAS, the relevant information from individual agents is compiled, and theories about their behavior are formulated and implemented as computer simulations, resulting in the emergence of system properties and behaviors. These generic steps are not much different from those entailed by evolutionary algorithms. In other words, they are similar to any other rational problem-solving approach.

The properties of self-organization, adaptation and self-learning have been key features of EAs, resulting in successful solutions to a number of complex problems where classical approaches are either unavailable or generally lead to unsatisfactory results. In recent years, interest in EAs has grown dramatically, particularly for solving structured and semi-structured optimization problems [3]. A structured problem is one that can be fully defined by its decision variables, parameters, objective and constraints, and can usually be expressed mathematically. An unstructured problem cannot be defined explicitly, and semi-structured problems lie between the two.

While the generic steps are similar in both approaches, their detailed steps vary significantly. A multi-agent system is composed of multiple interacting intelligent agents. Typically, these are software agents, but the agents in a multi-agent system could also be robots, humans, human teams or a combination. Multi-agent systems can manifest self-organization and complex behaviors even when the individual strategies of all their agents are simple. This feature makes them attractive for problems that are difficult or impossible for an individual agent or a monolithic system (also known as a single or integrated approach) to solve. Examples of problems well suited to multi-agent systems include online trading, disaster response and modeling social structures. They can also be used to solve structured problems such as scheduling and transportation.

The great success of EAs in solving extremely complex structured and semi-structured optimization problems from various disciplines, and the success of MAS in solving semi- and unstructured problems, is reflected in the growing number of publications and a corresponding increase in specialized conferences and journals. The framework of EAs is well defined and well accepted by the research community, although variations within it are allowed. To enhance the performance of EAs, hybridization with one or more other methods is very common in practice; for example, a memetic algorithm is simply an evolutionary algorithm integrated with a local search method. Note that, in this article, agents are considered to be software agents, and MAS are discussed in the context of optimization and optimization-based decision problems. Unlike EAs, MAS do not have a well-defined framework.
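To make the memetic idea concrete, the following minimal Python sketch hybridizes a simple real-valued EA with a hill-climbing local search. The sphere objective, population size, operators and step sizes are illustrative assumptions for this article, not taken from any of the cited systems.

```python
import random

random.seed(0)

def fitness(x):
    # Sphere objective, negated so that higher is better (illustrative choice)
    return -sum(v * v for v in x)

def local_search(x, step=0.1, iters=20):
    # Hill climbing: the "local learning" that turns an EA into a memetic algorithm
    best = list(x)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in best]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def memetic_algorithm(dim=3, pop_size=20, generations=40):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Binary tournament selection
        parents = [max(random.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        offspring = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            # Uniform crossover followed by Gaussian mutation
            child = [random.choice(pair) + random.gauss(0, 0.05) for pair in zip(a, b)]
            # The memetic step: refine each child with local search
            offspring.append(local_search(child))
        # Elitist replacement: keep the best of the combined population
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic_algorithm()
```

Removing the `local_search` call from the offspring loop leaves an ordinary EA; everything else is unchanged, which is precisely the sense in which a memetic algorithm is an EA plus local search.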

Sycara [4] noted the lack of a proven methodology enabling designers to clearly structure applications as multi-agent systems. This is not necessarily a negative point for MAS, as it gives flexibility in designing MAS for a wide range of problems. They are suitable for solving problems involving different entities, people or organizations with different (possibly conflicting) goals and proprietary information. They are also useful where the fitness and constraint functions cannot be expressed explicitly [5]. Note that we do not discuss the details of agent characteristics, classification, architectures and languages in this article, since our objective is to analyze the approach for possible integration with EAs; the details of EAs are omitted for the same reason. Interested readers can find an overview of both approaches in [6].

EAs deal with a population of individuals. These individuals have a common goal, though not all of them may achieve it. It can be argued that, through the processes of selection, crossover and mutation, the individuals work cooperatively and collectively toward their goals, and they may be defined as agents under certain assumptions. As required by multi-agent systems, agent properties such as autonomy, communication, proactiveness, learning and reactivity can potentially be used to enhance the performance of EAs. To satisfy these properties and achieve their goals, the agents engage in interactions involving a wide range of hard and soft methodologies. Most of these methodologies are not used within the current framework of EAs, although there is no restriction on doing so without involving MAS formally.

Integration of MAS and EAs

Consider the following types of associations between MAS and EAs where:

  • the agent is responsible for actions and behaviors,
  • the agent represents a candidate solution, and
  • MAS, EAs and their variants are used sequentially.

In the first type, the agent represents the actions and behaviors that guide the system in solving the problem. Here, the agents may use EAs for learning or for improving certain functionality. Vacher et al. [7] proposed a multi-agent system in which an agent is composed of functions of action, knowledge and behavior; a genetic algorithm is used to determine a set of functions for each agent that guides the system in solving the problem efficiently. Nunes and Oliveira [8] and Iantovics and Enachescu [9] used EAs as learning algorithms within MAS. Milano and Roli [10] proposed a multi-agent metaheuristic architecture in which an agent, as a system, is able to build a solution, move over a landscape, communicate with other agents, be active (i.e., goal-oriented) and possibly be adaptive. In this way, a new algorithm that combines several metaheuristics can easily be designed by choosing and defining the agents and their interactions.

In the second type, an agent represents a candidate solution. In EAs, an individual can be defined as an agent under some assumptions; hence, a population of individuals can be considered a population of agents [11]. As defined by Barkat Ullah et al. [12], an agent may not only be a candidate solution but also store agent-specific information such as learning techniques and learning rates. Such agents may reside in a 2-D cellular/lattice-like environment [11-13], in graphs [14] or in a chain-like or ring-like environment [15]. Each agent aims to increase its own energy/benefit by cooperating or competing with its neighbors, and may invoke a number of operators such as competition, cooperation, mutation and self-learning.

Competition uses a kind of tournament to determine whether an agent is a winner or a loser; a loser is replaced by a new agent generated by the system. Cooperation is basically an information-exchange mechanism realized through the crossover operator. Self-learning can be based on local search techniques or rule-based analysis. No global selection method is required in this approach. In some lattice-like environments, each agent is fixed at a lattice point and can only interact with its neighbors [11, 12]. Hippolyte et al. [16] allowed agents to move and meet other agents in the environment.
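A minimal Python sketch of one such lattice-based arrangement is given below. The toroidal 8x8 grid, Von Neumann neighborhood, energy function and replacement rule (a loser is replaced by a mutated crossover of itself and its best neighbor) are illustrative assumptions, not the exact mechanisms of [11, 12]; note that selection is entirely local, with no global selection step.

```python
import random

random.seed(1)
SIZE = 8  # agents live on an 8x8 toroidal lattice (assumed size)

def energy(x):
    # Agent "energy": higher is better (negated sphere function, illustrative)
    return -sum(v * v for v in x)

def neighbors(i, j):
    # Von Neumann neighborhood with wrap-around
    return [((i - 1) % SIZE, j), ((i + 1) % SIZE, j),
            (i, (j - 1) % SIZE), (i, (j + 1) % SIZE)]

def crossover(a, b):
    # Cooperation: exchange information with a neighbor (uniform crossover)
    return [random.choice(p) for p in zip(a, b)]

def mutate(x, sigma=0.1):
    return [v + random.gauss(0, sigma) for v in x]

# Each lattice point holds one agent (a 3-D candidate solution)
lattice = {(i, j): [random.uniform(-5, 5) for _ in range(3)]
           for i in range(SIZE) for j in range(SIZE)}
initial_best = max(map(energy, lattice.values()))

for _ in range(100):  # 100 sweeps over the lattice
    for (i, j), agent in list(lattice.items()):
        best_nb = max((lattice[n] for n in neighbors(i, j)), key=energy)
        if energy(agent) < energy(best_nb):
            # Competition: the loser is replaced by a new agent that mixes
            # its own and the winner's information, then mutates
            lattice[(i, j)] = mutate(crossover(agent, best_nb))

best = max(lattice.values(), key=energy)
```

Because an agent is only ever replaced when a strictly better neighbor exists, the best agent on the lattice is never overwritten, so the best energy found never decreases as the sweeps proceed.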

In the third type, MAS and EAs are applied either in sequence or iteratively to solve certain classes of problems. For a dynamic job-shop scheduling problem, Li and Du [17] proposed multi-agent consultations for initial task allocations and then applied hybrid genetic algorithms to generate optimal reschedules. For a multiple traveling salesman problem, Giardini and Kalmar-Nagy [18] implemented a genetic algorithm to find near-optimal solutions for each vehicle (vehicles are defined as agents); these solutions are then used to create an initial multi-vehicle plan through negotiation and sharing among the agents, and the plan is further optimized using an evolutionary algorithm. Liu and Tang [19] proposed a multi-agent system in which a genetic algorithm is used as a continuous novelty generator (a generator of alternatives), not as an optimizer; such a generator stimulates the imagination of agents and extends their thinking spaces for ultimate decision-making. In the system proposed by Guo et al. [5], different agents have different functions, and one of the five kinds of agents uses an interactive genetic algorithm as part of the overall decision process.

Agent-based Evolutionary Algorithms

A closer look at EAs alongside MAS immediately reveals that the individuals in EAs do not have the autonomy of the agents in MAS. However, EAs inherently possess some MAS properties, such as a population of individuals, self-organization, adaptation and self-learning. The individuals in EAs work collectively (not individually) to attain a common, global goal. In most cases, the environment is given and expressed by complex functions and conditions. There is competition, in some form, to be selected for crossover through the ranking and selection processes, and to be selected for the next generation (i.e., for survival). In the crossover operation, the selected individuals cooperate and exchange information (genetic material); this exchange can be considered communication between the individuals. The individuals may further communicate among themselves to check their relative locations on the fitness landscape, and the ranking process may be seen as a way of communicating certain information indirectly. A memetic algorithm uses additional local search algorithms, which can be regarded as local learning, i.e., learning about the locality.

A Type-1 system is basically a multi-agent system, whereas Type-3 utilizes both multi-agent and evolutionary schemes. Type-2, however, is a system that incorporates some aspects of agents and multi-agent systems within an evolutionary framework. As is evident from the literature, more and more Type-2 systems are being used to solve practical problems such as constraint satisfaction [20], job-shop scheduling [21], electrical machine design [16], combinatorial optimization [11] and numerical optimization [12, 22].

In a Type-2 system, the individuals in EAs are defined as agents, with or without further features and assumptions. As an example of such assumptions, Barkat Ullah et al. [12] stated that the individuals in the population of an EA are not themselves agents; rather, the agents are virtual agents, where each agent stands for (or supports) one of the individuals of the population based on its belief and learning experiences. Such an assumption helps to incorporate additional features within the agent/individual representation in EAs. The individuals can then apply some agent properties within EAs, such as communication, cooperation, competition and learning, to enhance their performance. Davidsson et al. [23] indicated that we must capitalize on the strengths of the two approaches in a new hybrid method, as they complement each other.

The agent environment includes the agent’s social network structure and the size of the neighborhood for interactions. In a Type-2 system, the agent environment (such as a lattice or a graph) is defined so that each agent can be compared with its neighbors locally, and agents can cooperate and compete with their neighbors [11-13, 22]. The lattice-like environment is, however, quite similar to the structure of a 2-D cellular genetic algorithm.

Barkat Ullah et al. [24] stated that, “In reality, beside reproduction, an individual learns and gains experiences in different ways during its life time which improves the individual’s genetic materials.” De Jong [25] stated that agent behavior is basically a combination of “nature and nurture,” i.e., of inherited and learned components. He also indicated that evolution operates at the population level, while “lifetime learning” occurs at the individual level. The agents learn throughout their life span, which improves their quality (fitness value), and each agent can choose its learning process independently. In optimization problem-solving, for example, local search techniques can serve as the learning processes of an agent. To give a flavor of an individual’s decision-making ability and agent learning, Barkat Ullah et al. [12] introduced a repertoire of local search algorithms from which an agent chooses a learning method based on its learning experience. It is worth noting that the combination of the agents’ local view and the global search ability of EAs is used to balance exploitation and exploration.
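The idea of an agent choosing from a repertoire of learning methods based on its experience can be sketched as follows in Python. The two local search variants, the epsilon-greedy choice rule and the reward bookkeeping are illustrative assumptions standing in for the mechanisms of [12], not a reproduction of them.

```python
import random

random.seed(2)

def fitness(x):
    # Higher is better (negated sphere function, for illustration)
    return -sum(v * v for v in x)

def small_step_search(x):
    # Fine-grained hill-climbing move, accepted only if it improves fitness
    cand = [v + random.uniform(-0.05, 0.05) for v in x]
    return cand if fitness(cand) > fitness(x) else x

def large_step_search(x):
    # Coarse exploratory move, accepted only if it improves fitness
    cand = [v + random.uniform(-0.5, 0.5) for v in x]
    return cand if fitness(cand) > fitness(x) else x

METHODS = [small_step_search, large_step_search]  # the agent's repertoire

class Agent:
    def __init__(self, dim=3):
        self.x = [random.uniform(-3, 3) for _ in range(dim)]
        # Learning experience: total improvement and usage count per method
        self.reward = [0.0, 0.0]
        self.count = [1, 1]

    def learn(self, eps=0.2):
        # Epsilon-greedy choice: usually pick the historically best method,
        # occasionally explore the other one
        if random.random() < eps:
            k = random.randrange(len(METHODS))
        else:
            k = max(range(len(METHODS)),
                    key=lambda m: self.reward[m] / self.count[m])
        before = fitness(self.x)
        self.x = METHODS[k](self.x)
        self.reward[k] += fitness(self.x) - before
        self.count[k] += 1

agent = Agent()
start = fitness(agent.x)
for _ in range(500):  # the agent's "lifetime learning"
    agent.learn()
```

Since every move is accepted only when it improves fitness, the agent's quality never degrades during its lifetime; which search method dominates simply depends on which one has paid off so far.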

Final Words

So far, the papers published under the AEA label do not demonstrate that their systems meet the basic properties of agents or agent systems. Instead, the individuals in EAs are defined as agents with one or more agent-like characteristics, and the individuals undergo evolutionary processes with minor variations. To recognize the individuals in EAs as agents, they must, as a minimum, be defined as intelligent/autonomous individuals. Every move by an agent must involve quantitative and qualitative judgment or reasoning based on its own beliefs, social interactions, knowledge and intelligence. The genetic operators and parameters of EAs can be considered part of the agents’ judgment or reasoning. A wider reasoning capability for agents, which is currently limited, would make a clear difference between agent-based EAs and EAs alone. In addition, an agent’s learning experiences can be systematically archived and retrieved when needed; the use of such knowledge would make agent-based EAs more attractive for practical problem-solving. A computational tool based on agent-based EAs for automated problem-solving would be extremely useful for complex decision processes.

Ruhul A. Sarker is an assistant professor and Tapabrata Ray is a senior lecturer at the School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force Academy in Canberra, Australia.


  1. Wooldridge, M.J. and Jennings, N.R., 1997, “Agent Theories, Architectures and Languages: A Survey,” Lecture Notes in Artificial Intelligence, Vol. 890 (ECAI-94 Workshop on Agent Theories, Architectures and Languages), pp. 1-21, Springer.
  2. Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H-H., Weiner, J., Wiegand, T., DeAngelis, D. L., 2005, “Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology,” Science, Vol. 310, No. 11, pp. 987-991.
  3. Sarker, R., Kamruzzaman, J., Newton, C., 2003, “Evolutionary optimization (EvOpt): A brief review and analysis,” International Journal of Computational Intelligence and Applications, Vol. 3, No. 4, pp. 311-330.
  4. Sycara, K., 1998, “Multi-agent systems,” AI Magazine, Vol. 19, No. 2, pp. 79-92.
  5. Guo, Y-N., Cheng, J., Gong, D-W., Yang, D-Q., 2006, “Knowledge-inducing interactive genetic algorithms based on multi-agent,” “Lecture Notes in Computer Science, 4221,” pp. 759-768, Springer.
  6. Sarker, R. and Ray, T. (editors), 2010, “Agent-Based Evolutionary Search,” Springer, in press.
  7. Vacher, J.-P., Galinho, T., Lesage, F., Cardon, A., 1998, “Genetic algorithms in a multi-agent system,” IEEE International Joint Symposia on Intelligence and Systems, pp. 17-26.
  8. Nunes, L., Oliveira, E., 2004, “Learning from multiple sources,” Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-agent Systems, Vol. 3, pp. 1,106-1,113.
  9. Iantovics, B., Enachescu, C., 2008, “Intelligent complex evolutionary agent-based systems,” Proceedings of the 1st International Conference on Bio-Inspired Computational Methods used for Difficult Problems Solving, AIP, pp. 116-124.
  10. Milano, M., Roli, A., 2004, “MAGMA: a multi-agent architecture for metaheuristics,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 34, No. 2, pp. 925-941.
  11. Liu, J., Zhong, W., Jiao, L., 2010, “A multi-agent evolutionary algorithm for combinatorial optimization problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 40, No. 1, pp. 229-240.
  12. Barkat Ullah, A. S. S. M., Sarker, R., Cornforth, D., Lokan, C., 2009, “AMA: A new approach for solving constrained real-valued optimization problems,” Soft Computing, Vol. 13, No 8-9, pp. 741-762.
  13. Zhang, J., Liang, C., Huang, Y., Wu, J., Yang, S., 2009, “An effective multi-agent evolutionary algorithm integrating a novel roulette inversion operator for engineering optimization,” Applied Mathematics and Computation, Vol. 211, pp. 392-416.
  14. Drezewski, R., Siwik, L., 2008, “Agent-based co-operative co-evolutionary algorithm for multi-objective optimization,” in Proceedings of ICAISC’2008, LNAI 5097, pp. 388-397, Springer.
  15. Liu, B., Duan, T., Li, Y., 2009, “One improved agent genetic algorithm: ring-like agent genetic algorithm for global numerical optimization,” Asia-Pacific Journal of Operational Research, Vol. 26, No. 4, pp. 479-502.
  16. Hippolyte, J.-L., Bloch, C., Chatonnay, P., Espanet, C., Chamagne, D., 2007, “A self-adaptive multi-agent evolutionary algorithm for electrical machine design,” Proceedings of the 9th annual conference on Genetic and evolutionary computation, pp. 1,250-1,255.
  17. Li, Q., Du, L., 2009, “Research on hybrid-genetic algorithm for MAS-based job-shop dynamic scheduling,” Second International Conference on Intelligent Computation Technology and Automation, pp. 404-407, IEEE Press.
  18. Giardini, G., Kalmar-Nagy, T., 2007, “Genetic algorithm for multi-agent space exploration,” 2007 AIAA InfoTech at Aerospace Conference, Vol. 2, pp. 1,146-1,160.
  19. Liu, H., Tang, M., 2005, “Evolutionary design in a multi-agent design environment,” Applied Soft Computing, Vol. 6, No. 2, pp. 207-220.
  20. Zhong, W., Liu, J., Jiao, L., 2005, “An agent model for binary constraint satisfaction problems,” “EvoCOP2005, LNCS 3448,” Springer, pp. 260-269.
  21. Zhong, W., Liu, J., Jiao, L., 2005, “Job-shop scheduling based on multi-agent evolutionary algorithm,” “ICNC2005, LNCS 3612,” pp. 925-933, Springer.
  22. Zhong, W., Liu, J., Xue, M., Jiao, L., 2004, “A multi-agent genetic algorithm for global numerical optimization,” IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol. 34, pp. 1,128-1,141.
  23. Davidsson, P., Persson, J., Holmgren, J., 2007, “On the integration of agent-based and mathematical optimization techniques,” Agent and Multi-agent Systems: Technologies and Applications, pp. 1-10, Springer.
  24. Barkat Ullah, A. S. S. M., Sarker, R., Cornforth, D., Lokan, C., 2007, “An agent-based memetic algorithm (ama) for solving constrained optimization problems,” “IEEE Congress on Evolutionary Computation (CEC 2007)”, pp. 999-1,006.
  25. De Jong K. A., 2008, “Evolving intelligent agents: A 50 year quest,” IEEE Computational Intelligence Magazine, Vol. 3, pp. 12-17.