Agents of Change

By Douglas A. Samuelson

Listen to experienced defense and intelligence analysts talking about "what's hot" and you'll probably hear the phrase "ABM." No, it doesn't necessarily mean "anti-ballistic missiles" these days; more likely they're talking about agent-based modeling.
ABM encompasses approaches and practitioners from operations research, artificial intelligence, social network theory, cognitive science and various other disciplines. The basic idea is to expand traditional simulation to include entities whose behavior can change over time, depending on the circumstances they encounter. The field has grown explosively in numerous directions over the past 10 years, with important applications in war gaming, intelligence analysis, organizational performance, social policy and other areas. The rapidly expanding applicability and complexity of the analysis have produced interesting applications and equally interesting methodological and interpretive challenges.

Beginnings


ABM has roots in simulation, both the discrete-event type familiar to many operations researchers and the system dynamics approach made famous by Jay Forrester and others at MIT in the 1960s and 1970s. However, many researchers in ABM credit the Santa Fe Institute with essentially starting the field in its current form, in the late 1980s and 1990s, by developing Swarm, the first widely available computer package designed for ABM. Repast, developed by researchers at the University of Michigan and the University of Chicago, followed soon after. These and other packages offer the ability to specify entities — agents — within the system, program them with rules to govern their behavior, program the overall system with rules by which agents interact, and then analyze the simulated results. Agents' programming typically includes adaptation based on their experience, so substantial learning and behavior change can take place. This adds considerable complexity and realism to simulation.
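The mechanics these packages share can be sketched in a few lines of plain Python. This is an illustrative toy, not actual Swarm or Repast code; the payoff table and the reinforcement-style learning rule are invented for the example:

```python
import random

class Agent:
    """A minimal adaptive agent: it keeps a propensity to cooperate
    and nudges that propensity toward whatever paid off last time."""
    def __init__(self, rate=0.1):
        self.p_cooperate = random.random()
        self.rate = rate  # learning rate (invented value)

    def act(self):
        return random.random() < self.p_cooperate

    def adapt(self, cooperated, payoff):
        # Reinforce the action just taken, in proportion to its payoff.
        target = 1.0 if cooperated else 0.0
        self.p_cooperate += self.rate * payoff * (target - self.p_cooperate)

def step(agents):
    """One round of system-level interaction: random pairs meet and play
    a simple game (payoffs are arbitrary illustrative numbers)."""
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        ca, cb = a.act(), b.act()
        pay = {(True, True): 3, (True, False): 0,
               (False, True): 5, (False, False): 1}
        a.adapt(ca, pay[(ca, cb)])
        b.adapt(cb, pay[(cb, ca)])

agents = [Agent() for _ in range(100)]
for _ in range(50):
    step(agents)
mean_p = sum(a.p_cooperate for a in agents) / len(agents)
```

Even this skeleton has the three ingredients the article describes: agents with behavior rules, system-level rules of interaction, and adaptation based on each agent's own experience.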

The early to mid-1990s saw another important development: the adaptation of the theories of cellular automata and distributed artificial intelligence to hypothesize about social systems. Consider a simple landscape, just a grid, with supplies of food distributed randomly at various places in the grid. Add a few simple creatures that look for food, thrive and multiply if they find it, and die if they are deprived of it. Give some of them different search rules from others and see which group does better. This is Sugarscape, developed by Joshua Epstein and Rob Axtell at the Brookings Institution, collaborating with the Santa Fe Institute and the World Resources Institute. Rules for trade and interaction can readily be added, and a number of researchers have expanded on this method. The results were sufficiently interesting that Epstein and Axtell titled their book on the subject "Growing Artificial Societies," as the method seemed rich and realistic enough to yield valuable insights about social networks, trade, culture, conflict, and the spread and management of disease.
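A stripped-down Sugarscape-style landscape can be written in a few dozen lines of Python. The grid size, sugar levels, vision ranges and regrowth rule below are invented for illustration and do not reproduce Epstein and Axtell's actual rules (their model forbids co-occupancy, for instance):

```python
import random

SIZE = 20
# Random sugar endowment at each grid cell.
sugar = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]

class Creature:
    def __init__(self, vision):
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)
        self.vision = vision                    # search rule: how far it can see
        self.wealth = 5                         # sugar on hand
        self.metabolism = random.randint(1, 3)  # sugar burned per step

    def move_and_eat(self):
        # Survey visible cells in the four lattice directions; go to the richest.
        best = (self.x, self.y)
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for d in range(1, self.vision + 1):
                nx, ny = (self.x + dx * d) % SIZE, (self.y + dy * d) % SIZE
                if sugar[nx][ny] > sugar[best[0]][best[1]]:
                    best = (nx, ny)
        self.x, self.y = best
        self.wealth += sugar[self.x][self.y] - self.metabolism
        sugar[self.x][self.y] = 0

# Two groups with different search rules: short-sighted vs. far-sighted.
creatures = ([Creature(vision=1) for _ in range(30)] +
             [Creature(vision=4) for _ in range(30)])
for _ in range(40):
    for c in creatures:
        c.move_and_eat()
    creatures = [c for c in creatures if c.wealth > 0]  # starvation
    for row in sugar:                                   # sugar grows back
        for j in range(len(row)):
            row[j] = min(4, row[j] + 1)

survivors_by_vision = {v: sum(1 for c in creatures if c.vision == v)
                       for v in (1, 4)}
```

Running it repeatedly shows the kind of question the method answers: which search rule leaves more survivors, and how the answer shifts as regrowth rates or metabolisms change.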

Readers of OR/MS Today may remember the work of Kathleen Carley, Michael Prietula (now at Emory University) and their colleagues at Carnegie Mellon University on social networks using agent-based simulation, as described in the December 2000 issue. Recently both they and Epstein and Axtell have turned their attention to disease spread and management, especially when the source is a deliberate introduction (biological terrorism). Carley has also explored the ways in which a network can be disrupted or dismantled, again with emphasis on terrorism. Perhaps equally important, Carley and CMU have been instrumental in forming the largest professional association in the field, the North American Association for Computational Social and Organizational Science (NAACSOS), and hosting annual meetings of researchers — an outgrowth, as it happens, of a special interest group within INFORMS.

Defense and Intelligence


After the terrorist attack on the United States in September 2001, the defense and intelligence communities expressed greatly increased interest in understanding what certain kinds of people might do, how their organizations might behave, and how to synthesize useful information out of large data sets. The Defense Advanced Research Projects Agency (DARPA) announced new initiatives in these areas, and the intelligence community, with the National Security Agency (NSA) apparently in the lead, formed the Advanced Research and Development Activity (ARDA) with the same announced purposes. A particular ARDA interest was what it called NIMD, Novel Intelligence from Massive Data. ARDA was also intensely interested in finding ways to open the analytical and policy-making process to alternative and contrarian thinking: among other efforts, it put a link on the ARDA Web site to a CIA-sponsored book (Heuer, 1999) on intelligence analysis, focusing on ways to broaden analysis and avoid "institution-think."

Desmond Saunders-Newton, a faculty member at the University of Southern California and a reviewer for DARPA, describes the "deluge" of unsolicited proposals DARPA received in response to its late 2001 initiative. One of the stranger ones, he recalls, was a proposal from Maharishi University to investigate the potential for teams of trained experts in meditation to create a consciousness field that would lessen tensions and promote world peace. "We were concerned about some evaluation methodology issues," he says dryly. "Still," he adds, "I can't prove they're wrong."



Features of the In Silico Anthrax Model (ISAM).

While work continues in defense-related areas, many knowledgeable observers believe that the urgency of current threats and the premium on short-term payoffs have shifted attention toward tactics, computerized information sharing and coordination, and "sensor fusion," the computer-assisted, highly automated integration of data from electronic and optical collection. The cost has been a reduced emphasis on strategy, on shaping the conditions that promote or reduce conflict, and on modeling social and organizational behavior. Future initiatives may offer more opportunities.

An indication of possible future interests is Maj. John Nagl's recent analysis (his doctoral dissertation at Oxford) of counterinsurgency, comparing the British experience in Malaya and the American experience in Vietnam. He concluded that the British military succeeded by being a learning organization, while the U.S. military suffered from a stubborn adherence to principles and procedures that didn't apply to the situation. This represents a radical departure from even the most painstaking and soul-searching analyses done previously, as Nagl questioned not only the way doctrine was applied but also the way it was developed. These questions, in turn, draw attention to how people within organizations interact, depending on the rules they follow and the way the organization lets interactions happen — exactly the kind of problem agent-based modeling is intended to address.


Striking findings: screen shot from the ISAM.

Disease Modeling


Many current agent-based models are at a very general, overview level. Other researchers have used the same tools and techniques in a more detailed way. One interesting example is the In Silico Anthrax Model (ISAM), a joint project of the Potomac Institute for Policy Studies, a private non-profit organization in Arlington, Va., and the Krasnow Institute for Advanced Studies at George Mason University.

The novelty in this model is its detailed computational representation of a complex system involving the host and the disease-causing organism. Such models provide quantitative predictions concerning the host-pathogen interaction through inputs that are based on physiological assumptions and measurable parameters. Written in Swarm, the model allows the user to explain or predict the dynamics of macroscopic properties, including the disease status and patient symptoms, from rules that operate at the microscopic level of systems interacting with each other and with their local environments.

One early finding from this work is that a reasonable, intuitively appealing treatment strategy is likely to kill the patient! The anthrax bacillus initially infects either the skin or the lungs, then moves to the lymphatic system, and eventually enters the bloodstream. As it encounters increasing numbers of white cells trying to kill it, it produces a toxin that attacks many types of blood cells.

As the number of bacilli continues to rise despite administration of an antibiotic, a doctor might well decide to use high doses of multiple antibiotics. This treatment does, in fact, kill the bacilli — and causes the release of all the toxin they were making and storing. This striking finding, clinically understandable and credible without reliance on a costly, much slower research program using animals, shows the potential value of computer modeling in complex systems, and also shows the value of a level of detail that makes the model credible quickly.
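The dynamic behind that finding can be illustrated with a toy calculation. The numbers below are invented, not the ISAM's physiological parameters; the point is only the bookkeeping: each bacillus stores toxin, and killing it releases that store all at once, so an aggressive kill fraction produces a larger toxin spike than a gentler one.

```python
def treat(kill_fraction, steps=20):
    """Toy model of the dynamic the ISAM surfaced: lysing a bacillus
    dumps its stored toxin into the host. Illustrative numbers only."""
    bacilli = 1000.0        # bacterial population
    stored_per_cell = 2.0   # toxin held inside each bacillus
    free_toxin = 0.0        # toxin circulating in the host
    peak_free = 0.0
    for _ in range(steps):
        bacilli *= 1.2                           # growth under infection
        killed = bacilli * kill_fraction         # antibiotic effect
        bacilli -= killed
        free_toxin += killed * stored_per_cell   # lysis releases stored toxin
        free_toxin *= 0.7                        # host clears some toxin
        peak_free = max(peak_free, free_toxin)
    return peak_free

gentle = treat(kill_fraction=0.2)
aggressive = treat(kill_fraction=0.9)
```

With these invented parameters the aggressive regimen clears the infection faster but drives peak circulating toxin well above the gentle regimen's, which is the qualitative shape of the model's warning.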

Event-Behavior Logging and Practice Mapping


Similarly, some analysts now seek to model human psychosocial behavior in a way that is completely faithful to the diversity found in individual mentalities and social arrangements. Most agent-based models employ collections of agents of one to a few types. These agents tend to change slowly, often due to thermodynamically inspired metrics; the focus is to study emergent behavior from large numbers of like agents. In network models, the emphasis is on links over agents; here again, there tend to be few, or just one, type of link.

For more detailed, realistically complex modeling, coding the underlying events and behaviors in a rigorous, replicable, computer-usable way is increasingly important. Based on these precise records of behavior, the goal is to infer the human systems' capacities to produce behavior. Michael Fehling and Gregg Courand have devoted nearly 20 years of research, mostly at Stanford, to developing a theory, methods and a suite of computer-based tools to model human systems in this way. They record events and actions in event-behavior graphs, then use their computer-based tools, called ACCORD, to develop hypothesized practice maps from collections of similar behavior trajectories. This method formalizes the distribution of practices (capacities to produce and adapt behavior, to enforce embodied criteria) over actors, the interdependencies of these competencies/practices (i.e., "social structure"), and the interaction of human practice systems with physical systems. About five years ago, convinced that their theory and methods were valid, and seeking to help organizations, they left Stanford to start their own company, Synergia LLC.

Synergia conducts human-system modeling and analysis. Its clients' requirements include information needs (clarification, validation), risk assessment, choice and action design, and organizational design (skill development, coordination support and technology development). Synergia builds models to give clients self-understanding, to clarify their environment of actors, and to illuminate their risks and prospects.

Often just describing events rigorously yields valuable insights. In one case, Synergia's methods led to the discovery of a problem that could have led to catastrophic failure of the International Space Station. NASA had engaged Synergia to help with a planned organizational redesign that would affect the controllers who manage the physical systems and astronaut activities for the International Space Station.

During mapping, Courand and Fehling collected and formalized data on the practices of the astronauts and controllers. The astronauts and controllers relied on a sensor to alert them when the amount of water in a coolant tank dropped below a pre-set level. Sudden loss of a large amount of water could cause other systems to fail, with potentially disastrous consequences. In the event of a leak, astronauts repaired the leak, then controllers reset the sensor alert to slightly less than the volume of the tank post-repair, to detect further leakage.

The problem was that, if the tank was repaired and refilled, the standard procedure did not include resetting the alarm level back to its former value. Under these conditions, a catastrophic leak could occur without triggering the alarm until it was too late. Controllers' and astronauts' embodied communication practices did not mandate mention of the refill.
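The failure mode is easy to reproduce in a toy model. The class below is a hypothetical reconstruction, with made-up volumes and thresholds rather than NASA's actual procedures or values, of the interacting practices described above:

```python
class CoolantTank:
    """Toy reconstruction of the handover gap: the alarm threshold is
    lowered after a repair but never restored after a refill.
    Hypothetical numbers; not NASA's actual procedure or values."""
    def __init__(self, capacity=100.0):
        self.volume = capacity
        self.alarm_level = 95.0   # alert when volume drops below this

    def leak(self, amount):
        self.volume -= amount
        return self.volume < self.alarm_level  # does the alarm fire?

    def repair_and_reset(self):
        # Controllers' practice: reset the alert to slightly less than
        # the post-repair volume, to detect further leakage.
        self.alarm_level = self.volume - 1.0

    def refill(self, amount):
        # Astronauts' practice: top the tank back up. Nothing in the
        # standard procedure mandates mentioning the refill, so the
        # alarm level stays where the repair left it.
        self.volume += amount

tank = CoolantTank()
assert tank.leak(10)        # first leak trips the alarm, as designed
tank.repair_and_reset()     # alarm threshold now just below 90
tank.refill(10)             # back to full, alarm threshold unchanged
silent = not tank.leak(10)  # same-sized leak again: no alarm fires
```

After the refill, `silent` is true: a leak the size of the first one drains the tank back to its post-repair level without ever crossing the lowered threshold, which is exactly the gap the practice mapping exposed.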

Courand commented, "Organizational risks are typically carried in practice inter-dependencies, and so hidden in exactly this manner. Handover from one actor or set of actors to another is one of the most important things you can examine to find potential problems, in all kinds of settings." (In Synergia's parlance, an actor can be an individual, a component of an organization, a large organization or a nation-sized institution — the whole idea is that by focusing on practices, differences of scale are rendered irrelevant.) He added that the risks thus identified can easily be formalized into decision models or used qualitatively, as the problem and the client's preferences indicate. In this way Synergia quantifies inter-cognitive (organizational) risk.

Issues and Implications


Many agent-based models now are complex enough, and deal with sufficiently sensitive issues, that validation becomes problematic. Simply asking the validation question can change the behavior of the subject — as, for example, if a researcher informed the executives of a company of a model of how individuals can seize power in organizations and requested permission to check predictions against actual events in the company. How can model results be checked against "external, objective reality" — and does that kind of reality even exist with respect to social systems? The question of what is knowledge, what is knowing, turns out not to be settled just yet.

This problem will only get worse as models become more complex, and there is plenty of room for added complexity. Saunders-Newton points out, "We can easily conceive of models of physical processes that would consume all the computing cycles available in the world. Social science models haven't gone anywhere near that level of complexity and detail yet, but you know a nation must be much more complicated than the weather."

Therefore, he says, "we may have to rethink what social science is. We may need to focus less on prediction and reliance on some physical 'reality' external to our subject of interest, so we move away from traditional ideas of validation and toward credible use. Evaluating models is a trans-disciplinary, trans-inquiry issue. What we bring to the table is how to make choices. We have to try to tell the effective story about why we think things work the way they do."





Author's Note:

If you think I left you out, send me an e-mail. I expect to write more on this subject fairly soon.








Douglas A. Samuelson (samuelsondoug@yahoo.com) is president of InfoLogix, Inc., a consulting firm in Annandale, Va. He holds adjunct faculty appointments at George Mason University and the University of Pennsylvania. He is treasurer of NAACSOS.

References



- Advanced Research and Development Activity (U.S. Intelligence Community), www.ic-arda.org/index.html.
- Carley, Kathleen, and Prietula, Michael, "Computational Organization Theory," Lawrence Erlbaum Associates, 1994.
- Center for Social Complexity, George Mason University, http://socialcomplexity.gmu.edu.
- Courand, Gregg, and Fehling, Michael, "Organizational Risk Modeling and Analysis to Support the International Space Station," Final Project Report submitted to NASA Engineering for Complex Systems Program, Sept. 30, 2001.
- Cognitive Agent Architecture (Cougaar) Open Source Project site, www.cougaar.org.
- Epstein, Joshua M., and Axtell, Robert, "Growing Artificial Societies," Brookings Institution Press and MIT Press, 1996.
- Gilbert, Nigel, and Troitzsch, Klaus G., "Simulation for the Social Scientist," Open University Press, 1999.
- Heuer, Richards J., "Psychology of Intelligence Analysis," 1999. Available from www.cia.gov/csi/books/19104/index.html.
- MASON software site (description and download information), George Mason University, http://cs.gmu.edu/~eclab/projects/mason/.
- Nagl, John, "Counterinsurgency Lessons from Malaya and Vietnam: Learning to Eat Soup with a Knife," Praeger, 2002.
- New England Complex Systems Institute, www.necsi.org.
- North American Association for Computational Social and Organizational Science (NAACSOS), www.casos.cs.cmu.edu/naacsos/index.php.
- Potomac Institute for Policy Studies, www.potomacinstitute.org.
- Prior, Stephen, Prior, Susan (Potomac Institute for Policy Studies), De Jong, Kenneth, and Sarma, Jayshree (Krasnow Institute for Advanced Studies, George Mason University), "Modeling and Human Diseases: Why your healthcare provider may need to buy a new computer," pre-release draft quoted by permission.
- Samuelson, Douglas A., "Designing Organizations," OR/MS Today, December 2000.
- Santa Fe Institute, www.santafe.edu.
- Swarm support group, www.swarm.org.
- Synergia LLC, www.synergia.com.
- UCLA Human Complex Systems program, http://hcs.ucla.edu.