Thematic Spotlight: History of Simulation

"...the foundation for the field - not just the work that has been published in Management Science - was provided by two papers published long before simulation had its own department in the journal ... the seminal papers of Conway, Johnson, and Maxwell (1959) and Conway (1963)." - Barry Nelson

While it wasn’t until the 1940s that the first general-purpose electronic computer (ENIAC) was constructed, the underlying principles behind the use of simulation techniques had already been in play for thousands of years. One such example involves haruspicy, the inspection of the liver of a sacrificial animal for blemishes, an inspection said to have been performed with the aid of well-developed models of animal organs. For instance, a clay liver model from Babylonia, now in the British Museum and dated to 1900-1600 BCE, is thought to have been used to forecast the outcome of an illness. Accurate interpretation mattered greatly, since major decisions rested on the findings from these inspections.

Modern-day simulation is widely believed to have gained popularity via the Monte Carlo method, invented in the late 1940s by Stanislaw Ulam (1909-1986) while he was working on nuclear weapons at the Los Alamos National Laboratory. A little-known fact, however, is that an even earlier variant of the Monte Carlo method can be seen in Buffon’s needle problem. While the problem itself was the earliest problem in geometric probability to be solved, its solution can, in the case where the needle length does not exceed the width of the strips, be used to design a Monte Carlo method for approximating the value of π. The term ‘Monte Carlo’ was coined by Nicholas Constantine Metropolis (1915-1999) because of the similarity of the statistical simulation method to games of chance, and because Monte Carlo is a famous center for gambling.
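The Buffon's needle estimator can be sketched in a few lines. If a needle of length l ≤ t is dropped at random onto a floor ruled with parallel lines spaced t apart, it crosses a line with probability 2l/(πt), so counting crossings over many drops yields an estimate of π. Below is a minimal sketch of that idea; the function and parameter names are illustrative, not drawn from any particular library:

```python
import math
import random

def estimate_pi_buffon(n_drops, needle_len=1.0, strip_width=1.0, seed=42):
    """Estimate pi via Buffon's needle, assuming needle_len <= strip_width.

    A needle crosses a line when the distance from its center to the
    nearest line is at most (needle_len / 2) * sin(theta), where theta
    is the angle between the needle and the lines.
    """
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        # Distance from the needle's center to the nearest line: U[0, t/2]
        d = rng.uniform(0, strip_width / 2)
        # Angle between the needle and the lines: U[0, pi/2]
        theta = rng.uniform(0, math.pi / 2)
        if d <= (needle_len / 2) * math.sin(theta):
            crossings += 1
    # P(crossing) = 2l / (pi * t)  =>  pi ~ 2 * l * n / (t * crossings)
    return (2 * needle_len * n_drops) / (strip_width * crossings)

print(estimate_pi_buffon(1_000_000))  # close to 3.14159
```

Note that convergence is slow: the standard error shrinks only as the square root of the number of drops, which is exactly the trade-off that made early Monte Carlo work so computationally demanding.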

While the 1950s began to see the emergence of simulation as a technique with potentially great impact, it was still years away from being seen as a useful tool. The required skill set, combined with the total time needed to obtain results, was disheartening. Moreover, as any graduate student working in Operations Research today can attest, early theoretical models were far from accurate. For instance, attempting to model the performance of a telephone network during peak periods produced ambiguous results, primarily because the system did not conform to the queueing theory models available at the time. Julian Reitman, one of the pioneers of the simulation industry, attempted such a model on an IBM 650 (one of IBM’s early computers and the world’s first mass-produced computer) during his time as a graduate student at New York University, and had this to say regarding the effort: ‘we accomplished less than half of what we set out to do, took twice as long and overspent our budget at least twice. In conclusion, computer simulation was not a useful tool in the 1950s.’

This isn’t to say that people weren’t fantasizing about a world in which predictive models based on computer simulation would become the norm. In 1951, Isaac Asimov published the novel “Foundation”, in which he introduces the concept of psychohistory: the idea that ‘while one cannot foresee the actions of a particular individual, the laws of statistics as applied to a large group of people could predict the general flow of future events’. What began as science fiction eventually made its way into mainstream research. One example of this is the Living Earth Simulator, which would use an immensely large amount of data to uncover the underlying sociological and psychological laws that underpin human civilization. Estimated to cost about £900 million, the Living Earth Simulator would gather knowledge from a ‘Planetary Nervous System’ to try to predict societal fluctuations such as political unrest, economic bubbles, disease epidemics and the like.

As simulation started to develop into a modeling tool in the early 1960s, Richard W. Conway, in his early years as a faculty member at Cornell University (now Professor Emeritus), provided the first widely recognized framework sketching out a research agenda for the growing field. Along with B. M. Johnson and W. L. Maxwell (Conway’s first PhD student), also of Cornell University, Conway established that simulation problems fall into two broad categories: the construction of the simulation model and the analysis of the simulation results. At around the same time, Keith Douglas Tocher, while working for United Steel Companies (later Professor of Operational Research at the University of Southampton), developed a general simulation program whose main task was to simulate the operation of a production plant. This turned out to be the first discrete-event simulation package and also led to the first book on simulation, The Art of Simulation (1963). 1967 also saw the establishment of the Winter Simulation Conference (WSC), which remains, to this day, a benchmark for advances in the field of simulation systems.

In the 1970s, simulation was taught to Industrial Engineers, who often complained of long hours spent at a computer terminal debugging code seemingly filled with obscure bugs. The popularity of simulation, however, was on the rise, with an increasing number of conferences and sessions through the 1970s and early 1980s. A common fear during this period was that the field was extremely complicated and rather time-consuming because of all the programming involved. The next couple of decades saw the arrival of commercially available computerized manufacturing systems, complemented by an extensive array of computer hardware and software, and, like most computer-based tools at the time, simulation got a major boost as it was propelled into the 1990s. Models were increasingly used to design plant layouts and to study the flow of operations within those plants.

Today, simulation is almost taken for granted as a tool that accounts for the interactions between a system’s constraints and resources and delivers time-based, cost-effective solutions for the functionality of a system. That said, while ease of use and a wide range of applications are probably two of simulation’s biggest strengths, practitioners must remember that the system being modeled is not the real world, and that it is extremely hard to account for every factor that could affect the outcome of an experiment. This drawback is exemplified by Australia, where the rabbit and the fox were once introduced for recreational purposes and as part of an attempt to make the continent as much like Europe as possible. What this implicit model did not take into account was that the rabbits would soon eat the food necessary for local species to survive, turning them into a major pest. Before long, foxes and rabbits had spread across the continent and played a major role in the decline of a number of native animal species. Essentially, the problems were exacerbated because the ‘modelers’ failed to account for the consequences of extraneous factors.

Advances in computing power have enabled the creation of extremely large models that can provide highly accurate results in a matter of hours, if not minutes. Power system behavior, weather conditions, electronic circuits, industrial plant operations, and disease progression and epidemic modelling are just a few of the applications handled by well-established simulation software today. It can therefore rightfully be said that simulation is here to stay and is becoming more valuable by the day; indeed, much of what was previously thought of as science fiction may soon be made reality. As simulation software advances rapidly, especially for real-world scenarios with extremely critical consequences, we can expect to have valuable tools that aid us in a variety of decision-making processes.



(Edited By: Siddhartha Nambiar)