President’s Desk

A rose by any other name …


Brian Denton
INFORMS President

In my last President’s Desk article, I referenced the classic Clint Eastwood western, “The Good, the Bad and the Ugly.” This time I chose a more refined title based on a well-known Shakespeare quote: “a rose by any other name would smell as sweet.” I chose it because there has recently been a lot of debate about the terms we use to describe what we do. “Operations research” and “management science” are among the longest-standing and most recognized names within our field, but they garner little recognition outside it. Some people advocate for new names such as “analytics” and “data science.” One thing is certain: few topics are more likely to start an enthusiastic discussion among INFORMS members than suggesting a change to the name of our field. Rather than fuel that discussion, in this article I focus on what I believe is the root cause: a broadening of our field driven by the increasing availability of data and the methodological challenges that have followed this trend.

When I started on the INFORMS Board in 2012 – then as secretary of INFORMS – an effort was underway to define the term “analytics.” From that effort emerged the following definition: “Analytics is the scientific process of transforming data into insights for the purpose of making better decisions.” You may or may not like the exact wording of this definition, but I believe it recognizes some important considerations that are driving a lot of change in our field. Over the last decade, we have seen rapidly increasing opportunities to use “real data” to improve decision-making. I believe there are at least two fundamental reasons for this. The first reason is the increasing availability of data due to advances in data collection, storage, curation, maintenance and access. The second reason is the growing awareness of methodological challenges of using observational data to improve decisions. Both of these are presenting opportunities to extend the frontiers of our field.

The increasing availability of data has led to opportunities to create models in new contexts where data were not readily available before. New devices such as sensors, smartphones and RFID tags have lowered the economic barriers to data collection. At the same time, the abundance of data is creating the need to scale standard methods to much larger data sets than ever before. For example, the challenges of using standard techniques such as regression or clustering in big data contexts have driven innovation in fast decomposition methods that can leverage multiple computers in parallel across distributed computing platforms to solve the optimization problems at the core of these approaches. As another example, when large data sets cannot be loaded into memory, new sequential analysis approaches are needed to optimize online decision-making. Similarly, the availability of high-dimensional data has created demand for new optimization methods for variable selection in the training of predictive models.
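To make the online decision-making point concrete, here is a small illustrative sketch (my own, not from the article) of one classical formulation, the multi-armed bandit. An epsilon-greedy strategy chooses between two simulated Bernoulli “arms” and updates its reward estimates one observation at a time, so the full history never needs to be held in memory. All names and parameter values below are chosen purely for illustration.

```python
import random

def epsilon_greedy(probs, rounds=5000, eps=0.1, seed=0):
    """Sequentially choose among Bernoulli arms with success
    probabilities `probs`, exploring with probability `eps`."""
    rng = random.Random(seed)
    counts = [0] * len(probs)      # pulls per arm
    values = [0.0] * len(probs)    # running mean reward per arm
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(len(probs))          # explore
        else:
            arm = max(range(len(probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: no past observations are stored
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.3, 0.7])
```

After a few thousand rounds the strategy concentrates its pulls on the better arm, even though it only ever processes one data point at a time.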

The methodological challenges of using observational data have not traditionally been a priority in our field. In scientific contexts, the importance of randomized, controlled trials for mitigating bias is well recognized. However, many new applications in our field rely on complex models that are parameterized using observational data. Such data suffer from many problems, including missing data, measurement error and various sources of bias, that can negatively affect the decisions derived from operations research models, degrading the value they provide or, worse, causing them to do more harm than good.
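To see how even a simple data problem can distort a model, consider the following deterministic sketch (constructed purely for illustration) of measurement error. The outcome is an exact linear function of the true covariate, yet when the covariate is observed with error, the ordinary least-squares slope is attenuated toward zero, so a model parameterized from the observed data understates the true relationship.

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    var = sum((xi - mx) ** 2 for xi in x) / n
    return cov / var

x_true = list(range(20))
y = [3 * xi + 1 for xi in x_true]          # true slope is exactly 3

# the covariate is recorded with alternating +/-2 measurement error
x_obs = [xi + (2 if i % 2 == 0 else -2) for i, xi in enumerate(x_true)]

slope_true = ols_slope(x_true, y)          # recovers 3 exactly
slope_obs = ols_slope(x_obs, y)            # attenuated below 3
```

The bias here comes entirely from the noisy covariate; no amount of additional data of the same kind would remove it.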

These problems are rarely covered in operations research classes on topics such as optimization and stochastic models. In fact, many excellent textbooks on topics like these do not address the challenges of parameterizing models at all. This barrier to using operations research models in practice presents important opportunities for new research at the intersection of statistics and operations research.

The above examples of research opportunities suggest the need for more statistics in our field. However, there is an equally strong case for better incorporating operations research methods into statistics. Many statistical techniques rely on optimization, but heuristics are often the norm because of the historical difficulty of obtaining optimal solutions. Recent advances in optimization and computing technology have opened the door to solving these problems to optimality, and in some cases this is exposing significant errors in commonly accepted heuristics [1]. Another area in which our field is contributing is causal inference: in the context of observational data, methods that match data elements by similarity in their covariates are needed to untangle cause-and-effect relationships [2].
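The idea behind covariate matching can be illustrated with a toy sketch (my own simplification, not the balance-optimization method of [2]). In the constructed data below, treatment is confounded with the covariate: treated units tend to have higher covariate values, so the naive difference in mean outcomes overstates the true treatment effect of +2. Matching each treated unit to the control with the nearest covariate value removes that confounding.

```python
# (covariate, outcome) pairs; outcome = covariate, plus +2 if treated
controls = [(x, float(x)) for x in range(10)]
treated = [(x, float(x) + 2.0) for x in range(5, 10)]  # true effect = +2

# naive estimate: difference in mean outcomes, ignoring covariates
naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in controls) / len(controls))

def match_estimate(treated, controls):
    """Nearest-neighbor matching on the covariate: compare each
    treated unit only with its most similar control."""
    diffs = []
    for xt, yt in treated:
        xc, yc = min(controls, key=lambda c: abs(c[0] - xt))
        diffs.append(yt - yc)
    return sum(diffs) / len(diffs)

matched = match_estimate(treated, controls)
```

Here the naive estimate is 4.5, more than double the true effect, while the matched estimate recovers +2 exactly because every treated unit has a control with the identical covariate value.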

The above examples are just the tip of the iceberg of recent advances in data-driven models that are making our field more relevant than ever in many important industrial and scientific contexts. I believe this is causing a fundamental shift in our field that is long overdue. Whether this calls for new terms to describe our field is up for debate. What is clear, however, is that these changes are creating new opportunities for researchers and practitioners alike: increased demand for educational programs at universities, more job opportunities for future academics and practitioners, and greater opportunities to increase the impact we have on society.


  1. Bertsimas, D., King, A., Mazumder, R., 2016, “Best Subset Selection via a Modern Optimization Lens,” The Annals of Statistics, Vol. 44, No. 2, pp. 813–852, DOI: 10.1214/15-AOS1388.
  2. Nikolaev, A. G., Jacobson, S. H., Cho, W. K. T., Sauppe, J. J., Sewell, E. C., 2013, “Balance Optimization Subset Selection (BOSS): An Alternative Approach for Causal Inference with Observational Data,” Operations Research, Vol. 61, No. 2, pp. 398–412.