ISSUES IN EDUCATION
‘Everything you know is wrong’
By Patrick S. Noonan
A survey of any INFORMS gathering would probably ratify the notion that sensitivity analysis is a critical component of any modeling work. In fact, we might muster a majority for the proposition, “Analysis that does not address sensitivity is obviously sketchy work, to be dismissed on the spot.”
Therefore it is striking, when we look at how some of our students apply analysis in the real world, to see so little impact from such a central part of our belief system. So many organizations have processes, systems, incentives, language and an entire corporate culture built around the illusion of certainty. “Just give me the number” is a standard refrain (delivered through clenched teeth, if in response to an attempt to communicate uncertainty).
We all know the benefits of sensitivity analysis, and no doubt we sell them logically and clearly. For better results in practice, perhaps we need to turn the entire frame on its head: On the first day of every modeling course, write on the board, “Everything you know is wrong!”
Why We Care
The centrality of sensitivity to analytical work is obvious … to us. We have internalized the benefits: A model is a useful simplification of the world, but some simplifying assumptions may remove critical features, so we need to check. Our parameters include estimates and forecasts as well as quantities that vary or are in dispute, so we must find out which matter to our decisions and focus our development attention on those. Some constraints are actually moveable – at a price or cost – so quantifying potential changes in objective function can give us actionable insights into the economics of a problem. And of course, sensitivity steps can be diagnostic, helping with model verification.
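To make the "moveable constraint" idea concrete, here is a minimal sketch (not from the column; the product-mix LP and all numbers are hypothetical, using scipy.optimize.linprog): solve a small LP, then relax one constraint and compare objectives. The improvement tells us roughly what the extra capacity is worth.

```python
# A minimal sketch of "moveable constraints": solve a tiny product-mix LP,
# then relax one constraint and see how much the objective improves.
# All numbers are hypothetical, chosen only for illustration.
from scipy.optimize import linprog

# Maximize 3x + 5y  (linprog minimizes, so negate the coefficients)
c = [-3, -5]
A = [[1, 2],   # machine hours:  x + 2y <= 40
     [3, 1]]   # labor hours:   3x +  y <= 30
b_base = [40, 30]

base = linprog(c, A_ub=A, b_ub=b_base, method="highs")

# "Buy" 5 more machine hours and re-solve.
b_relaxed = [45, 30]
relaxed = linprog(c, A_ub=A, b_ub=b_relaxed, method="highs")

gain = -relaxed.fun - (-base.fun)   # improvement in the (maximized) objective
print(f"Base profit: {-base.fun:.2f}")
print(f"Profit with 5 extra machine hours: {-relaxed.fun:.2f}")
print(f"Value of those hours: {gain:.2f}  (worth buying if they cost less)")
```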
Experienced modelers so deeply “know” that our process will include a testing step that we have no issue with prototyping, making rough estimates and reducing a problem to its simplest form. We take for granted that our model is wrong, but that we will make it more useful before anyone must decide and take action.
We also know we do this most effectively if we actually have a model, some model, to work with. We build a rope bridge across the canyon of a problem, so we get a better look at what we need to engineer into subsequent improvements. The motto “KISS” means, to most of us, “Keep it simple … and start!” The promise of sensitivity is an effective vaccine against analysis paralysis, because we need not achieve perfection – complete understanding, “all” data, solid forecasts, etc. – before moving forward.
A Point About Points
Why is the world, for the most part, a place that skips these steps? Why do most people, in effect, embrace certainty?
First of all, many incentives reward boldness and confidence, so the person who delivers point estimates is often seen as stronger and more competent. Also, of course, even the individual cells of our ubiquitous spreadsheet software seem to say, “Just give me the number.” Practical people gravitate toward a “PIPO” approach: points in, points out.
Let’s look at how we sell sensitivity and put it into a better context. Our standard language about sensitivity tends to include such questions as:
- If we are a little off in our parameter estimates, what difference does it make to our conclusions, especially our optimal decision strategy?
- How robust is our recommendation? That is, how far wrong could we be in our estimates before our optimal strategy is no longer the best? (A small break-even sketch of this question follows the list.)
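As a hypothetical illustration of the second question, the sketch below sweeps an uncertain demand estimate and finds the switching point at which the preferred plan changes; the two plans and their costs are invented for the example.

```python
# A hypothetical break-even sketch for the robustness question above:
# two plans whose cost depends on an uncertain demand estimate. How far
# can the estimate drift before the "optimal" plan changes?
import numpy as np

def cost_plan_a(demand):        # low fixed cost, high per-unit cost
    return 10_000 + 12.0 * demand

def cost_plan_b(demand):        # high fixed cost, low per-unit cost
    return 25_000 + 7.0 * demand

base_estimate = 2_500           # our point estimate of demand
demands = np.linspace(1_000, 5_000, 401)
a, b = cost_plan_a(demands), cost_plan_b(demands)

switch = demands[np.argmax(b < a)]   # first demand where plan B becomes cheaper
print("Best plan at the estimate:",
      "A" if cost_plan_a(base_estimate) < cost_plan_b(base_estimate) else "B")
print(f"Plans switch at roughly {switch:.0f} units "
      f"({switch / base_estimate - 1:+.0%} from the estimate)")
```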
Our framing suggests – to the novice – that a number “being right” is the proper state of the world, and that errors, being “a little off,” are exceptions. This has it exactly backwards, but a tweak in our approach can turn it around.
‘EYKW’
The primary reason for the testing step is to explore the foundation of assumptions we make to get started. Rather than suggesting that we search for the ones that might be “wrong,” flip the polarity: Assume they’re all wrong, but that only some of them matter; our job is to investigate which ones.
Write “EYKW” on the board, and then challenge students to give examples of things they know that are “right,” propositions (especially numerical ones) that are completely certain and have infinite precision. Of course they will find some – “I am wearing exactly 2.00000 … shoes” – but quickly they will realize that these are exceptions and atypical of modeling assumptions.
Most parameters can be disputed, so the umbrella assumption should be uncertainty, with certainty demoted to a quaint special case. Our goal, in a sense, is to replace the notion, “points are the best we could do,” with its opposite, “points are the worst we can do.” The resemblance of PIPO to GIGO – “garbage in, garbage out” – is both intentional and helpful here: It reinforces that points tend to be junk, and that models built on points alone probably belong on the trash heap.
Moving Away from GIGO
What do we offer as a replacement? Increasing depths and levels of sophistication in our thinking, backed up by software tools and our OR/MS toolkit.
We do still start with points, but we do not stop there. Base values help us build rope bridges, but they do not support commercial traffic, because we don’t believe them anymore. They are simply instruments, convenient fictions, placeholders.
Our point estimates are based on something. We may not have an infinitely precise estimate of a parameter, but we don’t believe that any or every value is possible. We usually have some sense of a plausible range of values we could sweep through. Excel’s Data Tables and other tools can be introduced here as a “ranges in, ranges out” (RIRO) perspective.
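As a rough code analogue of what a one-way Data Table does (a sketch only; the profit model and numbers below are hypothetical), we can sweep one uncertain input through its plausible range and tabulate the output:

```python
# A rough code analogue of a one-way Excel Data Table (RIRO): sweep one
# uncertain input through a plausible range and tabulate the model output.
# The model and numbers are hypothetical placeholders.
import numpy as np

def profit(unit_margin, volume=10_000, fixed_cost=150_000):
    return unit_margin * volume - fixed_cost

# We don't fully believe the point estimate of 18.50, but we believe the range.
margins = np.arange(14.0, 23.1, 1.0)
print(" margin     profit")
for m in margins:
    print(f"{m:7.2f}  {profit(m):>9,.0f}")
```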
Of course, RIRO is a significant improvement over PIPO, but there is one level even better: We may have a good sense of the relative likelihood of various values within our ranges. We probably don’t consider every possible value to be equally likely. We may not believe there are “sharp edges” that define a range, beyond which there is no possibility. We can apply a probability distribution to the possible values – “distributions in, distributions out” (DIDO) – and Monte Carlo techniques can be introduced first as the ultimate in sensitivity analysis.
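Continuing the hypothetical profit model from the RIRO sketch above, a minimal DIDO sketch replaces the point estimates with assumed distributions, pushes samples through the same model and summarizes the output distribution (the triangular and normal parameters are placeholders, not calibrated estimates):

```python
# A minimal DIDO sketch: replace point estimates with distributions,
# push samples through the same model, and look at the output distribution.
# The distribution parameters below are hypothetical, not calibrated.
import numpy as np

rng = np.random.default_rng(42)

def profit(unit_margin, volume, fixed_cost=150_000):
    return unit_margin * volume - fixed_cost

n = 100_000
margin = rng.triangular(left=14.0, mode=18.5, right=23.0, size=n)
volume = rng.normal(loc=10_000, scale=1_500, size=n)

outcomes = profit(margin, volume)
print(f"Mean profit:         {outcomes.mean():>11,.0f}")
print(f"5th-95th percentile: {np.percentile(outcomes, 5):>11,.0f} "
      f"to {np.percentile(outcomes, 95):,.0f}")
print(f"P(loss):             {np.mean(outcomes < 0):>11.1%}")
```

The output is no longer “the number” but a distribution of outcomes, which is exactly the shift from PIPO the column argues for.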
Students getting things wrong will always be a challenge in our courses, and in practice, but actually reminding them that wrongness is the natural state of the world can help them do some important things right.
Patrick S. Noonan (pnoonan@emory.edu) is an associate professor (clinical) at the Goizueta Business School at Emory University.
