Not taught in school (but useful in the ‘real world’)

Scott Nestler and Sam Huddleston

All models are wrong, but some are useful. – George Box

Someone recently asked, “It’s a great time to be a quant grad, but what didn’t they teach you in school that you really need to know?” The first three things that occurred to us are:

  1. The need to fully embrace the second half of the famous quote from George Box [1].
  2. How to communicate the results of your analysis to decision-makers.
  3. The importance of creating a good visualization of your data/model (a subset of the previous task).

With regard to models, one of the most important keys “in the real world” is understanding the difference between a great-fitting model and a useful one. Research and modeling should be structured toward improving the decision-maker’s ability to make decisions rather than simply achieving a high R² value. In academic environments, students are often introduced to a series of methods and then given data sets with which to practice; the goal is almost always to develop the most accurate model possible from the available methods. In the real world, it is often more important to focus first on developing models that are useful and then to improve their accuracy over time, under the constraint that their utility isn’t compromised. The use of blackjack card-counting systems provides a good example of how model utility, rather than raw performance, is what matters in the real world.

Blackjack Example

Blackjack counting systems are not very “accurate” in the sense that even when the “deck is hot” (when the odds have swung in favor of the players vs. the dealer), there is still a high probability that the dealer will win a given hand (and the bettor will lose money to the house). These systems aren’t great predictors of the outcome of individual hands; the model is often wrong. However, card counts are useful in that they can be employed systematically to win money, as documented in numerous popular books and movies. These counting systems (models) provide information that supports decisions about what size bet to make, given the current state of the system. While many different card-counting systems have been developed, the most useful ones are relatively simple, because more complex systems (which may be more accurate) are almost impossible to employ effectively in a chaotic casino environment. This is just one example of how, in the real world, there is a trade-off between model accuracy and model utility, with decision-making utility carrying more weight.
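The article names no particular counting system, but the well-known Hi-Lo count is a concrete instance of the kind of simple, usable model described above. The sketch below is illustrative only: the card values are the standard Hi-Lo assignments, while the bet-sizing rule and the six-deck assumption are hypothetical choices made for the example.

```python
# Illustrative sketch of the Hi-Lo card-counting system.
# Low cards (2-6) leaving the deck favor the player; high cards
# (10-A) leaving the deck favor the dealer.

HI_LO_VALUES = {
    **{r: +1 for r in ("2", "3", "4", "5", "6")},
    **{r: 0 for r in ("7", "8", "9")},
    **{r: -1 for r in ("10", "J", "Q", "K", "A")},
}

def running_count(cards_seen):
    """Sum the Hi-Lo values over all cards observed so far."""
    return sum(HI_LO_VALUES[c] for c in cards_seen)

def true_count(cards_seen, total_decks=6):
    """Normalize the running count by the number of decks remaining."""
    decks_remaining = total_decks - len(cards_seen) / 52
    return running_count(cards_seen) / max(decks_remaining, 0.5)

def bet_units(cards_seen, total_decks=6):
    """Hypothetical bet-sizing rule: bet more units as the true
    count (the player's estimated advantage) rises."""
    return max(1, int(true_count(cards_seen, total_decks)))
```

Note that the model says nothing reliable about any single hand; its utility lies entirely in the bet-sizing decision it supports, which is exactly the accuracy-versus-utility point made above.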


Figure 1: The authors recently saw an attempt to explain the terms precision and accuracy, and the difference between systematic and reproducibility errors. In lieu of three paragraphs of text, they suggest the graphic shown here.

The point about communication skills was hammered home when the first author was teaching at the Naval Postgraduate School in Monterey, Calif. In the biennial (every two years) program review, the number one comment was that graduates arrived with all of the technical skills they needed, but their ability to communicate with senior leaders and decision-makers, whose time is limited and whose span of responsibility is broad, was lacking.

One common mistake made by many analysts is a failure to make a distinction between a technical report or presentation and an executive summary or decision brief. A technical report is a document written to record how an analysis was done (so that it can be replicated) and is designed to make a scientific and logical argument to support a set of conclusions. Therefore, a common outline for such a report might be: introduction, literature review, problem definition, methodology, results and conclusions. A presentation to a decision-maker using this format is likely to produce impatience and frustration – “Just get to the bottom line.”

Executive Summary & Decision Brief

The structure of a good executive summary or decision brief relies on the logical argument of the technical report (and should only be written once this logic is firmly established) but presents the logic in reverse order. An executive summary should lead with a brief statement of purpose to orient the reader; summarize the conclusions and recommendations (i.e., the bottom line up front); present the results of the analysis, preferably in an easy-to-read chart; and briefly highlight the methodology and data used. One way to see the distinction is that the logic of a technical report can be summarized with a series of “Therefore . . .” statements, while the logic of an executive summary should rely on a series of “Because . . .” statements.

These ideas appeared in a 2013 blog post [2] by Polly Mitchell-Guthrie, chair of the INFORMS Analytics Certification Board (ACB), which oversees the Certified Analytics Professional (CAP®) program. She writes, “Much as we lament the shortage of graduates from the STEM disciplines (science, technology, engineering and math), it is arguably more difficult to find within that pool graduates who also have the right ‘soft skills.’” Polly points out that “selling” – yourself and your skills as an analyst – to convince others that you can solve their problems and improve their decision-making is critical. She suggests Daniel Pink’s book, “To Sell Is Human: The Surprising Truth About Moving Others” [3]. While “hard math” is critical in many instances, convincing someone that you have the technical skills to solve their problem is often more difficult. This is further highlighted in the seven domains of the CAP Job Task Analysis [4]: business problem framing, analytics problem framing, data, methodology selection, model building, deployment and lifecycle management. Not surprisingly, many of the supporting 36 tasks and 16 knowledge statements involve communication skills.

These shortcomings among analysts are nothing new. In 2011, an Analytics magazine article [5] by Freeman Marvin, CAP, and Bill Klimack highlighted six “soft” skills every analyst needs to know: partnering with clients, working with teams, problem framing, interviewing experts, collecting data from groups and communicating results. Failure to effectively communicate results can lead to a project that is a technical success but has no impact. Instead of dragging the decision-maker through the entire chronology of an analysis, they propose, tell a compelling story with a beginning, middle and end.

One of the best ways to tell a compelling story is to use pictures (or graphics) to communicate the results of an analysis. Unfortunately, methods and principles for visually communicating the results of an analysis are often not taught in technical programs even though, as Mike Driscoll asserts in a popular online presentation [6], the ability to “munge, model and visually communicate data” are “the three core skills of data geeks.” Reviewing the work of Edward Tufte [7] and William S. Cleveland [8] provides an excellent foundation for visually communicating quantitative information. “Choosing a Good Chart” [9] by Abela is also useful, as it suggests an appropriate type of graphic for nearly any type of data and purpose.

Summing Up

In summary, first focus on developing useful models. Second, when communicating with decision-makers, start by describing the utility of those models – how they can be used and what difference they will make. Only after communicating the practical effects of employing the model/analysis should you explain how you arrived at your conclusions (follow the logic of the technical report backwards). Finally, the most compelling way to communicate these ideas is through graphical products that clearly convey the key results of your analysis. As they say, “A picture is worth a thousand words.”

Scott Nestler, Ph.D., CAP, is an associate professional specialist in the Department of Management, Mendoza College of Business, University of Notre Dame.

Sam Huddleston, Ph.D., is an operations research analyst in the U.S. Army.

Disclaimer: The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Army, the Department of Defense or the U.S. government.