Models that are never wrong

New tools for multi-criteria decision-making return control to the decision-maker

By Ignacy Kaliszewski and Douglas A. Samuelson

The road best taken: A new toolkit for multi-criteria decision-making.

A new toolkit for multi-criteria decision-making (MCDM) offers a possibility most OR/MS practitioners consider impossible: models that are never wrong. George Box’s famous proclamation, “All models are wrong, but some are useful,” still prevails, but we suggest an alternative idea: Many models are not so much wrong as ill-conceived.

One major reason for this is that every practical decision problem has multiple desired aspects, yet traditional single-criterion optimization approaches collapse the trade-offs into one objective. Perhaps there are also a number of constraints, but in all cases the model structure imposes a single, pre-specified set of trade-offs. Although multiple-aspect/multiple-criteria problem framing has grown in popularity since the 1970s, its use still lags far behind the emphasis – in our view, over-emphasis – on single-criterion optimization. (It is not widely remembered now that the famous Kuhn-Tucker optimality conditions applied originally to multi-criteria problems [1].) Moreover, the benefit of MCDM has not been readily apparent, because the MCDM (also known as multi-criteria decision analysis, or MCDA) domain offers so many different approaches and methods, all very elegant and far-reaching, but too little guidance on how to apply them.

MCDM Essentials in a Ready-to-Use Tool

Now, however, we have the essentials of MCDM extracted in the form of a tool, ready for use in any decision problem that occurs in practice. The new work [2] by the first author of this article refers to a generic model that consists of a set of admissible decision variants and their multi-aspect valuations. This is an implementation and extension of earlier theory [3]. The newsletter of the International Society on Multiple Criteria Decision Making, available free of charge to members (there is no membership fee), is an excellent source of current research in the domain.

In the generic model, the admissible set is a proper subset of the set of all possible decision variants – that is, the admissible set corresponds to the feasible region in traditional optimization. Essential aspects of the problem, for which an admissible (decision) variant selected as the most preferred should have favorable valuations (appraisals, characteristics), become criteria (alternatively: objectives). To harness computers for computations on the model, we need numerical valuations (dollars, tonnes, etc.). Then we can evaluate variants by criterion (objective) functions. Without loss of generality, and to ensure consistent exposition, we assume for this article that all valuations are of the “more is better” kind.

The set of rational candidates for the most preferred variant of a decision problem fitting this model consists of admissible variants that are efficient, i.e., not dominated by any other admissible variant with respect to criteria valuations. That is, no other admissible variant is at least as good on every criterion and strictly better on at least one; for instance, worker A, as qualified as worker B but less productive, is dominated by B. Thus, we arrive at a Pareto frontier: a set of choices with the property that we cannot move from one choice to another to make ourselves better off in some way without making ourselves worse off in some other way.
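The dominance check is mechanical enough to automate. Below is a minimal Python sketch under the “more is better” assumption; the worker names and (qualification, productivity) numbers are hypothetical illustrations echoing the example above, not data from the toolbox itself.

    # Minimal sketch of dominance filtering under "more is better".
    # Workers are valued by hypothetical (qualification, productivity) pairs.

    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def efficient(variants):
        # keep only variants that no other variant dominates
        return {n: v for n, v in variants.items()
                if not any(dominates(w, v) for m, w in variants.items() if m != n)}

    workers = {"A": (7, 4), "B": (7, 6), "C": (9, 3)}
    print(efficient(workers))  # {'B': (7, 6), 'C': (9, 3)} -- A is dominated by B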

None of these choices is universally or objectively better than any other. To select the admissible variant that the decision-maker (DM), individual or collective, considers the most preferred, he or she can be assisted, but not dictated to, by the model. Rather, the model (or models) identifies a number of admissible decision variants and the consequences of choosing them, and the DM then – and only then, not at the beginning of the modeling process – exercises his or her final preference.

Why the Model Is Never Wrong

In this class of analyses, the question of whether the model is wrong or correct is immaterial. In fact, there is not just one model. Following Herbert Simon’s four-phase decision-making scheme [4], there is a sequence of models, each tentative, representing the DM’s perception of the problem at that time. The four phases – intelligence (of the problem), design (of the model), choice (of the most preferred variant) and review (whether the most preferred variant fits reality) – close in a cycle, forming a learning loop. The cycle repeats until the DM chooses. The last model in the sequence is, subjectively and for the time being, correct for him or her. The DM, the sovereign of the decision process, is not a part of the model. Usually, what the DM needs is assistance in the choice (third) phase, specifically a clear evaluation of the likely consequences of the offered choices.

To illustrate via a simple example, suppose that a ship captain, using old charts and outdated navigation methods, wants to ply trade routes along a coastline with many bays and estuaries. He knows a few ports of call where he could trade profitably. Formulating his route choice in traditional ways, he might, to limit the risk of running into hazards, constrain his route not to deviate from the line connecting those points – and miss many opportunities to enter harbors where profitable trade could occur. To put it mathematically, he has wrongly assumed that the feasible region is convex and continuous, so he can find high-value points by linear interpolation between the good points he knows. However, given a less accommodating feasible region, those new computed high-value points turn out not to be reachable, and his linear methods won’t tell him where the best feasible point is for the compromises he would prefer to make. With the multi-criteria approach described here, the captain can learn of estuaries not on his smooth interpolation line and then find, case by case, high-value actions that reflect how much risk and extra effort and expenditure he is willing to accept, given the potential profit and the attractiveness of alternatives.

Here’s How It Works

For each criterion function taken individually, the best value, i.e., the maximum (as assumed above) over the admissible variants, is calculated. If there exists an admissible variant that maximizes all criteria functions simultaneously (this happens, but not often), then this variant is clearly the most preferred, and this terminates the decision-making process. Indeed, the valuation of this variant with respect to all criteria is the best possible. Such a variant is called ideal.
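Computing the ideal valuation is just a componentwise maximum. A minimal sketch, continuing the hypothetical two-criterion data from the efficiency example above:

    # Componentwise best values over the admissible variants (hypothetical data)
    variants = {"B": (7, 6), "C": (9, 3)}
    ideal = tuple(max(v[k] for v in variants.values()) for k in range(2))
    print(ideal)  # (9, 6) -- no single variant attains both maxima,
                  # so the ideal variant does not exist here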

Figure 1: The case where the compromise half line q crosses the set of variant valuations at the valuation of an efficient variant; the contour of the corresponding weighted max function at that valuation is shown (dashed line).

If the ideal variant does not exist, one is still left with the ideal valuation. Even though the ideal variant does not exist, the ideal valuation is valuable information about the problem: What is the best we could possibly do under ideal conditions that do not exist? Comparison to this value enables us to see how much the valuation of any admissible variant, an efficient (i.e., not dominated) variant in particular, represents a concession (or sacrifice) versus the ideal valuation. And here comes the time for the DM to act: “I say! If (to get an admissible variant) I have to make concessions versus the ideal valuation, let this be on my terms!” Those terms are represented by a vector of concessions with as many components as there are criteria, all positive. (Any concession costs something; that’s why it’s a concession.) From these choices, we construct the compromise half line, a set of possible (not necessarily attainable) valuations that preserve the ratio of trade-offs the DM is willing to make among criteria as we move away from the unattainable ideal.
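Concretely, if the ideal valuation is the point computed above and t is the DM’s vector of concessions, the half line consists of points obtained by stepping away from the ideal in proportion to t. A small sketch with the hypothetical numbers used so far (t = (1, 2) is an assumed choice, meaning the DM concedes twice as much per step on criterion 2 as on criterion 1):

    # Points on the compromise half line y(s) = ideal - s*t, for s >= 0
    ideal, t = (9, 6), (1, 2)
    half_line = lambda s: tuple(y - s * tk for y, tk in zip(ideal, t))
    print([half_line(s) for s in (0, 0.5, 1.0)])
    # [(9, 6), (8.5, 5.0), (8.0, 4.0)] -- ratios of concessions stay fixed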

To adhere to the DM’s current preferences, represented by the vector of concessions, one should search for valuations of efficient variants on the compromise half line, using a construct that collapses a variant’s valuation into a single number and yields an efficient variant. One such construct is the weighted linear function, the sum of weighted objective functions (weights positive). Given weights, this function is maximized over the set of admissible variants only by efficient variants whose valuations lie on hyperplanes tangent to the set of variant valuations. Except for a few very specific classes of problems, such as linear ones, this formulation excludes some efficient variants from the outset: they cannot be derived by the weighted linear function and are thus a priori removed from the pool of candidates for the most preferred variant.
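The exclusion is easy to see numerically. In the hypothetical valuation set below, (8, 4) is efficient but lies inside the convex hull of the other two points, so no choice of positive weights ever makes it the weighted-sum winner; the weight vectors tried here are illustrative, but the conclusion holds for all positive weights:

    # The weighted linear function never selects the non-supported
    # efficient valuation (8, 4), whatever positive weights we try.
    points = [(9, 3), (8, 4), (7, 6)]
    for w1, w2 in [(1, 1), (1, 2), (2, 1), (1, 5), (5, 1)]:
        best = max(points, key=lambda p: w1 * p[0] + w2 * p[1])
        print((w1, w2), "->", best)  # (8, 4) is never the winner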

Figure 2: The case where the compromise half line q misses an infinite set of variant valuations (and hence misses the set of efficient variant valuations); the contour of the weighted max function at the valuation of an efficient variant is shown (dashed line).

Valuation of Variants

To avoid a priori exclusion of some efficient variants from consideration, we have to use another function to express the valuation of the variants we are considering. The weighted max function is a suitable choice. Rather than using the weighted linear objective function, which imposes a linear structure, we use a more general combination of the proposed concessions. Once the vector of concessions t is provided by the DM, we compute the weights of the max function so as to move along the compromise half line, shown as q in the figures, descending from the ideal valuation – or as close to the compromise half line as possible if it contains no valuations of efficient variants.

Then, taking the minimum of the weighted max function with these weights over the set of admissible variants yields an efficient variant for which (a numerical sketch follows Figure 3):

– if the compromise half line crosses the set of efficient variant valuations, then the valuation of that variant lies on the compromise half line (Figure 1).

– otherwise, the valuation of that variant lies off the compromise half line (Figures 2 and 3).

Figure 3: The case where the compromise half line q misses a finite set of variant valuations (and hence misses the set of efficient variant valuations); the contour of the weighted max function at the derived valuation is shown (dashed line).
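Here is a minimal sketch of this selection step on the hypothetical valuations used above. Taking the weights as reciprocals of the concessions is one common way to steer along q; the toolbox’s exact scalarization may differ (augmented variants add a small correction term we omit here):

    # Minimize the largest weighted shortfall from the ideal valuation.
    # With weights 1/t_k, the minimizer lands on the compromise half line q
    # whenever q crosses the attainable valuations.
    ideal, t = (9, 6), (1, 2)
    points = [(9, 3), (8, 4), (7, 6)]

    def weighted_max(p):
        return max((yi - pi) / tk for yi, pi, tk in zip(ideal, p, t))

    chosen = min(points, key=weighted_max)
    print(chosen)  # (8, 4) = ideal - 1*t, exactly on q -- the very point
                   # the weighted linear function could never reach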

Moreover, for any efficient variant there are weights for which taking the minimum of the weighted max function over the set of admissible variants yields this variant. Thus, no efficient variant is a priori excluded.
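To see the no-exclusion property at work, aim the half line at each efficient valuation in turn: take concessions proportional to that valuation’s shortfall from the ideal (a tiny epsilon keeps every component positive, since zero concessions are not allowed). A hedged sketch with the same hypothetical data:

    # Every efficient valuation is recoverable by a suitable choice of t.
    eps = 1e-6
    ideal, points = (9, 6), [(9, 3), (8, 4), (7, 6)]
    for target in points:
        t = [max(yi - pi, eps) for yi, pi in zip(ideal, target)]
        hit = min(points, key=lambda p: max((yi - pi) / tk
                                            for yi, pi, tk in zip(ideal, p, t)))
        print(target, "->", hit)  # each target is returned by its own t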

In contrast to the weighted linear function, this construct is general and hence works for any decision problem, irrespective of its properties such as linearity, differentiability, continuity, discreteness and so on.

Besides yielding a fair, egalitarian, efficient variant, the weighted max function has another merit. Namely, it is built on the differences between what is ideal and what is actually achievable; thus it frames the choice in terms of losses, i.e., forgone (though unachievable) opportunities. If Tversky and Kahneman [5] were right in concluding that people give more attention to losses than to gains, then the weighted max function is, again, a better decision-assisting construct than the weighted linear function, as DMs relate more readily to losses than to possible gains.

To summarize, when applying the tool, the DM sets his or her trial preferences and is provided with the respective variant reflecting these preferences – strictly or only approximately – as the nature of the problem dictates. The DM repeats this until he or she is satisfied.

Figure 4: The DM’s “playground” once the ideal valuation is derived.

There is not much decision-making support for the DM in this tool (which is by no means an algorithm), and this is the price for its generality. Typical MCDM/MCDA models offer much more than that, but if one goes for more, things get complicated. On the other hand, the tool enables a quick start for a beginner or a casual user. In any case, the tool can serve as a common language for multi-aspect perspectives across O.R. applications.

The Tool

Consider a decision problem with two criteria (the tool works for any number of them). The set of admissible variants could, for example, be composed of variants given explicitly, portfolios of indivisible items (e.g., portfolios of projects) or portfolios of divisible items (mixes of items in specific proportions). The whole decision process takes place in the space of valuations. After the ideal valuation is derived, and assuming that the ideal variant does not exist, the DM’s “playground” is as shown in Figure 4.

Figure 5: Two efficient valuations (circles) derived with the weighted max function (dashed lines represent contours) for: 1. vector t (defining q) provided directly; 2. vector t1 (defining q1) provided indirectly, via a trial point.

If the DM opts for either of the two valuations (and the corresponding admissible variants) derived to establish the ideal valuation, it means that he or she opts for no concession on the best value of the first or the second criterion. Otherwise, concessions on the best values are inevitable. The process for selecting the most preferred variant is an iterative procedure (it runs until the DM exclaims BINGO!). Here is one iteration of it, with a hypothetical DM’s behavior:

Derivation of an efficient valuation (and the corresponding variant)

A valuation of an efficient variant is derived (Figure 5).

Analysis

After considering the valuation, the DM decides that he or she would prefer to compromise less on criterion 1 than on criterion 2; this determines the slope of the compromise half line.

Compromising, option 1.

To satisfy this preference, the DM sets the first component of the vector of concessions t low relative to the second, reflecting more willingness to compromise on criterion 2 than on criterion 1.

Compromising, option 2.

The DM provides a trial (reference) valuation which reflects this revised preference.

Go back to the Derivation step.
Repeat until the DM is satisfied.
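Putting the pieces together, here is a hedged Python sketch of the loop, with a scripted stand-in for the DM’s choices of t (the data and concession vectors are hypothetical); in practice, each concession vector would come from a person reacting to the previous proposal:

    # One proposal per trial concession vector (hypothetical data throughout)
    points = {"P1": (9, 3), "P2": (8, 4), "P3": (7, 6)}
    ideal = tuple(max(v[k] for v in points.values()) for k in range(2))

    def propose(t):
        # Derivation step: efficient valuation on (or nearest) the half line set by t
        score = lambda p: max((yi - pi) / tk for yi, pi, tk in zip(ideal, p, t))
        return min(points, key=lambda name: score(points[name]))

    for t in [(1, 2), (0.5, 3), (3, 0.5)]:   # scripted trial concessions
        print("t =", t, "->", propose(t))    # DM inspects, revises t, or says BINGO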

Conclusions


New research has produced a short, readily usable guide to a tool that makes MCDM much more accessible to practitioners. The availability of this tool can enable a revolutionary change in the support of decision-making: letting the decision-maker interactively choose desired trade-offs among competing objectives and presenting him or her with the corresponding actions, subject to realism about what is feasible. This approach is much closer than traditional single-criterion optimization to the way most decision-makers actually prefer to make choices. It also neatly avoids the common phenomenon of developing models that are wrong because they fail to address trade-offs and competing objectives in the appropriate way. Learning this new approach promises to yield substantial benefits for both researchers and practitioners – and their clients.

Ignacy Kaliszewski (ignacy.kaliszewski@ibspan.waw.pl) is a full professor at the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland, where he earned his Ph.D. and Habilitation (the second-level scientific degree). He has extensive consulting experience along with his distinguished record of academic research.

Douglas A. Samuelson, D.Sc., is president and chief scientist of InfoLogix, Inc., a small R&D and consulting company in Annandale, Va. He is a contributing editor of OR/MS Today.
References

  1. Kuhn, H.W., Tucker, A.W., 1951, “Nonlinear Programming,” Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pp. 481-492, University of California Press, Berkeley, Calif.
  2. Kaliszewski, I., Miroforidis, J., Podkopaev, D., 2016, “Multiple Criteria Decision Making by Multiobjective Optimization: A Toolbox,” Springer.
  3. Kaliszewski, I., 2006, “Soft Computing for Complex Multiple Criteria Decision Making,” Springer.
  4. Simon, H.A., 1977, “The New Science of Management Decision,” Prentice-Hall, New Jersey.
  5. Tversky, A., Kahneman, D., 1974, “Judgment under Uncertainty: Heuristics and Biases,” Science, Vol. 185, No. 4157, pp. 1124-1131.