Voodoo Decision-Making Under Severe Uncertainty
Last modified: Friday, 30-Dec-2011 13:11:28 MST
This page is the nerve center of my campaign to contain the rise and rise of voodoo decision theories in Australia, and elsewhere. It is related to my Info-Gap Campaign.
Before I turn to the technical aspects of the topic, let me explain what I mean by the term "voodoo decision theory". This is important because some of my colleagues take great umbrage at my labeling Info-gap decision theory a "voodoo decision theory".
According to the Encarta online encyclopedia, the term "voodoo" can denote:
- A religion practiced throughout Caribbean countries, especially Haiti, that is a combination of Roman Catholic rituals and animistic beliefs of Dahomean enslaved laborers, involving magic communication with ancestors.
- Somebody who practices voodoo.
- A charm, spell, or fetish regarded by those who practice voodoo as having magical powers.
- A belief, theory, or method that lacks sufficient evidence or proof.
My usage of the term "voodoo decision theory" is in line with the last meaning listed above. Roughly, then, in this discussion voodoo decision-making is a decision-making process that is guided and/or inspired by a voodoo decision theory: a theory that lacks sufficient evidence or proof, and/or is based on utterly unrealistic or contradictory assumptions, spurious correlations, and so on.
This reading is also in line with the widely used terms Voodoo Economics, Voodoo Science and Voodoo Mathematics.
I should point out, though, that the term "Voodoo Decision Theory" is not my coinage (what a pity!):

"The behavior of Kropotkin's cooperators is something like that of decision makers using the Jeffrey expected utility model in the Max and Moritz situation. Are ground squirrels and vampires using voodoo decision theory?"

Brian Skyrms (1996, p. 51)
Evolution of the Social Contract
Cambridge University Press.
To illustrate a voodoo decision theory in action, consider this.
Example

Suppose that your task is to determine how a given function, f = f(x), behaves on the interval X = [-1000,1000]. For instance, assume that the issue is the constraint f(x) ≥ 0. That is, assume that you want to know how robust this constraint is over the interval X = [-1000,1000].
Also, assume that evaluating function f on X is difficult and/or costly.
Then, a voodoo decision theory would come to the rescue as follows: instead of examining the constraint f(x) ≥ 0 over X, it would prescribe testing it only over a small subset of X, say X'=[-1,1].
Now suppose that the constraint f(x) ≥ 0 performs well on X'=[-1,1]. What can we say about the performance of this constraint on X=[-1000,1000]?
Well, if you espouse voodoo decision theory, you would argue that the performance of f(x) ≥ 0 on X'=[-1,1] provides a good indication of the performance of f(x) ≥ 0 on X=[-1000,1000], and therefore the constraint f(x) ≥ 0 performs well on X. In other words, you would argue that the performance on X'=[-1,1] is representative of the performance on X=[-1000,1000].
You can save a lot of $$$$$$$ this way: instead of evaluating the performance of a system over the required large space, you quickly evaluate its performance only on a relatively small subset of the required space.
However, seeing through the nonsensical argument made by voodoo decision theory, you would argue that this is absurd because:

- X'=[-1,1] constitutes a tiny part of X=[-1000,1000], in fact only 0.1 percent of it.
- All the points in X' are in the same neighborhood.
- Therefore X' is not representative of X insofar as the performance of f(x) ≥ 0 is concerned.
- Therefore, hardly anything can be deduced about the performance of f(x) ≥ 0 on X from the performance of f(x) ≥ 0 on X'.
- All we can say is that f(x) ≥ 0 performs well on X'.

Got the drift?
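To see how little the small-subset test can tell us, here is a minimal sketch. The function f(x) = 1 - (x/2)^2 is a hypothetical stand-in of my own choosing, not from any particular application: it satisfies the constraint everywhere on X' = [-1,1], yet violates it on the overwhelming majority of X = [-1000,1000].

```python
# Hypothetical performance function (my choice, for illustration only):
# f(x) = 1 - (x/2)**2 is non-negative on [-2, 2] and negative elsewhere.
def f(x):
    return 1.0 - (x / 2.0) ** 2

def constraint_holds_on(points):
    """Does the constraint f(x) >= 0 hold at every sampled point?"""
    return all(f(x) >= 0.0 for x in points)

def grid(lo, hi, n=2001):
    """n equally spaced sample points on [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

# The voodoo shortcut: test only the small subset X' = [-1, 1].
on_small = constraint_holds_on(grid(-1.0, 1.0))        # True
# The honest check: test the full interval X = [-1000, 1000].
on_full = constraint_holds_on(grid(-1000.0, 1000.0))   # False
```

The two verdicts disagree, which is precisely the point: passing on X' licenses no conclusion whatsoever about X.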
Of course, some readers may question the significance of this example, arguing that it is hyperbolic. After all, who would so much as contemplate suggesting that X'=[-1,1] is representative of X=[-1000,1000] with regard to the constraint f(x) ≥ 0 (unless f has some very special properties)?
My answer to this is that, as we shall see, this example is not an exaggeration. It is implicit in the type of argument used by experienced senior analysts in academia and in business/industry (e.g. banks) to justify the application of the methodologies that they propose/develop.
Here I consider a special case where the decision-making environment is subject to severe uncertainty and where voodoo type thinking is used to tackle/manage the severity of the uncertainty.
As you would no doubt gather from the above example, a voodoo decision theory will tackle severe uncertainty by ... ignoring the severity of the uncertainty. For instance, it would prescribe that the severe uncertainty be tackled through an analysis of the immediate neighborhood of a wild guess of the parameter of interest whose true value is subject to severe uncertainty.
It is important, therefore, that you read the following carefully:
Read Me First
The discussion here is not a practical joke!
Scientists, analysts, and decision makers do resort (sometimes) -- often unwittingly -- to voodoo decision theories. Therefore, if you are an analyst, it is important that you be able to identify voodoo decision theories as this can be of great help on your job!
A good way to guard against mishaps in this area is to always keep in mind the Universal GIGO Axiom:

Garbage In ---> Garbage Out
Specifically, when someone proposes that you use a theory that can generate meaningful / useful results that are based on a very poor estimate that is likely to be substantially wrong, recall the following corollary of the GIGO Axiom:
GIGO Corollary: The results of an analysis are only as good as the estimates on which they are based.
The strains one has to endure when doing science are due to these and similar precepts. The bottom line is that we have to justify our theories.
But, the point about voodoo decision theories is that they take no notice of such precepts. They openly violate the GIGO Axiom and its many corollaries without so much as giving the slightest consideration to the consequences. They promise meaningful/useful/reliable results even though their analyses are knowingly based on poor estimates that are likely to be substantially wrong.
And the most interesting thing here is that some analysts justify the use of voodoo decision theories by appealing to the following excuse:
But this is the best estimate we have!?!
If you do not appreciate how unacceptable this argument is, I strongly recommend that you read my criticism of Info-Gap decision theory and my contribution to the WIKIPEDIA article on Info-Gap Decision Theory. My paper The Mighty Maximin! is also relevant here.
Recall that classical decision theory distinguishes between three levels of knowledge pertaining to a state-of-affairs, namely Certainty, Risk, and Uncertainty.
The "Risk" category refers to situations where the uncertainty can be quantified by standard probabilistic constructs such as probability distributions.
In contrast, the "Uncertainty" category refers to situations where our knowledge about the unknown parameter under consideration is so meager that we cannot quantify the uncertainty even by means of an "objective" probability distribution.
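To make the Risk/Uncertainty distinction concrete, here is a toy sketch (the payoff numbers are invented for illustration). Under "Risk" a known distribution over the states of nature permits ranking decisions by expected payoff; under "Uncertainty" no such distribution is available, and a conservative rule such as Wald's maximin ranks decisions by their worst-case payoffs instead.

```python
# Toy payoff table (numbers invented for illustration).
# Rows: decisions; columns: payoffs under three possible states of nature.
payoffs = {
    "a1": [10, 2, 0],
    "a2": [4, 4, 3],
}

# RISK: the probabilities of the states are known, so rank by expected payoff.
probs = [0.5, 0.3, 0.2]
expected = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
best_under_risk = max(expected, key=expected.get)   # "a1" (5.6 beats 3.8)

# UNCERTAINTY: no distribution is available, so Wald's maximin rule ranks
# each decision by its worst-case payoff and picks the best of the worst.
worst_case = {d: min(row) for d, row in payoffs.items()}
best_maximin = max(worst_case, key=worst_case.get)  # "a2" (worst case 3 beats 0)
```

Note that the two regimes can recommend different decisions from the same table: the expected-value analysis needs the probabilities, and once they are unavailable the maximin analysis uses only the payoffs themselves.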
It is clear then that, according to this classification, "Uncertainty" eludes "measuring". It is simply impossible to provide a means by which we would "measure" the level, or degree, of "Uncertainty" to thereby indicate how great or daunting it is. To make up for this difficulty, a tradition has developed whereby the level, or degree, of "Uncertainty" is captured descriptively, that is informally, through the use of "labels" such as these:
- Strict uncertainty
- Severe uncertainty
- Extreme uncertainty
- Substantial uncertainty
- Deep uncertainty
- Essential uncertainty
- Hard uncertainty
- High uncertainty
- Complete uncertainty
- True uncertainty
- Fundamental uncertainty
- Wild uncertainty
- Knightian uncertainty
- True Knightian uncertainty
- Severe Knightian uncertainty
In this discussion I prefer to use the term "Severe Uncertainty". I understand "Severe Uncertainty" to connote a state-of-affairs where uncertainty obtains with regard to the true value of a parameter of interest. That is, the true value of this parameter is unknown and the estimate we have of this true value is:
- A wild guess.
- A poor indication of the true value of the parameter of interest.
- Likely to be substantially wrong.
Some analysts even claim that the estimate can be based, among other things, on:
- Gut feeling
- Rumors
So, the point here is that decision-making under severe uncertainty deals with extreme situations where the estimates we have can be based on no more than ... rumors.
Needless to say, this is a formidable challenge, especially if we are expected to provide robust decisions.
The following example illustrates a typical situation of severe uncertainty.
We want to deliver a personal note to a young kangaroo, known to his friends as Jack. Unfortunately, we do not know Jack's exact whereabouts. All we know is that he was last seen in a huge game reserve somewhere in Australia.
In case you do not know Jack, he is the one on the left in the picture shown on the right hand side of the page. We suspect that this picture was taken about a year ago, but we are not sure. His passport photo, taken 3 years ago, is shown on the left.
We do not know where these pictures were taken.
Actually, some even argue that these are not Jack's pictures, but never mind.
In short, we do not know Jack's exact location. All we have is a very poor estimate of the location, a kind of wild guess. This estimate is likely to be substantially wrong.
But, no worries!
Using voodoo decision theory we can easily handle the severe uncertainty pertaining to Jack's true location. For, its recipe is simplicity itself: ignore the severe uncertainty in Jack's location and regard the estimate we have as good. In fact, we can assume that the estimate is so good that we can restrict the search to its immediate neighborhood.
More generally, to explain how the method of voodoo decision-making tackles severe uncertainty of this type, it is instructive to look at decision-making under severe uncertainty from two perspectives. Hence, the following comparison:
- Reality
- Voodoo Reality

The picture is then as follows:

Reality: This picture shows the poor estimate we have as well as a fictitious true value. The point of displaying the (unknown) true value is to drive home the fact that under severe uncertainty the estimate is a poor indicator of the true value and is likely to be substantially wrong.

Voodoo Reality: This picture illustrates how voodoo decision theory views the situation under consideration. The point to note here is that it gives not the slightest thought to what severe uncertainty actually entails. This is expressed (in the picture) in the glaring absence of all reference to the true value. Instead, the uncertainty is confined to the immediate neighborhood of the poor estimate. So, the net result necessarily is that no account whatsoever is given of the severe uncertainty which, of course, pertains to the entire space.
The main objective of this picture is to drive home the fact that voodoo decision theory prescribes decision-making that is based solely on an analysis that is confined to the immediate neighborhood of a single poor estimate of a parameter in question - an estimate that is likely to be substantially wrong.
Practically speaking, one is expected to ignore completely the severe uncertainty pertaining to the situation and to pretend that the estimate one has is so good that it is sufficient to conduct the analysis in its immediate neighborhood.
Practice what you preach
Of course it is hard to see how such a self-contradictory methodology that clearly makes no sense can be sold. So for marketing purposes it may have to be dressed up with jargon and rhetoric.
Here is an outline of a possible explanation of the logical progression that leads from the statement of the problem to the Voodoo Reality associated with it.
Display the region of severe uncertainty in its full glory. The larger the better! And do not forget to use impressive colors.
In fact, you may want to stress that usually the region of uncertainty is unbounded.
Display the true value (location) of the (unknown) parameter of interest. Of course, you do not know where this point is, but display it anyhow as an indication that you mean business, big business!
To impress your client, display the location of the estimate you have somewhere in the region of uncertainty. As a reminder that this is decision-making under severe uncertainty, make sure to place the estimate not too close to the true value.
This is the right moment to advise your client that the situation is very difficult precisely because under severe uncertainty the estimate we have is poor and is likely to be substantially wrong.
To stress the consequences of the fact that the estimate you have is poor and is likely to be substantially wrong, display the immediate neighborhood of the estimate. Explain to the client that under severe uncertainty it is not good enough to confine the analysis to this neighborhood.
The picture looks great. It vividly displays the difficulty associated with decision-making under severe uncertainty: it does not make much sense to base the decision on an analysis conducted on the immediate neighborhood of a poor estimate.
To deal with this difficult dilemma, and since we plan to conduct the analysis in the immediate neighborhood of the poor estimate, it would be best at this point to delete any reference to the true value of the parameter of interest.
So, drop it from the picture.
You must admit that the picture looks much better now!
In fact, following this logic it would be best at this point to delete any reference to the severity of the uncertainty. So, drop the header from the picture!
It is embarrassing to show that our analysis boils down to an investigation of the immediate neighborhood of a poor estimate, so it would be best to drop the qualifier "poor".
Last but not least, we must get rid of the black background color. It is a grim reminder of the embarrassing fact that we in fact completely ignore the severity of the uncertainty under consideration.
So, invert the color scheme.
Of course, you should feel free to improvise as you go along and to use as many buzzwords as you can for extra effect.
To make a long story short, here is the essence of the method. It is a play in two Acts:
- Act 1: Ignore the severe uncertainty pertaining to the situation.
Focus instead on the poor estimate you have, even though this estimate is a wild guess and is likely to be substantially wrong.
- Act 2: Examine the immediate neighborhood of the estimate.
To play it safe do not make a big issue of the fact that the estimate is a wild guess, a poor indication of the true value of the parameter of interest that is likely to be substantially wrong. In case this issue is raised, simply say: Sorry, this is the best estimate we have!
Observe that this outline is quite flexible given that it does not specify in detail the methods and techniques to be used to:
- Determine the value of the estimate of the parameter of interest.
- Determine the size of the neighborhood around the estimate on which the analysis is conducted.
- Conduct the examination of this neighborhood.
In short, there is plenty of scope here for creative thinking.
Info-Gap decision theory
In the framework discussed above, info-gap decision theory is a classic example of voodoo decision-making. For, while it acknowledges that, under severe uncertainty, the estimate we have is essentially a wild guess, the theory deploys a robustness model that explores only the immediate neighborhood of that estimate.
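The local character of the analysis can be sketched in a few lines. In one common textbook form, the robustness of a decision is the largest horizon of uncertainty α such that the performance requirement holds for every u in the neighborhood U(α, û) of the estimate û. The sketch below is a brute-force illustration, assuming the simplest interval-type model U(α, û) = [û - α, û + α] and a hypothetical performance function of my own choosing; note that the estimate û is the only point at which the uncertainty space enters the computation.

```python
def robustness(perf, u_hat, r_crit, alpha_max=100.0, steps=10000, samples=201):
    """Largest alpha such that perf(u) >= r_crit for EVERY u in
    [u_hat - alpha, u_hat + alpha]: a brute-force sketch of an
    interval-type robustness model."""
    best = 0.0
    for i in range(1, steps + 1):
        alpha = alpha_max * i / steps
        lo, hi = u_hat - alpha, u_hat + alpha
        pts = [lo + (hi - lo) * j / (samples - 1) for j in range(samples)]
        if all(perf(u) >= r_crit for u in pts):
            best = alpha        # requirement holds on the whole neighborhood
        else:
            break               # first violation: stop enlarging the horizon
    return best

# Hypothetical performance function and wild-guess estimate (illustration only):
# the requirement perf(u) >= 0 holds exactly for |u| <= 5.
perf = lambda u: 1.0 - abs(u) / 5.0
alpha_hat = robustness(perf, u_hat=0.0, r_crit=0.0)   # -> 5.0
```

The horizon grows on a grid of size alpha_max/steps, so the answer is only as fine as that grid; the point, though, is structural: the full stipulated region of uncertainty appears nowhere in the computation, only the neighborhood of u_hat.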
The flaw in this theory is so glaring that it can be easily described pictorially. So, here is the tale of the "Treasure Hunt" that narrates Info-Gap's fundamental flaw.
- The island represents the stipulated region of uncertainty (the region where the treasure is located).
- The tiny black dot represents a wild guess of the parameter of interest (location of the treasure).
- The large white circle represents the region of uncertainty affecting info-gap's robustness analysis.
- The small white square represents the true (unknown) value of the parameter of interest.
Clearly then, under severe uncertainty Info-Gap may conduct its robustness analysis in the vicinity of Brisbane, whereas for all we know the true location of the treasure may be somewhere in the middle of the Simpson desert or perhaps in downtown Melbourne. Perhaps.
The fundamental question is this:

How can a theory claim that it seeks robust solutions under severe uncertainty when its robustness model ignores the stipulated region of uncertainty and instead focuses exclusively on the immediate neighborhood of a wild guess?
Well, in voodoo-land, questions like this are completely irrelevant.
And if you persist in asking this question, the answer you get is this:
But this is the best estimate we have, mate!?
So it is important to point out that the issue here is not the poor quality of the estimate. Indeed, in many cases the estimates we have are poor and their quality cannot be improved.
Rather, the issue is: what should be done given this state-of-affairs?
For example, the fact that the estimate is bad should not prevent you from -- indeed should encourage you to -- sample the region of uncertainty under consideration with a view to incorporate in the analysis a representative sample of possible values of the parameter of interest.
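The sampling suggestion can be sketched as follows (the constraint, region, and estimate are hypothetical placeholders of my own choosing): probing only the estimate's immediate neighborhood reports a reassuring 100% pass rate, while a sample spread across the whole stipulated region exposes how atypical that neighborhood is.

```python
import random

random.seed(0)  # deterministic sketch

def constraint_ok(u):
    # Hypothetical constraint (illustration only): satisfied iff |u| <= 5.
    return abs(u) <= 5.0

region = (-1000.0, 1000.0)   # the full stipulated region of uncertainty
u_hat = 0.0                  # the (possibly wildly wrong) point estimate

# Local check, confined to the estimate's immediate neighborhood: reassuring.
local = [u_hat + random.uniform(-1.0, 1.0) for _ in range(1000)]
local_rate = sum(constraint_ok(u) for u in local) / len(local)   # 1.0

# Representative sample of the WHOLE region: a very different story
# (roughly 0.005 in expectation, since only 10/2000 of the region qualifies).
wide = [random.uniform(*region) for _ in range(1000)]
wide_rate = sum(constraint_ok(u) for u in wide) / len(wide)
```

The wide sample does not resolve the severe uncertainty, of course, but unlike the local probe it at least confronts the stipulated region instead of ignoring it.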
But suppose that this cannot be done. What then?
Whatever is done, if one has to settle for an analysis that is based exclusively on a poor estimate of the true value of the parameter of interest, then it must be stated openly that the quality of the results of this analysis is on a par with that of the estimate, namely: "poor".
For, an analysis earns the honorary title "voodoo" precisely when no such warning is attached to the results. Such an analysis effectively misleads the public into believing that the results are reliable, especially when it is also claimed that the analysis, or methodology, is reliable, or that the decisions it yields are responsible.
And so, Info-Gap decision theory earns the honorary title voodoo decision theory not only because it instructs using a poor estimate as the fulcrum of the analysis. It earns this honorary title precisely because its rhetoric gives the false impression that the results yielded by this analysis are reliable, or that the methodology itself is reliable. For instance, that a "rumor" can generate a "reliable" output.
The picture speaks for itself:

rumor ----> Info-Gap Robustness Model ----> reliable result

No Man's Land          rumor          No Man's Land
<----------------- Complete region of uncertainty ----------------->

Isn't this too good to be true?
Of course, adherents and advocates of this theory would strongly protest against labeling this theory a voodoo decision theory even though this designation is based on a formal analysis of the theory (e.g. see my discussion on info-gap and the references therein).
Some may contend that although my criticism is valid from a purely mathematical point of view, still info-gap has a role to play, and can help, in practice. In other words, the argument is that we have to distinguish between the theoretical and practical aspects of voodoo decision theories.
Others may even claim that my criticism is unfair and hyperbolic. For instance:
Sniedovich has stridently criticized info-gap theory [2; cf. 3]. Some of this criticism is unfair and hyperbolic (e.g., calling it "voodoo decision-making"), and some of it seems to miss the message and intent of the info-gap approach (e.g., whether its solutions are robust, and the sense in which the analysis is local). But some of the criticism is fair and germane. For instance, Sniedovich points out formal similarities with Wald's maximin decision theory that suggest info-gap decision models are really instances of classical models [cf. 4]. He also complains that info-gaps cannot really express "severe uncertainty". This seems correct if one means by this phrase uncertainty beyond the formulation of the system on which info-gap models are based. Despite these criticisms, info-gap theory has been heartily embraced in several quarters, and the literature on the subject is exploding [see 3].

Ferson, S. and Tucker, W.T. (2008)
Probability boxes as info-gap models
Proceedings of NAFIPS 2008, pp.1-6
Fuzzy Information Processing Society.
In case you wonder how Ferson and Tucker (2008) substantiate their claims, particularly the claim that labeling info-gap a voodoo decision theory is unfair and hyperbolic, the answer is simple.
They do not. They do not have to.
You see, this is how scientific discussions upholding voodoo theories are conducted. Claims are not sought to be substantiated. Assertions, allegations, conclusions, suffice.
It must be admitted though that this gives voodoo decision theories a major advantage over the traditional scientific approaches to problem solving. This may also explain the popularity of some voodoo decision theories.
Which brings us to one of the most important characteristics of voodoo theories, a characteristic that, it would seem, explains why such theories find an audience at all.
Rhetoric and Spin
If there is one thing that voodoo theories truly excel in it is rhetoric or, to use the more current term: "spin".
The true forte of voodoo theories is the verbose narratives that they use to describe their subject matter, their objectives, their capabilities etc. These narratives usually abound with grand declarations and promises that can give the impression that these theories indeed offer solutions to the most difficult (even intractable) problems.
Of course when this bombastic narrative is stripped away, you discover that the "methodologies" proposed to solve the problems, that these theories are presumably concerned with, have got nothing in common with the verbiage.
For it is typical of such theories that the "methodologies" they propose are utterly unable to deliver the promised goods and/or services. It is typical of such theories that the mathematical models they offer are loosely formulated, that the interpretation of the relation between the models' components and those of the problems concerned is incorrect, that the prescriptions providing concrete tools for solving the problems in question are totally unsuitable for the task, and so on and so forth.
All of this is manifest in the rhetoric that is used to promote such theories.
For example, consider this:
Making Responsible Decisions (When it Seems that You Can't)
Engineering Design and Strategic Planning Under Severe Uncertainty
What happens when the uncertainties facing a decision maker are so severe that the assumptions in conventional methods based on probabilistic decision analysis are untenable? Jim Hall and Yakov Ben-Haim describe how the challenges of really severe uncertainties in domains as diverse as climate change, protection against terrorism and financial markets are stimulating the development of quantified theories of robust decision making.
When you read the full paper you discover that the silver bullet is ... info-gap decision theory. In other words, the paper's thesis is that responsible decision-making, in the face of severe uncertainty, can be accomplished by ... conducting a robustness analysis in the immediate neighborhood of a wild guess.
Also, using this voodoo decision theory, you can generate reliable forecasts of, say, the local and international share markets, and, of course, of GOLD!
In the words of Cole Porter (1891-1964), Anything Goes!
And talking about the share and gold markets. How about a proposal for formulating a policy yielding Confidence in Monetary Policies?
"... We adopt the technique of Info-Gap Robust Satisficing to first define confidence under Knightian uncertainty, and second quantify the trade-off between quality and robustness explicitly. We apply this to a standard monetary policy example and provide Central Banks with a framework to rank policies in a way that will allow them to pick the one that either maximizes confidence given an acceptable level of performance, or alternatively, optimizes performance for a given level of confidence. ..."
Ben-Haim and Demertzis (2008, p. 1)
Confidence in Monetary Policy
DNB Working Paper No. 192, December 2008
So, the idea here is to formulate a robust monetary policy for Central Banks based on an analysis that is conducted in the neighborhood of a wild guess of the parameter of interest!
Will you have confidence in such a monetary policy?
And how about your health? Yes, your health!
" ... Info-gap excels in conditions where the data are sparse, uncertain, and the result of failure is catastrophe. Standard clinical decision making strategies do not protect patients against errors in the clinical data. However, because info-gap is not probability-based, it provides a strong foundation for making decisions when there is not enough information -- such as in the highly individualized world of the clinic. Info-gap also allows "real-time" decision making and rapid assessment, thus allowing for targeted answers without wasted time. ..."
If you are of the opinion that a theory based on an analysis of the neighborhood of a wild guess provides a "... strong foundation for making decisions when there is not enough information ..." then I suggest that you do some reading on Decision-Making Under Severe Uncertainty, Robust Optimization, and Black Swans. Particularly, make sure to examine carefully the GIGO issue.
Furthermore, take a close look at the following (emphasis is mine):
2. Managing uncertainty
Decisions depend on the best estimates of past performance, assessments of the current situation and visions into the future. Where the past performance may be known, the current is clouded by its immediacy and the future is a best guess. The robustness of any decision and the risk incurred in making that decision is only as good as the estimates on which it is based. Making estimation even more challenging, virtually all estimates that affect decisions are uncertain. Uncertainty can not be eliminated, but it can be managed.

Top Ten Challenges for Making Robust Decisions
The Decision Expert Newsletter™ -- Volume 1; Issue 2
And, of course, you might also be interested in:
'... Finally, a reliability theory is only as good as the information upon which it rests. A reliability theory should exploit all relevant verified information, but should treat speculative information and "reasonable assumptions" with caution. ...'

Ben-Haim (1996, p. 206)
Robust Reliability in the Mechanical Sciences
Springer, Berlin.
I urge all those in health-care organizations who actually use Info-Gap decision theory to browse through my compilation of FAQs about Info-Gap to understand why this theory is a voodoo decision theory par excellence.
And as an aside regarding "individualizing/personalizing" health-care (emphasis is mine):

BTS is based on Bayesian decision theory. It offers a well-founded computational theory for applying general knowledge to individual situations that are characterized by varying degrees of uncertainty and risk.
As classical statistics revolutionized the discovery of knowledge in the early 20th century, Bayesian decision theory is revolutionizing the application of knowledge in the 21st. This revolution is already underway:
- Most major spam tools are based on Bayesian methods; as a classic example involving uncertainty, they give the software the ability to learn which mail is spam and which isn't.
- All speech recognition tools are Bayesian.
- Medical diagnosis is increasingly relying on Bayesian methods.
- Counter-terrorism is using Bayesian methods to determine who the bad guys are.
- Robotics and navigation use Bayesian methods to navigate their environment.

What makes Bayesian methods hot stuff for making the most Robust Decisions?
The Decision Expert Newsletter™ -- Volume 2; Issue 7
Just in case: I have been a reformed Bayesian for over thirty years now.
More on this and related topics can be found in the pages of the Worst-Case Analysis / Maximin Campaign, Severe Uncertainty, and the Info-Gap Campaign.
Also, see my complete list of articles
- Sniedovich, M. (2012) Fooled by local robustness, Risk Analysis, in press.
- Sniedovich, M. (2012) Black swans, new Nostradamuses, voodoo decision theories and the science of decision-making in the face of severe uncertainty, International Transactions in Operational Research, in press.
- Sniedovich, M. (2011) A classic decision theoretic perspective on worst-case analysis, Applications of Mathematics, 56(5), 499-509.
- Sniedovich, M. (2011) Dynamic programming: introductory concepts, in Wiley Encyclopedia of Operations Research and Management Science (EORMS), Wiley.
- Caserta, M., Voss, S., Sniedovich, M. (2011) Applying the corridor method to a blocks relocation problem, OR Spectrum, 33(4), 815-929, 2011.
- Sniedovich, M. (2011) Dynamic Programming: Foundations and Principles, Second Edition, Taylor & Francis.
- Sniedovich, M. (2010) A bird's view of Info-Gap decision theory, Journal of Risk Finance, 11(3), 268-283.
- Sniedovich M. (2009) Modeling of robustness against severe uncertainty, pp. 33-42, Proceedings of the 10th International Symposium on Operational Research, SOR'09, Nova Gorica, Slovenia, September 23-25, 2009.
- Sniedovich M. (2009) A Critique of Info-Gap Robustness Model. In: Martorell et al. (eds), Safety, Reliability and Risk Analysis: Theory, Methods and Applications, pp. 2071-2079, Taylor and Francis Group, London.
- Sniedovich M. (2009) A Classical Decision Theoretic Perspective on Worst-Case Analysis, Working Paper No. MS-03-09, Department of Mathematics and Statistics, The University of Melbourne.(PDF File)
- Caserta, M., Voss, S., Sniedovich, M. (2008) The corridor method - A general solution concept with application to the blocks relocation problem. In: A. Bruzzone, F. Longo, Y. Merkuriev, G. Mirabelli and M.A. Piera (eds.), 11th International Workshop on Harbour, Maritime and Multimodal Logistics Modeling and Simulation, DIPTEM, Genova, 89-94.
- Sniedovich, M. (2008) FAQS about Info-Gap Decision Theory, Working Paper No. MS-12-08, Department of Mathematics and Statistics, The University of Melbourne, (PDF File)
- Sniedovich, M. (2008) A Call for the Reassessment of the Use and Promotion of Info-Gap Decision Theory in Australia (PDF File)
- Sniedovich, M. (2008) Info-Gap decision theory and the small applied world of environmental decision-making, Working Paper No. MS-11-08
This is a response to comments made by Mark Burgman on my criticism of Info-Gap (PDF file )
- Sniedovich, M. (2008) A call for the reassessment of Info-Gap decision theory, Decision Point, 24, 10.
- Sniedovich, M. (2008) From Shakespeare to Wald: modeling worst-case analysis in the face of severe uncertainty, Decision Point, 22, 8-9.
- Sniedovich, M. (2008) Wald's Maximin model: a treasure in disguise!, Journal of Risk Finance, 9(3), 287-291.
- Sniedovich, M. (2008) Anatomy of a Misguided Maximin formulation of Info-Gap's Robustness Model (PDF File)
In this paper I explain, again, the misconceptions that Info-Gap proponents seem to have regarding the relationship between Info-Gap's robustness model and Wald's Maximin model.
- Sniedovich. M. (2008) The Mighty Maximin! (PDF File)
This paper is dedicated to the modeling aspects of Maximin and robust optimization.
- Sniedovich, M. (2007) The art and science of modeling decision-making under severe uncertainty, Decision Making in Manufacturing and Services, 1-2, 111-136. (PDF File) .
- Sniedovich, M. (2007) Crystal-Clear Answers to Two FAQs about Info-Gap (PDF File)
In this paper I examine the two fundamental flaws in Info-Gap decision theory, and the flawed attempts to shrug off my criticism of Info-Gap decision theory.
- My reply (PDF File) to Ben-Haim's response to one of my papers. (April 22, 2007)
This is an exciting development!
- Ben-Haim's response confirms my assessment of Info-Gap. It is clear that Info-Gap is fundamentally flawed and therefore unsuitable for decision-making under severe uncertainty.
- Ben-Haim is not familiar with the fundamental concept of a point estimate. He does not realize that a function can be a point estimate of another function.
So when you read my papers, make sure that you do not misinterpret the notion of a point estimate. The phrase "A is a point estimate of B" simply means that A is an element of the same topological space that B belongs to. Thus, if B is, say, a probability density function and A is a point estimate of B, then A is a probability density function belonging to the same (assumed) set (family) of probability density functions.
Ben-Haim mistakenly assumes that a point estimate is a point in a Euclidean space, and therefore that a point estimate cannot be, say, a function. This is incredible!
- A formal proof that Info-Gap is Wald's Maximin Principle in disguise. (December 31, 2006)
This is a very short article entitled Eureka! Info-Gap is Worst Case (maximin) in Disguise! (PDF File)
It shows that Info-Gap is not a new theory but rather a simple instance of Wald's famous Maximin Principle dating back to 1945, which in turn goes back to von Neumann's work on Maximin problems in the context of Game Theory (1928).
- A proof that Info-Gap's uncertainty model is fundamentally flawed. (December 31, 2006)
This is a very short article entitled The Fundamental Flaw in Info-Gap's Uncertainty Model (PDF File).
It shows that because Info-Gap deploys a single point estimate under severe uncertainty, there is no reason to believe that the solutions it generates are likely to be robust.
- A math-free explanation of the flaw in Info-Gap. (December 31, 2006)
This is a very short article entitled The GAP in Info-Gap (PDF File).
It is a math-free version of the paper above. Read it if you are allergic to math.
- A long essay entitled What's Wrong with Info-Gap? An Operations Research Perspective (PDF File) (December 31, 2006).
This is a paper that I presented at the ASOR Recent Advances in Operations Research (PDF File) mini-conference (December 1, 2006, Melbourne, Australia).
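To make the local-robustness point concrete, here is a minimal numerical sketch of the kind of analysis criticized above. Everything in it (the function `robustness`, the example `f`, the region, the step size) is my own illustrative construction, not taken from any Info-Gap text: the robustness of the constraint f(x) ≥ 0 at a point estimate is taken to be the largest radius around that estimate on which the constraint holds, so the answer is driven entirely by where the estimate happens to sit.

```python
# Illustrative sketch (hypothetical names): local robustness of the
# constraint f(x) >= 0 at a point estimate x_hat over the region [lo, hi].
# The robustness is the largest radius a such that f(x) >= 0 for every x
# within distance a of x_hat (intersected with [lo, hi]).

def robustness(f, x_hat, lo=-1000.0, hi=1000.0, step=0.01):
    """Grow the radius until the constraint first fails at the boundary."""
    a = 0.0
    while True:
        left = max(lo, x_hat - a)
        right = min(hi, x_hat + a)
        # f below is monotone on each side of its peak, so checking the
        # interval's endpoints suffices for this example.
        if f(left) < 0 or f(right) < 0:
            return max(0.0, a - step)
        if left == lo and right == hi:
            return a  # the constraint holds on the whole region
        a += step

# Example: f is non-negative only on [-10, 10], a tiny part of [-1000, 1000].
f = lambda x: 1.0 - abs(x) / 10.0

print(robustness(f, x_hat=0.0))    # estimate near the truth: radius about 10
print(robustness(f, x_hat=500.0))  # estimate far off: radius 0
```

The two calls return roughly 10 and exactly 0: in either case the analysis is silent about all of X=[-1000,1000] outside the neighborhood of the estimate, which is precisely the gap the articles above point to. Under severe uncertainty there is no reason to trust that x_hat is anywhere near the true value, so the computed radius says little about robustness over X.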
If your organization is promoting Info-Gap, I suggest that you invite me for a seminar at your place. I promise to deliver a lively, informative, entertaining and convincing presentation explaining why it is not a good idea to use — let alone promote — Info-Gap as a decision-making tool.
Here is a list of relevant lectures/seminars that I have given on this topic in recent years.
- ASOR Recent Advances, Melbourne, Australia, November 16, 2011. Presentation: The Power of the (peer-reviewed) Word (PDF file).
- Alex Rubinov Memorial Lecture The Art, Science, and Joy of (mathematical) Decision-Making, November 7, 2011, The University of Ballarat. (PDF file).
- Black Swans, Modern Nostradamuses, Voodoo Decision Theories, and the Science of Decision-Making in the Face of Severe Uncertainty (PDF File).
(Invited tutorial, ALIO/INFORMS Conference, Buenos Aires, Argentina, July 6-9, 2010)
- A Critique of Info-Gap Decision Theory: From Voodoo Decision-Making to Voodoo Economics (PDF File).
(Recent Advances in OR, RMIT, Melbourne, Australia, November 25, 2009)
- Robust decision-making in the face of severe uncertainty (PDF File).
(GRIPS, Tokyo, Japan, October 16, 2009)
- Decision-making in the face of severe uncertainty (PDF File).
(KORDS'09 Conference, Vilnius, Lithuania, September 30 - October 3, 2009)
- Modeling robustness against severe uncertainty (PDF File).
(SOR'09 Conference, Nova Gorica, Slovenia, September 23-25, 2009)
- How do you recognize a Voodoo decision theory? (PDF File).
(School of Mathematical and Geospatial Sciences, RMIT, June 26, 2009)
- Black Swans, Modern Nostradamuses, Voodoo Decision Theories, Info-Gaps, and the Science of Decision-Making in the Face of Severe Uncertainty (PDF File).
(Department of Econometrics and Business Statistics, Monash University, May 8, 2009)
- The Rise and Rise of Voodoo Decision Theory.
ASOR Recent Advances, Deakin University, November 26, 2008. This presentation was based on the pages on my website (voodoo.moshe-online.com).
- Responsible Decision-Making in the Face of Severe Uncertainty (PDF File).
(Singapore Management University, Singapore, September 29, 2008)
- A Critique of Info-Gap's Robustness Model (PDF File).
(ESREL/SRA 2008 Conference, Valencia, Spain, September 22-25, 2008)
- Robust Decision-Making in the Face of Severe Uncertainty (PDF File).
(Technion, Haifa, Israel, September 15, 2008)
- The Art and Science of Robust Decision-Making (PDF File).
(AIRO 2008 Conference, Ischia, Italy, September 8-11, 2008)
- The Fundamental Flaws in Info-Gap Decision Theory (PDF File).
(CSIRO, Canberra, July 9, 2008)
- Responsible Decision-Making in the Face of Severe Uncertainty (PDF File).
(OR Conference, ADFA, Canberra, July 7-8, 2008)
- Responsible Decision-Making in the Face of Severe Uncertainty (PDF File).
(University of Sydney Seminar, May 16, 2008)
- Decision-Making Under Severe Uncertainty: An Australian, Operational Research Perspective (PDF File).
(ASOR National Conference, Melbourne, December 3-5, 2007)
- A Critique of Info-Gap (PDF File).
(SRA 2007 Conference, Hobart, August 20, 2007)
- What exactly is wrong with Info-Gap? A Decision Theoretic Perspective (PDF File).
(MS Colloquium, University of Melbourne, August 1, 2007)
- A Formal Look at Info-Gap Theory (PDF File).
(ORSUM Seminar, University of Melbourne, May 21, 2007)
- The Art and Science of Decision-Making Under Severe Uncertainty (PDF File).
(ACERA seminar, University of Melbourne, May 4, 2007)
- What exactly is Info-Gap? An OR perspective. (PDF File)
(ASOR Recent Advances in Operations Research mini-conference, December 1, 2006, Melbourne, Australia)
Disclaimer: This page, its contents and style, are the responsibility of the author (Moshe Sniedovich) and do not represent the views, policies or opinions of the organizations he is associated/affiliated with.