Monday, March 29, 2010

The Prospect Theory Problem

At its essence, the decision whether or not to implement information security policies and procedures is a bet on the odds. Any time you risk something of value on an outcome involving chance, you are gambling. No matter how you look at it, information security is, by definition, a gamble. This fact adds an interesting twist to the business decision-making process security decision makers go through when purchasing information security products and services.

Whether we realize it or not, we information security consultants frequently find ourselves outside the world of simple business logic and standard economics and more a part of the mysterious realm of game theory, prospect theory and probability transformations.

Let me explain.

In 1738, the Swiss mathematician Daniel Bernoulli wrote a paper entitled Exposition of a New Theory on the Measurement of Risk, which introduced a new idea: that the worth of a risky sum of money is relative, depending on the utility that money has for its recipient. In other words, an amount of money has less value to an already wealthy person than it has to a poor person. Using a mathematical function, Bernoulli theorized, one could correct the expected value for variables like risk aversion, risk premium and payout level. Bernoulli's paper was the first formalization of marginal utility, an idea that was widely accepted and continues to have broad application in economics even today.
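
Bernoulli's own correction used a logarithmic utility function. As a rough sketch of the intuition (the wealth levels below are arbitrary, chosen purely for illustration):

```python
import math

def log_utility(wealth):
    """Bernoulli-style utility: each extra dollar matters less as wealth grows."""
    return math.log(wealth)

# The same $1,000 windfall, received at two very different wealth levels.
for wealth in (10_000, 1_000_000):
    gain_in_utility = log_utility(wealth + 1_000) - log_utility(wealth)
    print(f"Wealth ${wealth:>9,}: utility gained from +$1,000 = {gain_in_utility:.5f}")

# The poorer recipient gains far more utility from the identical sum of money,
# which is the "marginal utility" correction Bernoulli proposed.
```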

Nearly two and a half centuries later, in 1979, two psychologists, Daniel Kahneman and Amos Tversky, expanded on marginal utility theory by conducting a series of experiments in Israel, at the University of Stockholm and at the University of Michigan on how the prospect of gaining versus losing money affects people's intuitive risk calculations. It was from these experiments that "Prospect Theory" developed. Prospect Theory differs from marginal utility theory in a number of important respects.

First, it replaces the notion of "utility" with "value." Whereas utility is usually defined only in terms of net wealth, value is defined in terms of gains and losses (deviations from a reference point). Moreover, they found that the value function for losses is significantly different from the value function for gains. In short, the loss of $X is always felt more strongly than the gain of $X.
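
A commonly cited functional form for this value function comes from Kahneman and Tversky's later work; the parameter values below are their published estimates and are used here only as an illustrative sketch of the asymmetry:

```python
def kt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory style value function: concave for gains,
    convex and steeper for losses (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain, loss = kt_value(1_000), kt_value(-1_000)
print(f"value of +$1,000: {gain:,.1f}")
print(f"value of -$1,000: {loss:,.1f}")   # roughly 2.25x larger in magnitude
```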

Kahneman and Tversky came to their conclusions by uncovering an interesting and shockingly consistent pattern that they referred to as the reflection effect.

In a nutshell, here’s what they did: Test subjects were offered two choices, the first involving a potential loss, and the second, a potential gain.

Scenario One - The test subject was asked to pick between:
Option A: A 100% chance of losing $3000 or
Option B: An 80% chance of losing $4000, and a 20% chance of losing nothing.

Scenario Two - Next, choose between:
Option C: A 100% chance of receiving $3000 or
Option D: An 80% chance of receiving $4000, and a 20% chance of receiving nothing.

What the study showed was that 92% of the subjects chose option B in the first scenario, while only 20% chose option D, the seemingly equivalent choice, in the second scenario. A similar pattern held across a wide range of prizes and probabilities, both positive and negative. This led Kahneman and Tversky to conclude that when decision problems involve not just possible gains but also possible losses, people's preferences over negative prospects are more often than not the inverse of their preferences over positive prospects. Simply put: human beings are risk-averse when it comes to potential gains, but for some reason we become risk-loving when faced with scenarios involving potential losses. Daniel Bernoulli didn't account for that back in the 18th century.
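
A quick back-of-the-envelope check makes the oddity explicit: in both scenarios, the majority choice is the one you would not pick on expected value alone. The probabilities, payoffs and response percentages below are the ones from the scenarios above.

```python
def expected_value(outcomes):
    """Expected dollar value of a gamble given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    "A (sure loss)":  [(1.0, -3_000)],
    "B (risky loss)": [(0.8, -4_000), (0.2, 0)],
    "C (sure gain)":  [(1.0,  3_000)],
    "D (risky gain)": [(0.8,  4_000), (0.2, 0)],
}

for name, gamble in options.items():
    print(f"{name}: expected value = {expected_value(gamble):+,.0f}")

# A (sure loss):  -3,000  <- better expected value, yet only 8% chose it
# B (risky loss): -3,200  <- worse expected value, yet 92% chose it
# C (sure gain):  +3,000  <- worse expected value, yet 80% chose it
# D (risky gain): +3,200  <- better expected value, yet only 20% chose it
```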

The challenge for the information security professional is how to manage the Prospect Theory problem: an organization (or a decision maker within the organization) that is normally fiscally cautious becomes risk-loving when discussing the potential impact of security failures. To some degree, regulatory compliance has forced large portions of the private sector to invest in risk management and mitigation activities whether they like it or not. However, many organizations still require convincing, and Prospect Theory tells us that selling the idea of risk aversion in a loss-framed scenario will not be easy.

One way to approach this problem is through framing. The framing effect demonstrates that people's choices can be shifted simply by re-describing the outcomes, without changing the actual utility or any of the facts. In other words, we transform what appears to be a potential loss into a potential gain. For example, ask yourself the following questions:

Scenario One:
An epidemic breaks out that is likely to kill 600 people if left untreated.

Treatment strategy A: 200 people will be saved.

Treatment strategy B: there is a 1/3 chance that all 600 people will be saved, and a 2/3 chance that nobody will be saved.

Which approach would you choose?

Scenario Two:
An epidemic breaks out that is likely to kill 600 people if left untreated.

Treatment strategy C: 400 people will die.

Treatment strategy D: there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.

Which approach would you choose?

If you’re like most people, you recommended Treatment strategy A in the first scenario. Most people (almost three quarters) prefer the certain positive outcome of saving 200 people to the risky but larger positive outcome of saving all 600.

In the second scenario, however, a similar proportion of people choose Treatment strategy D, accepting the risk of a larger negative outcome (all 600 people dying) for the chance of averting an otherwise certain negative outcome (400 people dying).

The fascinating thing about the two scenarios is that the information in both of them is identical. Saving 200 of the 600 people means that 400 die, so Treatment A is the same as Treatment C; likewise, Treatment B and Treatment D describe exactly the same gamble. The only difference is the presentation, the wording, and yet respondents consistently reach opposing conclusions in the two scenarios.
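
To make the equivalence concrete, here is a small sketch that restates the "lives lost" framing in terms of lives saved and confirms the gambles are identical; the probability and outcome figures come straight from the scenarios above.

```python
TOTAL = 600

# Each treatment expressed as (probability, number_saved) pairs.
gain_frame = {
    "A": [(1.0, 200)],
    "B": [(1/3, 600), (2/3, 0)],
}
loss_frame = {  # "X people will die" restated as people saved
    "C": [(1.0, TOTAL - 400)],
    "D": [(1/3, TOTAL - 0), (2/3, TOTAL - 600)],
}

print(gain_frame["A"] == loss_frame["C"])   # True: A and C are the same gamble
print(gain_frame["B"] == loss_frame["D"])   # True: B and D are the same gamble
```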

What this tells us is that we can lead our risk-loving clients through important decisions about risk by framing the outcomes in a way that satisfies their sense of value, and in turn convert the risk-loving into the risk-averse, which is what good security management is all about. By understanding the client’s mindset and framing our solution appropriately using Prospect Theory, we can dramatically increase the perceived value of information security services… even if the client never realizes we did it.

By Charles P. Braman, VP of Consulting
