21  Prospect theory exercises

21.1 Changes in probability

You are in a draw for $1 million. Consider the following four scenarios in which your chance of winning increases by 5 percentage points:

  A) From 0% to 5%
  B) From 5% to 10%
  C) From 50% to 55%
  D) From 95% to 100%

Most people report that scenarios A) and D) represent better news. Why?

There is strong experimental evidence that we overweight certain outcomes relative to near-certain outcomes. In this instance, we tend to overweight the shift to a certain win (scenario D) relative to shifts between intermediate probabilities (scenarios B and C). This overweighting of certainty is the flip side of overweighting low-probability events: moving from 95% to 100% eliminates a small chance of missing out that was receiving a disproportionately large decision weight. The same overweighting of low probabilities means that the shift providing an initial chance at the $1 million (scenario A) also produces a large change in decision weight.

The result is that scenarios A and D tend to be seen as better news than scenarios B and C.

The following diagram illustrates this. The horizontal axis shows the probability (p). The vertical axis shows the decision weight (\pi) applied to each probability. If people weighted outcomes linearly in their probabilities, as under expected utility theory, the dashed line would show how probabilities map to decision weights. Under prospect theory, people overweight small probabilities and underweight probabilities short of certainty. The solid line represents one functional mapping that captures these possibility and certainty effects.

From this diagram, you can then see the change in decision weight from each change in probability. The changes from 0% to 5% and from 95% to 100% represent much larger changes in decision weight than the changes in intermediate probabilities.
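To make this concrete, here is a short Python sketch of the change in decision weight for each scenario, using one common probability weighting function (the Tversky and Kahneman 1992 form). The functional form and the parameter gamma = 0.61 are assumptions for illustration only, not part of the exercise.

```python
# Sketch of the possibility and certainty effects using one common
# probability weighting function; gamma = 0.61 is an assumed parameter.

def weight(p, gamma=0.61):
    """Decision weight pi(p) under the Tversky-Kahneman (1992) weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

# Change in decision weight for each 5-percentage-point shift.
# The 0% -> 5% and 95% -> 100% shifts produce the largest changes in weight.
for low, high in [(0.00, 0.05), (0.05, 0.10), (0.50, 0.55), (0.95, 1.00)]:
    print(f"{low:.0%} -> {high:.0%}: change in weight = {weight(high) - weight(low):.3f}")
```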

21.2 Bitcoin

Edna, Ferdinand and Gretel each bought some Bitcoin at $50,000. The price rose to $80,000 and then dropped to $60,000, at which time they sold it.

All three are loss averse and have the following reference-dependent value function:

v(x)=\left\{\begin{matrix} x \quad &\textrm{where} \quad x \geq 0\\ 2x \quad &\textrm{where} \quad x < 0 \end{matrix}\right.

Edna uses the purchase price as her reference point. Ferdinand uses the peak price as his reference point. Gretel uses the sale price as her reference point.

What is the change in value for each person? Who is happiest?

Value for Edna:

\begin{align*} v(x)&=v(60-50) \\ &=10 \end{align*}

Value for Ferdinand:

\begin{align*} v(x)&=v(60-80) \\ &=2\times (-20) \\ &=-40 \end{align*}

Value for Gretel:

\begin{align*} v(x)&=v(60-60) \\ &=0 \end{align*}

Edna is happiest whereas Ferdinand is most disappointed. Both Edna and Gretel see the peak price as a foregone gain, whereas Ferdinand sees the failure to sell at the peak as a loss.
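A short Python sketch reproduces these three calculations, with prices in thousands of dollars; the function v mirrors the value function above.

```python
# Quick check of the three reference points (purchase 50, peak 80, sale 60,
# all in thousands of dollars).

def v(x):
    """Piecewise-linear value function with loss aversion coefficient 2."""
    return x if x >= 0 else 2 * x

sale = 60
for name, reference in [("Edna", 50), ("Ferdinand", 80), ("Gretel", 60)]:
    print(f"{name}: v({sale} - {reference}) = {v(sale - reference)}")
    # Edna: 10, Ferdinand: -40, Gretel: 0
```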

21.3 Reference points

Megan has the following reference-dependent value function:

v(x)=\left\{\begin{matrix} x \quad &\textrm{where} \quad x \geq 0\\ 2x \quad &\textrm{where} \quad x < 0 \end{matrix}\right.

where x is the realised outcome relative to the reference point.

a) Looking at v(x), we expect Megan to:

  • be loss averse
  • have no diminishing sensitivity to changes of greater magnitude.

Why?

Megan is loss averse as the slope of the value function in the loss domain is steeper than in the gain domain. One unit of loss leads to a greater change in value than one unit of gain.

There is no diminishing sensitivity as the value function is linear in both domains. Sensitivity is constant: as the size of the loss or gain increases, each additional unit of loss or gain changes value by the same amount.

b) Assume Megan has received a birthday card from her aunt, which some years contains $25 and other years contains nothing.

She opens the card and it contains $10.

Consider two alternative reference points: Megan is pessimistic and expects no money in the card, and Megan is optimistic and expects $25.

Compute Megan’s value under each reference point. Which reference point yields higher value?

The value of the $10 from the reference point of expecting nothing is:

\begin{align*} v(x)&=v(10) \\[6pt] &=10 \end{align*}

The value of the $10 from the reference point of expecting $25 is:

\begin{align*} v(x)&=v(10-25) \\[6pt] &=2\times (-15) \\[6pt] &=-30 \end{align*}

Megan has higher value when she does not expect to receive any money. She gets the value of a gain of $10. If she expected the $25, she suffers the value of a loss of $15.

c) Megan has received the $10 in the card and a piece of birthday cake. Her value function over money and pieces of birthday cake is:

v(x)=v(m-r_m)+v(4c-4r_c)

Where m is the amount of money she receives, r_m is her reference point of how much money she expects, c is how many pieces of birthday cake she receives and r_c is how many pieces of birthday cake she expects.

To illustrate how this value function works, imagine Megan expects two pieces of cake and her dog jumps onto the table and eats one of them. Her change in the value function is:

\begin{align*} v(x)&=v(4c-4r_c) \\ &=v(4\times 1-4\times 2) \\ &=v(-4) \end{align*}

As v(x)=2x when x<0:

v(-4)=-8

For this question, assume Megan does not believe that she will receive any money and she expects to eat two pieces of birthday cake. Her reference point is therefore r_m=0 and r_c=2. She receives $10 from her aunt.

Her brother, who loves cake, offers to buy one of her pieces of birthday cake. What price p_s would make Megan indifferent between selling and keeping the piece of cake?

Megan will be indifferent when the value from each option is the same:

\begin{align*} \underbrace{v(10-0+p_s)}_{\substack{\textrm{Value from} \\ \textrm{unexpected} \\ \textrm{money plus} \\ \textrm{payment} \\ \textrm{for cake}}}+ \underbrace{v(4\times 1-4\times 2)}_{\substack{\textrm{Value lost from} \\ \textrm{giving up cake}}} &=\underbrace{v(10-0)}_{\substack{\textrm{Value from} \\ \textrm{unexpected} \\ \textrm{money}}}+ \underbrace{v(4\times 2-4\times 2)}_{\substack{\textrm{Value of} \\ \textrm{keeping cake}}} \\[12pt] v(10+p_s)+v(-4)&=v(10)+v(0) \\[6pt] 10+p_s-8&=10\\[6pt] p_s&=8 \end{align*}

d) Assume that Megan expects to receive only one piece of birthday cake. Her brother then offers to sell her his piece of cake. For r_m=0 and r_c=1, what price p_b would make Megan indifferent between buying the cake and eating only her own piece?

Megan will be indifferent when the value from each option is the same:

\begin{align*} \underbrace{v(10-0-p_b)}_{\substack{\textrm{Value from} \\ \textrm{unexpected} \\ \textrm{money minus} \\ \textrm{payment} \\ \textrm{for cake}}}+ \underbrace{v(4\times 2-4\times 1)}_{\substack{\textrm{Value gained from} \\ \textrm{buying cake}}} &=\underbrace{v(10-0)}_{\substack{\textrm{Value from} \\ \textrm{unexpected} \\ \textrm{money}}}+ \underbrace{v(4\times 1-4\times 1)}_{\substack{\textrm{Value of} \\ \textrm{only one piece}}} \\[12pt] v(10-p_b)+v(4)&=v(10)+v(0) \\[6pt] 10-p_b+4&=10\\[6pt] p_b&=4 \end{align*}

e) Why is there a difference between the price at which Megan was willing to sell cake in part c) compared to the price she was willing to pay in part d)?

The willingness to accept in part c) is higher than the willingness to pay in part d) because in part c) the foregone cake is coded as a loss. This loss reduces value at twice the rate that a gain in cake increases it. In both parts the money side of the transaction stays in the gain domain thanks to the $10 birthday present, so the payment received or paid carries less weight than the loss of cake.
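The following Python sketch checks both indifference prices under the value function above; the helper total_value is my own naming, not part of the exercise.

```python
# Verify the selling price (part c) and buying price (part d).

def v(x):
    return x if x >= 0 else 2 * x

def total_value(money, cake, r_m, r_c):
    """Value over money and cake relative to reference points r_m and r_c."""
    return v(money - r_m) + v(4 * cake - 4 * r_c)

# Part c): r_c = 2. Selling one piece for p_s must match keeping both pieces.
p_s = 8
assert total_value(10 + p_s, 1, 0, 2) == total_value(10, 2, 0, 2)

# Part d): r_c = 1. Buying a second piece for p_b must match eating only one piece.
p_b = 4
assert total_value(10 - p_b, 2, 0, 1) == total_value(10, 1, 0, 1)

print("Willingness to accept (sell):", p_s)
print("Willingness to pay (buy):   ", p_b)
```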

21.4 Saving on a purchase

Consider the following two scenarios:

A) You are considering buying a new type of coffee bean for your home coffee machine. It costs $50 at your local hipster cafe, but you discover that it is for sale for $40 at the supermarket 20 minutes drive from your home. Do you make the trip?

B) You are considering buying a new laptop. It costs $1990 at your local computer store, but you discover that it is for sale for $1980 at another computer store 20 minutes drive from your home. Do you make the trip?

When people are presented with scenarios such as these, they tend to report that they are less likely to make the trip in scenario B, where the product is more expensive.

Explain how an S-shaped value function with diminishing value in both gains and losses could result in this behaviour.

Diminishing value means that the absolute difference between v(-40) and v(-50) is much larger than the absolute difference between v(-1980) and v(-1990). For example, suppose the value function was:

v(x)=\left\{\begin{matrix} x^\frac{1}{2} \quad &\textrm{where} \quad x \geq 0\\ -2(-x)^\frac{1}{2} \quad &\textrm{where} \quad x < 0 \end{matrix}\right.

This would mean that the difference between v(-40) and v(-50) is:

\begin{align*} v(-40)-v(-50)&=-2(40^\frac{1}{2}-50^\frac{1}{2}) \\ &=1.493025 \end{align*}

The difference between v(-1980) and v(-1990) is:

\begin{align*} v(-1980)-v(-1990)&=-2(1980^\frac{1}{2}-1990^\frac{1}{2}) \\ &=0.2244502 \end{align*}

Much less value is gained by driving the 20 minutes across town for the laptop.
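A short Python check of the two differences, using the assumed square-root value function with a loss aversion coefficient of 2:

```python
# Compare the value gained from saving $10 on each purchase.

def v(x):
    return x**0.5 if x >= 0 else -2 * (-x)**0.5

print("Coffee beans:", v(-40) - v(-50))      # roughly 1.49
print("Laptop:      ", v(-1980) - v(-1990))  # roughly 0.22
```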

21.5 A 60:40 gamble

Suppose an agent has the following reference-dependent value function:

v(x)=\left\{\begin{matrix} x^{3/4} \qquad &\textrm{where} \space &x \geq 0\\ -2(-x)^{3/4} \quad &\textrm{where} \space &x < 0 \end{matrix}\right.

Where x is the realised outcome relative to the reference point.

Assume that the agent’s reference point is the status quo and the agent is offered the gamble A:

(\$100, 0.6; −\$100, 0.4)

a) Will they want to play this gamble? Why?

The utility from the gamble is:

\begin{align*} V(A)&=0.6v(100)+0.4v(-100) \\ &=0.6\times (100)^{0.75}-0.4\times 2\times (100)^{0.75} \\ &=-6.3245553 \end{align*}

They will not want to play this gamble as it has a negative value for the agent. They could receive value of 0 by simply not playing.

The reason for this negative value is that the agent is loss averse. The loss of $100 is given twice the weight of an equivalent gain, meaning that they reject the bet despite a win of $100 being more probable.

b) Suppose the agent takes this gamble and loses $100. They feel bad about it and perceive it as a loss. Their reference point is unchanged at the original status quo. They are offered gamble A again. Do they accept this second time? Why?

After losing $100 but not changing their reference point, they face two possible outcomes relative to that reference point: recovering their $100 loss so that they come out even, or a total loss of $200 (losing $100 twice).

\begin{align*} V(A)&=0.6v(-100+100)+0.4v(-100-100) \\ &=0.6\times (0)^{0.75}-0.4\times 2\times (200)^{0.75} \\ &=-42.5463672 \end{align*}

The utility of not playing the gamble involves remaining with a loss of $100:

\begin{align*} V(\neg A)&=v(-100) \\ &=-2\times (100)^{0.75} \\ &=-63.2455532 \end{align*}

They will now want to play the gamble as it has a greater value than staying with their current loss. The gamble becomes attractive because it gives an opportunity to recover the loss. The agent is risk seeking in the loss domain.
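The following Python sketch reproduces the calculations in parts a) and b), assuming the value function above and a reference point that stays at the original status quo after the loss:

```python
# Parts a) and b) of the 60:40 gamble.

def v(x):
    return x**0.75 if x >= 0 else -2 * (-x)**0.75

# Part a): gamble evaluated from the status quo reference point.
V_A = 0.6 * v(100) + 0.4 * v(-100)
print("Value of gamble A:", V_A)               # about -6.32, so reject

# Part b): after losing $100 without updating the reference point.
V_A_again = 0.6 * v(-100 + 100) + 0.4 * v(-100 - 100)
V_decline = v(-100)
print("Play again:", V_A_again)                # about -42.55
print("Decline:   ", V_decline)                # about -63.25, so play again
```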

c) The agent is offered a new job for which they receive a $50,000 sign-on bonus. They adapt to their new wealth, so their reference point changes to accommodate their new situation. They are now offered gamble A again. Do they accept? Why?

With their new reference point, this question is effectively the same as part a). They will refuse the bet.

21.6 A 50:50 gamble

Suppose Tim has the following reference-dependent value function:

v(x)=\left\{\begin{matrix} x^{1/2} \qquad &\textrm{where} &\space x \geq 0\\ -2(-x)^{1/2} \quad &\textrm{where} &\space x < 0 \end{matrix}\right.

x is the change in Tim’s position relative to his reference point.

a) What feature of Tim’s value function leads to loss aversion? Explain.

The branch of the value function that applies when x<0 is scaled by a factor of two relative to the branch that applies when x \geq 0 (the coefficient 2 in -2(-x)^{1/2}). A loss therefore reduces value by twice as much as an equivalent gain increases it, which is loss aversion.

b) Tim considers the gamble A: ($250, 0.5; -$100, 0.5).

  1. Will Tim want to play this gamble?

\begin{align*} V(A)&=p_1v(x_1)+p_2v(x_2) \\ &=0.5v(\$250)+0.5v(-\$100) \\ &=0.5\times 250^{1/2}-0.5\times 2\times 100^{1/2} \\ &=-2.0943058 \end{align*}

Tim has negative value from the gamble, when he could simply have zero value by declining. He rejects.

  2. Explain what features of the value function lead him to accept or reject the gamble.

Two features of the value function lead him to reject:

  • Tim is loss averse (the -2 in the bottom equation), which leads him to give twice the weight to losses relative to an equal sized gain.
  • Tim has diminishing sensitivity to losses and gains. The larger gain is reduced proportionately more by this diminishing sensitivity.

c) Suppose Tim were to experience a large positive or negative shock to his wealth that does not immediately change his reference point. Could either shock cause him to change his decision concerning gamble A? Explain.

Both a positive and negative shock could lead Tim to change his decision.

A large positive shock would place the entire bet into the gain domain. That would remove loss aversion as a factor in rejecting the bet. Given the gamble's high expected value, that would likely be sufficient for him to accept. In addition, a large positive shock would place Tim further up the value function, in a region where it is flatter and closer to linear over the range of the gamble (i.e. he is less risk averse there). This would also increase his tendency to accept the bet.

A large negative shock would place the entire bet into the loss domain. That would remove loss aversion as a factor in rejecting the bet, as there is no gain against which the loss can be given relatively greater weight. Due to diminishing sensitivity to losses, Tim is also risk seeking in the loss domain, which would lead him to accept this bet with its positive expected value.
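As an illustration of part c), the following Python sketch assumes a wealth shock of plus or minus $1,000 that has not yet shifted Tim's reference point; the shock size is my own choice for illustration:

```python
# Gamble A evaluated around an unadjusted reference point after a shock.

def v(x):
    return x**0.5 if x >= 0 else -2 * (-x)**0.5

def play(shock):
    """Value of taking gamble A on top of a prior shock."""
    return 0.5 * v(shock + 250) + 0.5 * v(shock - 100)

def decline(shock):
    """Value of keeping the shock and not playing."""
    return v(shock)

for shock in (0, 1000, -1000):
    # shock 0: reject; shock +1000 or -1000: playing beats declining
    print(f"shock {shock:6}: play = {play(shock):7.2f}, decline = {decline(shock):7.2f}")
```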

21.7 Insurance

Explain how probability weighting as proposed under Prospect Theory can lead a person to simultaneously gamble and purchase insurance.

Decisions under Prospect Theory are the outcome of two factors:

  • The value function that gives a value to each outcome relative to a reference point
  • The probability weighting function that gives a decision weight to the probability of each outcome.

The Prospect Theory value function leads people to be risk averse in the domain of gains and risk seeking in the loss domain. This combination would tend to lead people to purchase neither insurance nor lottery tickets. Their risk-seeking behaviour in the loss domain could lead them to avoid the certain loss of the insurance premium; they would rather risk the loss of the insurable asset.

Their risk averse behaviour in the domain of gains would make a lottery with a negative expected value unattractive.

However, the probability weighting function can counteract that effect. By overweighting the small probability of an insurable event or a lottery win, the agent may decide to insure or purchase a lottery despite the risk attitudes inherent in the value function.

As an example, consider an agent who overweights a probability of 1 in a million lottery win by 1,000 times, acting as though they would win the $1 million prize once every 1000 lotteries. They also overweight the probability of a one in a thousand fire that would destroy their million dollar house by 100 times, acting as though it is a 1-in-10 chance. (These numbers are extreme, but I am creating a toy example.)

Suppose the lottery ticket costs $1, the insurance premium is $1,000 and their value function is:

v(x)=\left\{\begin{matrix} x^{1/2} \quad &\textrm{where} \space &x \geq 0\\ -(-x)^{1/2} \quad &\textrm{where} \space &x < 0 \end{matrix}\right.

Value of purchasing the lottery:

\begin{align*} v(x)&=\sum_{i=1}^{n} \pi(x_i)v(x_i) \\[6pt] &=0.001\times (1000000-1)^{1/2}-0.999\times (1)^{1/2} \\[6pt] &\approx 0.001 \end{align*}

The value of not purchasing the lottery is zero.

They will purchase the lottery as it has the higher value.

Value of not purchasing insurance, which involves the potential loss of the house:

\begin{align*} v(x)&=\sum_{i=1}^{n} \pi(x_i)v(x_i) \\[6pt] &=-0.1\times (1000000)^{1/2}+0.9\times (0)^{1/2} \\[6pt] &=-100 \end{align*}

Value of purchasing insurance, which is the certain loss of the $1,000 premium:

\begin{align*} v(x)&=\sum_{i=1}^{n} \pi(x_i)v(x_i) \\[6pt] &=-(1000)^{1/2} \\[6pt] &=-31.6 \end{align*}

They purchase insurance as it has higher value than not purchasing it.
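The following Python sketch reproduces the toy example, using the assumed decision weights (0.001 for the lottery win, 0.1 for the fire) and the $1,000 premium:

```python
# Simultaneous gambling and insuring under overweighted small probabilities.

def v(x):
    return x**0.5 if x >= 0 else -(-x)**0.5

# Lottery: $1 ticket, $1 million prize, decision weight 0.001 on winning.
lottery = 0.001 * v(1_000_000 - 1) + 0.999 * v(-1)
print("Buy lottery ticket:", lottery)     # about 0.001 > 0, so buy

# Insurance on a $1 million house: decision weight 0.1 on the fire.
no_insurance = 0.1 * v(-1_000_000) + 0.9 * v(0)
insurance = v(-1_000)
print("No insurance:", no_insurance)      # -100
print("Insurance:   ", insurance)         # about -31.6, so insure
```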

21.8 Locking in a win

Amber sued a tabloid newspaper for defamation. The trial has completed and the judge has retired to make her decision.

Amber’s lawyer tells her that she has a 95% chance of winning and receiving $1,000,000 in damages, meaning she has a 5% chance of leaving with nothing. (Ignore any potential costs.) She then receives an offer of settlement from the newspaper for $800,000. Amber accepts.

Use the fourfold pattern of attitudes to risk under prospect theory to explore why Amber accepted the settlement offer.

Rejecting the offer has a higher expected value than the value of the settlement:

\begin{align*} E[\text{reject}]&=p_{win} x_{win} \\ &=0.95\times 1000000 \\ &=950000 \end{align*}

Two forces under prospect theory would lead Amber to take the option with the lower expected value:

  • People are risk averse in the gain domain. Amber would prefer certainty to a gamble with the same expected value, and depending on the level of risk aversion, would be willing to accept an amount for certain lower than the expected value of the bet. That is, the certainty equivalent of the gamble would be less than its expected value.
  • People underweight probabilities short of certainty and overweight small probabilities. The 5% chance of leaving with nothing is likely overweighted by Amber (equivalently, the 95% chance of winning is underweighted), making the gamble of going to trial less attractive. A rough numerical illustration is sketched below.
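The sketch below uses the Tversky-Kahneman (1992) functional forms with parameters alpha = 0.88 and gamma = 0.61; these forms and parameter values are assumptions for illustration rather than part of the question.

```python
# Illustrative comparison of settling versus going to trial.

def v(x, alpha=0.88):
    """Value of a gain (losses are not needed for this comparison)."""
    return x**alpha

def w(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

go_to_trial = w(0.95) * v(1_000_000)   # the 5% outcome of $0 contributes nothing
settle = v(800_000)

print("Go to trial:", round(go_to_trial))   # about 151,000
print("Settle:     ", round(settle))        # about 157,000, so settle
```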

21.9 Lotteries

A common intervention to encourage behaviour change is to offer a lottery ticket as an incentive. For example, a lottery ticket might be offered to people who complete a survey.

The government is considering whether it should incentivise vaccination with either:

  1. a payment of $10 or

  2. a lottery ticket with a 1 in a million chance of winning $10 million.

Use concepts from prospect theory to explain why either option might be more successful.

There are two opposing forces that could lead to either option being preferred.

Both the lottery and payment of $10 are in the gain domain. Under prospect theory, people tend to be risk averse in the gain domain. This will make the payment, which has the same expected value as the lottery, more attractive.

Conversely, under prospect theory, people do not weight outcomes directly by their probability. They apply decision weights that tend to overweight small probabilities and underweight probabilities just short of certainty. This means that the small chance of $10 million will likely be overweighted. This could lead to the value of the lottery exceeding that of a certain payment.

Which effect dominates depends on how risk-averse people are and how much they overweight the probability.
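As an illustration that the balance depends on the parameters, the following Python sketch again uses the Tversky-Kahneman (1992) forms with assumed values alpha = 0.88 and gamma = 0.61; with this particular weighting, the overweighting of the one-in-a-million chance dominates and the lottery incentive is more attractive:

```python
# Certain $10 payment versus a 1-in-a-million chance of $10 million.

def v(x, alpha=0.88):
    return x**alpha

def w(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

certain_payment = v(10)
lottery = w(1e-6) * v(10_000_000)

print("Certain $10:", round(certain_payment, 2))   # roughly 7.6
print("Lottery:    ", round(lottery, 1))           # roughly 316 with this gamma
```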