# 12 Risk and uncertainty exercises

## 12.1 Expected value of roulette

You are playing roulette at the casino. There are 37 numbered pockets around the edge of the wheel (0 through 36). If you make a straight up bet on one of the 37 single numbers, you are paid $35 for every dollar you bet (in addition to receiving back your bet). What is the expected value of a $20 bet?

The expected value of the Roulette bet is:

\begin{align*} E[X]&=\sum_{i=1}^n p_ix_i \\[6pt] &=\frac{36}{37}\times(-\$20)+\frac{1}{37}\times(35\times\$20) \\[6pt] &=-\$0.54 \end{align*}

## 12.2 Expected value of insurance

An agent is considering insurance against bushfire for its $1,000,000 house. The house has a 1 in 1000 (p=0.001) chance of burning down. An insurer is willing to offer full coverage for a premium of $1100.

a) What is the expected value of purchasing insurance?

If you purchase insurance, you pay the premium and do not suffer any loss regardless of whether there is a bushfire or not.

E[\text{purchase}]=-\text{premium}=-\$1,100

The expected value of purchasing insurance is the guaranteed loss of the premium.

You could also think of the expected value of purchasing insurance as involving both the loss of the house and the insurance payout in case of fire. In that case, you would write:

\begin{align*} E[\text{purchase}]&=p\times(-value_{\text{house}}+payout-premium) \\[6pt] &\qquad +(1-p)\times(-premium) \\[6pt] &=0.001\times(-1000000+1000000-1100) \\[6pt] &\qquad +0.999\times(-1100) \\[6pt] &=-\$1,100 \end{align*}

This gives the same answer as the first method.

b) What is the expected value of not purchasing insurance?

\begin{align*} E[\text{don't}]&=p\times(-value_{\text{house}}) \\ &=0.001\times(-1000000) \\ &=-\$1,000 \end{align*}
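These expected values are easy to verify numerically. A quick Python sketch (variable names are my own):

```python
# Expected value of a $20 straight-up roulette bet (exercise 12.1):
# lose $20 with probability 36/37, win 35 * $20 with probability 1/37.
ev_roulette = (36 / 37) * -20 + (1 / 37) * (35 * 20)

# Expected values for the insurance decision (exercise 12.2):
p_fire = 0.001
premium = 1100
house = 1_000_000

# Purchasing: premium is paid in both states; the payout offsets the loss.
ev_purchase = p_fire * (-house + house - premium) + (1 - p_fire) * -premium
# Not purchasing: bear the full loss with probability p_fire.
ev_dont = p_fire * -house
```
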

## 12.3 A bet or a certain payment?

Anika is an expected utility maximiser with the following utility function:

U(x)=\sqrt{x}

Anika is offered the following choice:

1. A 50% chance of winning $10 and a 50% chance of winning nothing
2. $4 for certain

Anika has zero wealth besides this offer.

a) What is the expected value of option A)?

The expected value of option A) is:

\begin{align*} E[A]&=\sum_{i=1}^n p_ix_i \\[12pt] &=0.5*\$10+0.5*0 \\[6pt] &=\$5 \end{align*}

b) Will Anika choose A or B? Why?

We need to determine the expected utility of each option. Anika will select the option with the highest expected utility.

The expected utility of option A) is:

\begin{align*} EU(A)&=p_1U(x_1)+p_2U(x_2) \\ &=0.5*\sqrt{10}+0.5*\sqrt{0} \\ &=1.58 \end{align*}

The expected utility of option B) is:

\begin{align*} EU(B)&=U(4) \\ &=\sqrt{4} \\ &=2 \end{align*}

Anika will choose option B) as it gives her higher expected utility. Anika is risk averse.

c) What is the certainty equivalent of option A?

To calculate the certainty equivalent of option A, we calculate what payment with certainty would deliver equivalent expected utility. That is:

\begin{align*} EU(CE)&=1.58 \\ \sqrt{CE}&=1.58 \\ CE&=1.58^2 \\ &=2.5 \end{align*}

The certainty equivalent of option A is $2.50. That is, Anika would be indifferent between option A and a payment of $2.50 for certain.
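Parts a) to c) can be checked in a few lines of Python (a sketch; names are mine):

```python
from math import sqrt

p, win = 0.5, 10
ev_a = p * win + (1 - p) * 0              # expected value of option A
eu_a = p * sqrt(win) + (1 - p) * sqrt(0)  # expected utility of option A
eu_b = sqrt(4)                            # expected utility of option B
ce_a = eu_a ** 2                          # certainty equivalent: U(CE) = EU(A)
```
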

d) Draw a graph showing Anika’s utility curve, the expected value of option A, the expected utility of options A) and B) and the certainty equivalent of option A).

## 12.4 A 50:50 gamble

Consider the following gamble:

(0.5, $550; 0.5, -$500)

This gamble provides a 50% chance of winning $550 and a 50% chance of losing $500.

a) Would a risk neutral agent (who maximises expected value) be willing to pay $20 to play this gamble? What is the most they would be willing to pay to play?

The expected value of the gamble is:

\begin{align*} E[X]&=\sum_{i=1}^n p_ix_i \\[12pt] &=0.5(550)+0.5(-500) \\[6pt] &=\$25 \end{align*}

This is greater than $20, so a risk neutral agent will be willing to pay $20 to participate in the gamble. The most they would pay is the expected value, $25, at which point they would be indifferent between paying to play and refusing.

We could also have solved this by determining the expected value if they had paid $20:

\begin{align*} E[X]-c&=\sum_{i=1}^n p_ix_i-c \\[12pt] &=0.5(550)+0.5(-500)-20 \\[6pt] &=\$5 \end{align*}

As the expected value is positive, the agent would be willing to pay $20.

b) Would a risk averse expected utility maximiser with wealth $1000 and utility function U(x)=x^{1/2} be willing to pay $20 to play this gamble? What is the most they would be willing to pay to play?

The expected utility of the gamble for the risk averse agent if they paid $20 to play is:

\begin{align*} EU(x)&=p_1U(W+x_1-c)+p_2U(W+x_2-c) \\[6pt] &=0.5(1000+550-20)^{1/2}+0.5(1000-500-20)^{1/2} \\[6pt] &=30.51 \end{align*}

The expected utility of not playing the gamble is:

\begin{align*} EU(x)&=(1000)^{1/2} \\[6pt] &=31.62 \end{align*}

They would not pay $20 as they would have higher utility if they turned down the gamble. In fact, they would not pay any positive sum to participate in the gamble. If they were offered the gamble for free, their expected utility would be:

\begin{align*} EU(x)&=0.5(1000+550)^{1/2}+0.5(1000-500)^{1/2} \\[6pt] &=30.86 \end{align*}

This is less than if they simply turned down the gamble. They would be willing to pay to avoid the gamble. How much? We can determine this by asking what wealth gives a utility of 30.86 (carrying the unrounded value of 30.8655 through the calculation):

\begin{align*} W^{1/2}&=30.8655 \\[6pt] W&=30.8655^2 \\[6pt] &=\$952.67 \end{align*}

The certainty equivalent of the gamble is $952.67. The agent would be willing to pay up to $47.33 to avoid the gamble.

c) Would the expected utility maximiser with utility function U(x)=x^{1/2} change their decision if they had $1 million in wealth? Explain.

If they now have $1 million in wealth, we simply repeat the calculations above with the new wealth.

\begin{align*} EU(x)&=0.5(1000000+550-20)^{1/2}+0.5(1000000-500-20)^{1/2} \\[6pt] &=1000.00247 \end{align*}

The expected utility of not playing the gamble is:

\begin{align*} EU(x)&=(1000000)^{1/2} \\[6pt] &=1000 \end{align*}

They would be willing to pay $20 as they would have higher utility if they accepted the gamble. What is the most they would be willing to pay? If they were offered the gamble for free, their expected utility would be:

\begin{align*} EU(x)&=0.5(1000000+550)^{1/2}+0.5(1000000-500)^{1/2} \\[6pt] &=1000.0125 \end{align*}

How much would they be willing to pay for this opportunity? We can determine this by asking what wealth gives a utility of 1000.0125:

\begin{align*} W&=(1000.0124655)^2 \\[6pt] &=\$1,000,024.93 \end{align*}

The agent would be willing to pay up to $24.93 for the gamble. This is close to the expected value of $25.

Intuitively, as the agent’s wealth increases, their utility function becomes increasingly linear (its second derivative approaches zero) and they become closer to risk neutral.
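Parts b) and c) can be summarised by computing, for each wealth level, the most the agent would pay to play the gamble: the certainty equivalent of wealth-plus-gamble minus current wealth (negative means they would pay that much to avoid it). A Python sketch under the square-root utility used above (function name is mine):

```python
from math import sqrt

def max_willingness_to_pay(wealth):
    """Certainty equivalent of (wealth + gamble) minus current wealth,
    for the 50:50 gamble over +$550 / -$500 with U(x) = sqrt(x)."""
    eu = 0.5 * sqrt(wealth + 550) + 0.5 * sqrt(wealth - 500)
    return eu ** 2 - wealth
```

At wealth $1,000 this gives about -$47.33 (they would pay up to $47.33 to avoid the gamble); at $1,000,000 it gives about +$24.93, approaching the expected value of $25.
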

## 12.5 A 60:40 gamble

Penny is an expected utility maximiser with utility function u(x)=ln(x) and wealth of $300. Penny is offered the following bet A:

• a 60% probability to win $150
• a 40% probability to lose $100.

a) Does Penny accept bet A?

Penny compares the utility of taking versus not taking the bet:

\begin{align*} U(\text{A})&=p_1u(x_1)+p_2u(x_2) \\ &=0.6ln(W+150)+0.4ln(W-100) \\ &=0.6ln(450)+0.4ln(200) \\ &=5.785 \\ \\ U(W)&=ln(W) \\ &=ln(300) \\ &=5.704 \end{align*}

U(A)>U(W), so Penny accepts the bet.

b) Following some bad economic news, Penny's wealth declines to $150.

Penny is offered bet A again. Does Penny accept the bet?

Penny compares the utility of taking versus not taking the bet:

\begin{align*} U(\text{A})&=p_1u(x_1)+p_2u(x_2) \\ &=0.6ln(W+150)+0.4ln(W-100) \\ &=0.6ln(300)+0.4ln(50) \\ &=4.987 \\ \\ U(W)&=ln(W) \\ &=ln(150) \\ &=5.011 \end{align*}

U(A)<U(W), so Penny rejects the bet.
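Both parts can be checked with a short Python sketch (function name is mine):

```python
from math import log

def accepts_bet(wealth):
    """Penny accepts bet A iff expected log-utility with the bet
    exceeds the log-utility of keeping her wealth for certain."""
    eu_bet = 0.6 * log(wealth + 150) + 0.4 * log(wealth - 100)
    return eu_bet > log(wealth)
```
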

## 12.6 Another 60:40 gamble

Gamble A is as follows:

($100, 0.6; -$100, 0.4)

This is a gamble with a 60% chance of winning $100 and a 40% chance of losing $100.

a) Would a risk neutral decision-maker (who maximises expected value) be willing to pay $10 to play gamble A? What is the most they would be willing to pay to play?

A risk neutral decision maker will accept any offer with positive expected value. The expected value of the bet is:

\begin{align*} E[A]&=p_1x_1+p_2x_2 \\ &=0.6*100+0.4*(-100) \\ &=\$20 \end{align*}

The risk neutral decision maker would pay $10 as this is less than the expected value of the bet. They would be willing to pay up to the expected value of the bet: $20. At that point they would be indifferent between paying for the bet and refusing the bet.

b) Would an expected utility maximiser with wealth $200 and utility function U(x)=ln(x) be willing to pay$10 to play gamble A? What is the most they would be willing to pay to play?

The expected utility maximiser will play if their utility from playing and paying is greater than their utility of refusing.

\begin{align*} U(W)&=ln(W) \\ &=ln(\$200) \\ &=5.2983174 \\ \\ E[U(A-c)]&=p_1U(x_1)+p_2U(x_2) \\ &=0.6U(W+100-10)+0.4U(W-100-10) \\ &=0.6ln(200+100-10)+0.4ln(200-100-10) \\ &=5.2018524 \end{align*}

U(W)>E[U(A-c)] so the decision maker will not be willing to pay $10.

To determine the most they would be willing to pay, we will first check whether they will pay any positive sum. We will do that by examining the expected utility of the gamble with no payment.

\begin{align*} E[U(A)]&=p_1U(x_1)+p_2U(x_2) \\ &=0.6U(W+100)+0.4U(W-100) \\ &=0.6ln(200+100)+0.4ln(200-100) \\ &=5.2643376 \end{align*}

U(W)>E[U(A)] so the decision maker will not be willing to pay any amount. In fact, they would pay to avoid the bet.

To calculate how much, we determine what the certainty equivalent of the bet is:

\begin{align*} U(CE)&=E[U(A)] \\ ln(CE)&=5.2643376\\ CE&=e^{5.2643376} \\ &=\$193.32 \end{align*}

Having wealth of $200 and the bet is the equivalent of having wealth of $193.32. They would be willing to pay up to $6.68 to avoid the bet.
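The certainty equivalent step can be verified directly (Python sketch, names mine):

```python
from math import exp, log

wealth = 200
# Expected log-utility of holding wealth plus the free gamble:
eu_gamble = 0.6 * log(wealth + 100) + 0.4 * log(wealth - 100)
ce = exp(eu_gamble)          # wealth level giving the same utility
pay_to_avoid = wealth - ce   # maximum premium to escape the gamble
```
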

c) Would the expected utility maximiser with utility function U(x)=ln(x) change their decision if they had $1000 in wealth? Explain.

The expected utility maximiser will play if their utility from playing and paying is greater than their utility of refusing.

\begin{align*} U(W)&=ln(W) \\ &=ln(\$1000) \\ &=6.9077553 \\ \\ E[U(A-c)]&=p_1U(x_1)+p_2U(x_2) \\ &=0.6U(W+100-10)+0.4U(W-100-10) \\ &=0.6ln(1000+100-10)+0.4ln(1000-100-10) \\ &=6.9128484 \end{align*}

U(W)<E[U(A-c)] so the decision maker is now willing to pay $10. As the agent's wealth increases, their utility function becomes increasingly linear (its second derivative approaches zero) and they become closer to risk neutral. As a result, the positive expected value bet becomes increasingly attractive.

d) At what wealth is the expected utility maximiser with utility function U(x)=ln(x) indifferent between accepting gamble A or not?

The expected utility maximiser will be indifferent when:

\begin{align*} U(W)&=E[U(A)] \\ U(W)&=0.6U(W+100)+0.4U(W-100) \\ ln(W)&=0.6ln(W+100)+0.4ln(W-100) \end{align*}

There isn't a simple closed-form solution to this equation, but we know from questions b) and c) that W is somewhere between $200 and $1000. If we wanted to calculate an exact solution, we could use a tool such as Mathematica or Matlab, write short code in R, or iterate toward a solution in Excel. For example, in R:

```r
equation <- function(W){
  log(W) - 0.6 * log(W + 100) - 0.4 * log(W - 100)
}
indifference <- uniroot(equation, c(200, 1000))
```

The expected utility maximiser is indifferent when W=$256.81.
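For readers working in Python rather than R, the same root can be found with a plain bisection search (a sketch; scipy.optimize.brentq would also do the job):

```python
from math import log

def equation(w):
    # ln(W) - 0.6 ln(W+100) - 0.4 ln(W-100): zero at the indifference wealth
    return log(w) - 0.6 * log(w + 100) - 0.4 * log(w - 100)

lo, hi = 200.0, 1000.0          # bracket known from parts b) and c)
for _ in range(60):             # halve the bracket until it is tiny
    mid = (lo + hi) / 2
    if equation(lo) * equation(mid) <= 0:
        hi = mid
    else:
        lo = mid
indifference_wealth = (lo + hi) / 2
```
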

## 12.7 Purchasing insurance

An agent is considering insurance against bushfire for its $1,000,000 house. The house has a 1 in 1000 chance of burning down. An insurer is willing to offer full coverage for $1100.

a) Would a risk neutral agent purchase the insurance?

We have already calculated that purchasing insurance in this case has a lower expected value than not purchasing the insurance. A risk neutral agent would not purchase the insurance.

b) Suppose an agent has a logarithmic utility function (U(x)=\ln(x)) and they have $10,000 in cash in addition to their house, giving them wealth (W) of $1,010,000. Would this agent purchase the insurance? Are they risk seeking, risk neutral or risk averse?

\begin{align*} E[U(\text{purchase})]&=\ln(W-premium) \\ &=\ln(1,008,900) \\ &=13.8244\\ \\ E[U(\text{don't})]&=0.999*\ln(W)+0.001*\ln(W-value_{\text{house}}) \\ &=0.999*\ln(1,010,000)+0.001*\ln(10,000) \\ &=13.8208 \end{align*}

The expected utility of purchasing insurance is greater than the expected utility from not purchasing insurance. This agent will purchase insurance. They are risk averse.

What is the intuition for this agent’s purchase of insurance? Diminishing marginal utility means that the utility of average wealth is greater than the average utility of wealth (e.g. U(\$0)+U(\$200)<U(\$100)+U(\$100)). Therefore, their expected utility is higher when wealth is distributed evenly across the possible states of the world rather than concentrated in one state - or in the case of a disaster, very low in one state. The consumer insures as a way of evenly distributing wealth across all possible states.
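The comparison in part b) is easy to verify numerically (Python sketch, names mine):

```python
from math import log

wealth = 1_010_000
house = 1_000_000
premium = 1100
p_fire = 0.001

# Purchase: pay the premium, fully covered in both states of the world.
eu_purchase = log(wealth - premium)
# Don't purchase: keep the premium but risk losing the house.
eu_dont = (1 - p_fire) * log(wealth) + p_fire * log(wealth - house)
```
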

## 12.8 Insurance but not a lottery ticket

In your own words but using concepts from this subject explain why a risk averse agent who makes decisions according to expected utility theory might purchase insurance but not a lottery ticket.

Both lotteries and insurance have a negative expected value.

The risk averse agent will typically reject a lottery: it offers a small probability of a large win for the price of a small certain loss. Due to diminishing marginal utility, each dollar of the large potential gain carries less utility weight, on average, than each dollar of the small price paid. This makes the lottery unattractive.

In contrast, a risk averse agent may purchase insurance because, for a small price, they can avoid the possibility of a large loss. Due to diminishing marginal utility, each dollar of the large avoided loss carries more utility weight, on average, than each dollar of the small premium.

## 12.9 An anomaly in expected utility

Consider the following two choices:

Choice 1: Choose one of the following bets:

Bet A:

• $10,000 with probability: 11%
• $0 with probability: 89%

Bet B:

• $50,000 with probability: 10%
• $0 with probability: 90%

Choice 2: Choose one of the following bets:

Bet A’:

• $10,000 with probability: 100%

Bet B’:

• $50,000 with probability: 10%
• $10,000 with probability: 89%
• $0 with probability: 1%

Many people pick B for Choice 1 and A’ for Choice 2.

Does this pair of choices conform with Expected Utility Theory? Why?

According to Expected Utility Theory, if an agent selects B:

\begin{align*} 0.10U(50,000)+0.90U(0)&> 0.11U(10,000)+ 0.89U(0) \\[6pt] 0.10U(50,000)+0.01U(0)&> 0.11U(10,000) \end{align*}

According to Expected Utility Theory, if an agent selects A’:

\begin{align*} U(10,000)&>0.10U(50,000)+0.89U(10,000)+0.01U(0) \\[6pt] 0.11U(10,000)&>0.10U(50,000)+0.01U(0) \end{align*}

This is a contradiction. Under expected utility theory, if an agent chooses B it should choose B’. And if the agent chooses A it should choose A’.

This occurs due to a breach in the principle of independence.
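We can also illustrate the contradiction numerically: for any utility function whatsoever, an expected utility maximiser's choice in Choice 1 must match its choice in Choice 2, because both comparisons reduce to the same inequality. A Python sketch over a few power utilities (the exponents are arbitrary, chosen for illustration):

```python
def prefers_B(u):
    """True if EU(B) > EU(A) in Choice 1."""
    return 0.10 * u(50_000) + 0.90 * u(0) > 0.11 * u(10_000) + 0.89 * u(0)

def prefers_B_prime(u):
    """True if EU(B') > EU(A') in Choice 2."""
    return 0.10 * u(50_000) + 0.89 * u(10_000) + 0.01 * u(0) > u(10_000)

# Whatever the curvature, the two choices always agree under EU theory.
for a in (0.05, 0.5, 1.0):
    u = lambda x, a=a: x ** a
    assert prefers_B(u) == prefers_B_prime(u)
```

Picking B in Choice 1 but A’ in Choice 2, as many people do, is therefore inconsistent with any expected utility function.
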

Here is a representation of the choices, with 100 equally likely lottery tickets determining the outcome:

| Bet | Tickets 1–89 | Tickets 90–99 | Ticket 100 |
|-----|--------------|---------------|------------|
| A   | $0           | $10,000       | $10,000    |
| B   | $0           | $50,000       | $0         |
| A’  | $10,000      | $10,000       | $10,000    |
| B’  | $10,000      | $50,000      | $0         |

The bets in the first column are the same within each choice: both A and B pay $0, and both A’ and B’ pay $10,000. Preferring B to A and A’ to B’ is a violation of the axiom of the independence of irrelevant alternatives: under that axiom, two gambles mixed with an irrelevant third gamble will maintain the same order of preference as when the two are presented independently of the third gamble.

Using this representation in the table, here is another way of understanding why this combination of choices is an anomaly. Imagine there are 100 tickets numbered 1 to 100. One ticket will be drawn. If a ticket between 1 and 89 is drawn, you win the prize in the first column. If a ticket between 90 and 99 is drawn, you win the amount in the second column. If ticket 100 is drawn, you win the sum in the third column.

Suppose that you know the ticket that is drawn is between 1 and 89. Would you prefer A or B? As you would win $0 with either choice, you will be indifferent. You will similarly be indifferent between A’ and B’, winning $10,000 no matter what.

Suppose instead that a ticket between 90 and 100 is drawn, but you don’t know which. You can see that if you prefer A to B, you should also prefer A’ to B’. In each choice you are effectively facing the same bet. Let’s assume for the moment that you prefer B and B’.

Finally, suppose you don’t know which ticket will be drawn. We have just determined that if you know the ticket is between 1 and 89 you are indifferent between the options, and that if a ticket between 90 and 100 is drawn you prefer B and B’. Since you never strictly prefer A or A’ in either ticket range, you should not prefer A or A’ when the ticket number is unknown.

Finally, using the formal definition for the independence of irrelevant alternatives axiom:

• if x and y are lotteries with x\succcurlyeq y and
• p is the probability that a third option z is present, then: pz+(1-p)x\succcurlyeq pz+(1-p)y

For each of the choices in our lottery:

• p=89\%

• x is a 100% chance of $10,000
• y is a 0.01/(1-0.89) chance of $0 and a 0.10/(1-0.89) chance of $50,000
• z is $10,000 in choice 1 and $0 in choice 2, although z’s value does not matter due to its assumed irrelevance.