Neutrality - 014 - Pure, Functional, Financial, Python Katas

Ice pops are more valuable in summer than in winter; financial securities are seasonal too.

When times are scary, safer assets are more valuable.

Assets also differ in value between people. While one person might be nervous and seek safe assets, another might be bullish and seek riskier ones.

This is one reason finance is not a zero-sum game: winning and losing are subjective.

Subjectivity is the point: what matters is how you feel about your investments, not how someone tells you to feel.

This doesn't carry over to simplified quant models, however.

The last kata showed that our forecast of the future assumes we will not profit from our investments; we will merely earn compound interest.

This is not compatible with the 'subjective' view that riskier investments should return more profit on average.

Humans are risk-averse animals; it seems only fair that we expect compensation for the mental stress caused by extra risk.

But in markets of trillions of dollars, perhaps humans are no longer the key factor. Institutions with longer time horizons can afford to take on more risk. In practice, the premium for accepting extra risk is quite low or flat, which means we cannot expect much extra profit in exchange for risky investments, because large institutions are willing to hold the same risk for less.

This idea, that risk doesn't matter much at the market level, is called risk neutrality, and it follows from the No Arbitrage principle.

On a personal level, risk neutrality does not hold, and arbitrage is much easier, because each person has their own risk preferences and can strike win-win deals with people whose needs differ.

~

Today's exercise is to recalculate the expected returns from the previous kata, but in terms of expected utility rather than dollars.

Risk-averse utility looks like this in pseudo-code (the exponent is below 1).

U = $^0.5

The risk-seeking function looks like this (the exponent is above 1).

U = $^1.5

In general, the equation is,

U = $^A
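
A minimal sketch of these as Python lambdas (the names below are illustrative, not from the kata series):

utility = lambda dollars, a: dollars ** a              # power utility, a is the risk parameter A

risk_averse  = lambda dollars: utility(dollars, 0.5)   # a < 1: risk averse
risk_neutral = lambda dollars: utility(dollars, 1.0)   # a = 1: risk neutral, utility is just dollars
risk_seeking = lambda dollars: utility(dollars, 1.5)   # a > 1: risk seeking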

Our expected dollar payoff goes from,

P * U + (1 - P) * D

(where U and D are the up and down payoffs from the previous kata, and P is the probability of the up move) to the expected utility,

P * U ^ 0.5 + (1 - P) * D ^ 0.5
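
As a sketch, with p as the up probability and up/down as the two payoffs (illustrative names, assumed rather than taken from the previous kata), the expected utility fits in a single lambda:

expected_utility = lambda p, up, down, a: p * up ** a + (1 - p) * down ** a

# For example, expected_utility(0.5, 110, 95, 0.5) gives the risk-averse
# expected utility of a security paying 110 or 95 with equal probability.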

Write the payoffs as Python lambdas and plot the overall expected utility against 'A' (the risk parameter).

How do the expected utilities of securities one and two from the previous kata compare?
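
A minimal plotting sketch, assuming a made-up probability and payoffs in place of the numbers from the previous kata (swap in the real securities), might look like this:

import numpy as np
import matplotlib.pyplot as plt

# Power utility and expected utility as lambdas.
utility = lambda dollars, a: dollars ** a
expected_utility = lambda p, up, down, a: p * utility(up, a) + (1 - p) * utility(down, a)

# Hypothetical inputs -- replace with the probability and payoffs from the previous kata.
p = 0.5
security_one = (110.0, 95.0)   # (up payoff, down payoff)
security_two = (130.0, 80.0)

a_values = np.linspace(0.25, 1.75, 100)   # sweep the risk parameter A
eu_one = [expected_utility(p, *security_one, a) for a in a_values]
eu_two = [expected_utility(p, *security_two, a) for a in a_values]

plt.plot(a_values, eu_one, label="security one")
plt.plot(a_values, eu_two, label="security two")
plt.xlabel("A (risk parameter)")
plt.ylabel("expected utility")
plt.legend()
plt.show()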