The St. Petersburg Paradox

Suppose I offer you a game involving flipping coins. You start with a balance of $2 and begin flipping a coin. Every head you flip doubles your balance, but the first tail ends the game, and you walk away with whatever your balance is at that point.

How much would you pay to play this game?

One naive way to assess the “worth” of this game is to compute its expected payoff. If the first tail arrives on flip \(k\), you walk away with \(\$2^k\), an outcome that occurs with probability \(2^{-k}\), so the expectation is

\[\sum_{k=1}^\infty\frac{2^k}{2^k} = \sum_{k=1}^\infty 1 = \infty\]

From a decision-making perspective, that doesn’t quite make sense. The probability of a large payoff drops off dramatically and very quickly. Would any rational actor actually pay $1,000 to play? What about $100,000,000? Somehow, using the expectation to measure the “worth” of the game doesn’t work the way we intended.
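Out of curiosity, we can watch this instability numerically. Below is a quick Monte Carlo sketch (plain Python, following the payout rules above); the sample average never settles down, since ever-rarer, ever-larger payouts keep dragging it upward.

```python
import random

def play_st_petersburg() -> int:
    """Play one round: start at $2, double on every head, stop at the first tail."""
    payout = 2
    while random.random() < 0.5:  # heads with probability 1/2
        payout *= 2
    return payout

# The sample mean keeps drifting upward as rare, huge payouts arrive,
# consistent with the expectation being infinite.
for n in (10**3, 10**5, 10**7):
    mean = sum(play_st_petersburg() for _ in range(n)) / n
    print(f"{n:>10,} plays: sample average payout = ${mean:,.2f}")
```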

Log Wealth

Suppose we make a series of \(N\) investments to grow our wealth. We start with initial capital \(W_0\), and the \(i\)th investment (of some amount, possibly not all of our assets) takes us from wealth \(W_{i-1}\) to \(W_i\). How do we determine the optimal strategy to maximize wealth?

One way to do so would be to simply maximize expected wealth, but as the previous section showed, this can go awry and may even lead us to guaranteed destitution.

Since investment gains and losses compound over time, we can look to maximize \(\frac{W_N}{W_0}\) instead. If \(R_i\) is the proportional return of the \(i\)th investment, then maximizing \(\frac{W_N}{W_0}\) amounts to maximizing the geometric mean of the returns, \( (\Pi_{i=1}^N R_i)^{1/N} \). Doing so is optimal in the long run due to the compounding nature of returns.

Since \( (\Pi_{i=1}^N R_i)^{1/N} = \exp(\frac{1}{N}\sum_{i=1}^N \log R_i)\) and \(\exp\) is monotonically increasing, this amounts to maximizing the expectation of \(\sum_{i=1}^N \log R_i\).

Furthermore, since \(W_N = W_0(\Pi_{i=1}^N R_i)\) and hence \(\log W_N = \log W_0 + \sum_{i=1}^N \log R_i\) with \(W_0\) fixed, maximizing the expectation of the log returns amounts to maximizing expected log wealth.

Thus, from this long escapade, we come to the conclusion that instead of maximizing the expectation of \(W_N\), which at times yields pathological results, we can maximize the expectation of \(\log W_N\) instead.
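To make the distinction concrete, here is a small sketch with a made-up gamble (the 1.6x/0.5x returns are illustrative, not from the discussion above): its arithmetic mean return is above 1, yet its geometric mean is below 1, so compounded wealth almost surely shrinks toward zero.

```python
import math
import random

# A hypothetical gamble: each round multiplies wealth by 1.6 or 0.5
# with equal probability.
returns = (1.6, 0.5)

arith = sum(returns) / len(returns)                                # 1.05   (> 1)
geom = math.exp(sum(math.log(r) for r in returns) / len(returns))  # ~0.894 (< 1)
print(f"arithmetic mean: {arith:.3f}, geometric mean: {geom:.3f}")

# Simulate compounding: despite the positive expected return per round,
# a typical wealth path decays toward zero.
random.seed(0)
W = 1.0
for _ in range(1_000):
    W *= random.choice(returns)
print(f"wealth after 1,000 rounds: {W:.3e}")
```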

Concavity

Since log is a concave function, using it allows us to incorporate risk aversion into our calculations. As we accumulate more wealth, the additional utility does not scale linearly; you can think of it as the “diminishing marginal utility” of additional wealth. Thinking back to the original game, the concavity of the log function dampens the contribution of the huge exponential payoffs to the overall expectation.
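As a worked check, apply the log to the St. Petersburg payoffs themselves (here \(X\) denotes the payout of the game):

\[\mathbb{E}[\log X] = \sum_{k=1}^\infty \frac{\log 2^k}{2^k} = \log 2 \sum_{k=1}^\infty \frac{k}{2^k} = 2\log 2 = \log 4\]

The expected log payout is finite, corresponding to a certain payout of \(\$4\), which is a far more reasonable price for the game than infinity.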

Kelly Criterion

Let’s consider a new game with a binary outcome. We have some capital, of which we can bet an arbitrary fraction \(f\). Success occurs with probability \(p\) and failure with probability \(q = 1 - p\). On success, we gain \(fb\) of our capital (we win \(b\) times our wager); on failure, we lose \(fa\) of it.

What would be the optimal amount to bet?

When \(pb > qa\), the expectation of the game is positive as long as \(f\) is non-zero. If we were to just maximize expectation, we would bet everything every time. But after repeated plays, a failure is certain to arrive eventually, and with our entire stake on the line it devastates our wealth (with \(a = 1\), it wipes us out completely). Playing to maximize expectation is not a viable long-term strategy.

According to Kelly himself,

The gambler introduced here follows an essentially different criterion from the classical gambler. At every bet he maximizes the expected value of the logarithm of his capital. The reason has nothing to do with the value function which he attached to his money, but merely with the fact that it is the logarithm which is additive in repeated bets and to which the law of large numbers applies.

Using our new perspective from the previous section, we have an alternative approach: maximize expected log wealth. This leads us to the optimal betting fraction known as the Kelly criterion:

\[f^{*} = \frac{pb-qa}{ab}\]

If the expectation is not positive, then \(f^* \leq 0\) and the optimal amount to bet is, unsurprisingly, nothing at all.
When the odds are 1:1 (\(a = 1\) and \(b = 1\)), the optimal bet size becomes \(f^* = p - q\).
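As a sanity check, here is a simulation sketch (Python, with illustrative numbers \(p = 0.6\), \(a = b = 1\), none of which come from the text) comparing betting the whole stake, half the stake, and the Kelly fraction \(f^* = 0.2\):

```python
import random

def simulate(f: float, p: float = 0.6, b: float = 1.0, a: float = 1.0,
             rounds: int = 1_000, seed: int = 0) -> float:
    """Repeatedly bet a fraction f of current wealth; return final wealth."""
    rng = random.Random(seed)  # same seed -> same win/loss sequence for every f
    W = 1.0
    for _ in range(rounds):
        if rng.random() < p:
            W *= 1 + f * b  # success: gain f*b of current wealth
        else:
            W *= 1 - f * a  # failure: lose f*a of current wealth
    return W

p, a, b = 0.6, 1.0, 1.0
f_star = (p * b - (1 - p) * a) / (a * b)  # Kelly fraction, here 0.2
for f in (1.0, 0.5, f_star):
    print(f"f = {f:.2f}: final wealth = {simulate(f):.3e}")
```

Betting everything goes bust on the first loss, over-betting at \(f = 0.5\) typically erodes wealth over time (its expected log growth is negative), and the Kelly fraction compounds upward.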

Quick Derivation

Let \(E = p\log(1+fb) + q\log(1-fa)\) be the expected log wealth, and suppose we want to determine the optimal choice \(f^* \) that maximizes \(E\). We must have \(\frac{dE}{df}\big|_{f=f^* } = \frac{pb}{1+f^* b} - \frac{qa}{1-f^* a} = 0\).

Cross-multiplying gives \(pb(1 - f^* a) = qa(1 + f^* b)\), so \(f^* ab(p+q) = pb - qa\). Since \(p + q = 1\), we obtain \(f^* = \frac{pb-qa}{ab}\).
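If we want to double-check the algebra, a symbolic math package can reproduce the result. Here is a small sketch using sympy (assuming \(0 < p < 1\) and \(a, b > 0\)):

```python
import sympy as sp

# Symbols for the bet: success probability p, loss/gain multiples a and b,
# and the fraction f of capital wagered.
p, a, b, f = sp.symbols('p a b f', positive=True)
q = 1 - p  # failure probability

# Expected log wealth after one bet of fraction f
E = p * sp.log(1 + f * b) + q * sp.log(1 - f * a)

# Solve dE/df = 0 for f
f_star = sp.solve(sp.Eq(sp.diff(E, f), 0), f)[0]
print(sp.simplify(f_star))  # algebraically equal to (p*b - q*a)/(a*b)
```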

Aside: CAGR

In finance, CAGR (Compound Annual Growth Rate) also uses the geometric mean, summarizing the growth of an investment based on historical data. The formula is given by

\[\text{CAGR} = \left(\frac{EV}{BV}\right)^{1/n} - 1\]

where \(EV\) is the ending value, \(BV\) is the beginning value, and \(n\) is the number of years.
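As a small illustration (the dollar figures below are made up), the formula is a one-liner:

```python
def cagr(begin_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the geometric-mean yearly return minus 1."""
    return (end_value / begin_value) ** (1 / years) - 1

# A hypothetical investment growing from $10,000 to $16,105 over 5 years
print(f"{cagr(10_000, 16_105, 5):.2%}")  # about 10.00% per year
```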

References

  1. https://quant.stackexchange.com/questions/43369/why-does-kelly-maximise-e-log-space-g-rather-than-simply-eg
  2. https://en.wikipedia.org/wiki/Kelly_criterion
  3. https://www.princeton.edu/~wbialek/rome/refs/kelly_56.pdf