Law of Large Numbers and Central Limit Theorem under Uncertainty, the related New Itô's Calculus and Applications to Risk Measures

Speaker: 
Shige Peng
Date: 
Thu, Jul 9, 2009
Location: 
University of New South Wales, Sydney, Australia
Conference: 
1st PRIMA Congress
Abstract: 

Let $ S_n= \sum_{i=1}^n X_i $, where $ \{X_i\}_{i=1}^\infty $ is a sequence of independent and identically distributed (i.i.d.) random variables with $ E[X_1]=\mu $. According to the classical law of large numbers (LLN), the sum $ S_n/n $ converges strongly to $ \mu $. Moreover, the well-known central limit theorem (CLT) tells us that, with $ \mu = 0 $ and $ \sigma^2=E[X_1^2] $, for each bounded and continuous function $ \varphi $ we have $ \lim_n E[\varphi(S_n/\sqrt{n})]=E[\varphi(X)] $ with $ X \sim N(0, \sigma^2) $.
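
As a rough numerical sketch of these two classical statements (the exponential and uniform samples, the test function $\cos$, and the sample sizes below are illustrative choices, not part of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# LLN: S_n / n should approach mu = E[X_1].
n = 100_000
X = rng.exponential(scale=2.0, size=n)        # i.i.d. sample with mu = 2
print(X.sum() / n)                            # close to 2 for large n

# CLT: with mu = 0, E[phi(S_n / sqrt(n))] approaches E[phi(X)], X ~ N(0, sigma^2).
phi = np.cos                                  # a bounded continuous test function
n, trials = 1_000, 5_000
Y = rng.uniform(-1.0, 1.0, size=(trials, n))  # mu = 0, sigma^2 = 1/3
lhs = phi(Y.sum(axis=1) / np.sqrt(n)).mean()
rhs = phi(rng.normal(0.0, np.sqrt(1 / 3), size=trials)).mean()
print(lhs, rhs)                               # the two Monte Carlo averages should be close
```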

These two fundamentally important results are widely used in probability, statistics and data analysis, as well as in many practical situations such as financial pricing and risk control. They provide a strong argument to explain why in practice normal distributions are so widely used. But a serious problem is that the i.i.d. condition is very difficult to satisfy in practice for most real-time processes, for which classical trials and samplings become impossible and the uncertainty of probabilities and/or distributions cannot be neglected.

In this talk we present a systematic generalization of the above LLN and CLT. Instead of fixing a probability measure $ P $, we only assume that there exists an uncertain subset of probability measures $ \{P_\theta:\theta \in \Theta\} $. In this case a robust way to calculate the expectation of a financial loss $ X $ is its upper expectation: $ \hat{\mathbf{E}}[X]=\sup_{\theta \in \Theta} E_\theta[X] $, where $ E_\theta $ is the expectation under the probability $ P_\theta $. The corresponding distribution uncertainty of $ X $ is given by $ F_\theta(x)=P_\theta(X \leq x) $, $ \theta \in \Theta $. Our main assumptions are:

  1. The distributions of $ X_i $ are within an abstract subset of distributions $ \{F_\theta(x):\theta \in \Theta\} $, called the distribution uncertainty of $ X_i $, with $ \overline{\mu}=\hat{\mathbf{E}}[X_i]=\sup_\theta\int_{-\infty}^\infty x\,F_\theta(dx) $ and $ \underline{\mu}=-\hat{\mathbf{E}}[-X_i]=\inf_\theta \int_{-\infty}^\infty x\,F_\theta(dx) $.
  2. Any realization of $ X_1, \ldots, X_n $ does not change the distribution uncertainty of $ X_{n+1} $ (a new type of `independence').
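
As a toy sketch of the upper expectation $\hat{\mathbf{E}}[X]=\sup_{\theta\in\Theta}E_\theta[X]$ and of the upper and lower means $\overline{\mu}$, $\underline{\mu}$ (the finite grid of values and the three candidate distributions below are hypothetical, chosen purely for illustration):

```python
import numpy as np

# Hypothetical toy model: X takes finitely many values and the uncertainty is a
# finite family of candidate distributions {P_theta : theta in Theta}.
values = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
P = {
    "theta1": np.array([0.10, 0.20, 0.40, 0.20, 0.10]),
    "theta2": np.array([0.05, 0.15, 0.30, 0.30, 0.20]),
    "theta3": np.array([0.20, 0.30, 0.30, 0.15, 0.05]),
}

def upper_expectation(payoff):
    """E_hat[payoff(X)] = sup over theta of E_theta[payoff(X)]."""
    return max(float(p @ payoff(values)) for p in P.values())

loss = lambda x: np.maximum(x, 0.0)              # a financial loss, e.g. a call-type payoff
mu_bar = upper_expectation(lambda x: x)          # upper mean
mu_low = -upper_expectation(lambda x: -x)        # lower mean
print(upper_expectation(loss), mu_low, mu_bar)   # here: 0.7, -0.45, 0.45
```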

Our new LLN is: for each continuous function $ \varphi $ of linear growth we have

$$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/n)] = \sup_{\underline{\mu}\leq v\leq \overline{\mu}} \varphi(v).$$

Namely, the distribution uncertainty of $ S_n/n $ is, approximately, $ \{ \delta_v: \underline{\mu} \leq v \leq \overline{\mu}\} $, where $ \delta_v $ denotes the Dirac measure at $ v $.
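
A minimal sketch of the right-hand side of this LLN (the interval $[\underline{\mu},\overline{\mu}]=[-0.5,1.0]$ and the test function below are assumed purely for illustration):

```python
import numpy as np

# Right-hand side of the LLN under mean uncertainty: sup of phi(v) over [mu_low, mu_bar].
mu_low, mu_bar = -0.5, 1.0
phi = lambda v: np.abs(v - 0.2)                  # continuous, of linear growth

grid = np.linspace(mu_low, mu_bar, 10_001)
print(phi(grid).max())                           # sup over the interval; here 0.8, attained at v = 1.0
```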

In particular, if $ \underline{\mu}=\overline{\mu}=0 $, then $ S_n/n $ converges strongly to $ 0 $. In this case, if we furthermore assume that $ \overline{\sigma}^2=\hat{\mathbf{E}}[X_i^2] $ and $ \underline{\sigma}^2=-\hat{\mathbf{E}}[-X_i^2] $, $ i=1, 2, \ldots $, then we have the following generalization of the CLT:

$$\lim_{n\to\infty} \hat{\mathbf{E}}[\varphi(S_n/\sqrt{n})]= \hat{\mathbf{E}}[\varphi(X)], \qquad \mathcal{L}(X)\in N(0,[\underline{\sigma}^2,\overline{\sigma}^2]).$$

Here $ N(0, [\underline{\sigma}^2, \overline{\sigma}^2]) $ stands for a distribution uncertainty subset and $ \hat{\mathbf{E}}[\varphi(X)] $ is the corresponding upper expectation. The number $ \hat{\mathbf{E}}[\varphi(X)] $ can be calculated by defining $ u(t, x):=\hat{\mathbf{E}}[\varphi(x+\sqrt{t}\,X)] $, which solves the following PDE: $ \partial_t u= G(u_{xx}) $, with $ G(a):=\frac{1}{2}(\overline{\sigma}^2a^+-\underline{\sigma}^2a^-). $
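
A minimal finite-difference sketch of this characterization, solving $ \partial_t u = G(u_{xx}) $ with an explicit scheme so that $ u(1,0)\approx \hat{\mathbf{E}}[\varphi(X)] $; the volatility band $[\underline{\sigma}^2,\overline{\sigma}^2]=[0.25,1]$, the payoff $\varphi(x)=(x-0.2)^+$, the truncated domain and the grid sizes are illustrative assumptions:

```python
import numpy as np

# Explicit finite-difference scheme for the G-heat equation
#     du/dt = G(u_xx),  G(a) = 0.5 * (sigma_bar^2 * a^+ - sigma_low^2 * a^-),
# with initial condition u(0, x) = phi(x); then u(1, 0) approximates E_hat[phi(X)]
# for X ~ N(0, [sigma_low^2, sigma_bar^2]).
sigma_low, sigma_bar = 0.5, 1.0
phi = lambda x: np.maximum(x - 0.2, 0.0)         # a convex payoff

L, nx = 8.0, 801                                 # truncated spatial domain [-L, L]
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / sigma_bar**2                  # satisfies the explicit-scheme stability bound
nt = int(np.ceil(1.0 / dt))
dt = 1.0 / nt                                    # take nt steps to reach t = 1 exactly

u = phi(x)
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    G = 0.5 * (sigma_bar**2 * np.maximum(uxx, 0.0) - sigma_low**2 * np.maximum(-uxx, 0.0))
    u = u + dt * G                               # boundary values stay frozen (uxx = 0 there)

print(u[nx // 2])                                # ~ E_hat[phi(X)]; about 0.307 for this convex phi
```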

An interesting situation arises when $ \varphi $ is a convex function: then $ \hat{\mathbf{E}}[\varphi(X)]=E[\varphi(X_0)] $ with $ X_0 \sim N(0, \overline{\sigma}^2) $. But if $ \varphi $ is a concave function, then the above $ \overline{\sigma}^2 $ has to be replaced by $ \underline{\sigma}^2 $. This coincidence can be used to explain a well-known puzzle: many practitioners, particularly in finance, use normal distributions with `dirty' data, and often with success. In fact, this is also a highly risky operation if the reasoning is not fully understood. If $ \underline{\sigma}=\overline{\sigma}=\sigma $, then $ N(0, [\underline{\sigma}^2, \overline{\sigma}^2])=N(0, \sigma^2) $, which is a classical normal distribution. The method of proof is very different from the classical one, and a deep regularity estimate for fully nonlinear PDEs plays a crucial role.
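
Continuing the sketch above, for the convex payoff $\varphi(x)=(x-0.2)^+$ the upper expectation should agree with the classical expectation under $N(0,\overline{\sigma}^2)$, and for the concave payoff $-\varphi$ with the one under $N(0,\underline{\sigma}^2)$; a quick closed-form check (using SciPy, with the same illustrative parameters as before):

```python
from scipy.stats import norm

# Closed-form check of the convex/concave coincidence for phi(x) = (x - 0.2)^+ :
#   convex phi  :  E_hat[phi(X)]  = E[phi(X0)],  X0 ~ N(0, sigma_bar^2)
#   concave -phi:  E_hat[-phi(X)] = E[-phi(X0)], X0 ~ N(0, sigma_low^2)
# using E[(X0 - k)^+] = sigma * pdf(k / sigma) - k * sf(k / sigma) for X0 ~ N(0, sigma^2).
sigma_low, sigma_bar, k = 0.5, 1.0, 0.2

convex_value = sigma_bar * norm.pdf(k / sigma_bar) - k * norm.sf(k / sigma_bar)
concave_value = -(sigma_low * norm.pdf(k / sigma_low) - k * norm.sf(k / sigma_low))
print(convex_value)     # ~0.307, matching u(1, 0) from the PDE sketch above
print(concave_value)    # ~-0.115, governed by sigma_low
```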

A type of combination of the LLN and CLT, which converges in law to the more general $ N([\underline{\mu}, \overline{\mu}], [\underline{\sigma}^2, \overline{\sigma}^2]) $-distributions, has been obtained. We also present our systematic research on the continuous-time counterpart of the above `G-normal distribution', called G-Brownian motion, and the corresponding stochastic calculus of Itô's type, as well as its applications.