Monte Carlo Simulation
One of the oldest methods for estimating the value of π is to generate random points in a square and count how many fall inside the largest circle inscribed in it. An estimate of π is then four times the proportion of points falling inside the circle. Why does this method work? Two theorems underlie this estimate.
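The procedure can be sketched in a few lines of Python. By symmetry, sampling the unit square against the quarter circle of radius 1 is equivalent to sampling the full square against the inscribed circle; the function name `estimate_pi` and the seed are illustrative choices, not part of the text.

```python
import random

def estimate_pi(num_points, seed=None):
    """Estimate pi by sampling points uniformly in the unit square.

    A point (x, y) lies inside the quarter circle of radius 1 when
    x^2 + y^2 <= 1; the proportion of such points approximates pi/4.
    """
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(num_points)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_points

print(estimate_pi(100_000, seed=42))  # should be close to pi
```

With 100,000 points the standard error of the estimate is roughly 0.005, so the printed value typically agrees with π to about two decimal places.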
Monte Carlo Analysis
Let z be a random variable, f a pdf, and g(z) a function. Let D(f, g) be the distribution of g(z) when z has distribution f.
If z_1, ..., z_J is a random sample from f, then g(z_j), j = 1, ..., J, is a random sample from D(f, g). So the empirical distribution D_J(f, g) is the sample analog of D(f, g).
Law of Large Numbers
In random sampling from any population with E(X) = μ and V(X) = σ², the sample mean converges in probability to the population mean.
Draw a random point in the square and label as success the event that the point falls inside the circle. This defines a random variable z with Bernoulli distribution and parameter p = π/4. Let g(z_1, ..., z_J) = (Σ_j z_j)/J, the sample mean of a sample of size J, and write X̄ = g(z_1, ..., z_J). Then JX̄ has Binomial distribution Bin(J, π/4), so E(X̄) = π/4. By the Law of Large Numbers, X̄ converges in probability to the population mean: for all ε > 0, P(|X̄ − π/4| > ε) → 0 as J → ∞. Hence 4X̄, the value estimated by the simulation, asymptotically approaches π.
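The Bernoulli structure above can be checked directly: each point is an independent success/failure trial with success probability π/4, and the sample mean of the indicators approaches that value as J grows. The function `circle_indicator` is an illustrative name introduced here:

```python
import math
import random

def circle_indicator(rng):
    # One Bernoulli trial: 1 if a uniform point in the unit square
    # lands inside the quarter circle (success probability pi/4).
    return 1 if rng.random() ** 2 + rng.random() ** 2 <= 1.0 else 0

rng = random.Random(1)
for J in (100, 10_000, 200_000):
    xbar = sum(circle_indicator(rng) for _ in range(J)) / J
    # The gap |xbar - pi/4| should tend to shrink as J grows.
    print(J, xbar, abs(xbar - math.pi / 4))
```

Since J·X̄ is Bin(J, π/4), the standard deviation of X̄ is √(p(1−p)/J) with p = π/4, about 0.0009 at J = 200,000.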
Note that in the simulation above the successive running averages are autocorrelated, whereas the proof above treats X̄ as computed from a single random sample of independent draws. The result of a finite simulation may therefore differ from the expected value.