Law of Large Numbers

Visualize how sample means converge to expected values as sample size increases

Overview

The Law of Large Numbers is a fundamental theorem of probability stating that as the number of independent repetitions of a random experiment grows, the average of the results converges to the expected value. This interactive visualization demonstrates how sample means stabilize with larger sample sizes across different probability distributions.
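This convergence is easy to reproduce outside the visualization. The sketch below (not part of the tool itself, just a minimal NumPy illustration) tracks the running mean of simulated fair-die rolls, whose expected value is 3.5:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulate 10,000 rolls of a fair six-sided die (expected value 3.5)
rolls = rng.integers(1, 7, size=10_000)

# Running sample mean after each additional roll
running_mean = np.cumsum(rolls) / np.arange(1, rolls.size + 1)

print(f"mean after     10 rolls: {running_mean[9]:.3f}")
print(f"mean after  1,000 rolls: {running_mean[999]:.3f}")
print(f"mean after 10,000 rolls: {running_mean[-1]:.3f}")
```

Early entries of `running_mean` can sit well away from 3.5, but the later entries hug it closely, which is exactly the behavior the visualization animates.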

Tips

  1. Start with small samples: Begin with 10-100 trials to see high variability, then gradually increase to 1,000 and 10,000 to watch convergence happen.

  2. Compare multiple runs: Execute several independent simulations with the same parameters to see that each path is unique, but all converge to the same expected value.

  3. Try different distributions: Test with dice rolls, coin flips, and continuous distributions to verify the law works universally across different random processes.

  4. Watch the convergence envelope: Notice how the range within which the sample mean fluctuates (the standard error, σ/√n) shrinks proportionally to 1/√n, not 1/n.

  5. Don’t confuse with Gambler’s Fallacy: Remember that individual outcomes remain random; the law doesn’t mean past results affect future ones, only that long-run averages stabilize.
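Tips 2 and 4 can be checked numerically. The hedged sketch below (an illustration, not the tool's implementation) runs many independent coin-flip paths and compares the empirical spread of their sample means against the theoretical σ/√n, with σ = 0.5 for a fair coin:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_paths, n_trials = 200, 10_000
# 200 independent sequences of fair coin flips (expected value 0.5)
flips = rng.integers(0, 2, size=(n_paths, n_trials))
running_means = np.cumsum(flips, axis=1) / np.arange(1, n_trials + 1)

# Empirical spread (std dev across paths) at a few sample sizes,
# compared with the theoretical sigma / sqrt(n), sigma = 0.5
for n in (100, 1_000, 10_000):
    empirical = running_means[:, n - 1].std()
    theoretical = 0.5 / np.sqrt(n)
    print(f"n={n:>6}: empirical {empirical:.4f}  vs  sigma/sqrt(n) {theoretical:.4f}")
```

Every path wanders differently, yet the spread across paths tracks 0.5/√n: increasing n by a factor of 100 shrinks the envelope by a factor of about 10, matching tip 4.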