Exploring Fluctuations

Section 6-2 of The Feynman Lectures on Physics, entitled "Fluctuations", explores a deceptively simple question: how many heads do we expect to get if we toss a coin N times? As always, Richard Feynman takes this common-sense query and uncovers a web of rich mathematical imagery, probabilistic nuance, and conceptual beauty that goes far beyond mere guesswork.

Let’s unpack his thinking – and dive even deeper into the imagery and mathematics he uses to explain the statistical nature of coin tossing.

The Empirical Setup – 100 Games of Chance

Feynman begins not with a formula, but with an experiment. Suppose we toss a fair coin 30 times and count how many times it lands heads-up. This is repeated 100 times, and the results are recorded. The first three trials yield 11, 11, and 16 heads. Already, a hint of irregularity appears—why didn’t we see 15 heads, which seems the “expected” outcome?

Feynman’s brilliance is to address the psychological trap of expectation. Randomness doesn’t yield perfection in every trial, and our intuition often rebels at the idea of fluctuation. By presenting real data from 100 repeated 30-toss experiments, he helps the reader understand that fluctuation is not an anomaly—it is the rule.

The results cluster around the central value of 15, as seen in Table 6-1 and the associated histogram. Counts between 12 and 18 occur most frequently, with 15 heads showing up in 13 games. Interestingly, 16 heads occurs even more often, in 16 games, which might tempt someone to suspect a bias. But when all 3,000 tosses are pooled, the overall fraction of heads comes out just under one half. This tiny deviation from 0.5 is a classic example of statistical fluctuation, not bias.
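To make the setup concrete, here is a minimal Python sketch that re-creates the experiment. It does not reproduce Feynman's original data, only the procedure: 100 games of 30 fair tosses, tallied into a histogram (the function name run_experiment is just an illustrative choice).

```python
import random
from collections import Counter

def run_experiment(n_games=100, n_tosses=30, seed=None):
    """Play n_games games of n_tosses fair coin tosses; return the heads count of each game."""
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(n_tosses)) for _ in range(n_games)]

scores = run_experiment(seed=1)
histogram = Counter(scores)

print("First three games:", scores[:3])
print("Fraction of heads over all 3000 tosses:", sum(scores) / 3000)
for k in sorted(histogram):
    print(f"{k:2d} heads: {'*' * histogram[k]}")
```

Running this a few times with different seeds makes Feynman's point by itself: the histogram always bunches around 15, but the peak wanders and the pooled fraction of heads is never exactly one half.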

From Tosses to Triangles – The Rise of Pascal

To understand why fluctuations like these are expected, Feynman turns to probability theory. He starts with the simplest cases: one, two, and three tosses.

Here’s where he introduces the idea of sequence enumeration—counting how many different paths lead to a certain outcome. For example, in three tosses, there are eight possible outcomes. Only one of these gives all heads. Three give two heads, three give one head, and one gives no heads at all.

This sequence—1, 3, 3, 1—is the third row of Pascal’s Triangle, which emerges as a key tool in understanding fluctuations.
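This enumeration is small enough to check by brute force. The short sketch below (illustrative, not from the lecture) lists all eight sequences of three tosses and tallies how many give each number of heads.

```python
from itertools import product
from collections import Counter

# Enumerate every possible sequence of three tosses and count the heads in each.
sequences = list(product("HT", repeat=3))        # 8 equally likely outcomes
ways = Counter(seq.count("H") for seq in sequences)

print(len(sequences), "possible sequences")      # 8
print([ways[k] for k in range(4)])               # [1, 3, 3, 1]: 0, 1, 2, 3 heads
```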

Pascal’s Triangle and Binomial Coefficients

Each number in Pascal’s Triangle tells us how many different ways we can get a specific number of heads from a given number of tosses. The same numbers appear as the coefficients in the algebraic expansion of (a + b)^n, which is why they are called binomial coefficients.

Feynman uses this triangle to build a grid of possibilities—every point on the grid represents a specific number of heads reached through a particular sequence of coin tosses. The triangle grows row by row, and each entry is found by adding together the two entries directly above it. This recursive nature echoes the build-up of possible paths in a random process.
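That recursive rule translates directly into a few lines of Python. The sketch below builds the first few rows by adding neighbouring entries; the helper name pascal_rows is just an illustrative choice.

```python
def pascal_rows(n_rows):
    """Build Pascal's Triangle row by row: each entry is the sum of the two entries above it."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        # Pad the previous row with zeros on both ends, then add neighbouring pairs.
        rows.append([a + b for a, b in zip([0] + prev, prev + [0])])
    return rows

for row in pascal_rows(5):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```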

Probability Distributions – From Coin Tosses to General Games

Once we understand how many ways a result can occur, we can then think about how likely it is. Each path to a result is considered equally likely in the case of a fair coin, and so the more paths there are to a particular outcome, the more likely it is to happen.

When we multiply this probability by the total number of games, we get an estimate of how often we should expect a certain number of heads to occur. These theoretical values can be plotted to form a curve that closely matches the actual experimental data. Feynman draws this prediction as a dashed curve over the experimental histogram, and the observed counts typically differ from it by only a game or two.
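Here is a sketch of that calculation under the fair-coin assumption: the probability of k heads in 30 tosses is the number of paths, C(30, k), divided by the 2^30 equally likely sequences, and multiplying by 100 games gives the expected frequency (the name expected_count is illustrative, not Feynman's notation).

```python
from math import comb

def expected_count(k, n_tosses=30, n_games=100):
    """Expected number of games (out of n_games) yielding exactly k heads with a fair coin."""
    probability = comb(n_tosses, k) / 2 ** n_tosses   # favourable paths / all paths
    return n_games * probability

for k in range(10, 21):
    print(f"{k:2d} heads: expected in about {expected_count(k):4.1f} of 100 games")
```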

The key insight: fluctuations are normal. We might expect 15 heads most often, but 14 or 16 might occur more frequently in a specific sample. That doesn’t mean the coin is biased—it simply means that randomness has structure, not certainty.

Bernoulli’s Generalisation – When the Game Isn’t Fair

Feynman then broadens the view to include cases where the coin isn’t fair, so that one outcome is more likely than the other. He introduces a more general probability function, often called the Bernoulli (or binomial) probability, which gives the chance of exactly k successes in n independent trials when each trial succeeds with probability p.
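The general formula, C(n, k) p^k (1 - p)^(n - k), is easy to compute directly. As a rough illustration, the sketch below assumes a hypothetical coin weighted to land heads 60% of the time and asks how likely 20 heads in 30 tosses would be.

```python
from math import comb

def bernoulli_probability(k, n, p):
    """Probability of exactly k successes in n independent trials, each with success probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# A coin weighted to land heads 60% of the time: chance of exactly 20 heads in 30 tosses.
print(bernoulli_probability(20, 30, 0.6))   # about 0.115
```

Setting p = 0.5 recovers the fair-coin distribution used above, which is why Feynman treats the fair game as just a special case of this more general function.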

This model is vital beyond physics—it underpins everything from sports statistics to machine learning, where outcomes aren’t necessarily even, and we’re often interested in the likelihood of a certain number of successes in a given number of trials.

Final Thoughts – Embracing Fluctuations

What Feynman shows us in this chapter is more than a lesson in probability. He teaches us a deeper philosophical point: variation is natural. Fluctuations are not mistakes; they are fundamental features of systems driven by chance.

This insight, clothed in vivid imagery—grids of paths, triangles of numbers, and bell-shaped curves—reminds us that even in the realm of randomness, structure emerges. It’s a poetic mix of chaos and predictability, of disorder and law, that lies at the heart of statistical physics.

In today’s world where data and uncertainty permeate everything from quantum mechanics to polling to AI predictions, understanding why fluctuations happen, and how to quantify them, is more valuable than ever.

Feynman, as always, helps us see the beauty beneath the surface of the coin.
