Simplicity Versus Complexity: Plato and Aristotle Revisited
What is the World Like?
Is the world simple or complicated? As with many things,
it depends on who you ask, when you ask, and how seriously they take you. If
you ask a particle physicist, you will soon be hearing how
wonderfully simple the universe appears to be. But, on returning to
contemplate the real world, you just know “it ain’t necessarily so”: it’s
far from simple. For the psychologist, the economist, or the botanist, the
world is a higgledy-piggledy mess of complex events that just seem to have won
out over other alternatives in the long run. It has no mysterious penchant
for symmetry or simplicity. So who is right? Is the world really simple, as
the particle physicists claim, or is it as complex as almost everyone else
seems to think? Understanding the question, why it draws two different
answers, and what the difference tells us about the world, is a key
part of the story of science over the past 350 years, a story that goes back
to Plato and Aristotle.
The Quest for Simplicity
Our belief in the simplicity of nature springs from the observation
that there are regularities which we call “laws” of nature. The idea of laws of
nature has a long history rooted in monotheistic religious thinking, and in ancient
practices of statute law and social government.
[1] The most significant advance in our understanding of their nature and consequences
followed Isaac Newton’s identification of a law of gravitation in the late
seventeenth century, and his creation of a battery of mathematical tools with
which to determine its consequences.
Laws reflect the existence of patterns in nature. We might
even define science as the search for those patterns. We observe and
document the world in all possible ways; but while this data-gathering is
necessary for science, it is not sufficient. We are not content simply to
acquire a record of everything that has ever happened, like cosmic stamp
collectors. Instead, we look for patterns in the facts, and some of these
patterns we have come to call the laws of nature, while others have achieved
only the status of by-laws. Having found (or guessed—for there are no rules
at all about how you might find them) possible patterns, we use them to
predict what should happen if the pattern is also followed at all times and
in places where we have yet to look. Then we check if we are right (there
are strict rules about how you do this!). In this way, we can update our
candidate pattern and improve the likelihood that it explains what we see. Sometimes a likelihood gets so low that we say the proposal is “falsified,”
or so high that it is “confirmed” or “verified,” although strictly speaking
neither is ever possible with certainty. This is called the “scientific
method.”
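The updating step can be made concrete with a toy Bayesian calculation. This is a minimal sketch of the logic only, with invented numbers; no claim is made that any particular science proceeds by explicit arithmetic like this.

```python
# A minimal Bayesian update: how one observation shifts our confidence in a
# candidate "pattern" (hypothesis H). All numbers are invented for illustration.

prior = 0.5               # initial degree of belief in the candidate pattern H
p_data_given_h = 0.9      # probability of the observed data if H is true
p_data_given_not_h = 0.2  # probability of the same data if H is false

# Bayes' theorem: P(H | data) = P(data | H) P(H) / P(data)
evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
posterior = p_data_given_h * prior / evidence

print(f"belief in the pattern after one observation: {posterior:.3f}")  # 0.818
# Repeated successful predictions push the belief towards 1 ("confirmed");
# repeated failures push it towards 0 ("falsified"); neither limit is ever
# reached exactly, which is the point made above.
```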
For
Newton and his contemporaries the laws
of motion were codifications into simple mathematical form of the habits and
recurrences of nature. They were idealistic: “bodies acted upon by no forces
will …,” because there are no such bodies. They were laws of cause and
effect: they told you what happened if a force was applied. The present
state determined the future uniquely and completely.
Later, these laws of change were found to be equivalent to statements that
some given quantity was unchanging: the requirement that the laws were the
same everywhere in the universe was equivalent to the conservation of
momentum; the requirement that they be found to be the same at all times was
equivalent to the conservation of energy; and the requirement that they be
found the same in every direction in the universe was equivalent to the
conservation of angular momentum. This way of looking at the world in terms
of conserved quantities, or invariances and unchanging patterns, would prove
to be extremely fruitful.
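This correspondence between symmetries and conserved quantities is Noether’s theorem. For a system described by a Lagrangian L(q, q̇, t), a compact textbook statement of it (not spelled out in the text above) is
$$t \to t + \epsilon \;\Rightarrow\; \frac{dE}{dt} = 0, \qquad q \to q + \epsilon \;\Rightarrow\; \frac{dp}{dt} = 0, \qquad \text{rotations} \;\Rightarrow\; \frac{d\mathbf{J}}{dt} = 0,$$
where E, p, and J are the energy, momentum, and angular momentum constructed from L.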
During the twentieth century, physicists
became so enamoured of the seamless correspondence between laws dictating
changes and invariances preserving abstract patterns when particular forces
of nature acted, that their methodology changed. Instead of identifying
habitual patterns of cause and effect, codifying them into mathematical
laws, and then showing them to be equivalent to the preservation of a
particular symmetry in nature, physicists did a U-turn. The presence of
symmetry became such a persuasive and powerful facet of the laws of physics
that physicists now began with the mathematical catalogue of possible
symmetries. They could pick out symmetries with the right scope to describe
the behaviour of a particular force of nature. Then, having identified the
preserved pattern, they could deduce the laws of change that are permitted
and test them by experiment.
Since 1973, when asymptotic freedom was
discovered by
Gross,
Politzer, and
Wilczek, this focus upon symmetry has taken centre stage in
the study of elementary-particle physics and the laws governing the
fundamental interactions of nature. Symmetry is the primary guide into the
legislative structure of the elementary-particle world, and its laws are
derived from the requirement that particular symmetries, often of a highly
abstract character, are preserved when things change. Such theories are
called
gauge theories. All the currently successful theories of
the four known forces of nature—the electromagnetic, weak, strong, and
gravitational forces—are gauge theories. These theories are prescriptive as
well as descriptive: in order to preserve the invariances upon which they
are based, they require the existence of the forces they govern. They are
also able to dictate the character of the elementary particles of matter
that they govern. In these respects, gauge theories differ from the
classical laws of
Newton which, since
they governed the motions of
all bodies, could say nothing
about the properties of those bodies. The reason for this added power of
explanation is that the elementary-particle world, in contrast to the
macroscopic world, is populated by collections of identical particles (“once
you’ve seen one electron you’ve seen ’em all,” as
Richard Feynman once remarked). Particular gauge theories
govern the behaviour of particular subsets of all the elementary particles,
according to their shared attributes. Each theory is based upon the
preservation of a pattern.
This generation of preserved patterns for each of the separate interactions
of nature has motivated the search for a unification of those theories into
more comprehensive editions based upon larger symmetries. Within those
larger patterns, smaller patterns respected by the individual forces of
nature might be accommodated, like jigsaw pieces, in an interlocking fashion
that places some new constraint upon their allowed forms. So far, this
strategy has resulted in a successful, experimentally tested unification of
the electromagnetic and weak interactions; a number of purely theoretical
proposals for a further unification with the strong interaction (“grand
unification theories”); and candidates for a fourfold unification with the
gravitational force
[2] to produce a so-called “Theory of Everything,” or “TOE.” It is this general
pattern of explanation by which forces and their underlying patterns are
unified and reduced in number by unifications, culminating in a single
unified law, that lies at the heart of the physicist’s perception of the
world as “simple.” The success of this promising path of progress is what
led our hypothetical particle physicist to tell us that the
world is simple. The laws of nature are few in number, and getting fewer
(see figure 1).
The first candidate for a TOE was a “superstring” theory, first developed by
Michael Green and
John Schwarz in 1984. After the initial excitement that followed their proof that string theories
are finite and well-defined theories of fundamental physics, hundreds of
young mathematicians and physicists flocked to join this research area at
the world’s leading physics departments. It soon became clear that there
were five varieties of string theory available to consider as a TOE: all
finite and logically self-consistent, but all different. This was a little
disconcerting. You wait nearly a century for a Theory of Everything, then,
suddenly, five come along all at once. They had exotic-sounding names that
described aspects of the mathematical patterns they contained—type I, type
IIA, and type IIB superstring theories, the SO(32) and E8 × E8 heterotic
string theories, and eleven-dimensional supergravity. These theories are all
unusual in that they have ten dimensions of space and time, with the
exception of the last one, which has eleven. Although it is not demanded for
the finiteness of the theory, it is generally assumed that only one of these
ten or eleven dimensions is a “time” and the others are spatial. Of course,
we do not live in a nine- or ten-dimensional space, so in order to reconcile
such a world with what we see it must be assumed that only three of the
dimensions of space in these theories became large, while the others remain
“trapped” with (so far) unobservably small sizes. It is remarkable that in
order to achieve a finite theory we seem to need more dimensions of space
than those that we experience. This might be regarded as a prediction of the
theory. It is a consequence of the amount of “room” that is needed to
accommodate the patterns governing the four known forces of nature inside a
single bigger pattern, without hiving themselves off into sub-patterns that
each “talk” only to themselves rather than to everything else. Nobody knows
why three dimensions (rather than one or four or eight, say) became large,
or what force is responsible. Nor do we know if the number of large
dimensions is something that arises at random (and so could be different—and
may be different—elsewhere in the universe), or is an inevitable consequence
of the laws of physics that could not be otherwise without destroying the
logical self-consistency of the theory. One thing we do know is that only in
spaces with three large dimensions can things bind together to form
structures like atoms, molecules, planets, and stars. No complexity and no
life is possible except in spaces with three large dimensions. So, even if
the number of large dimensions is different in different parts of the
universe, or separate universes are possible with different numbers of large
dimensions, we would have to find ourselves living where there are
three large dimensions no matter how improbable that might
be, because life could exist in no other type of space.
At first, it was hoped that one of these theories would turn out to be
special and attention would then narrow down to reveal it to be the true
Theory of Everything. Unfortunately, things were not so simple. Progress was
slow and unremarkable until
Edward
Witten, at
Princeton,
discovered that these different string theories are not really different. They are linked to one another by mathematical transformations that amount
to exchanging large distances for small ones, and vice versa, in a
particular way. Nor were these string theories fundamental. Instead, they
were each limiting situations of another deeper, as yet unfound, TOE which
lives in eleven dimensions of space and time. That theory became known as “M
Theory.”
[3]
Do these “extra” dimensions of space really exist? This is a key question for
all these new Theories of Everything. In most versions, the other dimensions
are so small (10⁻³³ cm) that no direct experiment will ever see them. But,
in some variants, they can be much bigger. The interesting feature is that
only the force of gravity will “feel” these extra dimensions and be modified
by their presence. In these cases the extra dimensions could be up to one
hundredth of a millimetre in extent, and they would alter the form of the
law of gravity over these and smaller distances. This gives experimental
physicists a wonderful challenge: test the law of gravity on submillimetre
scales. More sobering still is the fact that all the observed constants of
nature, in our three dimensions, are not truly fundamental, and need not be
constant in time or space:
[4] they are just shadows of the true constants that live in
the full complement of dimensions. Sometimes simplicity can be complex
too.
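For reference, the expected modification can be stated compactly. In the standard picture of n compact extra dimensions of common size R (a textbook result for such models, not derived in the text above; see, e.g., Randall 2006), the gravitational force between two masses behaves as
$$F(r) \propto \frac{m_1 m_2}{r^{2+n}} \quad (r \ll R), \qquad F(r) \propto \frac{m_1 m_2}{r^{2}} \quad (r \gg R),$$
so the familiar inverse-square law is recovered only on scales larger than the extra dimensions. This is exactly the deviation that the submillimetre experiments mentioned above are designed to detect.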
Elementary Particles?
The fact that nature displays populations of identical
elementary particles is its most remarkable property. It is the “fine
tuning” that surpasses all others. In the nineteenth century, James Clerk
Maxwell first stressed that the physical world was composed of identical
atoms which were not subject to evolution. Today, we look for some deeper
explanation of the subatomic particles of nature from our TOE. One of the
most perplexing discoveries by experimentalists has been that such
“elementary” particles appear to be extremely numerous. They were supposed
to be an exclusive club, but they have ended up with an embarrassingly large
membership.
String theories offer a new route to solving this problem. Instead of a
TOE containing a population of elementary pointlike particles, string
theories introduce basic entities that are loops (or lines) of energy which
have a tension. As the temperature rises the tension falls and the loops
vibrate in an increasingly stringy fashion, but as the temperature falls the
tension increases and the loops contract to become more and more pointlike. So at low energies the strings behave like points and allow the theory to
make the same successful predictions about what we should see there as the
intrinsically pointlike theories do. However, at high energies, things are
different. The hope is that it will be possible to determine the principal
energies of vibration of the superstrings. All strings, even guitar strings,
have a collection of special vibrational energies that they naturally take
up when disturbed. If we could calculate these special energies for
superstrings, then they would (by virtue of
Einstein’s famous mass-energy equivalence E = mc²)
correspond to the masses of the “particles” that we call elementary. So far,
these energies have proved too hard to calculate. However, one of them has
been found: it corresponds to a particle with zero mass and two quantum
units of spin. This spin ensures that it mediates attractions between all
masses. It is the particle we call the “graviton,” and shows that string
theory necessarily includes gravity and that it is described by the
equations of general relativity at low energies—a remarkable and compelling
feature, since earlier candidates for a TOE all failed miserably to include
gravity in the unification story.
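The conversion from vibrational energy to mass that this argument relies on is elementary to state. A minimal sketch, with a purely illustrative energy value (no real string-mode energy is implied):

```python
# Mass-energy equivalence: a vibrational energy E corresponds to a mass
# m = E / c**2. The energy below is an invented illustration, chosen so that
# the answer lands near a familiar particle mass.
c = 2.998e8    # speed of light, in m/s
E = 1.5e-10    # hypothetical vibrational energy, in joules (about 0.94 GeV)
m = E / c**2   # equivalent mass, in kilograms

print(f"E = {E:.2e} J  ->  m = {m:.2e} kg")  # ~1.67e-27 kg, about a proton mass
```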
Why is the World Mathematical?
This reflection on the symmetries behind the laws of nature
also tells us why mathematics is so useful in practice. Mathematics is
simply the catalogue of all possible patterns. Some of those patterns are
especially attractive and are studied or used for decorative purposes;
others are patterns in time or in chains of logic. Some are described solely
in abstract terms, while others can be drawn on paper or carved in stone. Viewed in this way, it is inevitable that the world is described by
mathematics. We could not exist in a universe in which there was neither
pattern nor order. The description of that order (and all the other sorts
that we can imagine) is what we call mathematics. Yet, although the fact
that mathematics describes the world is not a mystery, the exceptional
utility of mathematics is. It could have been that the patterns behind the
world were of such complexity that no simple algorithms could approximate
them. Such a universe would “be” mathematical, but we would not find
mathematics terribly useful. We could prove “existence” theorems about what
structures exist, but we would be unable to predict the future using
mathematics in the way that NASA’s mission control does.
Seen in this light, we recognise that the great mystery about mathematics and
the world is that such
simple mathematics is so far-reaching. Very simple patterns, described by mathematics that is easily within our
grasp, allow us to explain and understand a huge part of the universe and
the happenings within it.
Outcomes are Different
The simplicity and economy of the laws and symmetries that
govern nature’s fundamental forces is not the end of the story. When we look
around us we do not observe the laws of nature; rather, we see the outcomes
of those laws. The distinction is crucial. Outcomes are much more
complicated than the laws that govern them because they do not have to
respect the symmetries displayed by the laws. By this subtle interplay, it
is possible to have a world which displays an unlimited number of
complicated asymmetrical structures yet is governed by a few, very simple,
symmetrical laws. This is one of the secrets of the universe.
Suppose we balance a ball at the apex of a cone (figure 2). If we release the
ball, then the law of gravitation will determine its subsequent motion. Gravity has no preference for any particular direction in the universe; it
is entirely democratic in that respect. Yet, when we release the ball, it
will always fall in some particular direction, either because it was given a
little push in one direction, or as a result of quantum fluctuations which
do not permit an unstable equilibrium state to persist. So here, in the
outcome of the falling ball, the directional symmetry of the law of gravity
is broken. This teaches us why science is often so difficult. When we
observe the world, we see only the broken symmetries manifested as the
outcomes of the laws of nature; from them, we must work backwards to unmask
the hidden symmetries which characterise the laws behind the
appearances.
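The interplay between a symmetric law and its asymmetric outcomes is easy to mimic numerically. A minimal sketch, in which a uniformly random perturbation stands in for the little push or quantum fluctuation (the dynamics of the real ball are not being solved here):

```python
# Symmetric law, asymmetric outcomes: a ball released at the apex of a cone.
# Nothing in the rule singles out a direction, yet every single release ends
# with the ball falling in one definite direction.
import math
import random

def release_ball():
    # The perturbation is drawn from a distribution that treats every
    # direction identically, mirroring the directional democracy of gravity.
    return random.uniform(0.0, 2.0 * math.pi)  # final fall direction (radians)

directions = [release_ball() for _ in range(100_000)]

# Each individual outcome breaks the symmetry...
print(f"one release falls towards {math.degrees(directions[0]):.1f} degrees")

# ...but the symmetry survives statistically: averaging the unit vectors of
# all fall directions gives a resultant close to zero.
mx = sum(math.cos(t) for t in directions) / len(directions)
my = sum(math.sin(t) for t in directions) / len(directions)
print(f"average fall direction vector: ({mx:+.4f}, {my:+.4f})  ~ (0, 0)")
```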
We can now understand the answers that we obtained from the different
scientists we originally polled about the simplicity of the world. The
particle physicist works closest to the laws of nature themselves, and so is
especially impressed by their unity, simplicity, and symmetry. But the
biologist, the economist, or the meteorologist is occupied with the study of
the complex outcomes of the laws, rather than with the laws themselves. As a
result, it is the complexities of nature, rather than her laws, that impress
them most.
Until the late 1970s, physicists focused far
more upon the study of the laws, rather than the complex outcomes. This is
not surprising. The study of the outcomes is a far more difficult problem,
that requires the existence of powerful interactive computers with good
graphics for its full implementation. It is no coincidence that the study of
complexity and chaos in that world of outcomes has advanced hand in hand
with the growing power and availability of low-cost personal computers since the late 1970s. It has created a new
methodology of experimental mathematics, dedicated to the simulation of
complex phenomena, with an array of diverse applications.
This division of the world into simple symmetrical time-independent laws and
complex changing outcomes is a modern manifestation of the Platonic and
Aristotelian perspectives on the world that originated in ancient Greek
thought (figure 3). It is important to appreciate that our modern
perspective reveals how these views are complementary approaches to
understanding the universe, rather than alternative, rival philosophies.
Disorganised Complexities
Complexity, like crime, comes in organised and disorganised
forms. The disorganised form goes by the name of chaos and has proven to be
ubiquitous in nature. The standard folklore about chaotic systems is that
they are unpredictable. They lead to out-of-control dinosaur parks and
out-of-work meteorologists. However, it is important for us to appreciate
the nature of chaotic systems more fully than the Hollywood headlines
do.
Classical (that is, non-quantum mechanical) chaotic systems are not in any
sense intrinsically random or unpredictable. They merely possess extreme
sensitivity to ignorance. As
James Clerk
Maxwell was the first to recognise in 1873, any initial uncertainty in our knowledge of a chaotic
system’s state is rapidly amplified over time. This feature might make you
think it hopeless even to try to use mathematics to describe a chaotic
situation. We are never going to get the mathematical equations for weather
prediction one hundred per cent correct—there is too much going on—so we
will always end up being inaccurate to some extent in our predictions.
Another important feature of chaotic systems is that, although they become
unpredictable when you try to determine the future from a particular
uncertain starting value, there may be a particular stable statistical
spread of outcomes after a long time, regardless of how you started out. The
most important thing to appreciate about these stable statistical
distributions of events is that they often have very stable and predictable
average behaviours. As a simple example, take a gas of moving molecules
(their average speed of motion determines what we call the gas’s
“temperature”), and think of the individual molecules as little balls. The
motion of any single molecule is chaotic, because each time it bounces off
another molecule any uncertainty in its direction is amplified
exponentially. This is something you can check for yourself by observing the
collisions of marbles or snooker balls. In fact, the amplification in the
angle of recoil, 𝜗, from the nth to the (n+1)st collision of
two identical balls is well described by the rule
$$\vartheta_{n+1} = \left(\frac{d}{r}\right)\vartheta_n$$
where d is the average distance between collisions and r is the radius of
the balls. Even the minimal initial uncertainty in 𝜗₀ allowed by
Heisenberg’s Uncertainty Principle grows to exceed 𝜗 = 360 degrees after
only about 14 collisions.
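That figure is easy to check by iterating the rule above. A minimal sketch, using the molecular value d/r ≈ 200 quoted below and an assumed Heisenberg-scale initial uncertainty; the exact starting value hardly matters, because the growth is exponential:

```python
# Growth of angular uncertainty under theta_{n+1} = (d/r) * theta_n.
ratio = 200.0  # d/r: average distance between collisions over ball radius
theta = 1e-30  # assumed Heisenberg-scale initial uncertainty, in degrees

n = 0
while theta < 360.0:
    theta *= ratio  # each collision multiplies the uncertainty by d/r
    n += 1

print(f"uncertainty exceeds 360 degrees after {n} collisions")  # about 14-15
```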
Gas molecules behave like a huge number of snooker balls bouncing off each
other and the denser walls of their container. One knows from bitter
experience that snooker exhibits sensitive dependence on initial conditions:
a slight miscue of the cue ball produces a big miss! Unlike the snooker
balls, the molecules won’t slow down and stop. Their typical distance
between collisions is about 200 times their radius. With this value of d/r,
the unpredictability grows 200-fold at each close molecular encounter. All
the molecular motions are individually chaotic, just like the snooker balls,
but we still have simple rules, like Boyle’s Law, governing the pressure P,
volume V, and temperature T—the averaged properties
[5] —of a confined gas of molecules:
$$\frac{PV}{T} = \text{constant}$$
The lesson of this simple
example is that chaotic systems can have stable, predictable, long-term,
average behaviours. However, it can be difficult to predict when they will,
because the mathematical conditions sufficient to guarantee such behaviour
are often very difficult to prove. You usually just have to explore numerically to discover
whether the computation of time averages converges towards a steady
behaviour in a nice way or not.
[6]
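The convergence of such time averages is easy to see in a toy system. The fully chaotic logistic map below is a standard stand-in for a chaotic system, not the gas model of the text: nearby starting values separate within a few dozen iterations, yet their long-run time averages agree.

```python
def logistic_average(x0, n=1_000_000):
    """Long-run time average of the fully chaotic map x -> 4x(1-x)."""
    x, total = x0, 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

# Starting values differing by one part in a billion give trajectories that
# soon disagree completely, yet the time averages all converge on 0.5, the
# mean of this map's invariant distribution.
print(logistic_average(0.3))         # ~0.5
print(logistic_average(0.3 + 1e-9))  # ~0.5
print(logistic_average(0.7123))      # ~0.5
```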
Organised Complexities
Among complex outcomes of the laws of nature, the most
interesting are those that display forms of organised complexity. A
selection of these is displayed in figure 4, in terms of their size (gauged
by their information storage capacity—how many binary digits are needed to
specify them?) versus their ability to process information (how quickly can
they change one list of numbers into another list?).
As we proceed up the diagonal, increasing information storage capability
grows hand in hand with the ability to transform that information into new
forms. Organised complexity grows. Structures are typified by the presence
of feedback, self-organisation, and nonequilibrium behaviour. Mathematical
scientists in many fields are searching for new types of “by-law” or
“principle” which govern the existence and evolution of different varieties
of complexity. These rules will be quite different from the “laws” of the
particle physicist: they will not be based upon symmetry and invariance, but
upon principles of probability and information processing. Perhaps the
second law of thermodynamics is as close as we have got to discovering one
of this collection of general rules that govern the development of order and
disorder.
The defining characteristic of the structures in figure 4 is that they are
more than the sum of their parts. They are what they are, they display the
behaviour that they do, not because they are made of atoms or molecules
(which they all are), but because of the way in which their constituents are
organised. It is the circuit diagram of the neural network that is the root
of its complex behaviour. The laws of electromagnetism alone are
insufficient to explain the working of a brain. We need to know how it is
wired up and its circuits interconnected. No Theory of Everything that the
particle physicists supply us with is likely to shed any light upon the
complex workings of the human brain or a turbulent waterfall.
On the Edge of Chaos
The advent of small, inexpensive, powerful computers with
good interactive graphics has enabled large, complex, and disordered
situations to be studied observationally—by looking at a computer monitor. Experimental mathematics is a new tool. A computer can be programmed to
simulate the evolution of complicated systems, and their long-term behaviour
observed, studied, modified, and replayed. By these means, the study of
chaos and complexity has become a multidisciplinary subculture within
science. The study of the traditional, exactly soluble problems of science
has been augmented by a growing appreciation of the vast complexity expected
in situations where many competing influences are at work. Prime candidates
are provided by systems that evolve in their environment by natural
selection, and in so doing modify those environments in complicated
ways.
As our intuition about the nuances of chaotic behaviour has matured by
exposure to natural examples, novelties have emerged that give important
hints about how disorder often develops from regularity. Chaos and order
have been found to coexist in a curious symbiosis. Imagine a very large egg
timer in which sand is falling, grain by grain, to create a growing sand
pile (figure 5). The pile evolves under the force of gravity in an erratic
manner. Sandfalls of all sizes occur, and their effect is to maintain the
overall gradient of the sand pile in a temporary equilibrium, always just on
the verge of collapse. The pile steadily steepens until it reaches a
particular slope, and then gets no steeper. This self-sustaining process was
dubbed “self-organising criticality” by its discoverers,
Per Bak,
Chao Tang
and
Kurt Wiesenfeld, in
1987. The adjective “self-organising” captures
the way in which the chaotically falling grains seem to arrange themselves
into an orderly pile. The title “criticality” reflects the precarious state
of the pile at any time. It is always about to experience an avalanche of
some size or another. The sequence of events that maintains its state of
large-scale order is a slow local build-up of sand somewhere on the slope,
then a sudden avalanche, followed by another slow build-up, a sudden avalanche,
and so on. At first the infalling grains affect a small area of the pile,
but gradually their avalanching effects increase to span the dimensions of
the entire pile, as they must if they are to organise it.
At a microscopic level, the fall of sand is chaotic, yet the result in the
presence of a force like gravity is large-scale organisation. If there is
nothing peculiar about the sand that renders avalanches of one size more
probable than all others,
[7] then the frequency with which avalanches occur is
proportional to a mathematical power of their size, with large avalanches
rarer than small ones (the avalanches are said to be “scale-free”
processes). There are many natural systems (like
earthquakes) and man-made ones (like stock market crashes) where a
concatenation of local processes combines to maintain a semblance of
equilibrium in this way. Order develops on a large scale through the
combination of many independent, chaotic, small-scale events that hover on
the brink of instability. Complex adaptive systems thrive in the hinterland
between the inflexibilities of determinism and the vagaries of chaos. There,
they get the best of both worlds: out of chaos springs a wealth of
alternatives for natural selection to sift through, while the rudder of
determinism sets a clear average course towards islands of stability.
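The sandpile picture is simple enough to simulate directly. Below is a minimal sketch of the Bak–Tang–Wiesenfeld model; the grid size and grain counts are invented choices. Grains are dropped at random; any site that accumulates four grains topples, passing one grain to each neighbour and possibly triggering further topplings, and the total number of topplings is the size of the avalanche.

```python
# A minimal Bak-Tang-Wiesenfeld sandpile on a square grid with open edges.
import random
from collections import Counter

SIZE = 30  # grid side length (illustrative choice)
grid = [[0] * SIZE for _ in range(SIZE)]

def add_grain():
    """Drop one grain at a random site; return the resulting avalanche size."""
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    unstable = [(i, j)]
    topplings = 0
    while unstable:
        x, y = unstable.pop()
        while grid[x][y] >= 4:       # a site holding four grains topples
            grid[x][y] -= 4
            topplings += 1
            for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                if 0 <= nx < SIZE and 0 <= ny < SIZE:  # edge grains fall off
                    grid[nx][ny] += 1
                    unstable.append((nx, ny))
    return topplings

for _ in range(20_000):  # let the pile build up towards its critical state
    add_grain()

sizes = Counter(add_grain() for _ in range(20_000))
for s in (1, 2, 4, 8, 16, 32, 64):  # small avalanches common, large ones rare
    print(s, sizes.get(s, 0))
```

Tallying the avalanche sizes shows the advertised behaviour: avalanches of all sizes occur, with small ones common and large ones progressively rarer, and no single preferred size.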
Originally, the discoverers of “self-organising criticality” hoped
which the sandpile organised itself might be a paradigm for the development
of all types of organised complexity. This was too optimistic. But it does
provide clues as to how many types of complex system organise themselves. The avalanches of sand can represent extinctions of species in an ecological
balance, jams on a motorway traffic flow, the bankruptcies of businesses in
an economic system, earthquakes or volcanic eruptions in a model of the
pressure equilibrium of the Earth’s crust, and even the formation of oxbow
lakes by a meandering river. Bends in the river make the flow faster there,
which erodes the bank, leading to an oxbow lake forming. After the lake
forms, the river is left a little straighter. This process of gradual
buildup of curvature followed by sudden oxbow formation and straightening is
how a river on a flat plain “organises” its meandering shape.
It seems rather remarkable that all these completely different problems
should behave like a tumbling pile of sand. A picture of Ricard Solé’s,
showing a dog being taken for a bumpy walk, reveals the connection (figure
6). If we have a situation where a force is acting (for the sand pile it is
gravity, for the dog it is the elasticity of its leash), and there are many
possible equilibrium states (valleys for the dog, stable local hills for the
sand)—then we can see what happens as the leash is pulled. The dog moves
slowly uphill and then is pulled swiftly across the peak to the next valley,
begins slowly climbing again, and then jumps across. This staccato movement
of slow buildup and sudden jump, time and again, is what characterises the
sandpile with its gradual buildup of sand followed by an avalanche. We can
see from the picture that it will be the general pattern of behaviour in any
system with these two simple ingredients: a driving force and many locally
stable states.
At first, it was suggested that this route to self-organisation might be
followed by all complex self-adaptive systems (Bak 1996). As we have seen,
that was far too optimistic: it is just one of many types of self-organisation. Yet the nice
feature of these insights is that they show that it is still possible to
make important discoveries by observing the everyday things of life and
asking the right questions. You don’t always have to have satellites,
accelerators, and overwhelming computer power. Sometimes complexity can be
simple too.
Bibliography
Bak, P. 1996. How Nature Works. New York.
Barrow, J. D. 1988. The World Within the World. Oxford.
———. 2001. New Theories of Everything: The Quest for Ultimate Explanation. Oxford.
———. 2002. The Constants of Nature. London.
Greene, B. 1999. The Elegant Universe. London.
Randall, L. 2006. Warped Passages: Unravelling the Universe’s Hidden Dimensions. New York.
Footnotes
Note 4
For a discussion of the status of the constants of nature and evidence
for their possible time variation, see Barrow 2002.
Note 5
The velocities of the molecules will also tend to attain a particular
probability distribution of values, called the Maxwell-Boltzmann
distribution, after many collisions, regardless of their initial
values.
Note 6
This is clearly very important for computing the behaviour of chaotic
systems. Many systems possess a shadowing property that ensures that
computer calculations of long-term averages can be very accurate, even
in the presence of rounding errors and other small inaccuracies
introduced by the computer’s ability to store only a finite number of
decimal places. These “round-off” errors move the solution being
calculated onto another nearby solution trajectory. Many chaotic systems
have the property that these nearby behaviours end up visiting all the
same places as the original solution, and it doesn’t make any difference
in the long run that you have been shifted from one to the other. For
example, when considering molecules moving inside a container, you would
set about calculating the pressure exerted on the walls by considering a
molecule travelling from one side to the other and rebounding off a
wall. In practice, a particular molecule might never make it across the
container to hit the wall because it runs into other molecules. However,
it gets replaced by another molecule that is behaving in the same way as
the first one would have done had it continued on its way
unperturbed.
Note 7
Closer examination of the details of the fall of sand has revealed that
avalanches of asymmetrically shaped grains, like rice, produce the
critical scale-independent behaviour even more accurately because the
rice grains always tumble rather than slide.