One of the most important ideas in physics is that the entropy of a closed system cannot decrease. For the uninitiated, this concept is known as the Second Law of Thermodynamics. And one of the most famous thought experiments in physics, ‘Maxwell’s Demon’, is an idea devised to find a loophole in the second law.
Entropy is a measure of how uncertain the state of a system is. “Entropy doesn’t decrease in a closed system” is another way of saying that you will tend to become more uncertain about the state of a system over time, unless you interfere with it or measure it. Imagine, for example, a warm object in a cool environment. Heat will spread into the environment until the temperatures are equal. This is an increase in the total entropy because more molecular configurations are possible with the energy spread out than when the energy is concentrated in the object – you are therefore more uncertain about the microscopic state of the object and its environment.
But you can of course heat up an object to be warmer than its environment, so why doesn’t this contravene the second law? The answer is that you must consume resources to heat something, for example by burning gas. In the process, the gas and its environment go from a low entropy to a high entropy state, paying the price for pushing the object and its environment out of equilibrium.
Eventually, if left for long enough, closed systems will reach equilibrium; at this point the entropy is maximal, and we are as uncertain as possible about the system’s state. Out-of-equilibrium systems, which have lower entropy, are potentially useful resources. Food, batteries and reservoirs of water behind a dam are all out of equilibrium. The low entropy of these resources can be used to do what we call “useful work”, which is equivalent to reducing the entropy of other systems. In the process, the original non-equilibrium resources are consumed, just as the gas was in the example of heating an object.
Maxwell’s Demon and Szilard’s Engine
James Clerk Maxwell’s thought experiment was an exploration of the possibility that by measuring random fluctuations in equilibrium, and exploiting them through ‘feedback’, it might be possible to violate the second law.
Maxwell imagined a creature that exists inside a box containing particles, and a dividing wall between two halves of the box. The ‘demon’ can measure the speed of particles, and open and close a door in the wall. By choosing which particles to allow through the wall, the demon can collect the fast, ‘hot’ particles on one side and the slow, ‘cold’ particles on the other, creating a temperature difference where there was none before – the demon has pushed an at-equilibrium system out of equilibrium.
Fundamentally, the demon has two jobs: measurement and feedback. Information gathered through measurement is used in feedback to rectify fluctuations, converting an at-equilibrium system into a non-equilibrium one. If, theoretically, a ‘demon’ could perform these tasks without consuming a resource, the second law would be violated.
Maxwell’s thought experiment is hard to resolve, in part because it is so intangible. The Hungarian physicist Leo Szilard refined the idea, trying to make it more concrete. He imagined a simpler system that could only be in two states, a single particle that could be randomly trapped either on the left or the right side of a box with a partition. If the position of the particle is measured, a weight could be attached to the same side of the divider as the particle. The random collisions of the particle with the partition could then slowly lift the weight. This process would reduce the combined entropy of the weight and its environment, with apparently no cost. Just like Maxwell’s demon, Szilard’s engine appears to use measurement and feedback to reduce the entropy of the Universe and violate the second law.
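The stakes of Szilard’s engine can be put in numbers using a standard textbook result (not stated in this article, but implied by it): letting the one-particle ‘gas’ expand isothermally against the weight, from half the box to the full box, extracts at most k_B·T·ln 2 of work per cycle. A minimal sketch of that calculation:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def szilard_work(T, n_compartments=2):
    """Maximum work extractable per cycle of an ideal Szilard engine
    at temperature T, when the particle is found in one of
    n_compartments equally sized regions of the box.

    Isothermal expansion of a one-particle gas from V/n to V gives
    W = integral of (k_B * T / V') dV' = k_B * T * ln(n).
    """
    return k_B * T * math.log(n_compartments)

W = szilard_work(300.0)  # room temperature, two-compartment box
print(f"{W:.3e} J")      # about 2.87e-21 J, i.e. k_B T ln 2
```

The tiny magnitude of this number is why the paradox is a matter of principle rather than practical power generation – but the principle is exactly what the second law cannot afford to concede.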
Szilard argued that the second law is not violated, because creating the measured configuration (with the particle and weight on the same side of the divider) requires the consumption of another resource. In the context of the Maxwell’s demon problem, Szilard would have argued that the demon needs to consume a resource to ‘correlate’ the state of the door with the speed of the particles.
Szilard was correct, but he lacked the modern ideas of information theory and thermodynamics that make the analysis of such problems tractable. Nowadays, we would describe the configuration after measurement as a non-equilibrium resource of low entropy due to the information, or correlations, between weight and particle. Moreover, Szilard was unable to conceive a concrete system in which the processes of both measurement and feedback could be properly analysed without the intervention of a fuzzily-defined intelligence, which perpetuated confusion.
Biochemical Szilard Engine
Recent developments in thermodynamics and its application to molecular systems have re-ignited the discussion around Maxwell’s demons and Szilard engines. Dr Thomas Ouldridge at Imperial College London is leading a collaboration to build a better understanding of Szilard engines. His collaborators include Rory Brittain at the University of Luxembourg, Nick Jones at Imperial College London and Pieter Rein ten Wolde at the FOM Institute, Amsterdam.
Many research groups have considered measurement and feedback in small systems, but the more theoretical attempts often ignore the costs of some steps in the process, allowing unexpected errors to creep into calculations. Dr Ouldridge and his collaborators have proposed a Szilard engine that uses biomolecules to explicitly represent the tasks of a demon – creating a long-lived measurement and using that measurement to perform feedback – providing a concrete system that can be analysed in full.
Components of the engine
There are two main parts to the biochemical Szilard engine:
1. A reaction volume containing two molecules, data X and memory M, which can both be in two states (X0 and X1, and M0 and M1).
2. A series of chemical buffers containing fuel molecules at a range of concentrations. These buffers act as a low-entropy resource that can be both spent to drive the reaction volume into certain states, or can be ‘charged-up’ by the reaction volume, just like a battery or the weight that is lifted in the original Szilard engine.
Step 1: Measurement
The memory molecules can interconvert between M0 and M1 by reacting with fuel molecules. However, the reactions are extremely slow unless a catalyst is present; for converting between the states of M, the catalysts are the data molecules, X. Crucially, X0 and X1 catalyse different reactions, coupling the memory molecule M to different fuels. The concentration imbalances of the fuels determine which way the reactions tend to go: the presence of X0 pushes M towards M0, and X1 pushes M towards M1.
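This state-dependent catalysis can be sketched as a toy two-state Markov chain for M, with transition rates that depend on which data molecule is present (the rate values and the `bias` parameter below are illustrative assumptions, not numbers from the study):

```python
import random

def relax_memory(x_state, steps=10000, bias=20.0, seed=1):
    """Toy model of the memory molecule M as a two-state Markov chain.

    The data molecule X acts as a catalyst coupling M to fuel
    imbalances: X0 biases the M1->M0 reaction, X1 biases M0->M1.
    Returns the fraction of time spent in (M0, M1).
    """
    rng = random.Random(seed)
    if x_state == 0:
        k01, k10 = 1.0, bias   # rate(M0->M1), rate(M1->M0) with X0 present
    else:
        k01, k10 = bias, 1.0   # the opposite bias with X1 present
    m = rng.choice([0, 1])
    counts = [0, 0]
    for _ in range(steps):
        # attempt a transition with probability proportional to the rates
        if m == 0 and rng.random() < k01 / (k01 + k10):
            m = 1
        elif m == 1 and rng.random() < k10 / (k01 + k10):
            m = 0
        counts[m] += 1
    return counts[0] / steps, counts[1] / steps

p0, p1 = relax_memory(x_state=0)
# With X0 present, M spends the great majority of its time in M0.
```

The larger the fuel imbalance (`bias`), the more reliably M ends up matching X – which is exactly what a measurement is.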
Initially, the system is in one of four possible configurations (both X and M can be in either state), and no fuels are present. The measurement step proceeds by moving one set of fuel buffers past the reaction volume, slowly increasing the concentration imbalances associated with the two opposing fuels to which X and M are exposed. As this happens, the state of the memory M is slowly correlated with the state of the data X; eventually the system has only two possible states rather than four: either X0 and M0 or X1 and M1. This is the crux of the measurement step: the entropy has decreased, and the data-memory system is a low-entropy resource.
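The entropy drop described above can be made explicit with a short Shannon-entropy calculation (a toy illustration of the counting argument, not an analysis from the paper itself):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before measurement: X and M are independent and uniform, so all four
# joint states (X0M0, X0M1, X1M0, X1M1) are equally likely.
before = [0.25, 0.25, 0.25, 0.25]

# After an ideal measurement: only the correlated states X0M0 and X1M1
# remain, each with probability 1/2.
after = [0.5, 0.5]

print(shannon_entropy(before))  # 2.0 bits
print(shannon_entropy(after))   # 1.0 bit
```

The measurement removes one bit of uncertainty from the data-memory system; the second law demands that this be paid for by a compensating entropy increase elsewhere, which is exactly what the thermodynamic analysis of the buffers confirms.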
Thermodynamic analysis of this system shows, just as Szilard predicted in the original engine, that the measurement step uses resources. In this case, the decrease in entropy of X and M is entirely compensated for by an increase in entropy of the buffers, which started as a low-entropy resource.
Step 2: Feedback
Feedback occurs by decoupling M from the fuels that powered measurement, followed by the introduction of fuels from a different set of buffers, allowing X0 and X1 to interconvert. Extracting resources stored in the measured state requires X0 and X1 to be treated differently, just as it is necessary to attach the weight to the correct side of the partition in the particle-based Szilard engine.
This process occurs through M catalysing a reaction of X. M0 and M1 catalyse different interconversions of X0 and X1, involving different fuels. Initially, the fuel imbalances seen by X are large, pushing it towards X0 if M0 is present, or towards X1 if M1 is present, maintaining the correlation. Slowly, these imbalances are reduced and X is decorrelated from M. Eventually, all four combinations of memory and data are possible, and the entropy of X and M increases. However, this increase in entropy is offset by an equivalent decrease in the entropy of the buffers; the decorrelation of X and M is used to ‘charge’ the second set of fuel buffers, transferring fuel molecules to buffers at high concentration.
Crucially, the different treatment for X0 and X1 doesn’t rely on an intelligent experimenter or demon to perform feedback. The whole measurement and feedback process is inherent to the molecular interactions in the system. The engine works to transfer entropy from one set of buffers used in measurement to the other set of buffers used in feedback.
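The resource that feedback cashes in is the correlation itself, which information theory quantifies as the mutual information between X and M. A toy calculation (illustrative probabilities, not taken from the study) shows the one bit stored by measurement being spent during feedback:

```python
import math

def mutual_information_bits(joint):
    """Mutual information I(X;M), in bits, from a 2x2 joint distribution
    joint[i][j] = P(X = i, M = j)."""
    px = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    pm = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    info = 0.0
    for i in range(2):
        for j in range(2):
            p = joint[i][j]
            if p > 0:
                info += p * math.log2(p / (px[i] * pm[j]))
    return info

# After measurement: X and M are perfectly correlated.
measured = [[0.5, 0.0], [0.0, 0.5]]
# After feedback: all four combinations are possible, X and M independent.
decorrelated = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information_bits(measured))      # 1.0 bit stored
print(mutual_information_bits(decorrelated))  # 0.0 bits remaining
```

The engine as a whole simply moves this bit of entropy from one set of buffers to the other, with no net violation of the second law.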
What do we learn from the biochemical Szilard engine?
Considering a catalyst-based engine, in which reactions are driven by varying fuel concentrations in buffers, has big advantages over Szilard’s original engine. In the particle-and-weight conception, the experimenter or demon has to decide where to attach the weight based on the measurement, a step which can’t be explicitly analysed. In the biochemical model, data first influences the memory, and then is influenced by the memory via a concrete (catalytic) mechanism that removes the need for intelligent intervention by a demon. The reaction volume is exposed to the same series of biochemical buffers regardless of the outcome of the measurement; the molecules themselves implement measurement and feedback. As a result, Dr Ouldridge and his team can confirm Szilard’s claim with their model: correlating the memory and data requires the consumption of a resource, and decorrelating them allows this effort to be recovered.
Work in this field is often heavily theoretical, but this example, a Szilard engine rooted in an idealised but specific biochemical setting, allows every decision-making step to be represented fully, leaving minimal scope for error in thermodynamic analysis. This analysis illuminates key concepts in the information-processing that goes on in real cells, and brings us one step closer to creating systems – in computing for example – which can carefully manage entropy to increase efficiency.
Are Szilard engines present in nature, and if so where?