How do we know what 21 degrees C means?
An extract from James Vincent’s Beyond Measure: The Hidden History of Measurement.
This is something of a departure for the Newsletter of (Not Quite) Everything: our first guest post, written by somebody who isn’t me. It’s an extract from James Vincent’s Beyond Measure: The Hidden History of Measurement, an excellent book which overlaps with some of the topics I write about – and which, coincidentally, is out in paperback on Thursday 1 June.
This particular section is an edited version of the chapter on how we measure temperature, and asks: how do you construct a reliable thermometer, without already having a reliable thermometer to check it’s reliable?
You can buy Beyond Measure here, and also hear James speak about it at the Royal Institution next Friday.
Please do let me know whether you think the guest post thing worked. (Would you like less of me? Were you horrified to open an email and find out I hadn’t written most of it? Could go either way, probably.) And more to the point, please do read the book, which is terrific.
Anyway, the extract. It begins with the story of a fountain that drips in the sun…
The measurement of temperature might seem inconsequential compared to measuring phenomena like weight or length. Heat and cold are not integral to things like trade or construction, nor do they dominate our conceptual understanding of the world, like the measure of time. But this attitude is a habit of modernity: the result of our domestication of temperature through technologies like air conditioning and central heating. In the ancient world, hot and cold were felt more keenly and understood to be animating principles of the natural world. They were understood intuitively, though not quantitatively.
For the ancient Greek philosopher Heraclitus, for example, fire was not merely a material phenomenon but the fundamental element of the universe. It was the source of all life and a constant roiling change that burned through the world, transforming matter in its wake. The order of things “no god nor man did create”, taught Heraclitus, only “ever-living fire”, which shaped life in the womb and burned dead wood to make space for new growth.
Centuries after Heraclitus, Plato and Aristotle retained the importance of heat in their philosophies, though complicated the picture by making it one of four contrary qualities: hot and cold, and wet and dry. Later thinkers like Hippocrates in the fifth century BC and Galen in the second century AD developed this premise further by incorporating heat into the doctrine of the humours — a long-lived system of medicine and morality that dominated Western understanding of the body for more than a millennium.
Eventually, the fundamental position of temperature in the world order led to some scrutiny. Galen was one of the first thinkers to suggest there might be “degrees of heat and cold” and attempt to distinguish between them. (Though he thought that four degrees was all that was needed to properly describe the world.) These degrees weren’t measured quantities per se, but shorthand labels for vaguely defined categories. Four degrees of heat was a lot, three degrees less, and so on.
Around this period, people also began creating instruments that responded to temperature changes. These were more like basic science experiments than scientific instruments, though, usually consisting of a container of water with a hollow tube poking into its interior like a straw. As the temperature changed, air in the vessel contracted or expanded, creating a partial vacuum that pulled water up the tube. The first-century mathematician and engineer Hero of Alexandria was one of the first to describe such a device, which he called “a fountain that drips in the sun”.
Measurement overtakes the sense
Many centuries later, in the 1500s, natural philosophers began to add numbers to these instruments, with the Venetian physician Santorio Santorio and Galileo Galilei among the first. Santorio and Galileo built devices similar to Hero’s fountain but deployed them in a more rigorous fashion, using them to compare the heat of specific objects and environments. These were what we would now call thermoscopes rather than thermometers, as they lacked proper scales.
(A thermoscope’s scale is ordinal, meaning its markings are ranked from low to high, but it is not an interval scale, where each degree represents a uniform step. Think of the difference like this: a customer survey that asks you to rate your happiness from one to five is an ordinal scale, because you are ranking your response. But it’s not an interval scale, because the exact levels of happiness are subjective and vaguely defined; they cannot be meaningfully added or subtracted. There’s no way to know that your five isn’t your neighbour’s three.)
An early thermoscope experiment.
Nevertheless, Santorio happily directed his thermoscope at the human body, publishing a text in 1626 describing an air thermometer marked with degrees, with one end placed in the human mouth. Galileo’s devices were described in the letters of his friend Giovan Francesco Sagredo, who built several thermoscopes based on the astronomer’s design and reported his findings. At the height of summer, Sagredo wrote to Galileo in good humour, noting that his interest was dominated by “measuring the aforesaid heat and cooling the wine”, while in the colder months he reported discovering “various marvelous” phenomena, including the observation that the air is often colder than snow, that snow mixed with salt is colder still, “and similar subtle matters”.
These experiments show how measurement can transform a concept like temperature. While for early philosophers heat and cold were mystical qualities, from the Renaissance onwards they begin to transition to something closer to data — information that can be extracted from its source; collected, shared, and compared. Such information can be persuasive, even able to overrule the senses.
Sagredo, for example, notes that thermometer readings show that water in natural springs is colder in winter than in summer, “although our feelings seem to indicate the contrary”. And writing at around the same time in 1620, Francis Bacon comments that the thermoscope’s sense of “heat and cold is so delicate and exquisite, that it far exceeds the human touch”. The scientific instrument had begun to displace human experience as the arbiter of reality.
In 1624, the word “thermometer” was coined by the French Jesuit priest Jean Leurechon, derived from the Greek words thermos (“hot”) and metron (“measure”). At this point the device had taken on a familiar design: “an instrument of glass which has a little bulb above and a long neck below” that rests in a reservoir filled with “vinegar, wine, or reddened water”, as Leurechon puts it. Moving the instrument from a cold to a hot place causes the liquid to fall as the air “rarefies and dilates and wishes to have more room, and therefore presses on the water”, he says, while moving from hot to cold has the opposite effect as the “air is cooled and condenses”.
These phenomena were simple and reliable, even if the underlying causes were unknown, and Leurechon writes almost wistfully of this magic, noting that he’s able to sway a thermometer’s reading by breathing on it — “as if one wished to speak a word into its ear to command the water to descend”. A little bit of the mystical nature of temperature was still in the air.
From the 17th century onwards, the increasingly rigorous measurement of temperature revealed many intriguing phenomena, with scientists observing the effect of heat on chemical reactions, for example. But as more instrument makers began building thermometers of their own, another problem emerged: how to make these tools speak in a common language.
Writing in 1693, the English astronomer Edmond Halley outlined the challenge, complaining that every thermometer works “by Standards kept by each particular Workman, without any agreement or reference to one another”. The consequence is that “whatsoever Observations any curious Persons may make . . . cannot be understood, unless by those who have by them Thermometers of the same Make and Adjustment.”
It’s a fine complaint, but how do you actually create a reliable temperature scale? How do you prove that your 17 degrees is the same as mine?
Scientists realised that the answer was to look for certain stable thermometric markers in nature: phenomena that were consistent in temperature whenever and wherever they appeared. These would provide a baseline for the world’s thermometers and scales; allowing instrument makers to calibrate and compare their tools. Early suggestions for these markers, though, seem maddeningly imprecise, and included phenomena like the melting point of butter, or the hottest day in summer, or the cold of certain cellars in Paris.
One such temperature scale was devised by Isaac Newton, who, in 1701, created a thermometer using linseed oil with degrees defined using a number of would-be fixed points. Newton thought these should include the temperature of air in winter, spring, and summer; two separate points based on “the greatest heat of a bath which a man can bear” (in the colder one he moves his hand constantly, in the hotter the hand is kept still); and other benchmarks based on “the external heat of the body in its natural state” and “that of blood newly drawn”. It’s a rich index, full of life itself, but somewhat thermometrically vague.
By the turn of the eighteenth century, two innovations were beginning to move thermometry into the realm of the reliable. The first was the slow-gathering consensus that the freezing and boiling points of water provided the most convenient and consistent thermometric benchmarks. The second was a series of technical advances that improved the measurement of these phenomena. These included the spread of sealed thermometers and the use of different liquids as the medium within.
The paradox of thermometry remained, though: how do you construct a reliable thermometer from fixed temperature points without already possessing a reliable thermometer to confirm that these points are fixed? The solution to this quandary would take decades of patient work from dozens of individuals.
Fixing the points
One contributor who stands out is Daniel Gabriel Fahrenheit, a scientific instrument-maker whose work brought him fame in the early eighteenth century, but whose early life was marked by tragedy.
Born into a wealthy family in Danzig (modern-day Gdańsk), Fahrenheit was orphaned at the age of fifteen when his parents died after eating poisonous mushrooms. His legal guardians arranged for him to be apprenticed in a trading business in Amsterdam, but Fahrenheit found the world of bookkeeping tedious. Instead, he yearned for the scientific pursuits he’d enjoyed during his early schooling. After four years in Amsterdam, he absconded from his apprenticeship and became a scientific fugitive, stealing money from his employers to fund his own research, while hopping around European cities to learn from the great scientists of the age. His guardians responded as any caring adults would: they had a warrant issued for his arrest and gave the authorities permission to deport him to the East Indies if captured.
The warrant, thankfully, never caught up with Fahrenheit, and from his twenties he found himself drawn into the world of scientific instrument-making, a vocation that involved both theoretical knowledge and practical proficiency. He specialized in glass blowing, building thermometers, altimeters, and barometers and earning a reputation as, in the words of one contemporary, “that industrious and incomparable Artist, Daniel Gabriel Fahrenheit”.
In 1708, he met up with Ole Rømer, a famed Danish astronomer who also happened to be mayor of Copenhagen. (It’s possible Fahrenheit sought him out to try to clear his then outstanding arrest warrant.) Rømer had devised a temperature scale which used the sexagesimal numbering system that was familiar to him as an astronomer. He set the top of his scale, the boiling point of water, at 60 degrees, and the bottom, the freezing point of brine, at zero. Fahrenheit adapted Rømer’s scale with a few key changes.
Firstly, Fahrenheit was dissatisfied with the “inconvenient and inelegant” fractions that marked freezing point and body temperature in Rømer’s scale (7.5° and 22.5° respectively). He bumped these up to 8 and 24 to make them neater, and then multiplied the whole scale by four, creating smaller intervals between degrees and thus finer accuracy in readings. These changes give us the temperature benchmarks familiar to Fahrenheit users: 32°F for freezing water and 96°F for body temperature (though this latter point is 2.6°F below current estimates).
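The arithmetic of that adaptation can be sketched in a few lines. (This is an illustrative reconstruction of the rounding-and-rescaling described above, not a historical conversion formula; the function and variable names are my own.)

```python
# Rømer's awkward fixed points, in Rømer degrees
ROMER_FREEZING_WATER = 7.5
ROMER_BODY_TEMPERATURE = 22.5

def fahrenheit_from_adjusted_romer(adjusted_degrees):
    """Fahrenheit's rescaling: multiply the rounded Rømer value by four."""
    return adjusted_degrees * 4

# Fahrenheit first bumped the fractions up to neater whole numbers...
adjusted_freezing = 8    # from 7.5
adjusted_body = 24       # from 22.5

# ...then quadrupled the scale, yielding the familiar benchmarks
print(fahrenheit_from_adjusted_romer(adjusted_freezing))  # 32  (°F, freezing water)
print(fahrenheit_from_adjusted_romer(adjusted_body))      # 96  (°F, body temperature)
```

Quadrupling also answers Halley’s complaint in miniature: finer divisions mean two instrument makers can agree on a reading without resorting to fractions.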
Next, Fahrenheit took the step of using mercury instead of the then common “spirit of wine” (ethyl alcohol) as the measuring medium inside his thermometers. Mercury, or quicksilver as it’s long been known, not only has a higher boiling point than alcohol, allowing the thermometers to be used in a greater range of temperatures, but reacts more quickly to changes in hot and cold and expands and contracts to a greater degree. This meant thermometers could be built on a smaller scale, while still registering the same range of temperatures.
As is often the case in metrology, it was not any one change that made Fahrenheit’s instruments stand out, but his ability to bring together a range of improvements. His thermometers were so renowned that he was inducted into Britain’s Royal Society in 1724 for his work, with his temperature scale adopted by English-speaking countries around the world until it was displaced by its metric rival.
But Fahrenheit’s practical genius rested on foundations created by others, and I particularly want to call to attention the work of one Jean-André de Luc (1727–1817), a Swiss geologist and metrologist whose work in thermometry stands out for its attention to detail.
Consider, first, what exactly we mean by the boiling point of water — one of the fixed points on Fahrenheit’s scale. Not only is the temperature at which water boils affected by criteria like water purity, atmospheric pressure, and the depth of the vessel used, but boiling itself is a term of not inconsiderable imprecision. Does water boil when the first bubbles appear, or when they are produced in a continuous stream? How fast do these bubbles have to appear? These were the questions de Luc sought to answer, working as part of a special task force assembled by the Royal Society for the purpose.
In one series of experiments, de Luc found that he could raise the temperature of water far above 100°C without it ever boiling — a phenomenon now known as superheating. He found that removing oxygen from water assisted this process, but without access to modern lab equipment that could deoxygenate samples for him, he had to manually shake vessels by hand to remove the gas, like someone shaking the bubbles from a fizzy drink. “This operation lasted four weeks,” he writes in one report, “during which I hardly ever put down my flask, except to sleep, to do business in town, and to do things that required both hands. I ate, I read, I wrote, I saw my friends, I took my walks, all the while shaking my water.”
Elsewhere, in a 1772 treatise, de Luc sets out to determine exactly what constitutes la vraie ébullition, or true boiling, and instead finds only a multitude of phenomena forced into homogeneity by this single, restrictive term. He examined ebullition by watching pots of boiling water with the attentiveness of a new parent leaning over the crib, noting the speed, size, and sound of bubble formation; at what depth they appear in the water; whether they make it to the surface or implode midrise; and the level of agitation on the water’s surface. He distinguishes between numerous new categories, including water that is sifflement (“whistling”), bouillonnement (“bubbling”), and soubresaut (a ballet term for a quick, short, vertical jump).
This shows, I think, one of the unexpected benefits of measurement. The work of metrology is sometimes stereotyped as a stultifying activity that reduces the vibrancy of the world to mere numbers, but work like de Luc’s shows the opposite can be just as true. The desire to measure something with accuracy forces people to seek out new corners of the phenomenological world; to find nooks and crannies of physical experience that were previously lost in the melee. The closer we look, the more the world reveals itself.
The irony of de Luc’s investigations, though, is that he never found a better definition of the boiling point of water. There was just too much variety in how the phenomenon manifested. Instead, he found that what was consistent was the steam produced by boiling water. Whether the water below was sifflement or soubresaut, the steam was always the same temperature. It was this that the Royal Society would recommend to replace Fahrenheit’s boiling point as a fixed thermometric marker, and it was close attention that produced this reliable form of measurement — that and watching a few pots boil.
Beyond Measure: The Hidden History of Measurement is available here. You can hear its author James Vincent speak about it at the Royal Institution next Friday.