Rethinking thinking about figures
Statistics influence opinion, and scientists have a role to play in that regard. Professor Jane E Miller of Rutgers, The State University of New Jersey, USA, is a specialist in numeric literacy. She has published guidelines to help scientists and those communicating their work to clearly convey the methods, results, and implications of quantitative research. Her work challenges entrenched norms of research interpretation and demands rethinking thinking about figures.
Statistics influence opinion. They can also be the cornerstones of crucial decision-making. Whether statistics guide or mislead, clarify or obfuscate, depends on how they’re communicated. The accuracy of the message depends on how they’re interpreted, and inadvertent misinterpretation can have unfortunate outcomes. And volume doesn’t equate to impact – reams of statistics generated in quantitative research can seem impressive, but if you dig deeper, what they’re saying might not be good evidence.

For those entrusted with communication about numeric scientific research that will steer decision-making, accurately representing the statistics is not only critical but ethically responsible. So, how can science communicators concisely and correctly identify what is ‘important’ in data and convey it to a non-specialist reader? According to one specialist in research communication, we sometimes need to rethink how people think about figures.
Professor Jane E Miller of Rutgers University in New Jersey, USA, most definitely has a head for figures. Critically, she also understands those who don't, and how even those who do can make mistakes. Miller specialises in research communication, numeracy, and quantitative literacy. She is the author of several books on making sense of and writing about numbers – go-to references for those conducting scientific research, and specifically for presenting the figures they generate to underscore the validity of their conclusions. Her books are also important for scientists seeking to convey the methods, results, and implications of their quantitative research. Her guidance is essential at a time when claims made on behalf of science can find their way into unfettered social media that feed uncritical minds.
Anchoring data in tangible realities
Miller begins with a fundamental premise: the importance of context. Her approach to communicating quantitative results is grounded in the belief that every number has a story, and like any good story, it needs a setting. This isn’t about dressing data in decorative prose – it’s about ensuring that numbers aren’t floating abstractions; they should be anchored in time, place, and group, and to tangible realities and analogies that resonate with the audience.
Miller emphasises the importance of setting the scene before delving into the numbers. For instance, a statistic about the efficacy of a new drug is lost on a reader who isn't already aware of the disease, current treatments, or the demographics affected. Statistics about the percentage of a population affected by a disease become more impactful when presented within the framework of known consequences, existing treatment options, costs, and who is most at risk. It's akin to knowing the character in a novel before you understand their challenges. This narrative technique does not simplify the science; rather, it enriches the reader's understanding and engagement.

For Miller, the context makes data relatable and enhances understanding and retention. Numbers, inherently abstract and impersonal, gain life and relevance when tied to familiar or relatable aspects of everyday experiences. When science writers embed numbers within a well-crafted narrative that outlines context and provides a benchmark or two, they help the audience grasp and retain complex information more effectively. For instance, a newly diagnosed diabetic needs to know not just what their A1C blood test result is, but whether they should raise or lower that value to improve their health, and what the cut-off is for a healthy A1C blood level (Figure 1).
What makes results ‘important’?
Every researcher believes their research is important, and in the highly competitive arena of academia, expectations of delivery are high. For researchers yet to achieve the job security and protection of tenure, the pressure to publish is unrelenting, and their institutions and peers critically examine every piece of published research for importance. But what, in the broader context of a study’s topic, makes that study’s results ‘important’?
Miller addresses a common misconception among students and researchers: that statistical significance is the ultimate determinant of a research finding's importance. This false belief is deeply ingrained, propagated by the ways quantitative researchers are traditionally taught inferential statistics. One of Miller's most crucial distinctions is between statistical significance and substantive importance. The former tells us how unlikely it is that a result at least as large would arise from chance alone; the latter deals with the real-world implications of the findings.
Understanding the direction – whether an increase in one variable leads to an increase or decrease in another variable – and the magnitude of research findings is crucial in assessing their importance beyond their statistical significance. Miller argues that both the direction and the size of an effect provide essential insights into the substantive significance of the results. One notable case she explores is a study assessing the association between video gaming and teenagers' reading habits and social interactions. Miller elucidates that while teenagers who spent more time gaming indeed spent slightly less time reading, the effect was minimal – only two minutes less reading for every hour of gaming. Although statistically significant, this decrease is insufficient to justify widescale efforts to reduce teens' time playing video games.
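The gaming example can be made concrete with a back-of-the-envelope calculation. The sample size and standard deviation below are illustrative assumptions, not figures from the study Miller cites, but they show the general point: given enough participants, even a trivially small difference produces an impressively small p-value.

```python
import math

# Hypothetical numbers for illustration only:
effect = 2.0   # minutes less reading per hour of gaming (from Miller's example)
sd = 30.0      # assumed standard deviation of daily reading time, in minutes
n = 50_000     # assumed number of teens per comparison group

# Standard error of the difference between two group means
se = sd * math.sqrt(2.0 / n)

# z-statistic: how many standard errors the observed difference spans
z = effect / se

# Two-sided p-value from the normal distribution
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.1f}, p < 0.001: {p < 0.001}")   # overwhelmingly 'significant'
print(f"effect size = {effect / sd:.2f} standard deviations")  # tiny in practice
```

The point of the sketch: the p-value collapses towards zero as the sample grows, while the effect remains a small fraction of one standard deviation – statistically significant, substantively negligible.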
A balanced communication strategy doesn’t simply glorify the numbers – it interrogates and elucidates the practical implications of the findings. Miller guides science writers to go beyond just reporting whether an effect is statistically significant. It’s one thing to declare with high statistical confidence that a drug is effective; it’s another to suggest it has a large enough impact to be worth altering clinical practices. And there are other determinants to its eventual uptake: what if it’s too expensive, or tastes so horrible people wouldn’t use it? Then, statistical significance is unimportant.

Causation – a change in this will cause a change in that – is the holy grail in scientific research seeking solutions to problems. But variables connected by happenstance and viewed with a hopeful eye have no substantive value. That’s why science journalists are drilled in the mantra that ‘correlation does not equal causation’.
Miller also encourages an examination of the applicability of the statistical results. For instance, if a medical study only included men, can the results also be applied to other genders? If only one location was analysed, can we generalise the findings to other places? Put differently, while undeniably crucial, statistical significance is just one piece of a much larger puzzle. It can overshadow other vital aspects like the size and direction of the effect and whether the findings can be generalised beyond the study’s sample. This is especially important when research guides decisions – whether individual choices or policy design – because not all studies are high quality, and even those that are might not apply to a particular situation.
Tools for effective data presentation
Telling a clear story with numbers is often best accomplished with more than just words, and here Miller introduces her toolkit: prose, tables, charts, and maps. Each tool has strengths and limitations; the adept science communicator must choose wisely. Prose can turn data into a compelling story, making it ideal for asking and answering questions and explaining complex trends and patterns. Tables offer a structured place to put detailed numbers, perfect for readers interested in the minutiae. Charts excel in showcasing the size and shape of trends and comparisons at a glance, while maps provide geographical context to data.

For example, when discussing climate change impacts, an infographic or a chart (like Figure 2) could visually represent the rise in global temperatures over the decades, while a map (Figure 3) could show the geographic areas most affected by these changes. As Miller points out, the choice of tool should align with the story’s needs and the kinds of displays familiar to the audience. Then, the written explanation can focus on introducing the topic, providing context, and interpreting the numbers rather than expecting readers to figure out what those numbers mean.
A picture may paint a thousand words – so it’s critical that what is conveyed in an infographic or map is clear – but sometimes a clever phrase can be more impactful. Here, Miller points to the value of analogies, drawing parallels with familiar ideas such as sports betting odds when presenting a result involving a ratio.
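As a sketch of the kind of betting-odds analogy Miller recommends (the numbers here are invented for illustration), odds in favour of an event translate directly into probabilities, which many readers find more intuitive:

```python
def odds_to_probability(odds: float) -> float:
    """Convert odds in favour of an event (e.g., 3.0 for '3-to-1 in favour')
    into a probability between 0 and 1."""
    return odds / (1 + odds)

# Odds of 3-to-1 in favour of an event correspond to a 75% probability
p = odds_to_probability(3.0)
print(f"3-to-1 odds in favour = {p:.0%} probability")  # 75%
```

Translating a ratio-based result into "a 3-in-4 chance" or similar everyday language gives readers a familiar foothold without distorting the underlying number.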
Challenging entrenched norms
In a highly disrupted media space, where the battle for attention is played out in tension-filled phrases and images, finding snappy answers to highly complex questions about our natural or social world in reams of figures is tempting. Therefore, scientists have a responsibility to be absolutely clear in what their data say – and don’t say – and not leave room for false interpretations or ask their readers to figure out how the numbers answer the question. In brief, they need to rethink how people think about figures.
This is why Miller's work is so important – it challenges the entrenched norms of research interpretation, urging the scientific community and the public to think more deeply about whether, and in what ways, a research finding is important. It also guides science communicators, reminding them to be vigilant in analysing research data and precise in presenting it. Since statistics influence opinion, those communicating the numbers must provide a well-rounded explanation of what they do – and do not – mean.
Personal Response
What is the most common misperception when analysing data about public health, and what can science communicators do to clarify it?
Conflating statistical significance with the importance of findings. That mistake plagues studies from many fields – not just public health. To help readers – especially lay readers – understand the meaning of their numeric results and how they can be applied, science communicators should also write about the other aspects that determine substantive importance. Those include whether an effect is big enough to matter in its real-world context, whether the study that generated those numbers can demonstrate cause and effect, and to whom the findings can (and cannot) be applied.