# Uncertainty and communication

## How could we show the uncertainty in statistics?

---

**Uncertainty is an integral part of statistics.** Data can come from random processes, or have missing entries. Model assumptions can take many plausible values. Different methods can lead to different estimates and outputs.

In August 2020, the Government Statistical Service hosted a webinar on communicating uncertainty.

That webinar suggested four ways:

- **Show the process:** say where uncertainty comes from, such as sampling.
- **Describe the uncertainty:** say how large that uncertainty is, whether through estimated intervals or implicit statements.
- **Illustrate the uncertainty:** show people what that uncertainty means through graphs.
- **Tell people what they can and cannot do:** give readers direct guidance.

**The Royal Statistical Society asks: how well have analysts done in communicating uncertainty?** This is a big question, so let’s look at illustration.

**There are signs of good practice, though it can be inconsistent.** One example is vaccine effectiveness, shown in waffle charts with different shades of dots. Text accompanied these dots, to help the reader.
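A waffle chart of this kind can be built with a grid of shaded squares. Below is a minimal sketch using matplotlib, with made-up category counts (the figures are illustrative only, not real effectiveness data):

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical counts out of 100 people, for illustration only.
protected, not_protected, uncertain = 70, 20, 10

# One category label per dot in a 10x10 grid.
categories = [0] * protected + [1] * not_protected + [2] * uncertain
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
colours = np.array(["#1a6fb4", "#c8c8c8", "#f0a030"])

fig, ax = plt.subplots(figsize=(4, 4))
# Each square is shaded by its category, giving the waffle effect.
ax.scatter(xs.ravel(), ys.ravel(), c=colours[categories], s=120, marker="s")
ax.set_axis_off()
ax.set_title("Out of 100 people (illustrative)")
fig.savefig("waffle.png")
```

Accompanying text, as in the vaccine-effectiveness example, would then explain what each shade of dot represents.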

Another example is ribbon ranges surrounding a central estimated line. Epidemic modelling from the University of Warwick showed different scenarios this way. These ribbons overlap, meaning some observed values could be consistent with distinct assumptions.
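Ribbon ranges of this sort are typically drawn by shading a band around each central line. Here is a hedged sketch with two invented scenario curves (the bell-shaped functions and the ±20% bands are assumptions for illustration, not the Warwick model):

```python
import matplotlib.pyplot as plt
import numpy as np

days = np.arange(60)

def scenario(peak_day, height):
    """A smooth, made-up 'cases' curve standing in for one modelling scenario."""
    return height * np.exp(-((days - peak_day) ** 2) / 200)

fig, ax = plt.subplots()
for peak, height, colour in [(25, 100, "tab:blue"), (35, 80, "tab:orange")]:
    central = scenario(peak, height)
    # Ribbon: a shaded band around the central line, standing in for an interval.
    ax.fill_between(days, central * 0.8, central * 1.2, color=colour, alpha=0.3)
    ax.plot(days, central, color=colour)

ax.set_xlabel("Day")
ax.set_ylabel("Cases (illustrative)")
fig.savefig("ribbons.png")
```

Where the two translucent bands overlap, an observed value falling in that region would be consistent with either scenario, which is exactly the ambiguity the text describes.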

There are also examples of error bars from the Office for National Statistics. Different graphs use thin spikes and thick rays.
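The thin-spike style of error bar can be sketched as follows; the group labels, estimates, and interval half-widths are hypothetical numbers chosen for illustration, not ONS figures:

```python
import matplotlib.pyplot as plt

# Hypothetical point estimates with interval half-widths, for illustration only.
groups = ["A", "B", "C"]
estimates = [4.2, 5.1, 3.8]
half_widths = [0.6, 0.4, 0.9]

fig, ax = plt.subplots()
# Thin capped spikes above and below each point estimate.
ax.errorbar(groups, estimates, yerr=half_widths, fmt="o", capsize=4)
ax.set_ylabel("Estimate (illustrative)")
fig.savefig("error_bars.png")
```

Swapping the thin spikes for thick rays is mainly a styling choice (e.g. increasing `elinewidth` and dropping the caps); how readers interpret the two styles differently is part of the open question raised below.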

To use Ed Humpherson’s phrase: statisticians are the humble scholars of uncertainty.

My view is: we need more knowledge of how people interpret graphical uncertainty. Like sticks of Blackpool rock, uncertainty should be visible throughout the publication process.

What would we like people to think about when they read numbers and graphs? Think about what the statistics mean. Think about the underlying distributions. **Think about the uncertainty.**