During the pandemic, I think scientists have sometimes been naive about their messaging, and have left low-hanging fruit for people with bad intentions. I’d like to explain how, and make some suggestions for how to avoid this in future.
It comes down to distortion. In a previous post I described information theory, and specifically how it allows messages to be sent and understood even when they are not received perfectly. Information theorists would say that there is “noise” in the system. But there are two types of noise: random noise and adversarial noise, and the difference between them matters a lot.
To explain the difference, think about a road sign.
Random noise might consist of mud flicked up from the road, bugs splatted on it, and so on. These are things that happen by accident. There isn’t a particular pattern, just a general spread of places where the original sign was changed. But, assuming it’s not too filthy, a driver can read the original sign without too much trouble. Again, in information theory language, the errors can be corrected.
In contrast, adversarial noise is designed maliciously to make things worse. Someone with bad intentions might add paint in particular places to make the sign read differently. In a 2017 paper, researchers showed that adding even small amounts of tape in the right places on a Stop sign could fool an AI in a driverless car into thinking that it was actually a speed limit sign.
This is Figure 1 from that paper: the left-hand image represents random noise; the right-hand image shows the kind of systematic noise that can fool an AI.
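To make that distinction concrete, here is a small Python sketch of my own (not anything from the original post or paper), using a five-fold repetition code decoded by majority vote. With the same budget of three flipped bits, random flips scattered across the codeword are almost always corrected, while an adversary who concentrates those same three flips on a single block breaks the decoding every time.

```python
import random

def encode(bits, r=5):
    """Repetition code: send each bit r times."""
    return [b for b in bits for _ in range(r)]

def decode(received, r=5):
    """Majority vote within each block of r copies."""
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

def random_noise(codeword, n_flips, rng):
    """Flip n_flips positions chosen uniformly at random."""
    out = codeword[:]
    for i in rng.sample(range(len(out)), n_flips):
        out[i] ^= 1
    return out

def adversarial_noise(codeword, n_flips, r=5):
    """Spend the same flip budget on one block, breaking its majority."""
    out = codeword[:]
    for i in range(min(n_flips, r)):   # concentrate all flips on the first block
        out[i] ^= 1
    return out

rng = random.Random(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = encode(message)
budget = 3  # same number of flipped positions in both cases

# Random noise: 3 flips scattered over 40 positions rarely break any majority.
random_fail = sum(
    decode(random_noise(codeword, budget, rng)) != message
    for _ in range(10_000)
)
print(f"random noise:      decoding failed {random_fail / 10_000:.1%} of the time")

# Adversarial noise: the same 3 flips, aimed at one block, always corrupt it.
adversarial_ok = decode(adversarial_noise(codeword, budget)) == message
print(f"adversarial noise: decoded correctly? {adversarial_ok}")
```

The point is exactly the road sign one: what matters is not so much the amount of noise as whether someone is aiming it.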
When it comes to COVID, my contention is that scientists believed they were in the first situation, but in fact it was the second. They were used to a situation where their messages might not be reported perfectly - someone might get hold of the wrong end of the stick or figures might get misreported, but this was just due to honest error and could often be corrected with a call to a journalist or press officer.
In reality, the pandemic information environment was much trickier. Of course, sometimes people like anti-vaxxers do simply lie. You might see purported documents apparently obtained by freedom of information requests that seem to reveal a terrifying death toll due to vaccines. However, it’s possible to check for fakes - a real freedom of information request should be archived on the site it was released from. It’s always worth checking this paper trail.
For this reason, malicious actors often behave in a more subtle way. Instead of outright lying, they often prefer to take the truth out of context and present it in a distorted way. Like sticking the tape over the road sign, a few small changes can radically transform the picture from what the original creator intended.
A classic example of this came in September 2021. UKHSA’s own figures on vaccine effectiveness were reported by outlets and commentators such as the Daily Sceptic and Joe Rogan in a way that stripped out the context. Essentially, because we didn’t have good enough estimates of the population size, and because a very high percentage of old people had been vaccinated, there was a lot of uncertainty about how many unvaccinated people there were, leaving room for misinterpretation of the vaccine effectiveness figures.
This was a typical adversarial trick, to use UKHSA’s own excellent reputation for accuracy against them. Of course, the true context of the data was given in a footnote, but who reads footnotes? And it is optimistic to think that a bad actor will not simply crop a document to leave an unintended message.
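To see how a shaky denominator can flip a headline number, here is a rough back-of-the-envelope sketch with entirely made-up figures (the real UKHSA surveillance tables are more involved, and the population-register overcount affected different age groups differently):

```python
# Entirely made-up illustrative numbers: one million adults in an age group,
# 92% of them vaccinated, with the vaccine roughly halving the infection rate.
true_vaccinated   = 920_000
true_unvaccinated = 80_000
cases_vaccinated   = 4_600   # 0.5% of the vaccinated group infected
cases_unvaccinated = 800     # 1.0% of the unvaccinated group infected

def rate_per_100k(cases, population):
    return 100_000 * cases / population

# With the correct denominators, the unvaccinated rate is clearly higher.
print("true rates per 100k:      ",
      round(rate_per_100k(cases_vaccinated, true_vaccinated)), "(vaccinated) vs",
      round(rate_per_100k(cases_unvaccinated, true_unvaccinated)), "(unvaccinated)")

# Now suppose the population register overcounts the age group, and every
# "extra" person it invents has no vaccination record, so gets counted as
# unvaccinated. Same case numbers, wrong denominator.
overcount = 150_000
inflated_unvaccinated = true_unvaccinated + overcount

print("distorted rates per 100k: ",
      round(rate_per_100k(cases_vaccinated, true_vaccinated)), "(vaccinated) vs",
      round(rate_per_100k(cases_unvaccinated, inflated_unvaccinated)), "(unvaccinated)")
```

With the true denominators the unvaccinated case rate is double the vaccinated one; with the inflated denominator the comparison reverses, and a cropped table of raw rates reads as if the vaccine were making things worse.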
So, what can we do? Again there are lessons from information theory. We shouldn’t simply give up: it is still possible to communicate successfully in an adversarial environment; you just have to work harder than in a merely noisy one. Some suggestions might be:
Read back everything that you write with a sceptical eye. In everyday life, it is often good to remember the Daily Mail test: “how bad would what I am doing seem if it ended up on the front page of the Daily Mail?” In the same way, there’s perhaps a Joe Rogan test: what is the worst way that my document or numbers could be taken out of context? Here, having multiple readers and sense-checkers can help.
Knowing that documents can be cropped, don’t just leave important context for footnotes. Either don’t put the numbers in at all, or put a watermark across them to say “approach with caution, see footnote”. Of course bad actors may still reproduce the document without that, but there’s no point in making their life easy.
Sadly, maybe the answer is for scientists to publish less. Although we have been extremely lucky throughout the pandemic to have SAGE documents and scenarios available to us, there were times when the outer edges of confidence regions from the most pessimistic scenarios were reported as if they were central estimates. Perhaps this can’t be avoided, except by keeping more of the modelling within closed groups.
As news consumers, we have a duty to be more sceptical. By being more aware of this adversarial environment and the tricks that are played within it, we can all do our part by just sense-checking claims before forwarding them. Bearing in mind that misleading claims are often designed to go viral, a certain amount of caution on all our parts can act as a vaccine to reduce the spread of misinformation.
However, the development of technologies such as ChatGPT and deepfake video is likely to make this battle harder over time. It’s a battle that we can’t afford to lose.