There’s a joke doing the rounds on social media. It goes like this:
How many conspiracy theorists does it take to change a light bulb?
Answer: Do your research.
Whenever I get “do your research” thrown at me, I feel a frisson of “Yes! I do! See my Google Scholar page.” But this is not what conspiracy theorists are after, and it only annoys them if I start throwing my citation counts at them. Worse still, I attract outright hatred when I use actual evidence to explain why they are wrong.
What they mean is more like this:
Twitter science: if enough people as ignorant as you agree, it becomes fact.
No, reset. I don’t think that’s a winnable argument either. In fact, there is no winnable argument, as I will now explain.
The problem: once people are convinced that they know better than the experts, throwing expertise at them is never going to work. You will get pushback: “they would say that” or “Bill Gates is paying you to say that.” I’ve had the latter and checked my bank balance. I was disappointed. George Soros could at least buy me a small island. No gratitude.
The psychology of false belief in your own expertise has a name: the Dunning-Kruger effect, after Cornell University psychologists David Dunning and Justin Kruger, whose 1999 paper first described it. Ironically, the lower your knowledge and skill in a subject, the more likely you are to overestimate your own competence.
Add to this the echo-chamber effect: social media tends to concentrate like-minded views and can be a powerful way of spreading rumours (aka fake news). I try to avoid this but, towards the end of the Trump era, I eventually gave up on two Trump-supporting Facebook “friends” and blocked them, not because I did not want to hear dissent but because they were abusive. Even if you do not go as far as blocking annoying or abusive contacts, conversations on social media can easily become dominated by one viewpoint. I have seen this frequently when arguing about miracle cures for Covid-19, anti-vax conspiracies and so on.
There is another effect I have observed. I call it “smartest person in the room”, or Spitr. A person who is the most skilled in their usual circles can suffer an amplified version of Dunning-Kruger: everyday experience teaches them that they are the smartest person in the room, yet in an unfamiliar domain they lack the expertise to know when they are wrong.
An example of Spitr syndrome is a group of actuaries who, early in the pandemic, vociferously denounced standard epidemiology and claimed superior knowledge of how to model not only the economic consequences but the epidemiology itself. Once things got really bad, they disappeared from sight, resurfacing in November to claim that talk of a “second wave” was overblown. Once hospitals started to fill up, they disappeared again.
These Spitrs sneered at “differential equations” and “exponentiated formulae” as if they were the experts and the epidemiologists with PhDs and publications in the field were dullards or dotards. Back in May, they were predicting at most 10 000 deaths; let me quote their exact words:
“We are left wondering why anybody in their right minds would be talking up a story that involves anything more than 10000 deaths for South Africa, with or without lockdown.”
The official Covid-19 toll in South Africa at the time of writing is more than four times that number; excess-death statistics suggest that the true toll is well over 100 000.
I do not claim to be an expert either, but I have taken the trouble to read a fair amount of the research literature and to work with simplified models that allow quick checks on the effect of interventions. My reading supports the general drift of the standard models. If you introduce interventions that reduce the effective rate of transmission, you mimic the effect of a less contagious disease; if you relax those measures, the epidemic bounces back. What the models predict is exponential growth for as long as the susceptible fraction does not shift significantly; they do not predict unlimited exponential growth.
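To make that concrete, here is a minimal sketch of the kind of simplified model I mean, a discrete-time SIR model. It is my own illustration, not the model the epidemiologists used; the population, rates and lockdown window are all assumed numbers chosen for readability, not fitted to real data:

```python
# Minimal discrete-time SIR sketch: an intervention that cuts the
# transmission rate beta mimics a less contagious disease; relaxing
# it lets the epidemic bounce back.

def sir_step(s, i, r, beta, gamma, n):
    """Advance the SIR model by one day."""
    new_infections = beta * s * i / n   # transmission scales with s/n
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

n = 60_000_000                    # population, roughly South Africa's
s, i, r = n - 100.0, 100.0, 0.0   # seed with 100 infectious people
gamma = 0.1                       # recovery rate: ~10-day infectious period

for day in range(300):
    # A hypothetical lockdown from day 40 to day 130 cuts the
    # transmission rate by 70%.
    beta = 0.3 * (0.3 if 40 <= day < 130 else 1.0)
    s, i, r = sir_step(s, i, r, beta, gamma, n)
    if day % 30 == 29:
        print(f"day {day + 1:3d}: infectious {i:12,.0f}")
```

Run it and you see exactly the behaviour described above: near-exponential growth while the susceptible fraction s/n stays close to 1, decline while the intervention suppresses transmission, a rebound once it is relaxed and, eventually, a turnover rather than unlimited growth as susceptibles are depleted.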
A similar effect has arisen in the ivermectin war. People who are very good at what they do, but whose track record is not in designing or interpreting drug trials, have made extravagant claims that were not at the time supported by the evidence. More authoritative ivermectin trials should report by the end of February.
Another issue is claims about the false positive rate of the PCR, or polymerase chain reaction, test. If a PCR test has a false positive rate of 5% (meaning 5% of samples without the virus nonetheless come back positive), we are told, the majority of positives are erroneous. The problem? A test that falsely flags 5% of negative samples cannot produce an overall positivity below 5%. Yet back in April 2020, when South Africa had a small number of cases, test positivity was 2.7%, and clearly a good fraction of those were genuine, otherwise no one would have landed in hospital. So the actual false positive rate has to fall far short of 5%.
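The arithmetic is worth spelling out. The 5% and 2.7% figures are the ones above; the rest is a back-of-envelope bound, not a formal estimate:

```python
# Back-of-envelope check of the false positive claim.

claimed_fpr = 0.05   # claimed: 5% of virus-free samples test positive
positivity = 0.027   # observed test positivity, South Africa, April 2020

# If the claimed rate were right, positivity could never drop below ~5%,
# even if every sample tested were truly negative.
print(f"floor on positivity implied by claim: {claimed_fpr:.1%}")
print(f"observed positivity:                  {positivity:.1%}")

# Hardest possible bound: even if *every* positive were false, the ratio
# of false positives to truly negative samples caps the real rate.
max_fpr = positivity / (1 - positivity)
print(f"upper bound on the real FPR:          {max_fpr:.1%}")  # ~2.8%
```

And since hospital wards were filling with genuinely infected patients, a good fraction of that 2.7% was real, which pushes the true false positive rate lower still.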
I make no claim to be an expert in drug trials, epidemiology or PCR testing, but I have read the literature and see no issue with the case made by the real experts.
Having a genuine research mindset, as opposed to being an armchair contrarian, starts with accepting that you could be wrong. That is what research is really about: it is the evidence that counts, not your ego.
Your best defence against being taken in by conspiracy theories? Be deeply sceptical of people who never admit they are wrong.
If you disagree with me, do your research.