randombio.com | science
Saturday, November 21, 2020

Censorship creates misinformation

The only way to eliminate misinformation on Twitter is to set the character limit to zero


We've all seen it. Somebody is walking down the sidewalk not wearing a mask and a passerby walks out into the street to avoid them, right into the path of an oncoming bus, and almost gets creamed. The natural response is to question the person's intelligence. But what's really happening is that they're demonstrating the real danger of misinformation.

The term “misinformation” is being redefined before our eyes. Increasingly it means “facts I don't want to believe.” But facts are the domain of science. Unlike the fake scientists you see on TV, real scientists avoid speaking in absolutes, but one has only to recall the fiery polemics that come from scientists like Richard Dawkins when they decide their domain is being challenged.

Social media are actively censoring any discussion of the Danish randomized controlled trial in which 3,030 participants were assigned to wear masks and no statistically significant benefit was found.[1] In so doing they've appointed themselves as sages sitting in judgement of science. The conclusions in the Danish study may or may not be correct, but it's not Twitter's place to decide.

This isn't the only example. In their attempts to cure the “infodemic”, Twitter and Facebook are now censoring any scientific or medical information whose conclusions they dislike. YouTube censored a video in which statistician John Ioannidis questioned the need for a prolonged lockdown[2]. Scientists and medical doctors are starting to notice. They're doing what they always do: writing papers about it.

An example is “Factors Predicting Willingness to Share COVID-19 Misinformation”[3]. The dogma in psychology is that political conservatives tend to be more disgust-prone than liberals, which would imply a greater desire to avoid pathogens like SARS-CoV-2. This sort of thing might explain why conservatives are more skeptical of psychology than liberals are, but the discrepancy in mask-wearing throws the disgust theory into doubt. A better explanation might be that conservatives, at least in the era A.T. (After Trump), tend to be less conformist than liberals. The authors write:

Individuals high in traditionalism and low in social dominance were more willing to share misinformation about the severity and spread of the COVID-19 pathogen, consistent with the hypothesis that traditionalism functionally relates to pathogen-sensitivity. Equally suggestively, a reverse pattern was obtained with regard to SDO [social dominance orientation] and propensities to spread misinformation, such that individuals who favored social dominance but not traditionalism were less inclined to spread claims about the severity of illness, instead showing a willingness to spread conspiratorial claims, a thematically consistent association insofar as conspiracies inherently entail certain groups vying for advantage over others.

In English, it means they found that people who spread conspiratorial claims are not traditionalists but people who crave social dominance, or what might colloquially be called jerks. It also means that people who are traditionalists but not jerks share the misinformation the jerks spread. The authors blame this conclusion on their computer and say more research by somebody else is needed.

There is even a book on this subject titled Your Post has been Removed: Tech Giants and Freedom of Speech (available here). Based on an analysis of content, the authors, Frederik Stjernfelt and Anne Mette Lauritzen, find that content moderation is often politically biased.[4] They write:

[G]iven the dominance of social media in our information society, we run the risk of outsourcing the definition of our principles for discussion in the public domain to private companies.

But it's not just COVID-19 information that is being censored. Almost everything in medicine is controversial because getting it wrong can cost lives. The problem is that while clinical trials may be the “gold standard”, their conclusions are routinely overturned.

A recent book titled Overkill: When Modern Medicine Goes Too Far by Paul A. Offit (reviewed here) talks about how things once thought to be true have been found to be false by recent clinical trials. The author believes that new studies overrule previous studies, something that may or may not be true, but there's no doubt that clinical trials often contradict each other. Many important controversies aren't discussed, such as why women have a worse prognosis in coronary atherosclerosis than men[5], but others are, such as whether antioxidants, vitamin D, and hormone replacement therapy are beneficial or harmful.

An example of how fragile these results really are can be seen in the controversy over whether obesity is a risk factor for Alzheimer's disease. For years this was thought to be true. Then in 2015 a retrospective cohort study[6] of 1,958,191 people came out claiming that obesity reduced the risk. A big controversy ensued. Finally it was found that obesity is indeed a risk factor, but obesity measurements are confounded by behavioral changes due to preclinical disease, which is to say the patients lose interest in food and soon they are no longer obese.[7] The gigantic retrospective study had tried to do a simple correlation between two variables and ignored what was really going on. It's proof that sheer size can't save a study if your hypothesis is wrong.
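
To see how this kind of confounding works, here is a toy simulation in Python, a minimal sketch with made-up numbers rather than data from any of the studies above. It assumes midlife obesity genuinely raises dementia risk, but that preclinical disease makes many future patients shed the weight before their BMI is measured; a naive comparison against BMI at baseline then makes obesity look protective.

    # Toy simulation (invented numbers): confounding by preclinical disease
    # can make a genuine risk factor look protective in a naive analysis.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # True exposure: midlife obesity, assumed here to raise dementia risk.
    obese_midlife = rng.random(n) < 0.30
    dementia = rng.random(n) < np.where(obese_midlife, 0.15, 0.10)

    # Preclinical disease causes weight loss years before diagnosis, so many
    # future patients are no longer obese when BMI is measured at baseline.
    obese_at_baseline = obese_midlife & ~(dementia & (rng.random(n) < 0.6))

    def risk_ratio(exposed, outcome):
        # Crude risk ratio: outcome rate in exposed / rate in unexposed
        return outcome[exposed].mean() / outcome[~exposed].mean()

    print("Naive, BMI at baseline: RR =", round(risk_ratio(obese_at_baseline, dementia), 2))
    print("Correct, midlife BMI:   RR =", round(risk_ratio(obese_midlife, dementia), 2))

With these invented numbers the naive comparison gives a risk ratio of roughly 0.5, as though obesity were protective, while the comparison against midlife BMI recovers the true elevated risk of about 1.5, no matter how many simulated people are added.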

I defy anyone to fit all that into 280 characters.

In blocking one side of controversial research, Twitter is falsely claiming that there is no conflict of opinion. Thus, in trying to censor misinformation, they actually create it. Emilia Niemiec writes[2]:

Although the censorship on social media may seem an efficient and immediate solution to the problem of medical and scientific misinformation, it paradoxically introduces a risk of propagation of errors and manipulation. This is related to the fact that the exclusive authority to define what is “scientifically proven” or “medically substantiated” is attributed to either the social media providers or certain institutions, despite the possibility of mistakes on their side or potential abuse of their position to foster political, commercial or other interests.

If Twitter really wants to stop misinformation, they must block all discussion of politics, science, medicine, people, culture, knitting, cats, knitting cats, cats knitting, cats doing politics and cats practicing medicine, as well as any discussion of cats or any mention of the existence or non-existence of cats or debate as to whether cats are intrinsically amusing or not.

That would leave only two permissible tweets:

This means they could reduce the character limit to 15. In fact, since there are only two permissible tweets, a single bit would suffice to distinguish them. And since both are always true for anyone on Twitter, the tweets can be assumed to have already been made, so the limit can be reduced to zero. Problem solved.

The only dangerous idea is that there is such a thing as a dangerous idea. Social media would be making a big mistake by trying to unseat science as the arbiter of scientific truth. If Silicon Valley tries to challenge science, it is likely to find itself up against a very powerful and entrenched adversary.


1. Bundgaard H, Bundgaard JS, Raaschou-Pedersen DET, von Buchwald C, Todsen T, Norsk JB, Pries-Heje MM, Vissing CR, Nielsen PB, Winsløw UC, Fogh K, Hasselbalch R, Kristensen JH, Ringgaard A, Porsborg Andersen M, Goecke NB, Trebbien R, Skovgaard K, Benfield T, Ullum H, Torp-Pedersen C, Iversen K (2020). Effectiveness of Adding a Mask Recommendation to Other Public Health Measures to Prevent SARS-CoV-2 Infection in Danish Mask Wearers: A Randomized Controlled Trial. Ann Intern Med. Nov 18. doi: 10.7326/M20-6817. PMID: 33205991.

2. Niemiec E. COVID-19 and misinformation: Is censorship of social media a remedy to the spread of medical misinformation? EMBO Rep. 2020 Nov 5;21(11):e51420. doi: 10.15252/embr.202051420. PMID: 33103289; PMCID: PMC7645258.

3. Lobato EJC, Powell M, Padilla LMK, Holbrook C. Factors Predicting Willingness to Share COVID-19 Misinformation. Front Psychol. 2020 Sep 24;11:566108. doi: 10.3389/fpsyg.2020.566108. PMID: 33071894; PMCID: PMC7541968.

4. Stjernfelt F, Lauritzen AM (2020). Your Post has been Removed: Tech Giants and Freedom of Speech. Springer International Publishing. ISBN 978-3-030-25970-9.

5. Schmidt KMT, Nan J, Scantlebury DC, Aggarwal NR. Stable Ischemic Heart Disease in Women. Curr Treat Options Cardiovasc Med. 2018 Aug 7;20(9):72. doi: 10.1007/s11936-018-0665-4. PMID: 30084006.

6. Qizilbash N, Gregson J, Johnson ME, Pearce N, Douglas I, Wing K, Evans SJW, Pocock SJ. BMI and risk of dementia in two million people over two decades: a retrospective cohort study. Lancet Diabetes Endocrinol. 2015 Jun;3(6):431–436. doi: 10.1016/S2213-8587(15)00033-9. PMID: 25866264.

7. Allen AN, Clarke R, Shipley M, Leon DA. Adiposity in middle and old age and risk of death from dementia: 40-year follow-up of 19,000 men in the Whitehall study. Age Ageing. 2019;48(2):247–253. doi: 10.1093/ageing/afy182. PMID: 30624572.


nov 21 2020, 7:23 am




