randombio.com | science commentary
Wednesday, July 29, 2020

Statistics do not decide scientific truth

Some people think statistical validity is a criterion for whether a scientific finding is true. They're wrong


There's a growing myth out there that statistics are a determinant of scientific truth. Statistical arguments became even more important earlier this year when clinicians conducted dozens of clinical trials to determine whether hydroxychloroquine (HCQ) was effective or ineffective against COVID-19. Many of them were randomized controlled trials (RCTs) that reported p-values of 0.001 or lower. If statistical validity in an RCT is a measure of truth, they should have settled the matter. Yet many of the findings were still wrong.

Statistics are abstractions, which means the specifics are stripped away. A statistic can only tell you the probability that some result could be obtained by chance. Essentially it's a measurement of noise. Statistics say nothing about whether your underlying hypothesis makes any sense, and making sense—explaining how the world works—is an essential ingredient in truth. No matter how well designed an RCT may be, if you're testing the wrong hypothesis you'll get the wrong result.
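The point can be made concrete with a toy simulation (a sketch with invented numbers, not any real study): in an observational comparison where a hidden confounder drives both treatment choice and outcome, a permutation test returns a vanishingly small p-value even though the treatment does nothing. The statistic faithfully reports that the difference isn't noise; it is silent on whether the causal hypothesis behind it is right.

```python
# Hypothetical illustration, stdlib only: the "supplement" has zero effect,
# but an unmeasured confounder (baseline health) determines both who takes
# it and how well they do.
import random
random.seed(42)

def simulate_patient():
    health = random.gauss(0, 1)              # unmeasured confounder
    takes_supplement = health > 0            # healthier people take it
    outcome = health + random.gauss(0, 0.5)  # supplement contributes nothing
    return takes_supplement, outcome

patients = [simulate_patient() for _ in range(2000)]
treated = [y for t, y in patients if t]
control = [y for t, y in patients if not t]
observed = sum(treated)/len(treated) - sum(control)/len(control)

# Permutation test: how often does chance alone produce a gap this large?
outcomes = [y for _, y in patients]
n_t = len(treated)
extreme = 0
for _ in range(2000):
    random.shuffle(outcomes)
    gap = sum(outcomes[:n_t])/n_t - sum(outcomes[n_t:])/(len(outcomes) - n_t)
    if abs(gap) >= abs(observed):
        extreme += 1
p = extreme / 2000
# Expect a gap somewhere near 1.6 and p near zero: the association is
# real, but the causal story behind it is not.
print(f"difference = {observed:.2f}, p = {p:.4f}")
```

The test correctly rejects chance, because chance genuinely isn't the explanation: the confounder is.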

To say this a different way: if we don't understand the molecular details of the pathogenesis, whether we're talking about heart disease or a virus, and if we don't know how the drug interferes with that pathogenesis, an RCT will give us the correct answer only by chance.

Vitamin E and cardiovascular disease

In a 2007 article in JAMA,[1] anti-science statistician JPA Ioannidis and two colleagues criticized researchers for remaining optimistic about vitamin E as a potential treatment for cardiovascular disease even after the famous HOPE trial found no statistically significant connection. They wrote:

Claims from highly cited observational studies persist and continue to be supported in the medical literature despite strong contradictory evidence from randomized trials.

In normative language rare in the scientific literature, they accuse the researchers of scientific dishonesty and “wish bias”:

Specialist articles apparently continued to use references to the highly cited observational studies to support their own lines of research. The presence of refuting data were not mentioned in many articles.
. . .
Thus, one wonders whether any contradicted associations may ever be entirely abandoned, if such strong randomized evidence is not considered as much stronger evidence on the topic.
. . .
It can be difficult to discern whether perpetuated beliefs are based on careful consideration of all evidence and differential interpretation, inappropriate entrenchment of old information, lack of dissemination of newer data, or purposeful silencing of their existence.

It is very disturbing to see a respected medical journal using language like this, and it seems clear that medical professionals and statisticians would benefit from a greater understanding of how science works. There are two issues here: first, clinical trials are too often based on outdated or invalid scientific theories, and second, what appears as statistical noise is often nothing of the sort.

I'm not going to defend the oxidation theory of cardiovascular disease. Vitamin E is said to be an antioxidant, and this has led to tunnel vision about vitamin E in which its other functions in signal transduction are ignored. There's evidence that α-tocopherol is only beneficial in animals with a preexisting deficiency.[2] That tells us it's filling a biochemical role, not acting as a drug, so there was little reason other than, indeed, hope to think it would prevent cardiovascular disease.

Omega-3 fatty acids and dementia

We saw a similar dynamic with omega-3 fatty acids such as docosahexaenoic acid (DHA), which failed in clinical trials of Alzheimer's disease. It is now recognized that this failure was inevitable: the brain is already rich in DHA, and inducing a deficiency to see if it causes Alzheimer's would be unethical. Again, the clinical trials were based on hope instead of science.

There are certainly cases of clinical researchers refusing to give up on a drug. They keep looking for hidden clusters in their results that could be explained by a different hypothesis. If they look hard enough they will find them, and statistics can remind us that this approach is invalid. But if statistics told us what was true, we'd all become statisticians and give up on this messy and expensive lab stuff.
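A quick sketch of why post-hoc cluster-hunting is invalid (all numbers hypothetical): in a simulated trial of a drug with zero effect, scanning enough arbitrary subgroups will reliably turn up "statistically significant" ones by chance alone.

```python
# Simulated null trial: the drug does nothing, yet subgroup fishing "works".
import math
import random
random.seed(1)

n = 400
treatment = [i % 2 == 0 for i in range(n)]         # alternating assignment
outcome = [random.gauss(0, 1) for _ in range(n)]   # drug truly has no effect

def p_value(subgroup):
    """Two-sided z-test (known sd = 1) comparing arms within a subgroup."""
    a = [outcome[i] for i in subgroup if treatment[i]]
    b = [outcome[i] for i in subgroup if not treatment[i]]
    if len(a) < 2 or len(b) < 2:
        return 1.0
    z = (sum(a)/len(a) - sum(b)/len(b)) / math.sqrt(1/len(a) + 1/len(b))
    return math.erfc(abs(z) / math.sqrt(2))

hits = 0
for _ in range(200):  # 200 arbitrary post-hoc subgroups
    subgroup = [i for i in range(n) if random.random() < 0.5]
    if p_value(subgroup) < 0.05:
        hits += 1
# At the 0.05 level, roughly 5% of these subgroups come up "significant"
# even though there is nothing to find.
print(hits, "of 200 subgroups were 'significant' for a useless drug")
```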

Hydroxychloroquine and COVID-19

A third example is hydroxychloroquine, which may or may not be effective against SARS-CoV-2 and other viruses. There's been a virtual blizzard of flawed clinical trials on this topic, many poorly designed and a few clearly biased against the drug. But the primary reason we still don't have a clear answer is not that the trials were badly done or the statistical analyses faulty; it's that there was no clear idea of how the drug was supposed to work.

Does it act by raising the pH of the endosome and inhibiting the proteolytic activation of the virus? Or does it act by inhibiting the innate immune system, perhaps by blocking cytokines? Or does it, as some bloggers are claiming, act by increasing the zinc concentration in the cytosol?

There are papers supporting each of these hypotheses. Each theory would require a different trial design, which is why it's so essential to understand the basic pathogenesis first. If you skip that step, as public pressure is pushing us to do, the trial will give us the wrong result. This is why the recent tilt by federal funding agencies against basic science is wasteful, and it's why the claim that statistics are a determinant of truth is false and unhelpful.

There are other factors that can lead to failure of a clinical trial, most notably genetic polymorphisms and patient heterogeneity. This is not just statistical noise as is commonly assumed: these are real, measurable differences among patients that only appear as noise because the patient population is randomized.
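A toy illustration of that point (all numbers invented): suppose a polymorphism carried by 30% of patients makes them respond strongly to a drug while everyone else doesn't respond at all. In a pooled randomized analysis the effect is diluted and inflates the variance, so it reads as noise; stratifying by the polymorphism recovers it.

```python
# Hypothetical two-arm trial with a hidden responder subpopulation.
import random
random.seed(7)

def patient(treated):
    responder = random.random() < 0.3                # carries the polymorphism
    effect = 2.0 if (treated and responder) else 0.0 # only responders benefit
    return responder, effect + random.gauss(0, 1)

treated = [patient(True) for _ in range(1000)]
control = [patient(False) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

# Pooled analysis dilutes the effect to ~0.6 units of a noisy outcome...
pooled = mean([y for _, y in treated]) - mean([y for _, y in control])
# ...while stratifying on the polymorphism recovers the full ~2.0 effect.
stratified = (mean([y for r, y in treated if r])
              - mean([y for r, y in control if r]))
print(f"pooled effect = {pooled:.2f}, effect in responders = {stratified:.2f}")
```

The "noise" in the pooled arm is entirely the real, measurable difference between responders and non-responders.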

That noise is the sound of a brand new hypothesis screaming to get out, and unless those who keep funding clinical trials at the expense of basic research recognize this, that noise will become deafening.

What's happening is that drugs are rushed into testing before the basic science is understood. I've seen this again and again, and not just with nutraceuticals: someone becomes convinced that some molecule will cure some disease. Even though the pathogenesis is not understood, the disease is too urgent to wait. So they marshal whatever evidence they can find in the scientific literature and design a trial. It fails, and a potentially valuable line of research is discredited and abandoned.

Science, it is true, sometimes falls into self-perpetuating fads, but criticizing scientists for not putting enough weight on a clinical trial ignores the fact that clinical trials are not designed to generate new hypotheses. That is what basic scientists do for a living. But when you're chopped liver, nobody listens to what you say.


1. Tatsioni A, Bonitsis NG, Ioannidis J (2007). Persistence of contradicted claims in the literature. JAMA 298(21), 2517–2526.

2. Suarna C, Wu BJ, Choy C, Mori T, Croft K, Cynshi O, Stocker R (2006). Protective effect of vitamin E supplements on experimental atherosclerosis is modest and depends on preexisting vitamin E deficiency. Free Radical Biology & Medicine 41, 722–730.



