Friday, February 25, 2022 | Science

You can't draw conclusions from non-significant results

The perils of blogging about ivermectin while not understanding pharmacology and basic statistics


One of the few good things to come from the Covid ordeal was the enthusiasm that bloggers have acquired for reading scientific papers. Unfortunately, not all of them have a strong background in science or statistics: sometimes they discover errors in the paper, and sometimes errors discover them. But they're trying, and that's a great sign.

In effect, bloggers have become post-publication peer reviewers. I've peer-reviewed hundreds of papers at work. The goal in peer review is never to criticize the authors personally, but to point out what changes they would need to make for their paper to become good science. I only recommend rejection when the paper is fatally flawed and unfixable, or when the authors' data contradict their conclusions, which is a polite way of saying the authors are being deceptive.

A new paper on ivermectin in JAMA Internal Medicine [1] shows why peer review is so important. Some bloggers are saying it's actually a positive result, while the news media are saying it's proof that ivermectin doesn't work. A Substack page claiming it's proof that the science establishment is being unfair to ivermectin has been widely cited by others. After reading Steve Kirsch's Substack description, it occurred to me that perhaps a refresher course in how to do peer review (and basic statistics) would be beneficial for everyone.

The I-TECH Randomized Clinical Trial in Malaysia

Here is the table they're fighting about. It's a reanalysis of data from eTable 6 ("Post-hoc analyses by vaccination status") in the Supplemental Online Content. The numbers are deaths / number of patients in each group, with the percentage in parentheses. For example, 1/75 means that 75 patients were in the group and one died.

Group        Vaccinated       Not Vaccinated   Vax Effectiveness   P value
Ivermectin   2/166 (1.20%)    1/75 (1.33%)     9.77%               1.00 (0.94)
Control      6/165 (3.64%)    4/84 (4.76%)     23.5%               0.74 (0.67)
Total        8/331 (2.42%)    5/159 (3.14%)    22.9%               0.76 (0.64)

Group            Ivermectin      Control          Iver. Effectiveness   P value
Not Vaccinated   1/75 (1.33%)    4/84 (4.76%)     72.1%                 0.37 (0.21)
Vaccinated       2/166 (1.20%)   6/165 (3.64%)    67.0%                 0.17 (0.15)
Total            3/241 (1.24%)   10/249 (4.02%)   69.2%                 0.09 (0.06)

The second table is simply a transposition of the first. The column titled “Effectiveness” is 1 minus the ratio of the percentages in the two columns to its left. The bloggers used this table to claim that ivermectin is much more effective than the vax and that the researchers are covering it up. I included the actual patient numbers, which the bloggers dropped, to show why the whole table is worthless.
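For what it's worth, the arithmetic behind that column is easy to check. Here is a minimal Python sketch of the calculation (the function name is my own; only the numbers come from the table):

    # "Effectiveness" as defined above: 1 minus the ratio of the two event rates.
    def effectiveness(rate_treated, rate_control):
        """Relative risk reduction, expressed as a fraction."""
        return 1 - rate_treated / rate_control

    # Ivermectin vs. control among the unvaccinated: 1/75 deaths vs. 4/84 deaths
    print(f"{effectiveness(1/75, 4/84):.1%}")   # 72.0%

Using the exact fractions gives 72.0%; the 72.1% in the table comes from dividing the rounded percentages (1.33/4.76).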

In the last column, the first p-value is calculated using Fisher's exact test. The p-value in parentheses is calculated by a chi-square test without Yates's correction, as reported by the bloggers. The values differ, but the point is that not a single one of these comparisons is statistically significant. This means that no conclusions can be drawn from them. You can calculate this yourself. Make a small contingency table like the one below. Row 1 is ivermectin and row 2 is the controls. Put the number who died in column 1 and the number who did not die in column 2. If you add the numbers in each row, you should get the total number of patients in the group (e.g., 3 + 238 = 241).

             Died   Did not die
Ivermectin      3           238
Control        10           239

Then run it through an online statistics calculator. You'll get a p-value of 0.0886 using the two-tailed Fisher exact test and 0.0563 using the chi-square test without Yates's correction. Both numbers are above 0.05, which means you cannot conclude that there is any difference between the groups. It's all just statistical noise.
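If you'd rather script it than use a web calculator, this short Python sketch (mine, using scipy; presumably not what the bloggers used) reproduces both numbers:

    # Check the 2x2 table above with scipy instead of an online calculator.
    from scipy.stats import fisher_exact, chi2_contingency

    table = [[ 3, 238],   # ivermectin: died, did not die
             [10, 239]]   # control:    died, did not die

    _, p_fisher = fisher_exact(table, alternative='two-sided')
    chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)

    print(f"Fisher exact (two-tailed): p = {p_fisher:.4f}")          # ~0.0886
    print(f"Chi-square, no Yates correction: p = {p_chi2:.4f}")      # ~0.0563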

Of course, an experienced researcher would know just by looking at the numbers of patients that there was not a chance in hell of it being statistically significant. Knowing the number of patients per group is crucial information, but the bloggers didn't include it.
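To put rough numbers on that, here is a back-of-the-envelope sample-size sketch in Python (entirely my own illustration; the event rates are assumptions loosely based on the table, not the trial's design parameters), using the standard normal-approximation formula for comparing two proportions:

    # Roughly how many patients per arm would a mortality comparison need?
    from math import sqrt, ceil
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        """Normal-approximation sample size per group for two proportions."""
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        p_bar = (p1 + p2) / 2
        num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
               + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(num / (p1 - p2) ** 2)

    print(n_per_arm(0.040, 0.012))   # ~500 per arm, even for an effect as large as the one observed
    print(n_per_arm(0.040, 0.028))   # several thousand per arm for a more modest 30% reduction

With only about 240 to 250 patients per arm, the trial falls well short of even the smaller of those figures for the mortality endpoint, which is presumably part of the reason the follow-up study mentioned below is so much larger.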

What this means is that the clinical trial gave us a grand total of zero bits of information about whether the treatment affects the risk of death. It doesn't mean that ivermectin works better or worse than the vax or that either of them works or doesn't work. It just means the authors wasted everyone's time.

To see why, remember the TV show MythBusters. When they got something to work, they had a demonstration that something related to the myth might have happened. (Though not necessarily: the episode where the CO2 cylinder smashed through a cinder-block wall was a perfect example. The conclusion looked sound, but there was something really fishy about how they did it.)

But in the more usual case, MythBusters couldn't reproduce the myth. They called it “busted,” but it was more likely that they had simply done something wrong. That's also true in clinical trials: if you don't see any effect, you can't draw any conclusions at all. What you can do, however, is bring disrepute on your cause by telling your readers what they want to hear and hoping they overlook the fact that you're misrepresenting the data.

Other scientists realize this as well: the University of Virginia is planning another study, this time with 15,000 patients. By the time SARS-CoV-6 or -7 is over, we may know whether this drug does anything.

The bloggers calculated the p-values correctly. They even drew a red box around them, which means they understood their importance. But their own data contradicted their conclusion.

Update, Apr 5, 2022: The bloggers now have another article, this time on the TOGETHER trial of ivermectin. After going through their flawed interpretation of the I-TECH trial, I am highly skeptical.

The study itself was seriously underpowered. The primary endpoint (requiring supplemental oxygen) was too subjective to be useful. The authors didn't measure interferon, cytokines, or virus levels. Their secondary endpoints were either subjective or way underpowered: rates of mechanical ventilation, ICU admission, 28-day in-hospital mortality, and adverse events. But the editors (and the press, which reported it uncritically) liked it, maybe because there is so much popular interest in the topic, or maybe because the reviewers couldn't find any fatal flaws in it, or maybe just because it reinforced what they already believed.


[1] Lim SCL, Hor CP, Tay KH, Mat Jelani A, Tan WH, Ker HB, Chow TS, Zaid M, Cheah WK, Lim HH, Khalid KE, Cheng JT, Mohd Unit H, An N, Nasruddin AB, Low LL, Khoo SWR, Loh JH, Zaidan NZ, Ab Wahab S, Song LH, Koh HM, King TL, Lai NM, Chidambaram SK, Peariasamy KM; I-TECH Study Group. Efficacy of Ivermectin Treatment on Disease Progression Among Adults With Mild to Moderate COVID-19 and Comorbidities: The I-TECH Randomized Clinical Trial. JAMA Intern Med. 2022 Feb 18. doi:10.1001/jamainternmed.2022.0189. PMID: 35179551.


feb 25 2022, 4:35 am. updated apr 05 2022

