randombio.com | science commentary
Thursday, July 25, 2024

On publishing negative results
Eventually half of all results will be 'negative' results. Some, like climate models, cause the effect they're studying.
Forgive me for disagreeing with my colleagues, but the problem with publishing negative results, the latest fad in science, isn't just publisher bias. It is that negative results, though harder to publish, automatically expand in number over time.
When you do an experiment, you normally follow up the results to get more detail on what is happening. That's impossible by definition with a negative finding, so the best you can do is describe the absence of an effect. That makes it harder for somebody else to publish a positive result, because reviewers will say it only “confuses the literature.”
[Image caption: The Moon is made of an unidentified green substance, possibly cheese. Or maybe we forgot to remove the green filter.]
A ‘negative’ result usually means either that the phenomenon turned out to be far more complicated than you thought or that your basic hypothesis was wrong. Publishing it adds to the amount of dishonest prose in the literature. If the authors of a negative study were being honest, they'd say something like this:
We spent six months doing this useless experiment only to discover that our hypothesis was not just wrong, but meaningless. Boy, do we feel stupid. I mean gosh, what were we thinking?
Thus, the literature would fill up with papers like “The Moon is not made of green cheese”, “The Moon is made of an unidentified green substance, probably not cheese”, “The Moon is not made of Limburger cheese”, and “The Moon is not made of Camembert cheese.” That would give credence to the theory of John P. A. Ioannidis, who argued that most published research findings are false because (he thought) scientists pick hypotheses essentially at random, which makes them statistically unlikely to be true. So even though the idea of registered reports may give Nature editors a warm, fuzzy feeling, their main effect will be to create a need for still more negative results to disprove them.
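For the record, the arithmetic behind Ioannidis's argument fits in a few lines of Python. Here is a minimal sketch; the prior, alpha, and power are illustrative numbers of my choosing, not his:

```python
# If only a small fraction of tested hypotheses are true, most
# "statistically significant" findings are false positives.
# All three numbers below are illustrative assumptions.
prior = 0.10   # fraction of tested hypotheses that are actually true
alpha = 0.05   # false-positive rate (the usual significance threshold)
power = 0.80   # probability of detecting a real effect

true_pos = prior * power           # real effects correctly detected
false_pos = (1 - prior) * alpha    # nonexistent effects "detected" anyway
ppv = true_pos / (true_pos + false_pos)

print(f"Fraction of 'significant' results that are real: {ppv:.0%}")
# 64% with these numbers. Drop the prior to 0.01, i.e. hypotheses
# picked nearly at random, and it falls to about 14%.
```

The lower the prior probability that a hypothesis is true, the more of the “positive” literature is noise. That's the whole argument.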
Conventional wisdom is right about one thing: the reward system makes publishing negative results difficult. But as the amount of science increases, an increasing percentage of the most important results will be negative results.
An example is the 2017 paper in PNAS by Chinese researchers (PMID: 27930341) who cloned and expressed 138 mutant forms of presenilin-1 and found that the mutations showed no correlation with production of β-amyloid 1–42. Before this, there was a large body of work implicating interactions between these two proteins in Alzheimer's disease. The authors could have published it as a series of negative results and gotten 138 papers. They didn't; instead they moved the field forward by showing that the hypothesis was wrong.
As science expands, more and more time has to be spent revisiting previous results. Another example is Alexander DD et al., “Air ions and respiratory function outcomes: a comprehensive review” (Journal of Negative Results in BioMedicine 2013, 12:14). The authors discovered that the literature was full of poorly designed and underpowered studies. Some had only 7 or 8 subjects, practically guaranteeing that no effect would be detected (see the power calculation below). The studies all measured different things under uncontrolled conditions, which makes it nearly impossible to identify which ion, if any, is responsible. They conclude:
[T]he human experimental studies do not indicate a significant detrimental effect of exposure to positive air ions on respiratory measures. Exposure to negative or positive air ions does not appear to play an appreciable role in respiratory function.
That conclusion is incorrect. This wasn't a negative result at all: what the authors had documented was just how bad the science in that field is. The format of the “negative result” obscures this fact. Indeed, studies claiming toxic effects from gas stoves have no trouble getting published with even smaller effect sizes. The question is: is a result that shows nothing really a negative result or just a badly done experiment, and can people still tell the difference?
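To see why a study with 7 or 8 subjects is doomed from the start, here is a rough power calculation. It assumes a two-sample t-test and a moderate effect size (Cohen's d = 0.5), both assumptions of mine, since the review doesn't specify either:

```python
# Statistical power of a two-sample t-test with n = 8 per group.
# The effect size (Cohen's d = 0.5) is an illustrative assumption.
from scipy import stats

n, d, alpha = 8, 0.5, 0.05
df = 2 * (n - 1)              # degrees of freedom for two equal groups
ncp = d * (n / 2) ** 0.5      # noncentrality parameter
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Power = probability that |t| exceeds the critical value when the
# effect is real, i.e. the tail mass of the noncentral t distribution.
power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
print(f"Power with {n} subjects per group: {power:.0%}")   # roughly 15%
```

A study like that misses a real moderate-sized effect about 85% of the time. Calling its outcome a “negative result” is flattering it.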
The same issue of Nature helpfully mentions an earlier article about Google AI “predicting” global warming. They kindly made it open access so we can tear it to shreds (as I will do in a future post). Since AI can't actually predict anything, it's a good example of stuff (the scientific term isn't suitable for children) that gets published because the topic happens to be fashionable.
But we have a new problem: Schrödinger science. If the CO2 hypothesis is true, then AI causes global warming. Studying the problem causes it. The more simulations we do, the bigger the effect gets. So if the study is done correctly, it will get a false result. Perhaps we should ban global circulation models.
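To put numbers on the Schrödinger effect, here is a back-of-envelope sketch. Every figure in it, the power draw, the runtime, and the grid's carbon intensity, is an assumption made up for illustration:

```python
# Back-of-envelope CO2 emissions from one big climate-model run.
# Every number here is an illustrative assumption, not a measurement.
node_power_kw = 30.0      # assumed power draw of the compute allocation
runtime_h = 24 * 30       # a month-long simulation
kg_co2_per_kwh = 0.4      # assumed carbon intensity of the power grid

energy_kwh = node_power_kw * runtime_h
co2_tonnes = energy_kwh * kg_co2_per_kwh / 1000
print(f"{co2_tonnes:.1f} tonnes of CO2 per run")   # about 8.6 tonnes here
```

Run enough ensembles and the experiment starts contributing, however marginally, to the very effect it's measuring.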
Publishing inconclusive results helps nobody. Let's first tackle the problem of people being reluctant to refute results that are designed to mislead us. Negative results will follow automatically.
jul 25 2024, 5:39 am. updated 1:33 pm