randombio.com | science commentary
Thursday, January 9, 2020

There is no such thing as an irreproducible result
There are no irreproducible results, only badly described ones
“The history of science is full of examples of researchers embellishing their experimental results to conform to simple, logical, coherent theory.” That claim, made by a prominent critic of science, is well on the way to becoming a platitude. Many people think science isn't reproducible and that scientists aren't interested in finding the truth, only in acquiring status, money, and prestige, so they fake their results, as Millikan was said to have done in measuring the charge of the electron.
As with most myths, this one has some basis in fact, but there's a confusion of terms going on here. Faking a result or throwing out data points that don't fit does sometimes happen, but that would actually make the result more reproducible, not less. Suppose the authors wrote this in a paper:
We decided arbitrarily that the result should be forty-two. Therefore, any measurements below 41 or above 43 were discarded.
Statistical analysis: We assumed that our drug would cure cancer, so we only used one-tailed t-tests to exclude the chance of discovering that it would make cancer worse. We also discarded any cells that did not show the desired effect. We also kept changing statistical tests until we found one that gave us a p-value below 0.05.
Obviously this is bad science, but the one thing it is not is irreproducible. Anyone who followed their procedure would get the same result. The point is that reproducibility is not an index of quality.
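To make that concrete, here is a minimal simulation of the first mock methods section above (a sketch only; the function name and all the numbers are invented for illustration, not anyone's actual procedure). Two "labs" with completely different raw data, even different underlying true values, still agree on the answer:

```python
import random

def published_procedure(true_value, n=1000, lo=41.0, hi=43.0, seed=0):
    """Follow the mock methods section to the letter: take n noisy
    measurements, then discard anything below 41 or above 43."""
    rng = random.Random(seed)
    raw = [rng.gauss(true_value, 5.0) for _ in range(n)]
    kept = [x for x in raw if lo <= x <= hi]
    return sum(kept) / len(kept)

# Different raw data, different true values, same published procedure:
print(published_procedure(true_value=50, seed=1))   # ~42
print(published_procedure(true_value=35, seed=2))   # ~42
```

The procedure is dishonest, but it is perfectly reproducible: the selection step, not nature, determines the answer.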
This isn't some obscure metaphysical point. Two totally different things are being claimed. The anti-science brigades accuse scientists of outright lying, while the other side is concerned with making sure that methods are accurately described. They're talking past each other.
A third claim is that scientists don't describe the details of the discovery process. The problem here is that no one really knows how that works. And if we were too honest about what goes on in the lab, papers might look like this:
Cell culture: On week 4, we incubated the cells for three hours in Campbell's® Chicken Broth instead of the regular medium, because one of us (the esteemed Dr. Cthulhuson) did not examine the label properly, nor did he ever feel like coming in to change the culture medium.
It may, however, be more relevant that the other one of us (the esteemed Dr. Davros) neglected to change the gas cylinders three times because he was too busy gluing useless round things onto the sides of his Daleks, whatever those are.
This is why we try to keep all personal observations out of a paper, no matter how interesting they may be. All we can ask of researchers is that they be careful and honest. If they are, their results must be reproducible; otherwise it would mean the laws of nature are changing and understanding the world is impossible.
The above examples lead us to an important conclusion: There are no irreproducible results. There are only incorrect descriptions of the experiment. Provided that the statistics are done correctly, it is physically impossible for a correctly described experiment to be irreproducible. Reproducibility is a function of nature, not of a paper.
Of course, that doesn't necessarily mean the finding is correct. Sometimes unknown factors, mistakes, or even typos affect the outcome. Sometimes, a researcher describes the experiment correctly but uses an incorrect statistical test or draws the wrong conclusion from the results. These are very common errors that occur even in top journals. But in this case, it is a failure of peer review, because these things are easy to spot.
Even if the reviewer catches errors, papers sometimes get published anyway. At one journal, manuscripts I rejected would invariably come back a week later with trivial changes and a note saying that if I didn't respond within a few days they'd publish it anyway. One time they were so sure I'd recommend acceptance that they'd already formatted the manuscript and assigned page numbers. They weren't pleased to learn it was fatally flawed, and they stopped sending me articles to review. I was a bit sad not to get to read those papers anymore; some of them were quite entertaining in their creative use of statistics.
A more subtle problem is caused by the researcher's choice of what to study. Fads happen a lot in science, and it's not clear how to prevent them. The most harmful thing is when funding agencies or journals favor one side of a controversial issue and starve the other. This allows activists to claim falsely that science supports their position, turning science into a participant in a political dispute.
Peter Medawar, in his article “Is the scientific paper a fraud?”, criticized scientific papers for not representing how experiments are actually done. The idea is that a scientific article is a reconstruction, a narrative that makes the work seem too logical, too coherent, whereas real science is a chaotic, creative process.
Sometimes, discoveries come from being obsessive: you test every conceivable thing, and soon enough something happens that makes you say: “How in the world am I going to explain THAT?”—HITWAIGTET for short. But often, being obsessive doesn't work; to get to that point, you must add a small amount of disorder to the experiment.
It is in those HITWAIGTET moments that great discoveries are made. If you're too focused on your theory, there's a tendency to ignore them. Maybe something went wrong. Maybe the assistant mixed up the samples again. That's why it's so important to write everything down. I tell my assistants: if the sample turns pink, or if it explodes, or if it dissolves a hole in the benchtop like that blood sample in Alien, write it down. If you think you made a mistake, write that down. I've never criticized anyone for making a mistake, and I praise them for telling me.
True story: An M.D. wanted to know how a particular chemical was causing cardiomyopathy. His idea was that some enzyme in the heart converted the chemical to something toxic. He'd give rats the chemical, and sure enough they all developed heart disease. But we couldn't demonstrate it in the test tube. One day when I dissected out the heart, I neglected to trim off the aorta as I was supposed to do. The chart recorder pegged! (This was a while ago.) We had discovered that the heart injury was not caused by anything in the heart at all, but by a totally different enzyme in the blood vessels.
Another example: once I tried to reproduce an experiment that was published in Nature. It worked the first time, but then we couldn't repeat it. It turned out that the first time we had accidentally added ten times the correct amount of the critical reagent. The original authors had either made the same mistake or they had put the decimal point for the concentration in the wrong place.
We could never say any of that in a paper. Does this mean a paper is a ‘travesty’ as Medawar says? No. This is how the creative process works, and it's the biggest secret in science, because the last thing you want is for your <span teeth="clenched">esteemed colleagues</span> to figure out how to do their experiments properly.
Medawar's claim that there is no such thing as an unprejudiced observation is just the sort of thing relativists live for. They sit like vultures looking for ways to attack science. They want to believe science is a social construction, that everything we do is about power and privilege. It is not.
But neither is science a process of trying one thing, then another, and refining our ideas dispassionately until all the pieces fit together, the clouds part, and a glorious new shifted paradigm shines down upon the world. And it is certainly not grinding out hypotheses and ploddingly adducing more and more evidence for them until we have a solid case. Those who work that way get bored with science pretty quick.
There's a cottage industry now of people claiming that science is flawed, that there's a big crisis, that we need more rules and regulations to make sure our tax money isn't being wasted, and that maybe industry ought to do science instead of academia, because industry never gets irreproducible results.
Speaking as someone who has to buy the products that industry sells us, I'm skeptical about that. But in every creative endeavor, and that includes science, if you always get whatever you're looking for, you're doing it wrong.
I propose that the journal system be updated to a blog-like format with versioning, where researchers can ask questions and authors can make corrections as needed. When an author draws a conclusion that doesn't seem to follow, misses an important implication of the work, misses a typo, or includes a graph that appears deceptive (all of which occur often, even in top journals), a healthy discussion would give the author a chance to defend it or fix it without the overhead of retracting the entire article. Make it easier to fix mistakes, and we'll get fewer of them.
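As a rough illustration of what the versioning half of this might look like (a sketch under my own assumptions; the class names, fields, and example text are invented, not a reference to any existing platform):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Version:
    number: int
    text: str
    note: str                 # what changed and why, visible to readers
    timestamp: datetime

@dataclass
class Article:
    title: str
    versions: list[Version] = field(default_factory=list)
    comments: list[str] = field(default_factory=list)

    def publish_revision(self, text: str, note: str) -> Version:
        """Append a new version; earlier versions stay publicly
        readable, so a small fix never requires retracting the
        whole article."""
        v = Version(len(self.versions) + 1, text,
                    note, datetime.now(timezone.utc))
        self.versions.append(v)
        return v

# A reader questions a graph; the author answers with version 2.
paper = Article("On the charge of the electron")
paper.publish_revision("Original text...", "Initial publication")
paper.comments.append("Figure 3's y-axis looks truncated; deliberate?")
paper.publish_revision("Text with redrawn Figure 3...",
                       "Redrew Figure 3 with a zero-based y-axis")
```

The design choice that matters is that corrections append rather than overwrite: the discussion and the full revision history stay attached to the article for anyone to inspect.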
A few websites have succeeded in creating discussion platforms in which the comments are more informative than the articles. There's no reason why this template can't work in science.
Julius Axelrod used to say you should never worry about being wrong. I think what he meant was: if you're too afraid of making a mistake, you'll worry so much about getting the details right and being reproducible that you'll ignore the fact that the whole project was a ludicrously stupid idea. You'll stop thinking big, and you might as well stop doing science altogether and become an accountant.
jan 09 2020, 5:20 am. revised jan 12 2020, 6:29 am