
Corrosive claims about academic research

Industry scientists often claim that academic research is not reproducible. But they never show their own data.
by T.J. Nelson

Commentary

Way back in 2012, a guy named C. Glenn Begley made one of the most inflammatory claims ever hurled at academic scientific research. Begley, who had worked for pharmaceutical giant Amgen, claimed that he could reproduce only 6 of 53 major studies in cancer research: a reproducibility rate of only 11.3%. Since Sharon Begley publicized the result in Reuters, it has achieved wide notoriety.

In Begley's most recent article (Circ. Res. 2015:116, 116), he is a little more conciliatory. But the 2012 claim still stands. I've heard it over and over and over in my treks back and forth through the hallowed halls of industry and academia. Industry people cite it as an explanation of why their drugs so often inexplicably fail in clinical testing. Academics seem unwilling to discuss it. But failing to respond only helps the myth take root.

Industry people have always been defensive about the perception in academia that industry is where academics go after they fail. There's some merit to this idea: we all want our work to become famous, and it's a rare scientist who gets that in industry. But in general it's unfair—to succeed in academia you have to follow a strict path out of graduate school, without deviation. Take a detour, or stay too long in a postdoc position, or get stuck with a boss who slows you down, takes credit for your work, or forces you to work on dead-end projects, and you'll be passed over for that tenure-track job.

Some people are attracted to industry because it offers higher salaries and greater resources to researchers than academia. There are as many good, ambitious scientists in industry as in academia.

[Image: cultured cells / cultured neurons]
There's also no doubt that error occurs in biomedical research. I have lost many hours following up on findings of reputable scientists that turned out to be as evanescent as a spring breeze. For sure there's lots of research that turns out to be wrong (I have my pet peeves: inappropriate statistical tests and failure to calculate error propagation). But how accurate are these industry claims? What if that 11.3% number we keep hearing is itself irreproducible?

To find out, I looked up the 100 most recent papers in the research literature with the word ‘cancer’ in the title. Cancer is something that industry is actively researching, so there should be lots of papers from industry, and their results should be highly reproducible, right? Well, that's what I expected to find.

What I actually found was zero: no papers at all from industry. Not one. Five papers were collaborations between some corporation and an academic institute. Four had no affiliation listed. All the rest—91% of the total—were done exclusively in universities, hospitals, or government academic institutes. In no case was the principal author listed as being affiliated with a corporation.
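If you want to repeat this kind of tally, a rough sketch against NCBI's public E-utilities is below. The corporate-keyword test is only a heuristic I made up for illustration, and AD lines cover all authors rather than just the principal author; a real count means reading each affiliation line by hand.

    # Rough sketch: tally corporate-looking affiliations in the 100 most
    # recent PubMed papers with "cancer" in the title, via NCBI E-utilities.
    # The corporate keyword list is a made-up heuristic, not a standard.
    import re
    import urllib.parse
    import urllib.request

    BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

    def fetch(url):
        with urllib.request.urlopen(url) as r:
            return r.read().decode("utf-8", errors="replace")

    # 1. PMIDs of the 100 most recent papers with "cancer" in the title.
    term = urllib.parse.quote("cancer[Title]")
    xml = fetch(f"{BASE}/esearch.fcgi?db=pubmed&term={term}&retmax=100&sort=pub_date")
    pmids = re.findall(r"<Id>(\d+)</Id>", xml)

    # 2. MEDLINE records; AD lines hold the affiliations (all authors,
    #    not just the principal author, so this overcounts slightly).
    recs = fetch(f"{BASE}/efetch.fcgi?db=pubmed&id={','.join(pmids)}"
                 "&rettype=medline&retmode=text")
    affiliations = re.findall(r"^AD  - (.+)$", recs, flags=re.M)

    # 3. Crude test for a corporate affiliation.
    corporate = re.compile(r"\b(Inc|Ltd|LLC|GmbH|Corp|Pharmaceuticals?|Pharma)\b")
    hits = [a for a in affiliations if corporate.search(a)]
    print(f"{len(hits)} of {len(affiliations)} affiliation lines look corporate")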

Where is all the industry cancer research? If Begley is right, it should dominate the research literature, and it should be of sterling quality. But if it exists, it is largely inaccessible.

So I looked again at Begley's article. I couldn't find any description of his criteria for determining whether a result was reproducible, nor any statistics—only his narrative. Maybe they're published somewhere else; if so, I couldn't find them. Yet he openly accused cancer researchers of fraud: “They sometimes said they presented specific experiments that supported their underlying hypothesis,” he writes, “but that were not reflective of the entire data set.”

Sometimes some things occur, according to Begley. But as Carl Sagan said: extraordinary claims require extraordinary evidence. Where are the statistics? Where, even, is his evidence?

The comments on Begley's article were, for scientists, uncharacteristically impolite. They ripped his claim to shreds:

“Particularly relevant to ‘Hematology and Oncology’ we now know that mice housed under different conditions with different microflora can have vastly different outcomes in any model, not just cancer. To suggest academic incompetence or outright unethical behavior is offensive, and is a particularly narrow view of why experiments are difficult to reproduce. Further, as indicated in Table 1, the entire definition of not-reproducible hinges on a priori profit motive of ‘robust’ differences (whatever that means). There is always room for improvement in science, but this entire article is disingenuous and belittling to those of us who are on the front lines.”

Another commenter wrote:

“Which specific articles were picked, what criteria was used to categorize something as a Landmark finding, how were the claims tested, what reproducibility criteria were used, etc... speaking of cherry picked results, lack of controls, and poor publishing standards!”

I would have asked how Begley was able to repeat 53 major cancer studies in such a short time. If he spent two years on this, that works out to one major cancer study carried out every two weeks. His employer must be very proud to have such a fantastically productive employee.

What seems to be happening is a major cultural disconnect between industry and science. Industry expects a clear-cut, binary result that can be monetized with little risk. For an academic, a 20% reduction in growth might look big: it could be an important clue that no one else noticed. An industry person, looking to make a profit from it, expects the results to be laid out so they can be scooped up, zipped off to the USPTO, and churned out into a blockbuster drug. Sadly, that's not how science works. If the industry lab tries to reproduce a 20±5% result and finds a 20±10% effect instead, nudging the p-value above 0.05, the industry people say they couldn't find any significant effect; ergo, the result was irreproducible. Just as they always suspected. Academics, pffft.
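To make that concrete, here is a minimal sketch with hypothetical numbers. It assumes the ± values are standard errors of the mean from six replicates, tested against zero; the identical 20% effect lands on opposite sides of p = 0.05 purely because the second lab's measurements are noisier.

    # Hypothetical illustration: the same 20% mean effect with different noise.
    # Assumes the +/- values are standard errors of the mean (SEM) and n = 6.
    from scipy import stats

    def p_from_summary(mean, sem, n):
        """Two-sided one-sample t-test of the mean against zero, from summary stats."""
        t = mean / sem
        return 2 * stats.t.sf(abs(t), df=n - 1)

    n = 6                             # assumed number of replicates
    print(p_from_summary(20, 5, n))   # original lab, 20 +/- 5:  p ~ 0.01
    print(p_from_summary(20, 10, n))  # repeat lab,   20 +/- 10: p ~ 0.10, "irreproducible"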

I'm very pro-industry, and I like industry people, but I am seeing this attitude more and more.

I've also seen how they do research. Often it's excellent. But I've watched industry groups get stymied by problems caused by simple, avoidable mistakes: things like not filtering their fetal calf serum (FCS) before using it, or assuming that the drug they're testing will behave pharmacokinetically like every other drug in Goodman & Gilman.

These things are not always written down in research publications, and they are often omitted from textbooks. They have to be learned from experience. When I politely informed one group, stymied by the appearance of strange clumps in their cells, that in our lab we always filter our FCS before using it, they just started arguing. Their standardized and validated procedures were correct and I was wrong. After that I had grave doubts about their ability to do science, and stopped collaborating with them.

This highlights the difference between the two groups: in academia, you're penalized the most for not producing. In industry, you're penalized the most for being wrong. So much so that it often creates paralysis. If you can't create a paper trail of blame that leads away from you in case something goes wrong, you can't risk acting—or innovating.

None of this would matter much, except that this anti-academia bias is now being taken up and used by normally sensible people outside science who are aggravated by all the political correctness, restrictions on freedom of speech, and other nonsense coming from the liberal arts side of the campus. We're seeing calls to eliminate tenure and reduce dependence on public funds. Those ideas are worth discussing, but there are risks: privatizing research would make it vulnerable to drastic swings as companies turn to cutbacks and M&As to sustain past profits.

The industry is also feeling heat from anticapitalist critics, like this one at the BBC, who accuse it of profiteering. The public has a generally unfavorable perception of the pharmaceutical industry, so drug companies are eager to deflect criticism wherever possible. But skeptics might ask how the industry can criticize academics, whose research is easily accessible, while shielding its own research from scrutiny. We can only speculate how much of that 11.3% figure is real and how much is due to differences in skill, confirmation bias, and the hierarchical command system that industry uses. These attacks on academia are self-destructive, because they feed the anti-science bias of the quack cure vendors, who do not distinguish between the two types of research.

I'm sure corporate science is being done, somewhere. We just never see it, because they so rarely publish. So we have absolutely no idea how reproducible it is. But if some company out there found a cure for cancer, they would publish it somewhere ... right?

There are undoubtedly problems in science. But criticism will do far more harm than good if the criticism itself is based on bad data. If Begley and others want to engage with the cancer research community, they should show the cancer researchers their results. Otherwise, they don't have any.


Update: They'd have to, if it were a new chemical entity. Some people think that if it's not patentable for some reason, we might never hear about it. But pharmaceutical companies are economic entities that survive by making money; we cannot blame them for not doing what is impossible for them. Publishing their research would go a long way toward squelching those suspicions, and it would have the additional benefit of making the whole industry more creative.

Mar 15, 2015; last edited May 30, 2017
