science commentary

Fraud in Science

Where there's misconduct in science, there's invariably a deep reservoir of social and managerial pathology that creates it.

by T.J. Nelson


There's much tearing of hair and gnashing of teeth these days about misconduct in science. But why? Why don't we care as much about fraud in corporate management, or fraud in the janitorial staff? Or how about fraud in computer programming, the effects of which I'm observing right now, as my browser struggles to display this page without crashing?

Part of the reason may be that scientists, unlike computer programmers, plumbers, and janitors (important as their work is), are held to a higher standard because their currency is the truth; partly because of this, their work profoundly influences the future of our civilization (although it's been argued that plumbers have done more for civilization than almost anybody else).

I have witnessed people publishing questionable data many times. It's impossible to fight against it: the perp always seems to be the most arrogant twonk in the lab. Trying to convince them their calculations or procedures are wrong only creates a bitter, determined enemy—and if they will lie in their publications, imagine what they will say about you to your boss. In fact, there's so much pushing of limits nowadays that the best you can do is to make sure it doesn't happen on papers where you're the first author.

Yes-men are in the same category as fraudsters: telling the boss whatever the boss wants to hear is the same, psychologically, as creating a fake story for a scientific journal. The journal, like the boss, is an authority figure that offers benefits when appeased.

Swan Nebula. Not even a real swan.

Unfortunately, you can't avoid arrogant people or yes-men, and if the boss is guilty of misconduct, there's not much you can do about it other than taking your name off the paper.

The late Horace Freeland Judson wrote in his 2004 book The Great Betrayal that he hoped electronic publishing and open review, where the referees are no longer anonymous, would be a solution.

We now know he was wrong. Making people sign their names to referee reports would kill science because it would guarantee retaliation. You'd never give a critical review, because the other guy would make sure you'd never get a favorable one. And we've seen that unrefereed preprint servers like arXiv and minimally refereed journals like PLoS ONE aren't the solution either. They have value: they provide an outlet for offbeat ideas, and they give cranks and eccentrics a place to publish their stuff, keeping it out of the real literature. But they can do little about misconduct, and might even make it worse.

Peer Review

Peer review works. I've rejected a few papers because of obvious fraud, and feedback can be of enormous value to junior researchers. Last week I spent most of a day reviewing a paper by a Chinese group who had a promising idea but little experience. Their English was almost as bad as their technique, and there were strange anomalies in their figures, but there was also an ember of good science there. For these cases, peer review provides an option short of rejection: ‘major revision.’ Do it over and fix the mistakes.

But ethics training does not help. At the risk of sounding like Rutger Hauer, you wouldn't believe the things I've heard people say at those meetings. One ethics session I attended turned into a smorgasbord of self-aggrandizing lies.

It's hard to distinguish between misconduct and bad judgment. David Goodstein has a good discussion of this in his book On Fact and Fraud. Goodstein is an actual researcher who knows that a major part of science is judging whether the data make sense and having the courage to accept an outrageous result. As an example, he cites Pons and Fleischmann's cold fusion paper. It was not fraudulent, but if you read it you shake your head, wondering how such good scientists could fool themselves with such weak data. The cold fusion fiasco was as close to show biz as we ever got in modern times, until the Circus of Global Warming came to town.

Fraud sometimes happens because scientists think other scientists do it, or because they think their work is being treated unfairly. But mainly it happens because their psychological well-being and social status depend on convincing themselves and others that they're always right.

Fraud comes from above, not below

Many people think fraud can be solved by keeping tighter control of postdocs and junior scientists. But for junior scientists fraud is almost always caused by the threat of losing their career. Sometimes it's imposed on them by their supervisors. I know a guy who ran an experiment three times; twice the result was consistent, but disproved his boss's theory. The third time one single data point skewed the curve, giving a false result. The boss insisted that the false curve was the correct one, because it supported his theory, and put enormous pressure on him to publish it. He had to repeat the experiment eleven more times before he could drive home the point. It wasted three months of this guy's time and cost thousands of dollars.

Even if they don't get fired outright, it is very hard for postdocs to risk doing this: three months of unproductive work can easily kill your career. For foreign postdocs, it could mean being forced to leave the country. I suspect many people would have simply given the boss what he wanted.

To this day, I am told, that boss still thinks the original result is the correct one. With some people, their narrative becomes part of their self-identity, and they can't shake it. Any challenge to that narrative becomes a threat, and is dealt with accordingly. That means doing whatever is necessary to beat the data into shape and firing any employees who are unenthusiastic about it.

But it's rarely outright fabrication. Most often it's minor things that could be passed off as mistakes or bad judgment: cherry-picking results, claiming significance where there is none, and browbeating subordinates into staying quiet when a paper says their research shows something it doesn't. I once read a paper in which an entire section had been plagiarized from a paper by a different author. If you have a good memory, you can find many papers like this. (In that case they didn't plagiarize any results, just some uninteresting paragraphs in the introduction.)
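
To make the cherry-picking point concrete, here is a toy sketch (invented data, not taken from any paper discussed here; it assumes numpy and scipy are available) of how reporting only the best of many runs manufactures "significance" where there is none:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 20

# Twenty simulated 'experiments', each comparing two groups drawn from the
# same distribution, so there is no real effect to find.
p_values = []
for _ in range(n_experiments):
    a = rng.normal(10.0, 3.0, size=10)
    b = rng.normal(10.0, 3.0, size=10)
    p_values.append(stats.ttest_ind(a, b).pvalue)

# Report every run and you have nothing. Report only the best run and you
# have a 'result': with 20 tries you expect roughly one p-value below 0.05
# by luck alone.
print(f"median p over all runs:      {np.median(p_values):.3f}")
print(f"best (cherry-picked) p:      {min(p_values):.3f}")
print(f"runs 'significant' at 0.05:  {sum(p < 0.05 for p in p_values)}")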

The sad part is that misconduct almost always hurts the case they're trying to make. When someone publishes a computer simulation that's rigged to produce the desired result, only non-scientists are convinced by it. Colleagues inside the field trying to get to the truth are hurt the most, because their own work, which reaches more nuanced conclusions, becomes unpublishable. As always, when humans are involved, the bad drives out the good.

Bureaucratization of science

Just last week I helped a colleague with his statistics. He'd been using a one-tailed instead of a two-tailed test, and made a few other mistakes, resulting in a p-value of 0.01 (below 0.05 is considered statistically significant). When done correctly, his number was closer to 0.20. This type of mistake seems to be fairly common—but what causes it?
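
For anyone who wants to see the difference, here is a minimal sketch (made-up numbers, not my colleague's data; it needs scipy 1.6 or later for the alternative keyword) comparing the two tests on the same data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 2.0, size=8)   # hypothetical control group
treated = rng.normal(11.5, 2.0, size=8)   # hypothetical treated group

# Two-tailed: asks whether the means differ in either direction.
_, p_two = stats.ttest_ind(treated, control, alternative="two-sided")

# One-tailed: assumes, before seeing the data, that 'treated' can only be
# larger. If that assumption wasn't made in advance, this simply halves the
# p-value whenever the data happen to lean the expected way.
_, p_one = stats.ttest_ind(treated, control, alternative="greater")

print(f"two-tailed p = {p_two:.3f}")
print(f"one-tailed p = {p_one:.3f}")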

In almost all cases, it's the reward system. In industry, where the boss has absolute power over your career, you're rewarded for providing results that match the boss's expectations and penalized for providing results that don't. Industry recognizes this and responds by bureaucratizing their science. In academia, the role of the boss is most often played by the journal, and the pressure to tweak data points and inflate statistics is balanced by the need to maintain one's scientific reputation. Since academics maintain a public persona, this can be a powerful deterrent.

Bureaucratizing science isn't the solution. Aside from driving up costs, it dampens flexibility and creativity. We are already seeing these costs in academia: academics are abandoning the use of radioisotopes and animals, even though doing so hurts their science, because of the paperwork.

The latest bureaucratic fad is eliminating chemicals and glassware. We are not allowed to place any glassware (including empty beakers) higher than 51 inches above the floor. Unfortunately, the lowest shelf in our lab is 51.75 inches above the floor, so our expensive lab shelves are mostly empty. Last year, our local bureaucrats floated the idea of imposing a fine for keeping any chemical we hadn't used in more than a year. That idea bombed, but it will be back, because bureaucracies always expand to fill the space available for controlling others. We already have to spend time documenting the quantities of every chemical on hand (a lab may have thousands of chemicals, in 1- to 10-gram quantities). Soon we will probably have to account for every beaker and test tube.

We're starting to see fatalism as a response to all this: if society values filling out paperwork more than curing diseases, that is what we'll do. The attitude is generally one of weary acceptance, knowing that industry has it worse. We will grieve for the bureaucrats when they die from the disease they didn't want us to cure, but society pays the biggest price.

Fake results are rare

Tenure-track academics in the sciences are in a position where failure to obtain a grant means no tenure and the end of a career. A high-profile published result can make the difference between saving a career one has worked twenty years toward and gone deeply into debt for, and losing it. The pressure to discover something important, and do it fast, is enormous.

Despite this, in all my years of doing science, I have never seen outright fakery. I've never seen anyone try to publish an experiment that was never done. In extreme cases ambitious postdocs have been known to do nasty things like putting radioactive P-32 in each other's food, but, as when people accuse each other of fraud, it's almost always because of some personality clash—often caused by incompetent management. The accusations are more often fraudulent than the fraud itself. Honest mistakes, where someone reads a blot upside-down or a discovery doesn't hold up for unknown reasons, vastly outnumber the cases of deliberate data forgery.

In his book Judson accuses Mendel, Pasteur, Freud, and many other famous long-dead scientists of fraud. I can't say whether any of his accusations are true (and I strenuously object to anyone calling Sigmund Freud a scientist), but the only solution I've seen proposed so far is to make science more bureaucratic to make misconduct easier to trace. That's like bombing the visible part of the iceberg.

Fraud is a problem for science, because the bad always drives out the good. But there are an infinite number of ways an ingenious person could fake a result. Where there's misconduct, there's invariably a deep reservoir of social and managerial pathology that creates it. That is what would have to be addressed.

Scientific misconduct is a symptom of arrogance, narcissism, and social pathology. If you want to eliminate it, you will have to get rid of all the psychopaths, the arrogant twonks, the narcissists, and the yes-men: all the pathologies that afflict every profession.

The problem is, if we do that, some days it seems like our science buildings will be nearly empty afterward.

Related Articles

Corrosive claims about academic research
Industry scientists often claim that academic research is not reproducible. But they never show their own data.

What is the value of computer modeling?
If mathematical models are done badly, they will discredit an entire branch of science. It's happened before.

Should you pursue a career in science?
Being a research scientist can be a highly rewarding career. What you discover could change how people make their toast in the morning, or it could change how civilization evolves--maybe even prevent the next Dark Ages.


May 04, 2015; revised May 23, 2015. Last updated July 10, 2015.
