randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Thursday, Aug 04, 2022 | Science Commentary

Four myths about science

It's important for the average person and for the media to understand what science can and cannot do


Last week science was hit hard by the second of two major scandals: eleven papers on cancer research at Ohio State University were alleged to contain falsified data. A week earlier the same thing happened in Alzheimer's disease research. What's going on? Is science in trouble?

Getting science right is probably the most important task that anyone can undertake. Europe is facing catastrophe this winter (and possibly risking civil unrest) because of bad decisions based on questionable science. Sri Lanka has consigned its citizens to possible starvation because of it, and two years ago science administrators here in the USA competed to see who could disseminate the most inaccurate predictions about Covid.

Given the stakes, there is no longer any reason for any scientific journal to demand more than a token payment to read scientific articles. The writing is done entirely by unpaid contributors and the peer reviewing is done entirely by volunteers; the only real costs are editorial staff and Internet bandwidth. Academics can usually get articles through their libraries, but the barriers need to come down for the public, who need to understand what's going on in science. Before that can happen, however, it's necessary to dispel some of the misconceptions people have about science.

Myth #1: A scientific article is the last word on the subject.

People sometimes ask what percentage of scientific findings are correct. This is the wrong question. Over the years many self-proclaimed science-checkers have made fools of themselves by claiming that they tried and failed to reproduce large numbers of scientific findings.

There are many reasons why such a task is impossible. When I was a grad student, the lab I worked in had a technician whose sole job was to perform a specific measurement. She had done this same measurement five days a week for at least ten years. If some hotshot fact-checker tried to reproduce it, they'd get standard errors at least twice as big as this technician got, pushing the result below the level of statistical significance. No doubt they'd claim that they couldn't reproduce it, when in fact their gigantic egos prevented them from recognizing that they actually sucked in the lab. Since then I've seen this phenomenon over and over.
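To make the arithmetic concrete, here is a minimal sketch in Python with invented numbers (the effect size, the standard errors, and the p < 0.05 cutoff are assumptions for illustration, not data from that lab). It shows how a standard error twice as large turns the same underlying effect from "significant" into "failed to replicate."

```python
# A minimal sketch, with invented numbers: the same real effect measured with
# a standard error twice as large no longer clears the conventional p < 0.05 bar.
import math

def two_sided_p(effect, standard_error):
    """Two-sided p-value for a simple z-test of effect / standard_error."""
    z = abs(effect) / standard_error
    return 1 - math.erf(z / math.sqrt(2))

effect = 1.0              # same measured difference in both hands (arbitrary units)
se_technician = 0.4       # small standard error after years of daily practice
se_replicator = 0.8       # "at least twice as big" for the would-be fact-checker

print(f"technician: p = {two_sided_p(effect, se_technician):.3f}")   # ~0.012 -> significant
print(f"replicator: p = {two_sided_p(effect, se_replicator):.3f}")   # ~0.211 -> "couldn't reproduce it"
```

Nothing about the effect changed between the two lines; only the precision of the hands doing the measurement did.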

In science, new is not necessarily better. Just because something that conflicts with previous research manages to squeak through peer review does not mean the earlier results are wrong.

No matter how definitively the authors state their conclusion, a scientific conclusion cannot, even in principle, be anything more solid than a reaffirmation that the starting hypothesis didn't go down in flames. That doesn't mean the hypothesis is correct. Indeed, some of the most important papers in recent years contained findings that the authors themselves completely overlooked: they had a bad hypothesis and drew the wrong conclusions, but when other people looked at their data it moved the field forward anyway.

The only thing that is true in a scientific paper is the experimental data. The rest of it, including the abstract, introduction, and discussion, is just fluff added because the editors demand it. Often the authors change the hypothesis when their evidence disproves it. This is a bad thing to do in clinical research, but it's also bad in basic research because it creates the false impression that scientists are always right. Don't like it? Talk to the editors who won't publish negative results.

A scientific paper is not a record of new knowledge. It is a proposition: here is what we found and here is what we think it means. The more definitively a conclusion is stated, the greater the chance that it is wrong.

Myth #2: A computer model provides knowledge.

The other day an article showed up in the press describing a computer model from researchers at the University of Cambridge that supposedly predicted that global warming would trigger nuclear war, a financial crisis, and an extinction-level pandemic by 2070.

If not for the seriousness of the claims, the paper would be good for a laugh. The nice thing about making predictions is that you don't need any evidence. If I said my crystal ball made these predictions, no one would believe me. If I said my computer model made them, suddenly people panic. But there really is little difference.

There are still honest scientists earnestly studying the climate and trying to find out what's happening. But ‘global warming’ ceases to be science when the methods used to predict it become unscientific.

A computer model can never produce new knowledge. At best, if it's programmed honestly, it may be a useful way of generating hypotheses: if the assumptions we put into it are true, and if it works the way we think, can it explain the observations? The only time you gain knowledge is when the computer predicts something incorrectly. You then learn that you need a better understanding of the phenomenon. Tweaking your program to match the data or to obtain a specific result, no matter how good your intentions, is fraud.

Computer models never work the other way around. It is never valid to say “If the thing that the computer predicts happens, then our assumptions were correct.” And you're just indulging in tasseomancy if you say “Our computer predicts it will happen, therefore we must do this, that, and the other thing.”
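That asymmetry can be written down in a few lines. The sketch below is purely illustrative: the toy "model," its assumed growth rate, the observation, and the tolerance are hypothetical placeholders, not anything drawn from the Cambridge paper. Only the mismatch branch teaches you something; the match branch licenses no conclusion about the assumptions.

```python
# Purely illustrative: a toy "model" with one baked-in assumption, compared
# against an invented observation. The numbers mean nothing outside this sketch.

def toy_model(assumed_growth_rate: float, years: int) -> float:
    """Predicts a quantity from a single assumed exponential growth rate."""
    return (1 + assumed_growth_rate) ** years

prediction = toy_model(assumed_growth_rate=0.03, years=10)
observation = 1.10        # what was actually measured (invented)
tolerance = 0.05

if abs(prediction - observation) > tolerance:
    # Valid inference: the prediction failed, so the assumptions (or the model
    # itself) are wrong somewhere. This is the only branch that yields knowledge.
    print("Prediction missed; the assumptions need rework.")
else:
    # Invalid inference would be "the prediction matched, therefore the
    # assumptions are true." A match only means the model hasn't been ruled out.
    print("Prediction matched; the assumptions survive, but are not thereby proven.")
```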

What is often ignored in the global warming dispute is that carbon dioxide is essential to life on Earth, and that over the past half billion years much of it has been rendered inaccessible. Most of the limestone buried in the Earth is made of carbon that was once part of our biosphere, as is every drop of the oil and every gram of the coal. Coal and oil are rich sources of valuable organic chemicals, so an argument could be made that it would be better to save them for manufacturing plastics and chemicals instead of burning them as fuel. But if we wanted to preserve life, we'd be frantically trying to restore the former high level of carbon dioxide to the atmosphere, not reduce it.

You may or may not agree with that opinion—it is a judgment call—and that is the point: no computer model can ever make judgment calls for us. Computers aren't magic oracles no matter how smart the programmer may be.

Myth #3: Scientific articles are technical and hard to understand.

Yes, some background knowledge is needed to understand them. It's also true that some scientists are just not very good writers. I read a paper just the other day where the authors went on for five pages about some new miracle drug for Alzheimer's disease but forgot to tell us the name of the drug, where they got it from, or how they synthesized it.

Readers expecting flashy writing, colorful adjectives, and simple definitive factual statements often get bogged down and give up. Scientists are trained to state only the facts. One does not read a scientific paper, one studies it, and it can take hours. It helps to write down the acronyms and pertinent facts as they occur. Check the statistics and ask whether the experiment was done fairly.

The hardest part is not in understanding what was written but in evaluating whether it was done correctly, whether it tells us anything new, and whether the conclusions follow. It takes years to get to that point, and even scientists can get it wrong.

Myth #4: The news media represent scientific findings accurately.

Although some people cling to the belief that the news media are required to state the truth, in fact, whether through scientific illiteracy, a desire for clicks, or mendacious intent, the news media rarely succeed in describing a scientific finding accurately.

Often the press falls for Myth #1, thinking that a newer finding automatically invalidates all previous findings. Or they fall for Myth #2 and believe that a computer model really can predict that global warming will start a war. In earlier times, reporters would contact recognized experts and ask their opinion. Nowadays they often just go to somebody they know will affirm what they want to write, or they go with the outrage factor instead, to everybody's annoyance.

Funding agencies and bureaucrats, who hold researchers' careers in their hands, have to decide: do they want results fast or do they want them right? They can't have both. The more time we spend writing up useless findings to appease the bureaucrats, the less time we have to search for cures. All scientists would prefer to get grants only on topics that are scientifically interesting. Unfortunately, that's not an option in today's environment. If we bureaucratize science even more to get rid of the scandals, then science really will be in trouble.


aug 04 2022, 6:20 am



