randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Tuesday, November 14, 2023 | science commentary

Can AI really diagnose Alzheimer's disease?

What does the new reliance on computer databases do to science? Nothing good


The average person could be forgiven for thinking those big pharma drug trials showed us what causes Alzheimer's disease, but it's not so. Despite the FDA's rubber-stamping of immunotherapies, still no one knows what causes AD. Even if—and it's a big if—beta-amyloid (Aβ) is more than just an epiphenomenon, the question of what causes it to increase remains.

It's bad enough that the science media think the problem is solved. Now NIH thinks so too, or more precisely the people at the Center for Scientific Review, who determine the topics researchers can write grant applications on. I worked with many of them back when they were doing bench science. They were good scientists then, but it's evident that once they moved to government they were unable to go against the flow. A few disagree, but they're a distinct minority and they're forced to follow the party line.

Over the years, there have been more papers by researchers claiming to have found a biomarker for AD than I can count. It is one of those projects that sounds simple, but is actually a nightmare. Not one has ever held up, for the simple reason that we don't understand the pathogenesis.

One biomarker, phospho-tau, seems to have won the lottery, beating even the Aβ42/Aβ40 ratio, which is decreased in blood and CSF. That is the opposite of what you'd expect: Aβ42 is greatly increased in AD brains, and no one knows why. One theory, that presenilins 1 and 2 were somehow altered to favor Aβ42 over Aβ40, didn't hold up. There are no reliable changes in β-secretase, the enzyme that catalyzes the rate-limiting step in Aβ synthesis, and none of the secondary changes, like phospho-ERK1/2, cholesterol, or copper, ever held up either. Many of the academics who rushed to form companies to commercialize their supposedly big discoveries learned the humiliation of going out of business.

Now the fad of "AI" has spread to diagnosing AD. According to a non-peer-reviewed article on bioRxiv, an AI was able to determine a patient's sex from brain scans with 94.9% accuracy, so the authors went on to train it on a big AD database. Good luck. How will the AI tell AD apart from FTD, PD, VaD, or any of a hundred other neurodegenerative diseases just by looking at brain scans? Simple: it's AI, it's an oracle. The computer is never wrong.

That mentality is a sign that biomedicine is falling into the same trap that climate studies fell into years ago when they started using computer simulations instead of doing empirical research. They assumed that the radiative transfer equations written by their predecessors could be applied to climate. They had to tweak the hell out of them before they could even reproduce the current climate, which makes any predictions of the future scientifically invalid.

Big databases are now an important way of doing clinical research. If you propose a new clinical study, NIH will refuse to fund it if any part of what you intend to measure is already in a database. While relying on a big database is appealing because it beefs up your N and eliminates the bureaucratic entanglements of getting IRB approval, it runs counter to a fundamental principle of science: if you don't do hands-on research yourself, you will never make a serendipitous discovery.

My original mentor, a famous biochemist in his eighties, knew this. He made a practice of spending one day a week at the bench so as not to forget what it means to do science. We thought he was just puttering around, but he was wise: leave the bench too long and you end up as Alice in Wonderland. NIH should set up labs in Rockville to allow the program officials to keep their lab skills sharp. Letting them do basic things like blots or virus gain-of-function experiments once in a while would help them understand that what's in those glossy science mags is often closer to wishful thinking than science.

Couple that with industry's glacial progress in creating an affordable, high-resolution way of measuring proteins and you have a recipe for stagnation and failure. Many labs have abandoned Western blots as being too risky. The number of controls that are needed and their intrinsic variability make them vulnerable to amateur sleuths, who go after any paper containing a Western blot and find flaws whether they're real or not. And so scientists spend their days looking in huge databases for statistical correlations, like that guy in Taiwan who finds them everywhere.

Spurious correlations show up frequently in AD studies. For instance, a correlation between hip fractures and AD was explained by the higher incidence of both osteoporosis and AD in women. The reported correlation between gum disease and AD might be important, or it might simply mean that AD patients are forgetful about dental hygiene. If your study is limited to someone else's database, no matter how big, you can't do a follow-up study, so you may never know.

Even worse is that the loss of hands-on contact with patients deprives researchers of anecdotal information that would otherwise be available. For instance, did the patients getting a drug for depression experience any memory loss? If it's not in the database, there's no way to know. I found these unanswered questions to be enormously frustrating when I analyzed the data from our clinical drug trials.

Science is not just plodding forward to grind out data using established methods. It is also exploratory and relies on intuition and serendipity.

Of course, the bean-counters who set the rules can't conceive that such a thing as serendipity exists, let alone that it's worth funding, because no researcher can prove that it will happen. And so it becomes a self-fulfilling prophecy.


nov 14 2023, 5:23 am. updated nov 15 2023 5:02 am


Related Articles

Western blotting must die. All those retracted papers will kill it
Why in the world are people still trying to get reliable results with the most unreliable method ever invented?

What new technologies are needed in biology?
I don't know about anyone else, but I've had it up to here with Western blots—and rats

The FDA's approval of aducanumab is a blow to Alzheimer research
Biogen cut corners on their clinical trials, leaving scientists wondering: is it beta-amyloid or not?


