randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Tuesday, December 26, 2023 | computer commentary

How AI will affect image processing

Hint: more complicated browsers, fatter books, more expensive software, all new computers, and higher electric bills


Almost all the high-end computer workstations at HP and Dell are now advertised as "AI capable". They come with 136 TB of storage and 1 TB of DDR5 ECC RAM, and the description reads something like this:

Take on Processor-Intensive Workloads. Relentlessly. Recommended for 3D rendering with real-time ray tracing, virtual production, VFX, color grading, finite element analysis, ML/AI/DL, model training, fine-tuning, inferencing, computer vision and natural language processing.

For those of you who do not know, ML = machine learning, AI = artificial intelligence, and DL = deep learning. Why? Are that many people really doing neural network simulations? Or are they all creating fake celebrity porn images for people with a fetish for six fingers and soulless eyes?

The correct answer is 'none of the above.' People are finally figuring out that AI really means artificial information. You've seen it already: that Palestinian kid with his hand raised who somehow got six fingers. That picture of former President Trump running from the cops. Or that actress who is so modest she only managed a single good on-screen kiss in her entire career suddenly doing, shall we say, implausibly detailed close-ups.

All the fake reviews, fake fact-checkers, fake news, and fake body parts might be entertaining to us, but where there's money involved, there's pressure to do something about it. When you can't trust a call from your grandma telling you she can't get up, images in a scientific paper claiming to have cured cancer, or a report in the press claiming that WWIII started last night, change will be imposed on us whether we want it or not.

What will happen is that every image, every phone call, every email, and every tweet* will be encrypted and will come with a certificate of authenticity. Every camera, word processor, and sound recorder will be required to encrypt its data and embed a certificate in each image and each block of text to prove it wasn't created by an LLM.
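No standard exists yet, but the mechanics would presumably look like any other digital signature: hash the bytes, sign the hash with a key baked into the device, and ship the result alongside the data. Here is a minimal sketch in Python using the cryptography library; the certificate format and field names are my own invention, not any real spec.

    # Sketch of a camera signing an image. The key would be burned into
    # the hardware; the "certificate" format here is purely hypothetical.
    from hashlib import sha256
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def make_cert(data: bytes, key: Ed25519PrivateKey) -> dict:
        digest = sha256(data).digest()            # fingerprint of the raw bytes
        return {
            "sha256": digest.hex(),
            "signature": key.sign(digest).hex(),  # camera vouches for the fingerprint
            "public_key": key.public_key().public_bytes_raw().hex(),
        }

    key = Ed25519PrivateKey.generate()  # stand-in for a factory-installed key
    pixels = b"raw image bytes"         # stand-in for the actual file contents
    cert = make_cert(pixels, key)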

Doing things the old way is no longer an option when your browser refuses to render an image without a valid cert. Every form of information, not just Internet communications, will need it. Even printed documents will need a scannable QR code to prove they're not generated by AI. This will all take horsepower. We'll have to buy new cameras, new software, and more powerful computers to handle the load. Programmers are preparing for it: even my humble image analysis program now handles encrypted images. It's only a placeholder so far because no standard is yet available.
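The verification side is what the browser would do before rendering: recompute the hash, check the signature, and refuse to draw anything if either step fails. Continuing the hypothetical sketch above:

    # What a browser might do with an image and its cert before rendering.
    from hashlib import sha256
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_cert(data: bytes, cert: dict) -> bool:
        digest = sha256(data).digest()
        if digest.hex() != cert["sha256"]:
            return False                          # bytes changed after signing
        pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(cert["public_key"]))
        try:
            pub.verify(bytes.fromhex(cert["signature"]), digest)
            return True
        except InvalidSignature:
            return False                          # forged or corrupted cert

A real system would also have to check that public key against some chain of trust; otherwise anyone could sign their fakes with a freshly generated key.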

So what happens to all those candid snapshots of our unloved ones accidentally falling into the Grand Canyon? What about those stacked astrophotographs that show a comet, which moves at anything between 2,000 and 100,000 mph, as if it were stationary against a background of fixed stars? Without metadata and a valid cert, they're outed as fakes: they can't be used against you in a court of law, and real astrophysicists can't use them to laugh at you.

Text isn't immune. Word processors, especially those used by students and news reporters, will have to certify each block of text to prove it wasn't created by an LLM. Exactly how that will work is unclear, but publishers already run every article you write through plagiarism-detecting software, which is a nightmare for researchers who describe the same biochemical assay in the methods sections of two different papers.

And we need better metadata: not just exposure levels, dates, and f-stops, but a record, embedded by every piece of software that touches the image (including the camera's own firmware), of all operations performed on it, from sharpening and contrast enhancement to warping and copy-move forgery. Bureaucrats will love the extra paperwork, but so will anyone whose images show something that people are likely to act on. That means anyone who has an enemy (translation: anyone in academia) will need it.
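Since no standard exists, here is one guess at what such a record might look like: a log that travels with the image, each entry chained to the previous one by its hash, so that an operation can't be quietly deleted later. All the field names here are invented for illustration.

    # Hypothetical edit-history log for an image. Hash-chaining the entries
    # means removing or altering one invalidates everything after it.
    import json, time
    from hashlib import sha256

    def append_operation(log: list, operation: str, params: dict) -> None:
        entry = {
            "operation": operation,   # e.g. "sharpen", "warp", "clone-stamp"
            "params": params,
            "timestamp": time.time(),
            "prev_hash": log[-1]["entry_hash"] if log else "0" * 64,
        }
        entry["entry_hash"] = sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)

    log = []
    append_operation(log, "sharpen", {"radius": 1.5})
    append_operation(log, "contrast", {"gamma": 0.8})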

Even with all this baggage, we'll still have the same problem we have today: maybe you'll be able to trust that the news media are really the news media, but what we really need is an AI that can tell us when a reporter is lying. Or, for that matter, our car mechanic.

The goal of encryption is to make an eavesdropper's probability of guessing any given bit correctly as close to 50% as possible. The same is true of lying. If everything someone says is a lie, you still get the truth by assuming the opposite of whatever he says. But if 50% is true and 50% is false, you get no information at all. The liar is using an unbreakable code. So you might say one form of lying is needed to counteract another.
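In information-theory terms, a source that tells the truth with probability p delivers 1 - H(p) bits per statement, where H is the binary entropy (assuming a yes/no question with even prior odds). A quick check:

    # Information per statement from a source that lies with probability 1-p.
    from math import log2

    def binary_entropy(p: float) -> float:
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    for p in (1.0, 0.9, 0.5, 0.1, 0.0):
        print(f"truth rate {p:.1f}: {1 - binary_entropy(p):.2f} bits per statement")
    # 1.0 and 0.0 both give 1.00 bits; 0.5 gives 0.00

A perfect liar and a perfect saint are equally informative; the 50/50 liar tells you nothing.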

How can we know whether anyone is telling us the truth? It would require a level of AI far beyond what we have now. What passes for AI today would win no prizes for deciding whether Pfizer or our politicians are lying, but an AI that could decide all truth would become a god. And that's what the humans crave: a mechanical deity that will decide right and wrong, true and false, good and evil, so they don't have to.

If by some miracle we got that, how would we ever know whether the AI was being truthful? AI is already opaque. It's also been shown to lie and invent facts whenever it 'feels' like it. There's nothing worse than a deceptive, capricious god. So it seems like the universe will have to have certificates all the way down.

It is a good day to crawl into a corner and wish we could move to a cabin in Finland and survive by hunting reindeer like that guy in Hanna.

* I still call them tweets. 'X's shall not stand. You have to take a stand somewhere.


dec 26 2023, 5:54 am. updated dec 27 2023, 3:04 am


Related Articles

Can AI really diagnose Alzheimer's disease?
What does the new reliance on computer databases do to science? Nothing good

Digital Image Forensics Theory and Implementation
book review

Plagiarism engines and linguistic gray goo
ChatGPT4 fails the Turing test. Also, scientists discover that water is wet

How to do bad image forensic analysis
Scientific journals are paying experts to analyze images submitted by researchers. They're not very good

Censorship in Science
Scientific journals are using computer programs to ignore the real threat and focus on fake problems


