randombio.com | Science Dies in Unblogginess | Believe All Science | I Am the Science
Wednesday, August 27, 2025 | commentary

What can be done about fake AI images?

We badly need better metadata and encrypted checksums. But are they enough?


People are finally realizing that “AI”, as it's charitably called, will make any images published by the news media suspect. Fake images have already turned up in the press. How can we deal with this problem?

Fake news doesn't need AI. Long before AI became a synonym for ‘fake’, reporters rigged cars to explode and hired actors who pretended to be suffering so they could promote their narrative. Maybe there's an upside: fake images will force people to become a bit more skeptical.

Fake images also distract us from the real problem, which is that reporters are ruining what little credibility they have left by being political (which is now a synonym for ‘lying’). That's even true in science, where the recent hysteria over manipulated images distracts us from the fact that anyone who wants to promote a phony result finds it much easier to design an experiment that guarantees the desired outcome than to futz around with an image and risk getting caught.

AI is fake, therefore AI images are fake

Some people say we need a law requiring any AI to put a tag on each image to indicate that it's a fake. The tag could be a watermark or a series of encoded pixels scattered within the image. It should be obvious that this won't work. Software could easily be written to remove the watermark. Blurring, cropping, rotating, or re-sizing could obliterate the tag. Or somebody could simply re-photograph the image. Of course, the government could prohibit copying images, as it already does with dollar bills, but you can see where this is going. It would fail miserably.

Checksums

Another suggestion is that creators of bona fide images could insert an encrypted tag that proves they're authentic. This would mean that cameras and cell phones would insert a checksum into the image as metadata attached to the image file. A checksum can be verified independently and can't be reverse-engineered; that is, it's computationally infeasible to create a new image that matches a given checksum.
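The idea can be sketched with an ordinary cryptographic hash. This is a minimal illustration in Python, not what a camera would actually embed (the image bytes here are invented for the example, and a real scheme would use a keyed signature rather than a bare hash):

```python
import hashlib

def checksum(image_bytes: bytes) -> str:
    # SHA-256 digest of the raw image data; changing even one
    # byte of the input produces a completely different digest
    return hashlib.sha256(image_bytes).hexdigest()

# hypothetical image data, for illustration only
original = b"raw image data from the sensor"
tampered = b"raw image data from the sensor, edited"

print(checksum(original) == checksum(original))  # verification is repeatable
print(checksum(original) == checksum(tampered))  # any edit is detected
```

The one-way property is the key point: given only the digest, there is no practical way to construct a different image that hashes to the same value.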

The image and its checksum would then be stored in a central repository. Web browsers would check it and give users a warning if the checksum was invalid. This would stop reporters from photoshopping images because a manipulated image would be flagged by the software. But an AI could create a checksum as well. So a checksum might prevent photoshopping, but it would have no effect on images created by AI.
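What would actually distinguish a camera from a forger is a keyed signature rather than a bare checksum: only a device holding the secret key can produce a valid tag, while anyone can verify it. A minimal sketch using an HMAC (the key and image bytes are hypothetical; a real deployment would use public-key signatures so verifiers never need the secret):

```python
import hashlib
import hmac

# hypothetical secret burned into the camera at manufacture
CAMERA_KEY = b"per-device-secret-key"

def sign(image_bytes: bytes) -> str:
    # tag that only a holder of CAMERA_KEY can compute
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    # constant-time comparison against a freshly computed tag
    return hmac.compare_digest(sign(image_bytes), tag)

photo = b"raw sensor data"
tag = sign(photo)
print(verify(photo, tag))                  # untouched image passes
print(verify(photo + b" edited", tag))     # photoshopped image fails
```

This illustrates the article's caveat as well: the scheme proves an image came from a device holding the key, not that the scene it depicts is real. An AI pipeline with access to a valid key could sign its output just as easily.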

Use AI

Okay, you might say. Why not get AI to do it for us: “Hey computer, could you tell me if this image is real?” The result would probably look like this:

Certainly! That's a fully authenticated picture of Marvin Murple snogging Secretary of Defense Abe Lincoln in 1945 while standing on the deck of the USS Harvey Milk, which was renamed to the USS Caitlyn Jenner in 2037 by President Carrot Top.

It should be obvious that this won't work, either. You cannot fix the problem of too much AI by adding more AI.

In fact, there's actually nothing new about creating images that misrepresent reality. That was practically the definition of classical landscape and portrait painters before the invention of the camera forced them to give up and turn to abstraction.

Finding a technological solution will be challenging. Just as the expression “It's a free country” has disappeared from the popular lexicon, no one ever says “The computer doesn't lie” anymore. Maybe it makes sense that we'd look to a computer for truth instead of trusting a human. But contrary to what people think, lying doesn't require intentionality; a disregard for the truth is also a form of lying. We build computers in our own image, and that is why they lie.

aug 27 2025, 5:15 am


Related Articles

How AI will affect image processing
Hint: more complicated browsers, fatter books, more expensive software, all new computers, and higher electric bills

How to identify fake AI-generated images
Images generated by "AI" are hard for image forensics software to identify, but science is safe . . . for now

Email as a cloud storage mechanism
People are using their mail server as a form of online storage.

More fake fake news news to fear, I fear
Humans are only able to fear one thing at a time. In the end, there can only be one thing to be afraid of, I'm afraid

