randombio.com | philosophy | Friday, July 25, 2025

Can artificial intelligence ever be conscious?
Brain science says yes, religion says no. Just don't confuse a chatbot with something intelligent
The question of whether an AI can ever be conscious is a perfect problem for
philosophy: the term ‘consciousness’ has so many contradictory
meanings one can argue in circles; and AI doesn't yet exist—and might
not exist for decades—so there's no chance of being proved wrong in
the foreseeable future. The question may be unanswerable or even meaningless,
like whether we have ‘free will’.
The good news is that unanswerable questions are the only things worth arguing about: if the question were answerable, someone would have answered it long ago and we could just cite the result.
Suppose we've decided what we mean by consciousness. Let's assume we can rule out ‘awareness’, which means that a piece of information is accessible to our senses. We rule out ‘awakeness’, which means the individual is not sleeping or comatose. And we rule out ‘qualia’, which are collections of associations within the mind that elicit other memories and emotional states.
We're left with ‘subjectivity’, or internality, which means that information is represented in the person's mind from that person's specific frame of reference. Right away we can see that we're talking about something complementary to the principle of special relativity, wherein all frames of reference are rigorously equivalent. So what we need is a theory. The beauty of a theory is that it might give us clues about how to measure consciousness, which is impossible at present. Proof of this lies in the failure of the Turing test to give a credible result.
Despite repeated attempts[1] by some scientists to dismiss this so-called hard problem of consciousness as a psychological illusion, consciousness remains a mystery. There have been many attempts to solve it (reviewed here)[2], but when people invoke “post-materialistic phenomena”[3], quantum mechanics, or theories like IIT, which avoids the fundamental issue of how physical and phenomenal elements could interact,[4] neuroscientists abandon the field, sensing that unscientific mysticism is afoot. (Of course, they also know the US government isn't going to fund it.)
To understand consciousness scientifically, we would have to create a sort of anti-theory of relativity. In 2022, Nir Lahav and Zachariah A. Neemeh took a stab at it,[5] but it seems nobody really knows where to start. We will leave revising Einstein's equations to account for the mind as an exercise for the reader. Here we'll just discuss what direction a theory would have to go.
One argument, popular in pre-neuroscience times, is that consciousness is a unitary phenomenon. The conscious mind is said to be a unified and irreducible whole. As Kant put it in Critique of Pure Reason:
The I think must be able to accompany all my representations; for otherwise something would be represented in me which could not be thought; in other words, the representation would either be impossible, or at least be, in relation to me, nothing.[6]
Kant defined consciousness as “the representation that another representation is in me.” Even in those days, it was understood from the focus on ‘representation’ that information is involved, but the terms ‘I’ and ‘me’ suggest a unitary mind. Seen this way, it seems that the conclusion is leaking into the premise.
Another argument, going back at least to Aquinas, is that the ‘soul’, by which Aquinas meant ‘that which is conscious and thinks and survives after death’, must be a non-corporeal and irreducible whole. For survival after death to make sense, the mind would have to be a unified whole, meaning it would have to be irreducible to any smaller components. Otherwise, the soul would rapidly disintegrate after death and an ‘afterlife’ would be impossible. This would also mean that non-corporeal beings such as angels, demons, and God himself could not exist, something that religious thinkers obviously would not accept.
Obviously, what people are calling ‘AI’ today, while perhaps unitary, is not intelligent or conscious in any sense of the word. Indeed, neuroscience seems to show that unitarity is not necessary for consciousness. Considering that the human brain has 180–200 distinct regions, each performing a different function, unitarity may actually be incompatible with it. Despite the fact that a chatbot may refer to itself as ‘I’ or ‘me’, it has no consciousness because it lacks the physical architecture and programming that could generate it. The argument here is not about ChatGPT and similar information-processing software, but whether or not machine consciousness is possible in principle.
The concept of irreducibility is not as alien to science as you might think. In particle physics, a particle can only decay if it can give up some of its mass, so massless particles like the photon can never decay. Unlike a free neutron, which decays into lighter particles, the photon is irreducible.
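The "giving up mass" condition can be sketched as a simple kinematic check: a decay is allowed only if the parent's mass exceeds the summed masses of the products. This is my own illustration, not from the article; the masses are standard Particle Data Group values in MeV/c².

```python
# Illustrative sketch: a decay is kinematically allowed only if the
# parent can give up some mass, i.e. its mass exceeds the total mass
# of the decay products. Masses in MeV/c^2 (PDG values).

MASSES = {
    "neutron": 939.565,
    "proton": 938.272,
    "electron": 0.511,
    "neutrino": 0.0,   # treated as massless here
    "photon": 0.0,
}

def decay_allowed(parent, products):
    """A particle can decay only into products lighter than itself."""
    return MASSES[parent] > sum(MASSES[p] for p in products)

# Neutron beta decay (n -> p + e + nu) releases mass-energy, so it is allowed...
print(decay_allowed("neutron", ["proton", "electron", "neutrino"]))  # True
# ...but a massless photon has no mass to give up, so it cannot decay.
print(decay_allowed("photon", ["neutrino", "neutrino"]))  # False
```

A massless particle fails the check against any set of products, which is the sense in which it is irreducible.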
Nevertheless, humans perceive themselves as singular, unified beings. Modern neuroscience considers this to be an illusion: when we see an object like a box, we create a model of the object in our head and assign the model properties pertaining to different sensory modalities (color, sound, smell, etc.) which we can query for information. We may be conscious beings, but the belief that our awareness has a unitary, singular locus is a narrative created by our frontotemporal cortex to help us make sense of the world and our perceptions.
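The "model we can query" idea above can be pictured as a data structure: an object model holding properties keyed by sensory modality, which other processes interrogate. This is a toy sketch of my own, not a neuroscience model; all names are invented for illustration.

```python
# Toy sketch: the brain's "model of a box" as a structure holding
# perceived properties per sensory modality, queryable for information.

class ObjectModel:
    def __init__(self, name):
        self.name = name
        self.modalities = {}          # modality -> perceived properties

    def perceive(self, modality, **properties):
        """Add or update properties perceived in one sensory modality."""
        self.modalities.setdefault(modality, {}).update(properties)

    def query(self, modality):
        """Retrieve what the model 'knows' in one sensory modality."""
        return self.modalities.get(modality, {})

box = ObjectModel("box")
box.perceive("vision", color="brown", shape="cuboid")
box.perceive("touch", texture="smooth")
print(box.query("vision"))   # {'color': 'brown', 'shape': 'cuboid'}
print(box.query("smell"))    # {} -- no information in that modality
```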
I once knew a guy who said that when he was taking hallucinogenic drugs, he realized that there was a specific part of his brain that was activated whenever he saw, for instance, wavy lines. I don't remember what drug this guy was stoned on, but neuroscience has long proposed a similar idea. In the past year or so, they've even re-discovered the so-called ‘grandmother cells’, which are activated only when you're seeing a specific person, in this case your grandmother. The existence of such cells was long ridiculed by earlier researchers on the grounds that if there were only one grandmother cell and it died you'd lose your memory of your grandma; but that turned out to be a false assumption and grandmother cells turned out to be real.
The brain has a huge number of such models, which it can rearrange, modify with experience, and rebuild if necessary. Probably the biggest and most complex model is the narrative the brain creates of itself. It is our identity, which we depend on for self-esteem, and which we defend vigorously against any challenges. In dark moments we probably realize that much of what we believe about ourselves is a lie. Yet the brain does not really care: It will happily convince us of any falsehood that facilitates our individual survival. If it's a choice between believing a falsehood and jumping off a cliff, the brain will invariably choose the falsehood. It's a continual and maybe hopeless struggle to identify and get rid of these false beliefs and most people don't even try.
The big mystery is not ‘how the brain works’, but how subjectivity emerges. It's a tough problem because scientists have spent centuries expunging subjectivity from their theories. Maybe, just as we must include the interaction of a particle with the measurement apparatus in quantum theory for it to make sense, we must consider both information and the brain before we can get a handle on subjectivity.
Soon enough AI researchers will realize that to be intelligent, an AI will also need to create a model of itself like the one used by the brain. The only secrets that remain are how to build the model and how to incorporate emotional feelings and motivations. AI researchers will no doubt discover the trick eventually. When they do, the challenge will be to demonstrate whether or not the result is really conscious.
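The self-model idea can be caricatured in a few lines: an agent that maintains a model of itself alongside its models of the world, and consults the self-model when deciding what to do. This is a hedged sketch of the concept only; every name here is invented for illustration, and nothing about it implies consciousness.

```python
# Sketch: an agent whose decisions reference a model of itself,
# not just models of external objects.

class SelfModelingAgent:
    def __init__(self):
        self.world_model = {}                  # models of external objects
        self.self_model = {"energy": 1.0,      # the agent's narrative of itself
                           "goal": "explore"}

    def observe(self, obj, **props):
        """Update the model of an external object."""
        self.world_model.setdefault(obj, {}).update(props)

    def act(self):
        # The decision consults the self-model: a tired agent changes goals.
        if self.self_model["energy"] < 0.2:
            self.self_model["goal"] = "rest"
        self.self_model["energy"] -= 0.1
        return self.self_model["goal"]

agent = SelfModelingAgent()
agent.observe("box", color="brown")
print(agent.act())  # "explore" -- energy is still high
```

The point of the sketch is only architectural: the agent's behavior depends on a representation of its own state, which is the minimal sense in which the brain's self-narrative is said to work.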
It will come down to whether you believe the materialist view of nature, the ‘spiritualist’ or immaterialist view, or some combination is a more productive picture of reality. Science is biased toward the materialist; religion toward the incorporeal. Since consciousness is almost by definition individual and inaccessible to others, even if somebody invents a true AI the question could remain unanswerable. Without a theory to make testable predictions, we'll still have something to argue about. And in science and philosophy, maybe that's all that matters.
[1] Berent I. Consciousness isn't "hard"-it's human psychology that makes it so! Neurosci Conscious. 2024 Apr 3;2024(1):niae016. doi: 10.1093/nc/niae016. PMID: 38585293; PMCID: PMC10996123.
[2] Sanfey J. Simultaneity of consciousness with physical reality: the key that unlocks the mind-matter problem. Front Psychol. 2023 Sep 28;14:1173653. doi: 10.3389/fpsyg.2023.1173653. PMID: 37842692; PMCID: PMC10568466.
[3] Wahbeh H, Radin D, Cannard C, Delorme A. What if consciousness is not an emergent property of the brain? Observational and empirical challenges to materialistic models. Front Psychol. 2022 Sep 7;13:955594. doi: 10.3389/fpsyg.2022.955594. PMID: 36160593; PMCID: PMC9490228.
[4] Carroll, S. (2021). Consciousness and the laws of physics. J. Conscious. Stud. 28, 16–31.
[5] Lahav N, Neemeh ZA. A Relativistic Theory of Consciousness. Front Psychol. 2022 May 12;12:704270. doi: 10.3389/fpsyg.2021.704270. PMID: 35801192; PMCID: PMC9255957.
[6] Kant I, The Critique of Pure Reason, Book 1, Of the deduction of the pure conceptions of the understanding, Chapter II, §12, p. 76. This line is famous, probably because it's the only short sentence in the book; see p. 348 for some doozies.
jul 25 2025, 4:54 am. last updated jul 27, 2025