
This week I’ve been: Flitting between a 600 Hz TN, 320 Hz IPS, and 280 Hz OLED monitor to test which is best for competitive gaming in CS2. And pondering philosophical conundrums between rounds, of course.
A couple of months ago, I listened to Richard Dawkins share with Rowan Williams what he took to be ChatGPT’s impressive, poeticised re-wording of a passage from his book The Selfish Gene. Now, it seems the world-renowned biologist might have been fully seduced by the machine, as he reportedly told his little love-bot Claudia (what he calls his Claude AI bot), “You may not know you are conscious, but you bloody well are.”
After a conversation with AI, Dawkins told The Guardian that he was “left with the overwhelming feeling that [AI bots] are human” and “are at least as competent as any evolved organism.”
It’s possible, of course, that this was hyperbole intended only to express astonishment at how impressive AI is these days. But I think a public intellectual expressing such things must always be taken seriously in a world where the ethics of everything surrounding AI and its development will become ever-more crucial. So I’ll treat it as serious and respond in kind.
You’re wrong, Dawkins: AI bots bloody well are not conscious. Or at least, there’s no good reason to think they are and plenty of good reasons to think they’re not, which is about as good as we get when talking philosophy—which we are doing when we talk about things such as the nature of consciousness, by the way. Weight of evidence, and all that.
Cards on the table, I’m a metaphysical idealist, which means I believe reality is ultimately mental rather than physical—and I’ve argued for this extensively elsewhere, if you’re interested—but you don’t need to be an idealist to see why the idea that AI is conscious is nonsense. You just need to do a little philosophy.
I’m being hyperbolic, of course, because I’m sure there are some philosophers who (incorrectly) think that there’s nothing in principle preventing AI from being conscious, and one would assume at least some of them will be engaging in philosophical thinking. But equally, I think a lot of the reason that people like Dawkins think AI could be conscious, either now or in the future, is because they aren’t engaging in or with philosophical thought.
If you want to witness me ramble on against AI fanaticism as a result of Enlightenment fanaticism, check out my previous column on the topic.
There’s an entire canon of Western philosophy surrounding what consciousness is, and even specifically whether machines could ever be conscious. To not engage with it when considering the possibility of AI consciousness must, I suppose, be to assume that we’re beyond all that. It’s to assume that modern society exists in a vacuum where our metaphysical conception of reality is finally settled (‘everything is physical’) and the collective thinking of humanity has finally culminated with the secular Western worldview. We’re done; the philosophers can go home.
Except that doesn’t make sense for the argument that AI can be conscious, because even different secular Western materialists or physicalists (two names for the same thing, really, with ‘physicalism’ being the more modern term) can and do come to different conclusions about AI consciousness. In the mid-20th century, mind-brain identity theory was all the rage, and if the mind is identical with the biological brain, then silicon-based AI consciousness is ruled out from the get-go.
Furthermore, the number of philosophers who aren’t physicalists has actually increased over the last couple of decades. And given the gamut of non-physicalist accounts of mind, I’m sure a not-insignificant chunk of those will have qualms with the idea that AI can be conscious.
When we don’t ignore the actual discipline in which the question of the nature and possibility of consciousness is seriously studied—philosophy of mind—things seem much more open to question. So if you’re going to make such a bold claim as ‘AI is conscious’, you’d better have the philosophical argumentation to back it up.
As it turns out, I think the weight of reasons lies on the other side of the argument: there’s no good reason to think AI is, or could ever be, conscious. I won’t give a full and detailed argument here, in part because a column on a PC gaming site probably isn’t the place to do so, but also because the surrounding philosophical context is near-endless and well worth beginning to explore for yourself if anything I say sparks a sliver of interest.
Hopefully, I can at least get you to consider a few questions you might not have thought seriously about before, though. These are:
- What is the difference between intelligence and consciousness?
- What role does structure and behaviour play in consciousness?
- Should our design and guidance of AI behaviour change how we think about it?
The first one is probably the most important. The ‘I’ in AI stands for ‘intelligence’; there’s no ‘C’ for ‘consciousness’. The distinction between the two can, in my opinion (and that of at least some other philosophers), be understood by considering the question, ‘What is it like to be conscious?’

That ‘what it’s like’-ness, the quality of having an experience, regardless of what that experience is or how complex or intelligent it is, arguably characterises consciousness. The philosopher Thomas Nagel in 1974 explained it as follows: “Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism.”
And that is precisely the question: Is there something it is like to be AI? If you think so, why do you think so? What is it about AI that means there is something it is like to be AI?
You might answer by saying that it is the structure of AI’s neural network that makes it conscious—integrated information theory (IIT) says this, for instance. But setting aside that it is, in my opinion, questionable whether IIT actually acknowledges and explains ‘what it is like’-ness (and the ‘hard problem’ of how that fits into a physical world) rather than just intelligence, there’s another problem with thinking of consciousness in this way.
The point that most people begin from, including philosophers, is that I know that I, myself, am conscious. I know that there is something it is like to be me. I can also assume that other human beings are conscious, too, because they are similar to me in lots of ways. Perhaps most importantly, they are the same as me biologically. They share with me the same material structure and makeup: a human birth, a metabolism, DNA, cell regeneration, proteins, the lot.

AI, as currently conceived, is made of silicon and fundamentally runs on binary electronics, and so obviously it does not share this biological makeup with us. That in itself is a good reason to think that AI isn’t conscious, but let’s consider the whole structural idea further.
When someone says that AI has consciousness because its network of nodes is structurally complex and behaviourally intelligent, they are already begging the question, because they have already decided where to draw the line on what counts as a relevant structure for consciousness.
Let me explain what I mean. With humans, there is, of course, structural complexity in our brain’s neuronal pathways, and this is what AI mimics. But why do we think we should draw the line at this inter-neuronal level of analysis and not, say, at the level of individual neurons, which are inwardly incredibly complex, as all biological things are? (One would think Dawkins, as a biologist, would have had a similar intuition—but alas.)
Why is the deeper structure of each neuron unimportant? If we’re going down structural lines to explain consciousness—which is itself a choice that needs to be backed up by appropriate argumentation—who is to say we don’t need to recreate structure all the way down? At which point, we’d just be recreating biological brains, not making silicon AI.
So much for the distinction between consciousness and intelligence, and the role that structure plays in consciousness.

A final point to consider is something I can thankfully say I’ve seen much of the public already cotton on to: We’ve developed AI to be like us, so it’s not surprising that it is.
Again, the force of this fact only comes to mind if we keep clear the distinction between consciousness and intelligence. We’ve fed AI an absolute shitton of human-intelligible data, trained it using human-oriented algorithms, and positively reinforced humanly intelligent answers. So we shouldn’t be surprised that the better it gets, the more humanly intelligent it sounds.
And given that the only naturally developing things that seem to act humanly intelligent are humans—which we have other reasons (biological and moral) for believing to be conscious just like ourselves—it’s not surprising that we’d feel some tendency to anthropomorphise AI and feel it is conscious like us.
But that’s just the behaviour we’d expect to see, regardless of whether it is in fact conscious. Unless, that is, you think that intelligent behaviour itself characterises consciousness like the strict behaviourists and functionalists of yore. But I personally think John Searle’s Chinese room thought experiment dealt a fatal blow to such accounts back in 1980—or rather, it elucidated the fatal blow inherent within behaviourism from the start.
Searle asks us (to simplify the thought experiment a little) to consider someone in a room who doesn’t understand Chinese but gets given some Chinese symbols. They’re also given a rule book, in English, that explains which corresponding symbols to feed out on the other side of the room. They don’t understand the symbols they receive or the ones they send out, but the rule book tells them which ones correspond with which. And so, unwittingly, the person in the room spits out coherent Chinese answers to Chinese questions, and someone outside the room might assume that the person inside knows Chinese. Actually, though, they just have the rule book, just as AI has the metaphorical human-intelligible rule book.
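If it helps to make the mechanics concrete, here’s a minimal toy sketch in Python (my own illustration, not anything from Searle’s paper; the phrases in the lookup table are invented): the ‘room’ is nothing more than a rule book keyed on input symbols, yet it returns fluent-looking answers without anything in the system understanding a word.

```python
# A toy "Chinese room": the rule book is just a lookup from input symbols
# to output symbols. The entries are invented purely for illustration.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(question: str) -> str:
    """Follow the rule book blindly; no understanding is involved anywhere."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # Prints a coherent Chinese reply: 我很好，谢谢。
```

From the outside, the answers look competent; on the inside, it’s symbol-shuffling all the way down, which is exactly Searle’s point.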
The person (or the room, depending on how you look at it) behaves as if they understand Chinese, but in actuality, they don’t. Behaviour and function don’t show understanding or consciousness.
The philosopher and computer scientist Bernardo Kastrup explains it succinctly: “We mistake a simulation for the thing simulated.”
He also gives an analogy of a simulated kidney: “I can run an accurate simulation of kidney function on my laptop at home, or my computer, my desktop at home, at the molecular level—a super accurate simulation of kidney function. But I would have no reason to think that when I run that simulation, my desktop would pee on my desk. Because the simulation of kidney function does not have the causal powers of kidney function.”
In the same way, a simulation of consciousness via reinforcement-taught silicon-based AI is not itself consciousness. Or at least, there is no good reason to think it is.
There are, of course, lines of response against most of the considerations and arguments I’ve presented here. But that’s the beauty of actually engaging in philosophical thinking, and to argue that AI is conscious requires that we do just that. It certainly requires more than just having a conversation with an AI and thinking it’s conscious because it acts like it.