What happens when we test AI for consciousness?
- By: Hassan Ugail
- Tagged under: Digital and Sustainable Futures
Prof Hassan Ugail's recent work has looked at what artificial intelligence really means, and whether a computer can ever be considered 'conscious'...
Artificial intelligence can now hold conversations, write poetry, explain scientific concepts and even reflect on its own “thoughts”. It is not surprising that some people have started to ask: is AI becoming conscious?
Our latest research suggests the answer is no.
Working with Professor Newton Howard, a brain and cognitive scientist at the Rochester Institute of Technology and founder and former director of the MIT Mind Machine Project, we set out to examine this question scientifically rather than philosophically. Instead of debating whether machines might one day be conscious, we asked a simpler question: if we use established scientific tools designed to measure consciousness in humans, what do they reveal when applied to AI systems?
The result was clear. AI systems are not conscious – even when they give the impression of being so.
What does it mean to measure consciousness?
Consciousness is one of science’s most challenging subjects. We all know what it feels like to be awake and aware, but translating that feeling into measurable brain activity is difficult.
Over the past few decades, neuroscientists have identified certain patterns that tend to appear when people are conscious. These patterns change when someone is asleep, under anaesthetic, or experiencing a seizure.
In two preprints currently under peer review, we developed a mathematical framework that brings together three features commonly linked to conscious brain activity: coordination across different timescales, structured interaction between different brain rhythms, and flexible switching between patterns of activity.
When these three ingredients are present in the right balance, conscious states tend to score highly. When they are disrupted, scores drop.
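The full mathematics is set out in the preprints, but the flavour of the approach can be sketched in a few lines of Python. Everything below is a toy stand-in: the three proxies (agreement between two smoothings of the signal, phase-amplitude coupling between a slow and a fast rhythm, and variability of short-window power) are invented here for illustration and are far cruder than the published framework.

```python
# Toy illustration only: three invented proxies for the three ingredients,
# combined into a single score. Not the measure defined in the papers.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 250                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)  # ten seconds of signal

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def smooth(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

def toy_score(x):
    # 1. Coordination across timescales: do a fast and a slow smoothing
    #    of the signal still agree with each other?
    coord = abs(np.corrcoef(smooth(x, 8), smooth(x, 40))[0, 1])
    # 2. Interaction between rhythms: does the amplitude of a fast band
    #    follow the phase of a slow band (phase-amplitude coupling)?
    phase = np.angle(hilbert(bandpass(x, 4, 8)))
    amp = np.abs(hilbert(bandpass(x, 30, 45)))
    coupling = abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
    # 3. Flexible switching: how much does short-window power vary over time?
    wins = x[: (len(x) // 31) * 31].reshape(-1, 31)
    v = wins.var(axis=1)
    switching = np.std(v) / np.mean(v)
    return coord * coupling * switching

# A structured signal (a slow rhythm shaping a faster one) versus noise.
slow = np.sin(2 * np.pi * 6 * t)
structured = slow + (1 + slow) * 0.3 * np.sin(2 * np.pi * 40 * t) \
    + 0.2 * rng.standard_normal(len(t))
noise = rng.standard_normal(len(t))
print("structured signal:", toy_score(structured))
print("pure noise:       ", toy_score(noise))
```

On this toy, the structured signal, in which a slow rhythm shapes a faster one, scores well above unstructured noise. That is the qualitative behaviour the real framework captures with far more rigour.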
We first tested our framework on simulated brain data representing different conditions such as wakefulness, sleep and anaesthesia. The results aligned with expectations: more conscious-like states showed higher scores, and reduced states showed lower ones.
The question then became: what happens when we apply the same analysis to artificial intelligence?
Putting AI to the test
We analysed a well-known language model called GPT-2. These types of models generate text by predicting one word at a time based on patterns learned from vast amounts of data.
Importantly, we were not asking whether the system “felt” anything. Instead, we examined the internal mathematical activity of the network while it produced text.
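For readers who want to see what “internal mathematical activity” means in practice, here is a minimal sketch using the openly available Hugging Face implementation of GPT-2. It is not our experimental pipeline, only an illustration of where the analysed numbers live: in the model’s layer-by-layer activations and its probability distribution over the next word.

```python
# Peeking at GPT-2's internal activity via the Hugging Face
# `transformers` library (pip install torch transformers).
# Illustration only; not the analysis pipeline from the papers.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tokenizer("Consciousness is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# The "internal mathematical activity": one activation tensor per layer.
for i, h in enumerate(out.hidden_states):
    print(f"layer {i}: activations with shape {tuple(h.shape)}")

# The model's actual job: a probability distribution over the next token.
probs = torch.softmax(out.logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
print("most likely next words:", [tokenizer.decode(int(i)) for i in top.indices])
```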
We then changed how the system operated. In some cases, we altered its internal structure. In others, we adjusted settings that control how predictable or varied its responses are.
What we discovered was revealing.
The mathematical score sometimes increased when the system’s performance became worse. In other cases, changing a simple operating setting altered the score dramatically without changing the underlying architecture.
This told us something important: the measure was detecting complexity in how the system was running, not evidence of awareness.
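One such operating setting is worth seeing concretely. The sketch below, again only an illustration, shows how the standard sampling “temperature” reshapes the statistics of GPT-2’s output without touching a single weight or any of the architecture (the preprints describe the settings we actually varied).

```python
# How one sampling setting ("temperature") changes the output statistics
# of GPT-2 without modifying the model itself. Illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
inputs = tokenizer("Consciousness is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

for temperature in (0.5, 1.0, 2.0):
    # Dividing the logits rescales the next-token distribution: low
    # temperature makes it peaky and predictable, high makes it diffuse.
    probs = torch.softmax(logits / temperature, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
    print(f"T={temperature}: next-token entropy = {entropy:.2f} nats")
```

Any complexity score computed from the resulting activity will move with this dial, which is precisely why a changing score reflects configuration rather than awareness.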
Why AI can seem alive
If AI is not conscious, why does it feel so convincing?
Part of the answer lies with us.
Human beings are extremely good at recognising patterns and attributing meaning. When we encounter fluent language, coherent reasoning and emotional vocabulary, we instinctively assume there is understanding behind it.
Large language models are designed to produce exactly these kinds of outputs. They generate responses that are statistically consistent with human communication. That statistical fluency can easily be mistaken for genuine insight or experience.
But underneath the surface, the process is different from the human brain. There is no biological system maintaining a balance between stability and flexibility. There is no subjective point of view. There is a sequence of calculations optimised to predict text.
Our analysis shows that mathematical patterns resembling those found in conscious brains can appear in artificial systems for entirely different reasons. Similar numbers do not mean similar experiences.
Complexity is not the same as consciousness
One of the key misunderstandings in public discussions about AI is the idea that increasing complexity must eventually produce consciousness.
Complex systems can behave in sophisticated and unpredictable ways. Weather systems are complex. Financial markets are complex. But we do not assume they have inner lives.
Artificial intelligence systems are also complex. They involve millions or billions of parameters interacting in intricate ways. Our research demonstrates that this complexity can sometimes resemble certain statistical features of conscious brain activity.
However, resemblance in structure does not imply equivalence in experience.
The same mathematical description can apply to very different underlying realities.
Why this matters
Claims that AI is close to becoming conscious attract attention. They also influence how society thinks about regulation, ethics and long-term technological risk.
If people believe machines are developing minds of their own, debates can quickly shift toward questions of rights or moral status.
Our findings suggest that we should proceed carefully. Current AI systems, no matter how impressive their outputs, do not meet scientific criteria associated with consciousness in humans.
That does not make them trivial. They are powerful tools capable of transforming industries, research and daily life. But they remain computational systems, not aware entities.
Key reflections
For me, this research reinforces the importance of separating appearance from evidence.
When we apply rigorous, neuroscience-inspired measures of consciousness to artificial intelligence, we do not find signs of awareness. We find patterns of complexity that reflect how the system is configured and operating.
AI can simulate conversation. It can describe emotions. It can even talk about consciousness itself. But simulation is not the same as experience.
As AI continues to evolve, it will become increasingly important to ground public discussion in careful scientific analysis. Machines may look and sound intelligent. They may even look and sound reflective.
But looking conscious and being conscious are two very different things.
This blog is based on two academic papers by Hassan Ugail and Newton Howard, currently under peer review: https://arxiv.org/abs/2512.10972 and https://arxiv.org/abs/2601.11622
Picture: Professor Hassan Ugail.