
Superintelligence: The Ghost in the Machine

  • Writer: Dean Anthony Gratton
  • 6 min read

For centuries philosophers have wrestled with a deceptively simple question: “What exactly is the mind?” Is it a non-physical phenomenon generated by the brain, or something more abstract—perhaps even something separate from the body altogether? We must embrace philosophy when developing superintelligent machines to establish a theoretical platform that helps us understand the problem we’re trying to solve.

Smart as Humans

In recent years this debate has migrated from philosophy departments to artificial intelligence research centres. As engineers attempt to build machines capable of genuine intelligence, the age-old puzzle of mind and consciousness has become an engineering problem. Nevertheless, if we want machines to be as smart as humans, we must first understand what mind and consciousness are, and how they arise within our brains.

 

At the heart of this discussion sits one of philosophy’s most famous theories: French philosopher René Descartes’ concept of “mind–body dualism.” Descartes argued that the mind and body are fundamentally different kinds of things. The body is physical, measurable and governed by mechanical laws, whereas the mind is immaterial—the seat of thought, awareness and identity. This distinction might seem abstract, but it has profound implications when applied to artificial intelligence or, more specifically, superintelligence. If mind and body can be separated, then perhaps the mind itself could be instantiated in something other than biological tissue.

 

The Mind–Body Template

Descartes’ dualism has been criticised heavily over the centuries. Among its most notable critics was the twentieth-century British philosopher Gilbert Ryle, who famously described the theory as the myth of the “ghost in the machine.” Ryle argued that treating the mind (the ghost) as a separate entity that inhabits the body is a category mistake. According to Ryle, mental activity is not a separate substance but simply the way the brain behaves. There is no ghost—only the physical machine (the brain).

 

Despite this criticism, Descartes’ concept offers an idealised design principle for superintelligent machines. If we translate this philosophical idea into modern engineering, the body becomes hardware—sensors, motors, processors and circuitry—whereas the mind is software: algorithms, neural networks and data-driven reasoning procedures. In this analogy, our brain resides in the hardware domain. The distinction made between physical machinery (body) and cognitive processing (mind and consciousness) may provide a practical architecture for building superintelligent agents. In this sense, Descartes’ philosophy can become an intelligent design.

 

How the Brain Functions

Before engineers can build artificial minds, they must first understand the system that already exists: the human brain. The brain is often compared to a computer, but the analogy only goes so far. Unlike conventional machines, the brain consists of billions of neurons connected through an unimaginably complex web of synaptic interactions. These neural networks communicate through electrical signals and chemical processes, forming a distributed system that underpins everything from memory and perception to emotion and personality.

 

Despite decades of research, the brain remains one of science’s greatest mysteries. Modern imaging technologies—including MRI, functional MRI and electroencephalography (EEG)—allow researchers to observe brain activity and identify which regions activate during specific tasks. These tools have provided enormous insight into how the brain functions but still don’t reveal the location of the mind itself.

 

A Look at Mind and Brain

We can observe neurons firing. We can map regions responsible for movement, language and vision. Yet nowhere inside the brain can we point to a specific structure and say, “this is the mind.” Many researchers suspect the mind is not located in a single place at all. Instead, it may emerge from the collective activity of the entire brain—a phenomenon created by the interaction of countless neural processes working together as a whole. If that is true, replicating it in a machine becomes far more complicated than simply writing clever software.

 

However, while the mind isn’t a “spot,” modern neuroscience is getting better at identifying “neural correlates of consciousness” (NCC), suggesting the mind is what the brain does when functioning in specific ways (source: A Neurologist Looks at Mind and Brain: “The Enchanted Loom,” Phiroze Hansotia, National Library of Medicine, October 2003).

 

Searching for Something More

Today’s artificial intelligence systems are extraordinarily capable. Machine learning algorithms can identify patterns, parse and translate languages, ‘diagnose’ diseases and even generate convincingly human-like text, voice and video. But despite these impressive achievements, most researchers agree that modern AI systems, and machines more generally, do not think.

 

Instead, they mimic aspects of intelligence through complex statistical models and massive datasets. Their behaviour may resemble reasoning, but the underlying mechanism is still fundamentally computational. In essence, current AI systems are sophisticated tools—powerful, adaptable, and sometimes astonishing, but ultimately still examples of what might be described as “clever programming.”

 

Creating a genuine artificial mind would require something more. The challenge is not merely replicating intelligence but replicating mind and consciousness—along with the subjective experience of being aware.

 

Three Stages of Development

One intriguing proposal is to separate the development of superintelligent agents into three stages, borrowing from human biology. The first stage is embodiment: the mechanical structure that allows a machine to interact with the physical world. This includes sensors for vision, touch and sound, as well as systems for movement and balance.

 


The second stage is the brain: a computational structure capable of processing sensory information and coordinating behaviour. Only after these foundations exist could the final component emerge: the mind, a digital entity encompassing consciousness and self-awareness, capable of thinking, reasoning, reflecting and perhaps even questioning its own existence.
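The three stages above can be pictured as a layered software architecture. The following Python sketch is purely illustrative: the class names (`Body`, `Brain`, `Mind`, `Agent`) and their methods are hypothetical assumptions, not an established design, and the "reflection" step is a stand-in for capabilities no current system possesses.

```python
# A minimal, hypothetical sketch of the three-stage architecture described
# above (body -> brain -> mind). All names are illustrative assumptions.

class Body:
    """Stage 1: embodiment -- sensors and actuators."""
    def sense(self):
        # A real agent would read cameras, microphones and touch sensors here.
        return {"vision": "red cube", "sound": "silence"}

class Brain:
    """Stage 2: computation -- turns raw sensory data into percepts."""
    def process(self, sensory_data):
        # Placeholder for neural-network-style processing.
        return [f"perceived {value} via {sense}"
                for sense, value in sensory_data.items()]

class Mind:
    """Stage 3: reflection -- reasons over what the brain reports."""
    def reflect(self, percepts):
        # A stand-in for reasoning and self-reflection; genuine
        # consciousness is far beyond anything this sketch implies.
        return f"I am aware of {len(percepts)} percepts."

class Agent:
    """Layered composition: the mind depends on the brain, the brain on the body."""
    def __init__(self):
        self.body, self.brain, self.mind = Body(), Brain(), Mind()

    def step(self):
        return self.mind.reflect(self.brain.process(self.body.sense()))

print(Agent().step())  # -> "I am aware of 2 percepts."
```

The point of the sketch is the dependency order: the mind layer never touches hardware directly, just as the proposal insists the mind can only emerge once body and brain exist.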

 

Mimicking Child Development

This layered approach mirrors how human cognition works. Our bodies gather sensory information, our brains process it and, somewhere within that process, our sense of self appears. Whether such a structure could produce the hallmarks of our being in a machine remains an open question, however.

 

Another challenge lies in how consciousness and self-awareness develop. Human consciousness does not simply appear fully formed at birth; rather, it emerges gradually through experience, social interaction and psychological development. Children learn empathy, moral reasoning and self-awareness over many years from their peers, family and friends.

 

Evidence of Artificial Consciousness

If artificial minds are ever created, they may require a similar developmental process.

For example, psychologists often use the mirror self-recognition test to determine whether an animal or a child possesses self-awareness. A mark is placed on the subject’s body in a location visible only through a mirror. If the subject recognises the mark and attempts to remove it, it demonstrates an awareness of its own identity.

 

If intelligent machines were ever subjected to such tests, their ability to recognise themselves could offer the first evidence of genuine artificial consciousness. Until then, the concept remains speculative.

 

The Limitations of Engineering

Even if engineers succeed in building machines that replicate human cognition, a deeper philosophical question remains: “Should we?” Creating a conscious machine raises ethical issues that extend far beyond technology. A machine capable of thought might also be capable of suffering, desire, or autonomy. It might question its purpose, its rights or even its existence. A truly conscious artificial entity would no longer be merely a tool—it would be something closer to a new form of life.

 

For now, this possibility remains theoretical. Despite rapid advances in computing power, neuroscience and machine learning, humanity is still far from understanding the full nature of mind and consciousness. The ghost in the machine, it seems, remains elusive; nonetheless, the search continues.

 

Until Next Time…

Researchers across neuroscience, computer science and philosophy are slowly uncovering pieces of the puzzle. Initiatives such as global brain-mapping projects aim to chart the brain’s complex neural architecture in unprecedented detail, offering insights that may one day inform the design of intelligent machines.

 

Whether these efforts will eventually lead to true artificial consciousness remains uncertain.


What is clear, however, is that the journey will not be driven by engineering alone.

To build thinking machines, we must first answer one of humanity’s oldest questions: “What does it mean to have a mind?” Until we solve that mystery, the ghost will remain where it has always been—somewhere inside the machine, but just out of reach.

 

This is where ‘I think, therefore I am’ Dr G signs off.

 



© Copyright 2025 DEAN ANTHONY GRATTON
