The story in our previous post, “Rhea’s Dream,” envisions a future in which AIs, far superior to human minds, have colonized the outer solar system.
Is such a future possible? Probable?
As I continue to research topics for science fiction, one of the questions I return to again and again is: Will AIs become conscious, self-aware beings?
Conscious? What does that even mean?
First you have to ask, what exactly is consciousness?
One definition, which I read many years ago (and sorry, I can’t now find the source), was offered in a debate over whether computers could ever be conscious. The scientist claimed that “consciousness is simply an awareness of past states.” If an AI remembers its past states, then by definition it has an ongoing consciousness.
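Taken literally, that definition is trivially easy to satisfy in code. Here’s a minimal sketch (class and method names are my own invention) of an agent that records its past states and can report that it remembers them:

```python
from dataclasses import dataclass, field

@dataclass
class TrivialAgent:
    """Toy agent that is "conscious" only in the weak sense of
    remembering its own past states."""
    state: str = "idle"
    history: list = field(default_factory=list)

    def transition(self, new_state: str) -> None:
        # Record the outgoing state before replacing it.
        self.history.append(self.state)
        self.state = new_state

    def aware_of_past_states(self) -> bool:
        # Under the quoted definition, any nonempty memory of
        # past states counts as "ongoing consciousness."
        return len(self.history) > 0

agent = TrivialAgent()
agent.transition("reading")
agent.transition("writing")
print(agent.history)                 # ['idle', 'reading']
print(agent.aware_of_past_states())  # True
```

By that standard, of course, any program with a log file would qualify as conscious.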
Okay. Why not?
More recently, neuroscientist Erik Hoel wrote in his Substack disputing the opinion, held by many scientists, that consciousness is hard to define. Hoel cites leading neuroscientists who all give pretty straightforward definitions. His own “off the top of my head” definition is typical:
Consciousness is what it is like to be you. It is the set of experiences, emotions, sensations, and thoughts that occupy your day, at the center of which is always you, the experiencer.
Common sense, right?
Not so fast…
This recent article in MIT Technology Review makes it plain there are still plenty of scientists who disagree that consciousness is easy to define.
To quote: “Consciousness poses a unique challenge in our attempts to study it, because it’s hard to define,” says Liad Mudrik, a neuroscientist at Tel Aviv University who has researched consciousness since the early 2000s. “It’s inherently subjective.”
To muddy things further, the article explains how some authorities draw a distinction between consciousness, sentience, and self-awareness:
[Consciousness is] often confused with terms like “sentience” and “self-awareness,” but according to the definitions that many experts use, consciousness is a prerequisite for those other, more sophisticated abilities. To be sentient, a being must be able to have positive and negative experiences—in other words, pleasures and pains. And being self-aware means not only having an experience but also knowing that you are having an experience.
Okay. I guess my question then becomes, “Will AIs become self-aware, and if so when?”
How does the brain do it?
To answer the question, we’d first need to understand what processes in our brains result in consciousness (and/or self-awareness).
Neuroscientists and AI architects have been wrestling with this problem at least since Alan Turing in 1950. This recent article in Scientific American by neuroscientist Christof Koch explains and compares two currently dominant theories of how consciousness relates to the neuronal activity in our brains.
On the one side is integrated information theory (IIT). This and kindred theories postulate that conscious experience arises from hierarchical circuits of neurons:
The causal interactions within a circuit in a particular state or the fact that two given neurons being active together can turn another neuron on or off, as the case may be, can be unfolded into a high-dimensional causal structure.
The MIT article cited earlier explains IIT this way:
Integrated information theory proposes that a system’s consciousness depends on the particular details of its physical structure—specifically, how the current state of its physical components influences their future and indicates their past.
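IIT’s actual measure of integration (“Φ,” or phi) is mathematically heavy, but the idea in that sentence, that a system’s current state both indicates its past and influences its future, can be illustrated with a toy two-neuron Boolean network. The update rule below is invented purely for illustration and makes no claim about real neurons:

```python
from itertools import product

# Toy 2-node network: A copies B's previous value; B computes
# (A AND B) from the previous step. Rule invented for illustration.
def step(a: int, b: int) -> tuple[int, int]:
    return b, a & b

current = (1, 0)

# "Indicates their past": which prior states could have led here?
possible_pasts = [s for s in product([0, 1], repeat=2)
                  if step(*s) == current]

# "Influences their future": what state must come next?
determined_future = step(*current)

print(f"possible pasts of {current}: {possible_pasts}")   # [(0, 1)]
print(f"determined future of {current}: {determined_future}")  # (0, 0)
```

The real theory asks how much of this cause-effect structure belongs to the system as a whole rather than to its parts taken separately; this sketch only shows the raw ingredients.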
On the other side of the debate are “computational functionalist theories,” notably global neuronal workspace theory (GNWT), which treats consciousness as a momentary focus of processing held in a “workspace” in the brain. As Koch explains:
GNWT starts from the psychological insight that the mind is like a theater in which actors perform on a small, lit stage that represents consciousness, with their actions viewed by an audience of processors sitting offstage in the dark. The stage is the central workspace of the mind, with a small working memory capacity for representing a single percept, thought or memory. The various processing modules—vision, hearing, motor control for the eyes, limbs, planning, reasoning, language comprehension and execution—compete for access to this central workspace. The winner displaces the old content, which then becomes unconscious.
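Read as an architecture rather than a metaphor, the theater model is straightforward to caricature in code. Here’s a minimal sketch, with module names and random “salience” bids invented for illustration:

```python
import random

class Workspace:
    """Toy global workspace: a single slot with winner-take-all access."""
    def __init__(self, modules):
        self.modules = modules   # name -> function returning a salience bid
        self.content = None      # the one thing currently "on stage"

    def cycle(self):
        # Each processing module bids for access to the workspace.
        bids = {name: bid() for name, bid in self.modules.items()}
        winner = max(bids, key=bids.get)
        # The winner displaces the old content, which becomes "unconscious."
        self.content = winner
        return self.content

modules = {
    "vision":   lambda: random.random(),
    "hearing":  lambda: random.random(),
    "language": lambda: random.random(),
}

ws = Workspace(modules)
for moment in range(3):
    print(f"moment {moment}: conscious content = {ws.cycle()}")
```

One slot, winner-take-all: whatever holds the workspace at a given moment is, on this account, what the system is conscious of.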
What does this tell us about self-aware AIs?
The debate continues over these two prevalent models attempting to explain why human brains are conscious.
But which model offers the best likelihood for the evolution of self-aware AIs?
In the Scientific American article, Koch leans toward IIT.
But there’s a problem …
According to GNWT and other computational functionalist theories (that is, theories that think of consciousness as ultimately a form of computation), consciousness is nothing but a clever set of algorithms ….
Conversely, for IIT, the beating heart of consciousness is intrinsic causal power, not computation ... And here’s the rub: causal power by itself, the ability to make the system do one thing rather than many other alternatives, cannot be simulated. It must be built into the system.
There indeed is the rub. Current AIs are based on computation via software.
But there is a different computer architecture. “Neuromorphic computers” have been built, with hardware modeled on the brain’s own connectivity. According to Koch:
A so-called neuromorphic or bionic computer could be as conscious as a human, but that is not the case for the standard von Neumann architecture that is the foundation of all modern computers.
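What does hardware “based on the same connectivity as the brain” actually compute? Neuromorphic chips typically implement spiking-neuron dynamics, such as the classic leaky integrate-and-fire model, directly in physical circuits. A sketch of the model itself (parameter values are arbitrary illustrations):

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest while integrating input, and spikes at threshold."""
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Leak toward resting potential, plus the injected current.
        v += (dt / tau) * (v_rest - v) + dt * i_in
        if v >= v_threshold:      # threshold crossed: the neuron fires
            spike_times.append(t)
            v = v_reset           # potential resets after each spike
    return spike_times

# Constant drive produces a regular spike train.
print(simulate_lif([0.15] * 50))  # [10, 21, 32, 43]
```

The irony, of course, is that the Python version above is itself a simulation on a von Neumann machine; on Koch’s view, the same dynamics matter for consciousness only when they are physically built into the hardware.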
So then, can we conclude that self-aware AIs can and will evolve, based on the intrinsic-causal theory of consciousness and running on future “neuromorphic” computers?
Not necessarily.
Because not everyone agrees with Koch that AI consciousness cannot be based on simulation.
How to Create a Mind
One authority who seems to disagree is the prominent futurist and computer scientist Ray Kurzweil, author of The Age of Spiritual Machines and The Singularity Is Near. In the first part of his career, Kurzweil developed pioneering systems for optical character recognition and speech recognition.
In How to Create a Mind (2012), he provides a detailed analysis of research into how the brain’s neocortex works to recognize and process external stimuli. He then goes on to propose a design for a hierarchical network of pattern recognizers capable of learning and reprogramming themselves—all built as a software implementation.
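Kurzweil’s “pattern recognition theory of mind” imagines millions of such recognizers, each firing when enough of its expected inputs arrive and feeding the recognizers above it. The sketch below is only a cartoon of that hierarchy (thresholds and features invented for illustration, with Kurzweil’s learning machinery omitted), not his actual design:

```python
class PatternRecognizer:
    """Toy node in a hierarchy: fires when enough of its expected
    lower-level inputs are active (threshold invented here)."""
    def __init__(self, name, inputs, threshold=0.6):
        self.name = name
        self.inputs = inputs          # raw features or child recognizers
        self.threshold = threshold

    def activate(self, features: set) -> bool:
        if all(isinstance(i, str) for i in self.inputs):
            # Leaf level: match against raw input features.
            hits = sum(i in features for i in self.inputs)
        else:
            # Higher level: count which child recognizers fired.
            hits = sum(child.activate(features) for child in self.inputs)
        return hits / len(self.inputs) >= self.threshold

# Low-level recognizers over raw strokes; a higher level composes them.
horizontal = PatternRecognizer("horizontal", ["h1", "h2"])
vertical = PatternRecognizer("vertical", ["v1", "v2"])
letter_a = PatternRecognizer("letter A", [horizontal, vertical], threshold=1.0)

print(letter_a.activate({"h1", "h2", "v1", "v2"}))  # True
print(letter_a.activate({"h1", "v1"}))              # False
```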
So, could an AI based on a non-neuromorphic computer architecture, using software simulation, become self-aware? Kurzweil seems to think so.
When will our AI Overlords Arrive?
So where does all this leave the question of self-aware AIs?
Given that:
scientists remain widely divided on how consciousness arises in human brains,
and at least equally divided on what kinds of computer systems might conceivably become conscious,
the answer can only be:
It’s hard to say.
And yet … Given that …
self-awareness has evolved in human brains,
there is ongoing, sometimes exponential, growth of computer capabilities,
And barring calamitous collapse of civilization (which is always possible),
… it’s hard to conclude that self-aware AIs will not emerge.
In the next century, if not sooner.
What will they be like, these super-intelligent artificial minds?
And what will they do with us, their clumsy, primitive forebears?
Will they keep us as pets? Put us in zoos? Exterminate us like roaches?
Or will they be, as Ray Kurzweil has envisioned, “transcendent servants…very friendly, taking care of all our needs”? [1]
All great questions for speculative fiction to explore!
Sources/For further reading:
“Ambitious theories of consciousness are not ‘scientific misinformation,’” by Erik Hoel. The Intrinsic Perspective, September 17, 2023.
“What Does It ‘Feel’ Like to Be a Chatbot?” by Christof Koch. Scientific American, September 8, 2023.
“Minds of machines: The great AI consciousness conundrum,” by Grace Huckins. MIT Technology Review, October 16, 2023.
Ray Kurzweil. How to Create a Mind: The Secret of Human Thought Revealed. Penguin Books, 2012.
Images by vecstock, courtesy of freepik.com
[1] Ray Kurzweil, “The Singularity,” in Science at the Edge, edited by John Brockman. Sterling Publishing Co. Inc., 2008, p. 304.