Can a robot ever be conscious and how would we know if it were?
Summary: Some people think we could easily build and identify a conscious robot, while others insist that it's impossible – it all depends on what you think consciousness is.
There is disagreement over whether machines can ever be conscious, let alone how we would know if one were. Your view may depend on how you see consciousness.
If the subjective feeling of consciousness is an illusion created by brain processes, then machines that replicate such processes would be conscious in the way that we are. How would we know this? Daniel Dennett at Tufts University in Massachusetts thinks a Turing test, in which a machine has to convince a human interrogator that it is conscious, should, if conducted “with suitable vigour and aggression and cleverness”, be enough.
Michael Graziano at Princeton University thinks we could take a more direct approach. His attention schema hypothesis sees consciousness as the brain’s simplified model of its own workings – a representation of how it represents things. He believes it is possible to build a machine that possesses a similar self-reflective model. “If we can build it in a way that we [can] see into its guts, then we will know this is a machine that has a rich self-description,” he says. “It is a machine that thinks and believes it has consciousness. And those are confirmable because you can understand, in principle, how the machine is processing information.”
For Graziano, consciousness could appear in any machine, whether it is purely in software or constructed of matter, biological or otherwise. Anil Seth at the University of Sussex, UK, isn’t so sure. “I think it is still an unknown whether consciousness is substrate-independent,” he says. For him, determining whether a machine is conscious requires making informed judgements based on whether, for example, it has analogues of brain structures that we know are important for consciousness in humans, and what it is made of (brain organoids, for example, are made of biological material).
Identifying consciousness in a machine may be more straightforward if you subscribe to the integrated information theory of consciousness. In principle, this simply entails ensuring that phi, a quantity indicating the degree of information integration within the system, is greater than zero. In practice, calculating phi is computationally intractable for anything but the simplest of systems. So, even if a machine were designed to integrate information, it would be far beyond our abilities to tell whether it is conscious.
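The intractability is easy to see even with a toy stand-in for phi. The sketch below is not the actual IIT calculus (real phi is defined over a system's cause-effect structure); it simply scores each bipartition of a tiny binary system by the mutual information between the two halves and takes the minimum over all cuts, the "weakest link". Even this simplified measure must examine every bipartition, and the number of bipartitions grows roughly as 2^(n-1), which is why scaling to real systems is hopeless.

```python
from itertools import combinations
from math import log2

# Toy stand-in for IIT's phi (NOT the real measure): score each bipartition
# of a small binary system by the mutual information between its two halves,
# then take the minimum over all bipartitions. A score of zero means some
# cut splits the system into independent, "disintegrated" parts.

def mutual_information(joint, part_a, part_b):
    """Mutual information (bits) between unit groups part_a and part_b.
    `joint` maps full binary state tuples to probabilities."""
    def marginal(indices):
        m = {}
        for state, p in joint.items():
            key = tuple(state[i] for i in indices)
            m[key] = m.get(key, 0.0) + p
        return m
    pa, pb = marginal(part_a), marginal(part_b)
    mi = 0.0
    for state, p in joint.items():
        if p > 0:
            ka = tuple(state[i] for i in part_a)
            kb = tuple(state[i] for i in part_b)
            mi += p * log2(p / (pa[ka] * pb[kb]))
    return mi

def toy_phi(joint, n):
    """Minimum mutual information over all bipartitions of n units.
    The number of bipartitions is 2**(n-1) - 1: exponential in n."""
    units = range(n)
    scores = []
    for size in range(1, n // 2 + 1):
        for part_a in combinations(units, size):
            part_b = tuple(i for i in units if i not in part_a)
            scores.append(mutual_information(joint, part_a, part_b))
    return min(scores)

# Two perfectly correlated units: cutting between them destroys information.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent units: no information is integrated across the system.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

The correlated pair scores above zero (integrated), the independent pair scores zero, echoing Maguire's point that systems analysable part by part are "disintegrated".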
Phil Maguire at the National University of Ireland, Maynooth, goes further. He notes that, by definition, integrated systems can’t be understood by looking at their parts. “Machines are made up of components that can be analysed independently,” he says. “They are disintegrated. Disintegrated systems can be understood without resorting to the interpretation of consciousness.” In other words, machines can’t be conscious.
Selmer Bringsjord at Rensselaer Polytechnic Institute in Troy, New York, agrees – but for different reasons. He thinks our subjective feeling of being conscious is the outcome of non-material stuff of some sort, and that this is crucial for some of our intelligent behaviour. For him, machines can never possess this essence, so will never be conscious or intelligent in the way that we are.
Anil Ananthaswamy
Models of consciousness
There are many ideas about how consciousness arises. Here are five of the most influential. Although each has its own proponents, they aren’t all mutually exclusive.
Global neuronal workspace
Information entering the brain from the outside world or the body competes for attention in the cortex and a structure in the centre of the brain called the thalamus. If the signal it generates is stronger than signals from other information, it is broadcast across the brain into the global workspace. Only then do you consciously register it.
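The competition-then-broadcast mechanism can be caricatured in a few lines. This is a deliberately crude winner-take-all sketch with invented signal strengths, not a model of the thalamus or cortex: the strongest signal wins the competition and is broadcast to every other module, standing in for conscious registration.

```python
# Crude winner-take-all sketch of the global workspace idea.
# Signal names and strengths are invented for illustration.

def broadcast_winner(signals, modules):
    winner = max(signals, key=signals.get)   # competition for attention
    return {m: winner for m in modules}      # global broadcast to all modules

signals = {"touch": 0.2, "sound": 0.7, "sight": 0.5}
workspace = broadcast_winner(signals, ["memory", "speech", "planning"])
print(workspace)  # every module receives "sound"; the rest never surface
```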
Attention schema
The brain evolved to contain a model of how it represents itself. This attention schema is like a self-reflecting mirror. It is what creates the subjective feeling of consciousness. There is no "ghost in the machine"; consciousness is just a mirage created by sophisticated neural processing.
Predictive processing
The brain is a prediction machine, meaning that what we perceive is the brain's best guesses about the causes of its sensory input. As a result, much of conscious experience and selfhood is based on what we expect, not what is there.
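The "best guess" here can be read as Bayesian inference, and a minimal sketch (with invented priors and likelihoods) shows how expectation can override the evidence: given an ambiguous blob that actually fits "shadow" slightly better, a strong prior expectation of faces still makes "face" the percept.

```python
# Toy Bayesian "prediction machine": the percept is the most probable
# hidden cause of an ambiguous sensory signal. All numbers are invented.

def posterior(priors, likelihoods):
    """Posterior over causes: P(cause | signal) ∝ P(signal | cause) P(cause)."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

priors = {"face": 0.8, "shadow": 0.2}       # we expect faces everywhere
likelihoods = {"face": 0.4, "shadow": 0.6}  # the blob fits "shadow" better
percept = posterior(priors, likelihoods)
print(percept)  # "face" wins despite the weaker sensory fit
```

This is why, on this view, we see faces in clouds: the prior does much of the perceiving.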
Consciousness isn’t confined to brains. It arises in any system as a result of the way information moves between its subsystems. The degree of integration of this information is measured with a metric called phi. Any system with a phi of more than zero is conscious.
Orchestrated objective reduction
Quantum mechanics can explain consciousness. Microscopic structural elements within the brain, called microtubules, can exist as a superposition of all possible states. This quantum system collapses into a single state when the mass of the microtubules in it exceeds a certain threshold. The collapse is what creates consciousness.