Artificial Consciousness
Full Title: Artificial Consciousness
Author / Editor: Antonio Chella and Riccardo Manzotti (Editors)
Publisher: Imprint Academic, 2007
Review © Metapsychology Vol. 11, No. 51
Reviewer: Joel Parthemore
Given the current state of the art in artificial intelligence, it might seem presumptuous to devote a book to artificial consciousness. Intuitively, intelligence seems the less ambitious claim. Despite the success of projects like IBM's Deep Blue, there are no examples of general-knowledge machine intelligences: systems as successful at debating Sartre's understanding of personal responsibility or at writing a limerick as they are at playing chess. Though a certain well-known philosopher (who appears in this volume) has been known to claim in an unguarded moment that his laptop is conscious, consciousness seems to add requirements that go well beyond general knowledge, to a capacity for reflection and self-reflection.
John Taylor writes (p. 24), "machine consciousness is an as yet ill-defined concept, since so far as we know we have no machines that possess it, only billions of humans who are known to be conscious." While some would argue that discussing artificial consciousness before we have a clearer understanding of natural consciousness is putting the cart before the horse, most of the contributors to this volume, including Taylor, would agree with Riccardo Manzotti when he writes (p. 175) that "trying to implement a conscious machine could be a feasible approach to the scientific understanding of consciousness itself." According to another contributor, Vincenzo Tagliasco, researchers in the field of artificial consciousness tend to be engineers more than theoreticians, happy to build things and see what interesting properties they exhibit.
The phrase "artificial consciousness" hides, Tagliasco says, a critical ambiguity: are we discussing *artificial* consciousness: something that looks like natural consciousness if we don't examine it too closely; or artificial *consciousness*: real consciousness arrived at by artificial means? On which word should the emphasis fall? Is a successful artificial consciousness one that cleverly fools us into treating it as a conscious agent (the cleverness being, presumably, in the designer and not the artifact), or is it one that we justifiably treat as being on a par with naturally conscious agents (in which case, at what point does the artificial cease being artificial and become natural)?
This collection is the outcome of an International Workshop on Artificial Consciousness held in 2005 in Agrigento, Italy, and it probably needs to be understood in that light, reflecting both the strengths and the weaknesses of the insights of the people who gathered there. It is not a general introduction to consciousness studies, nor to artificial consciousness studies in particular. It is better approached as a series of thought-provoking essays by some eminent philosophers on the questions of what consciousness is; what the relationship is between ideas like consciousness, awareness, sense, and intelligence; what it would take to create consciousness in an artifact; and what it ultimately means to be, in the words of Peter Farleigh, a "mindful creature". He writes (p. 256): "Mindful creatures appear to act and feel as one. Or at least I believe this to be true of one creature in particular – myself – even if I could be deluded by others around me."
Some common themes run through the collection. One is that appearances can be deceiving. As Tom Ziemke writes, people may be inclined to treat a humanoid robot and a person as having more in common than that person and a snail; whether they do or not turns entirely on what one considers most salient. To be successful, artificial consciousness roboticists may need to pay as much attention to inner workings as to external interactions.
Another is that the scientific study of consciousness cannot be postponed just because the issues are difficult. The relation between the objective (the traditional realm of science) and the subjective (the realm of consciousness) needs to be confronted head-on. As Igor Aleksander writes (p. 79): "Treating phenomenology as the 'hard' part of consciousness simply kicks it out of touch of science into some mystical outfield."
A third is that consciousness is about simulation: being able not just to reflect on actions and responses but to reproduce them in mental re-creations. For simple organisms, purely stimulus-response mechanisms are sufficient. Flexibility is gained by allowing the organism to draw on past experience to arrive at simple inductions and deductions. To go beyond that, Owen Holland et al. say, requires going beyond actual experience to imagine new experiences. The language is strongly reminiscent of Jesse Prinz's description of his proxytype theory of concepts.
Finally, running through all of the essays is the relationship between embeddedness (a conscious agent is located in and interacts with a particular environment) and embodiment (a conscious agent takes a particular physical form, which constrains its interactions with its environment). Embeddedness and embodiment are two sides of the same coin: embeddedness looks at how the environment shapes the agent, embodiment at how the agent shapes its environment. A "brain in a vat" is, to most if not all of these writers, incoherent; an agent must be both richly embedded and embodied if it is to be conscious.
For the philosophically inclined, there are a few surprises. For example, Holland is unabashedly homuncular in describing how a model of an agent manipulating a model of the agent's world can allow the real agent to interact in the real world. Ziemke toys with the idea that consciousness has an unavoidably biological basis: an artificial agent, in order to be conscious, would also need, in some sense, to be alive (in keeping with the tradition from the autopoiesis literature that "cognition is life"). Andrea Lavazza considers the eliminativists' proposal that consciousness is a mistake, that it doesn't really exist: there is no mind, only matter.
If the reader is wondering why all of this matters, Ricardo Sanz et al. suggest a possibility reminiscent of James P. Hogan's novel The Two Faces of Tomorrow (which was written in extensive consultation with Marvin Minsky): we are engineering increasingly complex, interconnected systems that suffer from "butterfly effects." To avert catastrophe, some of the responsibility for these systems' operations must be brought inside the systems themselves. Intelligent systems must be morally responsible ones, and morally responsible ones must have the capacity for conscious reflection.
No one is claiming that any of the projects described in this book has actually achieved consciousness, though Holland suggests that an updated version of his Cronos/Simnos project might do just that. What the writers are claiming is that behaviorism is well and truly dead, that consciousness is no longer a dirty word in cognitive science or philosophy, and that forthcoming technology may do much to change our conception of what it is to be human and what it means to be conscious.
© 2007 Joel Parthemore
Joel Parthemore is a third-year DPhil student studying theories of concepts at the University of Sussex in Brighton, UK. He is a member of the Philosophy of AI and Cognitive Science research group in the Department of Informatics. In his spare time he plays with Linux computer systems. You can find him online at http://www.parthemores.com/research/.