Machine Consciousness

Full Title: Machine Consciousness
Author / Editor: Owen Holland (Editor)
Publisher: Imprint Academic, 2003

 

Review © Metapsychology Vol. 8, No. 36
Reviewer: Catherine Legg, Ph.D.

Consciousness is a perennial subject of fascination for human inquiry, and whether it can be reproduced in a human artifact – the question of "Artificial Consciousness" – has recently distinguished itself from the question of "Artificial Intelligence" as a research area in its own right, making it possible to ask whether "AC" will turn out to be a necessary precondition for AI (now arguably one of AI's most intriguing questions). This collection arises from a workshop entitled "Can a Machine be Conscious?", which took place at the Banbury Center, Long Island, in 2001 (though some of the published essays were not presented at the workshop but solicited later). Methodologically, the collection ranges across a wide spectrum, from armchair discussions drawing on a range of philosophical perspectives to technical reports from actual robotics research projects. The result is a stimulating resource for advanced undergraduates or the interested layperson.

Four papers sit on the more purely
theoretic or philosophical end of the spectrum. First of all, Stevan Harnad ("Can
a Machine be Conscious? How?") re-argues the classic Cartesian line that consciousness
is forever closed to mechanical reproduction because of the epistemic inaccessibility
in principle of qualia, also known as "the other minds problem" ("…our forward- and reverse-engineering can only explain how it is that we can do things, not how it is that we can feel things. And that is why the ghost in the machine is destined to continue to haunt us even after all cognitive science's empirical work is done", p. 75).

A more original paper by Susan Blackmore argues that seeking to artificially create consciousness is actually something of a red herring for robotics research, since "ordinary human consciousness is an illusion". To be more specific, it is an illusion "created by memes for their own propagation" (p. 20). Obviously following Dawkins, she claims that we are the only species that supports memetic evolution. For this to happen we must be able to copy each other's memes, which requires us to believe in "other selves" (here Blackmore replaces the good old English word "self" with the neologism "memeplex"), but since this is the only function selves perform, they are entirely fictive. All this is not to say, she acknowledges, that it would be impossible to create a machine subject to the same illusions, but what would be the point? This is a fascinatingly subversive argument, though it's worth noting that, rather than drawing from it Blackmore's eliminativist conclusion regarding the memeplex, one might equally posit selves as themselves memes (and, in the case of a powerful personality – Socrates and Buddha come to mind – even some of the most potentially enduring).

Jesse Prinz, in a paper entitled "Level-Headed Mysterianism and Artificial Experience", explicates his favoured mysterianism as the view that we can give necessary conditions for consciousness and sufficient conditions, but never both. Surprisingly, however, this does not mean we cannot formulate "good, concrete hypotheses about the material basis of consciousness". Prinz sketches his favoured "science of consciousness", which consists of a number of explanatory levels building on the pioneering work of Marr (namely, a "psychological profile", the "algorithmic level", then a number of levels of "neuronal implementation"). Such a multilayered story means, however, that "we can describe the key systems involved in consciousness at varying degrees of abstraction" (p. 120), making it difficult to isolate exactly which levels matter for consciousness, and how, in the way required for necessary and sufficient conditions. He also allows that consciousness could alter without affecting behaviour, so that although we can make predictions about which of the machines we build are conscious, we can never confirm them. (Here a certain assumption regarding Cartesian privacy of the mental strikes again.)

Finally, the editors show a nice
eclecticism by including the provocatively entitled "Borg or Borges" by
cultural critic William Irwin Thompson. This stimulating piece launches a
poetic attack on the very idea of ‘artificial intelligence’ ("It is a
paradox of the work of Artificial Intelligence that in order to grant
consciousness to machines, the engineers first labour to subtract it from
humans…", p. 187), and on the attempt to provide a technological solution
to what Thompson claims are essentially spiritual questions ("Technologists
are closer to paranoids than they are to mystics in the sense that they are
literalists given to perceptions of misplaced concreteness; they always see
spiritual experiences as the products of technology… Mystics flip this literalism over to see technology as a system of externalized metaphors…", p. 188).

On the engineering side, the
collection contains, as mentioned, a number of technical reports on robotic
consciousness projects, though, disappointingly, none seem to show many real
results as yet. Firstly, Rodney Cotterill (Danish Technical University) reports
on a project called CyberChild which "aims to search for the neural
correlates of consciousness through computer simulation" (p. 29). This
system simulates two senses (hearing and touch), and also typical baby-discomforts
such as hunger and wetness. The hope is that by allowing the system to evolve,
it will "ontogenetically acquire novel reflexes", though nothing like
this seems to have happened so far, and Cotterill concludes with the disappointingly weak claim that "The project is still in its very early stages, and although no suggestion of consciousness has yet emerged, there appears to be no fundamental reason why consciousness could not ultimately develop and be observed" (p. 29).

The second real-world project
described is IDA, a US Navy computer program designed to take over the task of
scheduling sailors' work rosters from traditional 'detailers'. Detailing is a task requiring an interesting set of skills, sophisticated both computationally and in terms of maintaining human relationships. The IDA developers use
global workspace theory to conceptualise consciousness, and there is an interesting
discussion of this. Once again, though, the results seem rather minimal, evidenced,
among other things, in a coy use of scare-quotes around the term ‘conscious’
whenever it is attributed to IDA. For instance, the authors write, "Though
IDA does not, as yet, engage in non-routine problem-solving, work on adding
that capability is in progress. She uses her ‘consciousness’ module to handle
routine problems with novel content. All this together makes a strong case… for functional consciousness" (p. 63).

Holland and Goodman, in their "Robots
With Internal Models", take perhaps the most explicitly engineering
approach to achieving machine consciousness. The key to consciousness, they
claim, is a robot’s ability to include itself in its model of the world.
(This is not a new idea, of course.) The authors outline a plan to build a
succession of robots that function in the world by building and exploiting
internal models of ever-increasing sophistication. Once again, however, it would have been good to see the researchers actually implement this plan; so far they appear merely to have worked with a set of ('ARAVQ') simulations at CIT.

As well as the philosophical and
the exclusively engineering approaches to machine consciousness, a number of
papers attempt to bridge the two. Sloman and Chrisley (University of Birmingham),
in "Virtual Machines and Consciousness", begin by arguing that the concept
of consciousness is a cluster concept subject to a Babel of confused claims. It
therefore needs a scientific precisification (as happened with 'warmth' when it became 'temperature'); in this way their claim is a version of the pragmatism of Charles Peirce, though the authors appear unaware of this antecedent. The
authors suggest (like Peirce) that such scientific precisification is best
performed a posteriori, by designing and building virtual-machine
architectures which capture various features of consciousness.

In a clear-eyed discussion with
potential to throw new light on the highly worked-over ‘supervenience’ issue in
analytic metaphysics, the authors claim that virtual machines, though they are based
on or realized in physical mechanisms, are not necessarily describable in the
language of the physical sciences (consider, for example, a chess-playing ‘virtual
machine'). At the same time, bridging laws between virtual machines and physical mechanisms are neither analytic (since they cannot be provided a priori) nor empirical (since they possess a form of necessity once grasped). 'Virtual machine functionalism' (p. 148), they urge, allows for multiple coexisting, independently varying, causally interacting states and processes, as opposed to atomic state functionalism, which allows an organism only one mental state at a time.

Drawing on mathematics, they note that the conscious/non-conscious distinction may be neither a
dichotomy nor a continuum. A third formal possibility is a conceptual space
with many discontinuities. They cash out this insight with the claim that just
as architecture-space contains many niches, correspondingly many different
varieties of mentality are possible, even sketching an ‘architecture-schema’ in
which one might map out some of this space. Their overall aim is to show that
many of our pre-theoretic concepts of mind can be reconstrued as architectures,
rather than (as GOFAI would have it) as algorithms. This ‘structural’
interpretation of mentality, unlike that of many other papers in this volume, transcends
simple Cartesian dualism. (Even qualia are given an architectural explication, and regarding the now well-worn comeback about the logical possibility of zombies, the authors bite the bullet: "it is not clear that anything intelligible is left over", p. 169.) This is an original and very
interesting paper, which, if taken seriously, has the potential to change the
methodology of much philosophy of mind, from seeking to find ‘correct’
conceptual analyses of inherently indeterminate folk mental concepts, to
exploring and experimentally testing spaces of more determinate concepts
discovered a posteriori.

Finally, Aleksander and Dunmall (Imperial
College, London) set themselves the task of crafting a system of axioms to
guide and structure any conceivable test of a machine for ‘minimal’
consciousness. Examples include Depiction: an agent has perceptual states that depict parts of S (a sensorily-accessible world); and Imagination: an agent has internal imaginational states that recall parts of S or fabricate S-like sensations.

A well-written introduction traces
links between the papers in terms of shared themes and/or contrasting takes on
the same questions. One opportunity missed in a book entitled "Machine Consciousness" is any systematic interrogation of the word "machine". All parties represented in this collection assume that the solution to the problem of machine consciousness lies with the concept of consciousness, which in this book is prodded, pushed and pulled in all directions, many of them illuminating.

 

© 2004 Catherine Legg

 

 

Catherine Legg, Ph.D., Department of Philosophy, University of Waikato, New Zealand

Categories: Philosophical, Psychology