(Originally posted by Pat in 3/2012)
A review of Gödel, Escher, Bach by Douglas Hofstadter
Like I Am a Strange Loop, only more so, Gödel, Escher, Bach is a very uneven work.
On
the one hand, Hofstadter is a very brilliant man, and he makes
connections between formal logic, artificial intelligence, cognitive
science, and even genetics that are at once ground-breaking and (in
hindsight) obviously correct. GEB makes you realize that it may
not be a coincidence that DNA, Gödel's theorems, and the Turing test
were discovered in the same generation—indeed, it may not simply be that
technology had reached a critical point, but rather that there is a
fundamental unity between formal logic, computers, and self-replication,
such that you will either understand them all or you will understand none of them.
On the other hand, GEB is
filled with idiotic puns and wordplay that build on each other and get
more and more grating as the book goes on (reading “strand” backwards gives “dnarts”, which he glosses as a DNA rapid-transit system, etc.), and it often digresses into
fuzzy-headed Zen mysticism (the two are combined when “MU-system
monstrosity” becomes “MUMON”). Worst of all, between each chapter and
the next there is a long, blathering dialogue between absurd,
anachronistic characters that is apparently supposed to illuminate the
topics of the next chapter, but in my experience only served to bore and
frustrate. (Achilles is at one point kidnapped by a helicopter; that
should give you a sense of how bizarre these dialogues become.)
Hofstadter loves to draw diagrams, and while a few of them are genuinely
helpful, most of them largely serve to fill space. He loves to talk
about different levels of analysis, different scales of reduction (and
so do I); but then in several of his diagrams he “illustrates” this by
making larger words out of collections of smaller words. If he did this
once, I could accept it; twice, I could forgive. But this happens at
least five times over the course of the book, and by then it's simply
annoying.
Much
of what Hofstadter is getting at can be summarized in a little fable,
one which has the rare distinction among fables of actually being true.
There
was a time, not so long ago, when it was argued that no machine could
ever be alive, because life reproduces itself. Machines, it was said,
could not do this, because in order to make a copy of yourself, you must
contain a copy of yourself, which requires you to be larger than yourself. A mysterious elan vital was postulated to explain how life can get around this problem.
Yet in fact, life's solution was much simpler, and also much more profound: compress the data. To copy a mouse, don't try to store a whole mouse inside the mouse; devise a system of instructions for assembling a mouse, and store that instead. And indeed, this system of instructions is what we call DNA. Once you realize this, writing a self-replicating computer program is a trivial task. (In UNIX bash I can write it in a single line: make an executable script called copyme containing the one command cp copyme copyme$$, where the $$ expands to the ID of the current process and so gives each copy a unique name.) Making
a self-replicating robot isn't much harder, given the appropriate
resources. These days, hardly anyone believes in elan vital, and
if we don't think that computers are literally “alive”, it's only
because we've tightened the definition of “life” to limit it to evolved
organics.
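To make the copyme trick concrete, here is a minimal sketch (my own illustration, not the exact one-liner above: I use $0, the path of the running script, so that each copy replicates itself rather than re-copying the original):

    #!/bin/bash
    # copyme: a minimal self-replicating script.
    # "$0" is the path of the script being run, and $$ is the ID of the
    # process running it, so each run writes out a uniquely named clone
    # of whichever copy you happened to execute.
    cp "$0" "$0.$$"

Make it executable (chmod +x copyme), run it a few times, and the directory fills with copies, each of which can do the same. Nothing mysterious is required, which is exactly the point.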
Hofstadter also points out that we often tighten the
definition of “intelligence” in a similar way. We used to think that any
computer which could beat a competent chess player would have to be of
human-level intelligence, but now that computers regularly beat us all
at chess, we don't say that anymore. We used to say that computers could
do arithmetic, but only a truly intelligent being could function as a
mathematician; and then we invented automated theorem-proving. In this
sense, we might have to admit that our computers are already intelligent,
indeed for some purposes more intelligent than we are. To perform a
10-digit multiplication problem, I would never dream of using my own
abilities; computers can do it a hundred times faster and be ten times
as reliable. (For 2 digits, I might well do it in my head; but even then
the computer is still a bit better.) Alternatively, we could insist
that a robot be able to do everything a human can do; but that is surely only a matter of time.
Yet even then, it seems to me that there is still one critical piece missing, one thing that really is essential
to what I mean by “consciousness” (whether it's included in
“intelligence” is less clear; I'm not sure it even matters). This is
what we call sentience, the capacity for first-person qualitative experiences of the world. Many people would say that computers will never have
this capacity (e.g. Chalmers, Searle); but I wouldn't go so far as
that. I think they very well might have this capacity one day—but I
don't think they do yet, and I have no idea how to give it to them.
Yet, one thing troubles me: I also have no idea how to prove that they don't already have it. How do I know, really, that a webcam does not experience redness? How do I know that a microphone does not hear loudness? Certainly
the webcam is capable of distinguishing red from green, no one disputes
that. And clearly the microphone can distinguish different decibel
levels. So what do I mean, really, when I say that the webcam doesn't see redness? What is it I think I can do that I think the webcam cannot?
Hofstadter continually speaks, in GEB and in Strange Loop, as
if he is trying to uncover such deep mysteries—but then he always stops
short and exchanges the deep question for a simpler one. “How does a
physical system achieve consciousness?” becomes “How does a program
reference itself?”; this is surely an interesting question in its own
right—but it's just not what we were asking. Of course a
computer can attain “self-awareness”, if self-awareness means simply the
ability to use first-person pronouns correctly and refer meaningfully
to one's internal state—indeed, such abilities can be achieved with
currently-existing software. And we could certainly make a computer that
would speak as if it had qualia; we can write a program that
responds to red light by printing out statements like “Behold the
ineffable redness of red.” But does it really have qualia? Does it really experience red?
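Just to underline how cheap that kind of talk is, here is a toy sketch (entirely my own invention, and nothing like what a real vision system does):

    #!/bin/bash
    # redness: a toy that talks as if it had qualia. Given red, green,
    # and blue values from 0 to 255, it waxes poetic whenever the red
    # channel dominates. It produces the right words, and nothing more.
    r=${1:-0}; g=${2:-0}; b=${3:-0}
    if [ "$r" -gt "$g" ] && [ "$r" -gt "$b" ]; then
      echo "Behold the ineffable redness of red."
    else
      echo "Nothing here worth waxing poetic about."
    fi

Feed it ./redness 220 30 40 and it delivers its little speech on cue; whether anything in that process experiences red is exactly the question left standing.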
If
you point out I haven't clearly defined what I mean by that, I don't
disagree. But that's precisely the problem; if I knew what I was talking
about, I would have a much easier time saying whether or not a computer
is capable of it. Yet one thing is clear to me, and I think it should
be clear to you: I'm not talking about nothing. There is this experience we have of the world, and it is of utmost importance; the fact that I can't put it into words really is so much the worse for words.
In
fact, if you're in the Less Wrong frame of mind and you really insist
upon dissolving questions into operationalizations, I can offer you one:
Are computers moral agents? Can a piece of binary software be held morally responsible for its actions? Should we take the interests of
computers into account when deciding whether an action is moral? Can we
reward and punish computers for their behavior—and if we can, should
we?
That last question might be a little easier to answer,
though we still don't have a very good answer, and even if we did, it
doesn't quite capture everything I mean to ask in the Hard Problem. It
does seem like we could make a robot that would respond to reward and
punishment, would even emulate the behaviors and facial expressions of
someone experiencing emotions like pride and guilt; but would it really feel pride
and guilt? My first intuition is that it would not—but then my second
intuition is that if my standards are that harsh, I can't really tell if
other people really feel either. This in turn renormalizes into a third intuition: I simply don't know whether a robot programmed to simulate all the expressions of guilt would actually be feeling it. I don't know whether
it's possible to make a software system that can emulate human behavior
in detail without actually having sentient experiences.
These are
the kinds of questions Hofstadter always veers away from at the last
second, and it's for that reason that I find his work ultimately
disappointing. I have gotten a better sense of what Gödel's theorems are
really about—and why, quite frankly, they aren't important. (The fact
that we can construct within a formal system X a sentence that says “this sentence is not a theorem of X” is really not much different from the fact that I
myself cannot assert “It's raining but Patrick Julius doesn't know that”
even though you can assert it and it might well be true.) I
have even learned a little about the history of artificial
intelligence—where it was before I was born, compared to where it is now
and where it needs to go. But what I haven't learned from Hofstadter is
what he promised to tell me—namely, how consciousness arises from the
functioning of matter. It's rather like my favorite review of Dennett's Consciousness Explained: “It explained a lot of things, but consciousness wasn't one of them!”
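(For the formally inclined, the self-referential sentence above is just the standard Gödel sentence. In the usual textbook notation rather than Hofstadter's TNT, and assuming X is consistent, recursively axiomatized, and strong enough to do arithmetic, the diagonal lemma yields a sentence G with

    X \vdash G \leftrightarrow \neg\,\mathrm{Prov}_X(\ulcorner G \urcorner),

and the first incompleteness theorem is then that X \nvdash G.)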
Gödel, Escher, Bach is
an interesting book, one probably worth reading despite its unevenness.
But one thing I can't quite figure out: why did this, of all books, become a Pulitzer Prize-winning, bestselling magnum opus?
I guess that's just another mystery to solve.