Saturday, May 31, 2014

Philosophy of Mind, Part 1: What would a scientific theory of consciousness look like?

Recently, the physicist Max Tegmark proposed, in a highly technical paper, that consciousness can be thought of as a state of matter with particular informational properties. Although Tegmark has by no means solved the mystery of mental states, papers like this are a giant step in the right direction. At this point in human history, we know enough about the world to glimpse, if not the theory itself, at least the shape of a potential theory of consciousness. We can find our way down the path all the more easily if we have a sense of the destination.

There are many philosophical theories of consciousness, ranging from mostly-abandoned Cartesian theories of dualism to any number of monistic theories insisting that consciousness is entirely compatible with existing physics (I am in this camp). But the discipline of philosophy has proven that it alone does not have the tools to explain consciousness. It reaches the fringes of possible linguistic discourse only to hit the ceiling of "but really, how could the color blue possibly be just a bunch of neurons?" The reason for this ceiling is that consciousness at the level of the brain is an extraordinarily complex phenomenon, and must be understood by correspondingly complex theories of the sort that require advanced mathematics combined with empirical investigation (i.e., scientific theories). What philosophy can do for us is to help us justify why such theories actually constitute an explanation.

Indeed, that's the question the best work in the philosophy of mind aims to address: assuming that consciousness is linked to a precise physical process in the brain, have we really captured all there is to know about it? Therein lies the Hard Problem of consciousness: how can mental states reduce to physical states, when the two seem so fundamentally, categorically different? This has led many philosophers to conclude that consciousness will never be explained scientifically, no matter how well we understand the brain. I strongly disagree.

Defeating the Hard Problem

To defeat such bold claims of helplessness in the face of this problem, the best thing to do is just plow ahead with the science and wait for philosophy to catch up. And one of the things the science needs to explain is why it feels as though there is a hard problem of consciousness in the first place. For I submit that the so-called "Hard Problem" is only an apparent problem; that is, to someone with the right mental model of consciousness, it is not a problem at all. The assertion that "consciousness cannot be physical in nature" is epistemically unwarranted; there is absolutely zero evidential justification for this claim. Such assertions necessarily reduce to the intuition of the person making them, and in this case intuition completely fails.
Why does intuition fail? Let's explore an analogy. Tony Stark has built a computer program named Jarvis, capable of storing, manipulating, and comparing knowledge. Suppose also that Jarvis is capable of some degree of self-reflection - it can gain knowledge about its own implementation by observing itself. Suppose that a variable x is integral to the execution of Jarvis: x is a boolean that tells Jarvis whether or not the current object of focus is red. At some point Jarvis reflects on itself and realizes that it can distinguish red objects from non-red objects; however, Jarvis cannot directly see its own implementation, so it does not know about variable x. Jarvis uses its abstraction algorithm to create a new file to represent the knowledge it has discovered. The file looks like this:

Redness (Object-type: Indivisible Subjective Experience)
  • A basic quality of the appearance of some objects
  • Immediately distinguishable from nonredness
  • Associated with intense heat
  • Associated with mental state M
  • Associated with ...
  • etc. ...
  • Examples of red objects: apples, stop signs, firetrucks, blood, ...

Later, Jarvis finds an explanation of its source code in Tony Stark's database. It incorporates the explanation into its knowledge database as a new file:

Digital Representation of Wavelengths near 650 nm in Jarvis (Object-type: model of physical system)
  • Visual module determines wavelength of light from object
  • Sets variable x to true if 650 nm plus or minus 10 nm, false otherwise
  • The state of x is recorded in physical memory at offset 0x44 from file pointer
  • Other modules can invoke the processor's logic commands to compare x to other booleans and make decisions
  • etc.

Jarvis realizes that this explanation corresponds with its perception of redness, but when it compares the two data files - the introspected file and the external file - Jarvis finds that the two aren't the same. Not only do their contents not match - they are represented as two completely different ontologies. Jarvis can't figure out how to reconcile the two ontologies, or convert between them: an experiential file concerns fundamentally different entities than a file detailing the abstract components of a physical system (a file which, within Jarvis's knowledge base, is itself built out of experiential files). Thus, Jarvis experiences a category mismatch - to Jarvis, source code is not and can never be an introspected object. Jarvis asks itself, "Is this source code red?" and tries to make a direct comparison. But the x-value of the source-code file is false, as the very knowledge of how redness works is not itself a red object. Jarvis grows frustrated, and declares that redness cannot be reduced to mere source code.
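
To make the story concrete, here is a minimal sketch in Python of the two files Jarvis ends up with. Everything in it is an illustrative assumption drawn from the story above - the names, the 650 nm threshold, and the comparison logic are hypothetical, not a real system:

  # Minimal sketch of the Jarvis thought experiment. All names and values
  # are illustrative assumptions taken from the story above.

  RED_NM = 650     # wavelength the visual module flags as red
  TOLERANCE = 10   # plus-or-minus band

  def visual_module(wavelength_nm: float) -> bool:
      """Sets the boolean x: True iff the focused object is near 650 nm."""
      return abs(wavelength_nm - RED_NM) <= TOLERANCE

  # File 1: what introspection yields - an opaque, indivisible quality.
  introspected_redness = {
      "object_type": "Indivisible Subjective Experience",
      "directly_describable": False,
      "associations": ["intense heat", "mental state M"],
      "examples": ["apples", "stop signs", "firetrucks", "blood"],
  }

  # File 2: what the source-code documentation yields - a mechanism.
  external_description = {
      "object_type": "model of physical system",
      "mechanism": "visual module sets x = (|wavelength - 650| <= 10)",
      "storage": "boolean x at offset 0x44 from file pointer",
  }

  # Jarvis's naive comparison: the one shared field already disagrees,
  # and the remaining fields don't even line up - the category mismatch.
  shared = introspected_redness.keys() & external_description.keys()
  print({k: (introspected_redness[k], external_description[k]) for k in shared})
  # -> {'object_type': ('Indivisible Subjective Experience',
  #                     'model of physical system')}

  # "Is this source code red?" The description of the mechanism is not
  # itself a red object, so the detector returns False.
  print(visual_module(wavelength_nm=480.0))  # False

No field-by-field comparison can succeed here, because the two files describe one mechanism in two unrelated vocabularies - which is exactly the point of the analogy.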

The parallel with human qualia is obvious. You can distinguish red objects from non-red objects. You can also introspect on this fact and abstract the objects you classified as red as having the "qualia of redness." You can try to describe the nature of the redness of an object, but you find that you cannot access the "redness" directly. You can list objects that you remember to be red, you can describe your emotional reaction, you can describe the shape and brightness and texture of the redness, but the "redness" itself - whatever it is that induces all these responses and is distinguishable from other colors - remains inaccessible to description. Later, you learn from a textbook the precise neurological events that correspond with the perception of redness. But these neurological events are not themselves red, nor are they even categorically the same as your concept of "redness" - your knowledge of the neurology is composed of your knowledge of physics, chemistry, and biology, which are themselves composed of conceptual structures built from other structures that are ultimately composed of fundamental qualia, like the sound of the words as you heard them in the classroom. Your brain has no hope of converting between these two representations, which use fundamentally different neural circuits. When someone says that redness just is this collection of neurological events, your brain raises a category error and refuses to believe that these things are the same. But it is wrong - it is only the representation of these things in your perception that differs, not the things themselves. The map is not the territory.

But...but...physical objects have charge and mass and so forth, and mental objects don't! Physical objects don't have the same properties as mental objects! Sure they do! The neural circuits corresponding to redness have properties in relation to the rest of the brain that are isomorphic to the properties redness has in your mind's representation of itself! But...that just can't be right... That's your brain raising the category error we just talked about. You're representing one single reality in two different ways, based on two different means of knowledge, and your brain doesn't have the machinery to reconcile the two.

We need to talk about Mary

There is a famous philosophical problem about a scientist named Mary. Mary has spent her whole life in a house that does not contain the color red. She has never before seen red with her own eyes, yet she has studied the underlying biology her whole life, and knows absolutely everything there is to know about the neurochemistry of redness. Yet when she finally goes outside and sees a red bicycle, she seems to have gained new knowledge, namely, what redness actually looks like. Ergo, knowing everything about the underlying neurochemistry does not capture all the knowable facts about redness. Ergo, redness cannot be reduced to physical things. Ergo, physicalism is false. So the argument goes.
What can we say about this, given our above reasoning? Mary has not gained new knowledge about redness. She has, instead, gained a new ability - the ability to represent red in her mind using the actual neural circuits that she has been studying all these years, rather than through the different neural circuits used to represent knowledge about neural circuits! She has also gained the ability to experience the category mismatch. In some ways you could say she has gained knowledge; namely, she gains the indexical knowledge "I am now experiencing redness through its natural neural pathways, and I am now experiencing a category mismatch between this and my knowledge of the underlying neurophysiology, just as I knew I would based on that knowledge." She has gained a new mode of interaction with a single underlying physical reality. The Mary problem is a non-problem.

The Shape of a Scientific Theory of Consciousness

So what should a precise theory of consciousness look like? A good scientific theory accounts for existing data and can make predictions. The existing data is the nature of consciousness as we experience it. Consciousness is composed of qualia. Some of the properties of the quale of redness are listed as follows:
  1. Totally ineffable. Cannot be directly described. Has that "oomph" property that makes it unique among colors, that "reddishness" that everyone just understands but no one can vocalize.
  2. Can be reflected upon and abstracted as an object of experience, which seems to the subject to be fundamentally different from other types of objects in experience.
  3. Consistent across space and time - tomorrow's red looks the same as today's red. Objects whose parts vary in brightness seem consistently of a solid color barring closer inspection; we can separate and recognize redness itself across various lighting scenarios and contexts.
  4. Can elicit an emotional reaction.
  5. Can evoke memories of objects and experiences with the same quale.
  6. Can be directly compared with other qualia of the same type and judged based on similarity.
  7. Can be tied to properties from different qualia (e.g., shape, brightness, distance). Tied to seemingly unrelated qualia in synesthetes.
A scientific theory of consciousness should therefore describe neural circuits in the brain with large-scale mathematical properties isomorphic to the properties listed above. Physically, a quale is a neural circuit and its relation to the rest of the brain, such that the circuit
  1. Cannot be broken up and analyzed by language circuits.
  2. Can be taken as the object of analysis by other neural circuits devoted to conscious attention.
  3. Is robust against variations of visual input in time and space.
  4. Can activate emotional circuits.
  5. Can be incorporated in memories.
  6. Can be compared alongside and distinguished from similar circuits.
  7. Can be bundled with other such circuits to form compound objects.
With any luck, we can make these into mathematically rigorous properties of neural networks, predict what they should look like on scans, and then look for these networks in the brain. Once a strong evidential base is established, all that remains for the theory is a series of "phenomenological bridge rules" to unify our two models of reality: the basic qualia of experience with the abstract structures of physics. Specifically, our theory of consciousness must, implicitly or explicitly, include a series of rules like "neural circuits of type X are red qualia," "neural circuits of type Y are pain qualia," etc. This step cannot be avoided, as it is necessary to logically link physics and mental states into a complete model of the nature of reality. This is the step at which some philosophers would hesitate, due to the seeming categorical mismatch between the physical and the mental - but as we've seen, this mismatch is accounted for in the model itself! There should be no logical, epistemic, or metaphysical reason that we can't include such rules in our theory; they are the best explanation of the empirical correlation between brain states and mental states.
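
As a toy illustration (a sketch only - the pairing below simply restates the two seven-item lists above, and the circuit-type labels are hypothetical, not actual neuroscience), the isomorphism claim and the bridge rules might be organized like this:

  # Toy sketch: the two seven-item lists above, paired index by index.
  # All labels are illustrative scaffolding, not real neuroscience.

  phenomenal_properties = [
      "ineffable; not directly describable",
      "abstractable as an object of experience",
      "consistent across space and time",
      "can elicit emotional reactions",
      "can evoke memories",
      "comparable with qualia of the same type",
      "bindable with other qualia (shape, brightness, ...)",
  ]

  circuit_properties = [
      "cannot be decomposed by language circuits",
      "can be taken as an object by attention circuits",
      "robust against input variations in time and space",
      "can activate emotional circuits",
      "can be incorporated in memories",
      "comparable with and distinguishable from similar circuits",
      "bundlable with other circuits into compound objects",
  ]

  # The isomorphism claim: each phenomenal property maps one-to-one onto
  # a large-scale property of the circuit's relation to the rest of the brain.
  isomorphism = dict(zip(phenomenal_properties, circuit_properties))

  # Phenomenological bridge rules: identities, not mere correlations.
  # Hypothetical circuit-type labels standing in for real classifications.
  bridge_rules = {
      "circuit_type_X": "red quale",
      "circuit_type_Y": "pain quale",
  }

  for phenomenal, circuit in isomorphism.items():
      print(f"{phenomenal}  <->  {circuit}")

The real work, of course, lies in replacing each string with a mathematically precise property and each hypothetical circuit type with an empirically identified network.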

Indeed, at this level of analysis, the ordinary philosophical language of epistemology and metaphysics starts to come apart at the seams, as philosophical analysis is predicated on the idea that the concepts involved are atomic and independent of subjective experience, when in fact they are realized in the brain as certain persistent neural circuits with their own interactions and substructures. Metaphysics, it would seem, is an artifact of our brain's ability to abstract away general structures across the objects of its experience; in our unified model of consciousness and physics, there seems to be no reason to take our metaphysical intuitions as real, objective, independent properties of external reality. They are merely properties of our experience of reality, and the map is not the territory.

Now, it still might be possible to add metaphysics to our model of reality. Maybe we experience metaphysical intuitions because these are real facets of existence! Then again, maybe they are artifacts of the structure of the brain - we can find out by analyzing our scientific model of consciousness. A theory of consciousness might help us provide real, definite answers to classic metaphysical questions. In any case, as Kant would have said, it seems that metaphysics should be limited by the boundaries of possible experience.

The Prospects of a Scientific Theory of Consciousness

With a well-developed theory of consciousness, we can answer philosophical and ethical questions that today seem almost beyond the reach of human reason. We could solve age-old metaphysical and epistemic questions. We could determine which animals suffer the most intensely and adjust our ethics accordingly. We could recognize whether or not an artificial intelligence is actually conscious. Maybe we could gain an inkling of what it is like to be a bat. Mental disorders could be treated more effectively. Perhaps we could gain a window into how someone with autism experiences the world.

Example application: the inverted spectrum problem. Does red seem to me as green seems to you? With a model of consciousness, we need only compare the relevant neural circuits in our two brains. The answer is probably no: red probably looks more similar than different for the two of us, with some variation.
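
A minimal sketch of what that comparison could look like, assuming (purely for illustration) that each person's red circuit can be summarized as a vector of its relational properties - emotional coupling, memory links, distances to neighboring color circuits, and so on. All numbers below are invented:

  import math

  # Hypothetical feature vectors summarizing how each brain's color
  # circuits relate to the rest of that brain. Values are invented.
  red_me    = [0.91, 0.40, 0.75, 0.20]
  red_you   = [0.88, 0.45, 0.70, 0.25]
  green_you = [0.30, 0.85, 0.20, 0.60]

  def cosine_similarity(a, b):
      dot = sum(x * y for x, y in zip(a, b))
      norm_a = math.sqrt(sum(x * x for x in a))
      norm_b = math.sqrt(sum(y * y for y in b))
      return dot / (norm_a * norm_b)

  # If my red circuit relates to my brain much the way your red circuit
  # relates to yours - and not the way your green circuit does - then our
  # spectra are, to that approximation, not inverted.
  print(cosine_similarity(red_me, red_you))    # ~0.997
  print(cosine_similarity(red_me, green_you))  # ~0.64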

Though we have a long way to go, the potential for discovery is nearly limitless. Neuroscience is due for a revolution. Grasping consciousness won't be easy, but it is well within the reach of human knowledge.
