Originally printed in the JULY-AUGUST 2001 issue of Quest magazine.
Citation: Cartmill, Matt. "Do Horses Gallop in Their Sleep: The Problem of Animal Consciousness." Quest 89.4 (July-August 2001): 124-131.
By Matt Cartmill
Let me propose a thought experiment. Imagine, if you will, that there's a certain clump of nerve cells in the brain that's essential for conscious awareness. Now suppose that a certain drug suppresses neural activity in just this nucleus, with no effect on the rest of the brain. Subjects who take this drug do things as usual, but they experience nothing. The drug converts them into sleepwalkers. Finally, imagine that I've developed a new form of this drug, which has permanent effects. It abolishes consciousness forever, with no effect on behavior. I want to test it on you. How much will you charge to take it?
I think the question answers itself. Spending your life as a sleepwalker is equivalent to being dead, and so you will charge me whatever price you would charge to commit suicide.
I offer this thought experiment to dispel the notion that conscious awareness is too metaphysical and subjective a phenomenon for science to concern itself with. The phenomenon of consciousness is the source of all value in our lives. As such, it should be at the top of the scientific agenda. Yet despite its fundamental importance, consciousness is a subject that most scientists are reluctant to deal with. We know practically nothing about either its mechanisms or its evolution. In fact, many distinguished scientists and philosophers believe that consciousness has no evolutionary history, because they think that human beings are the only creatures that have it. Although most scientists will admit in private that our close animal relatives probably have mental lives something like ours (because, after all, they have bodies and brains and behavior that resemble ours), a lot of scientists are reluctant to say so plainly and publicly; and those who do can count on being accused of sentimentality and anthropomorphism.
If you have a dog, you have probably had the experience of seeing your dog search out a favorite toy and bring it to you in hopes of getting you to play with him. It's hard even to describe these familiar experiences without saying things like, "The dog was trying to find his ball," or "The dog wanted me to play with him." But scientists aren't supposed to say things like that, at least when we have our lab coats on. If we discuss such things at all, we prefer to do so in some way that doesn't involve attributing intentions or any other mental states to the dog.
There are at least two ways we can do this. First, we can use clumsy behavioral circumlocutions for mental language. Instead of saying, "The dog looked for his ball until he found it," we can say something like, "The dog exhibited repeated bouts of investigative behavior, which ceased after he contacted the ball." This somehow manages to suggest that the dog wasn't thinking about the ball while he was looking for it, and that he didn't perceive anything when he got it in his mouth.
Second, if we find these circumlocutions silly and tedious, we can adopt some variant of what is sometimes called "logical behaviorism," in which the mental words are still used but they are redefined in terms of the probabilities of certain behaviors. In this view, a dog's intentions and desires and beliefs turn out, when properly understood, not to be something inside the dog, but theoretical constructs pinned on the dog by a human observer. Therefore, the human observer can know whether the dog has intentions and desires and beliefs, but the dog can't.
Why Not Attribute Consciousness to Animals?
Why do scientists and philosophers go through all these contortions to avoid attributing mental states to animals? There are several reasons, some of which are better than others. There's no doubt that sentimentality and uncritical anthropomorphism are real temptations, and that they should be avoided in describing and analyzing the behavior of nonhuman organisms. A lot of us succumb to these temptations. We all know people who insist on telling you what kind of music their begonia likes or what their cat thinks about Rush Limbaugh. These people are mistaken. And scientists sometimes make similar mistakes. Some of the early Darwinians in particular were guilty of this sort of thing. Because Darwin's opponents often cited the mental and moral differences between people and beasts as reasons for rejecting the whole idea of evolution, many of his early followers tried to play down those differences by repeating anecdotes they had heard about the nobility of dogs and the self-sacrifice of chickens.
The British psychologist C. Lloyd Morgan was dismayed by this uncritical attribution of human mental states to animals, and he tried to put a stop to it. In 1894, Morgan laid down the following law:
In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale.
"Higher" here turns out to mean "humanlike," as it often did in the nineteenth century. Successive generations of experimental psychologists have adopted this dictum as a fundamental axiom called Morgan's Canon. It's generally thought of as a special case of Occam's Razor, the principle that you shouldn't make up entities unless you have to. By this view, we are required to deny mental events in animals whenever we can, in the name of parsimony.
All this sounds reasonable, but there's a fundamental flaw in it. Because we have mental events, we already know that there are such things in the universe. Denying them to animals therefore doesn't save anything; we have the same number of entities on our hands no matter what we decide about animal minds. So Occam's Razor doesn't provide any support for Morgan's Canon. In fact, some of the animal rights philosophers claim that Occam's Razor is on their side. They argue that if we're going to invoke intentions, desires, beliefs, and other mental phenomena in accounting for our own actions, we should explain other animals' behavior in similar terms whenever we can--again, in the name of parsimony.
The problem with Morgan's Canon comes into sharp focus if we transfer the argument from the brain to the kidney. Consider this version:
In no case may we interpret an animal's urine as the outcome of humanlike biochemical processes, if we can find any other way of explaining it.
If Morgan's Canon represents a safe assumption, so does this one. But it's obvious that this version is ridiculous, and that physiologists would think I was crazy if I insisted they adopt this rule to avoid the temptations of anthroporenalism. Then why does Morgan's Canon seem so much more plausible than this one? Are neurologists just more gullible than urologists? Or is there something special about events in the brain that makes them different from events in the kidneys?
Part of the answer is that we don't care about kidneys the way we care about brains, because brain events are a source of human status and kidney events are not. Our mental abilities are markers of the moral boundary between animals and people. Because nonhuman animals lack some of those mental abilities, we regard them as property, to be used for our ends in any way we choose--on the dinner table, or in scientific experiments, or transformed into soap and shoes and lampshades. The only moral constraint that we observe on our use of other animals is an obligation not to make them suffer. And we acknowledge that duty only because we believe that at least some of the animals are on our side of the second big line we draw across the moral landscape--the boundary between sentience and non-sentience, between things that are conscious and things that aren't. So both of our major moral boundaries are defined by things that go on in the brain.
Up to this point, I have been assuming that mental events are, or are produced by, events in the brain. Scientists rarely question this assumption, but philosophers question it a lot. Brain events, they point out, are objective and public; mental events are subjective and private. This is the other crucial difference between the brain and the kidneys--and the other source of scientists' qualms about the question of animal consciousness.
The intrinsic subjectivity of consciousness makes scientists uneasy. Being conscious is the same thing as having private experiences; and the scientific method is fundamentally committed to the assumption that private experiences don't count as evidence. Only publicly accessible and repeatable experiences have that status. If somebody makes a claim that you can't check out for yourself, you're not obliged to take it seriously. This makes science constitutionally antiauthoritarian, which is good; but it also makes it unreceptive to claims about consciousness and its contents. Most of the recent literature on the subject of consciousness is not really about consciousness at all, but about either neurology or behavior. These are public phenomena, and scientists know how to deal with them. So they spend a lot of time trying to convince themselves that studying these things is somehow the same thing as studying consciousness--like the drunk in the story who lost his wallet in Central Park, but went looking for it in Times Square because the light was better there.
Artificial Intelligence versus Human Essence
The field of computer science called artificial intelligence grew out of these assumptions. In 1950, the English computer theorist Alan Turing offered a famous test for telling whether machines can think. He called it "the imitation game." Suppose, he said, that we can write a program that will exchange messages with you. If, after five minutes of sending messages back and forth, you can't tell whether you've been chatting with a human being or a computer, then the machine has a human mind--because that's what having a human mind means: being able to carry on a human conversation. What other test could there be? And Turing predicted that some of us would see such machines within our lifetimes. "I believe," wrote Turing, "that in about fifty years' time it will be possible to program computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning."
It's exactly fifty years later now, and 10⁹ bits equals around 128 megabytes. You can buy the supercomputer of Alan Turing's fondest dreams off the shelf at Sears for the price of a beat-up used car. Far bigger machines can be had at higher prices. But none of them has yet been programmed to play the imitation game successfully. What went wrong?
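Turing described the imitation game only in outline. For concreteness, here is a minimal sketch of the protocol in Python; the names (imitation_game, ask_judge) and the structure of the session are illustrative inventions, not anything Turing specified.

    import random

    def imitation_game(ask_judge, respondents, n_rounds=5):
        """One session of the imitation game (a sketch, not a real test).

        respondents maps "human" and "machine" to answer functions. The
        judge sees them only as anonymous labels "A" and "B", assigned at
        random, and must name the machine at the end of the session.
        """
        labels = ["A", "B"]
        random.shuffle(labels)
        hidden = {labels[0]: "human", labels[1]: "machine"}

        transcript = []
        for _ in range(n_rounds):
            for label in ("A", "B"):
                question = ask_judge(label, transcript)
                answer = respondents[hidden[label]](question)
                transcript.append((label, question, answer))

        guess = ask_judge("verdict", transcript)  # judge names the machine
        return hidden[guess] == "machine"

Turing's benchmark, restated in these terms: a program succeeds if, over many such sessions, the judge identifies the machine correctly no more than about 70 percent of the time.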
I think what went wrong wasn't just Alan Turing but the whole Western conception of what it means to be human. Our traditions encourage us to define ourselves not by what we are, but by how we are different: to think of the human essence not in terms of our properties, but in terms of our peculiarities--the small subset of human traits that we don't share with any other creatures. Many of these human peculiarities hinge on our unique skill in manipulating symbols, and that also happens to be what philosophers get paid for doing. It's not surprising, therefore, that philosophers and professors from Plato on down to Noam Chomsky have told us that juggling words and numbers is the defining excellence that makes people special, and that animals that lack it are mere objects. Marcus Aurelius summed it up in this maxim: "Use animals and other things and objects freely; but behave in a social spirit toward human beings, because they can reason."
Many Western thinkers have gone further and insisted that because animals can't talk, their mental lives are defective in big ways, or even nonexistent. "Thinking," wrote Wittgenstein, "is essentially the activity of operating with signs." That view of thinking naturally appeals to college professors, who sometimes get so consumed by operating with signs that they wander around their campuses talking to themselves and tripping over shrubs. And since nonhuman animals aren't very good at operating with signs, many professional types have been reluctant to grant that beasts can have mental lives at all.
Because Western thinkers have always attached so much importance to juggling symbols as a marker of human status, and so little importance to walking around without tripping over things (which couldn't be very important, because a donkey can do it just as well as a philosopher), it was inevitable that when we managed to build a symbol-juggling engine--a machine that could beat us all at chess and prove the four-color theorem--our philosophers would try to persuade us that it was human. Once we taught it to play the imitation game, they assured us, it would be just like one of us. But so far, it has proved impossible to program such an engine to succeed at the imitation game. The reason is that, although a computer has many of the symbol-manipulating abilities that we prize so highly, it lacks the subtler and more mysterious skills that come with being a sentient animal, inhabiting and experiencing the world in a living body.
Computer metaphors have come to dominate our thinking about brain processes and mental events. They predispose us to believe that mental events are algorithmic--that is, that they are produced by executing a programmatic list of logically connected instructions--and that digital computers (which are algorithm machines) will eventually become conscious if only we can run the right program on the right kind of hardware with the proper stored data. But as the philosopher John Searle has argued forcefully, there are good reasons for thinking that conscious awareness isn't, and can't be, produced by running a computer program.
A digital computer is essentially a grid of slots, each of which can be either full or empty. We think of these as ones and zeroes. Some of these slots are linked causally by rules of operation, which provide that when a certain pattern shows up in some area, the contents of other slots are changed in various ways, which may depend on the contents of yet other slots. In modern computers, the ones and zeroes are represented by electrical charges in semiconductors, but they could be represented by anything: holes punched in cards, or beads on wires, or eggs in egg cartons. The medium doesn't matter: what's important is the algorithm. All the operations that you do on a computer could be done in exactly the same way by giving a team of people written instructions for moving eggs around in a football field full of egg cartons, though of course it would take longer. (By the way, a football field full of egg cartons has about 1 megabyte of RAM.)
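The medium-independence claim is easy to demonstrate in miniature. Below is a sketch in Python (the class names and the toy rule of operation are made up for illustration, not any real machine's architecture) in which the same rule is run over two different media--simulated electrical charges and simulated egg cartons--and yields identical results.

    class ChargeStore:
        """Slots as 'electrical charges': a plain list of 0/1 integers."""
        def __init__(self, bits):
            self.slots = list(bits)
        def get(self, i):
            return self.slots[i]
        def set(self, i, value):
            self.slots[i] = value
        def read(self):
            return list(self.slots)

    class EggStore:
        """The same slots as egg cartons: an egg is a one, an empty cup a zero."""
        def __init__(self, bits):
            self.cups = ["egg" if b else "empty" for b in bits]
        def get(self, i):
            return 1 if self.cups[i] == "egg" else 0
        def set(self, i, value):
            self.cups[i] = "egg" if value else "empty"
        def read(self):
            return [self.get(i) for i in range(len(self.cups))]

    def run(rule, store, steps):
        """Apply the same rule of operation, whatever the medium."""
        for _ in range(steps):
            rule(store)
        return store.read()

    def toy_rule(store):
        # When a certain pattern shows up (slots 0 and 1 both full),
        # change the contents of another slot (flip slot 2).
        if store.get(0) and store.get(1):
            store.set(2, 1 - store.get(2))

    bits = [1, 1, 0, 1]
    assert run(toy_rule, ChargeStore(bits), 3) == run(toy_rule, EggStore(bits), 3)

Nothing about the outcome depends on what fills the slots; only the pattern and the rules matter, which is exactly the point of the egg-carton example.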
This fact poses problems for computational theories of the mind. If moving electrical charges around in a certain pattern can produce subjective awareness and bring a mind into existence, so can moving around a collection of eggs in the same pattern; and if I knew how many eggs to use and what rules of operation to use in moving them, I could make my egg collection think it was Elizabeth Dole or the Wizard of Oz. I could get the same effects by making chalk marks on a blackboard, or waving semaphore flags, or singing songs, or tap dancing. All these processes can be computationally equivalent, with algorithms that correspond in every detail; but none of them seems like a plausible way of producing a subjective awareness. And since a digital computer is just another way of instantiating an algorithm, it seems impossible for such a device to become conscious. If we ever succeed in creating an artificial intelligence, it's going to have to be something more than just an algorithm machine.
How Is Consciousness Produced?
If consciousness isn't algorithmic, then how is it produced? We don't know. The machineries of consciousness are an almost perfect mystery. Neuroscientists and computer scientists have produced a lot of useful and suggestive models of how the brains of animals process sensory data and judge and discriminate among stimuli. We know that such mechanisms exist in our own brains, and that we need them to perceive the world. But although these perceptual mechanisms are necessary for consciousness, they aren't sufficient, because we can perceive things and respond to them without being aware of them.
The most spectacular example of this is sleepwalking. Many people--as many as 30 percent of all children and 7 percent of adults--sometimes get up and start walking around during the deepest, most unconscious part of sleep. Typically, sleepwalkers open their eyes, sit up in bed with a blank facial expression, pluck aimlessly at the bedclothes, and then rise up and walk. They ignore objects and people nearby, but they usually manage to get around without bumping into things. They may do very complicated and distinctively human things--talk, make phone calls, get into a car and drive off, or even play musical instruments. If you try to wake them up, they struggle violently to get away from you; and if you succeed in awakening them, they're totally confused and have no recollection of what they were doing or how they got there.
The phenomenon of sleepwalking shows that you can get surprisingly complicated and even distinctively human behavior without consciousness. This makes it much harder for us to find out anything about animal awareness. How do we know that animals aren't simply sleepwalking all the time, even when they appear to be awake? Do wolves hunt and horses gallop in their sleep, in the same way that a human somnambulist gets into a car and drives off on the freeway at 65 miles an hour? When the cock crows in the morning, is the farmer the only animal on the farm that wakes up? And if we can do so many things without being conscious, then why did consciousness evolve?
Some people have argued that consciousness confers no adaptive advantage whatever; it's just an incidental side effect of the neural events that produce behavior. But I think that idea can be rejected for Darwinian reasons. If consciousness were a useless epiphenomenon, natural selection would have operated to get rid of it somehow, since we apparently have to pay a high price to maintain it.
The price we pay for consciousness is unconsciousness, of the special kind we call sleep. Most animals don't sleep. Invertebrates and cold-blooded vertebrates usually have daily periods of torpor when they hide and rest, but most of them show little or no correlated change in neural activity. Among vertebrates, true sleep, involving a shift from fast to slow waves in the forebrain, appears to be limited to mammals and birds, though there are hints of it in some reptiles.
Mammalian sleep is so dangerous, complicated, and time-consuming a performance that we feel sure it must have a payoff of some sort, but it's not really clear exactly what it is. On the face of it, it sounds like a bad idea to spend about a third of the day plunged into a limp, helpless trance state that leaves you unable to detect or react to danger. Some argue that sleep serves to conserve energy, which is why we see it only in warm-blooded animals. The trouble with this theory is that mammalian sleep uses almost as much energy as wakeful resting. During eight hours of sleep, a human being saves only about 120 calories. These savings don't seem worth spending a third of your life dead to the world. Another theory holds that sleep is a defense against predators; it's nature's way of telling us to hide during those times of day when we don't need to be active. The main problem with this story is that birds and mammals that are too big to hide still have to flop down and fall asleep every day, right out there on the prairie, exposed to every predator in the world. They do it as little as possible--a horse sleeps only about 3 hours a day, of which only 20 minutes is spent lying down--but they'd be better off if they didn't do it at all. They do it because they have to do it, not to save energy or avoid predators.
Sleep appears to be something imposed upon us, not by our environmental circumstances, but by the needs of the brain itself. Consciousness damages or depletes something in the waking brain, and we can't keep it up indefinitely. If we're forced to stay conscious around the clock, day after day, with rest but no sleep, we soon start manifesting pathological symptoms, beginning with irritability and proceeding through fainting and hallucinations to metabolic collapse and death.
If sleep serves to restore something that is damaged or depleted by things that go on when we are conscious, it seems reasonable to think that animals that have to sleep as we do are conscious when they are awake. It seems significant in this connection that animals that are (probably) never conscious don't sleep, whereas sleep is compulsory for the animals that we know are sometimes conscious (that is, people) and for those nonhuman animals that we suspect for behavioral reasons may have mental lives something like ours. The natural inference is that the waking state in these animals is also something like ours, that it includes mental events and awareness of the world, and that the subjective differences for them between being asleep and being awake parallel our own as closely as the objective (neurological and behavioral) differences do.
The Evidence for Consciousness
Because we can't directly observe the contents of animal minds, the evidence for animal consciousness is necessarily indirect. But it seems at least as persuasive as the indirect evidence that we have for other unobservable phenomena--for example, the Big Bang, or neutrinos, or human evolution. The philosophers and scientists who refuse to acknowledge that dogs feel pain when you kick them seem to me to suffer from the same kind of ingeniously willful ignorance that we see in creationists who reject the notion of evolution because they have never seen a fish turn into a chicken. I am inclined to believe that these philosophers and scientists are not so much concerned about understanding the universe as they are about looking tough-minded and spurning the temptations of anthropomorphism.
To most of us, the temptations of anthropomorphism don't look quite so dangerous as all that. Our close animal relatives, after all, are anthropomorphic in the literal sense of the word, which means "human-shaped." They have organs like ours, placed in the same relative positions. And interestingly enough, they seem to recognize the same correspondences we do. Despite the conspicuous differences in sight, feel, and smell between a human body and a dog's, a friendly dog will greet you by licking your face and sniffing your crotch, and a murderously angry dog will go for your throat--just as they would behave in similar moods toward members of their own species. These are sophisticated homology judgments; and they encompass not only anatomy, but behavior as well. Just as we anthropomorphize dogs, horses, and other animals, they cynomorphize and hippomorphize us--and each other--right back in the other direction.
Psychological accounts of these facts often treat them as mistakes: category errors, resulting from what the ethologist Heini Hediger called the "assimilation tendency" in social animals. I suggest that the assimilation tendency isn't a mistake, but an accurate perception of the way things are. In a world inhabited by closely related species, it confers an adaptive advantage. A gazelle that can tell when a lioness is thinking about hunting is less likely to be eaten; a lioness that can tell when a gazelle is thinking about bolting is less likely to go hungry. A man who doesn't notice that a horse is furiously angry, or a horse that can't make that sort of judgment about a human being, is correspondingly less likely to have offspring. Insofar as anthropomorphism recognizes and incorporates these facts about the world, it is not a vice but a survival skill. Indeed, one of the adaptive advantages of consciousness itself may lie precisely in the fact that it facilitates the reciprocal perception of other minds--not just in our own species, but in others as well--by analogy with our own. If this perception is adaptive, as I believe that it is, then perhaps we should stop resisting its incorporation into the world view and vocabulary of science.
Matt Cartmill, PhD (Chicago) is Professor of Biological Anthropology and Anatomy at Duke University's School of Medicine. He is the author of more than a hundred scholarly and popular works on anatomy, evolution, and the philosophy of science. He is a Fellow of the American Association for the Advancement of Science and has been a Guggenheim Fellow, President of the American Association of Physical Anthropologists, and a Phi Beta Kappa Visiting Scholar. This article is reprinted from the Key Reporter 66.1 (Autumn 2000): 6-9, by permission of its editor, Priscilla S. Taylor, and the author. Copyright © by the Phi Beta Kappa Society. All rights are reserved.