At the Information Desk
https://www.nybooks.com/articles/2014/12/18/information-desk/
November 26, 2014

To the Editors:

John Searle’s review of my book The 4th Revolution: How the Infosphere Is Reshaping Human Reality is astonishingly shallow and misguided [NYR, October 9]. The silver lining is that, if its factual errors and conceptual confusions are removed, the opportunity for an informed and insightful reading can still be enjoyed.

The review erroneously ascribes to me a fourth revolution in our self-understanding, which I explicitly attribute to Alan Turing. We are not at the center of the universe (Copernicus), of the biological kingdom (Darwin), or of the realm of rationality (Freud). After Turing, we are no longer at the center of the world of information either. We share the infosphere with smart technologies. These are not some unrealistic artificial intelligence, as the review would have me suggest, but ordinary artifacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate our unique intelligence. Their successes largely depend on the fact that the world has become an IT-friendly environment, where technologies can replace us without having any understanding or semantic skills. We increasingly live online (think of apps tracking your location).

The pressing problem is not whether our digital systems can think or know, for they cannot, but what our environments are gradually enabling them to achieve. Like Kant, I do not know whether the world in itself is informational, a view that the review erroneously claims I support. What I do know is that our conceptualization of the world is. The distinction is trivial and yet crucial: from DNA as code to force fields as the foundation of matter, from the mind–brain dualism as a software–hardware distinction to computational neuroscience, from network-based societies to digital economies and cyber conflicts, today we understand and deal with the world informationally. To be is to be interactable: this is our new “ontology.”

The review denounces dualisms yet uncritically endorses a dichotomy between relative (or subjective) vs. absolute (or objective) phenomena. This is no longer adequate because today we know that many phenomena are relational. For example, whether some stuff qualifies as food depends on the nature both of the substance and of the organism that is going to absorb it. Yet relativism is mistaken, because not any stuff can count as food: sand never does. Likewise, semantic information (e.g., a train timetable) is a relational phenomenon: it depends on the right kind of message and receiver. Insisting on mapping information as either relative or absolute is as naive as pretending that a border between two nations must be located in one of them.

The world is getting more complex. We have never been so much in need of good philosophy to understand it and take care of it. But we need to upgrade philosophy into a philosophy of information of our age for our age if we wish it to be relevant. This is what the book is really about.

Luciano Floridi
Professor of Philosophy and Ethics of Information
University of Oxford
Oxford, United Kingdom

John R. Searle replies:

In The 4th Revolution, Floridi makes a number of strong claims about information. He says that we are essentially informational entities (inforgs) and that we live in an environment that is essentially informational (the infosphere). He summarizes his view by saying, “Maximally, infosphere is a concept that can also be used as synonymous with reality, once we interpret the latter informationally. In this case, the suggestion is that what is real is informational and what is informational is real.” He even cites Hegel’s claim, “what is rational is real and what is real is rational,” as a predecessor to his form of metaphysics. I made a number of criticisms of this, two of which are worth repeating.

First, the notion of information is systematically ambiguous. There is the observer-independent sense of information that I have in my conscious mind-brain and the observer-relative derivative information that exists in books, computers, temperature gauges, etc. In either form, all information is dependent on conscious minds. It either exists in the form of conscious thought processes, or it exists in derivative forms of books, computers, etc. Information is not primary in the structure of reality; rather it is dependent on consciousness, just as consciousness itself is a biological phenomenon dependent on brain processes that are themselves dependent on more basic features of physics and chemistry.

If my views are correct, they are devastating to his general thesis. What is his response? He says, “Like Kant, I do not know whether the world in itself is informational, a view that the review erroneously claims I support. What I do know is that our conceptualization of the world is.” This is a very strange response. In my review, I made no reference to Kant’s notion of the in itself, I simply quoted Floridi’s passage, “what is real is informational and what is informational is real.” Does he now wish to deny this? It is no help to be told “our conceptualization of the world is [informational],” because, by definition, all conceptualizations of the world are informational. That is what a conceptualization of the world does: where accurate, it puts into concepts some facts about the world and thus gives us information. So his current claim is either false or trivial. It is false that the world is informational. It is trivial that conceptualizations of the world are informational.

A second major point concerns the Fourth Revolution. He treats the information revolution as the fourth in a sequence of revolutions that include the Copernican, the Darwinian, and the Freudian. But all three of these are about observer-independent phenomena—the solar system, natural selection, and the unconscious—whereas the information he is describing is all dependent on consciousness for its very existence as information. Floridi claims that he finds his conception of information in the work of Alan Turing and criticizes me for not recognizing Turing as the author of the Fourth Revolution. But the reason I attributed Floridi’s views to Floridi and not Turing is that I find nothing in Turing’s work that is identical with, or even remotely like, Floridi’s views. Does he really have quotations from Turing where Turing says that we are all “inforgs” inhabiting the “infosphere”? And that “what is real is informational and what is informational is real”? Floridi gives no quotations from Turing to show that Turing’s views are similar to his own.

Among other achievements, Turing made valuable contributions to the theory of computation. We are all in his debt. It is a discredit to his memory to attribute to him the exaggerated and implausible views advanced by Floridi.

What Your Computer Can’t Know
https://www.nybooks.com/articles/2014/10/09/what-your-computer-cant-know/
September 18, 2014

Private Collection/Art Resource © 2014 C. Herscovici/Artists Rights Society (ARS), New York

René Magritte: Birth of the Idol, 1926

We are all beneficiaries of the revolution in computation and information technology—for example, I write this review using devices unimaginable when I was an undergraduate—but there remain enormous philosophical confusions about the correct interpretation of the technology. For example, one routinely reads that in exactly the same sense in which Garry Kasparov played and beat Anatoly Karpov in chess, the computer called Deep Blue played and beat Kasparov.

It should be obvious that this claim is suspect. In order for Kasparov to play and win, he has to be conscious that he is playing chess, and conscious of a thousand other things such as that he opened with pawn to K4 and that his queen is threatened by the knight. Deep Blue is conscious of none of these things because it is not conscious of anything at all. Why is consciousness so important? You cannot literally play chess or do much of anything else cognitive if you are totally disassociated from consciousness.

I am going to argue that both of the books under review are mistaken about the relations between consciousness, computation, information, cognition, and lots of other phenomena. So at the beginning, let me state their theses as strongly as I can. Luciano Floridi’s book, The 4th Revolution, is essentially a work of metaphysics: he claims that in its ultimate nature, reality consists of information. We all live in the “infosphere,” and we are all “inforgs” (information organisms). He summarizes his view as follows:

Minimally, infosphere denotes the whole informational environment constituted by all informational entities, their properties, interactions, processes, and mutual relations…. Maximally, infosphere is a concept that can also be used as synonymous with reality, once we interpret the latter informationally. In this case, the suggestion is that what is real is informational and what is informational is real.

Nick Bostrom’s book, Superintelligence, warns of the impending apocalypse. We will soon have intelligent computers, computers as intelligent as we are, and they will be followed by superintelligent computers vastly more intelligent than we are, machines quite likely to rise up and destroy us all. “This,” he tells us, “is quite possibly the most important and most daunting challenge humanity has ever faced.”

Floridi is announcing a completely new era. He sees himself as the successor to Copernicus, Darwin, and Freud, each of whom announced a revolution that transformed our self-conception into something more modest. Copernicus taught that we are not the center of the universe, Darwin that we are not God’s special creation, Freud that we are not even masters of our own minds, and Floridi that we are not the champions of information. He claims that the revolution in ICTs (information and communication technologies) shows that everything is information and that computers are much better at it.

While Floridi celebrates the revolution, Bostrom is apocalyptic about the future. His subtitle, Paths, Dangers, Strategies, tells the story. There are “paths” to a superintelligent computer, the “danger” is the end of everything, and there are some “strategies,” not very promising, for trying to avoid the apocalypse. Each book thus exemplifies a familiar genre: the celebration of recent progress (Floridi) and the warning of the coming disaster together with plans for avoiding it (Bostrom). Neither book is modest.

1.

The Objective-Subjective Distinction and Observer Relativity

The distinction between objectivity and subjectivity looms very large in our intellectual culture but there is a systematic ambiguity in these notions that has existed for centuries and has done enormous harm. There is an ambiguous distinction between an epistemic sense (“epistemic” means having to do with knowledge) and an ontological sense (“ontological” means having to do with existence). In the epistemic sense, the distinction is between types of claims (beliefs, assertions, assumptions, etc.). If I say that Rembrandt lived in Amsterdam, that statement is epistemically objective. You can ascertain its truth as a matter of objective fact. If I say that Rembrandt was the greatest Dutch painter that ever lived, that is evidently a matter of subjective opinion: it is epistemically subjective.

Underlying this epistemological distinction between types of claims is an ontological distinction between modes of existence. Some entities have an existence that does not depend on being experienced (mountains, molecules, and tectonic plates are good examples). Some entities exist only insofar as they are experienced (pains, tickles, and itches are examples). This distinction is between the ontologically objective and the ontologically subjective. No matter how many machines may register an itch, it is not really an itch until somebody consciously feels it: it is ontologically subjective.

A related distinction is between those features of reality that exist regardless of what we think and those whose very existence depends on our attitudes. The first class I call observer independent (or original, intrinsic, or absolute). This class includes mountains, molecules, and tectonic plates. They have an existence that is wholly independent of anybody’s attitudes, whereas money, property, government, and marriage exist only insofar as people have certain attitudes toward them. Their existence I call observer dependent or observer relative.

These distinctions are important for several reasons. Most elements of human civilization—money, property, government, universities, and The New York Review, to name a few examples—are observer relative in their ontology because they are created by consciousness. But the consciousness that creates them is not observer relative. It is intrinsic, and many statements about these elements of civilization can be epistemically objective. For example, it is an objective fact that the NYR exists.

In this discussion, these distinctions are crucial because just about all of the central notions—computation, information, cognition, thinking, memory, rationality, learning, intelligence, decision-making, motivation, etc.—have two different senses. They have a sense in which they refer to actual, psychologically real, observer-independent phenomena, such as, for example, my conscious thought that the congressional elections are a few weeks away. But they also have a sense in which they refer to observer-relative phenomena, phenomena that only exist relative to certain attitudes, such as, for example, a sentence in the newspaper that says the elections are a few weeks away.

Bostrom’s book is largely about computation and Floridi’s book is about information. Both notions need clarification.

2.

Computation

In 1950, Alan Turing published an article in which he set out the Turing Test.1 The purpose of the test was to establish whether a computer had genuine intelligence: if an expert cannot distinguish between human intelligent performance and computer performance, then the computer has genuine human intelligence. It is important to note that Turing called his article “Computing Machinery and Intelligence.” In those days “computer” meant a person who computes. A computer was like a runner or a singer, someone who does the activity in question. The machines were not called “computers” but “computing machinery.”

The invention of machines that can do what human computers did has led to a change in the vocabulary. Most of us now think of “computer” as naming a type of machinery and not as a type of person. But it is important to see that in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally. The transitions in those electronic states are absolute or observer independent, but the computation is observer relative. The transitions in physical states are just electrical sequences unless some conscious agent can give them a computational interpretation.

This is an important point for understanding the significance of the computer revolution. When I, a human computer, add 2 + 2 to get 4, that computation is observer independent, intrinsic, original, and real. When my pocket calculator, a mechanical computer, does the same computation, the computation is observer relative, derivative, and dependent on human interpretation. There is no psychological reality at all to what is happening in the pocket calculator.

What then is computation? In its original, observer-independent meaning, when someone computed something, he or she figured out an answer to a question, typically a question in arithmetic and mathematics, but not necessarily. So, for example, by means of computation, we figure out the distance from the earth to the moon. When the computation can be performed in a way that guarantees the right answer in a finite number of steps, that method is called an “algorithm.” But a revolutionary change took place when Alan Turing invented the idea of a Turing machine. Turing machines perform computations by manipulating just two types of symbols, usually thought of as zeroes and ones but any symbols will do. Anything other computers can do, you can do with a Turing machine.

It is important to say immediately that a Turing machine is not an actual type of machine you can buy in a store; it is a purely abstract theoretical notion. All the same, for practical purposes, the computer you buy in a store is a Turing machine. It manipulates symbols according to computational rules and thus implements algorithms.
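
The rule-driven symbol manipulation Turing defined can be sketched in a few lines of Python. The machine below is an illustrative toy, not drawn from either book under review: its transition table flips every bit on its tape and halts, and nothing in the process is anything more than symbol shuffling.

```python
# A minimal Turing machine sketch: two tape symbols ("0", "1"), a blank
# ("_"), and a transition table. This toy machine inverts the bits on
# its tape and then halts.
def run_turing_machine(tape, transitions, state="start", halt="halt"):
    tape = list(tape)
    head = 0
    while state != halt:
        symbol = tape[head] if head < len(tape) else "_"  # off-tape = blank
        state, write, move = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Transition table: (state, read symbol) -> (next state, write symbol, move)
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("0110", invert))  # -> 1001
```

Note that nothing in the table gives the zeroes and ones any meaning; whether the tape encodes arithmetic, chess positions, or nothing at all is entirely a matter of outside interpretation.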

There are two important consequences of this brief discussion, and much bad philosophy, not to mention psychology and cognitive science, has been based on a failure to appreciate these consequences.

First, a digital computer is a syntactical machine. It manipulates symbols and does nothing else. For this reason, the project of creating human intelligence by designing a computer program that will pass the Turing Test, the project I baptized years ago as Strong Artificial Intelligence (Strong AI), is doomed from the start. The appropriately programmed computer has a syntax but no semantics.

Minds, on the other hand, have mental or semantic content. I illustrated that in these pages with what came to be known as the Chinese Room Argument.2 Imagine someone who doesn’t know Chinese—me, for example—following a computer program for answering questions in Chinese. We can suppose that I pass the Turing Test because, following the program, I give the correct answers to the questions in Chinese, but all the same, I do not understand a word of Chinese. And if I do not understand Chinese on the basis of implementing the computer program, neither does any other digital computer solely on that basis.
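
The structure of the argument can be caricatured in code. The sketch below is an invented illustration (the rule book and strings are made up for the example): a program that returns the paired answer for each question by bare string matching, with no understanding anywhere in the process.

```python
# A toy "Chinese Room": the program matches the input string against a
# rule book and emits the paired output string. It is pure syntactic
# lookup; no semantic understanding is involved anywhere.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期二。",  # "What day is it?" -> "It's Tuesday."
}

def room(question):
    # To the program, the bytes could be Chinese, chess moves, or noise.
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "Please repeat."

print(room("你好吗？"))
```

On the inputs it covers, the program's behavior is indistinguishable from a correct answerer's; that is precisely why behavioral success alone proves nothing about understanding.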

This result is well known, but a second result that is just as important is made explicit in this article. Except for the cases of computations carried out by conscious human beings, computation, as defined by Alan Turing and as implemented in actual pieces of machinery, is observer relative. The brute physical state transitions in a piece of electronic machinery are only computations relative to some actual or possible consciousness that can interpret the processes computationally. It is an epistemically objective fact that I am writing this in a Word program, but a Word program, though implemented electronically, is not an electrical phenomenon; it exists only relative to an observer. Both of the books under review neglect these points.

3.

Superintelligent Computers

The picture that Bostrom has is this: we are now getting very close to the period when we will have “intelligent computers” that are as intelligent as human beings. But very soon we are almost certain to have “superintelligent computers” that are vastly more intelligent than human beings. When that happens, we are in a very serious, indeed apocalyptic, danger. The superintelligent computers might decide, on the basis of their arbitrarily formed motivations, to destroy us all—and might destroy not just their creators but all life on earth. This is for Bostrom a real threat, and he is anxious that we should face it squarely and take possible steps to prevent the worst-case scenario.

What should we say about this conception? If my account so far has been at all accurate, the conception is incoherent. If we ask, “How much real, observer-independent intelligence do computers have, whether ‘intelligent’ or ‘superintelligent’?” the answer is zero, absolutely nothing. The intelligence is entirely observer relative. And what goes for intelligence goes for thinking, remembering, deciding, desiring, reasoning, motivation, learning, and information processing, not to mention playing chess and answering the factual questions posed on Jeopardy! In the observer-independent sense, the amount that the computer possesses of each of these is zero. Commercial computers are complicated electronic circuits that we have designed for certain jobs. And while some of them do their jobs superbly, do not for a moment think that there is any psychological reality to them.

Why is it so important that the system be capable of consciousness? Why isn’t appropriate behavior enough? Of course for many purposes it is enough. If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility of its being real.

What is the argument that without consciousness there is no psychological reality to the facts attributed to the computer by the observer-relative sense of the psychological words? After all, most of our mental states are unconscious most of the time, and why should it be any different in the computer? For example, I believe that Washington was the first president even when I am sound asleep and not thinking about it. We have to distinguish between the unconscious and the nonconscious. There are all sorts of neuron firings going on in my brain that are not unconscious; they are nonconscious. For example, whenever I see anything there are neuronal feedbacks between V1 (Visual Area 1) and the LGN (lateral geniculate nucleus). But the transactions between V1 and the LGN are not unconscious mental phenomena; they are nonconscious neurobiological phenomena.

The problem with the commercial computer is that it is totally nonconscious. In earlier writings,3 I have developed an argument to show that we understand mental predicates—i.e., what is affirmed or denied about the subject of a proposition—conscious or unconscious, only so far as they are accessible to consciousness. But for present purposes, there is a simpler way to see the point. Ask yourself what fact corresponds to the claims about the psychology in both the computer and the conscious agent. Contrast my conscious thought processes in, for example, correcting my spelling with the computer’s spell-check. I have a “desire” to spell correctly, and I “believe” I can find the correct spelling of a word by looking it up in a dictionary, and so I do “look up” the correct spelling. That describes the psychological reality of practical reasoning. There are three levels of description in my rational behavior: a neurobiological level, a mental or conscious level that is caused by and realized in the neurobiological level, and a level of intentional behavior caused by the psychological level.

Now consider the computer. If I misspell a word, the computer will highlight it in red and even propose alternative spellings. What psychological facts correspond to these claims? Does the computer “desire” to produce accurate spelling? And does it “believe” that I have misspelled? There are no such psychological facts at all. The computer has a list of words, and if what I type is not on the list, it highlights it in red. In the case of the computer, there are only two levels: there is the level of the hardware and the level of the behavior, but no intermediate level that is psychologically real.
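
The two levels just described, hardware and behavior with nothing psychological in between, are visible in a minimal sketch of how such a checker might work (a simplified illustration, not any real product's code): flagging a word is bare set membership.

```python
# A minimal spell-checker sketch: a word is "misspelled" exactly when it
# is absent from a stored word list. Nothing here desires or believes
# anything; the flagging is bare set membership.
WORD_LIST = {"the", "cat", "sat", "on", "mat"}

def flag_misspellings(text):
    return [w for w in text.lower().split() if w not in WORD_LIST]

print(flag_misspellings("The cat szat on the mat"))  # -> ['szat']
```

There is the electronic hardware executing the lookup and the resulting red underline; no intermediate level of belief or desire figures in the explanation at any point.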

Bostrom tells us that AI motivation need not be like human motivation. But all the same, there has to be some motivation if we are to think of it as engaging in motivated behavior. And so far, no sense has been given to attributing any observer-independent motivation at all to the computer.

This is why the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger. Such entities have, literally speaking, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior.

It is easy to imagine robots being programmed by a conscious mind to kill every recognizable human in sight. But the idea of superintelligent computers intentionally setting out on their own to destroy us, based on their own beliefs and desires and other motivations, is unrealistic because the machinery has no beliefs, desires, and motivations.

One of the strangest chapters in Bostrom’s book is the one on how we might produce intelligent computers by “emulating” the brain on a computer. The idea is that we would emulate each neuron as a computational device. But the computational emulation of the brain is like a computational emulation of the stomach: we could do a perfect emulation of the stomach cell by cell, but such emulations produce models or pictures, not the real thing. Scientists have made artificial hearts that work, but they do not produce them by computer simulation; they may one day produce an artificial stomach, but this too would not be such an emulation.

Even with a perfect computer emulation of the stomach, you cannot then stuff a pizza into the computer and expect the computer to digest it. Cell-by-cell computer emulation of the stomach is to real digestive processes as cell-by-cell emulation of the brain is to real cognitive processes. But do not mistake the simulation (or emulation) for the real thing. It would be helpful to those trying to construct the real thing but far from an actual stomach. There is nothing in Bostrom’s book to suggest he recognizes that the brain is an organ like any other, and that cells in the brain function like cells in the rest of the body on causal biological principles.

4.

Information

Floridi does not offer a definition of information, but there is now so much literature on the subject, including some written by him, that we can give a reasonably accurate characterization. There are two senses of “information” that have emerged fairly clearly. First, there is the commonsense notion of information in which it always involves some semantic representation. So, for example, I know the way to San Jose, and that implies that I have information about how to get to San Jose. If we contrast real information with misinformation, then information, so defined, always implies truth. There is another sense of “information” that has grown up in mathematical information theory that is entirely concerned with bits of data that do not have any semantic content. But for purposes of discussing Floridi, we can concentrate on the commonsense notion because when he says that “what is real is informational and what is informational is real,” he is not relying on the technical mathematical notion.
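
The mathematical notion can be made concrete with a small sketch of Shannon's entropy measure (an illustration of the standard formula, not anything from Floridi's book): it assigns exactly the same number of bits per symbol to a sentence and to a scrambled string of the same characters, because it tracks symbol frequencies, never meaning.

```python
import math
from collections import Counter

# Shannon entropy in bits per symbol: it depends only on character
# frequencies, never on what (if anything) the string means.
def entropy(text):
    counts = Counter(text)
    n = len(text)
    # sorted() fixes the summation order, so equal frequency multisets
    # give bit-for-bit identical floating-point results
    return -sum((c / n) * math.log2(c / n) for c in sorted(counts.values()))

meaningful = "the way to san jose"
scrambled = "".join(sorted(meaningful))  # same characters, meaning destroyed
print(entropy(meaningful) == entropy(scrambled))  # -> True
```

In this technical sense, "information" is a statistical property of symbol strings; the semantic sense at issue in Floridi's metaphysics is untouched by it.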

There is an immediate problem, or rather set of problems, with the idea that everything in the universe is information. First, we need to distinguish observer-independent information from observer-relative information. I really do have in my brain information about how to get to San Jose, and that information is totally observer independent. I have that regardless of what anybody thinks. The map in my car and the GPS on the dashboard also contain information about the way to San Jose, but the information there is, as the reader will recognize by now, totally observer relative.

There is nothing intrinsic to the physics that contains information. The distinction between the observer-independent sense of information, in which it is psychologically real, and the observer-relative sense, in which it has no psychological reality at all, effectively undermines Floridi’s concept that we are all living in the infosphere. Almost all of the information in the infosphere is observer relative. Conscious humans and animals have intrinsic information, but there is no intrinsic information in maps, computers, books, or DNA, not to mention mountains, molecules, and tree stumps. The sense in which they contain information is all relative to our conscious minds. A conscious mind surveying these objects can get the information, for example, that hydrogen atoms have one electron and that the tree is eighty-seven years old. But the atoms and the tree know nothing of this; they have no information at all.

When Floridi tells us that there is now a fourth revolution—an information revolution so that we all now live in the infosphere (like the biosphere), in a sea of information—the claim contains a confusion. The other three revolutions all identify features that are observer independent. Copernicus, Darwin, and Freud all proposed theories purporting to identify actual, observer-independent facts in the world: facts about the solar system, facts about human evolution, and facts about human unconsciousness. Even for Freud, though the unconscious requires interpretation for us to understand it, he continuously supposed that it has an existence entirely independent of our interpretations.

But when we come to the information revolution, the information in question is almost entirely in our attitudes; it is observer relative. Floridi tells us that “reality” suitably interpreted consists entirely of information. But the problem with that claim is that information only exists relative to consciousness. It is either intrinsic, observer-independent information or information in a system treated by consciousness as having information. When anybody mentions information, you ought to insist on knowing the content of the information. What is the information? What is the information about? And in what does the information consist? I do not think he offers a precise and specific answer to these questions.

Floridi’s book is essentially an essay in metaphysics—metaphysics that I find profoundly mistaken. According to the metaphysics that his view is opposed to, the universe consists entirely in entities we find it convenient, if not entirely accurate, to call “particles.” Maybe better terms would be “points of mass energy” or “strings,” but in any case we leave it to the physicists to ascertain the basic structure of the universe. Some of these particles are organized into systems, where the boundaries of the system are set by causal relations. Examples of systems are water molecules, babies, and galaxies.

On our little Earth, some of these systems made of big carbon-based molecules with lots of hydrogen, nitrogen, and oxygen have evolved into life. And some of these life forms have evolved into animals with nervous systems. And some of these animals with nervous systems have evolved consciousness and, with consciousness, the capacity to think and express thought. Once you have consciousness and thought, you have the possibility of recognizing, creating, and sustaining information. Information is entirely a derivative higher-order phenomenon, and to put it quite bluntly, only a conscious agent can have or create information.

Here is Floridi’s rival picture: information is the basic structure of the universe. All the elements of the universe, including us, are information. What we think of as matter is patterns of information. We, as humans, are just more information. What is wrong with this picture? I do not believe it can be made consistent with what we know about the universe from atomic physics and evolutionary biology. All the literal information in the universe is either intrinsic or observer relative, and both are dependent on human or animal consciousness. Consciousness is the basis of information; information is not the basis of consciousness.

For Floridi, the model of information and information processing is the computer. How does he cope with the fact that the computer is a syntactic engine? He admits it. In a strange chapter, he says that he explicitly endorses the distinction between syntax and semantics that I have made, and he points out that the computer is a syntactic engine. But if so, how is there to be any intrinsic information in the computer? Such information exists only relative to our interpretation. To put this as bluntly as I can, I think there is a huge inconsistency between this chapter, where he grants the nonintrinsic character of the semantic information in the computer and concedes that the computer is only a syntactic engine, and earlier chapters, where he insists that the computer is a paradigm of actual, real information in the world. He gets the point that the syntax of the program is never sufficient for semantic information, but he does not get the point I make in this article that even the syntax is observer relative.

I agree with Floridi’s account that there is a lot more information readily available in the present era than was the case previously. If you want to know the number of troops killed at Gettysburg or the number of carbon rings in serotonin, you can find the answers to these questions more or less instantly on the Internet. The idea, however, that this has produced a revolution in ontology so that we live in a universe consisting of information seems to me not a well-defined thesis because most of the information in question is observer relative.

5.

Both of these books are rich in facts and ideas. Floridi has a good chapter on privacy in the information age, and Bostrom has extensive discussions of technological issues, but I am concentrating on the central claim of each author.

I believe that neither book gives a remotely realistic appraisal of the situation we are in with computation and information. And the reason, to put it in its simplest form, is that they fail to distinguish between the real, intrinsic observer-independent phenomena corresponding to these words and the observer-relative phenomena that also correspond to these words but are created by human consciousness.

Suppose we took seriously the project of creating an artificial brain that does what real human brains do. As far as I know, neither author, nor for that matter anyone in Artificial Intelligence, has ever taken this project seriously. How should we go about it? The very first step is to get clear about the distinction between a simulation or model, on the one hand, and a duplication of the causal mechanism, on the other. Consider an artificial heart as an example. Computer models were useful in constructing artificial hearts, but such a model is not an actual functioning causal mechanism. The actual artificial heart has to duplicate the causal powers of real hearts to pump blood. Both the real and artificial hearts are physical pumps, unlike the computer model or simulation.

Now exactly the same distinctions apply to the brain. An artificial brain has to literally create consciousness, unlike the computer model of the brain, which only creates a simulation. So an actual artificial brain, like the artificial heart, would have to duplicate and not just simulate the real causal powers of the original. In the case of the heart, we found that you do not need muscle tissue to duplicate the causal powers. We do not now know enough about the operation of the brain to know how much of the specific biochemistry is essential for duplicating the causal powers of the original. Perhaps we can make artificial brains using completely different physical substances as we did with the heart. The point, however, is that whatever the substance is, it has to duplicate and not just simulate, emulate, or model the real causal powers of the original organ. The organ, remember, is a biological mechanism like any other, and it functions on specific causal principles.

The difficulty with carrying out the project is that we do not know how human brains create consciousness and human cognitive processes. (Nor do we know the long-term effects that electronic communication may have on the consciousness created in brains.) Until we do know such facts, we are unlikely to be able to build an artificial brain. To carry out such a project it is essential to remember that what matters are the inner mental processes, not the external behavior. If you get the processes right, the behavior will be an expression of those processes, and if you don’t get the processes right, the behavior that results is irrelevant.

That is the situation we are currently in with Artificial Intelligence. Computer engineering is useful for flying airplanes, diagnosing diseases, and writing articles like this one. But the results are for the most part irrelevant to understanding human thinking, reasoning, processing information, deciding, perceiving, etc., because the results are all observer relative and not the real thing.

The points I am making should be fairly obvious. Why, then, are these mistakes so persistent? There are, I believe, two basic reasons. First, there is a residual behaviorism in the cognitive disciplines. Its practitioners tend to think that if you can build a machine that behaves intelligently, then it really is intelligent. The Turing Test is an explicit statement of this mistake.

Second, there is a residual dualism. Many investigators are reluctant to treat consciousness, thinking, and psychologically real information processing as ordinary biological phenomena like photosynthesis or digestion. The weird marriage of behaviorism (any system that behaves as if it had a mind really does have a mind) and dualism (the mind is not an ordinary part of the physical, biological world, like digestion) has led to the confusions that badly need to be exposed.

The post What Your Computer Can’t Know appeared first on The New York Review of Books.

Can a Photodiode Be Conscious? https://www.nybooks.com/articles/2013/03/07/can-photodiode-be-conscious/ Thu, 14 Feb 2013 16:30:00 +0000

To the Editors:

The heart of John Searle’s criticism in his review of Consciousness: Confessions of a Romantic Reductionist [NYR, January 10] is that while information depends on an external observer, consciousness is ontologically subjective and observer-independent. That is to say, experience exists as an absolute fact, not relative to an observer: as Descartes recognized, je pense, donc je suis (“I think, therefore I am”) is an undeniable certainty. By contrast, the information of Claude Shannon’s theory of communication is always observer-relative: signals are communicated over a channel more or less efficiently, but their meaning is in the eye of the beholder, not in the signals themselves. So, thinks Searle, a theory with the word “information” in it, like the integrated information theory (IIT) discussed in Confessions, cannot possibly begin to explain consciousness.

Except for the minute detail that the starting point of IIT is exactly the same as Searle’s! Consciousness exists and is observer-independent, says IIT, and it is both integrated (each experience is unified) and informative (each experience is what it is by differing, in its particular way, from trillions of other experiences). IIT introduces a novel, non-Shannonian notion of information—integrated information—which can be measured as “differences that make a difference” to a system from its intrinsic perspective, not relative to an observer. Such a novel notion of information is necessary for quantifying and characterizing consciousness as it is generated by brains and perhaps, one day, by machines.
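
Tononi’s slogan “differences that make a difference” has, in simplified form, a familiar information-theoretic analogue: information the whole system carries over and above its parts taken separately. The sketch below is only a toy illustration of that idea; it uses plain mutual information between two binary units, which is emphatically not IIT’s official Φ calculus (that is computed over cause–effect repertoires and system partitions), and the function and variable names are my own.

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy, in bits, of a list of observed states."""
    n = len(samples)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(samples).values())

# Two coupled binary units: B always copies A, so the joint system
# only ever visits the states (0, 0) and (1, 1).
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
a = [s[0] for s in coupled]
b = [s[1] for s in coupled]

# Mutual information I(A;B) = H(A) + H(B) - H(A,B): the information the
# whole carries beyond its parts considered separately. Here it is 1 bit.
print(entropy(a) + entropy(b) - entropy(coupled))  # 1.0

# Two independent units: the parts exhaust the whole, so the surplus is 0.
indep = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(entropy([s[0] for s in indep]) + entropy([s[1] for s in indep]) - entropy(indep))  # 0.0
```

On this toy measure the coupled pair is “integrated” (1 bit) and the independent pair is not (0 bits); the dispute between Searle and the authors is over whether any such quantity, however refined, is observer-independent.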

Another of Searle’s criticisms has to do with panpsychism. If IIT accepts that even some simple mechanisms can have a bit of consciousness, then isn’t the entire universe suffused with soul? Searle justly states: “Consciousness cannot spread over the universe like a thin veneer of jam; there has to be a point where my consciousness ends and yours begins.” Indeed, if consciousness is everywhere, why should it not animate the iPhone, the Internet, or the United States of America?

Except that, once again, one of the central notions of IIT is exactly this: that only “local maxima” of integrated information exist (over elements, spatial and temporal scales): my consciousness, your consciousness, but nothing in between; each individual consciousness in the US, but no superordinate US consciousness. Like Searle, we object to certain kinds of panpsychism, with the difference that IIT offers a constructive, predictive, and mathematically precise alternative.

Finally, we agree with Searle that one looks in vain for mouthwatering admissions of guilt in Confessions. That is true from the point of view of a priest eager for sins to be revealed. Yet among scientists, there exists a powerful edict against bringing subjective, idiosyncratic memories, beliefs, and desires into professional accounts of one’s research. Confessions breaks with this taboo by mixing the impersonal and objective with the intensely personal and subjective. To a scientist, this is almost a sin. But philosophers too can get close to sin, in this case a sin of omission: not to ponder enough, before judgment is passed, what the book and ideas one reviews are actually saying.

Christof Koch
Chief Scientific Officer
Allen Institute for Brain Science
Seattle, Washington

Giulio Tononi
Professor of Psychiatry
University of Wisconsin
Madison, Wisconsin

John R. Searle replies:

One of my criticisms of Koch’s book Consciousness is that we cannot use information theory to explain consciousness because the information in question is only information relative to a consciousness. Either the information is carried by a conscious experience of some agent (my thought that Obama is president, for example) or in a nonconscious system the information is observer-relative—a conscious agent attributes information to some nonconscious system (as I attribute information to my computer, for example).

Koch and Tononi, in their reply, claim that they have agreed with this all along, indeed it is their “starting point,” and that I have misrepresented their theory. I do not think I have and will now quote passages that substantiate my criticisms. (In this reply I will assume they are in complete agreement with each other.)

1. The Conscious Photodiode. They say explicitly that the photodiode is conscious. The crucial sentence is this:

Strictly speaking, then, the IIT [Integrated Information Theory] implies that even a binary photodiode is not completely unconscious, but rather enjoys exactly 1 bit of consciousness. Moreover, the photodiode’s consciousness has a certain quality to it….*

This is a stunning claim: there is something that it consciously feels like to be a photodiode! On the face of it, it looks like a reductio ad absurdum of any theory that implies it. Why is the photodiode conscious? It is conscious because it contains information. But here comes my objection, which they claim to accept: the information in the photodiode is only relative to a conscious observer who knows what it does. The photodiode by itself knows nothing. If the “starting point” of their theory is a distinction between absolute and observer-relative information, then photodiodes are on the observer-relative side and so are not conscious.
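
The arithmetic behind “exactly 1 bit,” at least, is uncontroversial: a device with two equiprobable states has a Shannon entropy of exactly one bit. What is in dispute is Searle’s point that this bit exists only for an observer who treats the diode’s states as informative. A minimal sketch (ordinary Shannon entropy, not IIT’s Φ; the function name is mine):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2 p) of a distribution, in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# A binary photodiode: light / no light, assumed equiprobable.
print(entropy_bits([0.5, 0.5]))  # 1.0

# A diode stuck in one state discriminates nothing: 0 bits.
print(entropy_bits([1.0]))  # 0.0
```

The number measures how many distinctions the device’s states can mark for someone counting them; nothing in the calculation mentions experience.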

2. The Observer Relativity of Integrated Information. They think they get out of the observer relativity of information by considering only integrated information, and integrated information, they think, is somehow absolute information and not just relative to a consciousness. But the same problem that arose for the photodiode arises for their examples of integrated information. Koch gives several: personal computers, embedded processors, and smart phones are three. Here is an extreme claim by him:

Even simple matter has a modicum of Φ [integrated information]. Protons and neutrons consist of a triad of quarks that are never observed in isolation. They constitute an infinitesimal integrated system. (p. 132)

So on their view every proton and neutron is conscious. But the integrated information in all of these is just as observer-relative as was the information in the photodiode. There is no intrinsic absolute information in protons and neutrons, nor in my personal computer, nor in my smart phone. The information is all in the eye of the beholder.

3. Panpsychism. They claim not to be endorsing any version of panpsychism. But Koch is explicit in his endorsement and I will quote the passage over again:

By postulating that consciousness is a fundamental feature of the universe, rather than emerging out of simpler elements, integrated information theory is an elaborate version of panpsychism. (p. 132, italics in the original)

And he goes on:

The entire cosmos is suffused with sentience. We are surrounded and immersed in consciousness; it is in the air we breathe, the soil we tread on, the bacteria that colonize our intestines, and the brain that enables us to think. (p. 132)

Any system at all that has both differentiated and integrated states of information is claimed to be conscious (Koch, p. 131). But my objections remain unanswered. Except for systems that are already conscious, the information in both simple systems like the photodiode and integrated systems like the smart phone is observer-relative. And the theory has a version of panpsychism as a consequence.

But the deepest objection is that the theory is unmotivated. Suppose they could give a definition of integrated and differentiated information that was not observer-relative, that would enable us to tell, from the brute physics of a system, whether it had such information and what information exactly it had. Why should such systems thereby have qualitative, unified subjectivity? In addition to bearing information as so defined, why should there be something it feels like to be a photodiode, a photon, a neutron, a smart phone, embedded processor, personal computer, “the air we breathe, the soil we tread on,” or any of their other wonderful examples? As it stands the theory does not seem to be a serious scientific proposal.

Can Information Theory Explain Consciousness? https://www.nybooks.com/articles/2013/01/10/can-information-theory-explain-consciousness/ Thu, 20 Dec 2012 17:00:00 +0000
1.

The problem of consciousness remains with us. What exactly is it and why is it still with us? The single most important question is: How exactly do neurobiological processes in the brain cause human and animal consciousness? Related problems are: How exactly is consciousness realized in the brain? That is, where is it and how does it exist in the brain? Also, how does it function causally in our behavior?

[Illustration: Tristan Tzara: Self-Portrait, 1928. Collection of Arturo Schwarz, Milan/Scala/Art Resource]

To answer these questions we have to ask: What is it? Without attempting an elaborate definition, we can say the central feature of consciousness is that for any conscious state there is something that it feels like to be in that state, some qualitative character to the state. For example, the qualitative character of drinking beer is different from that of listening to music or thinking about your income tax. This qualitative character is subjective in that it only exists as experienced by a human or animal subject. It has a subjective or first-person existence (or “ontology”), unlike mountains, molecules, and tectonic plates that have an objective or third-person existence. Furthermore, qualitative subjectivity always comes to us as part of a unified conscious field. At any moment you do not just experience the sound of the music and the taste of the beer, but you have both as part of a single, unified conscious field, a subjective awareness of the total conscious experience. So the feature we are trying to explain is qualitative, unified subjectivity.

Now it might seem that is a fairly well-defined scientific task: just figure out how the brain does it. In the end I think that is the right attitude to have. But our peculiar history makes it difficult to have exactly that attitude—to take consciousness as a biological phenomenon like digestion or photosynthesis, and figure out how exactly it works as a biological phenomenon. Two philosophical obstacles cast a shadow over the whole subject. The first is the tradition of God, the soul, and immortality. On this view, consciousness is not a part of the ordinary biological world of digestion and photosynthesis: it is part of a spiritual world. It is sometimes thought to be a property of the soul and the soul is definitely not a part of the physical world. The other tradition, almost as misleading, is a certain conception of Science with a capital “S.” Science is said to be “reductionist” and “materialist,” and so construed there is no room for consciousness in Science. If it really exists, consciousness must really be something else. It must be reducible to something else, such as neuron firings, computer programs running in the brain, or dispositions to behavior.

There are also a number of purely technical difficulties to neurobiological research. The brain is an extremely complicated mechanism with about a hundred billion neurons in humans, and most investigative techniques are, as the researchers cheerfully say, “invasive.” That means you have to kill or hideously maim the animal in order to investigate the operation of the brain. Noninvasive research techniques, such as brain imaging, are useful, but they have so far not given us the sort of detailed understanding of the workings of the conscious mind that we would like.

2.

Christof Koch has written about these issues before, including an important book I reviewed in these pages, The Quest for Consciousness.1 His current book abandons the biological approach he adopted earlier, and which I have articulated above. According to his current view, consciousness has no special connection with biology. He follows the Italian neuroscientist Giulio Tononi,2 now at the University of Wisconsin–Madison, in thinking that the key to consciousness is information theory, which, he writes, “exhaustively catalogues and characterizes the interactions among all parts of any composite entity.” It does so by quantifying the information about such interactions as “bits” that can be measured, stored, and transmitted. The application of information theory made by Tononi and Koch emphasizes that consciousness requires that the information that constitutes consciousness be both “differentiated” and “integrated.” In one of Tononi’s examples, in experiencing a red square we “differentiate” the property of redness and the property of squareness, but the experience is “integrated” in that it “cannot be decomposed into the separate experience of red and the separate experience of a square.” Tononi goes on,

Similarly, experiencing the full visual field cannot be decomposed into experiencing separately the left half and the right half: such a possibility does not even make sense to us, since experience is always whole.

According to Koch, any system at all that has processes describable by information theory is, at least to some degree, conscious. But since any system that has causal relations can be described in the vocabulary of information theory, it turns out that consciousness is everywhere. Panpsychism follows. As he tells us:

By postulating that consciousness is a fundamental feature of the universe, rather than emerging out of simpler elements, integrated information theory is an elaborate version of panpsychism.… Once you assume that consciousness is real and ontologically distinct [i.e., exists apart] from its physical substrate, then it is a simple step to conclude that the entire cosmos is suffused with sentience. We are surrounded and immersed in consciousness….

No matter whether the organism or artifact hails from the ancient kingdom of Animalia or from its recent silicon offspring, no matter whether the thing has legs to walk, wings to fly, or wheels to roll with—if it has both differentiated and integrated states of information, it feels like something to be such a system; it has an interior perspective.

In other words it is conscious. So:

personal computers, embedded processors, and smart phones…might be minimally conscious.

Koch and Tononi begin by investigating biological consciousness in humans and animals. They develop a theory that consciousness is information. But such information is not confined to biological systems. You also find consciousness in, say, smartphones. So, in the end, for these authors, there is nothing especially biological about consciousness.

The integrated information theory of consciousness makes a number of important predictions. Among them is that, in the specific case of biological consciousness, information arises from causal interactions within the nervous system, and when those interactions cannot take place anymore the amount of consciousness shrinks. For example, there is less consciousness in deep sleep than in wakefulness. According to Tononi this is because there is less integration going on in the brain in deep sleep compared with that in wakefulness. Tononi and his colleague Marcello Massimini, now a professor in Milan, set out to prove this by attaching electrodes to volunteers both sleeping and awake. A difference of results, according to Tononi, showed that in deep-sleeping subjects the integration breaks down.

Koch discusses a number of other issues in the book: notably free will, the relation of science and religion, and the role of unconscious mental processes. I will discuss some of these later, but the single most important claim is the analysis of consciousness based in information theory.

3.

Two objections stand out immediately. The first is that no reason at all has been given why there should be any special connection between information theory and consciousness. In his earlier views, Koch argued that consciousness is explained by synchronized neuron firings. He now rejects that earlier view, and the objection to it was: Why should there be any connection between certain rates of neuron firings and consciousness? The same question arises with information theory: Why should information theory give us the essence of subjectivity? What is the connection supposed to be? My second objection is that the theory implies panpsychism, and panpsychism is absurd for a reason I can explain briefly.

Consciousness comes in units. The qualitative state of drinking beer is different from finding the money in your wallet to pay for it. But a consequence of its subjectivity is its unity. So for example, I am conscious and you are conscious but each consciousness is separate from the other; they do not smear into each other like adjoining puddles of mud. Consciousness cannot be spread over the universe like a thin veneer of jam; there has to be a point where my consciousness ends and yours begins. For people who accept panpsychism, who attribute consciousness, as Koch does, to the iPhone, the question is: Why the iPhone? Why not each part of it? Each microprocessor? Why not each molecule? Why not the whole communication system of which the iPhone is a part? The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim. Consciousness comes in units and panpsychism cannot specify the units.

4.

Christof Koch describes his book as the “Confessions of a Romantic Reductionist.” But this is misleading. His book is explicitly and aggressively antireductionist; it contains no confessions; and if you are looking for a romantic book, this is not it. A crucial sentence is this:

Experience, the interior perspective of a functioning brain, is something fundamentally different from the material thing causing it and…it can never be fully reduced to physical properties of the brain.

And:

I believe that consciousness is a fundamental, an elementary, property of living matter. It can’t be derived from anything else; it is a simple substance, in Leibniz’s words.

This is antireductionism with a vengeance. Indeed, as he himself says, it is a form of dualism. You are a reductionist if you think that consciousness is really something else: that its first-person ontology, the sense I have that I exist, can be shown to be really a third-person ontology, reducible to something else. Favorite candidates for reducing consciousness to something else are neuron firings, computer processes, and behavior. Antireductionism does not become reductionism by being described as “romantic.” There is no sense whatever—romantic or otherwise—in which Koch is a reductionist about consciousness.

Also, I could not find any confessions in the book. “Confessions” implies that he admits he has done something wrong. There are many personal reflections in the book about himself, his family, his children, his dogs, his mountain-climbing experiences, and his work as a scientist. His friends, of whom I am one, will find many of these quite moving. But any confession where he actually admits to some serious or even trivial misdeed is conspicuously absent from the book. An accurate subtitle would be “Personal Reflections of a Scientific Dualist.”

5.

On the question of free will, Koch endorses the most extreme interpretation of the experiments conducted by the late neuroscientist Benjamin Libet, whose experiments drew on the work of other scientists. Libet would tell his subjects to perform some intentional but trivial act, such as pushing a button or flicking their wrist, and to do it every so often, whenever they felt like it. He asked them to observe on a clock exactly the point at which they made up their mind to act, and he found an interval between increased activity in a specific area of the brain and the subject’s awareness that he was beginning to push the button or perform a similar action. In short, before I am aware that I am about to push the button, my brain is getting ready to do so. The brain shows an extra activity, called the “readiness potential,” prior to the reported awareness of the onset of the action. This can last a couple of hundred milliseconds or sometimes even longer.

[Illustration: ‘Likely motives’; drawing by Edward Gorey from The Deadly Blotter, which appears with The Just Dessert in a new edition of Gorey’s Thoughtful Alphabets, published by Pomegranate. Edward Gorey Charitable Trust]

Now what is one to make of these data? I think the most extreme and unwarranted interpretation is that they show that we do not have free will: the brain decides to do something before the mind knows what it is doing; the brain decides to act, and the mind becomes aware of this only later. Koch endorses this extreme, naive view.

I think the data show nothing of the sort. The cases in question are all cases where the subject has already made up his mind eventually to perform a course of action, and the brain shows increased activity prior to his awareness of a conscious decision to perform it physically; but the presence of the readiness potential does not constitute a causally sufficient condition for the performance of the action. It could be the case that a person was inclined to push a button, that the brain then undertook the activity called the readiness potential, and that the person nevertheless did not push the button. The readiness potential in the brain is not sufficient to cause the act; it is associated with the act but does not determine it. We need much more research before we can give a confident interpretation of the readiness potential data. By the way, the pedant in me is annoyed by the fact that Koch attributes all of this to the work of Ben Libet when in fact the same “Bereitschaftspotential” was discovered in the 1970s by two German scientists, Lüder Deecke and H.H. Kornhuber, and their colleagues.3 Koch makes no mention of Deecke and Kornhuber.

6.

Koch’s proposal to explain consciousness by the processing of information marks a major shift in the kind of explanation he is seeking. Standard explanations in biology are causal; for example, we want to know how genes cause physical and other traits and how brain processes cause consciousness. But Koch’s explanation abandons this project. He is not saying that information causes consciousness; he is saying that certain information just is consciousness, and because information is everywhere, consciousness is everywhere. I think that if you analyze this view carefully, you will see that it is incoherent.

To put the point bluntly: consciousness is independent of an observer. I am conscious no matter what anybody thinks. But information is typically relative to observers. These sentences, for example, contain information that makes sense only relative to our capacity to interpret them. So you cannot explain consciousness by saying it consists of information, because information exists only relative to consciousness.

Information is one of the most confused notions in contemporary intellectual life. First, there is information in the ordinary sense, in which it always has a content—typically, that such and such is the case or that such and such an action is to be performed. That kind of information is different from information in the sense of the mathematical “theory of information,” originally invented by Claude Shannon of Bell Labs. The mathematical theory of information is not about content, but about how content is encoded and transmitted. Information according to the mathematical theory is a matter of bits of data, where data are construed as symbols. In more traditional terms, the commonsense conception of information is semantical, but the mathematical theory of information is syntactical. The syntax encodes the semantics. This is in a broad sense of “syntax” which would include, for example, electrical charges.
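
Searle’s contrast between syntactic and semantic information can be made concrete. Shannon’s measure depends only on the statistics of the symbols, not on what, if anything, they mean: a sentence and an alphabetized scramble of the same characters carry identical information in the mathematical sense. A small sketch (the helper name is mine):

```python
import math
from collections import Counter

def entropy_per_char(text):
    """Shannon entropy of a string's character distribution, in bits per symbol."""
    n = len(text)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(text).values())

meaningful = "the cat is on the mat"
scrambled = "".join(sorted(meaningful))  # same characters, meaning destroyed

# Identical symbol statistics, hence identical Shannon information,
# even though one string has semantic content and the other has none.
print(math.isclose(entropy_per_char(meaningful), entropy_per_char(scrambled)))  # True
```

Compression and channel capacity care only about this statistical quantity; the semantics, as Searle says, is supplied by the interpreter.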

Information theory has proved immensely powerful in a number of fields and may become more powerful as new ways are found to encode and transmit content, construed as symbols. Tononi and Koch want to use both types of information: they want consciousness to have content, but they also want it to be measurable using the mathematics of information theory.

To explore these ideas two distinctions must be made clear. The first is between two senses of the objective and subjective distinction. This famous distinction is ambiguous between an epistemic sense (where “epistemic” means having to do with knowledge) and an ontological sense (where “ontological” means having to do with existence). In the epistemic sense, there is a difference between those claims whose truth or falsity can be settled objectively, where truth and falsity do not depend on the attitudes of the makers and users of the claim, and those that cannot be so settled. If I say that Rembrandt was born in 1606, that claim is epistemically objective. If I say that Rembrandt was the best Dutch painter ever, that is, as they say, a matter of “subjective opinion”; it is epistemically subjective.

But also there is an ontological sense of the subjective/objective distinction. In that sense, subjective entities only exist when they are experienced by a human or animal subject. Ontologically objective entities exist independently of any experience. So pains, tickles, itches, suspicions, and impressions are ontologically subjective; while mountains, molecules, and tectonic plates are ontologically objective. Part of the importance of this distinction, for this discussion, is that mental phenomena can be ontologically subjective but still admit of a science that is epistemically objective. You can have an epistemically objective science of consciousness even though it is an ontologically subjective phenomenon. Ben Libet was practicing such an epistemically objective science; so are a wide variety of scientists ranging, for example, from Antonio Damasio to Oliver Sacks.

This distinction underlies another distinction—between those features of the world that exist independently of any human attitudes and those whose existence requires such attitudes. I describe this as the difference between those features that are observer-independent and those that are observer-relative. So, ontologically objective features like mountains and tectonic plates have an existence that is observer-independent; but marriage, property, money, and articles in The New York Review of Books have an observer-relative existence. Something is an item of money or a text in an intellectual journal only relative to the attitudes people take toward it. Money and articles are not intrinsic to the physics of the phenomena in question.

Why are these distinctions important? In the case of consciousness we have a domain that is ontologically subjective, but whose existence is observer-independent. So we need to find an observer-independent explanation of an observer-independent phenomenon. Why? Because all observer-relative phenomena are created by consciousness. It is only money because we think it is money. But the attitudes we use to create the observer-relative phenomena are not themselves observer-relative. Our explanation of consciousness cannot appeal to anything that is observer-relative—otherwise the explanation would be circular. Observer-relative phenomena are created by consciousness, and so cannot be used to explain consciousness.

The question then arises: What about information itself? Is its existence observer-independent or observer-relative? There are different sorts of information, or if you like, different senses of “information.” In one sense, I have information that George Washington was the first president of the United States. The existence of that information is observer-independent; I have that information regardless of what anybody thinks. It is a mental state of mine, which, while it is normally unconscious, can readily become conscious. Any standard textbook on American history will contain the same information. What the textbook contains, however, is observer-relative. It is only relative to interpreters that the marks on the page encode that information. With the exception of our mental thoughts—conscious or potentially conscious—all information is observer-relative. And in fact, except for giving examples of actual conscious states, all of the examples that Tononi and Koch give of information systems—computers, smart phones, digital cameras, and the Web, for example—are observer-relative.

We cannot explain consciousness by referring to observer-relative information because observer-relative information presupposes consciousness already. What about the mathematical theory of information? Will that come to the rescue? Once again, it seems to me that all such cases of “information” are observer-relative. The reason for the ubiquitousness of information in the world is not that information is a pervasive force like gravity, but that information is in the eye of the beholder, and beholders can attach information to anything they want, provided that it meets certain causal conditions. Remember, observer relativity does not imply arbitrariness, nor does it imply epistemic subjectivity.

An example prominently discussed by Tononi will make this clear. He considers the case of a photodiode that turns on when the light is on and off when the light is off. So the photodiode has just two states and carries a minimal amount of information. Is the photodiode conscious? Tononi tells us, and Koch is committed to the same view, that yes, the photodiode is conscious. It has a minimal amount of consciousness, one bit to be exact. But now, what fact about it makes it conscious? Where does its subjectivity come from? Well, it contains the information that the light is either on or off. But the objection to that is: the information only exists relative to a conscious observer. The photodiode knows nothing about light being on or off; it just responds differentially to photon emissions. It is exactly like a mercury thermometer that expands or contracts in a way that we can use to measure the temperature in the room. The mercury in the glass knows nothing about temperature or anything else; it just expands or contracts in a way that we can use to gain information.

Same with the photodiode. The idea that the photodiode is conscious, even a tiny bit conscious, just in virtue of matching a luminance in the environment, does not seem to be worth serious consideration. I have the greatest admiration for Tononi and Koch but the idea that a photodiode becomes conscious because we can use it to get information does not seem up to their usual standards.
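The claim that the photodiode “just responds differentially” can be put in deliberately crude code. In this sketch (the threshold and units are invented for illustration), the device is nothing but a causal input-output function; any description of its state as information about light is supplied by the observer, not by the device:

```python
def photodiode(photon_flux: float, threshold: float = 1.0) -> bool:
    # A purely causal response: above-threshold flux switches the state on.
    # Nothing here refers to "light" except our own variable names; the
    # semantics are contributed by us, the observers.
    return photon_flux >= threshold

print(photodiode(2.0))   # True: we may read this state as "the light is on"
print(photodiode(0.5))   # False: we may read this state as "the light is off"
```

The same holds for the thermometer: the mercury column is a causal correlate that an observer can treat as a measurement, but the treating is where the information comes in.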

A favorite example in the literature is the rings in a tree stump. They contain information about the age of the tree. But what fact about them makes them information? The answer is that there is a correlation between the annual rings on the tree stump and the cycle of the seasons, and the different phases of the tree’s growth, and therefore we can use the rings to get information about the tree. The correlation is just a brute fact; it becomes information only when a conscious interpreter decides to treat the tree rings as information about the history of the tree. In short, you cannot explain consciousness by referring to observer-relative information, because the information in question requires consciousness. Information is only information relative to some consciousness that assigns the informational status.

Well, why could not the brute facts that enable us to assign informational interpretations themselves be conscious? Why are they not sufficient for consciousness? The mercury expands and contracts. The photodiode goes on or off. The tree gets another ring with each passing year. Is that supposed to be enough for consciousness? As long as we have the notion of “information” in our explanation, it might look as if we are explaining something, because, after all, there does seem to be a connection between consciousness and observer-independent information.

There is no doubt some information in every conscious state in the ordinary content sense of information. Even if I just have a pain, I have information, for example that it hurts and that I am injured. But once you recognize that all the cases given by Koch and Tononi are forms of information relative to an observer, then it seems to me that their approach is incoherent. The matching relations themselves are not information until a conscious agent treats them as such. But that treatment cannot itself explain consciousness because it requires consciousness. It is just an example of consciousness at work.

7.

There are many other interesting parts of Koch’s book that I have not had the space to discuss, and as always Koch’s discussions are engaging and informative. I would not wish my misgivings to detract from the real merits of his book. But the primary intellectual ambitions of the book—namely to offer a model for explaining consciousness and to suggest a solution to the problem of free will and determinism—do not seem to me successful.

The post Can Information Theory Explain Consciousness? appeared first on The New York Review of Books.

The Mystery of Consciousness, Con’t. https://www.nybooks.com/articles/2011/09/29/mystery-consciousness-cont/ Thu, 08 Sep 2011 20:00:00 +0000

To the Editors:

Antonio Damasio and John Searle have both contributed a lot to our understanding of the nature of consciousness, but I would suggest that neither they nor anyone else has had any success in explaining, to quote Searle, “exactly how brain processes create consciousness.” What Damasio and Searle and others have done is tell us about correlates of consciousness—whether physiological or mental. Does anyone have the foggiest idea of how a bunch of firing neurons in any kind of network produce consciousness? My answer is No. What would Searle say?

Barclay Martin
Professor Emeritus
University of North Carolina
Chapel Hill, North Carolina

John Searle replies:

There were many letters in response to my review of Antonio Damasio’s Self Comes to Mind [NYR, June 9]. They exhibited no common pattern of objection, but Professor Martin’s concern expresses a worry that should be answered. He does not see how it is possible that “a bunch of firing neurons in any kind of network produce consciousness.” I entirely agree that, at present, the way neurons produce consciousness remains mysterious.

But before despairing we should remind ourselves of two things. First, we know it happens. We know that brain processes cause all of our conscious experiences. If it happens, we should try to figure out how it happens. Second, we have been through such mysteries before. There have frequently been phenomena that seemed utterly mysterious but that eventually received an explanation. Two famous examples: first, the problem of life once seemed mysterious. How could mere chemical processes, brute lifeless chemical reactions, produce life? It is hard for us today to recover the passion with which this issue was once debated. A second example is electromagnetism. By the principles of classical Newtonian physics, electromagnetism seems positively mysterious, even spooky. Advances in knowledge have removed the sense of mystery from life and electromagnetism.

Currently, we have several similar mysteries on our agenda: consciousness is one, but not the only one; the problem of the freedom of the will and the interpretation of quantum mechanics are equally mysterious. It is possible that we may never have a solution to the mystery of consciousness, but it would be unintelligent to give up the effort, because we have no reason to suppose that consciousness is inexplicable. That being the case, we have to do everything we can to try to explain it.

The post The Mystery of Consciousness, Con’t. appeared first on The New York Review of Books.

The Mystery of Consciousness Continues https://www.nybooks.com/articles/2011/06/09/mystery-consciousness-continues/ Thu, 19 May 2011 21:00:00 +0000

1.

How do neurobiological processes in the brain cause consciousness? I think this is the most important question in the biological sciences today. Two related questions: Where exactly is consciousness realized in the brain and how does it function causally in our behavior? Antonio Damasio is one of the leading workers in the field of consciousness research, and after having written a number of books on related problems, in Self Comes to Mind he addresses the problem of consciousness directly. He does not claim to have solved it but he believes that he has made advances and pointed in the right direction for a solution.

Musée d’Orsay, Paris/RMN–Grand Palais/Hervé Lewandowski

Odilon Redon: Closed Eyes, 1890; from the exhibition ‘Odilon Redon: The Prince of Dreams, 1840–1916,’ at the Grand Palais, Paris, March 23–June 20, 2011, and the Musée Fabre, Montpellier, July 7–October 16, 2011

What exactly is consciousness? There are a number of senses of the word in ordinary speech, but there is one that is most important for philosophy and science: consciousness consists of qualitative, subjective states of feeling or sentience or awareness. These typically begin when we awake from a dreamless sleep and they go on until we fall asleep again or otherwise become unconscious. Dreams are a form of consciousness. Consciousness, in short, is a matter of the qualitative experiences that we have. To understand qualitativeness, think of the difference between drinking beer, listening to music, and thinking about your income tax. Each experience has a distinct quality.

Because of this qualitative character all conscious states are essentially subjective in the sense that they exist only as experienced by a subject—human or animal. The problem of consciousness can now be stated somewhat more precisely: How does the brain produce qualitative subjectivity? How does it get us over the hump from the objective third-person character of neuron firings to the subjective first-person feelings we have when we are conscious?

There is so much confusion surrounding the notions of objectivity and subjectivity that I need to say a word to clarify them. In one sense, the objective/subjective distinction is about claims to knowledge. I call this the epistemic sense. A claim is said to be objective if its truth or falsity can be settled as a matter of fact independently of anybody’s attitudes, feelings, or evaluations; it is subjective if it cannot. For example, the claim that Van Gogh died in France is epistemically objective. But the claim that Van Gogh was a better painter than Gauguin is, as they say, a matter of subjective opinion. It is epistemically subjective.

In another sense, the objective/subjective distinction is about modes of existence. I call this the ontological sense. An entity has an objective ontology if its existence does not depend on being experienced by a human or animal subject; otherwise it is subjective. For example, mountains, molecules, and tectonic plates are ontologically objective. Their existence does not depend on being experienced by anybody. But pains, tickles, and itches only exist when experienced by a human or animal subject. They are ontologically subjective.

I emphasize these two senses of the distinction because a common mistake is to suppose that because science is objective and consciousness is subjective, there cannot be a science of consciousness. Science is indeed epistemically objective, because scientific claims are supposed to be verifiable independently of anybody’s feelings and attitudes. But the ontological subjectivity of the domain of consciousness does not preclude an objective science of that domain. You can have an (epistemically) objective science of an (ontologically) subjective consciousness. Much confusion has been created by the failure to see this point.

So our question is: How does the brain create ontological subjectivity? We know consciousness happens and we know the brain does it. How does it work? How do we approach this problem scientifically? The standard way is to go through three steps. First, try to find the neurobiological correlate of consciousness. A lot of work has been done on this. There is now even a commonly used abbreviation, NCC, for the neuronal correlate of consciousness. Second, try to test if the correlations are in fact causal. Do the neurobiological states cause consciousness? Third, try to formulate a theory. Why do these processes cause consciousness at all, and why do these specific processes cause these specific conscious states? In recent years there has been a sizable number of important research efforts devoted to solving these problems, and I have reviewed several of the relevant books in these pages.1

One depressing feature of this entire research project is that it does not seem to be making much progress. Most efforts to identify the NCC have concentrated on the thalamocortical system, the area including the thalamus and the different layers of the cortex. But the slowness of progress makes one wonder if we are, perhaps, proceeding on the basis of wrong assumptions. Damasio’s book makes a new start in at least two respects. First, he emphasizes other areas of the brain in the production of consciousness, especially the brain stem. Other theorists ignore the brain stem, presumably because it is an evolutionarily primitive part of the brain. They think consciousness is the result of activity in more advanced neuroanatomical features, such as the thalamocortical system. Second, his entire book is built around the theme that the self plays a crucial role in the creation of consciousness.

2.

To summarize Damasio’s argument2 is not an easy task because the book is densely argued and to me at least often unclear. Here is the basic framework: the brain creates an (unconscious) mind. The brain also creates the self. When the self encounters the mind, consciousness results. As with most important theories there is an underlying intuition that drives his theory, though he does not say it explicitly: whenever I have a conscious experience I always experience it as mine. I do not just have a sequence of unrelated neutral qualitative states that could belong to anybody, but I have them as part of a coherent unity that is constitutive of and experienced as myself. So if consciousness is somehow always related to the self, then it seems natural to think that maybe the key to understanding the neurobiology of consciousness is by way of the neurobiology of the self.

The book addresses two problems: (1) How does the brain construct a mind? and (2) How does the brain make the mind conscious? The brain creates a mind by creating images, which are unconscious momentary patterns on sheets of neurons called maps. The images may be either of parts of the body or of things outside the body, but in general, perception is the result of mapping. Damasio says, “The distinctive feature of brains such as the one we own is their uncanny ability to create maps.” Brain maps are not static; they change from moment to moment. The mind is a consequence of the mapping activity of the brain. “Minds emerge,” Damasio writes, “when the activity of small [neuronal] circuits is organized across large networks so as to compose momentary patterns. The patterns represent things and events located outside the brain.” The term “map” applies to all of these patterns, and though they are mental, they are at this stage still totally unconscious, according to Damasio.

Body mapping is the key to the problem of consciousness, because by mapping the body the brain manages to create the critical component of the self. Having made a mind by making maps, the brain makes the mind conscious by creating a self, and when the self encounters the mind, consciousness results. This is the source of Damasio’s title, Self Comes to Mind:

The decisive step in the making of consciousness is not the making of images and creating the basics of the mind. The decisive step is making the images ours, making them belong to their rightful owners…. [Italics in original.]

Damasio’s two crucial notions are consciousness and the self.

(1) Consciousness. In actual practice I think his idea of consciousness is essentially the one stated above. Its essence is qualitative subjectivity. But when Damasio defines it explicitly it comes out a bit differently: it is “a state of mind in which there is knowledge of one’s own existence and of the existence of surroundings” (italics in original). I do not believe this definition is correct. My dog, Gilbert, is plainly conscious, but in what sense does he have knowledge of his own existence? He is certainly aware of his surroundings when he perceives anything. But it is hard to say that when he is dreaming he has knowledge of the existence of his surroundings. It is Damasio’s right to define a word any way he likes, but I think in practice he uses “consciousness,” as I do, to refer to ontologically subjective states such as pains, and does not use it just to describe epistemic states, such as my knowing that I am in Berkeley.

(2) The Self. The self is a much harder notion to define, and I do not find his definitions entirely clear. He says the self is decomposable into three components, the protoself, the core self, and the autobiographical self. Each of these can come in two forms, the “self-as-object” and the “self-as-knower.” But the self-as-object can also operate as knower.

The protoself is constituted by special kinds of mental images of the body produced in body-mapping structures, below the level of the cerebral cortex. The protoself is “an integrated collection of separate neural patterns that map, moment by moment, the most stable aspects of the organism’s physical structure” (italics in original). The first product of the protoself is “primordial feelings.” Whenever you are awake there has to be some form of feeling. The second form of the self, the “core self,” is about action. “The core self unfolds in a sequence of images that describe an object engaging the protoself and modifying that protoself, including its primordial feelings.” These images are now conscious because they have encountered the self. Finally there is the autobiographical self, constituted in large part by memories of facts and events about the self and about its social setting. The protoself and the core self constitute a “material me.” The autobiographical self constitutes a “social me.” Our sense of person and identity is in the autobiographical self.

When we put this all together we get the following result: conscious minds begin when the self comes to mind, when brains add a process involving a person’s sense of self to the mind mix. Specifically, the neurology of consciousness is organized around the brain structures involved in generating three features: wakefulness, mind, and self. Three major anatomical features are the brain stem, the thalamus, and the cerebral cortex. There is no direct alignment between, on the one hand, each of these anatomical features, and, on the other, each component of the mental trio of wakefulness, mind, and self. All three anatomical divisions contribute some aspect of wakefulness, mind, and self. To be fully conscious you have to have three features: (1) to be awake, (2) to have an operational mind, and (3) to have a sense of self as a protagonist of the experience.

Finally, after more than 250 pages, we get to the problem of qualia—a term often used for qualitative conscious states. Damasio tells us that there are two kinds of qualia: Qualia I and Qualia II. Qualia I is about pain and pleasure, but the problem of Qualia II is why there should be any feelings at all. He thinks Qualia I is not a mystery but that the Qualia II problem is more difficult. About Qualia II we get the following stunning passage:

Qualia is part of the contents that come to be known as the self process, the self construction illuminating the mind construction. But somewhat paradoxically, Qualia II is also the grounding for the proto-self and thus sits astride mind and self, in a hybrid transition. The neural design that enables qualia provides the brain with felt perceptions, a sense of pure experience. After a protagonist is added to the process, the experience is claimed by its newly minted owner, the self.

It is a bit hard to see how this is consistent with the rest of the book. The self is introduced to explain consciousness, but if it is to explain consciousness we cannot assume that the self is already conscious. How did it get to be conscious? Yet he tells us that qualia are the “grounding” of the self. But qualia just are conscious states. So this account, if we are to take this passage seriously, would be circular. We would be assuming consciousness to explain consciousness.

This is only a brief summary of the main argument of the book and I am leaving out Damasio’s rich and rewarding discussion of many other issues, such as the emotions, perception, memory, and mirror neurons.3

3.

I have a large number of criticisms to make of the book, but I will confine them to three topics: Damasio’s account of the self, his conception of the mind, and his theory of consciousness. I think all are open to question.

The Self. The project is to give an account of consciousness by showing how the interaction between the mind and the self produces it. In order to do that one would have to give an account of the mind and the self that did not already presuppose that either was conscious and then show how their interaction produces consciousness. One would have to explain the mind as a set of ontologically objective biological processes, then do a similar explanation of the self, and then specify the mechanisms by which the structures of the self interact with the mind structures in order to produce qualitative subjectivity.

As far as I can tell Damasio does not succeed in doing this; indeed he does not even really try. He does try to give an account of the mind as a set of (unconscious) mapping activities of the brain, and this does not presuppose, or at least it does not obviously presuppose, that these activities are conscious. But when he gets to the self, it is hard to understand any of his three divisions of the self, protoself, core self, and autobiographical self, without supposing that they are already conscious.

The problem can be stated succinctly by presenting his account with the following dilemma: Is the self, as he describes it, unconscious or conscious? If it is unconscious then he has nothing to say about how its encounter with a mind results in consciousness. But if you look at the text closely it seems pretty clear that there is no way to understand the sort of self that he describes without supposing that it is already conscious. He frequently uses words like “primordial feeling” and “emotion” to describe the self. It is hard to understand these in a way that does not imply consciousness. This account is therefore circular because we are assuming a conscious self in order to explain the conscious mind, but this uses consciousness to explain consciousness.

The Mind. My objection to Damasio’s account of the self is that he has tacitly, if unconsciously, smuggled consciousness into his conception of it without explaining how it got there. My objection to his account of mind is that he does not see how much consciousness is essential to our understanding of the mind.

He says the brain creates the mind by making maps. On the standard understanding of the causal relations between brain and mind, that is not true. The brain creates the mind by making thoughts, feelings, perceptions, pains, memories, sensations, and all the rest of it, both conscious and unconscious. The creation of neurobiological patterns is an essential part of this process, but he gives no reason to suppose that the map, qua map, has any psychological reality at all. When he tells us that the mind consists largely of unconscious maps, one has to ask: What fact about these maps makes them mental? When we read words like “image,” “perception,” and “feeling” in his account of maps, we tend naturally to connect them with the conscious formation of images and the experience of perceptions and feelings. But that is not what he means when he talks about the mapping activity of the brain. The problem, to put it in a nutshell, is that he has given us no reason to suppose that these maps have any mental or psychological reality at all.

Consider an obvious example of a map that we are all familiar with, the retinal image inside the eyeball when we see anything. As far as I can see, this map/image has no psychological reality whatever. We do not see it or otherwise experience it. The retinal image is one of many steps leading from the optical stimulus to conscious perception. But most of these steps, as far as I know, have no psychological reality whatever. They are not the mind, or part of the mind. In any case he gives us no reason to suppose they have any mental reality at all. They are just “momentary patterns.” He tells us that they are “of” something and that they “represent” things and events outside the brain. But how do they represent? What fact makes them “of” anything?

At any given point in waking life there are two sorts of things going on in the brain, neurobiological processes and consciousness. Some of those neurobiological processes are unconscious mental states. What fact about them makes them mental? The question about how these processes become mental is not trivial. Damasio assumes that a pattern that occurs on the way to a mental state is already mental. There is no justification given for this assumption. He tells us that subjectivity is not required for mental states to exist. Yes, but what is required? What fact about them makes them mental?

Whenever we talk about mental states and events, conscious or unconscious, we have to be able to say what exactly the content of the state in question is. I see the tree in front of me, I think about the lunch I am going to have later, I feel a pain in my thigh. In each case, the mental reality, conscious or unconscious, has a specific content. The problem with Damasio’s notion of the mind is that he does not specify any actual or possible contents. Maybe there is a psychological reality to the maps, but he gives us no reason to suppose there is. A more natural way to describe at least many of the maps would be to say that they are stages on the way to the construction of mental states.

Consciousness. We have to keep reminding ourselves that any type of qualitative subjectivity is a form of consciousness. The possession of such states is necessary and sufficient for being conscious. Consciousness comes in degrees, and these range all the way from fanatic intensity to just barely being awake. But all of these are degrees within consciousness. There is no such thing as a “hybrid” form of consciousness. But the present task in the neurobiology of consciousness is to explain exactly how brain processes create consciousness.

I think the fact that Damasio finds dreams and states such as jet lag “exceptions” to his account is a clue that something is wrong. About jet lag he says: “there is a mind but not quite yet a mind organized with all the properties of consciousness.” He tells us that wakefulness is essential for consciousness. But in the ordinary sense it is not. I can have dreams when I am in REM sleep. He says this is a “partial exception” but there is nothing partial about it. It is a total exception.

He sees correctly that dreams and jet lag have to be distinguished from normal, healthy, goal-directed states of consciousness. But the question “What makes a state of consciousness healthy?” is quite different from “How does the brain create consciousness in the first place?” Our task is to explain consciousness, i.e., qualitative subjectivity. Damasio claims that wakefulness is a necessary condition for consciousness. But what these cases show is that it is not. His answer, that they are not normal, is quite correct but does not address the question: What are the necessary and sufficient conditions for the creation of qualitative subjectivity?

4.

Damasio and I agree that in addition to conscious mental states there are unconscious states. What fact about certain brain processes makes them mental? This is one aspect of the traditional “mind-body problem.” What is its solution? Without going into detail, the best way to answer it is to consider an example. Even when I am in a sound, dreamless sleep it is true to say of me that I believe/remember that George Washington was the first president. What fact about me makes it the case then and there that I have that unconscious mental state? I think the answer is that my brain is in a condition such that I can produce that state in a conscious form and in conscious behavior. I can, for example, give a correct answer to the question “Who was the first president?”

The notion of the unconscious is the notion of a capacity of the brain to produce states and actions in a conscious form. Sometimes these capacities are blocked by brain damage, repression, loss of memory, etc., but the basic idea of an unconscious mental state, I think, is clear enough. I believe that Damasio uses the notion of the unconscious rather uncritically, and he does not see the importance of distinguishing between those neurobiological processes that are genuinely unconscious mental states, such as my unconscious memory, when I am sound asleep, that Washington was the first president; and those that are totally nonconscious and nonmental, such as the secretion of neurotransmitters when I move my body.

I am sympathetic with the basic intuition that drives Damasio’s investigation, namely that in any account of consciousness we need to explain how our conscious states are experienced, not just as a sequence of isolated qualitative subjective events, but as “my experiences.” This is part of what we need to explain. He proposes that we should take this characteristic of the self and treat it as the basis of consciousness. In the end that may be the right approach, but he does not give convincing reasons to suppose that it is. I believe a more plausible approach is to suppose that nonpathological forms of consciousness already come with a sense of the self. Our sense of self is a product of a certain sort of consciousness, not conversely. That is why we can lose that sense in certain pathological forms of consciousness. I have great admiration for this book and its author. I think it is an adventurous, courageous, and intelligent effort. I do not think he has made a convincing case that this is the right way to solve the problem of consciousness.

The post The Mystery of Consciousness Continues appeared first on The New York Review of Books.

‘Is Just Thinking Enough?’ https://www.nybooks.com/articles/2011/02/24/just-thinking-enough/ Thu, 03 Feb 2011 17:00:00 +0000

The post ‘Is Just Thinking Enough?’ appeared first on The New York Review of Books.

To the Editors:

In his review of my book Making the Social World [NYR, November 11, 2010], Colin McGinn makes a number of criticisms. I believe they are all mistaken and most rest on misunderstandings.

The first and most important misunderstanding is about language. I claim that human institutional reality—such things as money, property, government, and marriage—requires linguistic representation both for its creation and its maintenance. McGinn thinks I am claiming that it is “logically impossible” for there to be any beings who have such facts without language. I make no such claim. I am talking about humans, not about possible gods. He agrees with me that these institutional facts can only exist if they are represented as existing, but he thinks even we humans do not really need language for that, because we can just think in “concepts.” But thinking in concepts requires some medium in which the thinking takes place. For some simple concepts, for instance color, perhaps imagery is enough, but for complex institutional concepts you have to have words or symbols.

For example, it is easy to imagine a tribe that has language but no property, government, marriage, or money. But try to imagine one that has all of those but no language at all, no symbols or symbolic representations of any kind. It would be impossible. Why can’t we just think, as McGinn believes, in pure concepts and thus create institutional reality? For at least two reasons: first, the representations that are constitutive of institutional reality have to be collectively shared and thus have to be communicable from one person to another.

Second, the complexity, the logical structure of the concepts, and above all their ability to work in status function declarations, which, as I wrote, “impose functions on objects and people [which] cannot perform the functions solely in virtue of their physical structure,” require linguistic forms of expression. Take a simple institutional thought: “Her mortgage is largely paid off, but the recent decline in interest rates may make it desirable for her to refinance to lower her payments and to take out cash.” Try to think that thought in pure concepts without any words or other sorts of symbols. I cannot do it and I do not believe McGinn can either. Humans need language both to think such thoughts and to communicate them.

McGinn says that I don’t make explicit the difference between humans and social animals that lack language. I thought I did, but I will do it again. They may have cooperation, division of labor, status hierarchies with alpha males and alpha females, territoriality, and pair bonding. But they don’t have government, marriage, private property, money, income taxes, and all the rest of institutional reality, because such phenomena require some symbolic means to create obligations, rights, duties, and the rest of what I call “deontic powers.”

Another difficulty with McGinn’s review is that he is unclear about what is a linguistic act or a speech act. He doubts that just pushing the glass of beer toward someone can be a speech act, and asks rhetorically, “What act is not a speech act if this is one? Is sitting in a chair in the pub also a speech act…?” After fifty years of speech act theory it should be obvious that any intentional movement can be a speech act provided it is performed with certain sets of semantic intentions that are communicated to the hearer. The case I described is obviously a speech act, because it assigns the right to drink the beer to the recipient. You can imagine circumstances in which sitting in a chair would be a speech act, but normally it is not. “Speech act” is a quasi-technical term that means, roughly, “a meaningful linguistic act that is intended to communicate propositional content with a certain force from speaker to hearer, which may be spoken, written, or conveyed in some other symbolic form.” Not all speech acts need to be spoken. He thinks I am somehow weakening or modifying (“sliding and hedging”) my account when I allow for speech acts to be performed by something other than spoken words. But that is an emphatic exemplification, not a weakening.

McGinn assumes that I imagine that any status function can be assigned to any object whatever, for example, that stones can be married. I discuss the constraints on the assignment of status function in some detail, and one interesting class of status function is where we impose status requirements only on preexisting physical abilities. As I explain, not everything can be a licensed driver or a qualified surgeon, much less a married couple.

McGinn seems to have understood only a part but not the whole argument of the book. He writes, “There seems little difficulty in the idea that the collective recognition of status functions by itself is sufficient to create institutional facts….” But he does not see that that is the problem and not the solution. We can’t make it rain by getting together and agreeing that it’s raining, but if we agree in a certain way that something is money, then it is money. The problem is to explain in detail how exactly that works.

For further development of these and other responses to McGinn’s review, see my website, socrates.berkeley.edu/~jsearle/articles.html.

John R. Searle
Slusser Professor of Philosophy
University of California
Berkeley, California

Colin McGinn replies:

John Searle is sure that I have misunderstood him and that my criticisms are mistaken. I in turn think that he has misunderstood me and that his criticisms are mistaken.

First, my possibility claim did not concern gods but humans and creatures like them: such cognitive beings do not necessarily need to use language in order to have institutions. To say that concepts require language and hence that institutions depend on language is misguided in two ways: it wrongly assimilates the “medium” of thought to language in the full-blown sense (when did Searle become a language of thought theorist?); and it trivializes the thesis of language dependence Searle is advancing—since now anything that involves thought involves language. His original thesis was that institutions depend on linguistic declarations; but my point was that no alleged medium of thought is a declaration (thoughts and intentions are not speech acts). Nor is it convincing to suggest that the sharing of mental representations entails their being communicated: clearly two people can think the same thing without one of them having communicated the thought to the other, linguistically or otherwise.

Second, my point about animal societies is that they are societies, but they don’t exemplify Searle’s conditions for social facts. Such societies lack many of our social formations, but that just shows that the concept of a social fact is broader than Searle allows.

Third, Searle completely misses my point about speech acts and their absence. It is not that pushing the beers cannot be a speech act; it is that such an act need not be a speech act—yet it can establish property rights. Speech acts require complex communicative intentions (as H.P. Grice pointed out long ago), but not all intentional actions are speech acts. My point was that acts without such communicative intentions, such as pushing the beers across the table when you have no such intentions, can be enough to provide evidence of the agent’s wishes—which then confer property rights (“He wants me to have this beer”). Surely Searle does not think that every act of giving is a speech act.

Fourth, my point about objects and status functions was simply that Searle underestimates the strength of the physical constraints in his theoretical framework, not that he recognizes no such constraints. I think the ontology of institutions needs explicitly to build in physical (and other) constraints as part of the very essence of the fact (hence my analogy with artifacts). This is really a point about presentation and emphasis, since obviously not any old physical object can serve as the basis of an institutional fact.

Fifth, the psychological (not linguistic) theory of institutions is a solution to the problem, if the problem is specifying the nature of social ontology. Of course, we need to hear the details, but the general form of the answer is there. My objection to Searle was simply that it is not of the essence of social facts like marriage and private property that they should be brought about by acts involving language—any more than it is part of the essence of pain that pains should be expressed in language. He has mistaken the contingent for the necessary.

‘Fear of Knowledge’: An Exchange https://www.nybooks.com/articles/2009/12/17/fear-of-knowledge-an-exchange/ Thu, 17 Dec 2009 05:00:00 +0000

The post ‘Fear of Knowledge’: An Exchange appeared first on The New York Review of Books.

To the Editors:

There was something pleasantly nostalgic about John Searle’s review of Fear of Knowledge by Paul A. Boghossian [NYR, September 24], riding to the defense of Enlightenment values of truth, objectivity, and rationality. I was however rather surprised to find myself (although in good company) representative of the forces of darkness he needed to justify his crusade. Along with the good old-fashioned intellectual virtues he claims to espouse, many of us were taught another one. That is to read someone’s work before making it an object of discussion (or derision as I think we might say in this case).

On the basis of a few lines from my paper in the Proceedings of the Aristotelian Society (Supplementary Vol. 71), lines he urges us to read closely and then perversely misreads, he draws wild conclusions, which even the most cursory reading of the paper would have made impossible. He claims that I reject an independently existing reality, when all that was argued was the widely accepted point of the impossibility of an unmediated access to it. More astonishingly he attributes to me and my fellow barbarians (feminist, postcolonialist, and poststructuralist thinkers) the view that “if we are to be truly free, free to create a multicultural democracy, we must above all liberate ourselves from ‘objectivity,’ ‘rationality,’ and ‘science.'”

In place of such a fantasy my paper was instead addressing how rational assessment of knowledge claims is possible, if we accept the situatedness of knowledge seekers. It points out that feminists cannot be relativists for “feminist criticisms aimed to challenge and discredit the masculine accounts they critiqued, not simply to add a further perspective. This requires the possibility of rational encounters between the positions.”

One of the problems with Searle’s characterization of his supposed opponents is a running together of different positions. Those who argue that historical, social, and material locatedness constrain what we can discover and make sense of are accused of relativism: here the view that knowledge is knowledge-relative-to-a-certain-framework/time-or-place. But these are quite different claims. Searle also glosses the suggestion that facts are socially constructed as “if we do not like a fact that others have constructed, we can construct another fact that we prefer.” Yet those who argue that we are the source of the frameworks in terms of which we understand the world do not have to claim that we do this in a way unconstrained by an independent reality, even while accepting that such reality does not dictate to us the single best way of making sense of it.

The failure of Searle to engage with the positions he is so eager to dismiss is puzzling. What is he afraid of here? That a willingness to see something valuable in his opponents might make his own position somewhat less heroic?

Kathleen Lennon
Ferens Professor of Philosophy
University of Hull
Hull, England

To the Editors:

In discussing Paul Boghossian’s critique of relativism, John Searle cites with approval the assertion that “the fact that descriptions are socially relative does not imply that the facts described by those descriptions are socially relative.” From a sociological perspective, emphasizing (even claiming) such a distinction may prove inadequate.

In many social situations the “descriptions” (perceptions, definitions, judgments) are infinitely more consequential socially, and for a commonsense understanding of what is going on, than the alleged “facts.” And since there may well be differing and even “competing” descriptions, what the anti-relativist might like to see, in a given situation, as “fact” or “truth” might better be viewed as an outcome of processes of social perception and definition. Apposite examples are myriad.

Is the husband hitting his wife a “mere domestic disturbance” or is it “wife-battering”? The hitting may be objective “fact,” but how it is defined and reacted to will be crucial. Did the man who fell to his death have an “accident” or commit “suicide”? The evidence may be inconclusive, but in any case a social definition will be applied. What is the objective “truth” value of “clinical depression”? If the distinction between it and extreme sadness is a matter of considered yet still subjective judgment, is clinical depression a “fact”?

Should a critic feel that, by focusing in this way on “descriptions,” I am simply by-passing the philosophical debate, I would return to my earlier point about social (and psychological) consequentiality. It may be that a chair, a tree, a human body, and a physical act are “facts.” But in the arena of human interaction we constantly encounter social characterizations of people, acts, and situations. More often than not, and in the absence of a truly “established” body of supporting evidence, there is little or no consensus regarding which particular “description” or “vocabulary” is applicable. As my earlier comments suggest, in their ramifications and effects the ones that do emerge “successful” make all the difference in the world.

Edwin M. Schur
Professor Emeritus Department of Sociology
New York University

John R. Searle replies:

In my review of Boghossian’s book I cited a passage that he quotes from Kathleen Lennon, in which she contrasts “knowledge as a neutral transparent reflection of an independently ordered reality, with truth and falsity established by transcendent procedures of rational assessment” (a conception she rejects) and “all knowledge [as] situated knowledge, reflecting the position of the knowledge producer at a certain historical moment in a given material and cultural context” (a conception that she accepts and that she assumes refutes the transcendent conception she rejects). I pointed out that contrary to her view, these are not inconsistent. It is trivially true that knowledge is always arrived at by historically situated individuals in historical contexts and it is also true that these individuals sometimes produce theories that meet universal standards of rational assessment.

She says, correctly, that I had not read her article. I was reviewing Boghossian’s book, not her article. I have now read the article with some care, and I believe it contains a deep inconsistency. In her letter to me she denies that she is a relativist, and insists that the passages she quotes from her original article support her denial of relativism. But the key sentence in her original article is this: Theories cannot be assessed by reference to universal norms. This is an astounding claim, because it denies that there are universal norms such as truth, evidence, consistency, rationality, and coherence, by which we can assess theories.

Her grounds for this claim are in the passage Boghossian and I quoted where she assumes that the situatedness and contextual dependency of actual research is inconsistent with universal norms. They are not inconsistent. The rejection of universal norms implies relativism. If there are no universal norms, then what sort of norms can we use? And the answer is implicit in what she says: norms are derived from a given material and cultural context. That is relativism. She cannot have it both ways. She cannot insist that she is not a relativist and yet deny that there are universal norms of validity.

Edwin Schur makes an important point that I want to emphasize. Where brute physical reality is concerned, we can typically state facts that are totally independent of any human attitudes: that the earth is round, that hydrogen atoms have one electron, for example. But where human reality is concerned, there are many facts where the descriptions of the fact are partly constitutive of the fact in question. Something is money, property, government, or marriage only insofar as we represent it as such, and that representation requires some use of language.

Furthermore, there are many human attitudes where language is partly constitutive of the attitude. In order to fall in love or resent injustice, you have to have a certain way of conceptualizing your feelings, because the concepts are partly constitutive of the attitudes in question. And the point is not, as he suggests, that the evidence might be inconclusive. Given complete evidence in some cases you cannot separate the facts from the interpretation. I think these are very important points, and indeed I have written two books about them and related issues, The Construction of Social Reality (1995) and Making the Social World (forthcoming). I am glad that Professor Schur enables me to make this point.

A number of other questions were raised in the numerous letters commenting on my article, and I want to answer at least one familiar objection: the fact that science frequently changes is sometimes taken to support relativism. In fact scientific change is an argument against relativism. We would not bother to change our scientific theories if we did not think the new theory was closer to the truth than the old one. For example, we give up the Newtonian conception of space and time and replace it with an Einsteinian conception, because the latter is closer to the truth.

Why Should You Believe It? https://www.nybooks.com/articles/2009/09/24/why-should-you-believe-it/ Thu, 24 Sep 2009 04:00:00 +0000

The post Why Should You Believe It? appeared first on The New York Review of Books.

1.

Relativism has a long history in our intellectual culture, and takes several different forms, such as relativism about knowledge and truth, ethical values, aesthetic quality, and cultural norms, to mention a few. Paul Boghossian’s book concentrates on the first of these. The basic idea he opposes is that claims to objective truth and knowledge, for example the claim that hydrogen atoms have one electron, are in fact only valid relative to a set of cultural attitudes, or to some other subjective way of perceiving the world. Furthermore, according to relativism, inconsistent claims may have what he calls “equal validity.” There can be no universally valid knowledge claims.

There is a traditional refutation of relativism, as follows: The claim that all truth is relative is itself either relative or not. If it is relative then we need not accept it because it is only valid relative to somebody’s attitudes, which we may not share. If it is not relative, but absolute, then it refutes the view that all truth is relative. Either way relativism is refuted. Boghossian considers this traditional refutation and though he thinks it is serious, he does not regard it as decisive. For one thing, most relativists regard it as a kind of logical trick. They think that they are possessed of a deep insight, that all of our knowledge claims are made relative to a certain set of attitudes, cultural norms, and prejudices. This insight is not refuted by logical arguments, or so they suppose.

The currently most influential form of relativism is social constructivism, which Boghossian defines as follows: “A fact is socially constructed if and only if it is necessarily true that it could only have obtained through the contingent actions of a social group.” The social constructivist is anxious to expose construction where none had been suspected, where something that is in fact essentially social had come to masquerade as part of the natural world. Many social constructivists find it liberating because it frees us from the apparent oppression of supposing that we are forced to accept claims about the world as matters of mind-independent fact when in reality they are all socially constructed. If we do not like a fact that others have constructed, we can construct another fact that we prefer.

What do relativism and social constructivism look like in practice? Boghossian gives a number of striking examples. According to our best evidence, the Native Americans arrived on this continent from the Eurasian landmass by crossing over the Bering Strait; but according to some Native American accounts they are the descendants of the Buffalo people, and they came from inside the earth after supernatural spirits prepared this world for habitation by humans. So here are two alternative and inconsistent accounts. Some anthropologists say that one account is as good as the other. As one put it, “Science is just one of many ways of knowing the world. [The Zunis’ worldview is] just as valid as the archaeological viewpoint of what prehistory is about.” Our science constructs one reality; the Native Americans construct another. As Boghossian sees it, this is not acceptable. These two theories are logically inconsistent with each other; they cannot both be true. Is there any way to eliminate the inconsistency?

The answer, say the relativists, is to see that each claim is relative. We should say not that the early Americans came by way of the Bering Strait, but rather: “according to our theory,” they came by the Bering Strait. And “according to some Native American theories,” they came out of the earth. Once relativized, the inconsistency disappears. Indeed all claims are relativized in this way (including presumably the claim that the original claims were inconsistent and the claim that they have been relativized). Will relativism rescue social constructivism? Boghossian sees correctly that relativism fails to solve the problem, and much of his book is about this failure. I do not agree with all of his arguments but I support his overall project.

A problem faced by social constructivism concerns facts about the past. Are we now constructing facts about the past when we make claims about history? One extreme social constructivist cited by Boghossian, Bruno Latour, accepts this conclusion with somewhat comical results. Recent research shows that the ancient Egyptian pharaoh Ramses II probably died of tuberculosis. But according to Latour, this is impossible because the tuberculosis bacillus was only discovered by Robert Koch in 1882.[^1] “Before Koch, the bacillus had no real existence.” To say that Ramses II died of tuberculosis is as absurd as saying that he died of machine-gun fire.

What is one to make of Latour’s claim? The machine gun was invented in the late nineteenth century, and prior to that invention it did not exist in any form. But the tuberculosis bacillus was not invented. It was discovered. Part of the meaning of “discovery” is that to be discovered something has to exist prior to the discovery, and indeed could not have been discovered if it had not existed prior to the discovery.

The claim that knowledge is a social construction is not meant to state the commonplace truth that many facts in the social world are indeed socially constructed. For example, something is money, private property, a government, or a marriage only because people believe that’s what it is, and in that sense such things are socially constructed. Social constructivism makes the much more radical claim that physical reality itself, the very facts we might think we have discovered in physics, chemistry, and the other natural sciences are socially constructed.

This view has been influential in a number of disciplines: feminism, sociology, anthropology, philosophy of science, and literary theory among others. The titles of some typical works express various degrees of support for the doctrine: Peter Berger and Thomas Luckmann’s The Social Construction of Reality; Bruno Latour and Steve Woolgar’s Laboratory Life: The Social Construction of Scientific Facts; Andrew Pickering’s Constructing Quarks: A Sociological History of Particle Physics; Donald MacKenzie’s Statistics in Britain, 1865–1930: The Social Construction of Scientific Knowledge.[^2] Boghossian quotes a feminist view as follows:

Feminist epistemologists, in common with many other strands of contemporary epistemology, no longer regard knowledge as a neutral transparent reflection of an independently existing reality, with truth and falsity established by transcendent procedures of rational assessment. Rather, most accept that all knowledge is situated knowledge, reflecting the position of the knowledge producer at a certain historical moment in a given material and cultural context.[^3]

This passage is worth a close reading. On the face of it, the two views being contrasted, that knowledge is a “reflection of an independently existing reality” and that “all knowledge is situated knowledge,” are perfectly consistent. Historically situated investigators can discover the truth about “an independently existing reality.” But the point of the passage is to claim that most feminists reject the idea that knowledge reflects an independently existing reality; and the rhetorical flourishes in the passage, such as “transcendent procedures of rational assessment” and “neutral transparent reflection,” are designed to reinforce that point.

2.

Boghossian distinguishes three features of constructivism and considers each separately: constructivism about the facts (the facts themselves are social constructions), constructivism about justification (what we count as a justification of a belief is a matter of social construction), and constructivism about rational explanation (we never believe what we believe solely on the basis of evidence).

About the first and most important of these theses, Boghossian considers arguments from three philosophers: Hilary Putnam, Nelson Goodman, and Richard Rorty. Putnam imagines a hypothetical universe consisting of three circles: A, B, and C. Then he asks: How many objects are there in this universe? Three? No, says Putnam, because according to certain Polish logicians (he cites S. Lezniewski), we can construe one object as A, one as B, one as C, one as consisting of A+B, another as B+C, yet another as A+C, and finally, one of A+B+C. So on this basis, there are really seven objects in the universe. Because we can correctly say that there are three objects or seven objects, Putnam concludes that there is no objective fact of the matter about how many objects there are.[^4]

As Boghossian sees, the conclusion does not follow from the premises. Once you have selected your conditions for something being an object, there is a straightforward fact of the matter about how many objects there are. For Putnam to say that there is no fact of the matter would be like saying that there is no answer to the question “How many guests came to the dinner party?” because you could say eight people or four couples.
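Boghossian's point that the count is determinate once a counting scheme is fixed can be made vivid with a short sketch (the code and variable names are mine, purely illustrative, not anything in Putnam or Boghossian): given three atoms, the ordinary scheme counts three objects, while the Polish logicians' mereological scheme counts every non-empty fusion of atoms, of which there are exactly seven.

```python
from itertools import combinations

# Putnam's hypothetical universe: three "atomic" objects.
atoms = ["A", "B", "C"]

# Ordinary scheme: only the atoms themselves count as objects.
ordinary_count = len(atoms)

# Mereological scheme: every non-empty fusion of atoms is an object,
# i.e. A, B, C, A+B, A+C, B+C, and A+B+C.
fusions = [
    "+".join(combo)
    for size in range(1, len(atoms) + 1)
    for combo in combinations(atoms, size)
]
mereological_count = len(fusions)  # 2**3 - 1 = 7

print(ordinary_count, mereological_count)  # 3 7
```

Either answer follows mechanically from its scheme; there is no residual indeterminacy once the criterion for objecthood is chosen, which is exactly the dinner-party point about eight people versus four couples.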

Goodman’s argument is also weak. Goodman says we construct the constellations of the night sky by drawing certain lines and not others. We draw one set of lines that creates the Big Dipper, for example. All other constellations are similarly created, and what goes for constellations goes for everything, according to Goodman. All of reality consists of human creations. Once again, a bad argument. Constellations are patterns we have selected in the sky because we can discern through our perceptual apparatus certain geometrical forms such as the Big Dipper. Constellations are, in this sense, observer-relative: the actual stars exist independently of any observer, though the patterns we use to name constellations exist only relative to our point of view.

But the stars, as well as mountains, molecules, and tectonic plates, are not in that way relative to an observer. True, we have to select a vocabulary of “stars,” “mountains,” etc., but once the vocabulary has been selected, it is a completely objective fact that Mount Everest is a mountain, for example, and not a giraffe. The general pattern of error is to confuse, on the one hand, the social relativity of the vocabulary and the making of descriptions within that vocabulary with, on the other, the social relativity of the facts described using that vocabulary. This comes out strikingly in Rorty’s argument.

Rorty says that we accept the descriptions we do, not because they correspond to the way things are, but because it serves our practical interests to do so. Boghossian agrees that the fact that we give the descriptions we do is a fact that reflects something about us and our society. But, he points out, the fact that descriptions are socially relative does not imply that the facts described by those descriptions are socially relative. Boghossian cites an argument by Rorty attacking an article of mine[^5] in which I said that mountains, for example, exist completely independently of us and our descriptions. Rorty answered as follows:

Given that it pays to talk about mountains, as it certainly does, one of the obvious truths about mountains is that they were here before we talked about them. If you do not believe that, you probably do not know how to play the usual language-games which employ the word “mountain.” But the utility of those language-games has nothing to do with the question of whether Reality as It Is In Itself, apart from the way it is handy for human beings to describe it, has mountains in it.[^6]

This is a strange passage. Rorty is saying correctly that we adopt the vocabulary that we do because it serves various interests to have that vocabulary. But what he neglects is that the facts in this sort of case exist quite independently of the vocabulary. He begins, “Given that it pays to talk about mountains…,” implying that somehow the existence of mountains depends on the usefulness of the vocabulary. But it does not. The facts are the same, whether or not “it pays to talk about mountains.”

Let us agree that we have the word “mountain” because it pays to have such a word. Why does it pay? Because there really are such things, and they existed before we had the word and they will continue to exist long after we have all died. To state the facts you have to have a vocabulary. But the facts you state with that vocabulary are not dependent on the existence or usefulness of the vocabulary. The existence of mountains has nothing whatever to do with whether or not it “pays to talk about mountains.” And it does not help Rorty’s case to sneer at the existence of mountains as “Reality as It Is In Itself,” because insofar as that expression is meaningful at all, it is obvious that Reality as It Is In Itself contains mountains.

I think Boghossian does a public service by pointing out the weaknesses of all of these arguments. But I fear that the real target of his book is not addressed by refuting bad arguments of the sort I have just cited. People who are convinced by social constructivism typically have a deep metaphysical vision and detailed refutations do not address that vision.

In a sense Boghossian makes it easier for himself by taking on more or less rational authors, specifically Putnam, Goodman, and to a lesser extent Rorty. Their views are reasonably easy to refute because they are, at least in the case of Putnam and Goodman, fairly clearly stated. It is much easier to refute a bad argument than to refute a truly dreadful argument. A bad argument has enough structure that you can point out its badness. But with a truly dreadful argument, you have to try to reconstruct it so that it is clear enough that you can state a refutation.

Boghossian takes bad arguments by Putnam, Goodman, and Rorty and refutes them. But what about the truly dreadful arguments in such authors as Jacques Derrida, Jean-François Lyotard, and other postmodernists that have been more influential during the last half-century? What about, for example, Derrida’s attempts to “prove” that meanings are inherently unstable and indeterminate, and that it is impossible to have any clear, determinate representations of reality? (He argues, for example, that there is no tenable distinction between writing and speech.) The atmosphere of Boghossian’s refutation is that of a Princeton seminar. And in fact Boghossian was a student of Rorty at Princeton. But he does not go into the swamp and wrestle with Derrida & Co.[^7]

Boghossian observes that we could say, with logical consistency, “according to our view” the Native Americans came by the Bering Strait, and “according to their view” they came from the center of the earth, but that this nonetheless does not solve the problem of relativism. However, it seems to me that Boghossian gives the wrong account of why it does not solve the problem. He says that it does not solve the problem for three reasons:

(A) If we relativize the claims by saying “according to our view,” we still have some nonrelative facts left over; there will still be nonrelative facts about what different communities accept or do not accept, for example, physical evidence of people crossing the Bering Strait.

(B) It is often much harder to figure out what people believe than it is to figure out what actually happened. The mental is more puzzling than the physical (this is one of his weaker arguments).

And

(C) if we get out of objection (A) by saying that there are no nonrelative facts, we get an infinite regress. Here is the regress. We start with:

(1) According to a theory we accept, they came over the Bering Strait.

But if everything has to be relativized then (1) has to be relativized, which produces:

(2) According to a theory we accept, there is a theory that we accept and according to that theory…

And so on ad infinitum.

I agree with objections (A) and (C) but I think they are symptoms of a deeper objection, which Boghossian does not make. The deep objection to relativizing is that the original claims have been abandoned and the subject has been changed. The original claims—that the ancestors of the Native Americans came via the Bering Strait, and that they came out of the center of the earth—were not about us and our theories but about what actually happened in human history regardless of anybody’s theories. Our claim is not that we hold a certain theory. Our claim is that the actual ancestors of the early Americans came via the Bering Strait, that there were actual physical movements of physical bodies through physical space. Relativizing of the sort that Boghossian considers does not solve the difficulty; it changes the subject to something irrelevant. It changes the subject from historical facts to our psychological attitudes.

This is the most important criticism of constructivism. It is of the very essence of the speech act of stating or asserting propositions of the sort we have been considering that the speech act commits you to the truth of what you say and therefore to the existence of a fact in the world corresponding to that truth. Such speech acts are made from a point of view and typically within certain sorts of ways of thinking, but the statements and assertions do not thereby become about the points of view or the ways of thinking. If you treat them as being about the point of view and way of thinking you get a different statement altogether, one that is not about the physical movements of Native Americans but about the psychology of the speakers. Boghossian is right to see that the relativization still leaves you with nonrelative facts about speakers and their attitudes and that if you keep going you get an infinite regress, but these are just symptoms of the deeper incoherence. The constructivists do not have a coherent conception of the speech act of asserting or stating.

3.

The second version of relativism Boghossian considers is about epistemic systems, that is, systems used to acquire knowledge and justify claims to knowledge. We justify our beliefs using one epistemic system but somebody might have a different epistemic system that would give different results from ours. It may look like any effort to justify ours would be circular because we would have to presuppose the validity of our system in order to try to justify it. Richard Rorty gives the example of the dispute between Cardinal Bellarmine and Galileo.[^8] Galileo claimed to have discovered, by astronomical observation through a telescope, that Copernicus was right that the earth revolved around the sun. Bellarmine claimed that he could not be right because his view ran counter to the Bible. Rorty says, astoundingly, that Bellarmine’s argument was just as good as Galileo’s. It is just that the rhetoric of “science” had not at that time been formed as part of the culture of Europe. We have now accepted the rhetoric of “science,” he writes, but it is not more objective or rational than Cardinal Bellarmine’s explicitly dogmatic Catholic views. According to Rorty, there is no fact of the matter about who was right because there are no absolute facts about what justifies what. Bellarmine and Galileo, in his view, just had different epistemic systems.

The point I believe Boghossian should have made immediately, though in the end he does get around to saying something like it, is that there are not and cannot be alternative epistemic rationalities. Bellarmine and Galileo reached different conclusions but they worked, like everybody else, within exactly the same system of rationality. Bellarmine held the false view that the Bible was a reliable astronomical authority. But that is a case of a false presupposition, not an alternative epistemic rationality.

Why can’t there be alternative and inconsistent epistemic rationalities? Consider the example of the statement that the Native Americans came by the Bering Strait. I have pointed out that anyone who makes such a statement is thereby committed to the existence of a fact. But that commitment in turn carries a commitment to being able to answer such questions as, How do you know? What is the evidence? Furthermore, only certain sorts of things can count as evidence for and against the claim. These requirements of rationality are not accretions to the original statement, but they are built into it. The requirement that claims admit of evidence and counterevidence and that only certain sorts of things count as evidence is not something added on to thought and language. It is built into the fundamental structure of thought and language.

Consider another example. I now believe my dog Gilbert is in this room. What is the evidence? I can see him. It is in the nature of the claim in question that what I see counts as evidence. Notice that, if in response to a demand for evidence, I said “1 + 1 = 2,” that would not answer the demand for evidence.

Boghossian is worried by the possibility that we might encounter an “alternative to our epistemic system…whose track record was impressive enough to make us doubt the correctness of our own system.” In such a case, he fears, we would not be able to justify our own. But what is meant by “track record”? The fact that he uses this metaphor without adequate explanation ought to worry him and us. The only “track record” that would be relevant would be a body of established knowledge. But in order to ascertain the presence of a “track record” in this sense, to ascertain the presence of a body of knowledge, we would have to use the only epistemic rationality we have, the one already built into thought and language. The hypothesis of alternative epistemic rationalities has no clear meaning. Eventually, after three difficult chapters (5, 6, and 7), Boghossian seems to come to something like this conclusion.

In the great debates of the 1960s and after, I was once asked by a student, “What is your argument for rationality?” That is an absurd question. There cannot be an argument for rationality because the whole notion of an argument presupposes rationality. Constraints of rationality are constitutive of argument itself, as they are of thought and language generally. This is not to say that there cannot be irrational thoughts and claims. There are plenty of irrationalities around. (For example, given the available evidence, it is irrational to deny that the present plant and animal species evolved from earlier forms of life. Why? Because, to put it as an understatement, the evidence is overwhelming.)

4.

The last form of relativism that Boghossian considers is the explanation of belief. Here the claim is that the explanation of why we believe what we do is never a matter of evidence or solely a matter of evidence, but involves some irrational factors, some social condition in which we find ourselves. I am puzzled why Boghossian takes this claim very seriously, not because it is obviously false, but because it does not really matter to the issue of the truth or falsity or the justification of the claims under discussion. If we have justifications for our beliefs, and if the justifications meet rational criteria, then the fact that there are all sorts of elements in our social situation that incline us to believe one thing rather than another may be of historical or psychological interest but it is really quite beside the point of the justifications and of the truth or falsity of the original claim. It is a factual question to what extent people reach their beliefs by rational appraisal of the evidence, not a question adequately settled by philosophical argument.

I think the reason that Boghossian is so concerned about this is that some who have written about the sociology of scientific knowledge think that they can explain all of our beliefs, both the true and the false, the well-supported and the unsupported, by a common pattern of sociological explanation. He cites David Bloor’s Knowledge and Social Imagery[^9] as an example, along with the works by Latour, Woolgar, and Pickering that I mentioned earlier. The writers in question adopt what Bloor calls “symmetrical” modes of explanation: they argue that true and false beliefs, as well as rational and irrational beliefs, must be explained by the same causes. One example, cited by Bloor, concerns a study involving physicists in Weimar Germany who attempted to “dispense with causality in physics.” A “symmetrical” understanding of this scientific project would argue that, while considering how the physicists thought about observed evidence, one should consider as well how they attempted “to adapt the content of their science to the values of their intellectual environment.”

Boghossian points out correctly that symmetry about truth and falsehood is quite different from symmetry about rationality and irrationality. Symmetry about truth is a possible research program in the sociology of knowledge because people typically arrive at their scientific views, both true and false, through the study of evidence; thus, in most cases at least, both true and false beliefs can be seen as arising from the same cause, evidence. Some evidence may be more revealing of truth than other evidence; nevertheless, if we put aside the use of fraud, both true and false theories have the same underlying cause: observed evidence.

But that is not the same as treating rationality and irrationality symmetrically. First, as we’ve just seen, for both true views and false views to be symmetrical, they must originate in the same cause: argument based on evidence. But all argument based on evidence assumes a common rationality. Thus, as Boghossian argues, the case for the symmetry of truth is wrong because it rests on “the falsity” of the “symmetry about rationality”; both cannot simultaneously be correct. True views and false views may be arrived at by symmetrical methods, but when those methods involve evidence, they are themselves manifestations of a common rationality and thus make impossible the symmetry, or equality, of rationality and irrationality. This is one of the best arguments in Boghossian’s book.

5.

What motivates social constructionism? After all, we pay an enormous intellectual price if we deny the objective validity of the past three and a half centuries of scientific investigation. Boghossian thinks constructionism is motivated partly by intellectual argument and partly by political correctness. In the postcolonial era, some have felt that we should not impose our conception of reality on other cultures. Why shouldn’t we, in a multicultural democracy, grant that each culture, or indeed each person, can have his or her own reality? I think in fact the antirational, antiscientific bias of current versions of relativism and constructivism is motivated by a much deeper metaphysical vision than one based on postcolonial political correctness.

What exactly is that vision? Hints of it occur in the passage on feminist epistemology that I quoted from Kathleen Lennon. It is a vision according to which all of our knowledge claims are radically contingent because of their historical and social circumstances. According to this vision, all of us think within particular sets of assumptions, and we always represent the world from a point of view, and this makes objective truth impossible. For someone who accepts this argument, the idea that there are scientific claims that are objective, universal, and established beyond a reasonable doubt seems not only inaccurate but positively oppressive. And for such people the very idea of an objectively existing, independent reality must be discredited.

On this view, if we are to be truly free, free to create a multicultural democracy, we must above all liberate ourselves from “objectivity,” “rationality,” and “science.” The motivation, in short, is more profound than Boghossian allows for, and it bears interesting affinities with earlier forms of Counter-Enlightenment Romanticism of the sort described by Isaiah Berlin in his The Roots of Romanticism.[^10]

Boghossian has written an excellent book. It is very compressed, and it is not always easy reading, but it contains relentless exposures of confusion, falsehood, and incoherence.

The post Why Should You Believe It? appeared first on The New York Review of Books.

Minding the Brain https://www.nybooks.com/articles/2006/11/02/minding-the-brain/ Thu, 02 Nov 2006
1.

After having been neglected for most of the twentieth century, the subject of consciousness has become fashionable. Amazon lists 3,865 books under “consciousness,” a number of them new releases of the last year or two. What exactly is the problem of consciousness, and why exactly is it so difficult, if not impossible, for us to agree on a solution to it? Of course, there is more than one problem, and there are many different reasons for disagreeing with proposed solutions. The hard problem of consciousness is to account for how it can exist and function in a way that is private, subjective, and qualitative, in a world that consists of public, objective, physical phenomena. How, for example, could the electrochemical activities of a kilogram and a half, about three pounds, of matter in my skull cause all of my conscious experiences? The problem of consciousness is the heart of the traditional “mind-body problem” in philosophy. What is the relation of the conscious mind to the physical brain and the rest of the body?

Before we can consider this question, we need at least a working definition of “consciousness.” Though we cannot yet give a scientifically precise, analytic definition of the word, it is not at all hard to give a common-sense definition that will help identify the issues that need to be addressed. It is important to do this because different writers use the word differently. By “consciousness,” I mean those states of sentience or feeling or awareness that begin when you wake up from a dreamless sleep and continue on throughout the day until you fall asleep again, or otherwise become unconscious. Dreams are also a form of consciousness.

Consciousness, so defined, has three remarkable characteristics. First, there is always a qualitative feel to our conscious experiences. Think of the difference between listening to music and tasting wine. Second, consciousness is always subjective in the sense that it only exists as experienced by human or animal subjects. It has a first-person mode of existence that requires some “I” that actually experiences the conscious states. And third, pathologies apart, each conscious state comes to us as part of a single, unified conscious field. So we don’t just have the taste of the wine and the sound of the music, but both of these are part of one large conscious experience. These three features are not independent. They are different aspects of the essential character of consciousness that can be accurately called qualitative subjectivity.

We can also briefly describe what we already know about conscious states and what we want a theory of consciousness to account for:

  1. Consciousness is real and ineliminable. It cannot be dismissed as some kind of an illusion, or reduced to some other phenomenon. Why not? It cannot be shown to be an illusion because if I consciously have the illusion that I am conscious, I already am conscious. Consciousness exists subjectively, in the sense that it only occurs as experienced by a human or animal subject, and therefore, it cannot be reduced to—cannot be shown to be nothing but—an objective or third-person phenomenon.
  2. Consciousness is entirely caused by brain processes. We don’t know many of the details of these processes, although neurobiologists are making much progress in tracing them. But there isn’t any real doubt that processes in our brains are causing our conscious experiences.
  3. Consciousness comes in several different forms and performs many functions. Among the most important functions are those performed by “perceptual” consciousness, the kind that gives us information about the world and enables us to coordinate perception with the actions we perform.

It might seem from these characteristics that understanding consciousness is just a matter of neurobiological research. Let the neuroscientists go to work on the brain, and find out how it causes consciousness, where exactly consciousness occurs in the brain, and how it functions causally. In the end, I think that is exactly the right approach. But there are many philosophical and conceptual obstacles along the way. It also turns out that the brain is an extremely difficult object to study.

2.

This problem, the traditional problem of the relation of conscious experiences to the physical brain, of “mind” to body, is precisely Nicholas Humphrey’s target of investigation in Seeing Red: A Study in Consciousness. I think he would agree with my definition of consciousness and with my claim that it is irreducibly subjective. But he takes exception to my claim that one of its important functions is conscious perception and he strongly disagrees with my claim that a central problem is to try to get an account of how brain processes cause conscious experiences. He, on the contrary, thinks that all perception is unconscious, and that instead of trying to find a causal explanation for consciousness we should try to find an equation: i.e., if we are going to solve the problem of the relation of the mind to the body, we have to show that conscious mental experience is identical with the content of the physical brain.

It is important to see the differences between these two approaches. On the standard account, neurobiologists are seeking the “neuronal correlate of consciousness” (NCC). The idea is that if we could first identify the NCC—the events in the brain that occur when we have subjective experiences—we could then test to see if the correlation is causal, and finally we would like to develop a theory showing how the neuronal correlates cause the conscious experiences. This research is currently widely pursued and is making some progress.[^1] Humphrey’s entire approach differs from mainstream philosophy and neuroscience. He dismisses the search for the NCC on the grounds that it “privileges neuronal events over all the other ways we might wish to describe what is going on in the brain.” For him any explanation has to be of the form mind = brain, m = b.

But now he faces a problem. We know from high school physics that in presenting an equation you have to be referring to the same dimension on both of its sides. The equation one dollar = one hundred cents can work because both sides are sums of money. But you couldn’t have one hundred cents = one month, because cents and months are in different dimensions. Mind and brain appear to be in different dimensions, because mind has qualitative subjectivity and brain does not. If you try to say, for example, that the experience of red is identical with neuron firings, the terms of the equation seem to be in different dimensions, because the conscious experience of red has the qualitative subjectivity that I described earlier, while neuron firings do not. It is a first-person phenomenon, whereas neuron firings are objective, third-person phenomena that would theoretically look the same to any observer, if they could be observed. The main aim of Humphrey’s book is to try to overcome this difficulty by redescribing both the left-hand, mind side, and the right-hand, brain side, so they come out in the same dimension. It is important to understand that many of his strange-sounding claims are motivated by the urge to get the experience of consciousness and the physical brain in the same dimension. I think the investigation proceeds from mistaken assumptions, and it is unlikely to succeed, but in the course of it he says many interesting, and indeed daring, things.

3.

Humphrey’s account of mind concentrates on visual experiences. He asks us to imagine (and in the lectures at Harvard on which the book under review is based he actually presented the scene) that we are all looking at a screen in the front of the room. A uniform color of red is projected onto the screen. How are we to describe this situation? According to contemporary scientific common sense, when we look at the red screen the reflection of light waves sets up in us a series of neuronal events beginning at the retina and ending with a conscious visual experience of red. If we assume that there are no hallucinations or pathological conditions involved, the perceiver sees, and in that way perceives, the red object by having a visual experience. The perceiver sees the object, but he does not see the visual experience of the object. He consciously sees real things in the real world and not his experiences of those things. There are not two red things in the scene but just one, the red screen.

Nicholas Humphrey agrees that there is a red object and a perceiver, and that light waves from the object stimulate the perceiver, but beyond that he disagrees with just about everything in the account I have just presented. He says that what I call the visual experience is really a “sensation” experienced in the eye and that the sensation is red, just as the screen is red. So there are two red things in the scene: the red object and the red sensation.

His account of sensation and perception contains the following striking claims: perception and sensation are totally independent; all consciousness is sensation; perception is never conscious; and all sensation is really action. The arguments for these claims are complicated and I will not try to summarize all of them; but what follows gives the flavor of his reasoning.

He writes, “I think the weight of evidence really does suggest that sensation and perception, although they are triggered by the same event, are essentially independent takes on this event, occurring not in series but in parallel, and only interacting, if they ever do, much further down the line.” And later he says that a visual sensation “can be put to several uses…, but the one thing it is not used for is as the raw material for the perception of the world. Perception has its own quite separate channel….” He tells us that we have the illusion that sensation and perception are linked because they occur at the same time.[^2]

Furthermore, sensations are really actions. We should more properly describe seeing red as “redding.” He draws the analogy between having a red sensation, on the one hand, and waving your hand or shouting, on the other; according to him all three are actions. He says: “Thus, when S has the red sensation, his impression is simply that ‘I’m redding, now, in this part of my visual field of my eyes.'”

This argument raises many questions. The last time you saw something red, and paid attention to your experience, did you have the “impression,” that is, did you think to yourself, “I’m performing the action of redding now, and I am doing it in this part of my visual field of my eyes”? I have to confess I have never had that “impression” when I saw a red object. I thought something like, “I am now consciously perceiving a red object.” Not so, says Humphrey, because we never consciously perceive anything. Perception is not only done by a different channel from that which produces consciousness, but more importantly, perception is unconscious. In Humphrey’s view, the sensation channel is conscious; the perception channel is totally unconscious. Indeed all consciousness consists of sensations. Humphrey thinks that the only form our consciousness can take is sensation, which for him includes mental imagery and dreams.

According to Humphrey, the audience, whom he told to look at the red screen, did not consciously perceive the screen at all. They had conscious red sensations, but these were not sensations of the screen. As he tells us, the “red” sensations experienced by his audience were directed at something entirely within their bodies; the sensations were of events occurring in their eyes.

I said the arguments for these remarkable views were complex, but the heart of Humphrey’s hypothesis concerns the distinction between sensation and perception. He has several arguments to support this, but the most important is about “blindsight.” There are patients whose sight is impaired by brain damage in such a way that though they can see most of the visual field, they are blind in one part. For example, in a famous case a patient D.B. was blind in the lower left quadrant of his visual field.[^3] (If the part of the world you can see at any moment is like a round clock face, D.B. was blind between roughly six o’clock and nine o’clock.) But in that quadrant D.B. could, to his surprise, detect the presence of certain sorts of stimuli. In one of many experiments he correctly “guessed” the presence of an X or an O in the blind part of his visual field. He could even guess the presence of colors in the blind area. Furthermore, Humphrey once had an experimental cat, Helen, who was totally blind because Visual Area 1 of her visual cortex had been removed, but she could still make her way around the room and even pick up crumbs off the floor.

So there are cases where it seems that some kind of visual perception takes place without conscious visual experience; the perception exists without the “sensation” of seeing. Another example he mentions—a familiar one—is subliminal perception whereby an advertiser gets a message across so rapidly that we are unconscious of seeing it on the television screen. And, Humphrey points out, just as there can be perception without sensations, there can be changes in sensation without corresponding changes in perception, as when a person is under the influence of LSD and other hallucinogens. He may have the sensation that a chair has become gigantic while still perceiving it is a chair. The general form of Humphrey’s argument is that there are various instances in which the conscious visual experience and the unconscious perception come apart. I have not described all of them here, but one of the best parts of his book is the description of these cases.

What is one to say about such arguments? The obvious point is that they are too exceptional to support Humphrey's spectacular conclusions. Because some perceptions can take place without the subject's conscious awareness of them, we are supposed to conclude that all perception is unconscious. The fallacy is made worse by the fact that the unconscious perceptions that provide his evidence for the unconsciousness of all perception give very imperfect and inadequate information about the environment, unlike the rich content provided by conscious perception. Yes, the blindsight patient can make guesses that are correct a surprising amount of the time, while firmly insisting that he has no conscious visual experience. But no one suggests that if the patient had all of Visual Area 1 removed and was totally blind, he would still be able to drive a car across the country, or read a book, on the grounds that only his sensations had been impaired, not his perceptions. The standard, and I believe plausible, account of blindsight is that it shows that there is more than one perceptual visual pathway in the brain, perhaps several, and not all of them are conscious. (Experiments such as the one illustrated above are trying to explore alternative pathways.) But this is different from saying that none of them is conscious.

What about the other claims? I could find no support at all for the claim that all consciousness is sensation. It just seems false. I can, for example, wake up in bed in a completely dark room with no sensations at all, save perhaps the feeling of the weight of my body on the bed and the weight of the covers on me. All the same I can still be 100 percent conscious and alert, and my consciousness is not confined to the feeling of the bed and the covers. I can, for example, be thinking about a philosophical problem. Also, the conscious experience of physical action is quite different from the experience of seeing. Raising your arm, for example, is a different sort of conscious experience from watching someone else lift your arm.

How about the claim that visual experience is really a form of action, that when I see something red I am performing the action of “redding”? I found no argument to support it, and again it seems false. Normally, when I perform expressive actions such as waving my hand or shouting (two of Humphrey’s examples), it is up to me. But when I stare at a red screen it is not up to me whether I see red.

All these remarkable views are motivated by Humphrey’s attempt to get the right- and the left-hand side of the equation mind = brain into the same dimension. He identifies what he thinks are five defining features of qualitative sensations. First, they are always owned, that is, they are always somebody’s sensation. Second, they have a bodily location, because all sensations occur at body locations. Third, they occur in the temporal present. Fourth, they come to us in a particular sensory mode, hearing as opposed to seeing, for example. And, fifth, they disclose all these features about themselves. They have what he calls a kind of “present-to-themselves” character.

He points out that simple expressive actions such as waving your arm or shouting also have these five features, and this leads him to treat sensations as a species of actions. And this in turn, he thinks, makes the mind part of his equation more like the brain part. Humphrey’s idea is that if all perceptual experiences, and indeed all conscious experiences, are sensations, and all sensations are actions, we are closer to having something mental that could be identical with something in the physical brain.

Are these five features sufficient to give us qualitative subjectivity? In a sense, they assume there is qualitative subjectivity, because we are being asked to construe sensations that we already know have qualitative subjectivity, such as seeing a red screen, as forms of "expressive actions."

4.

But how about the brain side? How do we get qualitative subjectivity on the brain side as a neurobiological phenomenon? To explain this Humphrey creates a speculative account of our possible evolutionary history. The most primitive forms of life, Humphrey writes, have very basic kinds of reflexive responses to external stimuli. An amoebalike creature, for example, might respond to light or to external pressure by moving in a certain way. “There is no reason to suppose the animal is in any way mentally aware of what is happening on any level,” he writes. But in more advanced forms of life, he continues, “the time comes when it will indeed be advantageous for [an animal] to have some kind of inner knowledge of what is affecting it, which it can begin to use as a basis for more sophisticated planning and decision making.” To arrive at this kind of knowledge, Humphrey speculates, the animal “needs the capacity to form a mental representation of the stimulation at the surface of its body.” The best way to do this, he argues finally, is “for the animal to discover what is happening and even how it feels about it by the simple trick of monitoring what it itself is doing about it.”

This account is not easy to follow, so once again, I will contrast it with an account I find more plausible. On a plausible view we might think that conscious sensation evolved, among other reasons, to enable animals to perceive and then respond to outside stimuli. For example, an organism detects some toxic substance, so it responds by moving away. To use Humphrey’s terms, but not his argument, in response to the stimulus, the organism sends out a “command signal” to respond at the “site of the stimulation,” i.e., at the place where the toxic substance came into contact with the organism. For example, the “command signal” might be to move away from the toxic substance. The coordination of stimulus and response will be much more powerful if the animal is conscious of what it is perceiving.

This emphatically is not Humphrey’s view. On his evolutionary account, this conception of stimulus and response relations is correct only for very simple unconscious “amoebalike” animals. He argues that for consciousness to evolve, the command signals in the brain have to be directed not at the external stimulus, or even at the site of the stimulation, but at the inner mental representation of the stimulus. In other words, Humphrey suggests that in more evolved forms of animals, the response gets targeted at the incoming sensory pathway itself, and finally becomes internal to the brain. The response in his view becomes “privatized” within the brain as a conscious sensation. So according to his account, this is how we get to be conscious—not by consciously perceiving anything, but by “monitoring” our own internal responses to external stimuli.

The upshot is that we now have two independent processing channels, an unconscious perceptual channel that brings in information about the outside world—the kind of channel that enables certain "blindsight" patients to somehow perceive something—and a conscious sensation channel that monitors the command signals that are its own responses. But these commands are only "virtual" or "as-if" commands. They do not have any real effects on our behavior. He summarizes this as follows:

Over evolutionary time, there is a slow but remarkable change. What happens is that the whole sensory activity gets “privatized”: the command signals for sensory responses get short-circuited before they reach the body surface, so that instead of reaching all the way out to the peripheral site of stimulation they now reach only to points more and more central on the incoming sensory pathways, until eventually the whole process becomes closed off from the outside world in an internal loop within the brain.

Humphrey presents no evidence for any of this. As he acknowledges, he is presenting a speculative account of evolutionary biology. It is not easy to see how it is supposed to work for seeing the red screen, but this, I think, is what he believes: your perception of the screen is totally unconscious; you do, however, perform the action of “redding” in your eye. This action is the red sensation. It is a case of monitoring your response to an external stimulus, but not a case of perceiving the stimulus.

He agrees that the evolutionary account is not by itself sufficient to explain consciousness. But he thinks that with certain crucial additions we can account for qualitative subjectivity: “If this X factor [his expression for qualitative subjectivity] has to do with anything, it has to do with time.” He calls this aspect of consciousness the “extended present.” And he says we have no verbal way of describing the extended present. But he thinks we can understand it if we see how it resembles a work of art. He claims that we will get a deeper understanding of consciousness if we see the “analogy between a work of art and ‘a work of sensation.'” He thinks that studying Impressionism, Abstract Expressionism, and other kinds of art will enable us to explain what is so special about our conscious experience. For example, Humphrey writes, the painter Bridget Riley, a leading op artist,

explicitly acknowledges the “dual province of the senses,” making central to her vision the distinction between sensation and perception…. Riley is not interested in representing the outside world as she perceives it, as an impersonal fact. She wants only to show how it affects her—her eyes, her body.

Analogously, Humphrey continues, Monet, in his approach to painting a century earlier, “set out almost obsessively to capture the peculiar quality of present-tense experience,” or what he called “instantaneity.” For Humphrey, this is an example of how, in depicting a visual experience at a particular moment in time, some artists give precedence to describing how it feels to have that experience, rather than to describing the external realities that produced that experience. The conclusions that Humphrey draws from these observations, however, are difficult to follow:

Suppose, as an exercise in metaphor, we put a painting by Riley or Monet on the right-hand side of the mind-brain identity equation in place of the brain, will the painter’s tricks for depicting instantaneity genuinely help?

I think they not only help, they go right to the heart of it.

He thinks we have almost explained qualitative subjectivity, but we need one last thing, and that is feedback. To get consciousness we need a feedback mechanism whereby the “command signal” responds to the incoming stimulus by actually modifying the incoming sensory pathways in the brain. But he thinks the story that he told us already, about the primitive reaction of organisms to stimulus, is sufficient to explain the feedback mechanism. He writes, “Indeed, feedback has been a feature of sensation all along. Ever since the days when sensory responses were actual wriggles at the body surface, these responses have been having feedback effects by modifying the very stimulation to which they are a response.” So sustained feedback, together with a “very fine tuning,” by which he means “a precise matching of output with input, so as to provide exactly the right degree of reinforcement of the signal in the loop,” will produce consciousness. And the payoff, he tells us enthusiastically, the evolutionary advantage of all of this, is that it gives us a sense of “the Self.” It turns out that this is really the primary function of consciousness, not to give us information about the world, which comes from unconscious perception, but to give us a sense of the Self. And this also triumphantly answers the question why consciousness matters. “Consciousness matters,” he tells us, “because it is its function to matter. It has been designed to create in human beings a Self whose life is worth pursuing.”

He has much more to tell us about this, especially about the relation of the processes to the passage of time, the evolution of the self, and our relations to other people. But at the end of his argument we have to ask: Does he give us the tools to solve the mind-body problem? Does he even take a step along the road toward explaining the relations between qualitative subjectivity and the brain? I think the answer is no. Here is why.

Humphrey wants to get consciousness on both sides of the equation. He gets it for free on the left-hand side by treating conscious sensations as a species of conscious actions. How does he get it on the right-hand, or brain, side of the equation? The whole point of all his discussion about the evolution of responses to stimuli, and the analogies with impressionist and op art, was to try to show us how qualitative subjectivity could be "emerging" as a feature of the brain. But since we are talking about the brain side of the equation, we have to construe all of those features as third-person, objective, biological phenomena within the brain. Then the question arises: How are they supposed to be "delivering" qualitative subjectivity? Suppose we make a list of the features he gives us: command signals, privatization, internal loops, feedback, self-monitoring, and matching input and output; and then suppose we include in this list what he calls the metaphorical features: temporal doubling, rhyme, and self-similarity (some of these notions are not very clear). And suppose we accept his speculative evolutionary account. If we are to understand these as neurobiological features of the brain, the right-hand side of the equation, they all have to be interpreted as descriptions of the brain in its third-person, objective, physical reality. But the mind-body problem we began with is staring us in the face once more. How do all of these features of the brain deliver (add up to, cause, give rise to, produce) consciousness, with its qualitative subjectivity?

To see the failure of the account, try it out with the example of seeing the red screen. Suppose his whole account so far is correct, that sensation is really action, that feedback loops evolved in the brain in the way he says. Here is the resultant equation:

(m) I am now performing the qualitative, subjective action of redding in my eyeball =

(b) There is in me a feedback loop that monitors my reaction to the optical stimulus. It is a lot like certain works of art and it has a precise matching of input to output.

No matter how much he adds to the brain side it still does not add up to consciousness, to qualitative subjectivity. In fact, the only way we could make it work is to suppose that the elements on the right-hand side are sufficient to cause the qualitative subjectivity on the left-hand side. But then the explanation is being given by the specification of the causes, not by the equation.

The enterprise was bound to fail because the equation does not solve the problem; it presupposes that the problem has already been solved. The problem is to explain the relation of consciousness to brain processes, specifically to explain how brain processes cause (give rise to, produce, bring about) qualitative subjectivity. We already have qualitative subjectivity on the left-hand, mind, side of the equation, by definition. The question then is: How does it get into the right-hand or brain side? But that is precisely the mind-body problem, the problem that the equation was supposed to solve. Humphrey does not address that question directly; rather, he changes the subject. Our question is: How do objective third-person brain processes right here and now (as well as in earlier evolutionary times) cause our conscious states? What specific parts of brain anatomy do it and how do they work? His question is: Assuming that perception is unconscious, how might conscious sensations have evolved and what functions would they perform? His answer, in brief summary, is that they evolved by monitoring our responses to input stimuli and they function to give us a sense of “the Self.” I think he is wrong to separate perception from consciousness; all the same, some evolutionary story about consciousness must be right. But whatever evolutionary story may be proposed is an answer to a different question from the causal question. The only part of his account that even hints at an answer to the causal question is the discussion of feedback mechanisms. But he does not tell us how we get from the feedback mechanisms to qualitative subjectivity.

5.

What has gone wrong? It seems to me Humphrey makes a fundamental error from the beginning. He thinks that the solution to our problem has to be in the form of an equation, mind = brain, rather than in a causal account. Why should we make this assumption? There are lots of explanations in science and philosophy that are not in the form of equations. In fact, equations are rather rare in biology. Think of the germ theory of disease or the theory of evolution. What we are interested in, in these cases, are causal mechanisms, not equations. What causes disease symptoms? What is the causal account of the evolution of human and animal species from simpler forms of life? And now, what causes consciousness?

Some traditional philosophical problems, though unfortunately not very many, can eventually receive a scientific solution. This actually happened with the problem of what constitutes life. We cannot today recover the passions with which mechanists and vitalists debated whether a "mechanical" account of life could be given. The point is not so much that the mechanists won and the vitalists lost, but that we got a much richer conception of the mechanisms. I think we are in a similar situation today with the problem of consciousness. It will, I predict, eventually receive a scientific solution. But like other scientific solutions in biology, it will have to give us a causal account. It will have to explain how brain processes cause conscious experiences, and this may well require a much richer conception of brain functioning than we now have.

The post Minding the Brain appeared first on The New York Review of Books.
