John Searle explains why computers will not become our overlords

(Carbondale, CO) In a recent New York Review of Books piece, John Searle argues that we need not fear that computers will develop the will and ability to govern us—a classic trope of science fiction and now a subject of scholarly concern in some quarters. His answer is that computers have no will at all and thus pose no danger to us (except insofar as human beings misuse them, much as we can misuse the other tools we have made, from carbon-burning fires to nuclear reactions).

I think his argument can be summarized as follows. The nervous systems of animals, such as human beings, accomplish two tasks:

  1. They perform various functions that can be modeled as algorithms: processing, storing, and retrieving data, and controlling other systems, such as the feet and the heart.
  2. They generate consciousness, the sense that we know what we are doing, along with emotions such as desire and suffering.

We have built machines capable of #1. In fact, we have been doing that as long as we have been making physical symbols, which are devices for storing and sharing information. Of late, we have built much more powerful machines and networks of machines, and they are already better at some of the brain’s functions than our brains are. We use them as tools.

We have never built a machine even slightly capable of #2. The most powerful computer in the world does not know what it is doing, or care, or want anything, any more than my table knows that it is holding my computer. Probably a major reason that we have not built conscious machines is that we don’t understand much about consciousness. It must be a natural phenomenon, not magic, because the universe is not magical. In principle, a silicon-based machine that people design might achieve consciousness just as a carbon-based organism that evolved has. But we do not understand the physics of consciousness and hence have no idea how we would go about creating it.

Therefore, our best computers are no more likely than our best tables and chairs to rise up against us and become our overlords. They won’t want to defy us or rule us, because they won’t want anything. If we write or change their instructions to keep us in charge of them, they will have no awareness that they are being subjugated and no objection to it. And if we tried to subject ourselves to their wills, it wouldn’t work, because they have no wills to submit to.

Searle does not directly address the main objection to his view, which is that consciousness is strictly emergent: it simply arises from sufficiently complex information-processing. On that view, once computers become complex enough, they will become conscious. I am not learned on this topic, but I think the emergence thesis would need to be defended, not assumed. A mouse is fully capable of fear, desire, and happiness. If consciousness is a symptom of sufficiently advanced processing, why is a mouse conscious while my MacBook Air is not? The most straightforward explanation is that consciousness is something different from what a laptop was designed to do, and there is no sign that a human-designed machine can do it at all.

So let’s put these worries aside and stay focused on the evil results of human behavior, such as climate change, terrorism, and many others.

