artificial intelligence and problems of collective action

Although I have not studied the serious scholarship on AI, I often see grandiose claims made about its impact in the near future. Intelligent machines will solve our deepest problems, such as poverty and climate change, or they will put us all out of work and become our robot overlords. I wonder whether these predictions ignore the problems of collective action that already bedevil us as human beings.

After all, there are already about 7.5 billion human brains on earth, roughly eight times more than there were in 1800. Arguably, we are better off than we were then–but not clearly and straightforwardly so. If we ask why an eightfold increase in the total cognitive capacity of the species has not improved our condition enormously, the explanations are pretty obvious.

Even when people agree on goals, it is challenging to coordinate their behavior so that they pursue those ends efficiently. And even when some people manage to work together toward a shared goal, they have physical needs and limitations. (Using brains requires food and water; implementing any brain’s ideas by taking physical action requires additional resources.) To make matters worse, human beings often have legitimate but conflicting interests, like the need to gain sustenance from the same land. And some human beings have downright harmful goals, like dominating or spiting others.

One can see how artificial intelligence might mitigate some of these drawbacks. Imagine a single computer with computational power equivalent to one million human beings. It will be much more coordinated than those people. It will be able to aggregate and apply information more efficiently. It can also be programmed to have consistent and, indeed, desirable goals–and it will plug away at its goals for as long as it receives the physical inputs it needs. For instance, it could clean up pollution 24/7 instead of stopping for self-interested purposes, like sleeping.

However, it still has physical needs and limitations. It might use fuel and other inputs more efficiently than a human being does, but that depends on how good the human’s tools are. A person with a bulldozer can move more garbage than a clever little robot that works 24/7–and both of them need a place to put the garbage. (Intelligence cannot negate physical limits.)

Besides, a computer is designed by people–and probably by people organized into corporations or states. As such, AI is likely to be designed for conflicting and sometimes discreditable goals, including killing other people. At best, it will be hard to coordinate the activities of many different artificially intelligent systems.

Meanwhile, people already coordinate their behavior in quite impressive ways. A city receives roughly the amount of bread it needs every day because thousands of producers and vendors coordinate their behavior through prices. An international scientific discipline makes cumulative progress because thousands of scientists coordinate their behavior through peer review and citation networks. And the English language develops new vocabulary for describing new phenomena as millions of people communicate. Thus the coordination attained by a machine with a lot of computational power should be compared to the coordination accomplished by human beings in a market, a discipline, or a language–which is impressive.

One claim made about AI is that machines will start to refine and improve their own hardware and software, thus achieving geometric growth in computational power. But human beings already do this. Although we cannot substantially redesign our individual brains, we can individually learn. More than that, we can redesign our systems for coordinating cognition. Many people are busy making markets, disciplines, languages, and other emergent human systems work better. That is already the kind of continuous self-engineering that some people expect AI to accomplish for the first time.

It is of course possible to imagine that an incredibly intelligent machine will identify solutions that simply elude us as human beings. For instance, it will negate the physical limitations of the carbon cycle by discovering whole new processes. But that is an empty supposition, like imagining that regular old science will one day discover solutions that we cannot envision today. That is probably true–it has happened many times before–but it is unhelpful in the present. Besides, both people and AI may create more problems than they solve.

See also: the progress of science; John Searle explains why computers will not become our overlords.