Category Archives: Artificial intelligence

the difference between human and artificial intelligence: relationships

A large language model (LLM) like ChatGPT works by identifying trends and patterns in huge bodies of text previously generated by human beings.

For instance, we are currently staying in Cornwall. If I ask ChatGPT what I should see around here, it suggests St Ives, Land’s End, St Michael’s Mount, and seven other highlights. It derives these ideas from frequent mentions in relevant texts. The phrases “Cornwall,” “recommended” (or synonyms thereof), “St Ives,” “charming,” “art scene,” and “cobbled streets” probably occur frequently in close proximity in that body of text, which is why ChatGPT uses them to construct a sentence for my edification.
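To make the idea of “frequent mentions in close proximity” concrete, here is a minimal toy sketch in Python. It is not how ChatGPT is actually implemented; it only illustrates the co-occurrence intuition described above, and the tiny corpus, the place names, and the cue words are all invented for the example.

```python
# Toy illustration of the co-occurrence idea -- not ChatGPT's actual
# architecture. The corpus and word lists below are invented.
from collections import Counter

corpus = [
    "Visitors to Cornwall often recommend St Ives for its charming art scene.",
    "St Ives has cobbled streets and a celebrated art scene.",
    "Land's End is a recommended stop on any Cornwall itinerary.",
    "St Michael's Mount is among the most recommended sights in Cornwall.",
]

places = ["St Ives", "Land's End", "St Michael's Mount"]
cue_words = {"recommend", "recommended", "charming", "celebrated"}

# Count how often each place appears in a sentence that also contains
# a recommendation cue, then rank the places by that count.
scores = Counter()
for sentence in corpus:
    lowered = sentence.lower()
    if any(cue in lowered for cue in cue_words):
        for place in places:
            if place.lower() in lowered:
                scores[place] += 1

print(scores.most_common())  # e.g. [('St Ives', 2), ("Land's End", 1), ...]
```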

We human beings behave in a somewhat similar way. We also listen to or read a lot of human-generated text, look for trends and patterns in it, and repeat what we glean. But if that is what it means to think, then an LLM has clear advantages over us. A computer can scan much more language than we can and uses statistics rigorously. Our generalizations suffer from notorious biases: we are more likely to recall ideas we have seen most recently, those that are most upsetting, those that confirm our prior assumptions, and so on. That is why we have been using artificial means to improve our statistical inferences ever since we started recording possessions and tallying them by category thousands of years ago.

But we also think in other ways. Specifically, as intensely social and judgmental primates, we frequently scan our environments for fellow human beings whom we can trust in specific domains. A lot of what we believe comes from what a relatively small number of trusted sources have simply told us.

In fact, to choose what to see in Cornwall, I looked at the recommendations in The Lonely Planet and Rough Guide. I have come to trust those sources over the years–not for every kind of guidance (they are not deeply scholarly), but for suggestions about what to see and how to get to those places. Indeed, both publications offer lists of Cornwall’s highlights that resemble ChatGPT’s.

How did these publishers obtain their knowledge? First, they hired individuals whom they trusted to write about specific places. These authors had relevant bodily experience. They knew what it feels like to walk along a cliff in Cornwall. That kind of knowledge is impossible for a computer. But these authors didn’t randomly walk around the county, recording their level of enjoyment and reporting the places with the highest scores. Even if they had done that, the sites they would have enjoyed most would have been the ones that they had previously learned to understand and value. They were qualified as authors because they had learned from other people: artists, writers, and local informants on the ground. Thus, by reading their lists of recommendations, I gain the benefit of a chain of interpersonal relationships: trusted individuals who have shared specific advice with other individuals, ending with the guidebook authors whom I have chosen to consult.

In our first two decades of life, we manage to learn enough that we can go from not being able to speak at all to writing books about Cornwall or helping to build LLMs. Notably, we do not accomplish all this learning by storing billions of words in our memories so that we can analyze the corpus for patterns. Rather, we have specific teachers, living or dead.

This method for learning and thinking has drawbacks. For instance, consider the world’s five biggest religions. You probably think that either four or five of them are wrong about some of their core beliefs, which means that you see many billions of human beings as misguided about ideas that they would call very important. To explain why they are wrong, from an outsider’s perspective, you might cite their mistaken faith in a few deeply trusted sources. In your opinion, they would be better off not trusting their scriptures, their clergy, or the people, such as their parents, who told them what to believe.

(Or perhaps you think that everyone sees the same truth in their own way. That’s a benign attitude and perhaps the best one to hold, but it’s incompatible with what billions of people think about the status of their own beliefs.)

Our tendency to believe select people may be an excellent characteristic, since the meaning of life is more about caring for specific other humans than about obtaining accurate information. But we do benefit from knowing truths, and our reliance on fallible human sources is a source of error. LLMs cannot fully avoid that problem, however, because they use text generated by people who have interests and commitments.

If I ask ChatGPT “Who is Jesus Christ?” I get a response that draws exclusively from normative Christianity but hedges it with this opening language: “Jesus Christ is a central figure in Christianity. He is believed to be … According to Christian belief. …” I suspect that ChatGPT’s answers about religious topics have been hard-coded to include this kind of disclaimer and to exclude skeptical views. Otherwise, a statistical analysis of text about Jesus might present the Christian view as true or else incorporate frequent critiques of Christianity, either of which would offend some readers.

In contrast, my query about Cornwall yields confident and unchallenged assessments, starting with this: “Cornwall is a beautiful region located in southwestern England, known for its stunning coastline, picturesque villages, and rich cultural heritage.” This result could be prefaced with a disclaimer, e.g., “According to many English people and Anglophiles who choose to write about the region, Cornwall is …” A ChatGPT result is always a summary of what a biased sample of people have thought, because choosing to write about something makes you unusual.

For human beings who want to learn the truth, having new tools that are especially good at scanning large bodies of text for statistical patterns should prove useful. (Those who benefit will probably include people who have selfish or even downright malicious goals.) But we have already learned a fantastic amount without LLMs. The secret of our success is that our brains have always been networked, even when we have lived in small groups of hunter-gatherers. We intentionally pass ideas to other people and are often pretty good at deciding whom to believe about what.

Moreover, we have invented incredibly complex and powerful techniques for improving the ways that many brains are connected. Posing a question to someone you know is helpful, but attending a school, reading an assigned book, finding other books in the library, reading books translated from other languages, reading books that summarize previous books, reading those summaries on your phone–these and many other techniques dramatically extend our reach. Prices send signals about supply and demand; peer review favors more reliable findings; judicial decisions allow precedents to accumulate; scientific instruments extend our senses. These are not natural phenomena; we have invented them.

Seen in that context, LLMs are the latest in a long line of inventions that help human beings share what they know with each other, both for better and for worse.

See also: the design choice to make ChatGPT sound like a human; artificial intelligence and problems of collective action; how intuitions relate to reasons: a social approach; the progress of science.

the design choice to make ChatGPT sound like a human

Elizabeth Weil provides a valuable profile of the linguist Emily M. Bender, headlined, “You Are Not a Parrot, and a chatbot is not a human. And a linguist named … Bender is very worried what will happen when we forget this.”

This article alerted me (belatedly, I’m sure) to the choice involved in making artificial intelligence applications mimic human beings and speak to us in the first-person singular.

For instance, since I’m living temporarily in Andalusia, I asked ChatGPT whether I should visit Granada, Spain.

The first sentence of its reply (repeated verbatim when I tried again) was a disclaimer: “As an AI language model, I cannot make decisions for you, but I can provide you with information that may help you decide if Granada, Spain is a destination you would like to visit.”

On one hand, this sentence discloses that the bot isn’t a person. On the other hand, it says, “I can provide …”, which sure sounds like a person.

Then ChatGPT offers a few paragraphs that always seem to include the same main points, conveyed in evaluative sentences like these: “Granada is a beautiful city located in the southern region of Spain, known for its rich history, culture, and stunning architecture. It is home to the world-famous Alhambra Palace, a UNESCO World Heritage site and one of the most visited attractions in Spain. The city is also known for its vibrant nightlife, delicious cuisine, and friendly locals.”

My initial amazement at ChatGPT is wearing off, but the technology remains uncanny. And yet, would it look less impressive if it gave more straightforward output? For instance, imagine if I asked whether I should visit Granada, and it replied:

The computer has statistically analyzed a vast body of text produced by human beings and has discerned several patterns. First, when human beings discuss whether to visit a location or recommend doing so, they frequently itemize activities that visitors do there, often under the categories of food, recreation, and sightseeing. Second, many texts that include the words “Granada, Spain” also use positive adjectives in close proximity to words about food, sights, and outdoor activities. Specifically, many texts mention the word “Alhambra” in proximity to the phrases “UNESCO heritage site” and “world-famous,” paired with positive adjectives.

This would be an impressive achievement (and potentially useful), but it would not suggest that the computer likes Granada, wants to help me, or knows any friendly locals. It would be clear that people experience and judge, and that ChatGPT statistically models texts.

We human beings also draw statistical inferences from what other people say, and perhaps we even enjoy the Alhambra because human beings have told us that we should. (See “the sublime and other people.”) But I really did see a peacock strutting past palms and reflecting pools in the Carmen de los Martires this morning, whereas ChatGPT will never see anything. Why try to confuse me about the difference?

See also: artificial intelligence and problems of collective action

artificial intelligence and problems of collective action

Although I have not studied the serious scholarship on AI, I often see grandiose claims made about its impact in the near future. Intelligent machines will solve our deepest problems, such as poverty and climate change, or they will put us all out of work and become our robot overlords. I wonder whether these predictions ignore the problems of collective action that already bedevil us as human beings.

After all, there are already about 7.5 billion human brains on earth, roughly seven or eight times as many as there were in 1800. Arguably, we are better off than we were then–but not clearly and straightforwardly so. If we ask why this severalfold increase in the total cognitive capacity of the species has not improved our condition enormously, the explanations are pretty obvious.

Even when people agree on goals, it is challenging to coordinate their behavior so that they pursue those ends efficiently. And even when some people manage to work together toward a shared goal, they have physical needs and limitations. (Using brains requires food and water; implementing any brain’s ideas by taking physical action requires additional resources.) To make matters worse, human beings often have legitimate but conflicting interests, like the need to gain sustenance from the same land. And some human beings have downright harmful goals, like dominating or spiting others.

One can see how artificial intelligence might mitigate some of these drawbacks. Imagine a single computer with computational power equivalent to one million human beings. It will be much more coordinated than those people. It will be able to aggregate and apply information more efficiently. It can also be programmed to have consistent and, indeed, desirable goals–and it will plug away at its goals for as long as it receives the physical inputs it needs. For instance, it could clean up pollution 24/7 instead of stopping for self-interested purposes, like sleeping.

However, it still has physical needs and limitations. It might use fuel and other inputs more efficiently than a human being does, but that depends on how good the human’s tools are. A person with a bulldozer can move more garbage than a clever little robot that works 24/7–and both of them need a place to put the garbage. (Intelligence cannot negate physical limits.)

Besides, a computer is designed by people–and probably by individuals arrayed as corporations or states. As such, AI is likely to be designed for conflicting and sometimes discreditable goals, including killing other people. At best, it will be hard to coordinate the activities of many different artificially intelligent systems.

Meanwhile, people already coordinate their behavior in quite impressive ways. A city receives roughly the amount of bread it needs every day because thousands of producers and vendors coordinate their behavior through prices. An international scientific discipline makes cumulative progress because thousands of scientists coordinate their behavior through peer-review and citation networks. And the English language develops new vocabulary for describing new phenomena as millions of people communicate. Thus the coordination attained by a machine with a lot of computational power should be compared to the coordination accomplished by human beings in a market, a discipline, or a language–which is impressive.

One claim made about AI is that machines will start to refine and improve their own hardware and software, thus achieving geometric growth in computational power. But human beings already do this. Although we cannot substantially redesign our individual brains, we can individually learn. More than that, we can redesign our systems for coordinating cognition. Many people are busy making markets, disciplines, languages, and other emergent human systems work better. That is already the kind of continuous self-engineering that some people expect AI to accomplish for the first time.

It is of course possible to imagine that an incredibly intelligent machine will identify solutions that simply elude us as human beings. For instance, it will negate the physical limitations of the carbon cycle by discovering whole new processes. But that is an empty supposition, like imagining that regular old science will one day discover solutions that we cannot envision today. That is probably true–it has happened many times before–but it is unhelpful in the present. Besides, both people and AI may create more problems than they solve.

See also: the progress of science; John Searle explains why computers will not become our overlords.

against artificial intelligence

I have lost the reference, but sometime within the last 72 hours, I read a quote by an official of the Defense Advanced Research Projects Agency (DARPA), the agency that helped launch the Internet and recently got into trouble for creating a "futures market" in terrorism. This official bemoaned the stupidity of his laptop, which doesn’t know what he wants it to do; he called for much more public investment in artificial intelligence (AI).

I have an interesting colleague in computer science, Ben Shneiderman, who strongly criticizes AI research. His argument is not that the machines will take over the world and make us do their will. Rather, he argues that AI tends to make machines less useful, because they become unpredictable. When, for example, Microsoft Word tries to anticipate my desires by suddenly numbering or bulleting my paragraphs, that can be convenient—but it can also be a big nuisance.

Shneiderman argues that computers are best understood as tools; and a good tool is easy to understand and highly predictable. It lets us do what we want. All the revolutionary computer technologies have been very tool-like, with no AI features. (Think of email, word processing, and spreadsheets.) Meanwhile, untold billions of dollars have been poured into AI, with very modest practical payoffs.