the design choice to make ChatGPT sound like a human

Elizabeth Weil provides a valuable profile of the linguist Emily M. Bender, headlined, “You Are Not a Parrot. And a chatbot is not a human. And a linguist named … Bender is very worried what will happen when we forget this.”

This article alerted me (belatedly, I’m sure) to the choice involved in making artificial intelligence applications mimic human beings and speak to us in the first-person singular.

For instance, since I’m living temporarily in Andalusia, I asked ChatGPT whether I should visit Granada, Spain.

The first sentence of its reply (repeated verbatim when I tried again) was a disclaimer: “As an AI language model, I cannot make decisions for you, but I can provide you with information that may help you decide if Granada, Spain is a destination you would like to visit.”

On one hand, this sentence discloses that the bot isn’t a person. On the other hand, it says, “I can provide …”, which sure sounds like a person.

Then ChatGPT offers a few paragraphs that always seem to include the same main points, conveyed in evaluative sentences like these: “Granada is a beautiful city located in the southern region of Spain, known for its rich history, culture, and stunning architecture. It is home to the world-famous Alhambra Palace, a UNESCO World Heritage site and one of the most visited attractions in Spain. The city is also known for its vibrant nightlife, delicious cuisine, and friendly locals.”

My initial amazement at ChatGPT is wearing off, but the technology remains uncanny. And yet, would it look less impressive if it gave more straightforward output? For instance, imagine if I asked whether I should visit Granada, and it replied:

The computer has statistically analyzed a vast body of text produced by human beings and has discerned several patterns. First, when human beings discuss whether to visit a location or recommend doing so, they frequently itemize activities that visitors do there, often under the categories of food, recreation, and sightseeing. Second, many texts that include the words “Granada, Spain” also use positive adjectives in close proximity to words about food, sights, and outdoor activities. Specifically, many texts mention the word “Alhambra” in proximity to the phrases “UNESCO heritage site” and “world-famous,” paired with positive adjectives.

This would be an impressive achievement (and potentially useful), but it would not suggest that the computer likes Granada, wants to help me, or knows any friendly locals. It would be clear that people experience and judge, while ChatGPT statistically models texts.
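To make that imagined disclosure concrete, here is a minimal sketch of the kind of proximity counting it describes. The four-sentence corpus, the list of positive adjectives, the target word, and the window size are all invented for illustration; a real language model learns its statistics from billions of documents and encodes them in a neural network rather than as literal counts, but the underlying idea, tallying which words tend to appear near “Alhambra,” is the same in miniature.

```python
from collections import Counter

# Invented toy corpus standing in for the "vast body of text";
# a real model trains on billions of documents, not four sentences.
corpus = [
    "The world-famous Alhambra is a stunning UNESCO World Heritage site.",
    "Granada is a beautiful city with delicious cuisine and friendly locals.",
    "We visited the Alhambra and found the architecture breathtaking.",
    "The beautiful gardens near the Alhambra are among the most visited in Spain.",
]

# Hand-picked adjective list and parameters, chosen only for this example.
POSITIVE = {"world-famous", "stunning", "beautiful", "delicious",
            "friendly", "breathtaking"}
TARGET = "alhambra"
WINDOW = 5  # count positive words within five tokens of the target

hits = Counter()
for sentence in corpus:
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    for i, tok in enumerate(tokens):
        if tok == TARGET:
            # Tally positive adjectives in a small window around the target.
            for neighbor in tokens[max(0, i - WINDOW): i + WINDOW + 1]:
                if neighbor in POSITIVE:
                    hits[neighbor] += 1

print(hits)  # e.g. Counter({'world-famous': 1, 'stunning': 1, ...})
```

Run over a real travel-writing corpus, counts like these would plausibly recover exactly the associations quoted earlier, “world-famous” and “UNESCO” near “Alhambra,” positive adjectives near food and sights, without implying that the counter likes Granada or knows any friendly locals.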

We human beings also draw statistical inferences from what other people say, and perhaps we even enjoy the Alhambra because human beings have told us that we should. (See “the sublime and other people.”) But I really did see a peacock strutting past palms and reflecting pools in the Carmen de los Martires this morning, whereas ChatGPT will never see anything. Why try to confuse me about the difference?

See also: artificial intelligence and problems of collective action