the difference between human and artificial intelligence: relationships

A large language model (LLM) like ChatGPT works by identifying trends and patterns in huge bodies of text previously generated by human beings.

For instance, we are currently staying in Cornwall. If I ask ChatGPT what I should see around here, it suggests St Ives, Land’s End, St Michael’s Mount, and seven other highlights. It derives these ideas from frequent mentions in relevant texts. The phrases “Cornwall,” “recommended” (or synonyms thereof), “St Ives,” “charming,” “art scene,” and “cobbled streets” probably occur frequently in close proximity in those texts, so ChatGPT strings them together into a sentence for my edification.
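
To make that intuition concrete, here is a minimal sketch of the kind of co-occurrence statistics described above. It is purely illustrative: the tiny corpus, the phrase list, and the sentence-level counting are assumptions for the example, and real LLMs learn patterns with neural networks rather than by counting pairs like this.

```python
from collections import Counter
from itertools import combinations

# A tiny, made-up corpus standing in for the huge body of human-written text.
corpus = [
    "St Ives is a charming Cornwall town with a lively art scene and cobbled streets.",
    "Visitors to Cornwall often recommend St Ives for its art scene.",
    "Land's End and St Michael's Mount are among Cornwall's recommended highlights.",
]

# Phrases whose proximity we want to measure (chosen for this example).
phrases = ["cornwall", "st ives", "art scene", "cobbled streets", "recommended", "charming"]

# Count how often each pair of phrases appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    text = sentence.lower()
    present = [p for p in phrases if p in text]
    for pair in combinations(sorted(present), 2):
        pair_counts[pair] += 1

# The pairs that co-occur most often are the ones a pattern-matcher would be
# most likely to string together when asked what to see in Cornwall.
for pair, count in pair_counts.most_common(5):
    print(pair, count)
```

On this toy corpus, the pairs linking “Cornwall,” “St Ives,” and “art scene” come out on top, which is roughly the raw material a purely statistical pattern-matcher has to work with.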

We human beings behave in a somewhat similar way. We also listen to or read a lot of human-generated text, look for trends and patterns in it, and repeat what we glean. But if that is what it means to think, then an LLM has clear advantages over us. A computer can scan far more language than we can, and it applies statistics rigorously. Our generalizations suffer from notorious biases: we are more likely to recall ideas we have seen most recently, those that are most upsetting, those that confirm our prior assumptions, and so on. That is why we have been using artificial means to improve our statistical inferences ever since we started recording possessions and tallying them by category thousands of years ago.

But we also think in other ways. Specifically, as intensely social and judgmental primates, we frequently scan our environments for fellow human beings whom we can trust in specific domains. A lot of what we believe comes from what a relatively small number of trusted sources have simply told us.

In fact, to choose what to see in Cornwall, I looked at the recommendations in The Lonely Planet and Rough Guide. I have come to trust those sources over the years–not for every kind of guidance (they are not deeply scholarly), but for suggestions about what to see and how to get to those places. Indeed, both publications offer lists of Cornwall’s highlights that resemble ChatGPT’s.

How did these publishers obtain their knowledge? First, they hired individuals whom they trusted to write about specific places. These authors had relevant bodily experience. They knew what it feels like to walk along a cliff in Cornwall. That kind of knowledge is impossible for a computer. But these authors didn’t randomly walk around the county, recording their level of enjoyment and reporting the places with the highest scores. Even if they had done that, the sites they would have enjoyed most would have been the ones that they had previously learned to understand and value. They were qualified as authors because they had learned from other people: artists, writers, and local informants on the ground. Thus, by reading their lists of recommendations, I gain the benefit of a chain of interpersonal relationships: trusted individuals who have shared specific advice with other individuals, ending with the guidebook authors whom I have chosen to consult.

In our first two decades of life, we manage to learn enough that we can go from not being able to speak at all to writing books about Cornwall or helping to build LLMs. Notably, we do not accomplish all this learning by storing billions of words in our memories so that we can analyze the corpus for patterns. Rather, we have specific teachers, living or dead.

This method of learning and thinking has drawbacks. For instance, consider the world’s five biggest religions. You probably think that either four or five of them are wrong about some of their core beliefs, which means that you see many billions of human beings as misguided about ideas they would call very important. Explaining why they are wrong, from an outsider’s perspective, you might cite their mistaken faith in a few deeply trusted sources. In your opinion, they would be better off not trusting their scriptures, their clergy, or the people, such as their parents, who told them what to believe.

(Or perhaps you think that everyone sees the same truth in their own way. That’s a benign attitude and perhaps the best one to hold, but it’s incompatible with what billions of people think about the status of their own beliefs.)

Our tendency to believe select people may be an excellent characteristic, since the meaning of life is more about caring for specific other humans than about obtaining accurate information. But we do benefit from knowing truths, and our reliance on fallible human sources leads us into error. LLMs cannot fully avoid that problem, however, because they use text generated by people who have their own interests and commitments.

If I ask ChatGPT “Who is Jesus Christ?” I get a response that draws exclusively from normative Christianity but hedges it with this opening language: “Jesus Christ is a central figure in Christianity. He is believed to be … According to Christian belief. …” I suspect that ChatGPT’s answers about religious topics have been hard-coded to include this kind of disclaimer and to exclude skeptical views. Otherwise, a statistical analysis of text about Jesus might present the Christian view as true or else incorporate frequent critiques of Christianity, either of which would offend some readers.

In contrast, my query about Cornwall yields confident and unchallenged assessments, starting with this: “Cornwall is a beautiful region located in southwestern England, known for its stunning coastline, picturesque villages, and rich cultural heritage.” This result could be prefaced with a disclaimer, e.g., “According to many English people and Anglophiles who choose to write about the region, Cornwall is ….” A ChatGPT result is always a summary of what a biased sample of people have thought, because choosing to write about something makes you unusual.

For human beings who want to learn the truth, new tools that are especially good at scanning large bodies of text for statistical patterns should prove useful. (Those who benefit will probably include people with selfish or even downright malicious goals.) But we have already learned a fantastic amount without LLMs. The secret of our success is that our brains have always been networked, even when we lived in small groups of hunter-gatherers. We intentionally pass ideas to other people and are often pretty good at deciding whom to believe about what.

Moreover, we have invented incredibly complex and powerful techniques for connecting many brains. Posing a question to someone you know is helpful, but attending a school, reading an assigned book, finding other books in the library, reading books translated from other languages, reading books that summarize previous books, reading those summaries on your phone–these and many other techniques dramatically extend our reach. Prices send signals about supply and demand; peer review favors more reliable findings; judicial decisions allow precedents to accumulate; scientific instruments extend our senses. These are not natural phenomena; we have invented them.

Seen in that context, LLMs are the latest in a long line of inventions that help human beings share what they know with each other, both for better and for worse.

See also: the design choice to make ChatGPT sound like a human; artificial intelligence and problems of collective action; how intuitions relate to reasons: a social approach; the progress of science.