what I would advise students about ChatGPT

I’d like to be able to advise students who are interested in learning but are not sure whether or how to use ChatGPT. I realize there may also be students who want to use AI tools to save effort, even if they learn less as a result. I don’t yet know how to address that problem. Here I am assuming good intentions on the part of the students. These are tentative notes: I expect my stance to evolve based on experience and other perspectives. …

We ask you to learn by reading, discussing, and writing about selected texts. By investing effort in those tasks, you can derive information and insights, challenge your expectations, develop skills, grasp the formal qualities of writing (as well as the main point), and experience someone else’s mind.

Searching for facts and scanning the environment for opinions can also be valuable, but they do not afford the same opportunities for mental and spiritual growth. If we never stretch our own ideas by experiencing others’ organized thinking, our minds will be impoverished.

ChatGPT can assist us in the tasks of reading, discussing, and writing about texts. It can generate text that is itself worth reading and discussing. But we must be careful about at least three temptations:

  • Saving effort in a way that prevents us from using our own minds.
  • Being misled or misinformed, because ChatGPT can be unreliable and even biased.
  • Violating the relationship with the people who hear or read our words by presenting ideas as our own when they were actually generated by AI. This is wrong not merely because it suggests we did work that we didn’t do. It also prevents the audience from tracing our ideas to their sources in order to assess them critically. (Similarly, we cite sources not only to give credit and avoid plagiarism but also to allow others to follow our research and improve it.)

I can imagine using ChatGPT in ways like the following. …

First, I’m reading an assigned text that refers to a previous author who is new to me. I ask ChatGPT what that earlier author thought. This is like Google-searching for that person or looking her up on Wikipedia. It is educational. It provides valuable context. The main concern is that ChatGPT’s response could be wrong or tilted in some way. That could be the case with any source. However, ChatGPT appears more trustworthy than it is because it generates text in the first-person singular–as if it were thinking–when it is really offering a statistical summary of existing online text about a topic. An unidentified set of human beings wrote the text that the AI algorithm summarizes–imperfectly. We must be especially cautious about the invisible bias this introduces. For the same reason, we should be especially quick to disclose that we have learned something from ChatGPT.

Second, I have been assigned a long and hard text to read, so I ask ChatGPT what it says (or what the author says in general), as a substitute for reading the assignment. This is like having a CliffsNotes version of any given work. Using it is not absolutely wrong. It saves time that I might be able to spend well–for instance, in reading something different. But I will miss the nuances and complexities, the stylistic and cognitive uniqueness, and the formal aspects of the original assignment. If I do that regularly, I will miss the opportunity to grow intellectually, spiritually, and aesthetically.

Such shortcuts have been possible for a long time. Already in the 1500s, Erasmus wrote Biblical “paraphrases” as popular summaries of scripture, and King Edward VI ordered a copy for every parish church in England. Some entries on this blog are probably being used to replace longer readings. In 2022, 3,500 people found my short post on “different kinds of freedom,” and perhaps many were students searching for a shortcut to their assigned texts. Our growing–and, I think, acute–problem is the temptation to replace all hard reading with quick and easy scanning.

A third scenario: I have been assigned a long and hard text to read. I have struggled with it, I am confused, and I ask ChatGPT what the author meant. This is like asking a friend. It is understandable and even helpful–to the extent that the response is good. In other words, the main question is whether the AI is reliable, since it may look better than it is.

Fourth, I have been assigned to write about a text, so I ask ChatGPT about it and copy the response as my own essay. This is plagiarism. I might get away with it because ChatGPT generates unique text every time it is queried, but I have not only lied to my teacher; I have also denied myself the opportunity to learn. My brain was unaffected by the assignment. If I keep doing that, I will have an unimpressive brain.

Fifth, I have been assigned to write about a text, I ask ChatGPT about it, I critically evaluate the results, I follow up with another query, I consult the originally assigned text to see if I can find quotes that substantiate ChatGPT’s interpretation, and I write something somewhat different in my own words. Here I am using ChatGPT to learn, and the question is whether it augments my experience or distracts from it. We might also ask whether the AI is better or worse than other resources, including various primers, encyclopedia entries, abstracts, and so on. Note that it may be better.

We could easily multiply these examples, and there are many intermediate cases. I think it is worth keeping the three main temptations in mind and asking whether we have fallen prey to any of them.

Because I regularly teach Elinor Ostrom, today I asked ChatGPT what Ostrom thought. It offered a summary with an interesting caveat that (I’m sure) was written by an individual human being: “Remember that these are general concepts associated with Elinor Ostrom’s work, and her actual writings and speeches would provide more nuanced and detailed insights into her ideas. If you’re looking for specific quotes, I recommend reading her original works and publications.”

That is good advice. As for the summary: I found it accurate. It is highly consistent with my own interpretation of Ostrom, which, in turn, owes a lot to Paul Dragos Aligica and a few others. Although many have written about Ostrom, it is possible that ChatGPT is actually paraphrasing me. That is not necessarily bad. The problem is that you cannot tell where these ideas are coming from. Indeed, ChatGPT begins its response: “While I can’t provide verbatim quotes, I can summarize some key ideas and principles associated with Elinor Ostrom’s work.” There is no “I” in AI. Or if there is, it isn’t a computer. The underlying author might be Peter Levine plus a few others.

Caveat emptor.

See also: the design choice to make ChatGPT sound like a human; the difference between human and artificial intelligence: relationships