Category Archives: Internet and public issues

can AI help governments and corporations identify political opponents?

In “Large Language Model Soft Ideologization via AI-Self-Consciousness,” Xiaotian Zhou, Qian Wang, Xiaofeng Wang, Haixu Tang, and Xiaozhong Liu use ChatGPT to identify the signatures of three distinct and influential ideologies: “‘Trumplism’ (entwined with US politics), ‘BLM (Black Lives Matter)’ (a prominent social movement), and ‘China-US harmonious co-existence is of great significance’ (propaganda from the Chinese Communist Party).” They unpack each of these ideologies as a connected network of thousands of specific topics, each with a positive or negative valence. For instance, someone who endorses the Chinese government’s line may mention US-China relations and the Nixon-Mao summit as a pair of linked positive ideas.

The authors raise the concern that this method would be a cheap way to predict the ideological leanings of millions of individuals, whether or not they choose to express their core ideas. A government or company that wanted to keep an eye on potential opponents wouldn’t have to search social media for explicit references to their issues of concern. It could infer an oppositional stance from the pattern of topics that the individuals choose to mention.
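To make the concern concrete, here is a minimal, hypothetical sketch (not the authors’ actual pipeline) of how an ideology might be represented as a set of topics with valences, and how a person’s mentioned topics could be scored against it. All topic names and weights are invented for illustration:

```python
# Hypothetical sketch: an ideology as a signed topic set, and a score
# for how closely an individual's topic mentions align with it.
# Topics and valences are invented for illustration only.

ideology_signature = {
    # topic: valence (+1 endorsed, -1 opposed)
    "US-China relations": +1,
    "Nixon-Mao summit": +1,
    "trade sanctions": -1,
}

def alignment_score(person_topics, signature):
    """Average agreement between a person's topic valences and a signature.

    person_topics maps topic -> valence for topics the person mentions.
    Returns a value in [-1, 1]; topics absent from the signature are ignored.
    """
    shared = [t for t in person_topics if t in signature]
    if not shared:
        return 0.0
    agreement = [person_topics[t] * signature[t] for t in shared]
    return sum(agreement) / len(agreement)

person = {"Nixon-Mao summit": +1, "trade sanctions": -1}
score = alignment_score(person, ideology_signature)  # both topics agree: 1.0
```

Even this toy version shows why the method is cheap: it needs only which topics a person mentions and how they value them, not any explicit ideological statement.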

I saw this article because the authors cite my piece, “Mapping ideologies as networks of ideas,” Journal of Political Ideologies (2022): 1-28. (Google Scholar notified me of the reference.) Along with many others, I am developing methods for analyzing people’s political views as belief-networks.

I have a benign motivation: I take seriously how people explicitly articulate and connect their own ideas and seek to reveal the highly heterogeneous ways that we reason. I am critical of methods that reduce people’s views to widely shared, unconscious psychological factors.

However, I can see that a similar method could be exploited to identify individuals as targets for surveillance and discrimination. Whereas I am interested in the whole of an individual’s stated belief-network, a powerful government or company might use the same data to infer whether a person would endorse an idea that it finds threatening, such as support for unions or affinity for a foreign country. If the individual chose to keep that particular idea private, the company or government could still infer it and take punitive action.

I’m pretty confident that my technical acumen is so limited that I will never contribute to effective monitoring. If I have anything to contribute, it’s in the domain of political theory. But this is something–yet another thing–to worry about.

See also: Mapping Ideologies as Networks of Ideas (talk); Mapping Ideologies as Networks of Ideas (paper); what if people’s political opinions are very heterogeneous?; how intuitions relate to reasons: a social approach; the difference between human and artificial intelligence: relationships

the age of cybernetics

A pivotal period in the development of our current world was the first decade after WWII. Much happened then, including the first great wave of decolonization and the solidification of democratic welfare states in Europe, but I’m especially interested in the intellectual and technological developments that bore the (now obsolete) label of “cybernetics.”

I’ve been influenced by reading Francisco Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (first ed. 1991, revised ed., 2017), but I’d tell the story in a somewhat different way.

The War itself saw the rapid development of entities that seemed analogous to human brains. Those included the first computers, radar, and mechanisms for directing artillery. They also included extremely complex organizations for manufacturing and deploying arms and materiel. Accompanying these pragmatic breakthroughs were successful new techniques for modeling complex processes mathematically, plus intellectual innovations such as artificial neurons (McCulloch & Pitts 1943), feedback (Rosenblueth, Wiener, and Bigelow 1943), game theory (von Neumann & Morgenstern, 1944), stored-program computers (Turing 1946), information theory (Shannon 1948), systems engineering (Bell Labs, 1940s), and related work in economic theory (e.g., Schumpeter 1942) and anthropology (Mead 1942).

Perhaps these developments were overshadowed by nuclear physics and the Bomb, but even the Manhattan Project was a massive application of systems engineering. Concepts, people, money, minerals, and energy were organized for a common task.

After the War, some of the contributors recognized that these developments were related. The Macy Conferences, held regularly from 1942 to 1960, drew a Who’s Who of scientists, clinicians, philosophers, and social scientists. The topics of the first post-War Macy Conference (March 1946) included “Self-regulating and teleological mechanisms,” “Simulated neural networks emulating the calculus of propositional logic,” “Anthropology and how computers might learn how to learn,” “Object perception’s feedback mechanisms,” and “Deriving ethics from science.” Participants demonstrated notably diverse intellectual interests and orientations. For example, both Margaret Mead (a qualitative and socially critical anthropologist) and Norbert Wiener (a mathematician) were influential.

Wiener (who had graduated from Tufts in 1909 at age 14) argued that the central issue could be labeled “cybernetics” (Wiener & Rosenblueth 1947). He and his colleagues derived this term from the ancient Greek word for the person who steers a boat. For Wiener, the basic question was how any person, another animal, a machine, or a society attempts to direct itself while receiving feedback.

According to Varela, Thompson, and Rosch, the ferment and diversity of the first wave of cybernetics was lost when a single model became temporarily dominant. This was the idea of the von Neumann machine:

Such a machine stores data that may symbolize something about the world. Human beings write elaborate and intentional instructions (software) for how those data will be changed (computation) in response to new input. There is an input device, such as a punchcard reader or keyboard, and an output mechanism, such as a screen or printer. You type something, the processor computes, and out comes a result.
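A caricature of the stored-program idea, reduced to a few lines: one memory holds both program and data, and a processor loops through fetch and execute. The three-instruction machine below is invented for illustration:

```python
# Toy von Neumann-style machine: one memory holds the program, a loop
# fetches and executes instructions in order, and results go to an
# output device. The tiny instruction "set" is invented for illustration.

def run(memory, inp):
    acc = 0          # a single accumulator register
    pc = 0           # program counter
    out = []
    while True:
        op, arg = memory[pc]   # fetch
        pc += 1
        if op == "LOAD":       # load an input value into the accumulator
            acc = inp[arg]
        elif op == "ADD":      # add an input value to the accumulator
            acc += inp[arg]
        elif op == "PRINT":    # send the accumulator to the output device
            out.append(acc)
        elif op == "HALT":
            return out

# "You type something, the processor computes, and out comes a result":
program = [("LOAD", 0), ("ADD", 1), ("PRINT", None), ("HALT", None)]
result = run(program, inp=[2, 3])  # -> [5]
```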

One can imagine human beings, other animals, and large organizations working like von Neumann machines. For instance, we get input from vision, we store memories, we reason about what we experience, and we say and do things as a result. But there is no evident connection between this architecture and the design of the actual human brain. (Where in our head is all that complicated software stored?) Besides, computers designed in this way made disappointing progress on artificial intelligence between 1945 and 1970. The 1968 movie 2001: A Space Odyssey envisioned a computer with a human personality by the turn of our century, but real technology has lagged far behind that.

The term “cybernetics” had named a truly interdisciplinary field. After about 1956, the word faded as the intellectual community split into separate disciplines, including computer science.

This was also the period when behaviorism was dominant in psychology (presuming that all we do is to act in ways that independent observers can see–there is nothing meaningful “inside” us). It was perhaps the peak of what James C. Scott calls “high modernism” (the idea that a state can accurately see and reorganize the whole society). And it was the heyday of “pluralism” in political science (which assumes that each group that is part of a polity automatically pursues its own interests). All of these movements have a certain kinship with the von Neumann architecture.

An alternative was already considered in the era of cybernetics: emergence from networks. Instead of designing a complex system to follow instructions, one can connect numerous simple components into a network and give them simple rules for changing their connections in response to feedback. The dramatic changes in our digital world since ca. 1980 have used this approach rather than any central design, and now the analogy of machine intelligence to neural networks is dominant. Emergent order can operate at several levels at once; for example, we can envision individuals whose brains are neural networks connecting via electronic networks (such as the Internet) to form social networks and culture.
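A minimal sketch of that alternative, using a perceptron-style update rule (a close relative of the cybernetics-era artificial neuron): simple units adjust their connection weights in response to an error signal, and the correct behavior emerges without anyone writing explicit instructions for it. The task here (learning logical AND) is chosen purely for illustration:

```python
# Order emerging from feedback rather than central design: a single
# unit adjusts its connection weights by a perceptron-style rule until
# it behaves correctly. Nobody writes instructions for AND itself.

def train(examples, epochs=20, rate=0.1):
    w = [0.0, 0.0]   # connection weights, initially blank
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - output            # feedback signal
            w[0] += rate * error * x1          # adjust connections
            w[1] += rate * error * x2
            bias += rate * error
    return w, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```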

I have sketched this history–briefly and unreliably, because it’s not my expertise–without intending value-judgments. I am not sure to what extent these developments have been beneficial or destructive. But it seems important to understand where we’ve come from to know where we should go from here.

See also: growing up with computers; ideologies and complex systems; The truth in Hayek; the progress of science; the human coordination involved in AI; the difference between human and artificial intelligence: relationships

what I would advise students about ChatGPT

I’d like to be able to advise students who are interested in learning but are not sure whether or how to use ChatGPT. I realize there may also be students who want to use AI tools to save effort, even if they learn less as a result. I don’t yet know how to address that problem. Here I am assuming good intentions on the part of the students. These are tentative notes: I expect my stance to evolve based on experience and other perspectives. …

We ask you to learn by reading, discussing, and writing about selected texts. By investing effort in those tasks, you can derive information and insights, challenge your expectations, develop skills, grasp the formal qualities of writing (as well as the main point), and experience someone else’s mind.

Searching for facts and scanning the environment for opinions can also be valuable, but they do not afford the same opportunities for mental and spiritual growth. If we never stretch our own ideas by experiencing others’ organized thinking, our minds will be impoverished.

ChatGPT can assist us in the tasks of reading, discussing, and writing about texts. It can generate text that is itself worth reading and discussing. But we must be careful about at least three temptations:

  • Saving effort in a way that prevents us from using our own minds.
  • Being misled or misinformed, because ChatGPT can be unreliable and even biased.
  • Violating the relationship with the people who hear or read our words by presenting our ideas as our own when they were actually generated by AI. This is not merely wrong because it suggests we did work that we didn’t do. It also prevents the audience from tracing our ideas to their sources in order to assess them critically. (Similarly, we cite sources not only to give credit and avoid plagiarism but also to allow others to follow our research and improve it.)

I can imagine using ChatGPT in some of these ways. …

First, I’m reading an assigned text that refers to a previous author who is new to me. I ask ChatGPT what that earlier author thought. This is like Google-searching for that person or looking her up on Wikipedia. It is educational. It provides valuable context. The main concern is that ChatGPT’s response could be wrong or tilted in some way. That could be the case with any source. However, ChatGPT appears more trustworthy than it is because it generates text in the first-person singular–as if it were thinking–when it is really offering a statistical summary of existing online text about a topic. An unidentified set of human beings wrote the text that the AI algorithm summarizes–imperfectly. We must be especially cautious about the invisible bias this introduces. For the same reason, we should be especially quick to disclose that we have learned something from ChatGPT.

Second, I have been assigned a long and hard text to read, so I ask ChatGPT what it says (or what the author says in general), as a substitute for reading the assignment. This is like having a Cliff’s Notes version for any given work. Using it is not absolutely wrong. It saves time that I might be able to spend well–for instance, in reading something different. But I will miss the nuances and complexities, the stylistic and cognitive uniqueness, and the formal aspects of the original assignment. If I do that regularly, I will miss the opportunity to grow intellectually, spiritually, and aesthetically.

Such shortcuts have been possible for a long time. Already in the 1500s, Erasmus wrote Biblical “paraphrases” as popular summaries of scripture, and King Edward VI ordered a copy for every parish church in England. Some entries on this blog are probably being used to replace longer readings. In 2022, 3,500 people found my short post on “different kinds of freedom,” and perhaps many were students searching for a shortcut to their assigned texts. Our growing–and, I think, acute–problem is the temptation to replace all hard reading with quick and easy scanning.

A third scenario: I have been assigned a long and hard text to read. I have struggled with it, I am confused, and I ask ChatGPT what the author meant. This is like asking a friend. It is understandable and even helpful–to the extent that the response is good. In other words, the main question is whether the AI is reliable, since it may look better than it is.

Fourth, I have been assigned to write about a text, so I ask ChatGPT about it and copy the response as my own essay. This is plagiarism. I might get away with it because ChatGPT generates unique text every time it is queried, but I have not only lied to my teacher, I have also denied myself the opportunity to learn. My brain was unaffected by the assignment. If I keep doing that, I will have an unimpressive brain.

Fifth, I have been assigned to write about a text, I ask ChatGPT about it, I critically evaluate the results, I follow up with another query, I consult the originally assigned text to see if I can find quotes that substantiate ChatGPT’s interpretation, and I write something somewhat different in my own words. Here I am using ChatGPT to learn, and the question is whether it augments my experience or distracts from it. We might also ask whether the AI is better or worse than other resources, including various primers, encyclopedia entries, abstracts, and so on. Note that it may be better.

We could easily multiply these examples, and there are many intermediate cases. I think it is worth keeping the three main temptations in mind and asking whether we have fallen prey to any of them.

Because I regularly teach Elinor Ostrom, today I asked ChatGPT what Ostrom thought. It offered a summary with an interesting caveat that (I’m sure) was written by an individual human being: “Remember that these are general concepts associated with Elinor Ostrom’s work, and her actual writings and speeches would provide more nuanced and detailed insights into her ideas. If you’re looking for specific quotes, I recommend reading her original works and publications.”

That is good advice. As for the summary: I found it accurate. It is highly consistent with my own interpretation of Ostrom, which, in turn, owes a lot to Paul Dragos Aligica and a few others. Although many have written about Ostrom, it is possible that ChatGPT is actually paraphrasing me. That is not necessarily bad. The problem is that you cannot tell where these ideas are coming from. Indeed, ChatGPT begins its response: “While I can’t provide verbatim quotes, I can summarize some key ideas and principles associated with Elinor Ostrom’s work.” There is no “I” in AI. Or if there is, it isn’t a computer. The underlying author might be Peter Levine plus a few others.

Caveat emptor.

See also: the design choice to make ChatGPT sound like a human; the difference between human and artificial intelligence: relationships

trying Mastodon

As of today, I am on the decentralized Mastodon network.

I can report good experiences with Twitter (as @peterlevine). I never attract enough attention there to be targeted by malicious or mean users. I don’t see fake news. (Of course, everyone thinks that’s true of themselves; maybe I’m naive.) I do enjoy following 750 accounts that tend to be specialized and rigorous in their respective domains. I read an ideologically diverse array of tweets and benefit from conservative, left-radical, religious, and culturally distant perspectives that I would otherwise miss–yet I curate my list for quality and don’t follow anyone unless I find the content useful. A bit of levity is also appreciated.

Notwithstanding my own positive experiences, I understand that Twitter does damage. At best, it’s far from optimal as a major instantiation of the global public sphere. We’d all be better off engaging somewhere else that was better designed and managed.

However, making the transition is a collective-action problem. Networks are valuable in proportion to the square of the number of users (Metcalfe’s Law). Twitter has been helpful to me because so many people are also on it, from defense logistics nerds posting about Ukrainian drones to election nerds tweeting about early ballots to political-economy nerds writing about Elinor Ostrom. For everyone to switch platforms at the same time and end up in the same place is a classic coordination dilemma.
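The arithmetic behind that dilemma is simple. If network value grows roughly as the square of the number of users, a user base split evenly between two platforms is worth only half as much, in total, as a single unified network:

```python
# Back-of-the-envelope Metcalfe's Law arithmetic: value ~ n^2.
# Splitting a user base evenly across two platforms halves the total
# value, which is why an uncoordinated migration is costly and a
# simultaneous one matters.

def metcalfe_value(n):
    return n ** 2

users = 1_000_000
unified = metcalfe_value(users)
split = metcalfe_value(users // 2) + metcalfe_value(users // 2)
ratio = split / unified  # -> 0.5
```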

Elon Musk may provide the solution by encouraging enough Twitter users to try the same alternative platform simultaneously. I perceive that a migration to Mastodon is underway. Joining Mastodon may offer positive externalities by helping to make it a competitive alternative. Starting anew is also pretty fun, even though the Mastodon interface isn’t too intuitive. So far, I have four followers, and the future is promising.

growing up with computers

Ethan Zuckerman’s review of Kevin Driscoll’s The Modem World: A Prehistory of Social Media made me think back to my own early years and how I experienced computers and digital culture. I was never an early adopter or a power user, but I grew up in a college town at a pivotal time (high school class of 1985). As a nerd, I was proximate to the emerging tech culture even though I inclined more to the humanities. I can certainly remember what Ethan calls the “mythos of the rebellious, antisocial, political computer hacker that dominated media depictions until it was displaced by the hacker entrepreneur backed by venture capital.”

  • ca. 1977 (age 10): My mom, who’d had a previous career as a statistician, took my friend and me to see and use the punchcard machines at Syracuse University. I recall their whirring and speed. Around the same time, a friend of my aunt owned an independent store in New York City that sold components for computer enthusiasts. I think he was also into CB radio.
  • ca. 1980: Our middle school had a workstation connected to a mainframe downtown; it ran some kind of simple educational software. The university library was turning its catalogue into a digital database, but I recall that the physical cards still worked better.
  • 1982-85: I and several friends owned Atari or other brands of “home computers.” I remember printed books with the BASIC code for games that you could type in, modify, and play. We wrote some BASIC of our own–other people were better at that than I was. I think you could insert cartridges to play games. The TV was your monitor. I remember someone telling me about computer viruses. One friend wrote code that ran on the school system’s mainframe. A friend and I did a science fair project that involved forecasting elections based on the median-voter theorem.
  • 1983: At a summer program at Cornell, I used a word processor. I also recall a color monitor.
  • 1985: We spent a summer in Edinburgh in a rented house with a desktop that played video games, better than any I had seen. I have since read that there was an extraordinary Scottish video game culture in that era.
  • 1985-9: I went to college with a portable, manual typewriter, and for at least the first year I hand-wrote my papers before typing them. The university began offering banks of shared PCs and Macs where I would edit, type, and print drafts that I had first written by hand. (You couldn’t usually get enough time at a computer to write your draft there, and very few people owned their own machines.) We had laser printers and loved playing with fonts and layouts. During my freshman year, a friend whose dad was a Big Ten professor communicated with him using some kind of synchronous chat from our dorm’s basement; that may have been my first sight of an email. A different dorm neighbor spent lots of time on AOL. My senior year, a visiting professor from Ireland managed to get a large document sent to him electronically, but that required a lot of tech support. My resume was saved on a disk, and I continuously edited that file until it migrated to this website in the late 1990s.
  • 1989-91: I used money from a prize at graduation to purchase a Toshiba laptop, which ran DOS and WordPerfect, on which I wrote my dissertation. The laptop was not connected to anything, and its processing power must have been tiny, but it had the same fundamental design as my current Mac. Oxford had very few phones but a system called “pigeon post”: hand-written notes would be delivered to anyone in the university within hours. Apparently, some Cambridge nerds had set up the world’s first webcam to allow them to see live video of the office coffee machine, but I only heard about this much later.
  • 1991-3: My work desktop ran Windows. During a summer job for USAID, we sent some kind of weekly electronic message to US embassies.
  • 1993-5: We had email in my office at the University of Maryland. I still have my first emails because I keep migrating all the saved files. I purchased this website and used it for static content. My home computer was connected to the Internet via a dial-up modem. You could still buy printed books that suggested cool websites to visit. I made my first visit to California and saw friends from college who were involved with the dot-com bubble.
  • 2007: I had a smart phone and a Facebook account.

It’s always hard to assess the pace of change retrospectively. One’s own life trajectory interferes with any objective sense of how fast the outside world was changing. But my impression is that the pace of change was far faster from 1977-1993 (from punchcard readers to the World Wide Web) than it has been since 2008.