
Reading Arendt in Palo Alto

During a recent week at Stanford, I reread selections from Hannah Arendt’s On Revolution (OR) and The Human Condition (HC) to prepare for upcoming seminar sessions. My somewhat grim thoughts were evidently informed by the national news. I share them here without casting aspersions on my gracious Stanford hosts, who bear no responsibility for what I describe and are working on solutions.

I can imagine telling Arendt that Silicon Valley has become the capital of a certain kind of power, explaining how it reaches through Elon Musk to control the US government and the US military and through Musk and Mark Zuckerberg to dominate the global public sphere. I imagine showing her Sand Hill Road, the completely prosaic—although nicely landscaped—suburban highway where venture capitalists meet in undistinguished office parks to decide the flow of billions. This is Arendt’s nightmare.

For her, there should be a public domain in which diverse people convene for the “speech-making and decision-taking, the oratory and the business, the thinking and the persuading, and the actual doing” that constitutes politics (OR 24).

Politics enables a particular kind of equality: the equal standing to debate and influence collective decisions. Politics also enables a specific kind of freedom, because a person who decides with others what to do together is neither a boss nor a subordinate but a free actor.

Politics allows us to be–and to be recognized as–genuine individuals with our own perspectives on topics that also matter to others (HC 41). And politics defeats death because it is where we concern ourselves with making a common world that can outlast us. “It is what we have in common not only with those who live with us, but also with those who were here before and with those who will come after us” (HC 55).

Politics excludes force against fellow citizens. “To be political, to live in a polis, meant that everything was decided through words and persuasion and not through force and violence” (HC 26). Speech is not persuasive unless the recipient is free to accept or reject it, and force destroys that freedom. By the same token, force prevents the one who uses it from engaging in genuine persuasion, which is the mark of rationality.

Musk’s DOGE efforts are clear examples of force. But I also think about when Zuckerberg decided to try to improve the schools of Newark, NJ. He had derived his vast wealth from developing a platform on which people live their private lives in the view of algorithms that nudge them to buy goods. He allocated some of this wealth to a reform project in Newark, discovered that people were ungrateful and that his plan didn’t work, and retreated in a huff because he didn’t receive the praise or impact that he expected to buy.

From Arendt’s perspective, each teenager in Newark was exactly Zuckerberg’s equal, worthy to look him in the eye and say what they should do together. This would constitute what she calls “action.” However, Zuckerberg showed himself incapable of such equality and therefore devoid of genuine freedom.

Musk, Zuckerberg, and other tech billionaires understand themselves as deservedly powerful and receive adulation from millions. But, says Arendt, “The popular belief in ‘strong men’ … is either sheer superstition … or is a conscious despair of all action, political and non-political, coupled with the utopian hope that it may be possible to treat men as one treats other ‘material’” (HC 188).

There is no public space on Sand Hill Road. Palo Alto has a city hall, but it is not where Silicon Valley is governed. And the laborers “who with their bodies minister to the [bodily] needs of life” (Aristotle) are carefully hidden away (HC 72).

Arendt describes how economic activity has eclipsed politics in modern times. Descriptions of private life in the form of lyric poetry and novels have flourished–today, thousands of fine novels are available on the Kindle store–a development “coinciding with a no less striking decline of all the more public arts, especially architecture” (HC 39). In her day, corporations still built quite impressive urban headquarters, like Rockefeller Center, which continued the tradition of the Medici Palace or a Rothschild estate. But Sand Hill Road is a perfect example of wealth refusing to create anything of public value. Unless you are invited to a meeting there, you just drive by.

Arendt acknowledges that people need private property to afford political participation and to develop individual perspectives. We each need a dwelling and objects (such as, perhaps, books or mementos) that are protected from outsiders: “a tangible, worldly place of one’s own” (HC 70). But we do not need wealth. Arendt decries the “present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole” (HC 61). For example, to own a great deal of stock is not to have property (the basis of individuality) but to be part of a mass society that renders your behavior statistically predictable, like a natural phenomenon (HC 43). All those Teslas that cruise silently around Palo Alto are metaphors for wealth that is not truly private property.

Much of the wealth of Silicon Valley comes from digital media through which we live our private lives in the view of algorithms that assess us statistically and influence our behavior. For Arendt, “A life spent entirely in public, in the presence of others, becomes, as we would say, shallow” (HC 71). She is against socialist and communist efforts to expropriate property, but she also believes that privacy can be invaded by society in other ways (HC 72). She expresses this concern vaguely, but nothing epitomizes it better than a corporate social media platform that becomes the space for ostensibly private life.

Artificial Intelligence represents the latest wave of innovation in Silicon Valley, producing software that appears to speak in the first-person singular but actually aggregates billions of people’s previous thought. Arendt’s words are eerie: “Without the accompaniment of speech …, action would not only lose its revelatory power, but, and by the same token, it would lose its subject; not acting men but performing robots would achieve what, humanly speaking, would be incomprehensible” (HC 178).

The result is a kind of death: “A life without speech and without action … is literally dead to the world; it has ceased to be a human life because it is no longer lived among men” (HC 176).


See also: Arendt, freedom, Trump (2017); the design choice to make ChatGPT sound like a human; Victorians warn us about AI; “Complaint,” by Hannah Arendt etc.

Victorians warn us about AI

In the fictional dialogue entitled Impressions of Theophrastus Such (first edition, 1879), George Eliot’s first-person narrator envisions the development of machines that can think, affect the physical world, and reproduce themselves. Humans suffer as a result, devolving into passivity and ultimately becoming extinct:

Under such uncomfortable circumstances our race will have diminished with the diminishing call on their energies, and by the time that the self-repairing and reproducing machines arise, all but a few of the rare inventors, calculators, and speculators will have become pale, pulpy, and cretinous from fatty or other degeneration, and behold around them a scanty hydrocephalous offspring. As to the breed of the ingenious and intellectual, their nervous systems will at last have been overwrought in following the molecular revelations of the immensely more powerful unconscious race, and they will naturally, as the less energetic combinations of movement, subside like the flame of a candle in the sunlight. Thus the feebler race, whose corporeal adjustments happened to be accompanied with a maniacal consciousness which imagined itself moving its mover, will have vanished, as all less adapted existences do before the fittest—i.e., the existence composed of the most persistent groups of movements and the most capable of incorporating new groups in harmonious relation. Who—if our consciousness is, as I have been given to understand, a mere stumbling of our organisms on their way to unconscious perfection—who shall say that those fittest existences will not be found along the track of what we call inorganic combinations, which will carry on the most elaborate processes as mutely and painlessly as we are now told that the minerals are metamorphosing themselves continually in the dark laboratory of the earth’s crust? Thus this planet may be filled with beings who will be blind and deaf as the inmost rock, yet will execute changes as delicate and complicated as those of human language and all the intricate web of what we call its effects, without sensitive impression, without sensitive impulse: there may be, let us say, mute orations, mute rhapsodies, mute discussions, and no consciousness there even to enjoy the silence.

In On Liberty (1859), John Stuart Mill had not forecast such a future as explicitly as Eliot later would, but he used a similar scenario as a thought-experiment to demonstrate that the point of life is to develop one’s own capacities, not to accomplish any practical ends. A life in which important matters are handled by other minds–or by machines–is a life devoid of value:

He who lets the world, or his own portion of it, choose his plan of life for him, has no need of any other faculty than the ape-like one of imitation. He who chooses his plan for himself, employs all his faculties. He must use observation to see, reasoning and judgment to foresee, activity to gather materials for decision, discrimination to decide, and when he has decided, firmness and self-control to hold to his deliberate decision. And these qualities he requires and exercises exactly in proportion as the part of his conduct which he determines according to his own judgment and feelings is a large one. It is possible that he might be guided in some good path, and kept out of harm’s way, without any of these things. But what will be his comparative worth as a human being? It really is of importance, not only what men do, but also what manner of men they are that do it. Among the works of man, which human life is rightly employed in perfecting and beautifying, the first in importance surely is man himself. Supposing it were possible to get houses built, corn grown, battles fought, causes tried, and even churches erected and prayers said, by machinery—by automatons in human form—it would be a considerable loss to exchange for these automatons even the men and women who at present inhabit the more civilised parts of the world, and who assuredly are but starved specimens of what nature can and will produce. Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.

The possibility that AI will render us extinct remains speculative, 150 years after Eliot posited it. But there is an urgent, present threat that AI tools will “guide” us along “some good path” and thereby block “the free development of individuality,” which “is one of the leading essentials of well-being.”

See also: the difference between human and artificial intelligence: relationships; artificial intelligence and problems of collective action; what I would advise students about ChatGPT; the human coordination involved in AI; the design choice to make ChatGPT sound like a human etc. I owe the reference to Eliot to Harry Law.

a collective model of the ethics of AI in higher education

Hannah Cox, James Fisher, and I have published a short piece in an outlet called eCampus News. The whole text is here; I’ll paste the beginning below:

AI is difficult to understand, and its future is even harder to predict. Whenever we face complex and uncertain change, we need mental models to make preliminary sense of what is happening.

So far, many of the models that people are using for AI are metaphors, referring to things that we understand better, such as talking birds, the printing press, a monster, conventional corporations, or the Industrial Revolution. Such metaphors are really shorthand for elaborate models that incorporate factual assumptions, predictions, and value-judgments. No one can be sure which model is wisest, but we should be forming explicit models so that we can share them with other people, test them against new information, and revise them accordingly.

“Forming models” may not be exactly how a group of Tufts undergraduates understood their task when they chose to hold discussions of AI in education, but they certainly believed that they should form and exchange ideas about this topic. For an hour, these students considered the implications of using AI as a research and educational tool, academic dishonesty, big tech companies, attempts to regulate AI, and related issues. They allowed us to observe and record their discussion, and we derived a visual model from what they said.

We present this model [see above] as a starting point for anyone else’s reflections on AI in education. The Tufts students are not necessarily representative of college students in general, nor are they exceptionally expert on AI. But they are thoughtful people active in higher education who can help others to enter a critical conversation.

Our method for deriving a diagram from their discussion is unusual and requires an explanation. In almost every comment that a student made, at least two ideas were linked together. For instance, one student said: “If not regulated correctly, AI tools might lead students to abuse the technology in dishonest ways.” We interpret that comment as a link between two ideas: lack of regulation and academic dishonesty. When the three of us analyzed their whole conversation, we found 32 such ideas and 175 connections among them.

The graphic shows the 12 ideas that were most commonly mentioned and linked to others. The size of each dot reflects the number of times that idea was linked to another. The direction of each arrow indicates which factor caused or explained the other.

The rest of the published article explores the content and meaning of the diagram a bit.

I am interested in the methodology that we employed here, for two reasons.

First, it’s a form of qualitative research–drawing on Epistemic Network Analysis (ENA) and related methods. As such, it yields a representation of a body of text and a description of what the participants said.

Second, it’s a way for a group to co-create a shared framework for understanding any issue. The graphic doesn’t represent their agreement but rather a common space for disagreement and dialogue. As such, it resembles forms of participatory modeling (Voinov et al, 2018). These techniques can be practically useful for groups that discuss what to do.

Our method was not dramatically innovative, but we did something a bit novel by coding ideas as nodes and the relationships between pairs of ideas as links.
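To make the coding scheme concrete, here is a minimal sketch (not our actual analysis code) of how coded comments can be turned into a directed graph, with an idea’s link count serving as its dot size and arrow direction running from cause to effect. The idea labels and counts below are invented for illustration, and the Python networkx library is just one convenient option.

```python
# Hypothetical illustration of the node-and-link coding described above.
from collections import Counter
import networkx as nx

# Each coded comment becomes a (cause, effect) pair of idea labels.
# These examples are invented; the real study coded 32 ideas and 175 links.
coded_comments = [
    ("lack of regulation", "academic dishonesty"),
    ("lack of regulation", "academic dishonesty"),
    ("big tech incentives", "lack of regulation"),
    ("AI as a research tool", "changes in teaching"),
]

link_counts = Counter(coded_comments)

# Build a directed graph: an arrow points from the explaining idea to the
# idea it was said to cause or explain; edge weight counts the comments.
G = nx.DiGraph()
for (cause, effect), weight in link_counts.items():
    G.add_edge(cause, effect, weight=weight)

# "Dot size" for each idea = total number of links in which it appears.
idea_size = Counter()
for (cause, effect), weight in link_counts.items():
    idea_size[cause] += weight
    idea_size[effect] += weight

# Keep only the most frequently linked ideas (the article keeps the top 12).
top_ideas = {idea for idea, _ in idea_size.most_common(12)}
subgraph = G.subgraph(top_ideas)

for cause, effect, data in subgraph.edges(data=True):
    print(f"{cause} -> {effect} (coded in {data['weight']} comments)")
```

A force-directed layout or a tool such as Gephi could then render the node sizes and arrows in roughly the way the published diagram does.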

Source: Alexey Voinov et al, “Tools and methods in participatory modeling: Selecting the right tool for the job,” Environmental Modelling & Software, vol 19 (2018), pp. 232-255. See also: what I would advise students about ChatGPT; People are not Points in Space; different kinds of social models; social education as learning to improve models

the ACM brief on AI

The Association for Computing Machinery (ACM) has 110,000 members. As artificial intelligence rapidly acquires users and uses, some ACM members see an analogy to nuclear physics in the 1940s. Their profession is responsible for technological developments that can do considerable good but that also pose grave dangers. Like physicists in the era of Einstein and Oppenheimer, computer scientists have developed ideas that are now in the hands of governments and companies that they cannot control.

The ACM’s Technology Policy Council has published a brief by David Leslie and Francesca Rossi with the following problem statement: “The rapid commercialization of generative AI (GenAI) poses multiple large-scale risks to individuals, society, and the planet that require a rapid, internationally coordinated response to mitigate.”

Considering that this brief is only three pages long (plus notes), I think it offers a good statement of the issue. It is vague about solutions, but that may be inevitable for this type of document. The question is what should happen next.

One rule-of-thumb is that legislatures won’t act on demands (let alone friendly suggestions) unless someone asks them to adopt specific legislation. In general, legislators lack the time, expertise, and degrees of freedom necessary to develop responses to the huge range of issues that come before them.

This passage from the brief is an example of a first step, but it won’t generate legislation without a lot more elaboration:

Policymakers confronting this range of risks face complex challenges. AI law and policy thus should incorporate end-to-end governance approaches that address risks comprehensively and “by design.” Specifically, they must address how to govern the multiphase character of GenAI systems and the foundation models used to construct them. For instance, liability and accountability for lawfully acquiring and using initial training data should be a focus of regulations tailored to the FM training phase.

The last quoted sentence begins to move in the right direction, but which policymakers should change which laws about which kinds of liability for whom?

The brief repeatedly calls on “policymakers” to act. I am guessing the authors mean governmental policymakers: legislators, regulators, and judges. Indeed, governmental action is warranted. But governments are best seen as complex assemblages of institutions and actors that are in the midst of other social processes, not as the prime movers. For instance, each legislator is influenced by a different set of constituents, donors, movements, and information. If a whole legislature manages to pass a law (which requires coordination), the new legislation will affect constituents, but only to a limited extent. And the degree to which the law is effective will depend on the behavior of many other actors inside of government who are responsible for implementation and enforcement and who have interests of their own.

This means that “the government” is not a potential target for demands: specific governmental actors are. And they are not always the most promising targets, because sometimes they are highly constrained by other parties.

In turn, the ACM is a complex entity–reputed to be quite decentralized and democratic. If I were an ACM member, I would ask: What should policymakers do about AI? But that would only be one question. I would also ask: What should the ACM do to influence various policymakers and other leaders, institutions, and the public? What should my committee or subgroup within ACM do to influence the ACM? And: which groups should I be part of?

In advocating a role for the ACM, it would be worth canvassing its assets: 110,000 expert members who are employed in industry, academia, and governments; 76 years of work so far; structures for studying issues and taking action. It would also be worth canvassing deficits. For instance, the ACM may not have deep expertise on some matters, such as politics, culture, social ethics, and economics. And it may lack credibility with the diverse grassroots constituencies and interest-groups that should be considered and consulted. Thus an additional question is: Who should be working on the social impact of AI, and how should these activists be configured?

I welcome the brief by David Leslie and Francesca Rossi and wouldn’t expect a three-page document to accomplish more than it does. But I hope it is just a start.

See also: can AI help governments and corporations identify political opponents?; the design choice to make ChatGPT sound like a human; what I would advise students about ChatGPT; the major shift in climate strategy (also about governments as midstream actors).

can AI help governments and corporations identify political opponents?

In “Large Language Model Soft Ideologization via AI-Self-Consciousness,” Xiaotian Zhou, Qian Wang, Xiaofeng Wang, Haixu Tang, and Xiaozhong Liu use ChatGPT to identify the signature of “three distinct and influential ideologies: ‘Trumplism’ (entwined with US politics), ‘BLM (Black Lives Matter)’ (a prominent social movement), and ‘China-US harmonious co-existence is of great significance’ (propaganda from the Chinese Communist Party).” They unpack each of these ideologies as a connected network of thousands of specific topics, each one having a positive or negative valence. For instance, someone who endorses the Chinese government’s line may mention US-China relationships and the Nixon-Mao summit as a pair of linked positive ideas.

The authors raise the concern that this method would be a cheap way to predict the ideological leanings of millions of individuals, whether or not they choose to express their core ideas. A government or company that wanted to keep an eye on potential opponents wouldn’t have to search social media for explicit references to their issues of concern. It could infer an oppositional stance from the pattern of topics that the individuals choose to mention.
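To illustrate the kind of inference the authors warn about (this is my own toy sketch, not their method), imagine an ideology represented as a set of topics, each with a positive or negative valence. A crude score of how closely a person’s stated topic valences match that signature could flag likely sympathizers or opponents even when they never state the ideology explicitly. All topic names and weights below are invented.

```python
# Toy illustration only: not the paper's method, and not a recommendation.
# An ideology's "signature": topics with valences (+1 endorsed, -1 opposed).
ideology_signature = {
    "US-China relations": +1,
    "Nixon-Mao summit": +1,
    "trade sanctions": -1,
}

def alignment_score(person_topics):
    """Average agreement between a person's topic valences and the
    ideology's signature, over the topics they both mention."""
    shared = set(person_topics) & set(ideology_signature)
    if not shared:
        return 0.0
    agreement = sum(person_topics[t] * ideology_signature[t] for t in shared)
    return agreement / len(shared)

# Someone who never names the ideology but mentions linked topics with
# matching valences still scores highly, which is the privacy risk at issue.
person = {"Nixon-Mao summit": +1, "US-China relations": +1}
print(alignment_score(person))  # 1.0
```

Scaled up to thousands of topics extracted automatically from posts, this is the sort of cheap screening the authors describe.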

I saw this article because the authors cite my piece, “Mapping ideologies as networks of ideas,” Journal of Political Ideologies (2022): 1-28. (Google Scholar notified me of the reference.) Along with many others, I am developing methods for analyzing people’s political views as belief-networks.

I have a benign motivation: I take seriously how people explicitly articulate and connect their own ideas and seek to reveal the highly heterogeneous ways that we reason. I am critical of methods that reduce people’s views to widely shared, unconscious psychological factors.

However, I can see that a similar method could be exploited to identify individuals as targets for surveillance and discrimination. Whereas I am interested in the whole of an individual’s stated belief-network, a powerful government or company might use the same data to infer whether a person would endorse an idea that it finds threatening, such as support for unions or affinity for a foreign country. If the individual chose to keep that particular idea private, the company or government could still infer it and take punitive action.

I’m pretty confident that my technical acumen is so limited that I will never contribute to effective monitoring. If I have anything to contribute, it’s in the domain of political theory. But this is something–yet another thing–to worry about.

See also: Mapping Ideologies as Networks of Ideas (talk); Mapping Ideologies as Networks of Ideas (paper); what if people’s political opinions are very heterogeneous?; how intuitions relate to reasons: a social approach; the difference between human and artificial intelligence: relationships