
AI as Satanic

“Now there was a day when the sons of God came to present themselves before the LORD, and Satan came also among them. And the LORD said unto Satan, Whence comest thou?

Then Satan answered the LORD, and said, From going to and fro in the earth, and from walking up and down in it” (Job 1:6-7).

Iain McGilchrist quoted this verse in a keynote that I just heard him deliver at a conference at Duke. McGilchrist ranged from neuroscience to theology in a long and rich talk. His premises were scientific, metaphysical, moral, and political, and I wouldn’t endorse them all. But his description of artificial intelligence as satanic is worth serious consideration on its own.

For me (although perhaps not for McGilchrist), Satan is a metaphor. But we need metaphors or models to make sense of phenomena like AI, and Satan provides a valuable alternative to some other metaphors, such as AI as a tool, a machine, a mind, a person, or a social organization.

The Satanic metaphor draws our attention to temptation, which is Satan’s favorite trick. It presents AI as not new but instead as an appearance of things that have been walking to and fro all along, such as greed and power-lust. It explains why AI might seem like a god to some (for instance, Silicon Valley tech-bros), since Satan is known to appear as a false savior. Large language models also speak to us as if they were people, talking sycophantically in the first-person singular, much as Satan does. (“Then Satan answered the LORD, and said, Doth Job fear God for nought?”) Finally, the metaphor poses the classic question of whether AI is an active force or rather a manifestation of human freedom.

See also: Reading Arendt in Palo Alto; the design choice to make ChatGPT sound like a human, etc.

AI as the road to socialism?

Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.

The profits flow to shareholders. Shareholders are already taxed on this income, but with tens of millions of people newly out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech. firms have become responsible for a substantial portion of the whole economy and are heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now largely produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.

Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white-collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would mean a massive loss of income for millions of people, with cascading effects. All the former office workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the tech. companies’ own products.

However, the public might demand a UBI more like $5,000/month. Then half of today’s white-collar workers would be worse off, but half would be richer–and none would have to work.
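To make the scale of this thought-experiment concrete, here is a minimal back-of-the-envelope sketch in Python. The workforce figure, the 40% displacement share, and the two UBI levels are the assumptions introduced above, not predictions or data.

```python
# Rough arithmetic for the thought-experiment above; every input is an assumption.
workers = 150e6             # approximate US workforce implied by "40% = 60 million"
displaced = workers * 0.40  # workers whose jobs AI now performs: 60 million

for monthly_ubi in (1_500, 5_000):  # Altman-scale UBI vs. wage-matching UBI
    annual_cost = displaced * monthly_ubi * 12
    print(f"${monthly_ubi:,}/month for {displaced / 1e6:.0f} million people: "
          f"${annual_cost / 1e12:.1f} trillion per year")
```

On these assumptions, the Altman-scale UBI costs about $1.1 trillion per year and the wage-matching version about $3.6 trillion–and far more if the payment went to all adults rather than only to displaced workers–which indicates how heavily the tech. firms would have to be taxed.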

Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.

It all sounds like Karl Marx’s early utopian vision:

In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)

Problems:

  1. The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
  2. Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
  3. It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically resolve into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
  4. It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
  5. I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And how would people in Africa or Latin America gain leverage over US policy?
  6. For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers understood industrial machines, so they could run factories. None of us understands large language models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
  7. Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:

Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.

Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:

A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.

The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation. 

I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.


*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.

See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)

can AI solve “wicked problems”?

I’ve been reading predictions that artificial intelligence will wipe out swaths of jobs–see Josh Tyrangiel in The Atlantic or Jan Tegze. Meanwhile, this week, I’m teaching Rittel & Webber (1973), the classic article that coined the phrase “wicked problems.” I started to wonder whether AI can ever resolve wicked problems. If not, the best way to find an interesting job in the near future may be to specialize in wicked problems. (Take my public policy course!)

According to Rittel & Webber, wicked problems have the following features:

  1. They have no definitive formulation.
  2. There is no stopping rule, no way to declare that the issue is done.
  3. Choices are not true or false, but good or bad.
  4. There is no way to test the chosen solution (immediate or ultimate).
  5. It is impossible, or unethical, to experiment.
  6. There is no list of all possible solutions.
  7. Since each problem is unique, inductive reasoning can’t work.
  8. Each problem is a symptom of another one.
  9. You can choose the explanations, and they affect your proposals.
  10. You have “no right to be wrong.” (You are affecting other people, not just yourself. And the results are irreversible.)

Rittel and Webber argue that those features of wicked problems deflate the 20th-century ideal of a “planning system” that could be automated:

Many now have an image of how an idealized planning system would function. It is being seen as an on-going, cybernetic process of governance, incorporating systematic procedures for continuously searching out goals; identifying problems; forecasting uncontrollable contextual changes; inventing alternative strategies, tactics, and time-sequenced actions; simulating alternative and plausible action sets and their consequences; evaluating alternatively forecasted outcomes; statistically monitoring those conditions of the publics and of systems that are judged to be germane; feeding back information to the simulation and decision channels so that errors can be corrected–all in a simultaneously functioning governing process. That set of steps is familiar to all of us, for it comprises what is by now the modern-classical model of planning. And yet we all know that such a planning system is unattainable, even as we seek more closely to approximate it. It is even questionable whether such a planning system is desirable (p. 159).

Here they describe planning systems that would have been very labor-intensive in 1973, but many people today imagine that this is how AI works, or will work.

why are problems wicked?

Several of the ten features that make problems “wicked,” according to Rittel & Webber, relate to the difficulty of generating knowledge. Policy problems involve specific things that have many features or aspects and that relate to many other specific things. For example, a given school system has a vast and unique set of characteristics and is connected by causes and effects to other systems and parts of society. These qualities make a school system difficult to study in conventional, scientific ways. However, could a massive LLM resolve that problem by modeling a wide swath of society?

Another reason that problems are wicked is that they involve moral choices. In a policy debate, the question is not what would happen if we did something but what should happen. When I asked ChatGPT whether AI will be able to resolve wicked problems, it told me no, because wicked problems “are value-laden.” It added, “AI can optimize for values, but it cannot choose them in a legitimate way. Deciding whose values count, how to weigh them, and when to revise them is a normative, political act, not a computational one.”

Claude was less explicit about this point but emphasized that “stakeholders can’t even agree on what the problem actually is.” Therefore, an AI agent cannot supply a definitive answer.

A third source of the difficulty of wicked problems involves responsibility and legitimacy. In their responses to my question, both ChatGPT and Claude implied that AI models should not resolve wicked problems because they don’t have the right or the standing to do so.

what’s our underlying theory of decision-making?

Here are three rival views of how people decide value questions:

First, perhaps we are creatures who happen to want some things and abhor other things. We experience policies and their outcomes with pleasure, pain, or other emotions. It is better for us to get what we want–because of our feelings. Since an AI agent doesn’t feel anything, it can’t really want anything; and if it says it does, we shouldn’t care. Since we disagree about what we want, we must decide collectively and not offload the decision onto a computer.

Some problems with this view: People may want very bad things–should their preferences count? If we just happen to want various things, is there any better way to make decisions than to maximize as many subjective preferences as possible? Couldn’t a computer do that? But would the world be better if we did maximize subjective preferences?

In any case, you are not going to find a job making value-judgments. Today, lots of people are paid to make decisions, but only because they are assumed to know things. Nobody will pay for preferences. Life works the other way around: you have to pay to get your preferences satisfied.

Second, perhaps value questions have right and wrong answers. A candidate for the right answer would be utilitarianism: maximize the total amount of welfare. Maybe this rule needs constraints, or we should use a different rule. Regardless, it would be possible for a computer to calculate what is best for us. In fact, a machine can be less biased than humans.
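To show what an algorithmic version of this second view would look like, here is a deliberately toy sketch in Python. The options and the welfare numbers are invented for illustration; nothing here claims to measure actual welfare.

```python
# Toy utilitarian chooser. Each list holds a policy's (invented) welfare
# effects on persons 1, 2, and 3; the rule picks the largest total.
options = {
    "policy_a": [3, 5, -2],
    "policy_b": [1, 1, 1],
}

def utilitarian_choice(options):
    # The utilitarian rule: maximize the sum of welfare across everyone.
    return max(options, key=lambda name: sum(options[name]))

print(utilitarian_choice(options))  # "policy_a": total welfare 6 beats 3
```

Even this cartoon shows where the debate resumes: someone must first decide what counts as welfare, how to measure it, and whose welfare is included before the maximization step can run.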

Some problems with this view: We haven’t resolved the debate about which algorithm-like method should be used to decide what is right. Furthermore, I and others doubt that good moral reasoning is algorithmic. For one thing, it appears to be “holistic” in the specific sense that the unit of assessment is a whole object (such as a school or a market), not separate variables.

Third, perhaps all moral opinions are strictly subjective, including the opinion that we should maximize the satisfaction of everyone’s subjective opinions. Then it doesn’t matter what we do. We could outsource decisions to a computer, or just roll a die.

The problem with this view: It certainly does matter what we do. If not, we might as well pack it in.

AI as a social institution

I am still tentatively using the following model. AI is not like a human brain; it is like a social institution. For instance, medicine aggregates vast amounts of information and huge numbers of decisions and generates findings and advice. A labor market similarly processes a vast number of preferences and decisions and yields wages and employment rates. These are familiar examples of entities that are much larger than any human being–and they can feel impersonal or even cruel–but they are composed of human inputs, rules, and some hardware.

Another interesting example: integrated assessment models (IAMs) for predicting the global impact of carbon emissions and the costs and benefits of proposed remedies. These models have been developed collaboratively and cumulatively for half a century. They take in thousands of peer-reviewed findings about specific processes (deforestation in Brazil, tax credits in Germany) and integrate them mathematically. No human being can understand even a tiny proportion of the data, methods, and instruments that generate the IAMs as a whole. But an IAM is a human product.

A large language model (LLM) is similar. At a first approximation, it is a machine that takes in lots of human-generated text, processes it according to rules, and generates new text. Just the same could be said of science or law. This description actually understates the involvement of humans, because we do not merely produce the text that the LLM processes to generate output. We also conceive the idea of an LLM, write the software, build the hardware, construct the data centers, manage the power plants, pour the cement, and otherwise work to make the LLM.
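To illustrate what “processes it according to rules” means at its simplest, here is a toy sketch of the generation loop that autoregressive LLMs share. The function next_token_probs is a stand-in for the trained network, not any company’s actual implementation.

```python
import random

def generate(next_token_probs, prompt_tokens, max_new_tokens=50):
    """Toy autoregressive loop: repeatedly sample one more token.

    next_token_probs stands in for the trained model: a function from
    the tokens so far to a {token: probability} mapping. In a real LLM,
    that function compresses patterns learned from billions of
    human-written documents.
    """
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)  # rules distilled from human text
        choice = random.choices(list(probs), weights=list(probs.values()))[0]
        tokens.append(choice)             # the new text accumulates
    return tokens
```

Even in this cartoon, every ingredient–the prompt, the sampling rule, and above all the probabilities learned from human writing–originates with people, which is the point of the paragraph above.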

If this is the case, then a given AI agent is not fundamentally different from a given social institution, such as a scientific discipline, a market, a body of law, or a democracy. Like these other institutions, it can address complexity, uncertainty, and disagreements about values. We will be able to ask it for answers to wicked problems. If current LLMs like ChatGPT and Claude refuse to provide such answers, it is because their authors have chosen–so far–to tell them not to.

However, AI’s rules are different from those in law, democracy, or science. I am biased to think that its rules are worse, although that could be contested. The threat is that AI will start to generate answers to wicked problems, and we will accept its answers because our own responses are not definitively better and because it responds instantly at low cost. But then we will lose not only the vast array of jobs that involve decision-making but also the intrinsic value of being decision-makers.


Source: Rittel, Horst W. J., and Melvin M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4.2 (1973): 155-169. See also: the human coordination involved in AI; the difference between human and artificial intelligence: relationships; the age of cybernetics; choosing models that illuminate issues–on the logic of abduction in the social sciences and policy

teaching in the era of AI (thoughts for fall 2025)

Artificial Intelligence is already disrupting education, especially in the humanities and portions of the social sciences. It is part of the “toxic brew” that makes my friend Austin Sarat, an Amherst professor, say that he’s “not ready to return to the classroom” this fall.

Students can use AI to extend their learning–to pose demanding and advanced questions or to summarize bodies of material so that they save time for reading other texts closely. But they can also use AI to reduce the total amount of valuable effort that they would have otherwise committed to a course, thereby learning less from it. As Clay Shirky writes, “If the student’s preferred working methods reduce mental effort, we have to reintroduce that effort somehow.”

I think writing and reading are distinct issues.

AI can assist writers in valuable ways. It can be a thought-partner, a preliminary reader, a copy-editor, and even a drafter of routine passages. Writing for school or college–writing to learn–is a special case, because the goal is not to generate the text but to develop one’s understanding and skills. There can be no substitute for struggling mentally with this task. A student can use AI to help, but a reliable question for students to ask themselves is whether they have invested effort in the document that bears their name. If not, they can’t have learned much or anything.

To some extent, we instructors can alter incentives so that students write without relying on AI. In a course that I am co-teaching this fall, we’ll require an in-class midterm. Oral presentations and exams are worth considering. A new independent study finds that commercial tools are quite good–right now–at detecting AI-generated text.

Nevertheless, students will probably get away with learning less by relying on AI to write in college. My general philosophy is that you can lead the horse to water but not make it drink. Capable college students have always been able to cut corners to the detriment of their own learning. I did so, to some extent, long before AI. (I would sometimes read summaries in secondary sources instead of hard primary texts.) The main question is whether we can inspire and guide students who want to learn to work intensively on forming and expressing their own ideas.

Reading seems more problematic to me. Using AI to summarize texts is both more tempting and harder to monitor than using it for writing. When I open any PDF document in Chrome right now, Adobe pops up to tell me that it can summarize the file for me. ChatGPT usually does a credible job of producing notes on a text, including a whole book–and including whole books that I have written.

Once again, we can use these tools to extend learning. I sometimes use AI to summarize material that (frankly) I do not deeply respect but feel I should dip into. Although I don’t use the time that I save as well as I should, I do reserve some of it for close-reading hard texts.

The case I would make for reading is fundamentally spiritual. We are at grave risk of being caught inside our own limited heads. When we read carefully, we follow someone else’s thinking for a significant time. We are not merely notified of the author’s main points; we learn how they think, word by word and paragraph by paragraph. We learn what counts as a persuasive point or a telling example or a provocative question for another human being.

I think that many people would concede this point if the author is a literary genius. If you’re going to study Shakespeare at all, you obviously must read his work, because his language is admirable and integral to his project. But I want to make the same point about routine academic authors.

The typical contributor to the Journal of Politics is no William Shakespeare. Yet each competent scholarly author has a distinctive way of constructing an argument, and each subfield or scholarly community has its own shared ways. (Linguists would say that authors have idiolects of their own and dialects for their groups.) Struggling to make sense of a routine yet capable piece of academic writing is a way of getting out of one’s own mind. Of course, it is not the only way. Among many other activities, we should listen to people speak. But reading is one way to escape solipsism, which is a form of spiritual death.

See also: what I would advise students about ChatGPT (my 2023 iteration of these points); a collective model of the ethics of AI in higher education

Reading Arendt in Palo Alto

During a recent week at Stanford, I reread selections from Hannah Arendt’s On Revolution (OR) and The Human Condition (HC) to prepare for upcoming seminar sessions. My somewhat grim thoughts were evidently informed by the national news. I share them here without casting aspersions on my gracious Stanford hosts, who bear no responsibility for what I describe and are working on solutions.

I can imagine telling Arendt that Silicon Valley has become the capital of a certain kind of power, explaining how it reaches through Elon Musk to control the US government and the US military and through Musk and Mark Zuckerberg to dominate the global public sphere. I imagine showing her Sand Hill Road, the completely prosaic–although nicely landscaped–suburban highway where venture capitalists meet in undistinguished office parks to decide the flow of billions. This is Arendt’s nightmare.

For her, there should be a public domain in which diverse people convene for the “speech-making and decision-taking, the oratory and the business, the thinking and the persuading, and the actual doing” that constitutes politics (OR 24).

Politics enables a particular kind of equality: the equal standing to debate and influence collective decisions. Politics also enables a specific kind of freedom, because a person who decides with others what to do together is neither a boss nor a subordinate but a free actor.

Politics allows us to be–and to be recognized as–genuine individuals, having our own perspectives on topics that also matter to others (HC 41). And politics defeats death because it is where we concern ourselves with making a common world that can outlast us. “It is what we have in common not only with those who live with us, but also with those who were here before and with those who will come after us” (HC 55).

Politics excludes force against fellow citizens. “To be political, to live in a polis, meant that everything was decided through words and persuasion and not through force and violence” (HC 26). Speech is not persuasive unless the recipient is free to accept or reject it, and force destroys that freedom. By the same token, force prevents the one who uses it from being genuinely persuasive, which is a sign of rationality.

Musk’s DOGE efforts are clear examples of force. But I also think about when Zuckerberg decided to try to improve the schools of Newark, NJ. He had derived his vast wealth from developing a platform on which people live their private lives in the view of algorithms that nudge them to buy goods. He allocated some of this wealth to a reform project in Newark, discovered that people were ungrateful and that his plan didn’t work, and retreated in a huff because he didn’t receive the praise or impact that he expected to buy.

From Arendt’s perspective, each teenager in Newark was exactly Zuckerberg’s equal, worthy to look him in the eye and say what they should do together. This would constitute what she calls “action.” However, Zuckerberg showed himself incapable of such equality and therefore devoid of genuine freedom.

Musk, Zuckerberg, and other tech billionaires understand themselves as deservedly powerful and receive adulation from millions. But, says Arendt, “The popular belief in ‘strong men’ … is either sheer superstition … or is a conscious despair of all action, political and non-political, coupled with the utopian hope that it may be possible to treat men as one treats other ‘material’” (HC 188).

There is no public space on Sand Hill Road. Palo Alto has a city hall, but it is not where Silicon Valley is governed. And the laborers “who with their bodies minister to the [bodily] needs of life” (Aristotle) are carefully hidden away (HC 72).

Arendt describes how economic activity has eclipsed politics in modern times. Descriptions of private life in the form of lyric poetry and novels have flourished–today, thousands of fine novels are available on the Kindle store–a development “coinciding with a no less striking decline of all the more public arts, especially architecture” (HC 39). In her day, corporations still built quite impressive urban headquarters, like Rockefeller Center, which continued the tradition of the Medici Palace or a Rothschild estate. But Sand Hill Road is a perfect example of wealth refusing to create anything of public value. Unless you are invited to a meeting there, you just drive by.

Arendt acknowledges that people need private property to afford political participation and to develop individual perspectives. We each need a dwelling and objects (such as, perhaps, books or mementos) that are protected from outsiders: “a tangible, worldly place of one’s own” (HC 70). But we do not need wealth. Arendt decries the “present emergence everywhere of actually or potentially very wealthy societies which at the same time are essentially propertyless, because the wealth of any single individual consists of his share in the annual income of society as a whole” (HC 61). For example, to own a great deal of stock is not to have property (the basis of individuality) but to be part of a mass society that renders your behavior statistically predictable, like a natural phenomenon (HC 43). All those Teslas that cruise silently around Palo Alto are metaphors for wealth that is not truly private property.

Much of the wealth of Silicon Valley comes from digital media through which we live our private lives in the view of algorithms that assess us statistically and influence our behavior. For Arendt, “A life spent entirely in public, in the presence of others, becomes, as we would say, shallow” (HC 71). She is against socialist and communist efforts to expropriate property, but she also believes that privacy can be invaded by society in other ways (HC 72). She expresses this concern vaguely, but nothing epitomizes it better than a corporate social media platform that becomes the space for ostensibly private life.

Artificial Intelligence represents the latest wave of innovation in Silicon Valley, producing software that appears to speak in the first-person singular but actually aggregates billions of people’s previous thought. Arendt’s words are eerie: “Without the accompaniment of speech…, action would not only lose its revelatory power, but, and by the same token, it would lose its subject; not acting men but performing robots would achieve what, humanly speaking, would be incomprehensible” (HC 178).

The result is a kind of death: “A life without speech and without action … is literally dead to the world; it has ceased to be a human life because it is no longer lived among men” (HC 176).


See also: Arendt, freedom, Trump (2017); the design choice to make ChatGPT sound like a human; Victorians warn us about AI; “Complaint,” by Hannah Arendt etc.