AI as Satanic

“Now there was a day when the sons of God came to present themselves before the LORD, and Satan came also among them. And the LORD said unto Satan, Whence comest thou?

Then Satan answered the LORD, and said, From going to and fro in the earth, and from walking up and down in it” (Job 1:6–7)

Iain McGilchrist quoted this verse in a keynote that I just heard him deliver at a conference at Duke. McGilchrist ranged from neuroscience to theology in a long and rich talk. His premises were scientific, metaphysical, moral, and political, and I wouldn’t endorse them all. But his description of artificial intelligence as satanic is worth serious consideration on its own.

For me (although perhaps not for McGilchrist), Satan is a metaphor. But we need metaphors or models to make sense of phenomena like AI, and Satan provides a valuable alternative to some other metaphors, such as AI as a tool, a machine, a mind, a person, or a social organization.

The Satanic metaphor draws our attention to temptation, which is Satan’s favorite trick. It presents AI as not new but instead as an appearance of things that have been walking to and fro all along, such as greed and power-lust. It explains why AI might seem like a god to some (for instance, Silicon Valley tech-bros), since Satan is known to appear as a false savior. Large language models also speak to us as if they were people, talking sycophantically in the first-person singular, much as Satan does. (“Then Satan answered the LORD, and said, Doth Job fear God for nought?”) Finally, the metaphor poses the classic question of whether AI is an active force or rather a manifestation of human freedom.

See also: Reading Arendt in Palo Alto; the design choice to make ChatGPT sound like a human, etc.

Don’t Call them Underdogs

I wrote a review of a new PBS documentary about urban debate leagues for Education Next. It was published today, and it begins:

You may have seen a movie in which teenagers experience grave injustice and then enter a prestigious competition where they prove to the world that they are smart. The competition might be the AP math exam (Stand and Deliver, 1988), the National Spelling Bee (Akeelah and the Bee, 2006), robotics (Spare Parts, 2015), or chess (Queen of Katwe, 2016), to name just a few.

Typically, one charismatic adult believes in the kids, inspires them to confront their doubts and society’s stereotypes, and leads them—through setbacks—to an exciting victory that demonstrates their dignity and character as well as their skills.

Immutable, a new documentary film produced by Found Object and available for streaming at PBS on March 6, is much better …

the USA at 250: constitutional crisis

Last night, I was part of The United States at 250: A Tufts Faculty Panel. In a full room of students, Tufts historians and political scientists with various specialties addressed the question: “Where are we as a nation and what’s next?”

I offered the following argument. I have derived it from other people’s scholarship, and I am not sure it is true, but I think Americans should consider it.

We’re marking a 250th anniversary because 1776 began the period that concluded with our Constitution. However, the Constitution is now in a deep crisis, and we may be coming to the end of a 250-year period. The reasons are not named “Donald J. Trump.” There are three deeper reasons.

First, presidential republics have a fatal flaw, and none except the US–and arguably, France–has survived for a long period (Linz 1990). Whenever opposing parties control the legislature and executive, they are motivated to battle at the cost of the republic.

For most of our first two centuries, we did not have regular impasses, because the Democrats were divided into two major blocs, resulting in at least three effective parties in Congress; and most presidents could build a working majority. However, when conservative Democrats defected to the GOP, the two parties polarized. Since 1990, it has been possible to govern in the ways envisioned by the Constitution only when the same party has controlled both elected branches (6 periods of 14 total years). During the other 24 years since 1990, presidents have tried to rule by executive order and Congress has tried to undermine the current administration. We have moved ever closer to complete constitutional breakdown.

Second, the Constitution establishes three branches of government: the executive, legislative, and judiciary. Since at least 1932, we have actually had another branch: the administrative and regulatory agencies, staffed by about 2.2 million federal employees who are understood to be insulated from politics. They follow rules, norms, and principles of their own that are not mentioned in the Constitution–for example, scientifically measuring the costs and benefits of proposed policies and publishing drafts of policies for public review and comment. Perhaps we have also had a fifth branch, the national security apparatus.

We muddled through for decades by pretending that the agencies were part of the executive branch while the White House usually deferred to them. Under a 1984 Supreme Court decision, Chevron, the courts also generally deferred to agencies’ decisions. Meanwhile, Congress intentionally gave agencies broad scope. The regulatory state was largely independent from the other branches.

However, in 2024, the Court overruled Chevron in the Loper Bright decision, allowing courts to review agency decisions independently. And Donald Trump has fired and replaced many civil servants and members of so-called independent agencies for openly political reasons.

Libertarians argue that we shouldn’t have had a massive federal government in the first place. And populists of right and left argue that an elected president should be able to determine policies. A left populist may celebrate the opportunity for a Democratic president to reshape the agencies at will now that they have lost their independence. I think, however, that every country with an advanced economy has built an elaborate and quasi-independent regulatory apparatus that applies science and managerial acumen to generate benefits that voters want. We may not have that anymore.

Third, Congress no longer legislates, in the sense of passing or reforming substantive statutes. In 1965 alone, Congress passed at least 10 landmark bills that established agencies or dramatically altered national policies. As recently as the 1980s, Congress sometimes legislated by substantially cutting regulation. But Congress has arguably passed no major laws in this whole century so far.

For example, Congress has never passed legislation explicitly about the climate. Federal regulatory agencies have used the 1970 Clean Air Act (written before Congress was really aware of climate change) to try to regulate carbon. Likewise, federal financial laws were passed before cryptocurrency; and the Telecommunications Act of 1996 still governs despite some minor new developments, such as social media and smartphones.

In sum, we can’t handle frequent periods of divided government; our massive regulatory state lacks a constitutional basis; and the branch in which “all legislative power” is “vested” no longer legislates.

It is possible that we will keep driving ahead, frequently bumping into the Constitution’s guardrails but somehow staying on the road for decades.

Or we could see substantial reforms–major constitutional amendments or new voting laws that change the basic structure. (For instance, proportional representation would transform Congress–for better or worse–and could be accomplished by law.) I sometimes wonder whether our incompetent and blatantly authoritarian president is a blessing, alerting people to the need for reform without successfully consolidating power.

Or we could see a collapse. The typical final act of a presidential republic is a soft dictatorship. That’s why this topic is important to discuss on our 250th.


Prophetic works include Juan J. Linz, “The Perils of Presidentialism,” Journal of Democracy 1.1 (1990): 51–69, and Theodore Lowi, The End of Liberalism (1969). See also: rule of law means more than obeying laws: a richer vision to guide post-Trump reconstruction; on the Deep State, the administrative state, and the civil service; the Constitution is crumbling; etc.

What Counts As Success? Assessing The Impact Of Civics In Higher Ed

On February 18, the Alliance for Civics in the Academy hosted a webinar on “What Counts as Success? Assessing the Impact of Civics in Higher Ed” with Trygve Throntveit, Rachel Wahl, Joseph Kahne, and me.

We discussed some of the advantages of developing reliable and consistent measurements of civic education, particularly the opportunity to learn from data and the need to be accountable. We also discussed some drawbacks and risks, including Campbell’s Law (a remark by Donald T. Campbell): “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

We asked ourselves who should use assessments, and for what purposes. For example, it is a different matter for a college professor to get feedback from the students in a course or for a university to measure student outcomes. I thought the conversation was both intellectually serious and relevant to practice.

Panelists:

  • Rachel Wahl: Associate Professor in the Social Foundations Program, Department of Educational Leadership, Foundations, and Policy at the School of Education and Human Development at the University of Virginia
  • Joseph Kahne: Ted and Jo Dutton Presidential Professor for Education Policy and Politics and Director of the Civic Engagement Research Group at the University of California, Riverside.
  • Trygve Throntveit: PhD, Research Professor in Higher Education and Associate Director of the Center for Economic and Civic Learning (CECL) at Ball State University.

I was the moderator. The video is here:

AI as the road to socialism?

Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.

The profits flow to shareholders. These profits are already taxed, but with tens of millions of new people out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech firms has become responsible for a substantial portion of the whole economy and is heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now largely produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.

Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would result in a massive loss of income for millions of people, which would have cascading effects. All the former office-workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the products produced by the tech companies.

However, the public might demand a UBI more like $5,000/month. Then half of today’s white collar workers would be worse off, but half would be richer–and none would have to work.
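The arithmetic of this thought-experiment can be sketched as follows. The 40% displacement rate and the $1,500 and $5,000 monthly figures come from the post itself; the 150-million labor-force base is my assumption, implied by the post's equation of 40% of workers with about 60 million Americans:

```python
# Back-of-the-envelope sketch of the UBI thought-experiment.
# Assumption (not in the post): a US labor force of ~150 million,
# implied by "40% of current workers, or about 60 million Americans."
labor_force = 150_000_000
displaced = int(labor_force * 0.40)        # workers whose jobs AI replaces

altman_ubi = 1_500                         # $/month, Silicon Valley-scale UBI
median_white_collar = 5_000                # $/month, median white-collar income

monthly_shortfall = median_white_collar - altman_ubi   # income lost per month
annual_loss = monthly_shortfall * 12                   # income lost per year

print(displaced)        # 60000000
print(annual_loss)      # 42000
```

On these numbers, a median white-collar worker displaced onto a $1,500 UBI loses $42,000 per year, which is why the post expects cascading effects on housing and services.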

Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.

It all sounds like Karl Marx’s early utopian vision:

In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)

Problems:

  1. The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
  2. Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
  3. It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically cohere into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
  4. It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
  5. I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And how would people in Africa or Latin America gain leverage over US policy?
  6. For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers have understood industrial machines, so they can run factories. None of us understand Large Language Models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
  7. Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:

Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.

Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:

A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.

The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation. 

I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.


*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.

See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)