What Counts As Success? Assessing The Impact Of Civics In Higher Ed

On February 18, the Alliance for Civics in the Academy hosted a webinar on “What Counts as Success? Assessing the Impact of Civics in Higher Ed” with Trygve Throntveit, Rachel Wahl, Joseph Kahne, and me.

We discussed some of the advantages of developing reliable and consistent measurements of civic education, particularly the opportunity to learn from data and the need to be accountable. We also discussed some drawbacks and risks, including Campbell’s Law, formulated by Donald T. Campbell: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

We asked ourselves who should use assessments, and for what purposes. For example, it is a different matter for a college professor to get feedback from the students in a course than for a university to measure student outcomes. I thought the conversation was both intellectually serious and relevant to practice.

Panelists:

  • Rachel Wahl: Associate Professor in the Social Foundations Program, Department of Educational Leadership, Foundations, and Policy at the School of Education and Human Development at the University of Virginia
  • Joseph Kahne: Ted and Jo Dutton Presidential Professor for Education Policy and Politics and Director of the Civic Engagement Research Group at the University of California, Riverside.
  • Trygve Throntveit, PhD: Research Professor in Higher Education and Associate Director of the Center for Economic and Civic Learning (CECL) at Ball State University.

I was the moderator. The video is here:

AI as the road to socialism?

Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.

The profits flow to shareholders. Shareholders are already taxed, but with tens of millions of people newly out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech firms have become responsible for a substantial portion of the whole economy and are heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now largely produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.

Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white-collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would result in a massive loss of income for millions of people, which would have cascading effects. All the former office workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the products produced by the tech companies.

However, the public might demand a UBI more like $5,000/month. Then half of today’s white-collar workers would be worse off, but half would be richer–and none would have to work.
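The arithmetic behind this thought experiment can be sketched in a few lines. This is a back-of-envelope illustration, not a forecast; the workforce figure of roughly 160 million is my assumption, and the other numbers (40% automation, $1,500 vs. $5,000 UBI, $5,000 median white-collar income) come from the scenario above:

```python
# Back-of-envelope arithmetic for the AI/UBI thought experiment (not a prediction).
# Assumption: a US workforce of roughly 160 million; other figures from the scenario.

us_workforce = 160_000_000
automation_share = 0.40            # share of jobs AI might replace

displaced = int(us_workforce * automation_share)
# ~64 million, i.e., "about 60 million" in round numbers
print(f"Displaced workers: about {displaced / 1e6:.0f} million")

median_white_collar = 5_000        # median white-collar monthly income ($)
for ubi in (1_500, 5_000):         # Altman-scale UBI vs. a median-matching UBI
    change = ubi - median_white_collar
    print(f"UBI ${ubi:,}/month: median earner's income changes by {change:+,} dollars/month")
```

At the $1,500 level, the median displaced white-collar worker loses $3,500 per month; at the $5,000 level, the median worker breaks even, which is why roughly half would be worse off and half better off.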

Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.

It all sounds like Karl Marx’s early utopian vision:

In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)

Problems:

  1. The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
  2. Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
  3. It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically resolve into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
  4. It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
  5. I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And how would people in Africa or Latin America gain leverage over US policy?
  6. For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers have understood industrial machines, so they can run factories. None of us understand Large Language Models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
  7. Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:

Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.

Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:

A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.

The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation. 

I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.


*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.

See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)

The Civic Stakes of Organizational Disagreement

A new Stanford Social Innovation Review series examines how organizations should handle disagreement. Tufts University’s Tisch College of Civic Life is proud to be a co-presenter of this series. Tisch College Professor of the Practice Ahmmad Brown is the curator and editor, and our colleague Nancy Marks even provided the professional art.

The first article is by our dean, Dayna Cunningham, and me. It is entitled “The Civic Stakes of Organizational Disagreement.” We consider the value of disagreement and dissent in different kinds of organizations (a social movement, a firm, and a university). We advocate for pluralism–not neutrality–as the guiding ideal. We argue that how organizations handle disagreement matters not only for their performance but also for democracy more broadly.

The citation is: Levine, P., & Cunningham, D. L. (2026). The Civic Stakes of Organizational Disagreement. Stanford Social Innovation Review. https://doi.org/10.48558/EYWC-EA67

living life as a story

Thesis: It is better to live as if one’s life were a story, yet many people cannot live that way.

A conventional story has a finite number of named characters, many of whom know many of the rest. These characters have constraints and limitations, but they also face at least some consequential choices. The choices they make contribute to the plot. Their choices tend to be related to their inner lives: their beliefs, desires, and character traits. Although they may spend most of their time separately and quietly, the narrative emphasizes their interactions. In fact, dialogue occupies much of a conventional novel and all the text of a play or a screenplay. In biographies and narrative histories, quotations from speech may be shorter, but they are often prominent. What the characters think, do, and say is noticed and preserved–at least by the narrator, and usually by some of their fellow characters.

We can feel that our lives are like this, and we can be correct about it. Or we can feel (rightly or wrongly) that this is not how we live. Here are some threats to living as if in a story:

  • Modern economies (capitalist or socialist) that organize masses of workers so that each one feels little agency, while many live so precariously that they cannot make consequential decisions.
  • State tyranny, which not only blocks consequential choices and suppresses frank discussion but also invades the private spaces in which people could develop independent beliefs and values.
  • Hypertrophied science and technology, which make human behavior appear mechanical and predictable, or which actually control human beings.
  • Bureaucracy, which minimizes individual agency by applying rules, metrics, and files.
  • Ideologies, in the pejorative sense of all-encompassing theories that explain individual choices away or that replace human characters with abstractions, such as classes or nations, as the major protagonists.
  • Loneliness or isolation, meaning the absence of the interactions that would constitute a conventional story.
  • A lack of solitude, which prevents the development of an inner life that can be described in a narrative and connected to overt actions.
  • Catastrophes, which wipe out the memories of characters and their actions.

(On that last point, Jonathan Lear writes:

Not long ago, I listened to a lecture on climate change. The lecture went as one might expect. There was a warning of impending ecological catastrophe and talk of the “Anthropocene,” suggesting that our age—the age in which humans dominate the Earth—is coming to an end. At the end of the talk, there was a discussion period. At one point, a young academic stood up and said simply, “Let me tell you something: We will not be missed!” She then sat down. There was laughter throughout the audience. It was over in a moment.

Lear develops the idea that missing or mourning things is a distinctively human contribution; and it is ineffably sad that no one would miss homo sapiens, even if we cause our own extinction, and even if other species would be better off without us. It means that all the stories would be gone.)

I think many of us assume that our lives are like stories and that some other people notice and remember our roles in them. For us, the evaluative questions are: How is this story turning out? And what kind of a character am I? I would rather live in a comedy than in a tragedy, and I aspire to be the hero rather than the villain in my own little patch.

However, I think the main thrust of Hannah Arendt’s philosophy is that there is an antecedent question: Am I in a story at all? (See, e.g., The Human Condition, chapter v.) I believe she would say that it is better to be the villain in a tragedy than not to inhabit any kind of story, and that most modern people no longer do. The list of threats (above) comes directly from her work.

Note that this is a different ideal from the common one of authorship. For instance, Immanuel Kant defines ethical individuals as the authors of the rules that govern them:

The will is therefore not merely subjected to the law, but in such a way that it must also be regarded as self-legislating, and precisely for that reason must it be subject to the law (of which it can consider itself the author [als Urheber]).

In contrast, Arendt writes:

Although everybody started his life by inserting himself into the human world through action and speech, nobody is the author or producer of his own life story. In other words, the stories, the results of action and speech, reveal an agent, but this agent is not an author or producer. Somebody began it and is its subject in the twofold sense of the word, namely, its actor and sufferer, but nobody is its author (The Human Condition, p. 184).

For her, politics is the domain where people are characters but there is no author. This is a result of plurality: there are many of us, and no one (not even a dictator) can solely determine the outcomes.

Jürgen Habermas holds a generally similar view but presents all the citizens of a community as its authors (in the plural):

According to the republican view, the status of citizens is not determined by the model of negative liberties to which these citizens can lay claim as private persons. Rather, political rights—preeminently rights of political participation and communication—are positive liberties. They guarantee not freedom from external compulsion but the possibility of participation in a common praxis, through the exercise of which citizens can first make themselves into what they want to be—politically autonomous authors of a community of free and equal persons.

Authors and characters are metaphors, not literal descriptions. As such, they capture certain compelling ideas without fully describing reality. Here I want to suggest that the metaphor of characters draws our attention to urgent issues. We need social, political, and intellectual reforms to enable more people to live like characters in stories. These reforms require intentional action. We must be the authors of contexts in which people can be characters.


Sources: Jonathan Lear, Imagining the End: Mourning and Ethical Life (Harvard, 2022, p. 1); Kant, Grundlegung zur Metaphysik der Sitten (my trans.); Habermas, “Three Normative Models of Democracy,” in Seyla Benhabib (ed.), Democracy and Difference: Contesting the Boundaries of the Political (Princeton University Press, 1996). p. 22. See also: Hilary Mantel and Walter Benjamin; Kieran Setiya on midlife; a vivid sense of the future; the coincidences in Romola; and Freud on mourning the past.

What Counts as Success? Assessing the Impact of Civics in Higher Ed

The Alliance for Civics in the Academy hosts “What Counts as Success? Assessing the Impact of Civics in Higher Ed” with Trygve Throntveit, Rachel Wahl, Joseph Kahne, and Peter Levine on February 18, 2026, from 9:00-10:00 a.m. PT / noon-1:00 p.m. Eastern.

As higher education renews its commitment to civic education, questions about how to define and measure success have become increasingly urgent. This webinar examines the strengths and limitations of common metrics and considers how different measures reflect competing visions of civic purpose in higher education. Participants will explore emerging frameworks for assessing civic learning and engagement, and discuss how institutions can align assessment practices with their educational missions and democratic goals.

Please register here.