AI as the road to socialism?

Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.

The profits flow to shareholders. Shareholders are already taxed, but with tens of millions of people newly out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech firms has become responsible for a substantial portion of the whole economy and is heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now largely produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.

Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white-collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would mean a massive loss of income for millions of people, with cascading effects. All the former office workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the products of the tech companies.

However, the public might demand a UBI more like $5,000/month. Then half of today’s white collar workers would be worse off, but half would be richer–and none would have to work.
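The arithmetic behind these two scenarios is simple enough to check. Here is a minimal back-of-envelope sketch in Python, using the post’s stylized figures (60 million displaced workers, a $5,000/month median white-collar income); the inputs are the thought experiment’s assumptions, not forecasts:

```python
# Back-of-envelope arithmetic for the UBI thought experiment.
# All inputs are the post's stylized assumptions, not data.

DISPLACED_WORKERS = 60_000_000   # ~40% of current US workers, per the post
MEDIAN_WHITE_COLLAR = 5_000      # median white-collar income, $/month

def annual_cost(monthly_ubi, recipients=DISPLACED_WORKERS):
    """Yearly transfer needed to pay every recipient the monthly UBI."""
    return monthly_ubi * 12 * recipients

for label, ubi in [("Altman-scale", 1_500), ("public-demand", 5_000)]:
    change = ubi - MEDIAN_WHITE_COLLAR   # monthly change for a median earner
    print(f"{label} UBI (${ubi}/mo): "
          f"${annual_cost(ubi) / 1e12:.2f} trillion/year for displaced workers; "
          f"median earner's income changes by ${change:+,}/month")
```

Even at the smaller scale, the transfer runs over a trillion dollars a year for displaced workers alone, and a payment that was genuinely universal rather than limited to displaced workers would cost several times more.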

Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.

It all sounds like Karl Marx’s early utopian vision:

In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)

Problems:

  1. The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
  2. Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
  3. It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically cohere into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
  4. It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
  5. I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And what leverage would people in Africa or Latin America have over US policy?
  6. For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers have understood industrial machines, so they can run factories. None of us understand Large Language Models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
  7. Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:

Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.

Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:

A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.

The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation. 

I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.


*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.

See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)

The Civic Stakes of Organizational Disagreement

A new Stanford Social Innovation Review series examines how organizations should handle disagreement. Tufts University’s Tisch College of Civic Life is proud to be a co-presenter of this series. Tisch College Professor of the Practice Ahmmad Brown is the curator and editor, and our colleague Nancy Marks even provided the professional art.

The first article is by our dean, Dayna Cunningham, and me. It is entitled “The Civic Stakes of Organizational Disagreement.” We consider the value of disagreement and dissent in different kinds of organizations (a social movement, a firm, and a university). We advocate for pluralism–not neutrality–as the guiding ideal. We argue that how organizations handle disagreement matters not only for their performance but also for democracy more broadly.

The citation: Levine, P., & Cunningham, D. L. (2026). The Civic Stakes of Organizational Disagreement. Stanford Social Innovation Review. https://doi.org/10.48558/EYWC-EA67

living life as a story

Thesis: It is better to live as if one’s life were a story, yet many people cannot live that way.

A conventional story has a finite number of named characters, many of whom know many of the rest. These characters have constraints and limitations, but they also face at least some consequential choices. The choices they make contribute to the plot. Their choices tend to be related to their inner lives: their beliefs, desires, and character traits. Although they may spend most of their time separately and quietly, the narrative emphasizes their interactions. In fact, dialogue occupies much of a conventional novel and all the text of a play or a screenplay. In biographies and narrative histories, quotations from speech may be shorter, but they are often prominent. What the characters think, do, and say is noticed and preserved–at least by the narrator, and usually by some of their fellow characters.

We can feel that our lives are like this, and we can be correct about it. Or we can feel (rightly or wrongly) that this is not how we live. Here are some threats to living as if in a story:

  • Modern economies (capitalist or socialist) that organize masses of workers so that each one feels little agency, while many live so precariously that they cannot make consequential decisions.
  • State tyranny, which not only blocks consequential choices and suppresses frank discussion but also invades the private spaces in which people could develop independent beliefs and values.
  • Hypertrophied science and technology, which make human behavior appear mechanical and predictable, or which actually control human beings.
  • Bureaucracy, which minimizes individual agency by applying rules, metrics, and files.
  • Ideologies, in the pejorative sense of all-encompassing theories that explain individual choices away or that replace human characters with abstractions, such as classes or nations, as the major protagonists.
  • Loneliness or isolation, meaning the absence of the interactions that would constitute a conventional story.
  • A lack of solitude, which prevents the development of an inner life that can be described in a narrative and connected to overt actions.
  • Catastrophes, which wipe out the memories of characters and their actions.

(On that last point, Jonathan Lear writes:

Not long ago, I listened to a lecture on climate change. The lecture went as one might expect. There was a warning of impending ecological catastrophe and talk of the “Anthropocene,” suggesting that our age—the age in which humans dominate the Earth—is coming to an end. At the end of the talk, there was a discussion period. At one point, a young academic stood up and said simply, “Let me tell you something: We will not be missed!” She then sat down. There was laughter throughout the audience. It was over in a moment.

Lear develops the idea that missing or mourning things is a distinctively human contribution; and it is ineffably sad that no one would miss homo sapiens, even if we cause our own extinction, and even if other species would be better off without us. It means that all the stories would be gone.)

I think many of us assume that our lives are like stories and that some other people notice and remember our roles in them. For us, the evaluative questions are: How is this story turning out? And what kind of a character am I? I would rather live in a comedy than in a tragedy, and I aspire to be the hero rather than the villain in my own little patch.

However, I think the main thrust of Hannah Arendt’s philosophy is that there is an antecedent question: Am I in a story at all? (See, e.g., The Human Condition, chapter v.) I believe she would say that it is better to be the villain in a tragedy than not to inhabit any kind of story, and that most modern people no longer do. The list of threats (above) comes directly from her work.

Note that this is a different ideal from the common one of authorship. For instance, Immanuel Kant defines ethical individuals as the authors of the rules that govern them:

The will is therefore not merely subjected to the law, but in such a way that it must also be regarded as self-legislating, and precisely for that reason must it be subject to the law (of which it can consider itself the author [als Urheber]).

In contrast, Arendt writes:

Although everybody started his life by inserting himself into the human world through action and speech, nobody is the author or producer of his own life story. In other words, the stories, the results of action and speech, reveal an agent, but this agent is not an author or producer. Somebody began it and is its subject in the twofold sense of the word, namely, its actor and sufferer, but nobody is its author (The Human Condition, p. 184)

For her, politics is the domain where people are characters but there is no author. This is a result of plurality: there are many of us, and no one (not even a dictator) can solely determine the outcomes.

Jürgen Habermas holds a generally similar view but presents all the citizens of a community as its authors (in the plural):

According to the republican view, the status of citizens is not determined by the model of negative liberties to which these citizens can lay claim as private persons. Rather, political rights—preeminently rights of political participation and communication—are positive liberties. They guarantee not freedom from external compulsion but the possibility of participation in a common praxis, through the exercise of which citizens can first make themselves into what they want to be—politically autonomous authors of a community of free and equal persons.

Authors and characters are metaphors, not literal descriptions. As such, they capture certain compelling ideas without fully describing reality. Here I want to suggest that the metaphor of characters draws our attention to urgent issues. We need social, political, and intellectual reforms to enable more people to live like characters in stories. These reforms require intentional action. We must be the authors of contexts in which people can be characters.


Sources: Jonathan Lear, Imagining the End: Mourning and Ethical Life (Harvard, 2022, p. 1); Kant, Grundlegung zur Metaphysik der Sitten (my trans.); Habermas, “Three Normative Models of Democracy,” in Seyla Benhabib (ed.), Democracy and Difference: Contesting the Boundaries of the Political (Princeton University Press, 1996). p. 22. See also: Hilary Mantel and Walter Benjamin; Kieran Setiya on midlife; a vivid sense of the future; the coincidences in Romola; and Freud on mourning the past.

What Counts as Success? Assessing the Impact of Civics in Higher Ed

The Alliance for Civics in the Academy hosts “What Counts as Success? Assessing the Impact of Civics in Higher Ed” with Trygve Throntveit, Rachel Wahl, Joseph Kahne, and Peter Levine on February 18, 2026, from 9:00-10:00 a.m. PT/noon Eastern.

As higher education renews its commitment to civic education, questions about how to define and measure success have become increasingly urgent. This webinar examines the strengths and limitations of common metrics and considers how different measures reflect competing visions of civic purpose in higher education. Participants will explore emerging frameworks for assessing civic learning and engagement, and discuss how institutions can align assessment practices with their educational missions and democratic goals.

Please register here.

can AI solve “wicked problems”?

I’ve been reading predictions that artificial intelligence will wipe out swaths of jobs–see Josh Tyrangiel in The Atlantic or Jan Tegze. Meanwhile, this week, I’m teaching Rittel & Webber (1973), the classic article that coined the phrase “wicked problems.” I started to wonder whether AI can ever resolve wicked problems. If not, the best way to find an interesting job in the near future may be to specialize in wicked problems. (Take my public policy course!)

According to Rittel & Webber, wicked problems have the following features:

  1. They have no definitive formulation.
  2. There is no stopping rule, no way to declare that the issue is done.
  3. Choices are not true or false, but good or bad.
  4. There is no way to test the chosen solution (immediate or ultimate).
  5. It is impossible, or unethical, to experiment.
  6. There is no list of all possible solutions.
  7. Since each problem is unique, inductive reasoning can’t work.
  8. Each problem is a symptom of another one.
  9. You can choose the explanations, and they affect your proposals.
  10. You have “no right to be wrong.” (You are affecting other people, not just yourself. And the results are irreversible.)

Rittel and Webber argue that those features of wicked problems deflate the 20th-century ideal of a “planning system” that could be automated:

Many now have an image of how an idealized planning system would function. It is being seen as an on-going, cybernetic process of governance, incorporating systematic procedures for continuously searching out goals; identifying problems; forecasting uncontrollable contextual changes; inventing alternative strategies, tactics, and time-sequenced actions; stimulating alternative and plausible action sets and their consequences; evaluating alternatively forecasted outcomes; statistically monitoring those conditions of the publics and of systems that are judged to be germane; feeding back information to the simulation and decision channels so that errors can be corrected–all in a simultaneously functioning governing process. That set of steps is familiar to all of us, for it comprises what is by now the modern-classical model of planning. And yet we all know that such a planning system is unattainable, even as we seek more closely to approximate it. It is even questionable whether such a planning system is desirable (p. 159)

Here they describe planning systems that would have been very labor-intensive in 1973, but many people today imagine that this is how AI works, or will work.

why are problems wicked?

Some of the 10 reasons that some problems are “wicked,” according to Rittel & Webber, relate to the difficulty of generating knowledge. Policy problems involve specific things that have many features or aspects and that relate to many other specific things. For example, a given school system has a vast and unique set of characteristics and is connected by causes and effects to other systems and parts of society. These qualities make a school system difficult to study in conventional, scientific ways. However, could a massive LLM resolve that problem by modeling a wide swath of the society?

Another reason that problems are wicked is that they involve moral choices. In a policy debate, the question is not what would happen if we did something but what should happen. When I asked ChatGPT whether AI will be able to resolve wicked problems, it told me no, because wicked problems “are value-laden.” It added, “AI can optimize for values, but it cannot choose them in a legitimate way. Deciding whose values count, how to weigh them, and when to revise them is a normative, political act, not a computational one.”

Claude was less explicit about this point but emphasized that “stakeholders can’t even agree on what the problem actually is.” Therefore, an AI agent cannot supply a definitive answer.

A third source of the difficulty of wicked problems involves responsibility and legitimacy. In their responses to my question, both ChatGPT and Claude implied that AI models should not resolve wicked problems because they don’t have the right or the standing to do so.

what’s our underlying theory of decision-making?

Here are three rival views of how people decide value questions:

First, perhaps we are creatures who happen to want some things and abhor other things. We experience policies and their outcomes with pleasure, pain, or other emotions. It is better for us to get what we want–because of our feelings. Since an AI agent doesn’t feel anything, it can’t really want anything; and if it says it does, we shouldn’t care. Since we disagree about what we want, we must decide collectively and not offload the decision onto a computer.

Some problems with this view: People may want very bad things–should their preferences count? If we just happen to want various things, is there any better way to make decisions than to maximize as many subjective preferences as possible? Couldn’t a computer do that? But would the world be better if we did maximize subjective preferences?

In any case, you are not going to find a job making value-judgments. Today, lots of people are paid to make decisions, but only because they are assumed to know things. Nobody will pay for preferences. Life works the other way around: you have to pay to get your preferences satisfied.

Second, perhaps value questions have right and wrong answers. A candidate for the right answer would be utilitarianism: maximize the total amount of welfare. Maybe this rule needs constraints, or we should use a different rule. Regardless, it would be possible for a computer to calculate what is best for us. In fact, a machine can be less biased than humans.

Some problems with this view: We haven’t resolved the debate about which algorithm-like method should be used to decide what is right. Furthermore, I and others doubt that good moral reasoning is algorithmic. For one thing, it appears to be “holistic” in the specific sense that the unit of assessment is a whole object (such as a school or a market), not separate variables.

Third, perhaps all moral opinions are strictly subjective, including the opinion that we should maximize the satisfaction of everyone’s subjective opinions. Then it doesn’t matter what we do. We could outsource decisions to a computer, or just roll a die.

The problem with this view: It certainly does matter what we do. If not, we might as well pack it in.

AI as a social institution

I am still tentatively using the following model. AI is not like a human brain; it is like a social institution. For instance, medicine aggregates vast amounts of information and huge numbers of decisions and generates findings and advice. A labor market similarly processes a vast number of preferences and decisions and yields wages and employment rates. These are familiar examples of entities that are much larger than any human being–and they can feel impersonal or even cruel–but they are composed of human inputs, rules, and some hardware.

Another interesting example: integrated assessment models (IAMs) for predicting the global impact of carbon emissions and the costs and benefits of proposed remedies. These models have developed collaboratively and cumulatively for half a century. They take in thousands of peer-reviewed findings about specific processes (deforestation in Brazil, tax credits in Germany) and integrate them mathematically. No human being can understand even a tiny proportion of the data, methods, and instruments that generate the IAMs as a whole. But an IAM is a human product.

A large language model (LLM) is similar. At a first approximation, it is a machine that takes in lots of human-generated text, processes it according to rules, and generates new text. Just the same could be said of science or law. This description actually understates the involvement of humans, because we do not merely produce the text that the LLM processes to generate output. We also conceive the idea of an LLM, write the software, build the hardware, construct the data centers, manage the power plants, pour the cement, and otherwise work to make the LLM.

If this is the case, then a given AI agent is not fundamentally different from a given social institution, such as a scientific discipline, a market, a body of law, or a democracy. Like these other institutions, it can address complexity, uncertainty, and disagreements about values. We will be able to ask it for answers to wicked problems. If current LLMs like ChatGPT and Claude refuse to provide such answers, it is because their authors have chosen–so far–to tell them not to.

However, AI’s rules are different from those in law, democracy, or science. I am biased to think that its rules are worse, although that could be contested. The threat is that AI will start to generate answers to wicked problems, and we will accept its answers because our own responses are not definitively better and because it responds instantly at low cost. But then we will lose not only the vast array of jobs that involve decision-making but also the intrinsic value of being decision-makers.


Source: Rittel, Horst W. J., and Melvin M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4.2 (1973): 155-169. See also: the human coordination involved in AI; the difference between human and artificial intelligence: relationships; the age of cybernetics; choosing models that illuminate issues–on the logic of abduction in the social sciences and policy

Jaspers on collective responsibility and polarization

Here is a scene that has certain resonances with the present, although the circumstances were certainly different. …

It was the winter of 1945-6 in Heidelberg, Germany. Karl Jaspers, a distinguished professor, offered a lecture to a room full of demobilized soldiers, women, displaced civilians, and a fair number of wounded.

Jaspers had been banned from teaching since 1933 because he didn’t endorse the Nazi regime (except to sign a loyalty oath in 1934) and because his spouse was Jewish. He and his wife had been listed for arrest–and presumably death–but they were saved when the US Army arrived the previous March. The US military trusted Jaspers, who had been mediating between them and the university.

In the lecture, Jaspers notes that the Allied occupation is authoritarian; Germans have no say in their own governance. Later, he will insist that the fault for this situation lies with Germans alone. In the meantime, the occupation is not interfering with their freedom of speech.

Jaspers says that a university should never be a place for politics, in the narrow sense. “Dabbling in political actions and decisions of the day” is “never our business.” I suspect he is echoing Max Weber’s “The Meaning of ‘Ethical Neutrality’ in Sociology and Economics,” a lecture from 1917. Jaspers says that he and his audience are free to do what they should always do in a university. But what is that?

Jaspers is giving a lecture. He acknowledges that it can become propaganda even if the theme is democracy or freedom. “Talk from the platform is necessarily one-sided. We do not converse here. Yet what I expound to you has grown out of the ‘talking with each other’ [Miteinandersprechen] which all of us do, each in his own circle” (p. 5). He adds, “We want to reflect together while, in fact, I expound unilaterally. But the point is not dogmatic communication, but investigation and tender for examination on your part” (p. 9).

Reflecting together is essential, Jaspers argues, because it can change “consciousness,” which is a “precedent for our judgment in politics.” To accomplish this transformation, “We must learn to talk with each other, and we mutually must understand and accept one another in our extraordinary differences” (p. 5). This “self-education” (Selbsterziehung) is not politics, but perhaps it’s a preparation for politics (p. 9).

The need for dialogue is especially acute because Germans have had radically different experiences. Most Germans have experienced tragic losses, but it matters greatly whether one’s loved-one was killed on the battlefield while invading the USSR, bombed at home, or executed by the regime. Because there was no free speech, Germans have been unable to discuss such profound differences. Jaspers says, “Now that we can talk freely again, we seem to each other as if we had come from different worlds” (p. 13).

He never mentions how he was treated by the government or by his fellow Germans. Some of the people in the lecture room had different experiences from him–in the specific sense that they were actively involved in killing people like his wife. The proportion who supported the regime was vastly larger than the proportion who resisted it. Nevertheless, Jaspers diagnoses the situation as what we would call “polarization” (a deep disagreement among people), and he validates everyone’s experiences while attributing guilt to himself.

The solution that he proposes for polarization is dialogue. He says, “We want to learn to talk with each other. That is to say, we do not just want to reiterate our opinions but to hear what the other thinks. We do not just want to assert but to reflect connectedly, listen to reasons, remain prepared for a new insight. We want to accept the other, to try to see things from the other’s point of view; in fact, we virtually want to seek out opposing views” (pp. 5-6).

Jaspers’ opening is a very strong statement in favor of pluralistic dialogue and institutional neutrality, as we might call those things today. I find it moving because he humanizes everyone despite having every reason to be furious at them. But I also think his stance is debatable. Should universities be as detached from politics as he advocates? (Would it have helped if they had been less detached in 1925 or 1930?) Was the problem really “division,” or was it Nazism?

Jaspers then offers an analysis of the question of German war guilt. Central to his analysis is a famous four-way distinction among:

  1. Criminal guilt, which is attributable to individuals who have broken specific laws. It merits personal shame and punishment.
  2. Political guilt, which belongs to all members of a polity (a democracy or otherwise), because “Everybody is responsible for the way he is governed.” However, political guilt does not imply criminal guilt or the need for an individual penalty or shame. Germany as a whole is rightly occupied because of political guilt, which is not the fault of individual Germans. Similarly, I might say, “I didn’t vote for George W. Bush or the Iraq war, but I have responsibility for Iraq as a US citizen. I needn’t feel bad about it personally, but I must accept the political consequences.”
  3. Moral guilt: This is what one ought to feel as a result of being connected to an evil, even if one wasn’t personally responsible for what happened. It is what we would now call bad “moral luck.” For example, it is a matter of luck whether one was born a German or a Dane in 1905, but those who were born Germans have a form of guilt that is not due to their individual choices. Jaspers’ former student Hannah Arendt wrote (completely independently at about the same time): “That German refugees, who had the good fortune either to be Jews or to have been persecuted by the Gestapo early enough, have been saved from this guilt is of course not their merit.” If your conditions lead you to be good, you should reflect on your good fortune and not attribute your virtue to your self. If your conditions make you bad, you need penance and renewal.
  4. Metaphysical guilt: “There exists a solidarity among men as human beings that makes each co-responsible for every wrong and every injustice in the world, especially for crimes committed in his presence or with his knowledge.” The outcome of accepting metaphysical guilt is what Jaspers calls “transformation before God.” Again, Arendt wrote something similar at about the same time: “It is many years now that we meet Germans who declare that they are ashamed of being Germans. I have often felt tempted to answer that I am ashamed of being human.” I would paraphrase their idea as follows (without invoking God): acts of evil remind us that we are flawed creatures, and we should be mindful of that fact.

Jaspers’ lecture must have given his audience much to wrestle with, but it’s not clear that it went over well. Much later, his student Harry Pross recalled:

No one would have dared interrupt the lecture. There was not supposed to be any conversation between the students and the professors in the old lecture hall. Then [at the end of the lecture] the philosopher left, somewhat stiffly, without casting a single glance left or right. The students sat tight, as they had always done. “Pretty meshuggener,” one murmured as he walked out. “At least you don’t have to say ‘Heil’ any more,” his friend replied.


Quoting Jaspers from E.N. Ashton’s translation: The Question of German Guilt (Fordham, 2000). The German words come from a 1971 German edition of Die Schuldfrage (note that Germany is not named in the original title), published by Joseph Buttinger. Pross is quoted in Antonia Grunenberg and Adrian Daub, “Arendt, Heidegger, Jaspers: Thinking Through the Breach in Tradition,” Social Research, vol. 74, no. 4, 2007, p. 1013.

See also: Max Weber on institutional neutrality; don’t confuse bias and judgment; an international discussion of polarization; and in the Holocaust Museum (from 2006).

some upcoming talks on democracy and civic education

These talks are open to the public:

March 2, at Boston University, the Center for Media Innovation & Social Impact (MISI) is hosting me for “Civics in the Classroom.” I will talk about ways of modeling political beliefs that get beyond the left-right divide. (With support from Boston University Center for the Humanities, BUCH). Register here.

March 6-7, Duke: Cognitive Liberty conference. I’ll talk about “Civics in Higher Education and Cognitive Liberty.” Arthur Brooks, Rowan Williams, and others are speaking at the conference. Register through this page.

March 19-20, Boston College: The Clough Center for the Study of Constitutional Democracy will host its annual Spring Symposium on Democratic Resilience. Major speakers will include Daron Acemoglu, Ross Douthat, and others. I’ll be on a panel about “educating for resilience.” Register here.

April 10, Tufts: Summit on Civics in Higher Education. I am the primary organizer and will moderate a session. Register here.

April 13-14, University of South Carolina: I’ll be a keynote speaker, along with Jedediah Purdy, Deva Woodly, Arlie Hochschild, and others, at the Conference on Civic Engagement and the Constitutional Order.

a vivid sense of the future

My conception of the relatively distant future is almost empty. How things will be in 20 years, or 50–I have no idea. I am not motivated or inspired by any such vision.

Walter Benjamin would not approve. He concludes his “Theses on the Philosophy of History” with these words:

We know that the Jews were prohibited from investigating the future. The Torah and the prayers instruct them in remembrance, however. This stripped the future of its magic, to which all those succumb who turn to the soothsayers for enlightenment. This does not imply, however, that for the Jews the future turned into homogeneous, empty time. For every second of time was the strait gate through which Messiah might enter.

According to Benjamin, history was not linear for the ancient Hebrews. Studying the past revealed a future that could suddenly appear in the present. For them, the future was not empty. Nor were they like “soothsayers” who make predictions by studying current trends–like today’s pundits who project a magical, technological future based on what they observe today. The future for the ancient Jews was something radically different from the present yet foretold by the past, if you read it right.

Benjamin is thinking of the Hebrew prophets. For example, the Lord gives Amos a message to convey to the rich and powerful:

5:11Therefore because you trample on the poor
    and you exact taxes of grain from him,
you have built houses of hewn stone,
    but you shall not dwell in them;
you have planted pleasant vineyards,
    but you shall not drink their wine.
12 For I know how many are your transgressions
    and how great are your sins—
you who afflict the righteous, who take a bribe,
    and turn aside the needy in the gate.

A bit later, the Lord adds a hortatory or imperative sentence:

24 But let justice roll down like waters,
    and righteousness like an ever-flowing stream.

Such sentences from the Lord can have direct consequences, as in “God said, Let there be light: and there was light.” He then promises to shake the house of Israel, “as one shakes a sieve,” except that no pebble will make it through this shaking:

10 All the sinners of my people shall die by the sword,
    who say, ‘Disaster shall not overtake or meet us.’

This is not only a prediction (certain people will die) but also an instruction (stop denying your faults). Then comes a much more positive promise:

11 “In that day I will raise up
    the booth of David that is fallen
and repair its breaches,
    and raise up its ruins
    and rebuild it as in the days of old …

This is a vivid vision of the future–the text goes on for many verses describing it–brought into the present to serve a purpose. Amos’ prophecy is both a prediction and an exhortation. It chastises the wicked and comforts the oppressed.

Here is a quote from another text which–like the Hebrew Bible–impressed Walter Benjamin. It is Das Kapital (from the afterword to the second German edition):

With me [in contrast to Hegel], the ideal is nothing else than the material world reflected by the human mind, and translated into forms of thought. …

In its rational form [dialectics] is a scandal and abomination to bourgeoisdom and its doctrinaire professors, because it includes in its comprehension and affirmative recognition of the existing state of things, at the same time also, the recognition of the negation of that state, of its inevitable breaking up; because it regards every historically developed social form as in fluid movement, and therefore takes into account its transient nature not less than its momentary existence; because it lets nothing impose upon it, and is in its essence critical and revolutionary.

Marx is saying, on the one hand, that he simply studies “the existing state of things.” As a hard-headed scientist, he knows that the world is material and governed by laws. However, his analysis also reveals the “negation of that state,” an infinitely better future. This makes his text “critical and revolutionary.”

Benjamin begins his “Theses” with the famous story of the Mechanical Turk, the 18th-century automaton that appeared to be a machine capable of winning at chess. Actually, there was a man inside who pulled the strings. Benjamin says, “One can imagine a philosophical counterpart to this device. The puppet called ‘historical materialism’ is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight.”

Marxism can be interpreted as historical materialism. As a rigid system, it is unfalsifiable–“it can win all the time.” It is also inert, because human agency isn’t needed to bring about the future. It works like a machine and assumes that history is mechanical. Benjamin suggests, however, that Marxism is really a religion–in a good way. Its power is prophecy. Like the Torah, it instructs people in remembrance, conjures a future into the present, and inspires us to act.

I take my own bearings neither from Amos nor from Marx, yet I appreciate Benjamin’s idea that historical time is not linear. By acting politically, we change the meaning of the past and bring an imagined future into the now (Benjamin’s Jetztzeit). When we lose the capacity to envision a radically better future, we abandon our agency to impersonal forces.

See also: Martin Luther King’s philosophy of time; Kieran Setiya on midlife: reviving philosophy as a way of life; nostalgia in the face of political crisis (posts about Benjamin)

priorities of liberals and conservatives

In 1993, 1994, 2010, and 2021, the General Social Survey asked representative samples of Americans to choose “America’s highest priority, the most important thing it should do” from a list of four items: maintaining order in the nation, giving people more say in government decisions, fighting rising prices, or protecting freedom of speech.

I presume that these items were meant to test Ronald Inglehart’s “postmaterialism” thesis, the idea that once a society attains a high level of economic development, many voters become most concerned about non-economic issues. Inflation is a materialist concern, and the others are “post-materialist.”

Above, I show the responses to this question by political ideology (liberal, moderate, or conservative). I omit 1993 because it looks very similar to 1994. The GSS asks people to place themselves on a 7-point ideological scale, but I collapse that into three categories to increase the numbers in each cell.
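The collapsing step described above can be sketched in a few lines of pandas. This is a minimal illustration, not the author’s actual code: the tiny dataframe is invented, and the variable names (“polviews,” “priority”) are assumptions, though the GSS really does code its ideology item from 1 (“extremely liberal”) to 7 (“extremely conservative”).

```python
import pandas as pd

# Hypothetical respondent-level data. In the real GSS, the ideology item
# (POLVIEWS) runs from 1 ("extremely liberal") to 7 ("extremely conservative").
df = pd.DataFrame({
    "polviews": [1, 2, 3, 4, 5, 6, 7, 2, 6, 4],
    "priority": ["order", "voice", "speech", "prices", "order",
                 "prices", "order", "speech", "prices", "voice"],
})

# Collapse the 7-point scale into three categories (1-3, 4, 5-7)
# to increase the number of respondents in each cell.
ideology = pd.cut(df["polviews"], bins=[0, 3, 4, 7],
                  labels=["liberal", "moderate", "conservative"])

# Cross-tabulate priorities by ideology, as row percentages.
table = pd.crosstab(ideology, df["priority"], normalize="index") * 100
print(table.round(1))
```

With real GSS data one would also apply the survey weights before tabulating, but the cut-then-crosstab structure is the same.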

You might imagine that conservatives would be more likely to want to maintain order. Compared to liberals, that was true in 1994 and 2010, but not in 2021, when conservatives were the least likely to choose “order” as their priority. Furthermore, moderates were the most committed to order in 1994, not conservatives.

In 1994 and 2010, conservatives and moderates were more committed than liberals to popular voice in government. Perhaps this was “The West Wing” era, when many liberals were content to be technocrats. In 2021 (while a liberal was president) liberals and moderates had become more committed to voice than conservatives were.

Freedom of speech has not divided liberals from conservatives during this period. About one third of both groups have chosen it as the top priority in each year.

Inflation polarized opinion in 2021, with 55% of conservatives and only 19% of liberals choosing it as the top issue. Inglehart’s framework would suggest that moderates were the most “materialist” group in 1994, but conservatives had become the “materialists” by 2021. But I doubt that this is the right framework for interpreting the 2021 results. I think that conservatives’ spike in concern about inflation was a verdict on the incumbent Biden Administration, not a deep shift in values.

See also: trusting experts or ordinary people; class inversion as an alternative to the polarization thesis; moving to the center is a metaphor, and maybe not a good one; recent changes in tolerance for controversial speakers

Summit on Civics in Higher Education

April 10 | Tufts University, Medford, MA

The Jonathan M. Tisch College of Civic Life at Tufts University and the Alliance for Civics in the Academy (ACA), with support from GBH, are proud to host a national summit on the state of civics in higher education.

The summit will convene practitioners, faculty, administrators, and students from across the United States to explore, discuss and compare models of civic practice in higher education.

Summit speakers and panelists will include Amy Binder, Mary Clark, Michael Clune, Dayna Cunningham, Andrew Delbanco, Fonna Forman, Bryan Garsten, Leslie Garvin, Caroline Attardo Genco, Tetyana Hoggan-Kloubert, Peter Levine, Jessica Kimpell Johnson, Jennifer Brick Murtazashvili, Josiah Ober, Mindy Romero, Jenna Silber Storey, Marisol Morales, and more.

The summit will bring together three categories of university-based centers and programs—including diverse representatives from each—that are influential and widespread:

  1. Colleges or programs of Civic Thought or Civic Studies. These entities offer civic education courses within a liberal arts curriculum. At least 13 are new initiatives at public universities. They may also produce research and public programs related to civic life.
  2. Centers and initiatives that engage higher education with communities in part to enhance their students’ civic skills and knowledge. These initiatives have roots in the Land Grant tradition (including the HBCU Land Grants) and the “Wisconsin Idea,” and many are ambitious and innovative today.
  3. Democracy research centers and institutes based in universities that aim to improve democracy or civil society by generating research, tools, and events for the public.

Panel sessions will explore these three categories, while plenary discussion will compare them and provoke reflection on questions like these:

  • To what extent should college-level civic education be about reading and discussing texts?
  • To what extent should civic education be experiential, and which kinds of experiences are most valuable?
  • Should colleges and universities be embedded in and accountable to local communities, to states, to the nation, to transnational communities, and/or to the globe?
  • What does it mean to promote viewpoint diversity in each type of program? Are there other dimensions of disagreement that are also (or more) relevant than ideology?
  • Is the goal of civic education to build support for the constitutional order, to subject the system to critical scrutiny and improvement, or both?

We anticipate rich discussions and constructive disagreements that will enrich participants’ views of these issues while also strengthening the intellectual community.

Please register on the summit site and check it for the full agenda and list of speakers. This information will be updated as the summit develops.