Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.
The profits flow to shareholders. Shareholders are already taxed now, but with tens of millions of people newly out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech firms have become responsible for a substantial portion of the whole economy and are heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now heavily produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.
Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white-collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would mean a massive loss of income for millions of people, with cascading effects. All the former office workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the products produced by the tech companies.
However, the public might demand a UBI more like $5,000/month. Then half of today’s white-collar workers would be worse off, but half would be richer–and none would have to work.
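The arithmetic of the thought-experiment above can be checked with a quick back-of-envelope calculation. The 150 million figure for total US employment is an assumption implied by the essay’s own “40% of workers ≈ 60 million”; the other numbers come straight from the scenario.

```python
# Back-of-envelope check of the thought-experiment's numbers.
# Assumption: roughly 150 million employed Americans, which is what
# "40% of workers ≈ 60 million" implies; official figures differ somewhat.

employed = 150_000_000     # assumed total US employment
displaced_share = 0.40     # share of jobs the essay treats as automatable
displaced = int(employed * displaced_share)

median_income = 5_000      # essay's median white-collar monthly income ($)
altman_ubi = 1_500         # scale of UBI Altman is said to envision ($)
generous_ubi = 5_000       # scale the public might demand instead ($)

print(f"Displaced workers: {displaced:,}")                                        # 60,000,000
print(f"Monthly shortfall under a $1,500 UBI: ${median_income - altman_ubi:,}")   # $3,500
print(f"Monthly shortfall under a $5,000 UBI: ${median_income - generous_ubi:,}") # $0
```

Under the lower UBI, a median earner loses 70% of monthly income; under the higher one, the median earner breaks even while everyone below the median comes out ahead.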
Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.
It all sounds like Karl Marx’s early utopian vision:
In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)
Problems:
- The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
- Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
- It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically resolve into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
- It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
- I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And how would people in Africa or Latin America gain leverage over US policy?
- For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers have understood industrial machines, so they can run factories. None of us understand Large Language Models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
- Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:
Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.
Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:
A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.
The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation.
I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.
*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.
See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)