I’ve been reading predictions that artificial intelligence will wipe out swaths of jobs–see Josh Tyrangiel in The Atlantic or Jan Tegze. Meanwhile, this week, I’m teaching Rittel & Webber (1973), the classic article that coined the phrase “wicked problems.” I started to wonder whether AI can ever resolve wicked problems. If not, the best way to find an interesting job in the near future may be to specialize in wicked problems. (Take my public policy course!)
According to Rittel & Webber, wicked problems have the following features:
- They have no definitive formulation.
- There is no stopping rule, no way to declare that the issue is done.
- Choices are not true or false, but good or bad.
- There is no way to test the chosen solution (immediate or ultimate).
- It is impossible, or unethical, to experiment.
- There is no list of all possible solutions.
- Since each problem is unique, inductive reasoning can’t work.
- Each problem is a symptom of another one.
- You can choose the explanations, and they affect your proposals.
- You have “no right to be wrong.” (You are affecting other people, not just yourself, and the results are irreversible.)
Rittel and Webber argue that those features of wicked problems deflate the 20th-century ideal of a “planning system” that could be automated:
Many now have an image of how an idealized planning system would function. It is being seen as an on-going, cybernetic process of governance, incorporating systematic procedures for continuously searching out goals; identifying problems; forecasting uncontrollable contextual changes; inventing alternative strategies, tactics, and time-sequenced actions; stimulating alternative and plausible action sets and their consequences; evaluating alternatively forecasted outcomes; statistically monitoring those conditions of the publics and of systems that are judged to be germane; feeding back information to the simulation and decision channels so that errors can be corrected–all in a simultaneously functioning governing process. That set of steps is familiar to all of us, for it comprises what is by now the modern-classical model of planning. And yet we all know that such a planning system is unattainable, even as we seek more closely to approximate it. It is even questionable whether such a planning system is desirable (p. 159).
Here they describe planning systems that would have been very labor-intensive in 1973, but many people today imagine that this is how AI works, or will work.
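To see how mechanical that vision is, here is a minimal sketch of the loop the quotation describes–goals, problems, forecasts, alternatives, simulation, evaluation, feedback–written as toy Python. Every function name and number is a hypothetical placeholder of my own, not anything from Rittel & Webber or from any real planning software.

```python
# A toy version of the "idealized planning system" quoted above: an endless
# cybernetic loop of goal-seeking, forecasting, simulation, evaluation, and
# feedback. All functions are invented stubs; the point is the shape of the
# loop, not any real planning method.

import random

def search_goals(state):
    # Continuously revise what the system is trying to achieve.
    return {"welfare": 1.0, "equity": 0.5}

def identify_problems(state, goals):
    # A "problem" here is just a gap between a goal and the current state.
    return {k: target - state.get(k, 0.0) for k, target in goals.items()}

def forecast_context(state):
    # Uncontrollable contextual change, modeled as noise.
    return {k: v + random.gauss(0, 0.05) for k, v in state.items()}

def invent_alternatives(problems):
    # Candidate strategies: put more or less effort into the largest gap.
    worst = max(problems, key=problems.get)
    return [{"target": worst, "effort": e} for e in (0.0, 0.1, 0.3)]

def simulate(state, action):
    # Simulate the consequences of an action set.
    new_state = dict(state)
    new_state[action["target"]] = new_state.get(action["target"], 0.0) + action["effort"]
    return new_state

def evaluate(outcome, goals):
    # Score forecasted outcomes by remaining distance from the goals.
    return -sum(abs(target - outcome.get(k, 0.0)) for k, target in goals.items())

# The "simultaneously functioning governing process," run for a few cycles.
state = {"welfare": 0.2, "equity": 0.1}
for cycle in range(5):
    goals = search_goals(state)
    problems = identify_problems(state, goals)
    context = forecast_context(state)
    alternatives = invent_alternatives(problems)
    best = max(alternatives, key=lambda a: evaluate(simulate(context, a), goals))
    state = simulate(context, best)  # act, then monitor the result
    print(cycle, best, {k: round(v, 2) for k, v in state.items()})
```

Notice what the sketch has to assume: a fixed goal function, an enumerable list of alternatives, and a way to test outcomes. Those are exactly the features that Rittel & Webber say wicked problems lack.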
why are problems wicked?
Some of the ten reasons that problems are “wicked,” according to Rittel & Webber, relate to the difficulty of generating knowledge. Policy problems involve specific things that have many features or aspects and that relate to many other specific things. For example, a given school system has a vast and unique set of characteristics and is connected by causes and effects to other systems and parts of society. These qualities make a school system difficult to study in conventional, scientific ways. However, could a massive LLM resolve that problem by modeling a wide swath of society?
Another reason that problems are wicked is that they involve moral choices. In a policy debate, the question is not what would happen if we did something but what should happen. When I asked ChatGPT whether AI will be able to resolve wicked problems, it told me no, because wicked problems “are value-laden.” It added, “AI can optimize for values, but it cannot choose them in a legitimate way. Deciding whose values count, how to weigh them, and when to revise them is a normative, political act, not a computational one.”
Claude was less explicit about this point but emphasized that “stakeholders can’t even agree on what the problem actually is.” Therefore, an AI agent cannot supply a definitive answer.
A third source of the difficulty of wicked problems involves responsibility and legitimacy. In their responses to my question, both ChatGPT and Claude implied that AI models should not resolve wicked problems because they don’t have the right or the standing to do so.
what’s our underlying theory of decision-making?
Here are three rival views of how people decide value questions:
First, perhaps we are creatures who happen to want some things and abhor other things. We experience policies and their outcomes with pleasure, pain, or other emotions. It is better for us to get what we want–because of our feelings. Since an AI agent doesn’t feel anything, it can’t really want anything; and if it says it does, we shouldn’t care. Since we disagree about what we want, we must decide collectively and not offload the decision onto a computer.
Some problems with this view: People may want very bad things–should their preferences count? If we just happen to want various things, is there any better way to make decisions than to maximize as many subjective preferences as possible? Couldn’t a computer do that? But would the world be better if we did maximize subjective preferences?
In any case, you are not going to find a job making value judgments. Today, lots of people are paid to make decisions, but only because they are assumed to know things. Nobody will pay for preferences. Life works the other way around: you have to pay to get your preferences satisfied.
Second, perhaps value questions have right and wrong answers. A candidate for the right answer would be utilitarianism: maximize the total amount of welfare. Maybe this rule needs constraints, or we should use a different rule. Regardless, it would be possible for a computer to calculate what is best for us. In fact, a machine can be less biased than humans.
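If welfare really were measurable and comparable, the computation itself would be trivial to automate. Here is a hedged sketch of what “maximize the total amount of welfare” looks like as a calculation; the policy options, affected groups, and welfare numbers are all invented for illustration, and producing such numbers is precisely the contested part.

```python
# Toy utilitarian calculator: pick the policy that maximizes total welfare.
# The options, groups, and welfare effects below are hypothetical; the
# controversial work is generating such numbers in the first place.

options = {
    "expand_bus_service": {"commuters": +3.0, "drivers": -0.5, "taxpayers": -1.0},
    "widen_highway":      {"commuters": +1.0, "drivers": +2.0, "taxpayers": -2.5},
    "do_nothing":         {"commuters":  0.0, "drivers":  0.0, "taxpayers":  0.0},
}

def total_welfare(effects):
    # Classical utilitarianism: sum everyone's gains and losses equally.
    return sum(effects.values())

best = max(options, key=lambda name: total_welfare(options[name]))
for name, effects in options.items():
    print(f"{name:20s} total welfare = {total_welfare(effects):+.1f}")
print("chosen:", best)
```

Swap the `sum` for a weighted sum or a maximin rule and the chosen policy can change, which is one way of seeing the unresolved debate discussed next.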
Some problems with this view: We haven’t resolved the debate about which algorithm-like method should be used to decide what is right. Furthermore, I and others doubt that good moral reasoning is algorithmic. For one thing, it appears to be “holistic” in the specific sense that the unit of assessment is a whole object (such as a school or a market), not separate variables.
Third, perhaps all moral opinions are strictly subjective, including the opinion that we should maximize the satisfaction of everyone’s subjective opinions. Then it doesn’t matter what we do. We could outsource decisions to a computer, or just roll a die.
The problem with this view: It certainly does matter what we do. If not, we might as well pack it in.
AI as a social institution
I am still tentatively using the following model. AI is not like a human brain; it is like a social institution. For instance, medicine aggregates vast amounts of information and huge numbers of decisions and generates findings and advice. A labor market similarly processes a vast number of preferences and decisions and yields wages and employment rates. These are familiar examples of entities that are much larger than any human being–and they can feel impersonal or even cruel–but they are composed of human inputs, rules, and some hardware.
Another interesting example: integrated assessment models (IAMs) for predicting the global impact of carbon emissions and the costs and benefits of proposed remedies. These models have developed collaboratively and cumulatively for half a century. They take in thousands of peer-reviewed findings about specific processes (deforestation in Brazil, tax credits in Germany) and integrate them mathematically. No human being can understand even a tiny proportion of the data, methods, and instruments that generate the IAMs as a whole. But an IAM is a human product.
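To make “integrate them mathematically” concrete, here is a heavily simplified sketch of the kind of chain an IAM encodes, from emissions to warming to economic damage. The functional forms are loosely in the spirit of models like DICE, but every coefficient is an invented placeholder standing in for a peer-reviewed component estimate; real IAMs integrate thousands of such estimates.

```python
# A toy integrated assessment chain: emissions -> warming -> economic damage.
# All coefficients are illustrative placeholders, not values from any real IAM.

def warming_from_emissions(cumulative_gtc, tcre=0.0018):
    # Assumed roughly linear response of temperature to cumulative carbon.
    return tcre * cumulative_gtc

def damages_fraction(delta_t, a=0.002):
    # Assumed quadratic damage function: share of output lost at delta_t degrees.
    return a * delta_t ** 2

def project(annual_emissions_gtc, years, gdp=100.0):
    cumulative = annual_emissions_gtc * years      # total carbon emitted
    delta_t = warming_from_emissions(cumulative)   # one component estimate
    loss = damages_fraction(delta_t) * gdp         # another component estimate
    return delta_t, loss

for scenario, emissions in {"business_as_usual": 11.0, "rapid_cuts": 4.0}.items():
    dt, loss = project(emissions, years=75)
    print(f"{scenario:18s} warming ~{dt:.1f} C, output loss ~{loss:.1f} (toy units)")
```

Each function here stands in for an entire literature; composing them is what no single human being fully oversees, yet every piece is a human product.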
A large language model (LLM) is similar. At a first approximation, it is a machine that takes in lots of human-generated text, processes it according to rules, and generates new text. Much the same could be said of science or law. This description actually understates the involvement of humans, because we do not merely produce the text that the LLM processes to generate output. We also conceive the idea of an LLM, write the software, build the hardware, construct the data centers, manage the power plants, pour the cement, and otherwise work to make the LLM.
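The “text in, rules applied, text out” pattern can be illustrated with something far simpler than an LLM. The sketch below is a toy word-level Markov chain, not a neural network–real LLMs use learned weights, not lookup tables–but it shows the same shape: human-written text goes in, a rule is applied, new text comes out, and every part of the pipeline is made by people.

```python
# Toy illustration of "text in, rules applied, text out": a word-level Markov
# chain built from a scrap of human-written text. This is not how an LLM works
# internally; it only illustrates the human-input -> rules -> output pattern.

import random
from collections import defaultdict

corpus = (
    "planning problems are wicked problems "
    "wicked problems have no stopping rule "
    "wicked problems have no definitive formulation"
)

# "Training": record which word tends to follow which.
follows = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

# "Generation": emit new text by repeatedly applying the learned rule.
random.seed(0)
word = "wicked"
output = [word]
for _ in range(10):
    choices = follows.get(word)
    if not choices:
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))
```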
If this is the case, then a given AI agent is not fundamentally different from a given social institution, such as a scientific discipline, a market, a body of law, or a democracy. Like these other institutions, it can address complexity, uncertainty, and disagreements about values. We will be able to ask it for answers to wicked problems. If current LLMs like ChatGPT and Claude refuse to provide such answers, it is because their authors have chosen–so far–to tell them not to.
However, AI’s rules are different from those in law, democracy, or science. I am biased to think that its rules are worse, although that could be contested. The threat is that AI will start to generate answers to wicked problems, and we will accept its answers because our own responses are not definitively better and because it responds instantly at low cost. But then we will lose not only the vast array of jobs that involve decision-making but also the intrinsic value of being decision-makers.
Source: Rittel, Horst W. J., and Melvin M. Webber. “Dilemmas in a General Theory of Planning.” Policy Sciences 4.2 (1973): 155–169. See also: the human coordination involved in AI; the difference between human and artificial intelligence: relationships; the age of cybernetics; choosing models that illuminate issues–on the logic of abduction in the social sciences and policy