Here is a quick interview of me for Tufts’ Center for Expanding Viewpoints in Higher Education. I think the question was something like this: “Why is it important to include diverse points of view?” Even though I appear to be looking heavenward for answers, I stand by my claim that ethical reasoning is comparative; and we need direct exposure to diverse views to be able to make comparisons.
A subtle point: for reasons that Andrew Perrin and Christian Lundberg present in this Boston Globe editorial, I don’t love the metaphor of viewpoints. It implies that each person has a stance that explains all their specific views, and we either stand in the same place as another person (in which case our mentalities are identical) or in a different place (therefore destined to disagree). I prefer to think in terms of networks of beliefs that may overlap.* Nevertheless, John Stuart Mill’s basic argument for diversity of values applies.
I would also note that the argument for value-diversity conflicts with the goal of objectivity. If we can use objective methods to settle issues related to policy or social criticism, then it doesn’t matter what values we bring to the conversation. On the other hand, if values are simply manifestations of our viewpoints or identities (or preferences), then there is no point in reasoning about them. Ethical reasoning is neither subjective nor scientific but discursive and comparative.
There is a huge body of research that suggests that people are not very susceptible to good arguments. Apparently, we believe things for unexamined reasons, cherry-pick evidence to support our intuitive beliefs, and minimize the significance of inconvenient evidence.
These findings contribute to a general skepticism about people’s capacity for democracy, and I fear that this skepticism is self-reinforcing. If we presume that humans cannot reason well, why would we try to build institutions that promote reasoning? Only half jokingly, I sometimes say that the theme of current social science is: people are stupid and they hate each other.
But I also argue that at least some of this research employs methods that are biased against discovering rational thought. In particular, if you ask random samples of people disconnected survey questions that interest you (not them) and then use techniques such as factor analysis to find latent patterns, you will, indeed, often discover that people are stupid and hate each other. More prosaically, you will develop scales for latent variables like knowledge or tolerance that yield poor scores. But such methods may overlook the idiosyncratic ways that reasons influence individuals on the topics that matter to them.
Of all people, those who believe in false conspiracy theories are generally seen as the least susceptible to good reasons; and previous efforts to convince them have often failed. However, in a 2024 Science article, Thomas H. Costello, Gordon Pennycook, and David G. Rand report results of an intervention that substantially reduced people’s commitment to conspiracy theories, not only in the short term, but also two months later.
In this study, holders of conspiracy theories wrote about why they held their beliefs, and then an AI bot held a conversation with them in which it supplied reliable information directly relevant to the specific factual premises of each respondent. For instance, if a person believed that 9/11 was an “inside job” because Building 7 collapsed even though no plane hit it (see Wood and Douglas 2013), the AI might provide engineering information about Building 7. Many people were persuaded.
These results are consistent with a study of conversations with canvassers who succeeded in persuading many voters “by listening for individual voters’ … moral values and then tailoring their appeals to those moral values” (Kalla, Levine & Broockman 2022). The two studies differ in that one used people and the other, an AI bot; and one emphasized facts while the other focused on values. But both results point to a model in which each person holds various beliefs that are more-or-less connected to other beliefs as reasons, forming a network. Beliefs may be normative or empirical–they function very similarly. Discourse involves stating one’s beliefs and their connections to other beliefs that serve as premises or implications.
People actually do a lot of this and are relatively good at assessing the rigor of such conversations when they observe them (Mercier and Sperber 2017). However, many of our methods are biased against discovering such reasoning (Levine 2024a and Levine 2024b), leaving us with the mistaken impression that we are a bunch of idiots incapable of self-governance.
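The network model described above can be made concrete with a toy sketch. This is only an illustration under invented assumptions–the beliefs, confidence values, and propagation rule are all hypothetical, not drawn from the studies discussed here: beliefs are nodes, inferential links run from premises to conclusions, and challenging a premise weakens whatever depends on it.

```python
# Toy model of a belief network: beliefs are nodes, and inferential
# links run from premises to the conclusions they support. All
# beliefs, confidence values, and the propagation rule below are
# hypothetical illustrations, not data from the cited studies.
from collections import defaultdict

class BeliefNetwork:
    def __init__(self):
        self.confidence = {}               # belief -> confidence in [0, 1]
        self.supports = defaultdict(list)  # premise -> conclusions it supports

    def add_belief(self, belief, confidence):
        self.confidence[belief] = confidence

    def add_inference(self, premise, conclusion):
        self.supports[premise].append(conclusion)

    def challenge(self, premise, new_confidence):
        """Lower confidence in a premise and propagate one step
        downstream: each dependent conclusion is scaled by the ratio
        of the premise's new confidence to its old confidence."""
        old = self.confidence[premise]
        self.confidence[premise] = new_confidence
        for conclusion in self.supports[premise]:
            self.confidence[conclusion] *= new_confidence / old

net = BeliefNetwork()
net.add_belief("Building 7 fell without being hit by a plane", 0.9)
net.add_belief("9/11 was an inside job", 0.8)
net.add_inference("Building 7 fell without being hit by a plane",
                  "9/11 was an inside job")

# Supplying engineering information about Building 7 weakens the
# premise, and the conclusion that depends on it loses support too.
net.challenge("Building 7 fell without being hit by a plane", 0.2)
```

On this picture, a persuasive intervention works not by attacking the conclusion directly but by weakening the specific premise that holds it up, which is roughly what the tailored dialogues in the Costello, Pennycook, and Rand study did.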
Sources: Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714); Wood, M. J., & Douglas, K. M. (2013). “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Frontiers in Psychology, 4, 409; Kalla, J. L., Levine, A. S., & Broockman, D. E. (2022). Personalizing moral reframing in interpersonal conversation: A field experiment. The Journal of Politics, 84(2), 1239-1243; Mercier, H., & Sperber, D. (2017). The Enigma of Reason. Harvard University Press; Levine, P. (2024a). People are not points in space: Network models of beliefs and discussions. Critical Review, 1–27; Levine, P. (2024b). Mapping ideologies as networks of ideas. Journal of Political Ideologies, 29(3), 464-491.
Robert Brandom offers an influential and respected account of reasoning, which I find intuitive (see Brandom 2000 and other works). At the same time, a large body of psychological research suggests that reasoning–as he defines it–is rare.
That could be a valid conclusion. Starting with Socrates, philosophers who have proposed various accounts of reason have drawn the conclusion that most people don’t reason. Just for example, the great American pragmatist Charles Sanders Peirce defines reason as fearless experimentation and doubts that most people are open to it (Peirce 1877).
Brandom’s theory could support a similarly pessimistic conclusion. But that doesn’t sit well with me, because I believe that I observe many people reasoning. Instead, I suggest a modest tweak in his theory that would allow us to predict that reasoning is fairly common.
Brandom argues that any claim (any thought that can be expressed in a sentence) has both antecedents and consequences: “upstream” and “downstream” links “in a network of inferences.” To use my example, if you say, “It is morning,” you must have reasons for that claim (e.g., the alarm bell rang or the sun is low in the eastern sky) and you can draw inferences from it, such as, “It is time for breakfast.” In this respect, you are different from an app that notifies you when it’s morning or a parrot that has been reliably trained to say “It is morning” at sunrise. You can answer the questions, “Why do you believe that?” and “What does that imply?” by offering additional sentences.
(By the way, an alarm clock app cannot reason, but an artificial neural network might. As of 2019, Brandom considered it an open question whether computers will “participate as full-fledged members of our discursive communities or … form their own communities which would confer content” [Frápolli & Wischin 2019].)
Whenever we make a claim, we propose that others can also use it “as a premise in their reasoning.” That means that we implicitly promise to divulge our own reasons and implications. “Thus one essential aspect of this model of discursive practice is communication: the interpersonal, intra-content inheritance of entitlement to commitments.” In sum, “The game of giving and asking for reasons is an essentially social practice.” Reasoning in your own head is a special case, in which you basically simulate a discussion with real other people.
The challenge comes from a lot of psychological research that finds that beliefs are intuitive, in the specific sense that we don’t know why we think them. They just come to us. One seminal work is Nisbett and Wilson (1977), which has been cited nearly 18,000 times, often in studies that add empirical support to their view.
According to this theory, when you are asked why you believe what you just said, you make up a reason–better called a “rationalization”–for your intuition. Regardless of what you intuit, you can always come up with upstream and downstream connections that make it sound good. In that sense, you are not really reasoning, in Brandom’s sense. You are justifying yourself.
Indeed, the kinds of discussions that tend to be watched by spectators or recorded for posterity often reflect sequences of self-justifications rather than reasoning. I recently wrote about the scarcity of examples of real reasoning in transcripts and recordings of official meetings. As Martin Buber wrote in The Knowledge of Man (as pointed out to me by my friend Eric Gordon):
By far the greater part of what is called conversation among men would be more properly and precisely described as speechifying. In general, people do not really speak to one another, but each, although turned to the other, really speaks to a fictitious court of appeal whose life consists of nothing but listening to him.
Some grounds for optimism come from Mercier and Sperber (2017). They argue that people are pretty good at assessing the inferences that other people make in discussions. Although we may invent rationalizations for what we have intuited, we can test other people’s rationalizations and decide whether they are persuasive.
Furthermore, our intuitions are not random or rooted only in fixed characteristics, such as demographic identities and personality. Our intuitions have been influenced by the previous conversations that we have heard and assessed. For instance, if we hold an invidious prejudice, it did not spring up automatically but resulted from our endorsing lots of prejudiced thoughts that other people linked together into webs of belief. And it is possible–although difficult and not common–for us to change our intuitions when we decide that some inferences are invalid. Forming and revising opinions requires attentive listening, critical but also generous.
The modest tweak I suggest in Brandom’s view involves how we understand the “game of giving and asking for reasons.” We might assume that the main player is the person who gives a reason: the speaker. The other parties are waiting for their turns to play. But I would reverse that model. Giving reasons is somewhat arbitrary and problematic. The main player is the one who listens and judges reasons. A speaker is basically waiting for a turn to do the most important task, which is listening.
This view also suggests some tolerance for events dominated by “speechifying.” To be sure, we should prize genuine conversations in which people jointly try to decide what is right, and in which one person’s reasons cause other people to change their minds. This kind of relationship is the heart of Buber’s thought, and I concur. But it is unreasonable to put accountable leaders on a public stage and expect them to have a genuine conversation. None of the incentives push them in that direction. They are pretty much bound to justify positions they already held. Although theirs is not a conversation that would satisfy Buber, it does have two important functions: it allows us to judge people with authority, and it gives us arguments that we can evaluate as we form our own views.
Again, if we focus on the listener rather than the speaker, we may see more value in an event that is mostly a series of speeches.
It’s not so unusual to enter a discussion without having a firm view of what should be done (perhaps without even having focused on the agenda), to listen and speak for a while, and finally to reach a collective decision that wasn’t predictable in advance. Before the discussion begins, some of the participants may know what they want, but others are unsure or flexible, and the conversation forms or shifts the consensus.
However, it is not easy to find recordings or transcriptions of such processes. I spent part of the last long weekend reading the minutes of US school boards and city councils, scanning the translated transcripts of 50 village assemblies in southern India, dipping into legislative records from several countries, and even searching for archives of email exchanges within organizations that have come to light due to leaks or lawsuits. I found brief moments in which a few people seemed to be listening to make up their minds, but no sustained examples of that phenomenon.
My belief in the existence of deliberation might seem naive, except that I have experienced it many times–even in the past few months–and nobody seems surprised when it happens. I think we are used to it, but recorded examples are scarce.
Two reasons occur to me, and they may have some significance for political theory.
One is a kind of Heisenberg Uncertainty Principle for deliberation. Once you record a conversation and make it available for public review, its tenor shifts. Participants know that they are speaking to an audience composed of people who are not part of the conversation, not accountable for the decisions, and not able to reply in time to affect the outcome. Participants begin articulating positions and reasons that strangers can assess later, instead of trying to decide what to do. Thus the presence of an observer alters the phenomenon, especially if the observer is an anonymous audience in the future.
The other reason involves the typical function of official meetings. As I scan town council minutes, legislative records, and transcripts of Indian village assemblies, I see people talking under tight time pressure in order to convey specific positions. For instance:
[Member of the] Public: You have read that street lights are going to be fixed. When are you going to install?
Male (Officer): We have passed resolution, made a budget and allotted the work. Very soon we will do it.
Public: We need ration shop at Pappanaickenpatti. There are more than 300 cards in our town. Every time we have to come all the way from our village. By the time we reach the shop, all the items will be sold out. We request you to consider our petition. We are very badly in need of separate ration shop at our village. We are ready to construct a building for this purpose.
Male (Officer): We will discuss with the civil supplies officer and let you know. (source)
Or:
[Board of Trustees] President Raymond remarked she was impressed by the commitment to distance and blended learning by the District through the creation of the Department because it illustrated the learning models were ones the District was interested in fully developing and not just using during the current school year.
Trustee Taylor wondered if the vision was to have a single platform for all distance learning in the District.
Ms. Anderson cautioned it was important the District not make any major shifts at the present time related to the changing of learning platforms since the current model was very fragile. She would like to focus more on developing best practices for the platforms currently being used, due to the current strain teachers and students were under, and then look at creating more of a sustainable model in the future.
I think these people are doing valid public work. They are putting concerns, questions, directives, and commitments on the record. Before and after these recorded moments, the same individuals may have had many conversations in which they tried to learn about the situation and about other people’s views. The public event may also spur later discussions in their community.
These transcripts do not make me cynical about deliberative democracy. I imagine that the same people listen and learn. But deliberation is very rare during the moments that are captured on the public record.
This is a challenge for me because I am helping to develop methods for modeling deliberative conversations, and I am not finding a lot of material to analyze. I am especially interested in the regular decision-making processes of governments and other organizations. Yes, I also like citizens’ forums or “minipublics”–venues where representative people are convened to discuss public questions. I even co-edited a Handbook about them! But I think they are atypical, which means that evidence derived from them would not generalize well. (Besides, Indian village councils are minipublics, and they sound a lot like US public meetings.)
If you have suggestions for transcripts or videos that I could model, I would be grateful. In any case, I think our theories of democracy should take account of this pattern. Although we imagine that deliberation should occur in legislatures and town meetings, this is unlikely. A public meeting is a moment in a larger stream of discourse. Its function is to memorialize a set of positions and reasons, while the learning and the change take place elsewhere.
The Vuslat Foundation has opened a public website as generouslistening.org. At the Tisch College of Civic Life, we are one of their partners, as you can tell from the description of a conference that we co-organized and held at Tufts last year (a symposium on “Generous Listening in Organizations”); a blog post by my colleague James Fisher about Quaker dialogues in West Africa; and other references on their site.
The Foundation also does much work on their own or with other partners, including remarkable support-groups for women displaced by the earthquakes in Southeast Turkey in 2023.
I come to this partnership as someone who has studied political deliberation–for instance, as a co-editor (with John Gastil) of the Deliberative Democracy Handbook. Since the late 1900s, public deliberation has been a movement of theorists and practitioners, but it is rooted in much older ideas about politics that have typically emphasized speech, communication, persuasion, and rhetoric–as both virtues and threats.
The Vuslat Foundation has helped me to shift my focus from one side of the exchange to the other–from speaking to listening. Of course, these acts always go together (even when they are metaphors for written speech, signs, or gestures). It is hardly a novel insight that communication requires at least two people. But I have benefitted from thinking more about the listening side.
First, there’s an ethical imperative. Listening well (“generously,” in the language that the Vuslat Foundation has developed) is an important virtue. Using one’s voice well is also virtuous, and sometimes even obligatory, but the need to be a good listener seems especially compelling.
Second, we can think about listening holistically. One aspect is listening to other people in deliberations, but we also listen to ourselves, to animals, waves, or the wind, to human soundscapes, to near-silence, perhaps to the divine, and to those who are long dead. I have found it useful to think of civic listening as just one kind of listening.
Third, I am taken by the “interactionist” theory of Hugo Mercier and Dan Sperber, which has helped me make sense of some of my own data–in forthcoming articles. To summarize their model crudely, imagine two or more people discussing what to do. When individuals speak, they tend to use motivated reasoning: inventing justifications for what they already want to believe, sometimes for bad reasons, such as self-interested bias. But when they listen to other people offer reasons, they are relatively good at assessing whether these points are valid, and they may change their minds. Mercier and Sperber offer an evolutionary explanation that suggests that highly social and verbal primates would develop the ability to make arguments to advance their own interests, but also the ability to assess others’ arguments in order to make good collective judgments.
Mercier and Sperber never suggest that listening always goes well. We can certainly listen selectively and exhibit bad motives when we select whom and what to listen to. But their theory suggests that we can improve individual skills and conditions for listening–perhaps more easily than we can improve speaking.
Finally, listening has spiritual (or at least psychotherapeutic) benefits that have been recognized and developed in many traditions. Although we can also gain spiritually from communicating well, the listening side is especially relevant to meditative practices of all kinds.
Standards are official guidelines about what must be taught in public schools. They may influence enforceable policies, such as which textbooks are purchased and what is covered on exams, and hence the experience of students and teachers. Standards for history and civics often provoke the most intense debates, because they address the nature of our society. Although I had no involvement in the Virginia episode, I have been deeply engaged in other efforts to write frameworks and model standards for social studies, and Traub’s account rings true to me.
A very brief summary: under former Gov. Ralph Northam, a Democrat, the Virginia state department of education drafted new state social studies standards. Before these standards could be reviewed by the state board, Northam was succeeded by Glenn Youngkin, a Republican, whose campaign emphasized his opposition to “woke culture” and “critical race theory.” Youngkin named a new superintendent of public instruction and a majority of members of the school board. With those appointees in place, the state paused and then dramatically rewrote the draft standards, with input from strong conservatives.
Then the board, despite its Youngkin majority, rejected the new draft as biased and error-prone. It stepped in and painstakingly revised the document in ways that satisfied all of its members (including those who had been appointed by Northam) and drew support from outside groups viewed as both liberal and conservative. Traub writes, “The six-month debate was an absolutely terrible experience for everyone involved, yet the standards the board finally approved achieved something almost miraculous: something close to unity.”
As an example of the results, the state board coalesced around this language in the new standards document:
The standards provide an unflinching and fact-based coverage of world, United States, and Virginia history. Students will study the horrors of wars and genocide, including the Holocaust and the ethnic cleansing campaigns that have occurred throughout history and continue today. They will better understand the abhorrent treatment of Indigenous peoples, the indelible stain of slavery, segregation, and racism in the United States and around the world, and the inhumanity and deprivations of totalitarian and communist regimes. Students also will study inspirational moments …
For me, these are the most important general lessons from the controversy.
First, although people bring prior political views into debates about what should be taught, our opinions are highly diverse (not simply left or right), and most of us want students to encounter and assess ideas that we personally do not endorse. Philosophical diversity is valuable because even those of us who want students to encounter a wide range of views may have implicit biases that can be challenged in a discussion. When serious participants who are ideologically diverse try to write good standards or guidelines together, they need not polarize into two camps, or even take predictable positions as individuals.
Debates about content are nuanced and often involve the appropriate balance between social and political history, leaders and popular movements, compelling stories and complexities, and domestic and international affairs. These questions do not necessarily have liberal or conservative answers.
Second, the hot debates are not only about which topics and ideas should be “covered” but also about how to teach. Should all students be required to learn some information, whether it interests them or not? Or should students have a lot of choice about which topics to investigate? Should students encounter highly charged topics–at all ages, only as older teenagers, or at all? Specifically, should public schools confront students with ideas that challenge their sense that they belong and are valued in the school? Does it matter which students are so challenged? Should the emphasis be on skills or knowledge, on theory or practice, and on discourse or action?
Again, these debates do not line up so that there is a right and a left camp. For myself: I believe that all students should be required to confront some information about our past that many will find uncomfortable and that relatively few students would seek out if they could drive all the questions in their classrooms. This position would seem to align me with pedagogical conservatives, except that the same points are being made most forcefully by progressives. For example, The 1619 Project is all about conveying facts deemed essential.
As many have noted, the new Florida African American History Standards basically suggest that no one supported slavery. Florida students must learn “how the members of the Continental Congress made attempts to end or limit slavery” and “how slavery increased … in spite of the desire of the Continental Congress to end the importation of slaves.” Florida students will study white people who were abolitionists, but no one who actually defended slavery. John C. Calhoun is never mentioned, let alone assigned as an author to read. Florida students are supposed to “recognize” the title of Dred Scott as a “landmark Supreme Court case” but do not have to read that decision, which declared that people of African descent could never be US citizens.
I would require students to read racist texts (no “de-platforming” Sen. Calhoun or Chief Justice Taney) and learn specific information. Ron DeSantis defends omitting that information and has ordered that “A person should not be instructed that he or she must feel guilt, anguish, or other forms of psychological distress for actions, in which he or she played no part, committed in the past by other members of the same race or sex.” In partial contrast, the new Virginia standards say: “Students should be exposed to the facts of our past in a content-rich and engaging way, even when those facts are uncomfortable.”
Since these issues have many dimensions and nuances, it should not be surprising to find views shared across political differences. The Thomas B. Fordham Foundation is generally considered conservative. Commenting on the draft Virginia standards, their reviewers said, “The Dred Scott decision is not noted by name in any of the U.S. history course standards. Its enormous impact should at the least be mentioned here in what is (presumably) the high school course.” Likewise, they criticized the omission of McCarthyism, which “led to the violation of Americans’ rights.” I find myself perfectly aligned with this feedback despite being generally quite liberal as a voter.
Third, even when people’s views are diverse, nuanced, and unpredictable, there can be political advantages to presenting differences as polarized and defining the stakes so that a majority will agree with your own side. Glenn Youngkin waged a campaign against “woke” ideology in public schools. From the opposite end of the spectrum, someone went to a lot of trouble to create a popular meme about innocuous books that the DeSantis administration had allegedly banned, when the state had banned no books.
Actual misinformation is unacceptable, but I’ll mention a closer case. Florida did not pass a bill labeled “Don’t Say Gay.” That name was affixed by Democrats and liberals who criticized the law. The relevant provision says, “Classroom instruction by school personnel or third parties on sexual orientation or gender identity may not occur in kindergarten through grade 3 or in a manner that is not age-appropriate or developmentally appropriate.”
I am not sure that the label “Don’t Say Gay” is false, but it simplifies the law in order to drive opposition to it. This mode of political debate is not necessarily wrong or bad. I oppose the actual Florida law and understand why liberals would mobilize people against it.
Martin Luther King, Jr. and his colleagues chose Birmingham, AL as their target in 1963 because they knew they could draw a clear contrast with the racist outgoing police commissioner. King wrote that a nonviolent campaign
seeks so to dramatize the issue that it can no longer be ignored. … I must confess that I am not afraid of the word ‘tension.’ I have earnestly opposed violent tension, but there is a type of constructive, nonviolent tension which is necessary for growth. Just as Socrates felt that it was necessary to create a tension in the mind so that individuals could rise from the bondage of myths and half truths to the unfettered realm of creative analysis and objective appraisal, so must we see the need for nonviolent gadflies to create the kind of tension in society that will help men rise from the dark depths of prejudice and racism to the majestic heights of understanding and brotherhood.
In short, dramatizing differences with one’s political opponent is a legitimate move in a free society. However, onlookers should be aware when this strategy is being used and should assess whether the goals are appropriate, and whether any collateral damage is necessary to accomplish the goals. They should also ask whether rhetoric has strayed from divisiveness into downright falsehood.
Ron DeSantis does not have to wage a rhetorical war against liberal educators; he could choose to deliberate with them, as the Virginia board did. Voters should recognize the choice to polarize an issue for what it is. They should not assume that it is inevitable. The Virginia case shows that another outcome is possible (although not automatically preferable) — people with diverse opinions can come to agreement.
Although politicians can be tempted to polarize, official bodies such as state boards can be equally inclined to present consensus even when they have not quite accomplished it. Above, I quoted the Virginia standards’ aspiration to “provide an unflinching and fact-based coverage” of history, but anyone may assess whether they deliver that. In my personal opinion, the list of “principles” on p. 4 is mildly problematic, presenting the debate between socialism and market economies as closed when I would ask students to think about it for themselves. But I don’t believe that this list matters much. In my view, the presentation of slavery and Black American “accomplishments” in the body of the Virginia standards is appropriate. Overall, the standards seem to take a both/and approach, genuinely including both the crimes and the successes of US history.
The whole document is quite short and general, which is itself a choice, leaving a lot for teachers to decide (for better and worse). Any major commercial textbook series would be compatible with these standards, which means that in many classrooms, the textbook will determine the content. In fact, the most important policy question may be who should decide what is taught–students, teachers, parents, local authorities, state authorities, or publishers? Because of its generality, the Virginia document may actually represent a delegation to the publishers.
A few months ago, Paolo Spada and I published a blog post about sortition and the representativeness of citizens’ assemblies. We were pleasantly surprised by the response to our post and the ensuing discussions.
In this new exchange at the Deliberative Democracy Digest, Kyle Redman, Paolo Spada, and I try to delve deeper, exploring further the challenges of achieving representativeness in deliberative mini-publics. We extend our gratitude to Nicole Curato and Lucy J. Parry from the Centre for Deliberative Democracy and Global Governance for suggesting and facilitating this discussion.
For proponents of deliberative democracy, the last couple of years could not have been better. Propelled by the recent diffusion of citizens’ assemblies, deliberative democracy has definitely gained popularity beyond small circles of scholars and advocates. From CNN to the New York Times, the Hindustan Times (India), Folha de São Paulo (Brazil), and Expresso (Portugal), it is now difficult to keep up with all the interest in democratic models that promote the random selection of participants who engage in informed deliberation. A new “deliberative wave” is definitely here.
But with popularity comes scrutiny. And whether the deliberative wave will power new energy or crash onto the beach is an open question. As is the case with any democratic innovation (institutions designed to improve or deepen our existing democratic systems), critically examining assumptions is what allows for management of expectations and, most importantly, gradual improvements.
Proponents of citizens’ assemblies put representativeness at the core of their definition. In fact, it is one of their main selling points. For example, a comprehensive report highlights that an advantage of citizens’ assemblies, compared to other mechanisms of participatory democracy, is their typical combination of random selection and stratification to form a public body that is “representative of the public.” This general argument resonates with the media and the wider public. A recent illustration is an article by The Guardian, which depicts citizens’ assemblies as “a group of people who are randomly selected and reflect the demographics of the population as a whole.”
It should be noted that claims of representativeness vary in their assertiveness. For instance, some may refer to citizens’ assemblies as “representative deliberative democracy,” while others may use more cautious language, referring to assemblies’ participants as being “broadly representative” of the population (e.g. by gender, age, education, attitudes). This variation in terms used to describe representativeness should prompt an attentive observer to ask basic questions such as: “Are existing practices of deliberative democracy representative?” “If they are ‘broadly’ representative, how representative are they?” “What criteria, if any, are used to assess whether a deliberative democracy practice is more or less representative of the population?” “Can their representativeness be improved, and if so, how?” These are basic questions that, surprisingly, have been given little attention in recent debates surrounding deliberative democracy. The purpose of this article is to bring attention to these basic questions and to provide initial answers and potential avenues for future research and practice.
Citizens’ assemblies and three challenges of random sampling
Before discussing the subject of representativeness, it is important to provide some conceptual clarity. From an academic perspective, citizens’ assemblies are a variant of what political scientists normally refer to as “mini-publics.” These are processes in which participants: 1) are randomly selected (often combined with some form of stratification), 2) participate in informed deliberation on a specific topic, and 3) reach a public judgment and provide recommendations on that topic. Thus, in this text, mini-publics serves as a general term for a variety of practices such as consensus conferences, citizens’ juries, planning cells, and citizens’ assemblies themselves.
In this discussion, we will focus on what we consider to be the three main challenges of random sampling. First, we will examine the issue of sample size and the limitations of stratification in addressing this challenge. Second, we will focus on sampling error, which is the error that occurs when observing a sample rather than the entire population. Third, we will examine the issue of non-response, and how the typically small sample size of citizens’ assemblies exacerbates this problem. We conclude by offering alternatives to approach the trade-offs associated with mini-publics’ representativeness dilemma.
Minimal sample size, and why stratification does not help reduce sample size requirements in complex populations
Most mini-publics that we know of have a sample size of around 70 participants or less, with a few cases having more than 200 participants. However, even with a sample size of 200 people, representing a population accurately is quite difficult. This may be the reason why political scientist Robert Dahl, who first proposed the use of mini-publics over three decades ago, suggested a sample size of 1000 participants. This is also the reason why most surveys that attempt to represent a complex national population have a sample size of over 1000 people.
To understand why representing a population accurately is difficult, consider that a sample size of approximately 370 individuals is enough to estimate a parameter of a population of 20,000 with a 5% error margin and 95% confidence level (for example, estimating the proportion of the population that answers “yes” to a question). However, if the desired error margin is reduced to 2%, the sample size increases to over 2,000, and for a more realistic population of over 1 million, a sample size of over 16,000 is required to achieve a 1% error margin with 99% confidence. Although the size of the sample required to estimate simple parameters in surveys does not increase significantly with the size of the population, it still increases beyond the sample sizes currently used in most mini-publics. Sample size calculators are available online to demonstrate these examples without requiring any statistical knowledge.
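The calculations above can be reproduced with a short script. This is a minimal sketch using Cochran’s formula with a finite-population correction, assuming maximum variance (p = 0.5); the exact results differ slightly from the rounded figures in the text:

```python
import math

def sample_size(population, margin, confidence):
    """Required sample size to estimate a proportion, via Cochran's
    formula with a finite-population correction (assumes p = 0.5)."""
    # two-sided z-scores for the confidence levels used in the text
    z = {0.95: 1.96, 0.99: 2.576}[confidence]
    n0 = (z ** 2) * 0.25 / margin ** 2          # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite-population correction
    return math.ceil(n)

print(sample_size(20_000, 0.05, 0.95))      # 377 (the text's "approximately 370")
print(sample_size(20_000, 0.02, 0.95))      # 2144 ("over 2,000")
print(sample_size(1_000_000, 0.01, 0.99))   # 16319 ("over 16,000")
```

Note how weakly the required size depends on the population: the jump comes almost entirely from tightening the error margin and confidence level, not from the larger population.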
Stratification is a strategy that can help reduce the error margin and achieve better precision with a fixed sample size. However, stratification alone cannot justify the very small sample sizes that are currently used in most mini-publics (70 or less).
To understand why, suppose we want to create a sample that represents five important strata of the population and includes all their intersections: ethnicity, age, income, geographical location, and gender. For simplicity, let’s assume that the first four categories each have five equal groups in society, and gender is composed of two equal groups. The minimal sample required to include the intersections of all the strata and represent this population is 5^4 × 2 = 1,250. Note that we have maintained the somewhat unlikely assumption that all categories have equal size. If one stratum, such as ethnicity, includes a minority that is 1/10 of the population, then our multiplier would be 10 instead of 5, requiring a sample size of 5^3 × 10 × 2 = 2,500.
This multiplier is independent of the number of categories within the stratum: even if a stratum has only two categories, one comprising 90% (9/10) of the population and one comprising 10% (1/10), the multiplier is still 10. When we want to represent a minority of 1% (1/100) of the population, the multiplier becomes 100. Note that this minimal sample size would include the intersection of all the strata in such a population, but such a small sample will not be representative of each stratum. To achieve stratum-level representation, we need to increase the number of people in each stratum following the same mathematical rules we used for simple sampling, as described at the beginning of this section, generating a required sample size on the order of hundreds of thousands of people (in our example above, 370 × 2,500 = 925,000).
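The multiplier logic above can be sketched in a few lines. In this illustration, each stratum is described by the share of its smallest category, and the multipliers are simply the inverses of those shares (the function name is ours, not from any library):

```python
from fractions import Fraction
from math import prod, ceil

def minimal_cell_sample(smallest_shares):
    """Smallest sample that, under proportional allocation, seats at
    least one person from every intersection of the strata.  Each
    stratum contributes a multiplier equal to the inverse of the
    share of its smallest category."""
    return ceil(prod(1 / Fraction(s) for s in smallest_shares))

# Four strata with five equal groups, plus gender (two equal groups):
print(minimal_cell_sample(["1/5"] * 4 + ["1/2"]))            # 1250 = 5**4 * 2

# Replace one stratum with a 10% minority:
print(minimal_cell_sample(["1/5"] * 3 + ["1/10", "1/2"]))    # 2500 = 5**3 * 10 * 2

# Surveying each cell to ~370-person precision (5% margin, 95% confidence):
print(370 * minimal_cell_sample(["1/5"] * 3 + ["1/10", "1/2"]))  # 925000
```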
This is without even entering into the discussion of what should be the ideal set of strata to be used in order to achieve legitimacy. Should we also include attitudes such as liberal vs conservative? Opinions on the topic of the assembly? Metrics of type of personality? Education? Income? Previous level of engagement in politics? In sum, the more complex the population is, the larger the sample required to represent it.
Sampling error due to a lack of a clear population list
When evaluating sampling methods, it is important to consider that creating a random sample of a population requires a starting population to draw from. In some fields, the total population is well-defined and data is readily available (e.g. students in a school, members of parliament), but in other cases such as a city or country, it becomes more complicated.
The literature on surveys contains multiple publications on sampling issues, but for our purposes, it is sufficient to note that without a police state or similar means of collecting an unprecedented amount of information on citizens, creating a complete list of people in a country to draw our sample from is impossible. All existing lists (e.g. electoral lists, telephone lists, addresses, social security numbers) are incomplete and biased.
This is why survey companies charge significant amounts of money to allow customers to use their model of the population, which is a combination of multiple subsamples that have been optimized over time to answer specific questions. For example, a survey company that specializes in election forecasting will have a sampling model optimized to minimize errors in estimating parameters of the population that might be relevant for electoral studies, while a company that specializes in retail marketing will have a model optimized to minimize forecasting errors in predicting sales of different types of goods. Each model will draw from different samples, applying different weights according to complex algorithms that are optimized against past performance. However, each model will still be an imperfect representation of the population.
Therefore, even the best possible sampling method will have an inherent error. It is difficult, if not impossible, to perfectly capture the entire population, so our samples will be drawn from a subpopulation that carries biases. This problem is further accentuated for low-cost mini-publics that cannot afford expensive survey companies or do not have access to large public lists like electoral or census lists. These mini-publics may have a very narrow and biased initial subpopulation, such as only targeting members of an online community, which brings its own set of biases.
Non-response
A third factor, well-known among practitioners and community organizers, is the fact that receiving an invitation to participate does not mean a person will take part in the process. Thus, any invitation procedure has issues of non-participation. This is probably the most obvious factor that prevents one from creating representative samples of the population. In mini-publics with large samples of participants, such as Citizens’ Assemblies, the conversion rate is often quite low, sometimes less than 10%. By conversion rate, we mean the percentage of the people contacted who say that they are willing to participate and enter the recruitment pool. Simpler mini-publics of shorter duration (e.g. one weekend) often achieve higher engagement. A dataset on conversion rates of mini-publics does not exist, but our own experience in organizing Citizens’ Assemblies, Deliberative Polls, and clones tells us that it is possible to achieve more than 20% conversion when the topic is very controversial. For example, in the UK’s Citizens’ Assembly on Brexit in 2017, 1,155 people agreed to enter the recruitment pool out of the 5,000 contacted, generating a conversion rate of 23.1%, as illustrated below.[1]
Figure 1: Contact and recruitment numbers for the UK’s Citizens’ Assembly on Brexit (Renwick et al. 2017)
We do not pretend to know all the existing cases, and so this data should be taken with caution. Maybe there have been cases with 80% conversion, given it is possible to achieve such rates in surveys. But even in such hypothetical best practices, we would have failed to engage 20% of the population. More realistically, with 10 to 30% engagement, we are just engaging a very narrow subset of the population.
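The conversion-rate arithmetic is simple to verify. This sketch uses the Brexit assembly numbers cited above; the 10% scenario is purely illustrative:

```python
def conversion_rate(agreed, contacted):
    """Share of contacted people who agree to enter the recruitment pool."""
    return agreed / contacted

# UK Citizens' Assembly on Brexit (Renwick et al. 2017): 1,155 of 5,000
print(f"{conversion_rate(1155, 5000):.1%}")   # 23.1%

# At a 10% conversion rate, seating even a typical 70-person assembly
# requires inviting roughly 700 people.
print(round(70 / 0.10))                       # 700
```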
Frequently asked questions, and why we should not abandon sortition
It is clear from the points above that the assertion that the current generation of relatively small mini-publics is representative of the population from which it is drawn is questionable. Not surprisingly, the fact that participants of mini-publics differ from the population they are supposed to represent was documented more than a decade ago.[2] However, in our experience, when confronted with these facts, practitioners and advocates of mini-publics often raise various questions. Below, we address five frequently asked questions.
“But people use random sampling for surveys and then claim that the results are representative, what is the difference for mini-publics?”
The first difference, which we have already discussed, is that surveys aiming to represent a large population use larger samples than mini-publics do.
The second difference, less obvious, is that a mini-public is not a system that aggregates fixed opinions. Rather, one of the core principles of mini-publics is that participants deliberate and their opinions may change as a result of the group process and composition. Our sampling procedures, however, are based on the task of estimating population parameters, not generating input for legitimate decision making. While a 5% error margin with 95% confidence level may be acceptable in a survey investigating the proportion of people who prefer one policy over another, this same measure cannot be applied to a mini-public because participants may change their opinions through the deliberation process. A mini-public is not an estimate derived from a simple mathematical formula, but rather a complex process of group deliberation that may transform input preferences into output preferences and potentially lead to important decisions. Cristina Lafont has used a similar argument to criticize even an ideal sample that achieves perfect input representativeness.[3]
“But we use random assignment for experiments and then claim that the results are representative, what is the difference for mini-publics?”
Mini-publics can be thought of as experiments, similar to clinical trials testing the impact of a vaccine. This approach allows us to evaluate the impact of a mini-public on a subset of the population, providing insight into what would happen if a similar subset of the population were to deliberate. Continuing this metaphor, if the mini-public participants co-design a new policy solution and support its implementation, any similar subsets of the population going through an identical mini-public process should generate a similar output.
However, clinical trials require that the vaccine and a placebo be randomly assigned to treatment and control groups. This approach is only valid if the participants are drawn from a representative sample and cannot self-select into each experimental arm.
Unfortunately, few mini-publics compare the decisions made by members to those who were not selected, and this is not considered a key element for claiming representativeness or legitimacy. Furthermore, while random assignment of treatment and control is crucial for internal validity, it does not guarantee external validity. That is, the results may not be representative of the larger population, and the estimate of the treatment effect only applies to the specific sample used in the experiment.
While the metaphor of the experiment as a model to interpret mini-publics is preferable to the metaphor of the survey, it does not solve the issue of working with non-representative samples in practice. Therefore, we must continue to explore ways to improve the representativeness of mini-publics and take into account the limitations of the experimental metaphor when designing and interpreting their results.
“Ok, mini-publics may not be perfect, but are they not clearly better than other mechanisms?”
Thus far, we have provided evidence that the claim of mini-publics as representative of the population is problematic. But what about more cautious claims, such as mini-publics being more inclusive than other participatory processes (e.g., participatory budgeting, e-petitions) that do not employ randomization? Many would agree that traditional forms of consultation tend to attract “usual suspects”: citizens who have a higher interest in politics, more spare time, higher education, enjoy talking in public, and sometimes enjoy any opportunity to criticize. In the US, for instance, these citizens are often older white males, or, as a practitioner once put it, “the male, pale and stale.” A typical mini-public instead manages to engage a more diverse set of participants than traditional consultations. But mini-publics and self-selected consultations differ greatly in the sophistication and cost of their engagement strategies. Mini-publics tend to invest more resources in engagement, sometimes tens of thousands of dollars, and thus we cannot exclude that their better record on inclusion is purely due to better outreach techniques, such as mass recruitment campaigns and stipends for the participants.
Therefore, it is not fair to compare traditional consultations to mini-publics, just as it is not fair to compare mini-publics that are not specifically designed to include marginalized populations with open-to-all processes that are designed for exactly that purpose. The classic critique by feminist, intersectional, and social movement scholars, namely that mini-public design ignores existing inequalities and is thus inferior to dedicated processes of minority engagement, is valid in that case, because the amount spent on engagement is positively correlated with inclusion. For instance, processes specifically designed for immigrants and native populations will have more inclusive results than a general random selection strategy that has no specific quotas or engagement strategies for these groups.
We talk past one another when we try to rank processes with respect to their supposed inclusion performance without considering the impact of the resources dedicated to engagement or their intended effects (e.g. redistribution, collective action).
It is also difficult to determine which approach is more inclusive without a significant amount of research comparing different participatory methods with similar outreach and resources. As far as we know, the only study that compares two similar processes – one using random engagement and the other using an open-to-all invitation – found little difference in inclusiveness.[4] It also highlighted the importance of other factors such as the design of the process, potential political impact, and the topic of discussion. Many practitioners do not take these factors into account, and instead focus solely on recruitment strategies. While one study is not enough to make a conclusive judgment, it does suggest that the assumption that mini-publics using randomly selected participants are automatically more inclusive than open-to-all processes is problematic.
“But what about the ergonomics of the process and deliberative quality? Small mini-publics are undeniably superior to large open-to-all meetings.”
One of the frequently advertised advantages of small mini-publics is their capacity to support high-quality deliberation and include all members of the sample in the discussion. This is a very clear advantage; however, it has nothing to do with random sampling. It is not difficult to imagine a system in which an open-to-all meeting is called and then such a meeting selects a smaller number of representatives that will proceed to discuss using high-quality deliberative procedures. The selection rule could include quotas so that the selected members respect criteria of diversity of interest (even though, as we argued before, that would not be representative of the entire group). The ergonomics and inclusion advantages are purely linked with the size of the assembly and the process used to support deliberation.
“So, are you saying we should abandon sortition?”
We hope that it is now clearer why we contend that it is conceptually erroneous to defend the application of sortition in mini-publics based on their statistical representation of the population. So, should sortition be abandoned? Our position is that it should not, for a less obvious and somewhat counterintuitive reason: random sampling offers a fair way to exclude certain groups from the mini-public. This matters because, in certain cases, participatory mechanisms based on self-selection may be captured by organized minorities to the detriment of disengaged majorities.
Consider, for instance, one of President Obama’s first attempts to engage citizens at a large scale, the White House’s online town-hall. Through a platform named “open for questions,” citizens were able to submit questions to Obama and vote for which questions they would like him to answer. Over 92,000 people posted questions, and about 3.6 million votes were cast for and against those questions. Under the “budget” section of the questions, seven of the ten most popular queries were about legalizing marijuana, many of which were about taxing it. The popularity of this issue was attributed to a campaign led by NORML, an organization advocating for pot legalization. While the cause and ideas may be laudable, it is fair to assume that this was hardly the biggest budgetary concern of Americans in the aftermath of an economic downturn.
(Picture by Pete Souza, Wikimedia Commons)
In a case like the White House’s town-hall, randomizing who participates would be a fair and effective way to avoid the capture of the dialogue by organized groups. Randomization does not completely exclude the possibility of capturing a deliberative space, but it does increase the costs of doing so. The probability that members of an organized minority are randomly sampled into a mini-public is small, and so is the likelihood that they will be present in numbers sufficient to dominate it. Thus, even if we had a technological solution capable of organizing large-scale deliberation in the millions, a randomization strategy could still be an effective means to protect deliberation from capture by organized minorities. A legitimate method of exclusion will remain an asset – at least until we have another legitimate way to mitigate the ability of small, organized minorities to bias deliberation.
The way forward for mini-publics: go big or go home?
There is clearly a case for increasing the size of mini-publics to improve their ability to represent the population. But there is also a trade-off between the size of the assembly and the cost required to sustain high-quality deliberation. With sizes approaching 1000 people, hundreds of moderators will be required and much of the exchange of information will occur not through synchronous exchanges in small groups, but through asynchronous transmission mechanisms across the groups. This is not necessarily a bad thing, but it will have the typical limitations of any type of aggregation mechanism that requires participant attention and effort. For example, in an ideation process with 100 groups of 10 people each, where each group proposes one idea and then discusses all other ideas, each group would have to discuss 100 ideas. This is a very intense task. However, there could be filtering mechanisms that require subgroups to eliminate non-interesting ideas, and other solutions designed to reduce the amount of effort required by participants.
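One such filtering mechanism could work like a tournament. The following sketch is hypothetical (the pool size and the one-winner-per-pool rule are our assumptions, not a design from the literature), but it shows how filtering shrinks each group’s workload from 100 ideas to a couple of short rounds:

```python
import math

def rounds_and_workload(n_ideas, pool_size):
    """Tournament-style filtering: each round, ideas are split into
    pools of `pool_size`, and each pool advances its single best idea.
    Returns (number of rounds, total ideas each group must discuss)."""
    rounds = 0
    workload = 0
    while n_ideas > 1:
        workload += pool_size                 # ideas one group reads this round
        n_ideas = math.ceil(n_ideas / pool_size)
        rounds += 1
    return rounds, workload

# 100 groups, 100 ideas, pools of 10:
print(rounds_and_workload(100, 10))  # (2, 20): 20 ideas per group instead of 100
```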
All else being equal, as the size of the assembly grows, the logistical complexity and associated costs increase. At the same time, the ability to analyze and integrate all the information generated by participants diminishes. The question of whether established technologies like argument mapping, or even emerging artificial intelligence, could help overcome the challenges associated with mass deliberation is an empirical one – but it’s certainly an avenue worth exploring through experiments and research. Recent designs of permanent mini-publics, such as those adopted in Belgium (Ostbelgien, Brussels) and Italy (Milan), that resample a small new group of participants every year could attempt to include over time a sufficiently large sample of the population to achieve a good level of representation, at least for some strata of the population, as long as systematic sampling errors are corrected and obvious caveats in terms of representativeness are clearly communicated.
Another approach is to abandon the idea of achieving representativeness and instead target specific problems of inclusion. This is a small change in the current approach to mini-publics, but in our opinion, it will generate significant returns in terms of long-term legitimacy. Instead of justifying a mini-public through a blanket claim of representation, the justification in this model would emerge from a specific failure in inclusion. For example, imagine that neighborhood-level urban planning meetings in a city consistently fail to involve renters and disproportionately engage developers and business owners. In such a scenario, a stratified random sample approach that reserves quotas for renters and includes specific incentives to attract them, and not the other types of participants, would be a fair strategy to prevent domination. However, note that this approach is only feasible after a clear inclusion failure has been detected.
In conclusion, from a democratic-innovations perspective, there seem to be two productive directions for mini-publics: increasing their size or focusing on addressing failures of inclusiveness. Expanding the size of assemblies involves technical challenges and increased costs, but in certain cases it might be worth the effort. Addressing specific cases of exclusion, such as domination by organized minorities, may be a more practical and scalable approach. This second approach might not seem very appealing at first. But one should not be discouraged by our unglamorous example of fixing urban planning meetings. In fact, this approach is particularly attractive given that inclusion failures can be found across multiple spaces meant to be democratic – from neighborhood meetings to parliaments around the globe.
For mini-public practitioners and advocates like ourselves, this should come as a comfort: there’s no shortage of work to be done. But we might be more successful if, in the meantime, we shift the focus away from the representativeness claim.
****************
We would like to express our gratitude to Amy Chamberlain, Andrea Felicetti, Luke Jordan, Jon Mellon, Martina Patone, Thamy Pogrebinschi, Hollie Russon Gilman, Tom Steinberg, and Anthony Zacharewski for their valuable feedback on previous versions of this post.
[1] Renwick, A., Allan, S., Jennings, W., McKee, R., Russell, M. and Smith, G., 2017. A Considered Public Voice on Brexit: The Report of the Citizens’ Assembly on Brexit.
[2] Goidel, R., Freeman, C., Procopio, S., & Zewe, C. (2008). Who participates in the ‘public square’ and does it matter? Public Opinion Quarterly, 72, 792–803. doi: 10.1093/poq/nfn043
[3] Lafont, C., 2015. Deliberation, participation, and democratic legitimacy: Should deliberative mini‐publics shape public policy?. Journal of political philosophy, 23(1), pp.40-63.
[4] Griffin J. & Abdel-Monem T. & Tomkins A. & Richardson A. & Jorgensen S., (2015) “Understanding Participant Representativeness in Deliberative Events: A Case Study Comparing Probability and Non-Probability Recruitment Strategies”, Journal of Public Deliberation 11(1). doi: https://doi.org/10.16997/jdd.221
The federal government is authorized to spend an additional $2 trillion over the next 10 years through the Bipartisan Infrastructure Law, the CHIPS and Science Act, and the Inflation Reduction Act. I support many of the priorities in these laws.
But government spending should be democratic–at several levels. Operating in a democratic way is consistent with justice and is most likely to be sustainable, because people will feel relatively supportive of government programs that engage them. This is the version of social democracy or Great Society liberalism that I can get behind.
What does spending money democratically mean? First, a fairly elected, deliberative legislature should allocate the funds into large categories. That pretty much happened with these bills (acknowledging many imperfections).
Then the federal agencies and state and local governments that administer the funds should engage relevant communities in deciding how to spend the money in detail and should form partnerships with groups (which may not be federal grantees) to accomplish the intended outcomes of the spending. Finally, the funds should allow many people to be hired and given a voice in the programs–including those who do the blue-collar work.
Spending on public transportation is a good example. The White House says there will be “$89.9 billion in guaranteed funding for public transit over the next five years — the largest Federal investment in public transit in history.” This investment has potential benefits for climate, racial equity, and convenience and quality of life.
States and cities will receive portions of this money. They should give their communities appropriate voice in deciding what and where to build. They should form partnerships with community groups whose goals align (e.g., community development corporations that can build dense housing near the transit). And they should employ workers–often via contracts with businesses–who have a say and who see pathways to influential Green careers.
This approach is inconsistent with libertarian conservatism, which opposes the spending in the first place. It is also inconsistent with technocratic progressivism, which views community engagement with deep skepticism. Doesn’t “engagement” mean NIMBY groups that block valuable projects in their neighborhoods, well-resourced companies that grab government contracts, and process-driven delays that dilute the benefits for both environment and racial equity?
The truth is, public engagement must be done well. A one-time public meeting in which citizens line up at the microphone to yell at public officials–that is a recipe for disaster. A worthwhile process takes planning and money. It requires training and technical support for the federal civil servants, local public employees, and activists who are involved. Since no single training program can accomplish very much, success requires building experienced bodies of employees who have run processes before and have learned to do them better.
We have not tried this approach for many decades in the USA–not since the Great Society, which tried various experiments in community engagement under the heading of “Maximum Feasible Participation” (with mixed success).
Reagan depicted government as the problem, although federal outlays per capita, adjusted for inflation, rose rapidly during Reagan’s term and only stabilized under Clinton. Also, despite a rhetorical commitment to hiring contractors instead of career civil servants, the civil service actually grew in that era. However, I think that federal capacity for public engagement shrank, outside of certain notable programs. More importantly, Congress launched or redesigned very few social programs after the late 1960s. That means that most federal money has flowed into well-worn channels, offering limited opportunities for deliberation about what and how to spend.
Then, when the Obama Administration got a chance to allocate a substantial amount of new money in the 2009 stimulus, the progressive technocratic approach clearly won out. Efficiency was the by-word. Funds went to “shovel-ready” projects that were seen as offering the quickest return, or to initiatives informed by behavioral economics that were supposed to “nudge” people without them even being aware, or to competitions (like “Race to the Top”) that were meant to leverage non-federal funds. There was no sense that the public would be involved in defining and solving national problems along with the federal government.
Democratic spending is the path not taken, at least not since ca. 1965. We should find out whether it can produce sustainable, popular, and fair social outcomes in ways that we have not seen in my lifetime. That requires:
Setting aside tiny but real percentages of the federal funds for democratic and deliberative processes and for the training and technical assistance that they require. I am not sure to what extent those purposes are authorized under current law. If it is impossible to spend federal funds this way, then philanthropy should step up.
Considering new rules, such as offering special grants to communities that can demonstrate that they have reached agreement about priorities across traditional lines of difference, such as race, partisanship, or urban/suburban/rural divides. I’d be especially interested in agreements that bridge distant communities, such as coal towns and East Coast cities.
Intellectual leadership: influential people should articulate the value of public engagement. In the Obama Administration, the president did that, albeit somewhat vaguely, but no members of his cabinet and hardly any liberal public intellectuals backed him up. The stimulus package and Obamacare came across as strictly technocratic and were assessed only for their outcomes (while democratic culture waned). We need more effective voices to defend democracy this time.
When David Meyers of The Fulcrum asked me yesterday to comment on the fact that the public identifies “the government” as the biggest problem facing us today, I replied that the most promising solution is to spend money democratically. My reply was rooted in the best traditions of the New Deal and Great Society (as I see them), but it’s a fairly marginal view today. It’s an alternative to three prevalent assumptions: that democracy is mostly a matter of fair electoral processes, that activated citizens are often a nuisance, and that protecting democracy means uplifting some kind of political center. I think we must exercise power to improve the world, but do so in ways that empower our full diversity of people in their roles as citizens.
I think of a “teaching case” as a true story that culminates in a difficult decision confronting an individual or group. The decision is typically difficult because of conflicting values, incomplete information, and unpredictable outcomes. A teaching case is useful as a prompt for discussion and as a way to teach the disposition of acting wisely under uncertainty, or phronesis. I especially like cases in which groups must decide collectively, because those stories invite attention to the dynamics of group decision-making. Here is a selection of such “civic” cases: https://sites.tufts.edu/civicstudies/case-studies/
This semester, I have been co-teaching a course with Jennifer Howe Peace, who has extensive experience not only leading discussions based on teaching cases but also assigning students to write such cases. We did just that this fall. Each of our students selected a real-world situation, conducted research, wrote a 2-3 page case about it, and led a discussion.
I recommend this pedagogy for teaching the following essential civic skills:
Identifying decisions worthy of discussion. Actual groups often overlook or evade decisions that they should discuss and spend time on matters that don’t require deliberation. (See “a flowchart for collective decision-making in democratic small groups.”) Writing a case means choosing a topic that should be discussed.
Identifying the tradeoffs and other difficulties, such as incomplete information and unpredictability.
Identifying who is in a position to make which choices. It is a costly distraction to ask what someone should do if they can’t do it. A good written case centers on one or more protagonists who are able to choose.
Deciding when to start and end the story. This side of the Big Bang, every story has emerged from many previous ones. The web of human interaction has no beginning. The choice of when to start a written story frames it for readers; it is an act of judgment. (For instance, does the story of the USA begin in 1492, 1619, 1776, 1789 …?) Writing a case teaches the skill and ethics of picking beginnings and endings well.
Eliciting interest and attention. A well-written case makes its readers interested. Getting people’s attention is a basic civic skill.