Building Civic Capacity in an Era of Democratic Crisis by Hollie Russon-Gilman and K. Sabeel Rahman

About $3 billion was contributed to influence 2016 federal campaigns. In a new paper entitled “Building Civic Capacity in an Era of Democratic Crisis,” Hollie Russon-Gilman and K. Sabeel Rahman suggest a much better way to spend some of that money.

I realize, by the way, that political donors want candidates to notice their support. It would nevertheless make all the difference if they gave one percent of their $3 billion to activities that strengthen democracy–compensating for irradiating the body politic with polarizing and demoralizing messages. Progressive donors would also build the base for more progressive policy by investing for the longer term.

Russon-Gilman and Rahman argue “that today’s populist moment emphasizes the need to create a genuinely responsive, participatory form of democratic politics in which communities are empowered, rather than alienated.” They advocate investments that “self-consciously strive to build constituencies and identities that are more inclusive and accommodating. Think of this as ‘us’ populism, as opposed to ‘them’ populism.”

That basic stance supports two strategies:

  1. More investment in community organizing, especially the types that build “new bridges across racial, gender, and geographic divides.” Russon-Gilman and Rahman advocate broad-based, long-term organizing instead of mobilizing people around specific issues.
  2. “Reforming our institutions of governance” so that agencies offer citizens more “hooks and levers” to influence power, and so that public sector workers have skills and incentives to engage the public better.

These strategies imply (as the authors note) a broad understanding of democracy. It is not all about elections, nor even about the official government. It’s about how people come together and exercise power.

The paper offers valuable case studies. For instance, under the heading of organizing:

  • “The Center for Rural Strategies (CRS) … based in Whitesburg, Ky. in the central Appalachian coalfields, provides rural communities and nonprofit organizations with resources on innovative media and communications strategies in order to strengthen their work.” CRS provides information, challenges stereotypes about its communities, and lobbies for better access to the physical infrastructure for communications, because both content and conduit matter. (See “Building Democracy in ‘Trump Country’” by Ben Fink for a similar case.)
  • “Coworker.org (Coworker) is a digital platform for workers’ voices founded in response to the decline of formal institutions organizing workers and geared towards building a twenty-first century model of worker power. The organization provides tools directly to workers to self-advocate within the workplace, usually where no labor structure or organizing already exists.” Like CRS, Coworker invests in people who develop as leaders.

Examples under the heading of institutions include:

  • “The Office of Community Wealth Building (OCWB) was established as a permanent city agency in Richmond, Va., in 2015 to provide anti-poverty strategy and policy advice to the mayor and to implement municipal poverty reduction initiatives and systemic changes around housing, education, and economic development.”
  • “The Public Engagement Unit (PEU) is a division in New York’s city government, started in 2015, devoted to knocking on doors and making calls to hard-to-reach constituents to enroll them in city services, as well as foster long-term individual relationships with city staff.”

Overall, “Building Civic Capacity in an Era of Democratic Crisis” helps make the case for investments that are less short-term, less oriented to immediate efficiency, less split between government and civil society, but more experimental, more open-ended, and more truly inclusive than we normally see (especially, I would say, on the left).

See also: why the white working class must organize to beat Trump; invest in organizing; fighting Trump’s populism with pluralist populism; and community organizing between Athens and Jerusalem.

A Case Study of Wing: Has the 2011 Localism Act succeeded as a democratic innovation?

Note: the following entry is a stub. Please help us complete it. Sections to be filled in: Problems and Purpose; History; Originating Entities and Funding; Participant Recruitment and Selection; Methods and Tools Used; Deliberation, Decisions, and Public Interaction; Influence, Outcomes, and Effects; Analysis and Lessons Learned; Secondary Sources; External Links; Notes.

Constitutional Rights Foundation Upcoming Webinar Around Civil Conversations!


Friends, we have been asked by the Constitutional Rights Foundation USA to share the following webinar announcement, and we are quite excited to do so. This looks to be an excellent, pedagogically oriented webinar on civil conversations! You can register for the webinar here, but be sure to review the information below!

Now is the time to empower students to constructively discuss controversial issues; develop speaking, listening, and close reading skills; and improve understanding of their role in a democracy.
Join us for a free webinar that will help you facilitate engaging, structured, and standards-aligned academic discussions in your classes!
Thursday, November 2, 2017
7:00 p.m. (ET), 6:00 p.m. (CT), 5:00 p.m. (MT), 4:00 p.m. (PT)
To view our list of free Civil Conversation resources visit our Curriculum Library.



A drawing for $25 Amazon gift cards will be held for participants who attend and complete the webinar survey!

Implementation of the 2011 Localism Act in Wing (Buckinghamshire, UK)

After the passing of the Localism Act by the UK Parliament in 2011, power over planning and development was largely devolved to community and local governments. The Civil Parish of Wing quickly embraced the Localism Act's provisions, embarking on a series of public consultations to plan their neighbourhood's future development.

Turning to Each Other During Unwelcome Conversations

As tragic events seem to constantly fill our lives and newsfeeds, we wanted to lift up a poignant piece from NCDD member org Essential Partners’ blog in response to the Las Vegas tragedy. Parisa Parsa, Executive Director of EP, writes about the tendency to jump to assessing a situation and pinning down the blame; while this helps us cope with tragedy, it often limits our ability to grieve and genuinely process. She reminds us to hold space for these painful storytelling opportunities and shows how these conversations can give us the chance to come together in community, in order to find understanding and a collective way to move forward. We encourage you to read the piece below, or you can find the original on Essential Partners’ blog here.


Unwelcome Conversations

“I can’t even get my mind around Las Vegas,” the woman next to me exclaimed. We were both staring at the TV blasting the news while waiting to board our flight last week. As ever, the media was already flooded with analysis to explain what had happened, while we struggled again to understand why it happened. The world rushed to the usual rallying cries: gun control, mental health, male violence…the list goes on.

A typical media pundit or post usually includes some phrase critical of what others are talking about. “It’s not about [what the last commenter said], it’s about [my deepest conviction].” And with great assuredness, folks far from the situation quickly move to assert their go-to explanation. A mad dash to do this kind of assessment of a crisis offers a great coping mechanism. When we can put an unspeakably tragic event into some frame of meaning, our bearings return and panic is reduced. Because the truth is, we don’t want to be talking about terrible moments at all. We don’t want it to have happened, and we most definitely don’t want it to happen again. Having someone or something to blame, especially if it is singular, definite and not ourselves, helps us detach ourselves from these horrible acts of violence and hate. Yet so far, collectively, retracting and finger-pointing have not helped us prevent the unspeakable from happening again, and again, and again.

Venturing away from defining it as “all about” mental health or guns or testosterone opens up a whole new world. In the midst of our shock and horror, listening to our grief can provide answers. When we sit with the many explanations, hear the cries of those who feel misunderstood, hold one another in our pain, sorrow and anger, we begin to connect to another story. Many voices, conflicting views, and multiple understandings arise. Those stories forge a new way out of the mire, let our pain and our hope speak to one another, and begin to carve a path to creative solutions.

Turning to one another in community to share our responses, our meaning-making and our experiences can create another possible future. Let’s talk and listen more deeply, and see what happens.

You can find the original version of this on Essential Partners’ blog at www.whatisessential.org/blog/unwelcome-conversations.

Interaction Dynamics and Persuasion Strategies

I recently read Chenhao Tan et al’s 2016 WWW paper Winning Arguments: Interaction Dynamics and Persuasion Strategies in Good-faith Online Discussions, which presents an interesting study of the linguistic features of persuasion.

To someone coming from a deliberative background, the word ‘persuasion’ has negative connotations. Indeed, Habermas and others strongly argue that deliberation must be free from persuasion – defined roughly as an act of power that causes an artificial opinion change.

In its more colloquial sense, however, persuasion needn’t be so negatively defined. Within the computer science literature on argument mining and detection, persuasion is generally more benignly considered as any catalyst causing opinion change. If I “persuade” you to take a different route because the road you were planning to take is closed, that persuasion is not problematic in the Habermasian sense as long as I’m not distorting the truth in order to persuade you.

Furthermore, Tan et al gather a very promising data set for this investigation – a corpus of “good faith online discussions,” as the title says. Those discussions come from Reddit’s Change My View forum, a moderated platform with explicit and enforced norms for sharing reasoned arguments.

Each thread starts with a user who explicitly states they want to have their opinion changed. That user then shares said opinion and outlines their reasoning behind it. Other users then present arguments to the contrary. The original poster then has the opportunity to award a “delta” to a response if it succeeded in changing their opinion.

So there’s a lot to like about the structure of the dataset.

I have a lot of questions, though, about the kinds of opinion which are being shared and changed. Looking through the site today, posts cover a mix of serious political discussion, existential crises, and humorous conundrums.

The all time most highly rated post on the site begins with the opinion, “Strange women lying in ponds distributing swords is no basis for a system of government.” So it’s unclear just how much we can infer about debate more broadly from these users.

However, Tan et al intentionally restrict their analysis to linguistic features, carefully comparing posts which ultimately win a “delta” to the most similar non-delta post responding to the same opinion. In this way, they aim to “de-emphasize what is being said in favor of how it is expressed.”

There’s a lot we lose, of course, by not considering content, but this paper makes valuable contributions in disambiguating the effects of content from the effects of syntactic style.

Interestingly, they find that persuasive posts – those which earn a delta from the original poster – are more dissimilar from the original post in content words, while being more similar in stop words (common words such as “a”, “the”, etc.). The authors are careful not to make causal claims, but I can’t help but wonder what the causal mechanism behind that might be. The dissimilarity of content words matched by the similarity of stop words seems to imply that users are talking about different things, but in similar ways.
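To make that comparison concrete, here is a minimal sketch of measuring a reply’s similarity to the original post separately for stop words and content words. The tiny stop-word list and the sample posts are my own illustration, not the paper’s actual setup or lexicon.

```python
# Sketch: split each post's vocabulary into stop words and content
# words, then measure Jaccard similarity to the original post within
# each category. The stop-word list is a tiny illustrative sample.

STOP_WORDS = {"a", "an", "the", "of", "to", "in", "is", "are", "and", "that"}

def tokenize(text):
    """Lowercased word set, with trailing punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_profile(original, reply):
    """Similarity to the original post, split by word type."""
    o, r = tokenize(original), tokenize(reply)
    return {
        "stop": jaccard(o & STOP_WORDS, r & STOP_WORDS),
        "content": jaccard(o - STOP_WORDS, r - STOP_WORDS),
    }

original = "The death penalty is a just punishment for the worst crimes."
reply = "Consider that the risk of executing an innocent person is irreversible."
print(similarity_profile(original, reply))
```

On the paper’s finding, a persuasive reply would tend to score high on the “stop” component and low on the “content” component of such a profile.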

There’s a lot of debate, though, about exactly what should count as a “stop word” – and whether stop-word lists should be specially calibrated to the content. Furthermore, I’m not familiar with any deep theory on the use of stop words, so I’m not sure this content-word/stop-word disjunction really tells us much at all.

The authors also investigate usage of different word categories – finding, for example, that posts tend to begin and end with tangible arguments while becoming more abstract in the middle.

Finally, they investigate the features of users who award deltas – i.e., users who do change their mind. In this setting, they find that people who use more first-person singular pronouns are more likely to change, while those using more first-person plurals are less likely to change. They posit that the first-person plural indicates a sort of diffuse sense of responsibility for a view, indicating that the person feels less ownership and is therefore less likely to change.
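A crude version of that pronoun feature can be sketched as follows. The pronoun lists and the sample sentences are illustrative; they are not the paper’s actual lexicon.

```python
# Sketch: rate of first-person singular vs. plural pronouns in a post,
# the kind of feature Tan et al. relate to whether a poster later
# awards a delta. The pronoun lists are illustrative samples.

FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_rates(text):
    """Fraction of tokens that are first-person singular/plural pronouns."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    n = len(words) or 1
    return {
        "first_singular": sum(w in FIRST_SINGULAR for w in words) / n,
        "first_plural": sum(w in FIRST_PLURAL for w in words) / n,
    }

print(pronoun_rates("I think my view may be wrong."))
print(pronoun_rates("We all know our position is right."))
```

On the paper’s finding, the first post’s profile would predict a higher chance of opinion change than the second’s.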

I’d love to see an extension of this work which dives into the content and examines, for example, what sorts of opinions people are likely to change – but this paper presents a thought-provoking look at the persuasive effects of linguistic features themselves.

new research on “civic deserts”

(Washington, DC) My colleagues Kei Kawashima-Ginsberg and Felicia Sullivan coined the phrase “civic deserts” to name places where there are few or no opportunities to be active and constructive participants in civic life. The analogy is to “food deserts”–geographical communities where there is little or no nutritious food for sale. You can still be an active citizen in a civic desert, just as you can grow vegetables in your back yard; it’s just that the whole burden falls on you.

Today at the National Conference on Citizenship, we are releasing Civic Deserts: America’s Civic Health Challenge by Matthew N. Atwell, John Bridgeland, and me. It’s a 36-page report that documents the declining opportunities for civic engagement in America. John Bridgeland and Robert Putnam also write about it today in a PBS opinion piece.

This is an example of a table from the report:

Thanks to friends at USC’s Center for Economic and Social Research, we were able to ask a large, representative sample of Americans whether they belonged to various kinds of groups; if so, whether they participated actively in any of them; and if so, whether they thought that the group’s leaders (a) usually did what they promised and (b) usually tried to serve and include all the members. It turns out that only 28% of adult Americans actively belong to groups whose leaders are accountable and inclusive. That statistic does not tell us how much geographical space is taken up by civic deserts, but it suggests that they are common. And the historical data implies that civic engagement used to be much more widespread.
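The nested filter behind that 28% figure can be sketched like this. The field names and sample records are hypothetical, not the USC survey’s actual variables.

```python
# Sketch: a respondent counts only if they belong to a group, take part
# actively, AND rate the leaders as both accountable and inclusive.
# Field names and the sample records are hypothetical.

def in_accountable_inclusive_group(resp):
    return (
        resp.get("belongs_to_group", False)
        and resp.get("participates_actively", False)
        and resp.get("leaders_keep_promises", False)
        and resp.get("leaders_include_all", False)
    )

respondents = [
    {"belongs_to_group": True, "participates_actively": True,
     "leaders_keep_promises": True, "leaders_include_all": True},
    {"belongs_to_group": True, "participates_actively": True,
     "leaders_keep_promises": True, "leaders_include_all": False},
    {"belongs_to_group": True, "participates_actively": False},
    {"belongs_to_group": False},
]

share = sum(map(in_accountable_inclusive_group, respondents)) / len(respondents)
print(f"{share:.0%} actively belong to accountable, inclusive groups")
```

Note that each successive condition can only shrink the count, which is why the final share is so much smaller than raw group membership.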

I separately formed a hypothesis that lacking direct, personal experience with good leadership would make a person more tolerant of the leadership style of Donald J. Trump, controlling for one’s political ideology. In other words, given two people who agree with Trump on issues, the one without experience of good local leadership would be more supportive of Trump as a leader. This was testable with the USC data, which includes a whole battery of questions about ideology, issues, and Trump. My hypothesis turned out not to be true: partisanship and media choice seem to explain opinions of the current president almost completely, and experience in groups adds no explanatory power. Still, I think there may be a more circuitous story about civic deserts as a cause of Trump’s victory: the decline of civic associations increases the power of partisan heuristics and ideological media. Even if that hypothesis is also false, civic deserts are still a problem, because civic engagement benefits health, economic development, safety, education, and good government.

See also: The Hollowing Out of US Democracy (my blog post for USC); Mitigating the Negative Consequences of Living in Civic Deserts – What Digital Media Can (and have yet to) Do (a new CIRCLE article); America needs big ideas to heal our divides. Here are three by Bridgeland and Putnam; and the power of the NRA in an age of civic deserts.

Comparative Effectiveness Research for democracy?

In health, we’ve seen an influential and valuable shift to Comparative Effectiveness Research (CER): measuring which of the available drugs or other interventions works best for specific purposes, in specific circumstances. Why not do the same for democracy? Why not test which approaches to strengthening democracy work best?

My colleagues and I played a leading role in developing the “Six Promising Practices” for civic education. These are really pedagogies, such as discussing current, controversial issues in classrooms or encouraging youth-led voluntary groups in schools. Since then, we have been recommending even more pedagogies, such as Action Civics, news media literacy, and school climate reform. I am often asked which of these practices or combinations of practices works best for various populations, in various contexts, for various outcomes. This question has not really been studied. There is no CER for civics.

Likewise, in 2005, John Gastil and I published The Deliberative Democracy Handbook. Each chapter describes a different model for deliberative forums or processes in communities. The processes vary in whether participants are randomly selected or not, whether they meet face-to-face or online, whether the discussions are small or large, etc. Again, I am asked which deliberative model works best for various populations, in various contexts, for various outcomes. There is some relevant research, but no large research enterprise devoted to finding out which deliberative formats work best.

Some other fields of democratic practice have benefitted from comparative research. In the 2000s, The Pew Charitable Trusts funded a large body of randomized experiments to explore which methods of campaign outreach were most cost-effective for turning out young people to vote. Don Green (now at Columbia) was an intellectual force behind this work: one motivation for him was to make political science a more experimental discipline. CIRCLE was involved; we organized some of the studies and published this guide to disseminate the findings. Our goal was to increase the impact of youth on politics.

Our National Study of Learning, Voting, and Engagement (NSLVE) is a database of voting records for 9,784,931 students at 1,023 colleges and universities. With an “n” that large, it’s possible to model the outcome (voter turnout) as a function of a set of inputs and investigate which ones work best. That is a technique for estimating the results that would arise from a whole body of experiments. We also provide each participating campus with a customized report about its own students that can provide the data for the institution to conduct its own experiments.
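As a toy illustration of relating turnout to campus-level inputs, here is a naive enrollment-weighted comparison. The campus records and the `election_day_events` field are invented for the example; NSLVE’s actual modeling is far more sophisticated.

```python
# Sketch: given campus-level records (hypothetical fields), estimate how
# turnout varies with one input by comparing enrollment-weighted turnout
# between campuses that do and don't use a given practice.
# A naive comparison, not NSLVE's actual methodology.

campuses = [
    {"students": 20_000, "voters": 9_000, "election_day_events": True},
    {"students": 5_000,  "voters": 1_500, "election_day_events": False},
    {"students": 12_000, "voters": 6_000, "election_day_events": True},
    {"students": 8_000,  "voters": 2_800, "election_day_events": False},
]

def weighted_turnout(records):
    """Turnout pooled across campuses, weighted by enrollment."""
    return sum(c["voters"] for c in records) / sum(c["students"] for c in records)

def turnout_gap(records, feature):
    """Difference in pooled turnout: campuses with the feature minus without."""
    with_f = [c for c in records if c[feature]]
    without = [c for c in records if not c[feature]]
    return weighted_turnout(with_f) - weighted_turnout(without)

print(f"turnout gap: {turnout_gap(campuses, 'election_day_events'):+.1%}")
```

A real analysis would of course have to adjust for confounders (selectivity, demographics, state election law) before reading such a gap as evidence of what “works.”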

So why do some fields of democratic practice prompt research into what works, and others don’t?

A major issue is practical. The experiments on voter turnout and our NSLVE college study have the advantage that the government already tallies the votes. Given a hard outcome that is already measured at the scale of millions, it’s possible to vary inputs and learn a great deal about what works.

To be sure, people and community contexts are heterogeneous, and voter outreach can vary in many respects at once (mode, messenger, message, purpose). Thus a large body of experiments was necessary to produce insights about turnout methods. However, we learned that grassroots mobilization is cost-effective, that the message usually matters less than the mode, and that interactive contacts are more efficient than one-way outreach. We believe that these findings influenced campaigns, including the Obama ’08 primary campaign, to invest more in youth outreach.

Similarly, colleges vary in their populations, settings, resources, missions, and structures, but NSLVE is yielding general lessons about what tends to work to engage students in politics.

Other kinds of outcomes may be harder to measure and yet can still be measured at scale. For example, whether kids know geometry is hard to measure–it can’t be captured by a single test question–but society invests in designing reliable geometry tests that yield an aggregate score for each child. So one could conduct Comparative Effectiveness Research on math education. The fact that mastering geometry is a subtler and more complex outcome than voting does not preclude this approach.

But it does take a social investment to collect lots of geometry test data. For years, I have served on the committee that designs the National Assessment of Educational Progress (NAEP) in civics. NAEP scores are valuable measures of certain kinds of civic knowledge–and teaching civics is a democratic practice. But the NAEP civics assessment doesn’t receive enough funding from the federal government to have samples that are reliable at the state or local level, nor is it conducted annually. This is a case where the tool exists, but the investment would have to be much larger to permit really satisfactory CER. It is not self-evident that the best way to spend limited resources would be to collect sufficient data for this purpose.

Other kinds of outcomes–such as the quality of discourse in a community–may be even more expensive and difficult to measure at scale. You can conduct concrete experiments in which you randomly vary the inputs and then directly measure the outcomes by surveying the participants. But you can only vary one (or a few) factors at a time in a controlled experiment. That means that a large and expensive body of research is required to yield general findings about what works, in which contexts, for whom.
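The analysis of one such controlled experiment can be sketched with a permutation test, which asks how often a difference as large as the observed one would arise by chance. The survey scores and the facilitator/no-facilitator framing are my own illustration, not data from any real study.

```python
# Sketch: a two-arm experiment varying one factor (here, a hypothetical
# trained facilitator), with a post-forum survey score as the outcome.
# A permutation test estimates the chance of seeing a gap this large
# under random assignment. All numbers are invented.

import random

treatment = [7, 8, 6, 9, 8, 7, 9]   # forum with trained facilitator
control   = [5, 6, 7, 5, 6, 4, 6]   # forum without

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """One-sided p-value: probability of a mean gap >= the observed one."""
    rng = random.Random(seed)
    observed = mean(a) - mean(b)
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # simulate re-randomizing assignment
        if mean(pooled[:len(a)]) - mean(pooled[len(a):]) >= observed:
            hits += 1
    return hits / n_iter

p = permutation_p_value(treatment, control)
print(f"one-sided p-value: {p:.4f}")
```

A permutation test needs no distributional assumptions, which suits the small pilot experiments typical of this field; its real limitation is the one noted above, that each experiment can only vary one or a few factors at a time.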

The good news is that studying which discrete, controllable factors affect outcomes is only one way to use research to improve practice. It is a useful approach, but it is hardly sufficient, and sometimes it is not realistic. After all, outcomes are also deeply affected by:

  • The motivations, commitment, and incentives of the organizers and the participants;
  • How surrounding institutions and communities treat the intervention;
  • Human capital (who is involved and how well they are prepared);
  • Social capital (how the various participants relate to each other); and
  • Cultural norms, meanings, and expectations.

These factors are not as amenable to randomized studies or other forms of CER. But they can be addressed. We can work to motivate, prepare, and connect people, to build support from outside, and to adjust norms. Research can help. It just isn’t research that resembles CER.

Democratic practices are not like pills that can be proven to work better than alternatives, mass produced, and then prescribed under specified conditions. Even in medicine, motivations and contexts matter, but those factors are even more important for human interactions. It’s worth trying to vary aspects of an intervention to see how such differences affect the results. I’m grateful to have been involved in ambitious projects of that nature. But whether to invest in CER is a judgment call that depends on practical issues like the availability of free data. Research on specific interventions is never sufficient, and sometimes it isn’t the best use of resources.