Participatory Practices in Organizations

This 17-page review article, Participatory Practices in Organizations, by Caroline Lee was published in 2015 in Sociology Compass, an online journal that reviews state-of-the-art research for a broad audience of undergraduates, researchers, and anyone who wants to stay posted on developments in particular fields.

The piece is a relatively quick overview and digest of a range of historical and current research on participation (not just deliberation, but much that is relevant to it) in a variety of different types of organizations. It might be useful for NCDD members seeking a quick literature review, students looking for gaps in existing research, or anyone interested in how organizational scholars view the evolution of participatory practices over the last century.

From the Abstract…

The literature on participatory practices in organizations has been less coherent and more limited to subspecialties than the literature on bureaucracy in organizations – despite a number of celebrated studies of participation in 20th century American sociology. Due to the practical nature of participatory reforms and the ambiguity of participation as a concept, attempts to review participatory knowledge have a tendency to focus on refining definitions and clarifying frameworks within subfields.

This article instead provides a broad thematic overview of three different types of research on participation in organizations, all critical to an understanding of today’s dramatic expansion of participatory practices across a variety of organizations. Classic research studied participation as dynamic and central to organizational legitimacy. Institutional design research has focused on participation as a stand-alone governance reform with promising empowerment potential, but mixed results in domains such as health care, environmental politics, and urban planning. Finally, recent research seeks to place participatory practices in the context of shifting relationships between authority, voice, and inequality in the contemporary era. The article concludes with suggestions for building on all three categories of research by exploring what is old and new in the 21st century’s changing participatory landscape.

Download the article here.

About the Author
Caroline W. Lee is Associate Professor of Sociology at Lafayette College. Her research explores the intersection of social movements, business, and democracy in American organizations. Her book Do-It-Yourself Democracy: The Rise of the Public Engagement Industry was published in 2015 by Oxford University Press. Her co-edited volume with Michael McQuarrie and Edward Walker, Democratizing Inequalities: Dilemmas of the New Public Participation, was published in 2015 by NYU Press.

Resource Link: http://sites.lafayette.edu/leecw/publications/

This resource was submitted by Caroline Lee, Associate Professor of Sociology at Lafayette College, via the Add-a-Resource form.

Experiment Shows Public Engagement Can Increase Tax Revenues

As public engagement practitioners, many of us have talked with public officials who want to know how engagement will improve a municipality's bottom line. That's why we appreciated NCDD member Tiago Peixoto's recent blog post on the first experimental – not just observational – evidence that our work can help cities collect more taxes. We encourage you to read Tiago's post below or find the original here.

You can find the paper on the experiment by clicking here. There are caveats in the findings, but we hope this new evidence will help you strengthen your case with hesitant officials the next time you’re pushing for engagement.



New Evidence that Citizen Engagement Increases Tax Revenues

Quite a while ago, drawing mainly from the literature on tax morale, I posted about the evidence on the relationship between citizen engagement and tax revenues, which shows that participatory processes lead to increased tax compliance (as a side note, I'm still surprised by how many of those working in citizen engagement are unaware of this evidence).

Until very recently this evidence was based on observational studies, both qualitative and quantitative. Now we have – to my knowledge – the first experimental evidence linking citizen participation and tax compliance. A new working paper by Diether Beuermann and Maria Amelina presents the results of a randomized experiment in Russia, described in the abstract below:

This paper provides the first experimental evaluation of the participatory budgeting model showing that it increased public participation in the process of public decision making, increased local tax revenues collection, channeled larger fractions of public budgets to services stated as top priorities by citizens, and increased satisfaction levels with public services. These effects, however, were found only when the model was implemented in already-mature administratively and politically decentralized local governments. The findings highlight the importance of initial conditions with respect to the decentralization context for the success of participatory governance.

In my opinion, this paper is important for a number of reasons, some of which are worth highlighting here. First, it adds substantive support to the evidence on the positive relationship between citizen engagement and tax revenues. Second, in contrast to studies suggesting that participatory innovations are most likely to work when they are “organic” or “bottom-up”, this paper shows how external actors can induce the implementation of successful participatory experiences. Third, I could not help but notice that two commonplace explanations for the success of citizen engagement initiatives, “strong civil society” and “political will”, do not feature in the study as prominent success factors. Last, but not least, the paper draws attention to how institutional settings matter (i.e., decentralization). Here, the jack-of-all-trades (yet not very useful) “context matters” could easily be replaced by “institutions matter”.

You can read the full paper here [PDF].

You can find the original version of this DemocracySpot blog post at http://democracyspot.net/2015/01/07/new-evidence-that-citizen-engagement-increases-tax-revenues.

How Can Deliberation in Citizens’ Juries Improve?

We wanted to share another great thought piece from Max Hardy of Max Hardy Consulting, an NCDD organizational member – this time on the ways Citizens’ Juries can be improved. Max’s reflections are based in the Australian context, but plenty of them can apply to these deliberative bodies elsewhere. We encourage you to read his piece below or find the original on his blog by clicking here.


Reflections on the growing trend of using Citizens’ Juries in Australia (and how we might make them even more effective)

It seems that it is becoming more common for governments at all levels to entertain random selection of citizens to enable an informed judgment on controversial or complex planning matters (one form being the Citizens’ Jury). As an advocate for, and facilitator of, such processes, I find this exciting and most welcome. There is a growing weariness with more conventional processes that are dominated by well-organised stakeholder groups and ‘hyper-engaged’ individuals; processes which largely fail to engage the so-called silent majority. The newDemocracy Foundation has been pivotal in promoting and arguing for alternatives and is getting serious traction.

Several years ago I met an academic, David Kahane from the University of Alberta, Canada, at a conference in Sydney, where we discussed the merits of these emerging deliberative processes and thought that a paper could be written describing the rationale for the differing approaches and their advantages and disadvantages. We were soon joined by Jade Herriman, of the Institute for Sustainable Futures in Sydney, Australia, and Kristjana Loptson, also from the University of Alberta. After several months of research, and another few months of writing, we published our paper, titled Stakeholder and Citizen Roles in Public Deliberation, in the Journal of Public Deliberation.

Since co-authoring this paper I have been involved in several more deliberative processes (for ease I will refer to them from here on as Citizens’ Juries, though other forms exist, such as the Citizens’ Assembly and the Citizens’ Initiative Review). Reflecting on the paper once again, I felt the need to document some ideas for addressing some of these processes’ perceived or actual limitations. So here goes.

Limitation 1 – Breadth of participation
Citizens’ Juries recruited through random selection are really effective for allowing a group to dive deeply into a complex issue or topic. Sadly, the rest of the community is, at best, an observer of the process. The journey the jury experiences is difficult to replicate, so the findings it ultimately reaches may not be seen as legitimate by the broader community.

Ideas to improve
A longer engagement process can be used to help inform the deliberative process – for instance, through the use of online engagement. This process could also help to identify other experts who could provide a balanced range of evidence to the jury.

Another idea is to give the same questions being put to the jury to citizens, who can arrange their own meetings (BBQs and dinner parties) or discuss them in other established forums or community group meetings (an approach used with great success for The Queensland Plan). Responses can be logged online and fed into the citizens’ jury deliberative process.

Live streaming could also be used to invite viewers to frame questions or provide comments in real time. A theme team could cluster the questions and comments and provide them at a particular time to the jury to consider.

Limitation 2 – Stakeholders/experts feeling marginalised
Whilst the jury has an amazing learning experience, stakeholders and experts generally provide their evidence and then leave. Jurors and facilitators often feel that it would have been helpful for stakeholders to hear each other’s evidence and have the opportunity to learn from each other.

Ideas to improve
Arrange panel sessions where witnesses with different perspectives can share information, and have a conversation with each other, with the jury present to observe. In addition, the jury could access expert witnesses via video conference as they approach their final deliberations with remaining questions. Although by itself this would not assist witnesses/stakeholders to go on the learning journey, it would at least give some clues as to the journey the jury has been on.

A second idea is to include stakeholders/experts/witnesses as a resource group for jurors during their final deliberations.

Yet another idea, and this will be somewhat controversial, is that stakeholders could be included on the jury, but make up no more than one third of the total jurors. (I have been involved in arrangements such as these whereby one third are randomly selected, one third are self-selected from those who typically get involved, and one third are invited in to strengthen diversity – e.g., you may not recruit anyone from an indigenous group, or any young people, from the first two cohorts.) The principle here is about achieving reasonable diversity, not perfection, and the benefit may be that groups with very different views become more understanding of each other’s interests and aspirations.

Limitation 3 – Limited role in framing the ‘charge’, or questions to be answered
In most cases the commissioning body, process experts, or a steering committee (or any combination of the above) design the key aspects of the deliberative process. Decisions are made concerning the ‘charge’ or questions being put to the jury, the duration of the process, the desired composition of the jury, and the witnesses to be called. For some individuals and groups, this is a reason to be skeptical about the deliberative process and any outcomes from it. In particular, if stakeholders do not believe the right question is being put, then the outcome of the process, the jury’s ‘verdict,’ can be irrelevant. When the ‘deliberative design formula’ is seen to be managed tightly by ‘others,’ it can fuel mistrust.

When stakeholders have some influence over the process, in my experience, they are generally more accepting or even actively supportive of the outcomes.

Ideas to improve
Consistent with the Twyfords Collaborative Pathway, engaging a cross section of stakeholders in framing the dilemma or charge to be put to the jury can be very useful. It helps to generate questions that are seen as being the important ones to address, and invariably it helps to lay out the extent of the dilemma being faced.

Conclusion and suggested principles
So those are just a few ways that deliberative processes might be strengthened. From my perspective, it is important that we continue to conduct experiments in democracy and to learn from those experiments. The important thing is not that we apply a proven design, but that we continue to invest in the co-design of the process so that there is confidence in both the process and its outcomes. It is also an opportunity for groups with different values and interests to understand and respect each other more, so that the process itself contributes to a more cohesive community.

It is also important that whatever design we use follows a set of core principles. This would be my list:

  1. The ultimate decision-makers are genuine in wanting the help of citizens and stakeholders/experts to resolve an important issue/ dilemma/ question/ puzzle.
  2. The decision-makers enter the process with the intent of using that advice, to take it very seriously, and to respond publicly if they do not follow the advice given (i.e., the verdict).
  3. Reasonable efforts are made to advise the broader community about the rationale of the process, and there is an attempt to gauge their views, concerns, and aspirations prior to the deliberative process.
  4. The participants of the deliberative process (let’s say, the jurors) have access to a balanced range of information and are not steered toward a particular desired outcome of the commissioning body or the facilitators.
  5. Jurors should be recruited through an independent social research company, and the process should be independently facilitated.
  6. The jurors have the ability to scrutinize those giving evidence.
  7. The jurors are given reasonable periods of time to process information and then to deliberate over that information.
  8. Jurors must feel confident that they are all actively participating and are not being overwhelmed by powerful personalities.
  9. The commissioning body and stakeholders must be confident that the questions to be posed to the jury are appropriate.
  10. The deliberative process itself should be transparent and recorded.
  11. The deliberative process is designed in such a way that it strengthens a ‘community of interest’ rather than fragmenting it further.

There are probably others, and I’m sure these could be developed further. If you have had experience in deliberative processes that rely on random selection I’d be very keen to hear your thoughts, and your feedback on mine.

You can find the original version of Max’s piece on his blog by visiting www.maxhardy.com.au/reflections-on-the-growing-trend-of-using-citizens-juries-in-australia-and-how-we-might-make-them-even-more-effective.

Evaluativism 101

We are happy to share the announcement below from NCDD Member Chris Santos-Lang of GRIN Free. Chris’s announcement came via our great Submit-to-Blog Form. Do you have news you want to share with the NCDD network? Just click here to submit your news post for the NCDD Blog!


What Is Evaluativism?

The word “homophobia” was coined in the 1960s to name something that had been occurring for centuries before being named. The word “evaluativism” is an even more recently coined term with an even older history. Much as “racism” and “sexism” refer to discrimination on the basis of race and sex respectively, “evaluativism” refers to discrimination on the basis of cognitive differences known as “evaluative diversity.”

Discrimination against Blacks, Latinos, Asians, or Caucasians qualifies as racism. Likewise, instances of evaluativism include discrimination against creative people, discrimination against subjectivists (i.e., against people who empathize), and discrimination against conservatives. One is often able to find a church, industry, or group in which one’s own evaluative type is privileged, and others in which it is oppressed.

Just as we are still discovering the species and sub-species that make up our biodiversity, we are still in the process of mapping our evaluative diversity. So far, at least four distinct branches of evaluative diversity have been confirmed to exist in both humans and computers; they correspond to well-established branches of moral theory. These four branches have been named with the mnemonic “GRIN”:

Natural Gadfly: Aimed at discovery – guided by creativity

Naturally Relational: Aimed at love – guided by empathy

Naturally Institutional: Aimed at purity – guided by best practices

Natural Negotiator: Aimed at results – guided by research

Although the name is new, evaluativism is not. For example, in the ancient story of Adam and Eve, Adam implied that Eve had a different evaluative nature, and that they would not have eaten the forbidden fruit if her nature had been suppressed (he may have been naturally relational and she a natural negotiator or gadfly). Criticisms of specific evaluative natures are also found in the Quran, Analects, Dhammapada, Tao Te Ching, Bhagavad Gita, and peer-reviewed science.

What’s in a Name?

Lacking a proper name, evaluativism was sometimes called “sexism” in the past. For example, Carol Gilligan’s theory of the ethics of care defended the naturally relational from the evaluativism of her academic adviser, Lawrence Kohlberg, but society lacked the terms “naturally relational” and “evaluativism” at the time, so Gilligan instead claimed to defend “women” from his “sexism”. Modern measures confirm that the naturally relational are significantly more likely to be women, but equating sex with evaluative type perpetuates stereotypes. Similar stereotyping can complicate religionism and the neurodiversity movement. To avoid such stereotyping, evaluativism needs its own name.

The terms “evaluativism” and “evaluativist” derive from the term “evaluative diversity,” which is attributed to a 1961 essay by the philosopher P. F. Strawson. This derivation was made rigorous in a philosophical paper by Hartry Field, which argued that evaluative diversity creates intractable disagreements even about matters of fact (such as the nature of God), so we may as well write off other people insofar as their evaluative type does not match our own. In other words, a successful family reunion unavoidably requires keeping certain topics off the table.

The Science

Throughout most of history, racism and sexism were considered part of the natural order, and the same has been true (and may currently be true) of evaluativism. Like homosexuals, people of unprivileged evaluative types used to be considered mentally ill or disabled. More optimistic psychologists classified them as merely immature or ignorant, and proposed methods to educate, reform, or otherwise fix them.

Only recently have scientists begun to show that evaluative nature correlates with genes, brain structure, and type of algorithm. Trying to understand why evaluative diversity persists, they have conducted experiments and developed mathematical models to demonstrate that evaluatively diverse teams are more effective. In other words, evaluativism isn’t just hurtful to victims – it can also be counterproductive for society.

Why would we be hurtful and counterproductive? One reason resembles the reasons why people used to think the Earth was the center of the universe and why white men used to think of women and blacks as property. We have a history of weaving ego-centrism into the culture we pass from generation to generation such that it takes enormous effort and social innovation to escape notions that we are privileged, including the notion that our own evaluative type is the right one. Escaping evaluativism does not require relinquishing belief in right answers, but it does require admitting that we cannot recognize them without diverse help.

The Impact

Evaluativism has plagued humanity for thousands of years and currently produces more segregation in college than both racism and classism. In addition to the personal pain it has brought individual victims (manifesting as depression, apathy, disorientation, and creative block), evaluativism has produced a segregated society with different jobs, political parties, and hobbies for people of different evaluative types. The greatest victims are likely to be people subject to a parent, teacher, clergy member, or boss who feel they must hide their own views to maintain peace with that authority.

Perhaps the worst consequence of evaluativism has been to undermine the design of social institutions. When we succumb to evaluativism, we believe everyone should be of one evaluative type (i.e., our own). This error causes us to design social institutions as though people were interchangeable. For example, we design government in which all kinds of people are to participate in the same way, and we try to create one-size-fits-all justice and moral-education systems.

Unrecognized racism similarly tricked people into designing an economy built on slavery. That economy was temporarily stable, but would have to be reformed eventually. Current designs of government, education, and justice are a similar debt we pass to future generations – eventually, someone will have to pay the price of reforming them to match the denied truth. Meanwhile, their flawed designs cause political polarization, culture wars, and swelling prison populations.

What’s Next?

In the future, awareness of evaluativism will likely increase for the same reasons we have grown aware of speciesism: We could no longer afford to ignore speciesism when mass-production threatened to destroy entire species. Now evaluative diversity is becoming vulnerable to mass-communication, mass-production of decision-support systems, and mass-production of services for behavior control.

Advances in neuroscience, psychology, and sociology are typically at odds with evaluative diversity – aiming instead to increase the effectiveness of marketing, political campaigns, and central control. They will soon enable us to manipulate the sexual orientations and evaluative types of our children through not only genetic screening, but also through brain surgeries and exercises designed to sculpt brains like bodybuilders sculpt muscles. People may someday request brain-jobs to match their nose-jobs.  Then lawyers will debate whether it is possible to consent to such procedures freely, and what legitimacy any government can have when its citizens are so oppressed they would want to erase their own natures.

Evaluativism impacts everyone, and most of us every day. Its scope is like that of racism in an economy of slavery. There are currently no laws to regulate it, though at least one famous psychologist has endorsed the extension of mental health definitions for the sole purpose of protecting some evaluative minorities via disability legislation.

In one sense, Hartry Field was wrong that disagreements between evaluative types are intractable – as in conflicts between predator and prey, the resolution is necessarily either that each side loses some of the time or that the ecosystem collapses and all parties lose completely. Evaluative-ecosystem management would involve pruning the winners to protect evaluative diversity. It would be to social health what psychiatry and medicine are to mental and physical health.

Dr. Field seems right only when we ignore the evaluative ecosystem and consider our opinions personal, much as prey who shun predators consider their lives personal. To reject the personal perspective, however, would be evaluativistic. There’s the rub: Unlike racism and sexism, evaluativism is not a phase society can grow out of. It is more like speciesism in that ending speciesism between predator and prey would be even more dangerous than failing to regulate it. What we can grow out of is the phase in which evaluativism is unrecognized. Some forms of discrimination call for more sophisticated management, but all need to be managed.

Chris Santos-Lang is writing the book GRIN Free – GRIN Together: How to let people be themselves (and why you should).

Register for Public Lands Seminar in Yellowstone

We want to make sure that our higher ed NCDDers know that there are a few more spaces left for a great program on dealing with public issues being held in beautiful Yellowstone National Park this July 27 – August 1. The program is hosted for higher ed professionals by the AASCU. We’ve shared the NIFI announcement about the program below, and you can learn more from the program page here.


How does a democracy manage competing but often equally legitimate positions over public resources? How are the rights of all citizens protected in conflicts over public lands? How do universities design courses and programs to help undergraduates develop the understandings and skills necessary to think about, and become engaged in, conflict management and resolution? How do we help undergraduates become more thoughtful, more engaged citizens for our democracy?

The American Democracy Project (ADP) is creating new strategies to answer these questions. For the past ten summers, faculty and administrators from American Association of State Colleges and Universities (AASCU) campuses have spent a week studying political disputes in Yellowstone National Park, the iconic first national park in the world. In the first year, 2005, we studied wolf re-introduction: twenty-six faculty members from 19 campuses spent a week in the Park, first studying the biology and the politics of the issue.

But the most innovative part of the program is when we traveled outside the Park to talk to citizens and activists on both sides of the issue, to understand the controversy from their point of view. At the end of the week-long program, we considered ways that faculty might develop programs on their own campus that focused on (1) national public resource issues such as wolf re-introduction and (2) local public resource issues such as oil drilling on national seashores, wind turbines in state parks, and restoration efforts in wetlands.

Over the last ten summers, we have expanded the focus of the program to examine a variety of conflicts in the entire Yellowstone region, including bison and brucellosis, winter use, wolves, and grizzly bears. Our program is entitled Politics and the Yellowstone Ecosystem. In this program, we spend six days in Yellowstone National Park in a combination of activities, beginning with a study of the science and history of the controversies, listening to scientists and Park rangers. Then we interview local citizens on both sides of the issues, including political activists, business people, ranchers, and other citizens.

The goal of this project is to develop new strategies and new approaches that colleges and universities can use to help undergraduates become thoughtful, informed, and engaged citizens. In a world too often filled with bitter partisan politics, this non-partisan project seeks to move beyond rhetoric and confrontation, providing students with new models that promote understanding and resolution. In a political environment where special interest groups tend to push people to polarized positions, we often try to seek common ground.

The program, held at Mammoth Hot Spring Hotel at the northern end of the Park, begins late afternoon on Monday, July 27th and ends at noon on Saturday, August 1st. The cost of the program is $1,395, which includes five nights’ individual room lodging at Mammoth Hotel (each participant will have a separate hotel room or cabin); all instruction and instructional materials, AV rental, and classroom rental; in-park transportation; and a reception and dinner the first night, as well as several other meals. Space does not allow for guest participation in the full program. However, family members or guests may attend some classroom lectures, a few field trips, and evening films and presentations.

For more information on this program please contact:

George Mehaffy 202.478.4672  mehaffyg[at]aascu[dot]org

Jennifer Domagal-Goldman 202.478.7833  domagalj[at]aascu[dot]org

You can find the NIFI announcement on this seminar by visiting www.nifi.org/en/groups/registration-open-politics-and-yellowstone-ecosystem-july-2015-seminar-yellowstone-national. You can find the AASCU seminar’s page at www.aascu.org/programs/adp/SPL.

On Evaluation and Legitimacy in D&D

Our partners at the Kettering Foundation recently published an insightful interview with Prof. Katie Knobloch of the Center for Public Deliberation – an NCDD organizational member – that we wanted to share here. There’s a lot to learn from Katie’s reflection on the challenges of evaluating and legitimizing D&D work, so we encourage you to read the interview below or find the original piece here.


Does Our Work Really Matter? Deliberative Practitioners Reflect on the Impact of Their Work

As attention to public deliberation has increased, one core interest of researchers has been evaluating the impact of deliberative processes. Researchers, practitioners, elected officials, and participants themselves want to know if what they’re doing matters. Does public deliberation impact policy? Does it change our attitudes toward issues? Does it adhere to democratic ideals?

Professor Katherine R. Knobloch has been intimately involved in evaluation work, refining our understanding of these questions. Former research assistant Jack Becker sat down with her to talk about her work around evaluation, as well as her work with the Oregon Citizens’ Initiative Review.

Katherine R. Knobloch is an assistant professor and the associate director of the Center for Public Deliberation in the Department of Communication Studies at Colorado State University. Her research and teaching focus on political communication and civic engagement, specifically exploring how deliberative public processes can create a more informed and engaged citizenry. For this work, she has received a grant from the National Science Foundation to study the expansion of a new governing institution, the Citizens’ Initiative Review. Her work has appeared in The Journal of Applied Communication Research, Politics, and the International Journal of Communication.

Jack Becker: Your work explores the development, evaluation, and impact of deliberative public processes. How do you compartmentalize each of these in your research?

Katherine R. Knobloch: The central element of interest is figuring out how to implement deliberative practices in ways that matter. To look at the development of public deliberation, I talk with people about what goes into running organizations, how they work with public officials to implement their processes, and how they got involved in public deliberation. I do a lot of fieldwork and observations to examine this.

For evaluation, I have worked alongside a number of scholars to develop a coding scheme that allows us to break the deliberative process out into segments. We then use that scheme to judge the deliberations against goals that practitioners identified and goals and definitions that we as researchers have developed to analyze if processes are fulfilling democratic and deliberative standards.

For example, we have used an updated definition from John Gastil’s Political Communication and Deliberation (2008), that deliberation is an analytic information gathering process, a democratic discussion process, and a decision-making process. I will also spend time observing participants and getting feedback from them directly, asking, for example, did they reach their goals? Did they uphold deliberative criteria? I will also do a pre- and post-survey of participants to examine a variety of factors, such as attitudinal changes.

To look at impact, with the Citizens’ Initiative Review (CIR) process, for example, we are looking at whether the process has an impact in how voters make their decisions. Do people read the CIR statement? Do they find the information valuable?

You have a chapter in Democracy in Motion: Evaluating the Practice and Impact of Deliberative Civic Engagement (2012) in which you and your coauthors lay out criteria for evaluating deliberative public processes. What is it we learn from evaluating deliberation, and what are our challenges?

I think we’re looking to refine our methods. I’m concerned that we do evaluation in an inefficient way. Much of my own work in evaluating deliberation relies on grants, and that’s not sustainable, particularly for small organizations that lack the capacity to get large grants and do the evaluative work. So we need to figure out what survey methods are best and how they can be refined to make it easier for practitioners to regularly evaluate their work.

For the CIR, we wanted to start a coding scheme that would be applicable across deliberative events. Deliberative processes are dynamic, and that’s another challenge to the work of evaluation. During deliberative processes, the agenda may change in real time, and in the past, we’ve changed coding schemes, but now we’re trying to use the same coding scheme and develop one that will work in other deliberative processes. The goal of evaluation is to be able to look back and say what the most valuable results from a process are.

Are we seeing more practitioners evaluate their own work?

I think that’s been a trend in recent years. More people want to know if their work is doing what they say it’s doing. Also, they want to know if it is effective in impacting communities, organizations, and people.

I attended a session at the National Conference on Dialogue and Deliberation this year that was focused on practitioners and academics getting on the same page with evaluation. One of the challenges is that everyone is working off of different frameworks. Josh Lerner with Participatory Budgeting (PB) pointed out at the conference how many different teams are evaluating PB processes. So they are trying to create at least a funnel point to gather and synthesize that information.

I’ve been talking about civic infrastructure with people for the past year. How do innovations such as the Citizens’ Initiative Review (CIR) fit into ideas of civic infrastructure?

One of the most important breakthroughs for the CIR in Oregon is that it is a legitimate and formal part of the governing process. I think effective public engagement matters. It’s important for participants to come away from deliberative processes feeling like their participation was purposeful and that it could have a real impact on public decision making. I think that’s the legitimizing part of the CIR. It legitimizes deliberation as part of governance. Ideally, we would like to see more processes like these become embedded in government as ways to improve the quality of our civic infrastructure.

Organizations, practitioners, and theorizers are taking this process seriously. As a field, deliberation faces the challenge of implementing decisions that publics make at deliberative events. So people make decisions through deliberative processes, but then decision makers decide whether to use those decisions. So the CIR specifically addresses that problem, in that recommendations go right to voters in the voters guide for their consideration. The CIR finds a way to make those decisions matter at the policymaking level.

Participatory Budgeting is a wonderful example of making things matter for people as well. City councils and city governments are handing over portions of their budget to citizen decision making, showing that citizens have the capacity to make these decisions.

So part of the success I hear there is that they are creating connections to the decision-making process by working with decision makers. Are elected and appointed officials into this?

I think there are more city officials who are into deliberation. It may be wishful thinking, but I see city officials taking citizen voice more seriously. I think they want to understand what citizens want and why. Even President Obama making the call for a discussion on mental health is a good example. And models like the Colorado State University Center for Public Deliberation are great examples of linking deliberative practice more directly with city officials and providing recommendations to city councils in ways that are impactful.

Hawaii state senator Les Ihara Jr. stressed to me in a recent interview the importance of meeting elected officials where they are. Does this resonate as a productive approach to growing deliberative practice?

Legislators are often wary of the initiative process since the policy or legislation is created without a connection to the resource allocation process. So it creates a misalignment in the policymaking process. Legislators are open to ideas for improving the initiative process. And so in Oregon, officials were interested in how to improve that process and saw that the CIR could potentially bring more alignment to it.

So in developing the Citizens’ Initiative Review, to what extent was the process driven by government officials in demanding these changes?

It was really driven by the founders of the CIR who were not a part of government. When they first proposed the CIR, they had a conversation with the Oregon Secretary of State who asked them to run a pilot. The founders of the process drove it. But they worked closely alongside legislators and public officials to identify what they thought would be useful to improve the process and to make sure it met the needs for Oregon as a whole. And the legislators of Oregon asked for a thorough evaluation of the process during the pilot, exploratory phase. So it really comes back to the importance of evaluation in growing deliberative practice.

The original version of this Kettering Foundation interview can be found at http://kettering.org/kfnews/does-our-work-matter.

Apply to the Summer Institute of Civic Studies by Mar. 15

We encourage NCDD members to apply to be part of the 7th annual Summer Institute of Civic Studies this June 15-25, and also to consider attending the 2015 Frontiers of Democracy conference this June 24-25. Both events are institutions in the field, stewarded by NCDD supporting members Drs. Peter Levine and Nancy Thomas of Tufts University.

I myself am a Summer Institute alumnus and have attended multiple Frontiers conferences, and they are both great opportunities to learn and work with many of the nation’s leaders of civic innovation. Find out more about both in the announcement below or by clicking here.



The 7th Annual Summer Institute of Civic Studies

The seventh annual Summer Institute of Civic Studies will be an intensive, two-week, interdisciplinary seminar bringing together faculty, advanced graduate students, and practitioners from many countries and diverse fields of study.

Organized by Peter Levine of Tufts University’s Tisch College and Karol Sołtan of the University of Maryland, the Summer Institute will engage participants in challenging discussions of such topics as:

  • What kinds of citizens (if any) do good regimes need?
  • What should such citizens know, believe, and do?
  • What practices and institutional structures promote the right kinds of citizenship?
  • What ought to be the relationships among empirical evidence, ethics, and strategy?

The syllabus for the sixth annual seminar (in 2014) is here.

NCDD Member Orgs Form New PB Research Board

In case you missed it, the Participatory Budgeting Project and Public Agenda – two key NCDD organizational members – announced last fall that they have formed the first North American research board to study the participatory budgeting process. Not only is this an important and exciting development for the field, but we are proud to count two NCDD members – Matt Leighninger and Paolo Spada – among the new board. Read the announcement below or find the original version here.


PBP and Public Agenda are facilitating the launch of the North American Participatory Budgeting Research Board with various participatory budgeting (PB) evaluators, academics, and researchers. Shortly after the 3rd International Conference on PB in North America, we came together in Oakland for our first meeting.

The goal of the board is to support the evaluation of PB processes across the US and Canada and guide a broader research agenda for PB. Over the years of PB in North America, many board members have already been informally collaborating and supporting one another’s work. With the rapid growth of PB in North America, we see the importance of establishing a formal infrastructure to further strengthen and promote research and evaluation.

The First Meeting and Historical Context

On a Sunday morning in Oakland in September, a group of leading researchers and evaluators converged at the PBP office for the first meeting of the North American PB Research Board. It was a rare and exciting moment: two hours of deep discussion amongst passionate individuals who have committed countless hours, and sometimes entire careers, to researching and evaluating PB processes in North America and overseas. This had the feeling of something that could make a vital contribution to the spread and improvement of PB in North America.

Research and evaluation have long been central features of North American PB processes. Academic researchers from diverse backgrounds have been fascinated with measuring the contribution of PB to social justice and the reform of democratic institutions. Local evaluation teams, particularly in NYC and Chicago, have conducted huge data collection efforts on an annual basis to ensure that fundamental questions such as “who participates?” and “what are the impacts of PB?” can be accurately answered.

Often the agendas of these researchers and evaluators have overlapped and presented opportunities for collaboration. PBP has played a key role in supporting both research and evaluation but, with the rapid expansion of PB in North America, we recognized the need for a more formal research and evaluation infrastructure in order to measure and communicate the impacts of PB across cities.

Partnering to Build Expertise and Capacity

Having identified this need, we saw the opportunity to partner with Public Agenda, a non-profit organization based in NYC with vast experience in research and public engagement. With leadership from Public Agenda, support from PBP, and contributions from leading researchers, the North American PB Research Board generates new capacity to expand and deepen PB.

Over 2014-2015 the board will have 17 members, including experienced PB evaluators and researchers based at universities and non-profit organizations.

2014-2015 North American PB Research Board

  • Gianpaolo Baiocchi, New York University
  • Thea Crum, Great Cities Institute, University of Illinois-Chicago
  • Benjamin Goldfrank, Seton Hall University
  • Ron Hayduk, Queens College, CUNY
  • Gabe Hetland, University of California-Berkeley
  • Alexa Kasdan, Community Development Project, Urban Justice Center
  • Matt Leighninger, Deliberative Democracy Consortium
  • Erin Markman, Community Development Project, Urban Justice Center
  • Stephanie McNulty, Franklin and Marshall College
  • Ana Paula Pimental Walker, University of Michigan
  • Sonya Reynolds, New York Civic Engagement Table
  • Daniel Schugurensky, Arizona State University
  • Paolo Spada, Participedia
  • Celina Su, Brooklyn College, CUNY
  • Rachel Swaner, New York University
  • Brian Wampler, Boise State University
  • Rachel Weber, Great Cities Institute, University of Illinois-Chicago
  • Erik Wright, University of Wisconsin-Madison

NCDD congratulates everyone involved in taking this important step forward for PB and for the field! To find the original announcement about the Research Board, visit www.participatorybudgeting.org/blog/new-research-board-to-evaluate-pb.

IF Offers Discussion Guide on Climate Change

The next round of UN climate talks began this week in Lima, Peru, and as global leaders debate how to avert the worst effects of climate change, our communities also need to be having conversations about this pressing topic. We learned from our members at the 2014 National Conference on Dialogue & Deliberation that D&D practitioners want more resources that will help them have real and productive conversations on this difficult topic.

Lucky for us, the Interactivity Foundation (or IF) – one of the wonderful sponsors of our conference – recently created a resource for exactly that. Based on three years of online discussions with international input on climate change and the lessons learned from their signature Project Discussions on the subject, IF produced a report on the discussions called “Human Impact on Climate Change: Opportunities & Challenges.” The report serves as a discussion guide designed to use non-ideological language that helps participants to separate potential policy directions from partisan agendas and arguments over science, and to explore possibilities for how they or their communities might respond.

The easy-to-use, 40-page guide frames the possibilities that discussion participants can consider in two categories. The first, “Setting the Stage”, focuses on possibilities for immediate impact on awareness and action, and the second, “Meeting the Continuing Climate Challenge”, focuses on the more complicated, long-term approaches needed to impact infrastructure and natural systems.

Here is how the report has framed six different possibilities for participants to discuss:

Possibilities for Setting the Stage

A. Promote Climate Awareness – Improve understanding of climate impact, climate science, and possible approaches.
B. Change Consumer Habits – Focus on human consumption as a source of carbon and greenhouse gas emissions.
C. Go for Results – Identify efficient and low-cost solutions that are available for short-term action.

Possibilities for Meeting the Continuing Climate Challenge

D. Heal the Planet – Plan and implement long-range recovery and rehabilitation of ecosystems.
E. Deal With a Different World – Adapt to changed conditions and plan for climate emergencies.
F. Focus on the Developing World – Assist developing nations in reducing climate impact activities and adopting clean technologies.

The guide expounds on all six of these frames as starting points for in-depth conversation and deliberation, and offers example policy suggestions grounded in all six frames for participants to explore. It also includes a great list of additional resources to help facilitate further conversations at the end.

With the wide range of perspectives and the depth of feelings that the general public has about the topic of climate change, this kind of resource can be an indispensable tool for those of us seeking to have effective deliberations on the topic that can move our communities forward without descending into divisive and counterproductive arguments. We highly encourage you to take a look at IF’s “Human Impact on Climate Change: Opportunities & Challenges” discussion report and use it to help you host these vital conversations.

To help these conversations be more inclusive and accessible, IF has made a PDF of the report available in both English and Spanish, and you can also view it online. You can go directly to the report summary page by clicking here, and there is even a Facebook discussion group based on the report. We hope that this great resource will help your communities, organizations, or institutions have better discussions about this challenging issue.

To learn more about the Interactivity Foundation and its innovative work, visit www.interactivityfoundation.org. Thanks so much to IF for creating this amazing resource!

Help Everyday Democracy Learn, Win $30

Our partners with Everyday Democracy, one of NCDD’s long-term organizational members, are offering a great opportunity – from now until December 19th, they are seeking input from the engagement community about what kinds of issues we care about and what resources we need. They have created a survey that they will use to help develop future tools and resources for dialogue on community issues – on top of the great resources they already offer – and if you take the survey, you will have a chance to win one of five $30 Amazon gift cards.

We know that many of our NCDD members use Everyday Democracy’s tools and resources, so we strongly encourage you to take their quick 10-minute survey by visiting www.surveymonkey.com/s/2NQTPXZ. You can also find it on Everyday Democracy’s website by clicking here.

The survey closes on December 19th, so we recommend you fill it out now before the holiday rush starts. Soon after the survey ends, EvDem will be sharing the high-level lessons and reflections they take from the survey back out into the community, which promises to be a helpful learning tool in itself, so keep an eye out for that down the line.

We hope you’ll fill out the survey soon! You’ll be helping your D&D community, and you could be getting a little extra money for holiday shopping, too!