Techniques and Technologies for Mobilizing Citizens: Do They Work?


Techniques for mobilizing citizens to vote in elections have become highly sophisticated, in large part thanks to get-out-the-vote (GOTV) research, which has produced fascinating experimental evidence from interventions to increase turnout. Until recently, the use of these techniques was mostly limited to electoral processes, often resorting to resource-intensive tactics such as door-to-door canvassing and telemarketing campaigns. But as digital technologies such as email and SMS lower the costs of targeting and contacting individuals, the adaptation of these practices to participatory processes is becoming increasingly common. This leads to the question: how effective are GOTV-type efforts when using technology outside of the electoral realm?

One of the first (if not the first) efforts to bring technology and GOTV techniques to non-electoral processes took place in the participatory budgeting (PB) process of the municipality of Ipatinga, Brazil, in 2005. An automated system of targeted phone calls to landlines was deployed, with a recorded voice message from the mayor informing residents of the time and location of the PB meeting closest to them. Fast forward more than a decade, and New Yorkers can receive personalized text messages on their phones indicating the nearest PB polling location. Far from a mere coincidence, the New York case illustrates a growing trend in participatory initiatives that – consciously or not – combine technology with traditional GOTV techniques to mobilize participation.

However, unlike for GOTV in elections, little is known about the effects of these efforts in participatory processes; I briefly speculated about the reasons for this in a previous post. We have just published a study in the British Journal of Political Science that, we hope, starts to reduce this gap between practice and knowledge. Entitled “A Get-Out-the-Vote Experiment on the World’s Largest Participatory Budgeting Vote in Brazil”, the study is co-authored by Jonathan Mellon, Fredrik M. Sjoberg and me. The experiment was conducted in close collaboration with the State Government of Rio Grande do Sul (Brazil), which runs the world’s largest participatory budgeting process.

In the experiment, over 43,000 citizens were randomly assigned to receive email and text messages encouraging them to take part in the PB voting process. We used voting records to assess the impact of these messages on turnout and support for public investments. The turnout effect we document in the study is substantially larger than what has been found in most previous GOTV studies, and particularly those focusing on the effect of technologies like email and SMS. The increase in participation, however, did not alter which projects were selected through the PB vote: voters in the control and treatment groups shared the same preferences. In the study, we also assessed whether different message framing (e.g. intrinsic versus extrinsic) mattered. Not that much, we found, and a lottery incentive treatment had the opposite effect to what many might expect. Overall, our experiment suggests that tech-enabled GOTV approaches in participatory processes are rather promising if increasing levels of participation is one of the goals. But the “more research is needed” disclaimer, as usual, applies.
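For readers curious about the mechanics, here is a minimal sketch of how a randomized GOTV encouragement design of this kind can be analyzed. Everything in it is an assumption for illustration: the field names, arm labels, baseline turnout and effect size are invented, and this is not the study’s actual code or data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical participant file (field names are assumptions).
voters = pd.DataFrame({"voter_id": np.arange(43_000)})

# Randomly assign each person to control or an encouragement arm.
voters["arm"] = rng.choice(["control", "email", "sms"], size=len(voters))

# Simulate turnout that would, in reality, be merged in from official
# voting records; the baseline rate and 2-point lift below are made up.
base, lift = 0.07, 0.02
p_vote = np.where(voters["arm"] == "control", base, base + lift)
voters["voted"] = rng.random(len(voters)) < p_vote

# Intention-to-treat estimate: turnout difference vs the control group.
turnout = voters.groupby("arm")["voted"].mean()
itt = turnout[["email", "sms"]] - turnout["control"]
print(turnout.round(4))
print("ITT estimates:\n", itt.round(4))
```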

You can find the final study (gated version) here, and the pre-published (open) version here.

 

Catching up (again!) on DemocracySpot

It’s been a while since the last post here. To make up for it, it has not been a bad year in terms of getting some research out there. First, we finally managed to publish “Civic Tech in the Global South: Assessing Technology for the Public Good.” With a foreword by Beth Noveck, the book is edited by Micah Sifry and me, with contributions by Evangelia Berdou, Martin Belcher, Jonathan Fox, Matt Haikin, Claudia Lopes, Jonathan Mellon and Fredrik Sjoberg.

The book comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews evidence on the use of twenty-three information and communication technology (ICT) platforms designed to amplify citizen voices to improve service delivery. Focusing on empirical studies of initiatives in the global south, the authors highlight both citizen uptake (yelp) and the degree to which public service providers respond to expressions of citizen voice (teeth). The first evaluation looks at U-Report in Uganda, a mobile platform that runs weekly large-scale polls with young Ugandans on a number of issues, ranging from access to education to early childhood development. The second takes a closer look at MajiVoice, an initiative that allows Kenyan citizens to report, through multiple channels, complaints about water services. The third evaluation examines the case of Rio Grande do Sul’s participatory budgeting – the world’s largest participatory budgeting system – which allows citizens to participate either online or offline in defining the state’s yearly spending priorities. While the comparative study has a clear focus on the dimension of government responsiveness, the evaluations examine civic technology initiatives through five distinct dimensions, or lenses. The choice of these lenses is the result of an effort that brought together researchers and practitioners to develop an evaluation framework suitable for civic technology initiatives.

The book is a joint publication of the World Bank and Personal Democracy Press. You can download it for free here.

Women create fewer online petitions than men — but they’re more successful


Another recent publication was a collaboration between Hollie R. Gilman, Jonathan Mellon, Fredrik Sjoberg and me. By examining a dataset covering Change.org online petitions from 132 countries, we assess whether online petitions may help close the gap in participation and representation between women and men. Tony Saich, director of Harvard’s Ash Center for Democratic Innovation (publisher of the study), puts our research into context nicely:

The growing access to digital technologies has been considered by democratic scholars and practitioners as a unique opportunity to promote participatory governance. Yet, if the last two decades is the period in which connectivity has increased exponentially, it is also the moment in recent history that democratic growth has stalled and civic spaces have shrunk. While the full potential of “civic technologies” remains largely unfulfilled, understanding the extent to which they may further democratic goals is more pressing than ever. This is precisely the task undertaken in this original and methodologically innovative research. The authors examine online petitions which, albeit understudied, are one of the fastest growing types of political participation across the globe. Drawing from an impressive dataset of 3.9 million signers of online petitions from 132 countries, the authors assess the extent to which online participation replicates or changes the gaps commonly found in offline participation, not only with regards to who participates (and how), but also with regards to which petitions are more likely to be successful. The findings, at times counter-intuitive, provide several insights for democracy scholars and practitioners alike. The authors hope this research will contribute to the larger conversation on the need of citizen participation beyond electoral cycles, and the role that technology can play in addressing both new and persisting challenges to democratic inclusiveness.

But what do we find? Among other interesting things, we find that while women create fewer online petitions than men, they’re more successful at it! This article in the Washington Post summarizes some of our findings, and you can download the full study here.

Other studies that were recently published include:

The Effect of Bureaucratic Responsiveness on Citizen Participation (Public Administration Review)

Abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Does online voting change the outcome? Evidence from a multi-mode public policy referendum (Electoral Studies)

Abstract:

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

We still plan to publish a few more studies this year: one looking at digitally enabled get-out-the-vote (GOTV) efforts, and two others examining the effects of participatory governance on citizens’ willingness to pay taxes (including a fun experiment in 50 countries across all continents).

In the meantime, if you are interested in a quick summary of some of our recent research findings, this 30-minute video of my keynote at the last TicTEC Conference in Florence should be helpful.

 

 

New Papers Published: FixMyStreet and the World’s Largest Participatory Budgeting


Voting in Rio Grande do Sul’s Participatory Budgeting (picture by Anderson Lopes)

Here are two newly published papers that my colleagues Jon Mellon, Fredrik Sjoberg and I have been working on.

The first, The Effect of Bureaucratic Responsiveness on Citizen Participation, published in Public Administration Review, is – to our knowledge – the first study to quantitatively assess at the individual level the often-assumed effect of government responsiveness on citizen engagement. It also describes an example of how the data provided through digital platforms may be leveraged to better understand participatory behavior. This is the fruit of a research collaboration with MySociety, to whom we are extremely thankful.

Below is the abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.
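As a rough illustration of the headline result, here is a minimal sketch of the comparison involved: the probability of a second report conditional on whether the first report was resolved. The field names and probabilities are invented to mimic the paper’s 57 percent figure; this is not the paper’s actual model (which is considerably richer), just a toy calculation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical per-user records: was the user's first report fixed,
# and did they ever submit a second one? (Invented variables.)
n = 10_000
users = pd.DataFrame({"first_report_fixed": rng.random(n) < 0.4})
p_second = np.where(users["first_report_fixed"], 0.33, 0.21)  # made up
users["second_report"] = rng.random(n) < p_second

# Probability of a second report, by outcome of the first report.
rates = users.groupby("first_report_fixed")["second_report"].mean()
relative_increase = rates.loc[True] / rates.loc[False] - 1
print(rates.round(3))
print(f"Relative increase: {relative_increase:.0%}")  # roughly 57%
```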

An earlier, ungated version of the paper can be found here.

The second paper, Does Online Voting Change the Outcome? Evidence from a Multi-mode Public Policy Referendum, has just been published in Electoral Studies. In an earlier JITP paper (ungated here) looking at Rio Grande do Sul State’s Participatory Budgeting – the world’s largest – we show that, compared to offline voting, online voting tends to attract participants who are younger, male, of higher income and educational attainment, and more frequent social media users. Yet one question remained: does the inclusion of new participants with a different profile change the outcomes of the process (i.e., which projects are selected)? Below is the abstract of the paper.

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.
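In aggregate terms, the comparison boils down to whether the distribution of vote shares across projects differs by voting mode. A minimal sketch of that comparison follows; the project names and vote counts are invented (the real referendum involved many more options), so this only illustrates the shape of the analysis, not the paper’s actual method.

```python
import pandas as pd

# Hypothetical per-mode vote totals for four projects (made-up numbers).
votes = pd.DataFrame(
    {"offline": [2_100_000, 1_600_000, 1_200_000, 800_000],
     "online": [500_000, 360_000, 270_000, 170_000]},
    index=["project_a", "project_b", "project_c", "project_d"],
)

# Within-mode vote shares, and the per-project gap between modes.
shares = votes / votes.sum()
gap = (shares["offline"] - shares["online"]).abs()
print(shares.round(3))
print(f"Largest per-project share gap: {gap.max():.1%}")
# Small gaps across all projects would indicate that online and offline
# voters expressed similar policy preferences in the aggregate.
```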

The extent to which these findings are transferable to other PB processes that combine online and offline voting remains an empirical question. In the meantime, nonetheless, these findings suggest a more nuanced view of the potential effects of digital channels as a supplementary means of engagement in participatory processes. I hope to share an ungated version of the paper in the coming days.

Catching Up on DemocracySpot


It’s been a while, so here’s a miscellaneous post with things I would normally share on DemocracySpot.

Yesterday the beta version of the Open Government Research Exchange (OGRX) was launched. Intended as a hub for research on innovations in governance, the OGRX is a joint initiative by NYU’s GovLab, MySociety and the World Bank’s Digital Engagement Evaluation Team (DEET) (which, full disclosure, I lead). As the “beta” suggests, this is an evolving project, and we look forward to receiving feedback from those who either work with or benefit from research in open government and related fields. You can read more about it here.

Today we also launched the Open Government Research mapping. Same story, just an “alpha” version this time. There is a report and a mapping tool that situates different types of research across the opengov landscape. Feedback on how we can improve the mapping tool – or tips on research that we should include – is extremely welcome. More background about this effort, which brings together Global Integrity, Results for Development, GovLab and the World Bank, can be found here.


And for those who have not seen it yet, the DEET team published the Evaluation Guide for Digital Citizen Engagement a couple of months ago. Commissioned and overseen by DEET, the guide was developed and written by Matt Haikin (lead author), Savita Bailur, Evangelia Berdou, Jonathan Dudding, Cláudia Abreu Lopes, and Martin Belcher.

And here is a quick roundup of things I would have liked to have written about since my last post had I been a more disciplined blogger:

  • A field experiment in rural Kenya finds that “elite control over planning institutions can adapt to increased mobilization and participation.” I tend to disagree a little with the author’s conclusion, which emphasizes the role of “power dynamics that allow elites to capture such institutions” in explaining his findings (some of the issues seem to be a matter of institutional design). In any case, it is a great study and I strongly recommend reading it.
  • A study examining a community-driven development program in Afghanistan finds a positive effect on access to drinking water and electricity, acceptance of democratic processes, perceptions of economic wellbeing, and attitudes toward women. However, effects on perceptions of government performance were limited or short-lived.
  • A great paper by Paolo de Renzio and Joachim Wehner reviews the literature on “The Impacts of Fiscal Openness”. It is a must-read for transparency researchers, practitioners and advocates. I just wish the authors had included some research on the effects of citizen participation on tax morale.
  • Also related to tax, “Consumers as Tax Auditors” is a fascinating paper on how citizens can take part in efforts to reduce tax evasion while participating in a lottery.
  • Here is a great book about e-Voting and other technology developments in Estonia. Everybody working in the field of technology and governance knows Estonia does an amazing job, but information about it is often scattered and, sometimes, of low quality. This book, co-authored by my former colleague Kristjan Vassil, addresses this gap and is a must-read for anybody working with technology in the public sector.
  • Finally, I got my hands on the pictures of the budget infograffitis (or data murals) in Cameroon, an idea that emerged a few years ago when I was involved in a project supporting participatory budgeting in Yaoundé (which also produced Open Spending Cameroon). I do hope this idea of bringing data visualizations into the offline world catches on. After all, that is valuable data in a citizen-readable format.

(pictures by ASSOAL)

I guess that’s it for now.

New IDS Journal – 9 Papers in Open Government


The new IDS Bulletin is out. Edited by Rosemary McGee and Duncan Edwards, this is the first open-access issue of the Institute of Development Studies’ well-known journal. It brings eight new studies looking at a variety of open government issues, ranging from the uptake of digital platforms to government responsiveness in civic tech initiatives. Below is a brief presentation of this issue:

Open government and open data are new areas of research, advocacy and activism that have entered the governance field alongside the more established areas of transparency and accountability. In this IDS Bulletin, articles review recent scholarship to pinpoint contributions to more open, transparent, accountable and responsive governance via improved practice, projects and programmes in the context of the ideas, relationships, processes, behaviours, policy frameworks and aid funding practices of the last five years. They also discuss questions and weaknesses that limit the effectiveness and impact of this work, offer a series of definitions to help overcome conceptual ambiguities, and identify hype and euphemism. The contributions – by researchers and practitioners – approach contemporary challenges of achieving transparency, accountability and openness from a wide range of subject positions and professional and disciplinary angles. Together these articles give a sense of what has changed in this fast-moving field, and what has not – this IDS Bulletin is an invitation to all stakeholders to take stock and reflect.

The ambiguity around the ‘open’ in governance today might be helpful in that its very breadth brings in actors who would otherwise be unlikely adherents. But if the fuzzier idea of ‘open government’ or the allure of ‘open data’ displace the task of clear transparency, hard accountability and fairer distribution of power as what this is all about, then what started as an inspired movement of governance visionaries may end up merely putting a more open face on an unjust and unaccountable status quo.

Among others, the journal presents an abridged version of a paper by Jonathan Fox and me on digital technologies and government responsiveness (the full version can be downloaded here).

Below is a list of all the papers:

Rosie McGee, Duncan Edwards
Tiago Peixoto, Jonathan Fox
Katharina Welle, Jennifer Williams, Joseph Pearce
Miguel Loureiro, Aalia Cassim, Terence Darko, Lucas Katera, Nyambura Salome
Elizabeth Mills
Laura Neuman
David Calleb Otieno, Nathaniel Kabala, Patta Scott-Villiers, Gacheke Gachihi, Diana Muthoni Ndung’u
Christopher Wilson, Indra de Lanerolle
Emiliano Treré

 

Ripple Effect Mapping: A “Radiant” Way to Capture Program Impacts

A group of leaders in Extension programs created a participatory group process designed to document the results of Extension educational efforts within complex, real-life settings. The method, known as Ripple Effect Mapping, uses elements of Appreciative Inquiry, mind mapping, and qualitative data analysis to engage program participants and other community stakeholders to reflect upon and visually map the intended and unintended changes produced by Extension programming. The result is not only a powerful technique to document impacts, but a way to engage and re-energize program participants.

Ripple Effect Mapping can be used to help unearth and document the divergent outcomes that result from dialogue and deliberation programs.

This article in the Journal of Extension was published in October 2012 (Volume 50, Number 5). Authors include Debra Hansen Kollock of Stevens County Extension, Lynette Flage of North Dakota State University Extension, Scott Chazdon of University of Minnesota Extension, Nathan Paine of the University of Minnesota, and Lorie Higgins of the University of Idaho.

Introduction

Evaluating the changes in groups, organizations, or communities resulting from Extension programming is difficult and challenging (Smith & Straughn, 1983), yet demonstrating impacts is critical for continued investment (Rennekamp & Arnold, 2009).

Ripple Effect Mapping (REM) is a promising method for conducting impact evaluation that engages program and community stakeholders to retrospectively and visually map the “performance story” (Mayne, 1999; Baker, Calvert, Emery, Enfield, & Williams, 2011) resulting from a program or complex collaboration. REM employs elements of Appreciative Inquiry, mind mapping, and qualitative data analysis.

REM was used to conduct an impact analysis of the Horizons program, an 18-month community-based program designed to strengthen leadership and reduce poverty. The method (Kollock, 2011) was piloted in Washington, Idaho, and North Dakota Horizons communities to illustrate outcomes of the program over time. While there were minor process variations in each state, the REM technique in all three states used maps to show community members what had been accomplished and to further their enthusiasm for taking action on issues.

Background

Historically, the standard approach for impact evaluation has been experimental research. Yet critics of experimental approaches emphasize that these designs are often politically unfeasible and yield very little useful information on a program’s implementation or its context (Patton, 2002).

REM, an example of qualitative methodology based on open-ended group interviewing, provides “respectful attention to context” (Greene, 1994: 538) and better addresses the concerns of program stakeholders. The participatory group aspect of this method engages participants and others to produce high-quality evaluation data and increases the likelihood of future collective action.

REM is a form of mind mapping, a diagramming process that represents connections hierarchically (Eppler, 2006: 203). A fundamental concept behind REM is radiant thinking (Buzan, 2003), which refers to the brain’s associative thought processes that derive from a central point and form links between integrated concepts (Wheeler & Szymanski, 2005; Bernstein, 2000). This makes REM an ideal tool for brainstorming, memorizing, and organizing.

Description of the Method

The steps involved in Ripple Effect Mapping are:

  1. Identifying the intervention: REM is best conducted for in-depth program interventions or collaborations that are expected to produce broad or deep changes in a group, organization, or community.
  2. Scheduling the event and inviting participants: The REM process includes both direct program participants and non-participant stakeholders. This latter group offers a unique perspective and a form of external validation to verify the “performance stories” of program participants. A group of eight to fifteen participants is ideal.
  3. Appreciative Inquiry Interviews: At the beginning of the REM event, participants are paired up and instructed to interview each other about particular ways the program affected their lives or particular achievements or successes they have experienced as a result of the program (Cooperrider & Whitney, 2007).
  4. Mapping: The core of the session involves group mapping, using mind mapping software (Donaldson, 2010) or paper and tape on a wall, to brainstorm and hierarchically map the effects or “ripples” of the intervention. This process engages the entire group and provides opportunities for participants to make connections among program effects. The process is co-led by a facilitator and a “mapper” and is typically completed in one to two hours.
  5. Cleaning, Coding, and Analysis: After the session, the evaluator may need to reorganize the mind map and collect additional detail by interviewing other participants. The data produced in the mapping process can be downloaded into a spreadsheet program and coded in a variety of ways, as sketched after this list. For example, the “ripples” can be coded as short-term knowledge, skill, or attitude changes; medium-term behavior changes; and long-term changes in conditions. Furthermore, these changes in conditions can be coded using the Community Capitals Framework (Emery & Flora, 2006; Rasmussen, Armstrong, & Chazdon, 2011).
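To make the coding step concrete, here is a minimal sketch of what coding a ripple-map export might look like. The column names, example effects, and coding rules are all invented for illustration; real exports will vary with the mind mapping software used.

```python
import pandas as pd

# Hypothetical export of ripple items from mind mapping software.
ripples = pd.DataFrame({
    "effect": [
        "Residents learned grant-writing skills",
        "Neighbors now organize monthly clean-ups",
        "Town secured funding for a community garden",
    ],
    "timeframe": ["short", "medium", "long"],  # assumed coding field
})

# Code timeframes into the outcome types described in step 5.
outcome_type = {
    "short": "knowledge/skill/attitude change",
    "medium": "behavior change",
    "long": "change in conditions",
}
ripples["outcome_type"] = ripples["timeframe"].map(outcome_type)

# Long-term changes in conditions can additionally be coded with the
# Community Capitals Framework (social, financial, built capital, etc.).
capitals = {"Town secured funding for a community garden": "financial"}
ripples["capital"] = ripples["effect"].map(capitals)  # NaN if not coded
print(ripples)
```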

Benefits and Limitations

REM is:

  • Simple and cheap. Mind mapping software is available for free or at low cost. It is efficient to gather participants together for one face-to-face meeting rather than to conduct individual interviews.
  • Able to capture impacts of complex work. The technique successfully documents both intended and unintended effects of Extension work. For example, Extension programming often succeeds at building social capital (trust and connections among people). This method allows participants to describe the connections they’ve built as well as what these connections led to.
  • An effective communication tool. The visual nature of ripple maps makes them very useful as a tool to share program effects with stakeholders such as funders or local officials.
  • Motivating. As REM engages participants and stakeholders, it also creates positive energy for further collective action.

The limitations of REM are the risk of bias in participant selection and in data collection. The assembled participants may not have complete information about all the outcomes of a program and may not provide examples of negative consequences. One way to overcome these limitations is to conduct supplementary interviews with additional stakeholders after the session has been completed and to probe for negative consequences during the session.

Example with Map

Figure 1 shows a portion of one community’s Ripple Effect Map from the Horizons program. This section of the map features examples of first-, second-, and third-order “ripples” from the program. The map illustrates the Fort Yates Horizons program, which conducted a study circles conversation that led to the development of a community garden. The community garden project spurred the town to form a Native Garden partnership with the Tribe, which ultimately led to significant grants to support cultural understanding and assist those with limited resources.

Figure 1.
A Segment of a Ripple Effect Map


Conclusion

REM is a useful tool for impact analysis of Extension programming and may be particularly well suited for complex interventions or collaborations. Compared with other methods, it is straightforward, cost effective, and, most important, has the potential to generate further movement towards group, organizational, or community goals. We invite program staff and evaluators in other states to try this method out and engage with us in dialogue about the many uses, benefits, and limitations of this approach.

References

Baker, B., Calvert, M., Emery, M., Enfield, R., & Williams, B. (2011). Mapping the impact of youth on community development: What are we learning? [PowerPoint slides]. Retrieved from: http://ncrcrd.msu.edu/uploads/files/133/Mapping%20Impact%20of%20Youth%20on%20Com%20Dev%2012-3-10.pdf

Bernstein, D. A., Clarke-Stewart, A., Penner, L. A., Roy, E. J., & Wickens, C. D. (2000). Psychology (5th ed.). Boston: Houghton Mifflin.

Buzan, T. (2003). The mind map book. London: BBC Books.

Cooperrider, D. L., & Whitney, D. (2007). Appreciative inquiry: A positive revolution in change. Pp. 73-88 in P. Holman & T. Devane (eds.), The Change Handbook, 2nd edition. San Francisco: Berrett-Koehler Publishers, Inc.

Donaldson, J. (2010). Getting acquainted with free software. Journal of Extension [On-line], 48(3), Article 3TOT7. Available at: http://www.joe.org/joe/2010june/tt7.php

Emery, M., & Flora, C. B. (2006). Spiraling-up: Mapping community transformation with community capitals framework. Community Development: Journal of the Community Development Society, 37(1), 19-35.

Eppler, M. J. (2006). A comparison between concept maps, mind maps, conceptual diagrams, and visual metaphors as complementary tools for knowledge construction and sharing. Information Visualization, 5, 202-210.

Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. Pp. 530-544 in Denzin, N.K. and Lincoln, Y.S., eds. Handbook of qualitative research. Thousand Oaks, CA: Sage.

Kollock, D. A. (2011). Ripple effects mapping for evaluation. Washington State University curriculum. Pullman, WA.

Mayne, J. (1999). Addressing attribution through contribution analysis: Using performance measures sensibly. Retrieved from: http://www.oag-bvg.gc.ca/internet/docs/99dp1_e.pdf

Patton, M. (2002). Qualitative research and evaluation methods. London: Sage Publications.

Rasmussen, C., Armstrong, J., & Chazdon, S. (2011). Bridging Brown County: Captivating social capital as a means to community change. Journal of Leadership Education, 10(1), 63-82.

Rennekamp, R., & Arnold, M. (2009). What progress, program evaluation? Reflections on a quarter-century of Extension evaluation practice. Journal of Extension [On-line], 47(3), Article 3COM1. Available at: http://www.joe.org/joe/2009june/comm1.php

Smith, M. F., & Straughn, A. A. (1983). Impact evaluation: A challenge for Extension. Journal of Extension [On-line], 21(5). Available at: http://www.joe.org/joe/1983september/83-5-a9.pdf

Wheeler, R., & Szymanski, M. (2005). What is forestry: A multi-State, Web-based forestry education program. Journal of Extension [On-line], 43(4), Article 4IAW3. Available at: http://www.joe.org/joe/2005august/iw3.php

Resource Link: http://www.joe.org/joe/2012october/tt6.php

References on Evaluation of Citizen Engagement Initiatives

pic by photosteve101 on flickr

I have been doing some research on works related to the evaluation of citizen engagement initiatives (technology-mediated or not). This is far from exhaustive, but I thought it would be worth sharing with those who stop by here. Also, any help with identifying other relevant sources that I may be missing would be greatly appreciated.


Rethinking Why People Participate


Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.

  1. The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
  2. The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time. The “pilotitis” syndrome in the tech4accountability space is a good example of this.
  3. When designing and implementing participatory processes, in the face of budget constraints the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
  4. The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.

Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, as attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and follow a similar design enables the refinement of hypotheses and research questions over time, as well as specific time-related analysis (see an example here [PDF]). Third, compared to the funds allocated to research on participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring to academic work only, but also to the substantial resources invested by the private sector and parties towards a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than that of citizen engagement, a difference that, in the long run, drastically affects how knowledge of a subject evolves.

These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering.

This is why I was particularly interested in an article from a recently published book, The Behavioral Foundations of Public Policy (Princeton). Entitled “Rethinking Why People Vote: Voting as Dynamic Social Expression”, the article is written by Todd Rogers, Craig Fox and Alan Gerber. Taking a behavioralist stance, the authors start by questioning the usefulness of rationalist models in explaining voting behavior:

“In these [rationalist] models citizens are seen as weighing the anticipated trouble they must go through in order to cast their votes, against the likelihood that their vote will improve the outcome of an election times the magnitude of that improvement. Of course, these models are problematic because the likelihood of casting the deciding vote is often hopelessly small. In a typical state or national election, a person faces a higher probability of being struck by a car on the way to his or her polling location than of casting the deciding vote.”

(BTW, if you are a voter in certain US states, the odds of being hit by a meteorite are greater than those of casting the deciding vote).
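For reference, the rationalist calculus the authors are questioning is conventionally written as follows. This is the standard Downs/Riker-Ordeshook “calculus of voting” as found in textbooks, reconstructed here for convenience rather than quoted from the article:

```latex
% Calculus of voting (Downs 1957; Riker & Ordeshook 1968):
%   R = net reward from voting; the citizen votes if R > 0
%   p = probability of casting the decisive vote (vanishingly small)
%   B = benefit if the preferred candidate or option wins
%   C = costs of voting (time, travel, information)
%   D = expressive/civic-duty benefit, added precisely because pB
%       alone almost never exceeds C
R = pB - C + D
```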

Following on from the fact that traditional models cannot fully explain why and under which conditions citizens vote, the authors develop a framework that considers voting as a “self-expressive voting behavior that is influenced by events occurring before and after the actual moment of casting a vote.” To support their claims, throughout the article the authors build upon existing evidence from GOTV campaigns and other behavioral research. Besides providing a solid overview of the literature in the field, the authors offer compelling arguments for mobilizing electoral participation. Below are a few excerpts from the article with some of the main takeaways:

  • Mode of contact: the more personal it is, the more effective it is

“Initial experimental research found that a nonpartisan face-to-face canvassing effort had a 5-8 percentage point mobilizing effect in an uncontested midterm election in 1998 (Gerber and Green 2000), compared to less than a 1 percentage point mobilizing effect for live phone calls and mailings. More than three dozen subsequent experiments have overwhelmingly supported the original finding (…)”

“Dozens of experiments have examined the effectiveness of GOTV messages delivered by telephone. Several general findings emerge, all of which are consistent with the broad conclusion that the more personal a GOTV strategy, the more effective it is. (…) the most effective calls are conducted in an unhurried, “chatty” manner.”

“The least personal and the least effective GOTV communication channels entail one-way communications. (…) written pieces encouraging people to vote that are mailed directly to households have consistently been shown to produce a small, but positive, increase in turnout.”

  • Voting is affected by events before and after the decision

“One means to facilitate the performance of a socially desirable behavior is to ask people to predict whether they will perform the behavior in the future. In order to present oneself in a favorable light, or because of wishful thinking, or both, people are generally biased to answer in the affirmative. Moreover, a number of studies have found that people are more likely to follow through on a behavior after they have predicted that they will do so (…) Emerging social-networking technologies provide new opportunities for citizens to commit to each other that they will turn out in a given election. These tools facilitate making one’s commitments public, and they also allow for subsequent accountability following an election (…) Asking people to form a specific if-then plan of action, or implementation intention, reduces the cognitive costs of having to remember to pursue an action that one intends to perform. Research shows that when people articulate the how, when and where of their plan to implement an intended behavior, they are more likely to follow through.”

(Not coincidentally, as noted by Sasha Issenberg in his book The Victory Lab, during the 2010 US midterm elections millions of Democrats received an email reminding them that they had “made a commitment to vote in this election” and that “the time has come to make good on that commitment. Think about when you’ll cast your vote and how you’ll get there.”)

“(…) holding a person publicly accountable for whether or not she voted may increase her tendency to do so. (…) Studies have found that when people are merely made aware that their behavior will be publicly known, they become more likely to behave in ways that are consistent with how they believe others think they should behave. (…) At one point, at least, Italy exposed those who failed to vote by posting the names of nonvoters outside of local town halls.”

(On the accountability issue, also read this fascinating study [PDF] by Gerber, Green & Larimer)

  • Following the herd: affinitive and belonging needs

“People are strongly motivated to maintain feelings of belonging with others and to affiliate with others. (…) Other GOTV strategies that can increase turnout by serving social needs could involve encouraging people to go to their polling place in groups (i.e., a buddy system), hosting after-voting parties on election day, or encouraging people to talk about voting with their friends, to name a few.”

“(…) studies showed that the motivation to vote significantly increased when participants heard a message that emphasized high expected turnout as opposed to low expected turnout. For example, in the New Jersey study, 77% of the participants who heard the high-turnout script reported being “absolutely certain” that they would vote, compared to 71% of those who heard the low-turnout script. This research also found that moderate and infrequent voters were strongly affected by the turnout information.”

  • Voting as an expression of identity

“(…) citizens can derive value from voting through what the act displays about their identities. People are willing to go to great lengths, and pay great costs, to express that they are a particular kind of person. (…) Experimenters asked participants to complete a fifteen-minute survey that related to an election that was to occur the following week. After completing the survey, the experimenter reviewed the results and reported to participants what their responses indicated. Participants were, in fact, randomly assigned to one of two conditions. Participants in the first condition were labeled as being “above-average citizen[s] … who [are] very likely to vote,” whereas participants in the second condition were labeled as being “average citizen[s] … with an average likelihood of voting.” (…) These identity labels proved to have substantial impact on turnout, with 87% of “above average” participants voting versus 75% of “average” participants voting.”

For those working with participatory governance, the question that remains is the extent to which each of these lessons is applicable to non-electoral forms of participation. The differences between electoral and non-electoral forms of participation may cause these techniques to generate very different results. One difference relates to public awareness about participation opportunities. While it would be safe to say that during an important election the majority of citizens are aware of it, the opposite is true for most existing participatory events, where generally only a minority is aware of their existence. In this case, it is unclear whether the impact of mobilization campaigns would be more or less significant when awareness about an event is low. Furthermore, if the act of voting may be automatically linked to a sense of civic duty, would that still hold true for less typical forms of participation (e.g. signing an online petition, attending a community meeting)?

The answer to this “transferability” question is an empirical one, and one that is yet to be answered. The good news is that while experiments that generate this kind of knowledge are normally resource-intensive, the costs of experimentation are driven down when it comes to technology-mediated citizen participation. The use of A/B testing during the Obama campaign is a good example. Below is an excellent account by Dan Siroker on how they conducted online experiments during the presidential campaign.

Bringing similar experiments to other realms of digital participation is the next logical step for those working in the field. Some organizations have already started to take this seriously. The issue is whether others, including governments and donors, will do the same.
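To close with something concrete: an A/B test of the kind mentioned above reduces, in its simplest form, to comparing response rates between two randomly assigned variants. Here is a minimal sketch with invented numbers (two hypothetical email subject lines and made-up click counts); production tools handle assignment, logging, and multiple-testing concerns on top of this.

```python
import math

# Hypothetical A/B test: two email subject lines, measured click-throughs.
# All numbers below are made up for illustration.
n_a, clicks_a = 20_000, 1_640   # variant A
n_b, clicks_b = 20_000, 1_840   # variant B

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)

# Two-proportion z-test for the difference in click-through rates.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")
# |z| > 1.96 means the difference is significant at the 5% level,
# so the campaign would ship the better-performing variant.
```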