World Development Report 2016: Digital Dividends


The World Development Report 2016, the main annual publication of the World Bank, is out. This year’s theme is Digital Dividends, examining the role of digital technologies in the promotion of development outcomes. The findings of the WDR are simultaneously encouraging and sobering. Those skeptical of the role of digital technologies in development might be surprised by some of the results presented in the report. Technology advocates from across the spectrum (civic tech, open data, ICT4D) will inevitably come across some facts that should temper their enthusiasm.

While some may disagree with the findings, this Report is an impressive piece of work, spread across six chapters covering different aspects of digital technologies in development: 1) accelerating growth, 2) expanding opportunities, 3) delivering services, 4) sectoral policies, 5) national priorities, 6) global cooperation. My opinion may be biased, as somebody who made some modest contributions to the Report, but I believe that, to date, this is the most thorough effort to examine the effects of digital technologies on development outcomes. The full report can be downloaded here.

The report draws, among other things, from 14 background papers that were prepared by international experts and World Bank staff. These background papers serve as additional reading for those who would like to examine certain issues more closely, such as social media, net neutrality, and the cybersecurity agenda.

For those interested in citizen participation and civic tech, one of the papers written by Prof. Jonathan Fox and myself – When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness? – might be of particular interest. Below is the abstract:

This paper reviews evidence on the use of 23 information and communication technology (ICT) platforms to project citizen voice to improve public service delivery. This meta-analysis focuses on empirical studies of initiatives in the global South, highlighting both citizen uptake (‘yelp’) and the degree to which public service providers respond to expressions of citizen voice (‘teeth’). The conceptual framework further distinguishes between two trajectories for ICT-enabled citizen voice: Upwards accountability occurs when users provide feedback directly to decision-makers in real time, allowing policy-makers and program managers to identify and address service delivery problems – but at their discretion. Downwards accountability, in contrast, occurs either through real time user feedback or less immediate forms of collective civic action that publicly call on service providers to become more accountable and depends less exclusively on decision-makers’ discretion about whether or not to act on the information provided. This distinction between the ways in which ICT platforms mediate the relationship between citizens and service providers allows for a precise analytical focus on how different dimensions of such platforms contribute to public sector responsiveness. These cases suggest that while ICT platforms have been relevant in increasing policymakers’ and senior managers’ capacity to respond, most of them have yet to influence their willingness to do so.

You can download the paper here.

Any feedback on our paper or the models proposed (see below, for instance) would be extremely welcome.


unpacking user feedback and civic action: difference and overlap

I also list below the links to all the background papers and their titles.

Enjoy the reading.


Civic Tech and Government Responsiveness

For those interested in tech-based citizen reporting tools (such as FixMyStreet, SeeClickFix), here’s a recent interview of mine with Jeffrey Peel (Citizen 2015) in which I discuss some of our recent research in the area.

 


Three New Papers (and a presentation) on Civic Tech


This blog has been slow lately, but as I mentioned before, it is for a good cause. With some great colleagues I’ve been working on a series of papers (and a book) on civic technology. The first three of these papers are out. There is much more to come, but in the meantime, you can find below the abstracts and link to each of the papers. I also add the link to a presentation which highlights some other issues that we are looking at.

  • Effects of the Internet on Participation: Study of a Public Policy Referendum in Brazil.

Does online voting mobilize citizens who otherwise would not participate? During the annual participatory budgeting vote in the southern state of Rio Grande do Sul in Brazil – the world’s largest – Internet voters were asked whether they would have participated had there not been an online voting option (i-voting). The study documents an 8.2 percent increase in total turnout with the introduction of i-voting. In support of the mobilization hypothesis, unique survey data show that i-voting is mainly used by new participants rather than just for convenience by those who were already mobilized. The study also finds that age, gender, income, education, and social media usage are significant predictors of being online-only voters. Technology appears more likely to engage people who are younger, male, of higher income and educational attainment, and more frequent social media users.

Read more here.

  • The Effect of Government Responsiveness on Future Political Participation.

What effect does government responsiveness have on political participation? Since the 1940s political scientists have used attitudinal measures of perceived efficacy to explain participation. More recent work has focused on underlying genetic factors that condition citizen engagement. We develop a ‘Calculus of Participation’ that incorporates objective efficacy – the extent to which an individual’s participation actually has an impact – and test the model against behavioral data from FixMyStreet.com (n=399,364). We find that a successful first experience using FixMyStreet.com (e.g. reporting a pothole and having it fixed) is associated with a 54 percent increase in the probability of an individual submitting a second report. We also show that the experience of government responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of government responsiveness for fostering an active citizenry, while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Read more here.

  • Do Mobile Phone Surveys Work in Poor Countries? 

In this project, we analyzed whether mobile phone-based surveys are a feasible and cost-effective approach for gathering statistically representative information in four low-income countries (Afghanistan, Ethiopia, Mozambique, and Zimbabwe). Specifically, we focused on three primary research questions. First, can the mobile phone survey platform reach a nationally representative sample? Second, to what extent does linguistic fractionalization affect the ability to produce a representative sample? Third, how does monetary compensation affect survey completion patterns? We find that samples from countries with higher mobile penetration rates more closely resembled the actual population. After weighting on demographic variables, sample imprecision was a challenge in the two lower-feasibility countries (Ethiopia and Mozambique), with a sampling error of +/- 5 to 7 percent, while Zimbabwe’s estimates were more precise (sampling error of +/- 2.8 percent). Surveys performed reasonably well in reaching poor demographics, especially in Afghanistan and Zimbabwe. Rural women were consistently under-represented in the country samples, especially in Afghanistan and Ethiopia. Countries’ linguistic fractionalization may influence the ability to obtain nationally representative samples, although a material effect was difficult to discern through penetration rates and market composition. Although the experimental design of the incentive compensation plan was compromised in Ethiopia and Zimbabwe, it seems that offering compensation for survey completion mitigated attrition rates in several of the pilot countries while not reducing overall costs. These effects varied across countries and cultural settings.

Read more here.
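The weighting step the abstract mentions is worth unpacking for readers less familiar with survey methods. Below is a minimal sketch of post-stratification weighting, the standard technique for correcting a skewed sample (e.g. under-represented rural women) against known population shares. All the numbers and stratum names are invented for illustration; this is not the paper’s actual code or data.

```python
# Hypothetical sketch of post-stratification weighting: each respondent gets a
# weight equal to (population share of their stratum) / (sample share of their
# stratum), so the reweighted sample mirrors the population.

def poststratify(sample, population_shares):
    """Attach a weight to each respondent so that each stratum's total
    weight matches its share of the population."""
    counts = {}
    for r in sample:
        counts[r["stratum"]] = counts.get(r["stratum"], 0) + 1
    n = len(sample)
    weighted = []
    for r in sample:
        sample_share = counts[r["stratum"]] / n
        weight = population_shares[r["stratum"]] / sample_share
        weighted.append({**r, "weight": weight})
    return weighted

def weighted_mean(sample, key):
    """Weighted mean of a 0/1 outcome (e.g. owns a mobile phone)."""
    total_w = sum(r["weight"] for r in sample)
    return sum(r[key] * r["weight"] for r in sample) / total_w

# Toy data: rural women are 1 of 4 respondents (25%) but 40% of the population,
# so their answers are up-weighted (weight 1.6) and urban men's down-weighted (0.8).
sample = [
    {"stratum": "rural_female", "y": 1},
    {"stratum": "urban_male", "y": 0},
    {"stratum": "urban_male", "y": 0},
    {"stratum": "urban_male", "y": 1},
]
population_shares = {"rural_female": 0.4, "urban_male": 0.6}
weighted = poststratify(sample, population_shares)
print(round(weighted_mean(weighted, "y"), 3))  # → 0.6 (vs. unweighted 0.5)
```

Note that reweighting corrects the point estimate but inflates the effective sampling error, which is one way to read the +/- 5 to 7 percent figures reported for the lower-feasibility countries.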

  • The haves and the have-nots: is civic tech impacting the people who need it most? (presentation)

Read more here.


Citizen Engagement: Two Learning Opportunities

For those willing to learn more about citizen engagement, here are two opportunities worth checking out.

The first one is the World Bank’s MOOC on Citizen Engagement. Even though the course has already started it is still possible to enroll. Here’s a brief description of the course:

The 4-week course brings together a diverse range of experts to provide students with a comprehensive overview of citizen engagement. It begins by synthesizing the theories and concepts that underlie citizen engagement, and goes on to explore how citizens can be engaged in both policymaking and public service delivery. Finally, it investigates how recent innovations are shaking up the field, through detailing both successes and failures of these new approaches. Our presenters, leaders in academia, government, and civil society, provide a wide range of perspectives and real-world experience to give participants a deeper understanding of whether citizen engagement can truly enhance the process of development. Participants will also have the opportunity to collaborate with one another and design their own citizen engagement initiatives, thereby putting theories learned in the course into practice.

This week starts module 2, with Matt Leighninger, Tina Nabatchi, Beth Noveck and myself:

This week explores the role that citizens can play in actively shaping public policy. We start by examining how citizens participate, analyzing the differences between ‘thick’ and ‘thin’ forms of engagement and asking strategic questions such as who should participate, how should participants interact with decision makers, what information do participants need, and how will participation impact policy decisions. Next, we survey examples of crowdsourcing and open innovation that are helping governments and citizens better interact. Finally, we unpack why citizens participate, moving beyond the mere calculation of costs and benefits described in the rational choice model to an analysis of broader factors that influence participation.

 

govlabcapture

The second opportunity is the coaching program on Citizen Engagement by the GovLab Academy, with Beth Noveck and myself. Here’s the description of the program:

This program is designed to help those wishing to integrate citizen engagement into ongoing projects. Whether policymaking or service delivery in nature, we start from the assumption that engaging citizens is both a more effective and more legitimate way of working. Engagement may be offline as well as online, and local or widely distributed. But, in every case, teams should have a clear sense of the problem they are trying to solve, the rationale for why they believe greater openness to and collaboration with citizens can have a positive impact, and a willingness to measure impact. Convened by two practitioners/theorists of citizen engagement, the program will emphasize peer-to-peer coaching and introductions to relevant mentors and experts from around the world working on related problems or applying similar methods. Our goal? To take more citizen engagement projects from idea to implementation. Everyone is invited to apply. There will be an admissions preference for those working at the city level.

There are a number of other awesome courses provided by the GovLab: you can check all of them here.


DemocracySpot’s Most Read Posts in 2014

Glasses for reading (1936) – Nationaal Archief

(I should have posted this on the 31st, but better late than never)

Below are some of the most read posts in 2014. While I’m at it, I’ll take the opportunity to explain the reduced number of posts in the last few months. Since mid-2014 I have been working with a small team of political and data scientists on a number of research questions at the intersection of technology and citizen engagement (I presented a few preliminary findings here). Following the period of field work, data collection and experiments, we have now started the drafting and peer-review stage of our research. This has been an extremely time-consuming process, which has taken up most of my weekends, when I generally write for this blog.

Still, one of my new year’s resolutions is precisely to better discipline myself to post more regularly. And I am hopeful that the publication of our upcoming research will make up for the recent reduction in posts. We will start to disseminate our results soon, so stay tuned.

In the meantime, here’s a selection of the five most read posts in 2014.

The Problem with Theory of Change

Technology and Citizen Engagement: Friend or Foe? 

A Brilliant Story of Participation, Technology and Development Outcomes

When Citizen Engagement Saves Lives (and what we can learn from it) 

Social Accountability: What Does the Evidence Really Say?


Now the paper: Evidence of Social Accountability Initiatives


A little while ago I wrote about Jonathan Fox’s work on the evidence of social accountability initiatives. Initially in the format of a PDF slide presentation, it has now been turned into a magnificent paper, the first of the GPSA working paper series. Below is the abstract:

Policy discussion of social accountability initiatives has increasingly focused on questions about their tangible development impacts. The empirical evidence is mixed. This meta-analysis rethinks some of the most influential evaluations through a new lens: the distinction between tactical and strategic approaches to the promotion of citizen voice to contribute to improved public sector performance. Field experiments tend to study bounded, tactical interventions that rely on optimistic assumptions about the power of information alone both to motivate collective action and to influence public sector performance. More promising results emerge from studies of multi-pronged strategies that encourage enabling environments for collective action and bolster state capacity to actually respond to citizen voice. This reinterpretation of the empirical evidence leads to a proposed new series of grounded propositions that focus on state-society synergy and sandwich strategies through which ‘voice’ and ‘teeth’ can become mutually empowering.

You can download the paper here: Social Accountability: What does the Evidence Really Say [PDF]. You can also read my take on the main lessons from Jonathan’s work here. Enjoy the reading.

***

PS: I have been away for a while doing field work, but hope to start posting (more or less) regularly soon.


New Book on 25 Years of Participatory Budgeting


A little while ago I mentioned the launch of the Portuguese version of the book organized by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.

The good news is that the English version is finally out. Here’s an excerpt from the introduction:

This book represents the effort of more than forty authors, and of many other direct and indirect contributors spread across different continents, and seeks to provide an overview of Participatory Budgeting (PB) around the world. They do so from different backgrounds. Some are researchers, others are consultants, and others are activists connected to several groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we do not try to influence that.

(….)

The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, Oceania to Europe, from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.

The book can be downloaded here [PDF]. I had the pleasure of being one of the book’s contributors, co-authoring an article with Rafael Sampaio on the use of ICT in PB processes: “Electronic Participatory Budgeting: False Dilemmas and True Complexities” [PDF].

While my perception may be biased, I believe this book will be a major contribution for researchers and practitioners in the field of participatory budgeting and citizen engagement in general. Congratulations to Nelson Dias and all the others who contributed their time and energy.


10 Most Read Posts in 2013

Below is a selection of the 10 most read posts at DemocracySpot in 2013. Thanks to all of those who stopped by throughout the year, and happy 2014.

1. Does transparency lead to trust? Some evidence on the subject.

2. The Foundations of Motivation for Citizen Engagement

3. Open Government, Feedback Loops, and Semantic Extravaganza

4. Open Government and Democracy

5. What’s Wrong with e-Petitions and How to Fix them

6. Lawrence Lessig on Sortition and Citizen Participation

7. Unequal Participation: Open Government’s Unresolved Dilemma

8. The Effect of SMS on Participation: Evidence from Uganda

9. The Uncertain Relationship Between Open Data and Accountability

10. Lisbon Revisited: Notes on Participation


Rethinking Why People Participate


Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.

  1. The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
  2. The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time.  The “pilotitis” syndrome in the tech4accountability space is a good example of this.
  3. When designing and implementing participatory processes, in the face of budget constraints the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
  4. The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.

Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, which is attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and following a similar design enables the refinement of hypotheses and research questions over time, and specific time-related analysis (see an example here [PDF]). Third, when compared to the funds allocated to research in participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring to academic work only but also to the substantial resources invested by the private sector and parties towards a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than the field of citizen engagement which, in the long run, tends to drastically impact how knowledge of a subject evolves.

These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering.

This is why I was particularly interested in an article from a recently published book, The Behavioral Foundations of Public Policy (Princeton). Entitled “Rethinking Why People Vote: Voting as Dynamic Social Expression”, the article is written by Todd Rogers, Craig Fox and Alan Berger. Taking a behavioralist stance, the authors start by questioning the usefulness of the rationalist models in explaining voting behavior:

“In these [rationalist] models citizens are seen as weighing the anticipated trouble they must go through in order to cast their votes, against the likelihood that their vote will improve the outcome of an election times the magnitude of that improvement. Of course, these models are problematic because the likelihood of casting the deciding vote is often hopelessly small. In a typical state or national election, a person faces a higher probability of being struck by a car on the way to his or her polling location than of casting the deciding vote.”

(BTW, if you are a voter in certain US states, the odds of being hit by a meteorite are greater than those of casting the deciding vote).

Following on from the fact that traditional models cannot fully explain why and under which conditions citizens vote, the authors develop a framework that considers voting as a “self-expressive voting behavior that is influenced by events occurring before and after the actual moment of casting a vote.” To support their claims, throughout the article the authors build upon existing evidence from GOTV campaigns and other behavioral research. Besides providing a solid overview of the literature in the field, the authors express compelling arguments for mobilizing electoral participation. Below are a few excerpts from the article with some of the main takeaways:

  • Mode of contact: the more personal it is, the more effective it is

“Initial experimental research found that a nonpartisan face-to-face canvassing effort had a 5-8 percentage point mobilizing effect in an uncontested midterm election in 1998 (Gerber and Green 2000), compared to less than a 1 percentage point mobilizing effect for live phone calls and mailings. More than three dozen subsequent experiments have overwhelmingly supported the original finding (…)”

“Dozens of experiments have examined the effectiveness of GOTV messages delivered by the telephone. Several general findings emerge, all of which are consistent with the broad conclusion that the more personal a GOTV strategy, the more effective it is. (…) the most effective calls are conducted in an unhurried, ‘chatty manner.’”

“The least personal and the least effective GOTV communication channels entail one-way communications. (…) written pieces encouraging people to vote that are mailed directly to households have consistently been shown to produce a small, but positive, increase in turnout.”

  • Voting is affected by events before and after the decision

“One means to facilitate the performance of a socially desirable behavior is to ask people to predict whether they will perform the behavior in the future. In order to present oneself in a favorable light, or because of wishful thinking, or both, people are generally biased to answer in the affirmative. Moreover, a number of studies have found that people are more likely to follow through on a behavior after they have predicted that they will do so (…) Emerging social-networking technologies provide new opportunities for citizens to commit to each other that they will turn out in a given election. These tools facilitate making one’s commitments public, and they also allow for subsequent accountability following an election (…) Asking people to form a specific if-then plan of action, or implementation intention, reduces the cognitive costs of having to remember to pursue an action that one intends to perform. Research shows that when people articulate the how, when and where of their plan to implement an intended behavior, they are more likely to follow through.”

(Not coincidentally, as noted by Sasha Issenberg in his book The Victory Lab, during the 2010 US midterm election millions of Democrats received an email reminding them that they had “made a commitment to vote in this election” and that “the time has come to make good on that commitment. Think about when you’ll cast your vote and how you’ll get there.”)

“(…) holding a person publicly accountable for whether or not she voted may increase her tendency to do so. (…) Studies have found that when people are merely made aware that their behavior will be publicly known, they become more likely to behave in ways that are consistent with how they believe others think they should behave. (…) At one point, Italy exposed those who failed to vote by posting the names of nonvoters outside of local town halls.”

(On the accountability issue, also read this fascinating study [PDF] by Gerber, Green & Larimer)

  • Following the herd: affinitive and belonging needs

“People are strongly motivated to maintain feelings of belonging with others and to affiliate with others. (…) Other GOTV strategies that can increase turnout by serving social needs could involve encouraging people to go to their polling place in groups (i.e., a buddy system), hosting after-voting parties on election day, or encouraging people to talk about voting with their friends, to name a few.”

“(…) studies showed that the motivation to vote significantly increased when participants heard a message that emphasized high expected turnout as opposed to low expected turnout. For example, in the New Jersey study, 77% of the participants who heard the high-turnout script reported being “absolutely certain” that they would vote, compared to 71% of those who heard the low-turnout script. This research also found that moderate and infrequent voters were strongly affected by the turnout information.”

  • Voting as an expression of identity

“(…) citizens can derive value from voting through what the act displays about their identities. People are willing to go to great lengths, and pay great costs, to express that they are a particular kind of person. (…) Experimenters asked participants to complete a fifteen-minute survey that related to an election that was to occur the following week. After completing the survey, the experimenter reviewed the results and reported to participants what their responses indicated. Participants were, in fact, randomly assigned to one of two conditions. Participants in the first condition were labeled as being “above-average citizen[s] … who [are] very likely to vote,” whereas participants in the second condition were labeled as being “average citizen[s] … with an average likelihood of voting.” (…) These identity labels proved to have substantial impact on turnout, with 87% of “above average” participants voting versus 75% of “average” participants voting.”

For those working with participatory governance, the question that remains is the extent to which each of these lessons is applicable to non-electoral forms of participation. The differences between electoral and non-electoral forms of participation may cause these techniques to generate very different results. One difference relates to public awareness about participation opportunities. While it would be safe to say that during an important election the majority of citizens are aware of it, the opposite is true for most existing participatory events, where generally only a minority is aware of their existence. In this case, it is unclear whether the impact of mobilization campaigns would be more or less significant when awareness about an event is low. Furthermore, if the act of voting may be automatically linked to a sense of civic duty, would that still hold true for less typical forms of participation (e.g. signing an online petition, attending a community meeting)?

The answer to this “transferability” question is an empirical one, and one that is yet to be answered.  The good news is that while experiments that generate this kind of knowledge are normally resource intensive, the costs of experimentation are driven down when it comes to technology-mediated citizen participation. The use of A/B testing during the Obama campaign is a good example. Below is an excellent account by Dan Siroker on how they conducted online experiments during the presidential campaign.
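For readers who have not run an A/B test before, the underlying arithmetic is simple: randomly split visitors between two variants and compare their conversion rates with a two-proportion z-test. The sketch below is purely illustrative; the counts are invented, and this is not the Obama campaign’s actual tooling or data.

```python
# Minimal sketch of A/B test analysis: a two-proportion z-test on sign-up
# counts from two randomly assigned page variants (invented numbers).
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 11.5% of 10,000 visitors vs. 10% for variant A.
z, p = two_proportion_z(success_a=1000, n_a=10000, success_b=1150, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With samples this large, even a 1.5 percentage point lift is clearly detectable, which is precisely why high-traffic digital participation platforms make such cheap laboratories.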

Bringing similar experiments to other realms of digital participation is the next logical step for those working in the field. Some organizations have already started to take this seriously. The issue is whether others, including governments and donors, will do the same.