New Book on 25 Years of Participatory Budgeting


A little while ago I mentioned the launch of the Portuguese version of the book organized by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.

The good news is that the English version is finally out. Here’s an excerpt from the introduction:

This book represents the effort of more than forty authors and many other direct and indirect contributions that, spread across different continents, seek to provide an overview of Participatory Budgeting (PB) in the world. They do so from different backgrounds. Some are researchers, others are consultants, and others are activists connected to several groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we do not try to influence that.

(….)

The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, Oceania to Europe, from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.

The book can be downloaded here [PDF]. I had the pleasure of being one of the book’s contributors, co-authoring an article with Rafael Sampaio on the use of ICT in PB processes: “Electronic Participatory Budgeting: False Dilemmas and True Complexities” [PDF].

While my perception may be biased, I believe this book will be a major contribution for researchers and practitioners in the field of participatory budgeting and citizen engagement in general. Congratulations to Nelson Dias and all the others who contributed their time and energy.


The Problem with Theory of Change

picture by skreened.com

If you are working in the fields of development or governance it’s highly likely that you’ve come across the term “theory of change” (ToC). At a conference a couple of weeks ago, while answering some questions, I mentioned that I preferred not to use the term. The comment didn’t go unnoticed by some witty observers on Twitter, and I was surprised by the number of people who came to me afterwards asking why I do not “like” theory of change.

I can see why some people are attracted to the term. First, “change” is a powerful word: it even helps win elections. And when it comes to governance issues, the need for change is almost a consensus. Second, the use of the word “theory” gives scientific verve to the conversation. However, the problem is precisely the appropriateness of that use if one thinks of the word in scientific terms. It seems that people are saying “theory” when they actually mean (at best) “hypothesis”.

We don’t have to go very far to find out what scientific theory actually is. Keeping to information that is just a click away, let’s take one of the definitions reproduced in Wikipedia’s entry for “theory”:

A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not “guesses” but reliable accounts of the real world.

Or this definition of a “scientific theory”:

A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations.

And here’s a rap video on the difference between theory and hypothesis:

Granted, the word “theory” is often used as a synonym of “hypothesis”, and even dictionaries do so. But the problem with this, in the context of current usage of “theory of change”, is that it masks the difference between what we know and do not know about something, often conveying a false sense of scientific rigor. And, particularly when it comes to issues such as development and governance, it is extremely important to maintain a clear distinction between well-substantiated explanations and every other color of hypotheses, assumptions, and guesses. In fact, in any field, this is a minimal requirement for the production of knowledge.

So here’s an interesting exercise. Search the web for “theory of change” combined with terms like “accountability” and “open government,” and see for yourself which ones are really “theories of change” and which are merely “hunches of change.”

Most likely, people will keep using theory of change indiscriminately until the next flavor of the moment comes up. In the meantime, beware.

***

Also read: Open Government, Feedback Loops and Semantic Extravaganza


10 Most Read Posts in 2013

Below is a selection of the 10 most read posts at DemocracySpot in 2013. Thanks to all of those who stopped by throughout the year, and happy 2014.

1. Does transparency lead to trust? Some evidence on the subject.

2. The Foundations of Motivation for Citizen Engagement

3. Open Government, Feedback Loops, and Semantic Extravaganza

4. Open Government and Democracy

5. What’s Wrong with e-Petitions and How to Fix them

6. Lawrence Lessig on Sortition and Citizen Participation

7. Unequal Participation: Open Government’s Unresolved Dilemma

8. The Effect of SMS on Participation: Evidence from Uganda

9. The Uncertain Relationship Between Open Data and Accountability

10. Lisbon Revisited: Notes on Participation


Rethinking Why People Participate


Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.

  1. The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
  2. The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time.  The “pilotitis” syndrome in the tech4accountability space is a good example of this.
  3. When participatory processes are designed and implemented under budget constraints, the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
  4. The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.

Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, as attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and follow a similar design enables the refinement of hypotheses and research questions over time, and specific time-related analysis (see an example here [PDF]). Third, when compared to the funds allocated to research in participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring to academic work only but also to the substantial resources invested by the private sector and parties towards a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than the field of citizen engagement, a difference which, in the long run, drastically affects how knowledge of a subject evolves.

These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering.

This is why I was particularly interested in an article from a recently published book, The Behavioral Foundations of Public Policy (Princeton). Entitled “Rethinking Why People Vote: Voting as Dynamic Social Expression”, the article is written by Todd Rogers, Craig Fox and Alan Berger. Taking a behavioralist stance, the authors start by questioning the usefulness of the rationalist models in explaining voting behavior:

“In these [rationalist] models citizens are seen as weighing the anticipated trouble they must go through in order to cast their votes, against the likelihood that their vote will improve the outcome of an election times the magnitude of that improvement. Of course, these models are problematic because the likelihood of casting the deciding vote is often hopelessly small. In a typical state or national election, a person faces a higher probability of being struck by a car on the way to his or her polling location than of casting the deciding vote.”

(BTW, if you are a voter in certain US states, the odds of being hit by a meteorite are greater than those of casting the deciding vote).

Given that traditional models cannot fully explain why and under which conditions citizens vote, the authors develop a framework that considers voting as a “self-expressive voting behavior that is influenced by events occurring before and after the actual moment of casting a vote.” To support their claims, throughout the article the authors build upon existing evidence from GOTV campaigns and other behavioral research. Besides providing a solid overview of the literature in the field, the authors present compelling arguments for how to mobilize electoral participation. Below are a few excerpts from the article with some of the main takeaways:

  • Mode of contact: the more personal it is, the more effective it is

“Initial experimental research found that a nonpartisan face-to-face canvassing effort had a 5-8 percentage point mobilizing effect in an uncontested midterm election in 1998 (Gerber and Green 2000), compared to less than a 1 percentage point mobilizing effect for live phone calls and mailings. More than three dozen subsequent experiments have overwhelmingly supported the original finding (…)”

“Dozens of experiments have examined the effectiveness of GOTV messages delivered by the telephone. Several general findings emerge, all of which are consistent with the broad conclusion that the more personal a GOTV strategy, the more effective. (…) the most effective calls are conducted in an unhurried, “chatty manner.”

“The least personal and the least effective GOTV communication channels entail one-way communications. (…) written pieces encouraging people to vote that are mailed directly to households have consistently been shown to produce a small, but positive, increase in turnout.”

  • Voting is affected by events before and after the decision

“One means to facilitate the performance of a socially desirable behavior is to ask people to predict whether they will perform the behavior in the future. In order to present oneself in a favorable light or because of wishful thinking or both, people are generally biased to answer in the affirmative. Moreover, a number of studies have found that people are more likely to follow through on a behavior after they predicted that they will do so (….) Emerging social-networking technologies provide new opportunities for citizens to commit to each other that they will turn out in a given election. These tools facilitate making one’s commitments public, and they also allow for subsequent accountability following an election (…) Asking people to form a specific if-then plan of action, or implementation intention, reduces the cognitive costs of having to remember to pursue an action that one intends to perform. Research shows that when people articulate the how, when and where of their plan to implement an intended behavior, they are more likely to follow through.”

(Not coincidentally, as noted by Sasha Issenberg in his book The Victory Lab, during the 2010 US midterm elections millions of Democrats received an email reminding them that they had “made a commitment to vote in this election” and that “the time has come to make good on that commitment. Think about when you’ll cast your vote and how you’ll get there.”)

“(…) holding a person publicly accountable for whether or not she voted may increase her tendency to do so. (…) Studies have found that when people are merely made aware that their behavior will be publicly known, they become more likely to behave in ways that are consistent with how they believe others think they should behave. (…) At one point, at least, Italy exposed those who failed to vote by posting the names of nonvoters outside of local town halls.”

(On the accountability issue, also read this fascinating study [PDF] by Gerber, Green & Larimer)

  • Following the herd: affinitive and belonging needs

“People are strongly motivated to maintain feelings of belonging with others and to affiliate with others. (…) Other GOTV strategies that can increase turnout by serving social needs could involve encouraging people to go to their polling place in groups (i.e., a buddy system), hosting after-voting parties on election day, or encouraging people to talk about voting with their friends, to name a few.”

“(…) studies showed that the motivation to vote significantly increased when participants heard a message that emphasized high expected turnout as opposed to low expected turnout. For example, in the New Jersey study, 77% of the participants who heard the high-turnout script reported being “absolutely certain” that they would vote, compared to 71% of those who heard the low-turnout script. This research also found that moderate and infrequent voters were strongly affected by the turnout information.”

  • Voting as an expression of identity

“(….) citizens can derive value from voting through what the act displays about their identities. People are willing to go to great lengths, and pay great costs, to express that they are a particular kind of person. (….) Experimenters asked participants to complete a fifteen-minute survey that related to an election that was to occur the following week. After completing the survey, the experimenter reviewed the results and reported to participants what their responses indicated. Participants were, in fact, randomly assigned to one of two conditions. Participants in the first condition were labeled as being “above-average citizen[s] … who [are] very likely to vote,” whereas participants in the second condition were labeled as being “average citizen[s] … with an average likelihood of voting.” (….) These identity labels proved to have substantial impact on turnout, with 87% of “above average” participants voting versus 75% of “average” participants voting.”

For those working with participatory governance, the question that remains is the extent to which each of these lessons is applicable to non-electoral forms of participation. The differences between electoral and non-electoral forms of participation may cause these techniques to generate very different results. One difference relates to public awareness about participation opportunities. While it would be safe to say that during an important election the majority of citizens are aware of it, the opposite is true for most existing participatory events, where generally only a minority is aware of their existence. In this case, it is unclear whether the impact of mobilization campaigns would be more or less significant when awareness about an event is low. Furthermore, if the act of voting is automatically linked to a sense of civic duty, would the same hold true for less typical forms of participation (e.g. signing an online petition, attending a community meeting)?

The answer to this “transferability” question is an empirical one, and one that is yet to be answered. The good news is that while experiments that generate this kind of knowledge are normally resource-intensive, the costs of experimentation are driven down when it comes to technology-mediated citizen participation. The use of A/B testing during the Obama campaign is a good example. Below is an excellent account by Dan Siroker on how they conducted online experiments during the presidential campaign.

Bringing similar experiments to other realms of digital participation is the next logical step for those working in the field. Some organizations have already started to take this seriously. The issue is whether others, including governments and donors, will do the same.
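To give a sense of how little machinery such an experiment requires, here is a minimal, purely illustrative sketch of the arithmetic behind a typical A/B test on a (hypothetical) sign-up page for a participatory process: visitors are randomly split between two versions, and the resulting sign-up rates are compared with a standard two-proportion z-test. The figures and the helper function are invented for illustration and are not taken from the Obama campaign or any real platform.

```python
from statistics import NormalDist

def two_proportion_z_test(signups_a, visitors_a, signups_b, visitors_b):
    """Compare sign-up rates between two page variants (two-proportion z-test)."""
    p_a = signups_a / visitors_a
    p_b = signups_b / visitors_b
    pooled = (signups_a + signups_b) / (visitors_a + visitors_b)  # rate under H0: no difference
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical traffic split: 10,000 visitors per variant.
p_a, p_b, z, p_value = two_proportion_z_test(820, 10_000, 905, 10_000)
print(f"variant A: {p_a:.1%}, variant B: {p_b:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

The same logic applies whether the outcome of interest is signing a petition, registering on a platform, or attending a meeting.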


Why ‘I-Paid-A-Bribe’ Worked in India but Failed in China

source: China Daily

Interesting paper by Yuen Yuen Ang, Political Scientist at the University of Michigan:

Authoritarian states restrain online activism not only through repression and censorship, but also by indirectly weakening the ability of netizens to self-govern and constructively engage the state. I demonstrate this argument by comparing I-Paid-A-Bribe (IPAB) — a crowd-sourcing platform that collects anonymous reports of petty bribery — in India and China. Whereas IPAB originated and has thrived in India, a copycat effort in China fizzled out within months. Contrary to those who attribute China’s failed outcome to repression, I find that even before authorities shut down IPAB, the sites were already plagued by internal organizational problems that were comparatively absent in India. The study tempers expectations about the revolutionary effects of new media in mobilizing contention and checking corruption in the absence of a strong civil society.

And a brief video with Yuen Yuen Ang:

Also read

I Paid a Bribe. So What? 

Open Government and Democracy


The Effect of SMS on Participation: Evidence from Uganda


I’ve been wanting to post about this paper for a while. At the intersection of technology and citizen participation, this is probably one of the best studies produced in 2013, and I’m surprised I haven’t heard more about it outside scholarly circles.

One of the fundamental questions concerning the use of technology to foster participation is whether it impacts inclusiveness and, if it does, in what way. That is, if technology has an effect on participation, does it reinforce or minimize participation biases? There is no straightforward answer, and the limited existing evidence suggests that the impact of technology on inclusiveness depends on a number of factors such as technology fit, institutional design and communication efforts.

If the answer to the question is “it depends”, then the more studies looking at the subject, the more we refine our understanding of how it works, when and why. The study, “Does Information Technology Flatten Interest Articulation? Evidence from Uganda” (Grossman, Humphreys, & Sacramone-Lutz, 2013), is a great contribution in that sense. The abstract is below (highlights are mine):

We use a field experiment to study how the availability and cost of political communication channels affect the efforts constituents take to influence their representatives. We presented sampled constituents in Uganda with an opportunity to send a text message to their representatives at one of three randomly assigned prices. This allows us to ascertain whether ICTs can “flatten” interest articulation and how access costs determine who communicates and what gets communicated to politicians. Critically, contrary to concerns that technological innovations benefit the privileged, we find that ICT leads to significant flattening: a greater share of marginalized populations use this channel compared to existing political communication channels. Price matters too, as free messaging increases uptake by about 50%. Surprisingly, subsidy-induced increases in uptake do not yield further flattening, since free channels are used at higher rates by both marginalized and well-connected constituents. More subtle strategic hypotheses find little support in the data.
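To make the design concrete, here is a minimal, purely illustrative sketch of the two comparisons at the heart of the abstract: uptake of the SMS channel by randomized price condition, and the “flattening” question of what share of users comes from marginalized groups. The data frame and its column names are invented for illustration and do not come from the study’s data.

```python
import pandas as pd

# Invented example data: one row per sampled constituent.
# price_condition was randomly assigned; marginalized is a survey-based indicator.
df = pd.DataFrame({
    "price_condition": ["free", "free", "half", "full", "half", "full", "free", "half"],
    "marginalized":    [True,   False,  True,   False,  False,  True,   True,   False],
    "sent_message":    [True,   True,   False,  False,  True,   False,  True,   False],
})

# Uptake: share of constituents who sent a message, by price condition.
uptake = df.groupby("price_condition")["sent_message"].mean()

# "Flattening": among those who used the channel, what share is marginalized?
senders = df[df["sent_message"]]
share_marginalized = senders.groupby("price_condition")["marginalized"].mean()

print(uptake)
print(share_marginalized)
```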

But even if the question of “who participates” is answered in this paper, one is left wondering “to what effect?”. Fortunately, the authors mention in a footnote that they are collecting data for a companion paper in which they focus on the behavior of MPs, which will hopefully address this issue. Looking forward to reading that one as well.

***

Also read

Mobile phones and SMS: some data on inclusiveness 

Unequal Participation: Open Government’s Unresolved Dilemma

Mobile Connectivity in Africa: Increasing the Likelihood of Violence?


Open Data and Citizen Engagement – Disentangling the Relationship

[This is a cross-post from Sunlight Foundation's series OpenGov Conversations, an ongoing discourse featuring contributions from transparency and accountability researchers and practitioners around the world.]

As asserted by Jeremy Bentham nearly two centuries ago, “[I]n the same proportion as it is desirable for the governed to know the conduct of their governors, is it also important for the governors to know the real wishes of the governed.” Although Bentham’s historical call may come across as obvious to some, it highlights one of the major shortcomings of the current open government movement: while a strong focus is given to mechanisms to let the governed know the conduct of their governors (i.e. transparency), less attention is given to the means by which the governed can express their wishes (i.e. citizen engagement).

But striking a balance between transparency and participation is particularly important if transparency is conceived as a means for accountability. To clarify, let us consider the role transparency (and data) plays in a simplified accountability cycle. Like any accountability mechanism built on disclosure principles, it requires a minimal chain of events that can be summarized in the following manner: (1) Data is published; (2) The data published reaches its intended public; (3) Members of the public are able to process the data and react to it; and (4) Public officials respond to the public’s reaction or are sanctioned by the public through institutional means. This simplified path toward accountability highlights the limits of the disclosure of information. Even in the most simplified model of accountability, while essential, the disclosure of data accounts for no more than one-fourth of the accountability process. [Note 1 - see below]

But what are the conditions required to close the accountability cycle? First, once the data is disclosed (1), in order for it to reach its intended public (2), a minimal condition is the presence of info-mediators that can process open data in a minimally enabling environment (e.g. free and pluralistic media). Assuming these factors are present, we are still only halfway towards accountability. And the remaining steps (3 and 4) cannot be achieved in the absence of citizen engagement, notably electoral and participatory processes.

 

Beyond Elections

 

With regard to elections as a means for accountability, citizens may periodically choose to reward or sanction elected officials based on the information that they have received and processed. While this may seem a minor requisite for developed democracies like the US, the problem gains importance for a number of countries where open data platforms have launched but where elections are still a work in progress (in such cases, some research suggests that transparency may even backfire).

But, even if elections are in place, alone they might not suffice. The Brazilian case is illustrative and highlights the limits of representative systems as a means to create a sustained interface between governments and citizens. Despite two decades of electoral democracy and unprecedented economic prosperity in the country, citizens suddenly went to the streets to demand an end to corruption, improvement in public services and… increased participation. Politicians themselves came to the quick realization that elections are not enough, as recently underlined by former Brazilian President Lula in an op-ed in the New York Times: “(….) people do not simply wish to vote every four years. They want daily interaction with governments both local and national, and to take part in defining public policies, offering opinions on the decisions that affect them each day.” If transparency and electoral democracy are not enough, citizen engagement remains the missing link for open and inclusive governments.

 

Open Data And Citizen Engagement

 

Within an ecosystem that combines transparency and participation, examining the relationship between the two becomes essential. More specifically, a clearer understanding of the interaction between open data and participatory institutions remains a frontier to be explored. In the following paragraphs I put forward two issues, of many, that I believe should be considered when examining this interaction.

I) Behavior and causal chains

Evan Lieberman and his colleagues conducted an experiment in Kenya that provided parents with information about their children’s schools and how to improve their children’s learning. However, to the disillusionment of many, despite these efforts to provide parents with access to information, the intervention had no impact on parents’ behavior. Following this rather disappointing finding, the authors proceeded to articulate a causal chain that explores the link between access to information and behavioral change.

The Information-Citizen Action Causal Chain (Lieberman et al. 2013)

 

While the model put forward by the authors is not perfect, it is a great starting point and it does call attention to the dire need for a clear understanding of the ensemble of mechanisms and factors acting between access to data and citizen action.

II) Embeddedness in participatory arrangements

Another issue that might be worth examining relates to the extent to which open data is purposefully connected to participatory institutions or not. In this respect, much like the notion of targeted transparency, a possible hypothesis would be that open data is fully effective for accountability purposes only when the information produced becomes “embedded” in participatory processes. [Note 2 - see below]

This notion of “embeddedness” would call for hard thinking on how different participatory processes can most benefit from open data and its applications (e.g. visualizations, analysis). For example, the use of open data to inform a referendum process is potentially very different from its use within a participatory budgeting process. Stemming from this reasoning, open data efforts should be increasingly customized to different existing participatory processes, hence increasing their embeddedness in these processes. This would be the case, for instance, when budget data visualization solutions are tailored to inform participatory budgeting meetings, thus creating a clear link between the consumption of that data and the decision-making process that follows.
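As a purely illustrative sketch of what such embedding could look like in practice, the snippet below aggregates a small, made-up open budget extract by district and sector and writes one simple handout file per district for a participatory budgeting meeting. The columns and figures are assumptions, not a reference to any real portal or dataset.

```python
import pandas as pd

# Made-up open budget extract; in practice this would be read from an open data portal.
budget = pd.DataFrame({
    "district":  ["North", "North", "South", "South"],
    "sector":    ["health", "roads", "health", "roads"],
    "allocated": [120_000, 80_000, 95_000, 60_000],
    "executed":  [ 90_000, 75_000, 40_000, 58_000],
})

# Summarize allocation and execution by district and sector.
summary = budget.groupby(["district", "sector"], as_index=False)[["allocated", "executed"]].sum()
summary["execution_rate"] = summary["executed"] / summary["allocated"]

# One simple handout file per district, to be discussed at the PB meeting.
for district, table in summary.groupby("district"):
    table.to_csv(f"handout_{district}.csv", index=False)
```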

Granted, information is per se an essential component of good participatory processes, and one can take a more or less intuitive view on which types of information are more suitable for one process or another. However, a more refined knowledge of how to maximize the impact of data in participatory processes is far from achieved and much more work is needed.

 

R&D For Data-Driven Participation

 

Coming up with clear hypotheses and testing them is essential if we are to move forward with the ecosystem that brings together open data, participation and accountability. Surely, many organizations working in the open government space are operating with limited resources, squeezing their budgets to keep their operational work going. In this sense, conducting experiments to test hypotheses may appear as a luxury that very few can afford.

Nevertheless, one of the opportunities provided by the use of technologies for civic behavior is that of potentially driving down the costs of experimentation. For instance, online and mobile experiments could play the role of tech-enabled (and affordable) randomized controlled trials, improving our understanding of how open data can be best used to spur collective action. Thinking of the ways in which technology can be used to conduct lower-cost experiments that shed light on behavioral and causal chains is still limited to a small number of people and organizations, and much work is needed on that front.
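As a minimal sketch of what such a low-cost, tech-enabled trial could involve, the snippet below shows a common building block: deterministic, hash-based random assignment of platform users to a treatment arm (say, shown a spending-data visualization alongside an invitation to participate) and a control arm (invitation only). The experiment name, user ids and outcome are hypothetical.

```python
import hashlib

def assign_arm(user_id: str, experiment: str = "open-data-prompt") -> str:
    """Hash-based assignment: the same user always gets the same arm, roughly a 50/50 split."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

# Treatment users see a visualization of local spending data next to the invitation;
# control users see the invitation alone. Outcomes (e.g. meeting attendance) are
# then compared across arms.
for user_id in ["u001", "u002", "u003", "u004"]:
    print(user_id, assign_arm(user_id))
```

Because assignment depends only on the user id, the split is reproducible, and outcomes can later be compared across arms with the same kind of comparison of proportions used in GOTV experiments.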

Yet, it is also important to acknowledge that experiments are not the only source of relevant knowledge. To stick with a simple example, in some cases even an online survey trying to figure out who is accessing data, what data they use, and how they use it may provide us with valuable knowledge about the interaction between open data and citizen action. In any case, however, it may be important that the actors working in that space agree upon a minimal framework that facilitates comparison and incremental learning: the field of technology for accountability desperately needs a more coordinated research agenda.

Citizen Data Platforms?

As more and more players engage in participatory initiatives, there is a significant amount of citizen-generated data being collected, which is important on its own. However, in a similar vein to government data, the potential of citizen data may be further unlocked if openly available to third parties who can learn from it and build upon it. In this respect, it might not be long before we realize the need to have adequate structures and platforms to host this wealth of data that – hopefully – will be increasingly generated around the world. This would entail that not only governments open up their data related to citizen engagement initiatives, but also that other actors working in that field – such as donors and NGOs – do the same. Such structures would also be the means by which lessons generated by experiments and other approaches are widely shared, bringing cumulative knowledge to the field.

However, as we think of future scenarios, we should not lose sight of current challenges and knowledge gaps when it comes to the relationship between citizen engagement and open data. Better disentangling the relationship between the two is the most immediate priority, and a long overdue topic in the open government conversation.

 

Notes

 

Note 1: This section of this post is based on arguments previously developed in the article, “The Uncertain Relationship between Open Data and Accountability”.

Note 2: And some evidence seems to confirm that hypothesis. For instance, in a field experiment in Kenya, villagers only responded to information about local spending in development projects when that information was coupled with specific guidance on how to participate in local decision-making processes.

 

 


37 Papers on Transparency

HEC Paris has just hosted the 3rd Global Conference on Transparency Research, and they have made the list of accepted papers available. Judging from the number and quality of papers from this year’s and last year’s conference in the Netherlands, it seems that, despite its short history, the conference is likely to become the place for transparency research (to further establish itself as the global reference in that domain, maybe the conference organizers could consider holding a 4th conference in a developing country).

As one goes through the papers, it is clear that, unlike in most of the open government space, when it comes to research transparency is treated less as a matter of technology and formats and more as a matter of social and political institutions. And that is a good thing.

This year’s papers are listed below:

***

Also read:

The Uncertain Relationship Between Open Data and Accountability

Does Transparency Lead to Trust? Some Evidence on the Subject


Organ Donation: the Facebook Effect

I just came across a fascinating paper published last June in the Journal of Transplantation, “Social Media and Organ Donation: the Facebook Effect”. Unfortunately I could not find an ungated version of the paper, but the abstract is below:

Despite countless media campaigns, organ donation rates in the United States have remained static while need has risen dramatically. New efforts to increase organ donation through public education are necessary to address the waiting list of over 100,000 patients. On May 1, 2012, the online social network, Facebook, altered its platform to allow members to specify “Organ Donor” as part of their profile. Upon such choice, members were offered a link to their state registry to complete an official designation, and their “friends” in the network were made aware of the new status as a donor. Educational links regarding donation were offered to those considering the new organ donor status. On the first day of the Facebook organ donor initiative, there were 13 054 new online registrations, representing a 21.1-fold increase over the baseline average of 616 registrations. This first-day effect ranged from 6.9× (Michigan) to 108.9× (Georgia). Registration rates remained elevated in the following 12 days. During the same time period, no increase was seen in registrations from the DMV. Novel applications of social media may prove effective in increasing organ donation rates and likewise might be utilized in other refractory public health problems in which communication and education are essential.

The concept, as reported on the Johns Hopkins University website, was developed by two long-time friends, Facebook’s COO Sheryl Sandberg and JHU transplant surgeon Andrew Cameron:

When Harvard University friends Sheryl Sandberg and Andrew M. Cameron, M.D., Ph.D., met up at their 20th college reunion last spring, they got to talking. Sandberg knew that Cameron, a transplant surgeon at Johns Hopkins, was passionate about solving the perennial problem of transplantation: the critical shortage of donated organs in the United States. And he knew that Sandberg, as chief operating officer of Facebook, had a way of easily reaching hundreds of millions of people.

The findings of the study are fascinating and a reminder of the variety of ways in which social media, and particularly Facebook, can be used towards the public good. But when it comes to the issue of citizen engagement, I have reservations about seeing Facebook as a virtual public sphere. Rather than a public square, Facebook resembles the food court of a shopping mall: while it is a social space, it is still a private one, and it is still about business. But despite that fact, there are lots of amazing things that can be done, and we are just scratching the surface. Some of my thoughts on this are in a recent article at TechCrunch.

***

Like this? Also read about the foundations of motivation in the age of social media.