Comparative Effectiveness Research for democracy?

In health, we’ve seen an influential and valuable shift to Comparative Effectiveness Research (CER): measuring which of the available drugs or other interventions works best for specific purposes, in specific circumstances. Why not do the same for democracy? Why not test which approaches to strengthening democracy work best?

My colleagues and I played a leading role in developing the “Six Promising Practices” for civic education. These are really pedagogies, such as discussing current, controversial issues in classrooms or encouraging youth-led voluntary groups in schools. Since then, we have been recommending even more pedagogies, such as Action Civics, news media literacy, and school climate reform. I am often asked which of these practices or combinations of practices works best for various populations, in various contexts, for various outcomes. This question has not really been studied. There is no CER for civics.

Likewise, in 2005, John Gastil and I published The Deliberative Democracy Handbook. Each chapter describes a different model for deliberative forums or processes in communities. The processes vary in whether participants are randomly selected or not, whether they meet face-to-face or online, whether the discussions are small or large, etc. Again, I am asked which deliberative model works best for various populations, in various contexts, for various outcomes. There is some relevant research, but no large research enterprise devoted to finding out which deliberative formats work best.

Some other fields of democratic practice have benefitted from comparative research. In the 2000s, The Pew Charitable Trusts funded a large body of randomized experiments to explore which methods of campaign outreach were most cost-effective for turning out young people to vote. Don Green (now at Columbia) was an intellectual force behind this work: one motivation for him was to make political science a more experimental discipline. CIRCLE was involved; we organized some of the studies and published this guide to disseminate the findings. Our goal was to increase the impact of youth on politics.

Our National Study of Learning, Voting, and Engagement (NSLVE) is a database of voting records for 9,784,931 students at 1,023 colleges and universities. With an “n” that large, it’s possible to model the outcome (voter turnout) as a function of a set of inputs and investigate which ones work best. That is a technique for estimating the results that would arise from a whole body of experiments. We also provide each participating campus with a customized report about its own students that can provide the data for the institution to conduct its own experiments.
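To make the idea concrete, here is a rough sketch of that kind of modeling. Everything below is simulated: the input variables ("registration drives," etc.) are invented for illustration and do not reflect NSLVE's actual data or methodology.

```python
# Illustrative only: simulate student-level data where binary campus
# "inputs" nudge the probability of voting, then estimate each input's
# association with turnout by comparing group means. With a real n in
# the millions, as in NSLVE, such estimates become quite precise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated students (NSLVE's real n is ~9.8 million)

# Three hypothetical binary inputs per student.
X = rng.integers(0, 2, size=(n, 3))

# Simulated turnout: each input shifts the log-odds of voting.
logit = -0.5 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
p = 1 / (1 + np.exp(-logit))
voted = rng.random(n) < p

# Compare turnout with vs. without each input -- a crude stand-in
# for the regression modeling described above.
for j, name in enumerate(["registration drives", "candidate forums", "civic coursework"]):
    effect = voted[X[:, j] == 1].mean() - voted[X[:, j] == 0].mean()
    print(f"{name}: {effect:+.1%} turnout difference")
```

With a sample this size, the estimated differences recover the ordering of the true effects; the point is only that a large observational database lets you approximate what a body of experiments would show.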

So why do some fields of democratic practice prompt research into what works, while others don’t?

A major issue is practical. The experiments on voter turnout and our NSLVE college study have the advantage that the government already tallies the votes. Given a hard outcome that is already measured at the scale of millions, it’s possible to vary inputs and learn a great deal about what works.

To be sure, people and community contexts are heterogeneous, and voter outreach can vary in many respects at once (mode, messenger, message, purpose). Thus a large body of experiments was necessary to produce insights about turnout methods. However, we learned that grassroots mobilization is cost-effective, that the message usually matters less than the mode, and that interactive contacts are more efficient than one-way outreach. We believe that these findings influenced campaigns, including the Obama ’08 primary campaign, to invest more in youth outreach.

Similarly, colleges vary in their populations, settings, resources, missions, and structures, but NSLVE is yielding general lessons about what tends to work to engage students in politics.

Other kinds of outcomes may be harder to measure and yet can still be measured at scale. For example, whether kids know geometry is hard to measure–it can’t be captured by a single test question–but society invests in designing reliable geometry tests that yield an aggregate score for each child. So one could conduct Comparative Effectiveness Research on math education. The fact that mastering geometry is a subtler and more complex outcome than voting does not preclude this approach.

But it does take a social investment to collect lots of geometry test data. For years, I have served on the committee that designs the National Assessment of Educational Progress (NAEP) in civics. NAEP scores are valuable measures of certain kinds of civic knowledge–and teaching civics is a democratic practice. But the NAEP civics assessment doesn’t receive enough funding from the federal government to have samples that are reliable at the state or local level, nor is it conducted annually. This is a case where the tool exists, but the investment would have to be much larger to permit really satisfactory CER. It is not self-evident that the best way to spend limited resources would be to collect sufficient data for this purpose.

Other kinds of outcomes–such as the quality of discourse in a community–may be even more expensive and difficult to measure at scale. You can conduct concrete experiments in which you randomly vary the inputs and then directly measure the outcomes by surveying the participants. But you can only vary one (or a few) factors at a time in a controlled experiment. That means that a large and expensive body of research is required to yield general findings about what works, in which contexts, for whom.

The good news is that studying which discrete, controllable factors affect outcomes is only one way to use research to improve practice. It is a useful approach, but it is hardly sufficient, and sometimes it is not realistic. After all, outcomes are also deeply affected by:

  • The motivations, commitment, and incentives of the organizers and the participants;
  • How surrounding institutions and communities treat the intervention;
  • Human capital (who is involved and how well they are prepared);
  • Social capital (how the various participants relate to each other); and
  • Cultural norms, meanings, and expectations.

These factors are not as amenable to randomized studies or other forms of CER. But they can be addressed. We can work to motivate, prepare, and connect people, to build support from outside, and to adjust norms. Research can help. It just isn’t research that resembles CER.

Democratic practices are not like pills that can be proven to work better than alternatives, mass produced, and then prescribed under specified conditions. Even in medicine, motivations and contexts matter, but those factors are even more important for human interactions. It’s worth trying to vary aspects of an intervention to see how such differences affect the results. I’m grateful to have been involved in ambitious projects of that nature. But whether to invest in CER is a judgment call that depends on practical issues like the availability of free data. Research on specific interventions is never sufficient, and sometimes it isn’t the best use of resources.

Expert Panel Review of Environmental Assessment Process

In August 2016, the Minister of Environment and Climate Change established an Expert Panel to review federal environmental assessment processes. The Expert Panel travelled across Canada to gain a better understanding of the issues that are important to Canadians.


Text as Data Conference

At the end of last week, I had the pleasure of attending the eighth annual conference on New Directions in Analyzing Text as Data, hosted by Princeton University and organized by Will Lowe, John Londregan, Marc Ratkovic, and Brandon Stewart.

The conference had a truly excellent program, and was packed with great content on a wide variety of text analysis challenges.

There were a number of papers on topic modeling, including work from my colleague Ryan Gallagher on Anchored correlation explanation: Topic modeling with minimal domain knowledge – a really cool, information-theoretic approach to topic modeling.

Luke Miratrix also presented joint work with Angela Fan and Finale Doshi-Velez on Prior matters: simple and general methods for evaluating and improving topic quality in topic modeling, an approach which aims to improve upon standard LDA by using priors to promote informative words.

I also really enjoyed Hanna Wallach’s presentation on A network model for dynamic textual communications with application to government email corpora, which introduces the Interaction-Partition Topic Model (IPTM), which combines elements of LDA with ERGMs.

There were also a number of talks reflecting and improving upon the ways in which we approach the methodological challenges of textual data.

Laura Nelson argued for a process of computational grounded theory, in which textual analysis helps guide and direct deep reading, but in which the researcher stays intimately familiar with her corpus.

Justin Grimmer presented the great paper, How to make causal inferences using texts, which presents a conceptual framework for making causal inference using text.

For my own research, Will Hobbs might get the prize for method I’d most like to use, with his paper on Latent dimensions of attitudes on the Affordable Care Act: An application of stabilized text scaling to open-ended survey responses. He presents a very clever method for scaling common and uncommon words in order to extract latent dimensions from short text. It’s really cool.

And, of course, Nick Beauchamp presented work done jointly with me and Peter Levine on mapping conceptual networks. In this work, we present and validate a model for measuring the conceptual network an individual uses when reasoning. In these networks, nodes are concepts and edges represent the connections between those concepts. More on this in future posts, I’m sure.
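As a toy illustration of what such a network looks like (a deliberate simplification, not the validated model itself), one can treat concepts that appear in the same sentence as connected, with edge weights counting co-occurrences:

```python
# Toy conceptual network: concepts are nodes; two concepts share an
# edge when they co-occur in a sentence. Sentences and concept list
# are invented for illustration.
from collections import Counter
from itertools import combinations

concepts = {"voting", "community", "trust", "participation"}

text = [
    "voting builds trust in the community",
    "participation in the community strengthens trust",
    "voting is one form of participation",
]

edges = Counter()
for sentence in text:
    present = sorted(concepts & set(sentence.split()))
    for a, b in combinations(present, 2):
        edges[(a, b)] += 1  # edge weight = co-occurrence count

for (a, b), w in sorted(edges.items()):
    print(f"{a} -- {b} (weight {w})")
```

The actual model infers these connections from how an individual reasons in text rather than from raw co-occurrence, but the resulting object is the same kind of weighted concept graph.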

Finally, the session titles were the absolute best. See, for example:

  • How Does This Open-Ended Question Make You Feel?
  • Fake Pews! (a session on religiosity)
  • America’s Next Top(ic) Model
  • Fwd: Fw: RE: You have to see this paper!

Well played, well played.

Many thanks to all the conference organizers for a truly engaging and informative couple of days.

Humanizing Technology

We wanted to share this article by NCDD member Kaliya Young about the opportunity the internet and technology provide for deepening connections and building strong communities. She highlights the need for investment in social and emotional technologies, and names NCDD as well as Member Tom Atlee and others as critical to this work. This is a good piece about the interaction of social, emotional, and technical “technologies” and what it will take for online technologies to enhance our connections to one another and our communities. We welcome you to share your thoughts in the comments below. You can read Kaliya’s post below or find the original post from opendemocracy.net over here.


Humanizing technology

Can we use the internet to enhance deep human connection and support the emergence of thriving communities in which everyone’s needs are met and people’s lives are filled with joy and meaning?

That’s a very challenging question, and the answer isn’t just about technology, at least not in the conventional sense of that word. It’s not about any of the emerging trends that are already impacting our societies, like bitcoin, drones, virtual reality, augmented reality, hyperloops, or any of the things that the Singularity University thinks will converge.

It’s not just a matter of finding new technologies either, even if they are more user-centric or built on self-sovereign digital identities in place of corporate ownership and control—the field that forms my own techno-specialty. And the solutions can’t be driven by a government need to find a military advantage—which is the case for a vast range of everyday innovations today—as Manuel DeLanda outlines in his book, War in the Age of Intelligent Machines.

Our work on ‘technical’ technologies won’t generate broad human gains unless we invest an equal amount of time, energy and resources in the development of social and emotional technologies that drive how our whole society is organized and how we work together. I think we are actually on the cusp of having the tools, understanding and infrastructure to make that happen, without all our ideas and organizing being intermediated by giant corporations. But what does that mean in practice?

I think two things are absolutely vital.

First of all, how do we connect all the people and all the groups that want to align their goals in pursuit of social justice, deep democracy, and the development of new economies that share wealth and protect the environment? How are people supported to protect their own autonomy while also working with multiple other groups in processes of joint work and collective action?

One key element of the answer to that question is to generate a digital identity that is not under the control of a corporation, an organization or a government.

I have been co-leading the community surrounding the Internet Identity Workshop for the last 12 years. After many explorations of the techno-possibility landscape, we have finally made some breakthroughs that will lay the foundations of a real internet-scale infrastructure to support what are called ‘user-centric’ or ‘self-sovereign’ identities.

This infrastructure consists of a network with two different types of nodes—people and organizations—with each individual being able to join lots of different groups. But regardless of how many groups they join, people will need a digital identity that is not owned by Twitter, Amazon, Apple, Google or Facebook. That’s the only way they will be able to control their own autonomous interactions on the internet. If open standards are not created for this critical piece of infrastructure then we will end up in a future where giant corporations control all of our identities. In many ways we are in this future now.

This is where something called ‘Shared Ledger Technology’ or SLT comes in—more commonly known as ‘blockchain’ or ‘distributed ledger technology.’ SLT represents a huge innovation in terms of databases that can be read by anyone and which are highly resistant to tampering—meaning that data cannot be erased or changed once entered. At the moment there’s a lot of work going on to design the encryption key management that’s necessary to support the creation and operation of these unique private channels of connection and communication between individuals and organizations. The Sovrin Foundation has built an SLT specifically for digital identity key management, and has donated the code required to the HyperLedger Foundation under ‘project Indy.’
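The tamper resistance described here comes from hash-chaining: each entry's hash incorporates the previous entry's hash, so altering any past record invalidates everything after it. A bare-bones sketch of that principle (illustrative only; not the Sovrin or Indy design, which adds consensus, permissions, and key management on top):

```python
# Minimal hash-chain sketch: each entry's hash covers the previous
# hash plus the entry's payload, so editing a past record breaks
# verification of the whole chain from that point on.
import hashlib

GENESIS = "0" * 64  # placeholder hash before the first entry

def entry_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_ledger(payloads):
    ledger, prev = [], GENESIS
    for p in payloads:
        prev = entry_hash(prev, p)
        ledger.append((p, prev))
    return ledger

def verify(ledger):
    prev = GENESIS
    for payload, h in ledger:
        if entry_hash(prev, payload) != h:
            return False
        prev = h
    return True

ledger = build_ledger(["alice registers key A", "bob registers key B"])
print(verify(ledger))   # intact chain verifies: True

# Rewrite the first record while keeping its stored hash.
ledger[0] = ("mallory registers key M", ledger[0][1])
print(verify(ledger))   # tampering is detected: False
```

Real shared ledgers distribute copies of this chain across many nodes, so a tamperer would also have to out-vote the network, not just recompute hashes.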

While this critical infrastructure is being birthed we need to think about how to leverage it for the world that we want to create—a world of interconnected humanness in place of centralized social networks controlled by profit-driven and publicly traded companies whose mission is to manipulate us into buying more stuff. These networks are selling access to us and limiting our ability to connect and organize independently. They have deals with companies like Cambridge Analytica and Palantir to suck up the digital exhaust of our lives, spy on us, and collectively manipulate us for their own ends.

As the basis of this next generation of user-centric or self-sovereign identities, Shared Ledger Technology is crucial if corporate control of the internet and our lives is to be reversed, but this won’t be enough to humanize technology, and that’s my second key point: social and emotional ‘technologies’ are also vital.

Social technologies are the tools we use to interact with each other in groups of any size, from the Parent Teachers Association (PTA) and other neighborhood organizations to national governments and international bodies. They are increasingly important in the shift that is taking place from an exclusive reliance on representative political processes and institutions to an expanded range of deeper and more deliberative forms of democracy. The social technology of voting for representatives was a breakthrough 300 years ago, but these systems are breaking down and are not serving us well enough today.

Emotional technologies are the tools we use to interact with ourselves internally and in our relationships with other people. They are more critical than ever because the mental health of everyone is now a key concern—since one lone individual can inflict enormous harm through high-tech weapons or by hacking into our core infrastructures. Such technologies are well known and include mediation and meditation practices of different kinds, yoga and mindfulness, Nonviolent Communication, Co-Counseling, and 12 Step processes like Alcoholics Anonymous.

Social technologies work a lot better if people have a range of these emotional tools and practices to draw on, because they are better able to manage themselves and interact with others. We want security and have been putting billions of dollars into the security-surveillance-industrial complex post 9/11, but what about the deeper issue of how we connect to each other and solve problems together? What are we doing to address everyone’s mental and emotional wellbeing to reduce alienation and disconnection?

How do you get people on vastly different sides of controversial issues to collaborate to solve what seem to be intractable problems? How do you structure inclusive deliberations that involve whole communities and build up social capital and connection? Individuals like Miki Kashtan, Tom Atlee, and Sharif Abdulah and groups like the National Coalition for Dialogue and Deliberation have been working on these questions for many years but deserve much more investment and support. Without further innovations in these social and emotional technologies, no ‘technical’ technologies will save us.

To take a concrete example, my ‘sweet spot’ is in designing and facilitating interactive meetings for professional, scientific and technical communities in what are called ‘unconferences.’ I’ve been co-leading one of these unconferences—the Internet Identity Workshop—twice a year for over a decade, during which we’ve developed many innovations built on nurturing the emotional capacities of the people involved and the social processes we’ve been using at our meetings.

They are organized primarily through Open Space Technology, where the agenda is co-created live on the day of the event with all the participants. We throw in an hour of demonstrations on the second day after lunch and we eat dinner together every night. The patterns described in the Group Works Deck have been particularly useful—things like ‘Embracing Dissonance and Difference’ (meaning that anyone is welcome in the conversation); and opening and closing every day in a circle while diverging into as many as 15 different sessions every hour during the rest of the time we spend together—what in Open Space terms is called the rhythm of ‘Convergence and Divergence.’ Taken together these processes have been very successful in building a stronger Group Culture.

I got excited by the possibilities of user-centric identity technologies over 15 years ago while part of the Planetwork Community, which came together to look at global ecology and information technology and think through how planetary challenges could be addressed more effectively. But through the process of co-leading efforts to build that infrastructure it became clear that we must also invest in the social and emotional technologies that make it possible to collaborate and work together at all scales.

All three forms of technology are essential to the transformation of our relationships to each other and our bigger social/societal systems. Technical technologies provide the tools that can empower individuals to connect and work together for their own wellbeing and that of their communities. Social technologies enable these tools to be used effectively and inclusively in processes of collective action. And emotional technologies support everyone’s mental health as a precondition for engaging in these processes with more chance of success.

To put it simply, technical technologies are easier to turn in the direction of democracy and social justice if they are developed and applied with social and emotional intelligence. Combining all three together is the key to using technology for liberating ends.

You can find the original version of this article written by Kaliya Young on the Transformation blog at www.opendemocracy.net/transformation/kaliya-identity-woman/humanizing-technology.

Priceless? Estimating the cost of open government reforms


Results for Development has just published “Priceless?”, a report commissioned by the World Bank that develops a new, practical framework for estimating the full (economic) costs of open government programs (full disclosure: I have been minimally involved with the project at some of its stages). The framework is used to conduct a cost analysis of two cases: i) the well-known Ukrainian ProZorro e-procurement program, and ii) Sierra Leone’s Open Data Program. The final estimated costs might surprise some, but I will avoid spoilers at this stage.

The report is authored by the brilliant Praneetha Vissapragada and Naomi Joswiak and edited by none other than Nathaniel Heller and Courtney Tolmie. While much has been written about the potential benefits of open government reforms, far less has been written about their costs. This new publication is a welcome addition toward addressing that gap, and will be a valuable resource for policymakers, practitioners, and advocates working in the open government space.