Comparative Effectiveness Research for democracy?

In health, we’ve seen an influential and valuable shift to Comparative Effectiveness Research (CER): measuring which of the available drugs or other interventions works best for specific purposes, in specific circumstances. Why not do the same for democracy? Why not test which approaches to strengthening democracy work best?

My colleagues and I played a leading role in developing the “Six Promising Practices” for civic education. These are really pedagogies, such as discussing current, controversial issues in classrooms or encouraging youth-led voluntary groups in schools. Since then, we have been recommending even more pedagogies, such as Action Civics, news media literacy, and school climate reform. I am often asked which of these practices or combinations of practices works best for various populations, in various contexts, for various outcomes. This question has not really been studied. There is no CER for civics.

Likewise, in 2005, John Gastil and I published The Deliberative Democracy Handbook. Each chapter describes a different model for deliberative forums or processes in communities. The processes vary in whether participants are randomly selected or not, whether they meet face-to-face or online, whether the discussions are small or large, etc. Again, I am asked which deliberative model works best for various populations, in various contexts, for various outcomes. There is some relevant research, but no large research enterprise devoted to finding out which deliberative formats work best.

Some other fields of democratic practice have benefitted from comparative research. In the 2000s, The Pew Charitable Trusts funded a large body of randomized experiments to explore which methods of campaign outreach were most cost-effective for turning out young people to vote. Don Green (now at Columbia) was an intellectual force behind this work: one motivation for him was to make political science a more experimental discipline. CIRCLE was involved; we organized some of the studies and published this guide to disseminate the findings. Our goal was to increase the impact of youth on politics.

Our National Study of Learning, Voting, and Engagement (NSLVE) is a database of voting records for 9,784,931 students at 1,023 colleges and universities. With an “n” that large, it’s possible to model the outcome (voter turnout) as a function of a set of inputs and investigate which ones work best. That is a technique for estimating the results that would arise from a whole body of experiments. We also provide each participating campus with a customized report about its own students that can provide the data for the institution to conduct its own experiments.
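
To make the idea concrete, here is a minimal sketch (in Python, on simulated data) of modeling a binary turnout outcome as a function of a few inputs with logistic regression. The variable names, data, and coefficients are hypothetical illustrations, not actual NSLVE fields or findings.

```python
# Illustrative sketch only: model a binary "voted" outcome as a function of inputs.
# All fields and data are simulated stand-ins, not real NSLVE variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000  # stand-in for a large student-level sample

df = pd.DataFrame({
    "course_based_civics": rng.integers(0, 2, n),   # hypothetical input 1
    "campus_voter_drive":  rng.integers(0, 2, n),   # hypothetical input 2
    "age":                 rng.integers(18, 25, n),
})

# Simulate turnout so the example runs end to end; real work would use recorded votes.
logit = (-1.0 + 0.4 * df["course_based_civics"]
         + 0.6 * df["campus_voter_drive"]
         + 0.05 * (df["age"] - 18))
df["voted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Estimate how each input relates to the probability of voting.
model = smf.logit("voted ~ course_based_civics + campus_voter_drive + age", data=df).fit()
print(model.summary())
```

Because the coefficients come from observational variation across many students and campuses, this kind of model approximates, rather than replaces, a whole body of experiments.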

So why do some fields of democratic practice prompt research into what works, while others don’t?

A major issue is practical. The experiments on voter turnout and our NSLVE college study have the advantage that the government already tallies the votes. Given a hard outcome that is already measured at the scale of millions, it’s possible to vary inputs and learn a great deal about what works.

To be sure, people and community contexts are heterogeneous, and voter outreach can vary in many respects at once (mode, messenger, message, purpose). Thus a large body of experiments was necessary to produce insights about turnout methods. However, we learned that grassroots mobilization is cost-effective, that the message usually matters less than the mode, and that interactive contacts are more efficient than one-way outreach. We believe that these findings influenced campaigns, including the Obama ’08 primary campaign, to invest more in youth outreach.

Similarly, colleges vary in their populations, settings, resources, missions, and structures, but NSLVE is yielding general lessons about what tends to work to engage students in politics.

Other kinds of outcomes may be harder to measure and yet can still be measured at scale. For example, whether kids know geometry is hard to measure–it can’t be captured by a single test question–but society invests in designing reliable geometry tests that yield an aggregate score for each child. So one could conduct Comparative Effectiveness Research on math education. The fact that mastering geometry is a subtler and more complex outcome than voting does not preclude this approach.

But it does take a social investment to collect lots of geometry test data. For years, I have served on the committee that designs the National Assessment of Educational Progress (NAEP) in civics. NAEP scores are valuable measures of certain kinds of civic knowledge–and teaching civics is a democratic practice. But the NAEP civics assessment doesn’t receive enough funding from the federal government to have samples that are reliable at the state or local level, nor is it conducted annually. This is a case where the tool exists, but the investment would have to be much larger to permit really satisfactory CER. It is not self-evident that the best way to spend limited resources would be to collect sufficient data for this purpose.

Other kinds of outcomes–such as the quality of discourse in a community–may be even more expensive and difficult to measure at scale. You can conduct concrete experiments in which you randomly vary the inputs and then directly measure the outcomes by surveying the participants. But you can only vary one (or a few) factors at a time in a controlled experiment. That means that a large and expensive body of research is required to yield general findings about what works, in which contexts, for whom.
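
As a rough illustration, here is a minimal sketch (again on simulated data) of how a single-factor randomized experiment of this kind might be analyzed: a survey outcome is compared across two randomly assigned discussion formats. The formats, scale, and effect size are invented for the example.

```python
# Illustrative sketch only: analyze one randomized factor (discussion format) against
# a survey outcome such as a 1-5 rating of discourse quality. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm = 200

control   = rng.normal(loc=3.2, scale=0.9, size=n_per_arm)  # e.g., standard format
treatment = rng.normal(loc=3.5, scale=0.9, size=n_per_arm)  # e.g., facilitated format

# Estimated effect of the single factor that was varied, with a Welch t-test.
effect = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Estimated effect of the varied factor: {effect:.2f} points")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

Each such experiment answers one narrow question about one factor in one context, which is why a large and expensive body of them is needed before general lessons emerge.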

The good news is that studying which discrete, controllable factors affect outcomes is only one way to use research to improve practice. It is a useful approach, but it is hardly sufficient, and sometimes it is not realistic. After all, outcomes are also deeply affected by:

  • The motivations, commitment, and incentives of the organizers and the participants;
  • How surrounding institutions and communities treat the intervention;
  • Human capital (who is involved and how well they are prepared);
  • Social capital (how the various participants relate to each other); and
  • Cultural norms, meanings, and expectations.

These factors are not as amenable to randomized studies or other forms of CER. But they can be addressed. We can work to motivate, prepare, and connect people, to build support from outside, and to adjust norms. Research can help. It just isn’t research that resembles CER.

Democratic practices are not like pills that can be proven to work better than alternatives, mass produced, and then prescribed under specified conditions. Even in medicine, motivations and contexts matter, but those factors are even more important for human interactions. It’s worth trying to vary aspects of an intervention to see how such differences affect the results. I’m grateful to have been involved in ambitious projects of that nature. But whether to invest in CER is a judgment call that depends on practical issues like the availability of free data. Research on specific interventions is never sufficient, and sometimes it isn’t the best use of resources.

