Foundations and rich individuals invest hundreds of millions of dollars in efforts to change policy, such as the 20-year struggle for health care reform and the campaign to address global warming. One of those efforts achieved major legislation; the other has fizzled for now. The question arises: How can we tell when and why investments in advocacy work? If we can’t evaluate them, most donors will be skeptical about spending much money on advocacy. Those who are willing to take the risk won’t know where to invest.
Steve Teles and Mark Schmitt have an important new paper called “The Elusive Craft of Evaluating Advocacy” (PDF). It’s short, free of jargon, and largely free of data and citations, but it is based on a remarkable combination of experience and reflection. Teles is the author of (among many other publications) The Rise of the Conservative Legal Movement: The Battle for Control of the Law, which is basically an evaluation of a long, complex advocacy effort that succeeded in remaking the American judiciary. Schmitt has been an important Senate staffer (hence, a target of advocacy) and a thoughtful participant in DC think tanks and their advocacy efforts.
They argue that the methods appropriate for assessing services cannot work for advocacy efforts. Because of the unpredictability of politics, the many players who focus on any single topic, and the bias toward the status quo (among other reasons), detecting the policy impact of a particular project or organization is virtually impossible.
They don’t quite say this, but it could be a serious error to try to detect causality, as if one could find a replicable recipe for policy change and then put all one’s money into it. Quite the contrary, funders should use what this article (following Teles’ book) calls “a spread-betting approach to making grants.” It’s almost always smart to fund a diverse set of strategies and flavors of reform–the radical outsiders, the expert-driven negotiators, the grassroots organizers, and others–because success depends on a mix of these, in unpredictable proportions.
A corollary of the last point: Don’t try to build some kind of unified coalition, let alone one “go-to” organization. On the contrary, maintain and expand a diverse and somewhat contentious network.
That is good strategic advice, but how to evaluate outcomes? Teles and Schmitt recommend: 1) evaluate a whole portfolio, not individual projects; 2) set the longest possible time horizon, because failures can suddenly become successes, and even the most thrilling legislative victories can disappoint; 3) evaluate the advocates (i.e., organizations and leaders), not individual projects or strategies, because you want to encourage adaptability and resilience; 4) assess positions in networks if you want to purchase influence; 5) use peer appraisals to assess advocates; and 6) drop all pretense of positivist social science and allow yourself to exercise judgment, or what Aristotle and the contemporary theorist Bent Flyvbjerg call “phronesis.” (Flyvbjerg is hardly a household name, and his work may not be known in think tanks, but he has shaken up American political science with his radical alternative to standard social science–which resembles the recommendations of Schmitt and Teles. One additional tip he offers is to be attentive to who benefits from any given strategy, even the most idealistic ones.)
If you are interested in this topic, I also recommend Lobbying and Policy Change: Who Wins, Who Loses, and Why by my Tufts colleague Jeffrey Berry and four co-authors. Their book demonstrates that having the most money does not determine success in advocacy. In fact, money explains only about five percent of the variance in outcomes. Strategy counts, but so do uncontrollable factors such as the profound bias in favor of the status quo.