Author Archives: Peter Levine

why states need new and different policies for democratic education

States have various policies in place that we might hope would encourage civic learning and engagement. Examples are curricular requirements (for social studies and/or civics classes), mandatory tests, and even the statewide service mandate in Maryland. We don’t know much about how policies affect experiences at the classroom level, although we do know that certain experiences are valuable–notably, moderated discussions of controversial issues, well-conceived service projects, and challenging simulations of political or legal institutions.

My colleagues and I were able to combine information about all the extant state policies with evidence from the Knight Foundation’s survey of 100,000 high school students. This survey gave us information about the kids’ backgrounds, their experiences in classrooms and schools, and certain civic outcomes related to the First Amendment, such as valuing freedom of speech and using the news media. As expected, we found positive associations between classroom-level experiences and the outcomes we value. For instance, discussing controversial issues once again emerged as a beneficial opportunity. But we found no statistical links between state policies and classroom activities or students’ outcomes.

I conclude that the states are basically barking up the wrong trees. We need new types of policies that would actually encourage the activities we want to see in classrooms. Mandating courses and testing students’ academic knowledge of politics are worthy policies, but they don’t get us the values and habits we want to see.

(See Mark Hugo Lopez, Peter Levine, Kenneth Dautrich, and David Yalof, “Schools, Education Policy and the Future of the First Amendment,” Political Communication, vol. 26, no. 1, January-March 2009.)

core principles of public engagement

Complementary to the “Champions of Participation” report that I mentioned yesterday is a project to identify the principles of public engagement. The authors are leaders in the “dialogue and deliberation community,” organized by the NCDD. Their main audience is the Obama Administration and others who want to help implement the president’s requirements for “transparency, participation, and collaboration.” They begin: “We believe that public engagement involves convening diverse yet representative groups of people to wrestle with information from a variety of viewpoints, in conversations that are well-facilitated, providing direction for their own community activities or public judgments that will be seriously considered by policy-makers and/or their fellow citizens.”

NCDD is open to comments on the draft, but I’m happy to endorse this as is.

the Champions of Participation speak

This is a meeting of senior federal managers who have been identified as “Champions of Participation.” They were convened by AmericaSpeaks, Demos, Everyday Democracy, and Harvard University’s Ash Institute for Democratic Governance and Innovation at the John F. Kennedy School of Government. They met to discuss the president’s executive order requiring “transparency, participation, and collaboration,” which he signed on his first day in office. (For disclosure, I am on the boards of both AmericaSpeaks and Everyday Democracy.)

So far, the administration has focused on transparency and the use of technology to share information with citizens and collect their opinions. This group broadened the topic to include face-to-face participation and collaboration by federal agencies. The full report contains very concrete and challenging recommendations. But I was especially interested in some of the philosophical positions these managers took. According to my transcriptions from the video, they said:

    Leaders engaging in public participation initiatives [should] value not knowing the solution before they begin and be willing to engage in collaborative learning and collaborative problem-solving to create the solution. And we believe that this approach, or this value, would represent a humility that is not often seen when government leaders engage the public.

    Public citizen stewardship and local knowledge. … We thought it was important to make civic engagement part of every agency’s mission and citizen-centered delivery of the mission part of that.

the Blue Mosque

Visiting the Sultan Ahmed Mosque in Istanbul–popularly known as the Blue Mosque–Barack Obama reportedly said, “I am very impressed spiritually. This is one of the most important instances in my life.” He probably didn’t have much time there, but I know the feeling. I’ve visited several times, during two three-week family visits to Turkey. On both visits, we stayed within blocks of the Blue Mosque, which is one of the masterpieces of Ottoman architecture. It stands opposite the great Byzantine basilica, Hagia Sofia, which was the largest and most influential Christian church in the world for a millennium. With its vast dome, the Blue Mosque competes with Hagia Sofia but also complements it. It is serenely classical: transparent, regular, symmetrical, and calm. It’s not by the preeminent genius of Ottoman architecture, Mimar Sinan, but by a somewhat later architect named Sedefkar Mehmet Aga. I suppose it is less original and bold than a Sinan mosque, but it represents a confident stage in the development of this serene style, which combines Roman engineering, Byzantine design (massive domes over square interiors), and Islamic abstraction and decoration.

I might actually choose the Blue Mosque as my single favorite building. My experiences there were probably more aesthetic than spiritual, but the distinction is elusive. (When my daughter was 7, she and I built a small cardboard model of an Ottoman mosque.)

experimenting in social policy

Often, people who say that they have experienced specific services or opportunities also report better outcomes–such as educational success, employment, or health–compared to similarly situated people who never received those opportunities. For example, according to a paper published by CIRCLE, young adults who were required to perform community service as part of their middle school coursework are 14 percentage points more likely to graduate on time from college, even when one compares them to people who are similar with respect to all the other factors measured in the survey, such as test scores, parental education, race, and gender. One could conclude that community service has a 14-point positive impact on college graduation.

Indeed, that effect is possible. But service-learning has not yet been tested with a more rigorous method of evaluation. The “gold standard” is a randomized experiment, in which some people (the “treatment group”) are randomly assigned to receive an experience that others (the “control group”) don’t get. If assignment is random, then the difference in outcomes is a measure of the impact of the experience.

Experiences that appear highly beneficial in studies of whole populations often show modest or no results in experimental tests. This is such a common pattern that it calls for some general reflections.

why experiments rarely show impact

Survey-based studies and experiments may produce divergent results because the people who receive opportunities have advantages that account for their later successes and that are not measured in surveys–such as motivation or perseverance, helpful networks, enrollment at subtly better schools, or ties to motivated teachers and other helpful individuals. These advantages explain their good outcomes and account for what we originally hoped were benefits of specific programs.

(Another possible reason for the failure of programs to “work” when tested in randomized experiments is that the control groups actually find alternative programs. If that happens, an experiment will miss real benefits. But it’s my general sense that this is a relatively rare explanation.)
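The selection-bias mechanism described above can be illustrated with a small simulation. In this sketch (my construction, not from the CIRCLE study), the program has zero true effect, but a hypothetical unmeasured trait–call it “motivation”–both raises the outcome and makes people more likely to opt into the program. The naive observational comparison then shows a large apparent benefit, while random assignment correctly estimates an effect near zero:

```python
import random

random.seed(0)
N = 100_000

# Unmeasured trait: motivation, uniform on [0, 1].
motivation = [random.random() for _ in range(N)]

# True causal effect of the program on the outcome (zero in this sketch).
TRUE_EFFECT = 0.0

def outcome(m, treated):
    # Outcome is driven by motivation plus noise; the program adds TRUE_EFFECT.
    return m + TRUE_EFFECT * treated + random.gauss(0, 0.1)

# Observational world: more motivated people opt into the program more often.
obs_treated = [random.random() < m for m in motivation]
obs_y = [outcome(m, t) for m, t in zip(motivation, obs_treated)]

# Experimental world: a coin flip assigns the program.
rct_treated = [random.random() < 0.5 for _ in range(N)]
rct_y = [outcome(m, t) for m, t in zip(motivation, rct_treated)]

def mean_diff(y, treated):
    # Difference in mean outcomes: treated minus untreated.
    t = [v for v, w in zip(y, treated) if w]
    c = [v for v, w in zip(y, treated) if not w]
    return sum(t) / len(t) - sum(c) / len(c)

print("observational estimate:", round(mean_diff(obs_y, obs_treated), 3))
print("experimental estimate:", round(mean_diff(rct_y, rct_treated), 3))
```

The observational estimate comes out strongly positive (roughly a third of a point on this scale) even though the program does nothing, because the treated group was more motivated to begin with; the randomized estimate hovers near zero. This is exactly why survey-based findings and experimental findings can diverge.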

the injustice of testing only some programs

We treat programs and institutions with profound inconsistency. Government-funded programs for poor people are expected to show long-term impact in “gold standard” experiments, but no one ever asks whether services provided to high-income people work–even when such services are publicly subsidized. For instance, my university provides four years of elaborate educational, social, recreational, health, and housing services for its undergraduates. No one imagines that the impact of that package of services would ever be tested in a randomized experiment. Universities like mine obviously confer advantages on the individuals who graduate; our graduates are preferred in the job market. Since they benefit (relative to others), they want the opportunity to attend. And since they have political and economic clout, they get the opportunities they want. But a randomized experiment might find that the social benefit of a Tufts education is small, especially if the control group got a BA at half the cost.

what we should do

Notwithstanding this serious unfairness, I believe we should expect programs aimed at poor people to “work” under rigorous tests. Political power is unequally distributed. Those without power never get much public assistance. Given limited funds, we need to spend every dollar well. To test programs experimentally is not punitive; it’s a matter of making sure that we really do good.

In fact, I’m in favor of widespread field experimentation, with the following caveats:

1. There is an appropriate life-cycle for programs. They shouldn’t be expected to “work” in randomized studies from Day One. There should first be a fairly long process of informal experimentation and adjustment. Such experimentation should be supported.

2. When many programs fail to show impact, we shouldn’t become generally pessimistic about social interventions. We are holding them to standards that we never use in the private sector or when assessing other types of government programs (such as weapons purchases, agricultural subsidies, or macroeconomic policies).

3. Benefits need not always be long-term. Many beneficial interventions wear off, and that is an argument for follow-up, not for canceling the programs. Besides, if a program actually makes life better for 13-year-olds, that seems like an important advantage even if they are not still better off when they are 20.

4. Experiments should be used to improve programs. That is much more promising than lurching from one untested strategy to another every time results are disappointing.

5. Randomized experimentation is a rather detached, arm’s length method. But it is also an easy method to explain. Participants should influence important aspects of the research, such as decisions about what to measure and interpretations of the results.