I’m helping with the evaluation of a civic education curriculum. I don’t want to go into details because this is an unpublished evaluation for a specific organization in a particular context. However, I have observed an interesting pattern and wonder what explains it and whether it generalizes.
We asked both the students and the teachers about various pedagogies. For instance, the students were asked to evaluate statements like these (among others):
- Memorizing facts was the best way to get a good grade from teachers in my classes.
- Teachers lectured, and the students took notes.
- Students were encouraged to make up their own minds about issues.
- Teachers encouraged students to express their opinions during class.
Their teachers were asked about the same list of pedagogies, but the questions for them were phrased in terms of how much they used each approach.
The goal was to distinguish various approaches and then correlate them with things like the number of correct answers to factual questions, students’ skills, and their beliefs about democracy. Then we could see whether, for example, students who discussed issues more in class were more confident about their skills for discussion. The findings wouldn’t be causal, but they would be suggestive.
In the actual data, the most teacher-centric and the most student-centric approaches (if you can accept those descriptions) correlated. For instance, there was a positive correlation (0.29) between “Teachers encouraged students to discuss political or social issues about which people have different opinions” and “Memorizing facts was the best way to get a good grade from teachers in my classes.” Likewise, there was a positive correlation (0.28) between “Most students felt free to express opinions in class even when their opinions were different from most of the other students” and “Teachers required students to memorize facts or definitions.” The correlations were even larger in the teacher data.
Most of the student outcomes–especially their ability to answer factual questions–correlated positively with all of the pedagogies. Students were more likely to know the facts if their teachers lectured and if they discussed issues–not surprisingly, since these two pedagogies correlated with each other.
One interpretation is that some students just got more of everything than the others–their “dosage” was higher. But I don’t think so, based on what I know about the intervention. Besides, the questions weren’t phrased in a way that should measure dosage.
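To make the "dosage" (or any shared-factor) interpretation concrete: if every student's responses load on a single common factor, such as overall classroom intensity or an acquiescent response style, then even two otherwise unrelated pedagogy items will correlate positively across students. Here is a minimal simulation with entirely hypothetical numbers, purely illustrative and not drawn from the actual survey:

```python
import numpy as np

# Hypothetical simulation: a shared per-student factor ("dosage")
# feeds into two otherwise independent pedagogy items.
rng = np.random.default_rng(0)
n = 2000

dosage = rng.normal(size=n)                       # shared factor per student
lecture = 0.5 * dosage + rng.normal(size=n)       # e.g., "teachers lectured" item
discuss = 0.5 * dosage + rng.normal(size=n)       # e.g., "students discussed issues" item

# The two items correlate positively even though their unique parts
# are independent; the shared factor alone creates the association.
r = np.corrcoef(lecture, discuss)[0, 1]
print(round(r, 2))  # positive, in the rough neighborhood of 0.2
```

With these (made-up) loadings the expected correlation is 0.5² / (0.5² + 1) = 0.2, which is in the same range as the 0.28–0.29 correlations described above; of course, the same pattern would also be produced by genuine complementarity between the approaches, so the simulation only shows that a shared factor is sufficient, not that it is the explanation.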
Another interpretation is that these approaches should and do complement each other. I can certainly see why good teachers might say both “I encouraged students to express their opinions during class” and “I placed great importance on students learning facts.” (These responses were correlated at 0.8).
A third interpretation is that these questions don’t yield valid data, because teachers and students are not very aware of the pedagogies they experience, and are especially unaware of how their experiences compare to others’.
I’m wondering whether the positive correlation between apparently contrasting teaching styles is commonly observed.