a critique of expertise, part 2

Yesterday’s post was Part 1 of a critique of expertise in public policy. Part 2 focuses on the issue of generalization.

Experts generalize. An important aspect of almost all professional training is the identification of general concepts or categories that trigger appropriate responses. Told a story about specific people interacting in a particular context, any professional will look for abstractions. For instance, in medical school, one learns the signs and definitions of diseases, and when a disease is present, a physician knows which treatments to offer. When more than one condition is involved, or when the diagnosis is uncertain, the decision becomes complex, and good physicians fully understand the roles of judgment and luck. Doctors could never be replaced by machines that simply took in data and spat out treatment plans. But diseases and other general health conditions remain central to physicians’ analysis. They look for the necessary and sufficient conditions that define each disease, and then apply general causal theories that say: this medicine reduces that illness.

Lawyers, meanwhile, try to apply general rules from statutes, constitutions, and court rulings. Their advice may be controversial or uncertain if no single, definitive legal rule covers the situation (and they understand that), but their professional thinking involves rules. For engineers, economists, psychologists, and virtually all other professionals, the important abstractions may be different, but the basic habits of mind are alike. Professionals have achieved monumental advances (and prestige) by discovering generalizations that apply widely. For example, the polio vaccine reliably prevents polio, and that is extremely valuable to know.

You can also hear ordinary people generalize if you listen in public spaces. They say things like, “Of course Amtrak is always late, it’s a government monopoly.” Or, “You’re getting a cold; you should take vitamin C.” Research and data disprove these assertions, and a trained professional would not make them. Even an economist who was hostile to monopolies would not draw a direct line from Amtrak’s monopoly status to the tardiness of its trains. (Other countries have monopolistic railroads that run on time.) Instead of being too quick and bold with generalizations, a good professional is fully aware of complexities and nuances.

Even so, there are drawbacks to using general concepts as the main units of analysis. A person, a situation, an institution, or a community can be apprehended as a whole object. We can assess it, judge it, and form opinions about how the entity should change. Evaluating a whole situation need not be any harder or less reliable than analyzing general categories abstracted from such situations. If we can say something valid and useful about a generality (like diabetes, tax incentives, or free speech), we can talk just as sensibly about this patient, this school, or this conversation. The particular object or situation is not just an aggregate of definable components. It has distinctive features as a whole, and we human beings are just as good at understanding those as we are at generalizing abstractly.

The form that our understanding takes is often narrative: we tell stories about particular people or institutions, and we project those stories into the future as predictions. We may find generic issues and categories embedded within a story: King Lear, for example, was a king and a father, and there are general truths to be said about both categories. But the story of King Lear is much more than an aggregate of such categories, which are not especially useful for understanding the play.

In public policy, non-professionals are often better at the assessment of whole objects than experts are. That is because ordinary members or clients of a school, a neighborhood, or a firm know its whole story better than an outsider who arrives to apply general rules.

Often, professionals have in the back of their minds an empirical finding that is valid in academic terms, but that should not tell us what to do. Even when results are statistically significant, effect sizes in the social sciences are usually small, meaning that only a small proportion of what we are interested in explaining is actually explained by the research. Statistical studies shed some light on why individuals differ, but can tell us nothing about why they are all alike. In research based on surveys or field observations, the sample may not resemble the population or situation that we face in our own communities. Experimental research is conducted with volunteers (often current undergraduate psychology majors) in artificial settings. Even if a particular finding is strong, and the sample does resemble our own community, there is always a great deal of variation, and any particular case may differ from the mean. Measures are always problematic and imperfect, and some important factors are virtually impossible to measure. Unmeasured factors may be responsible for the relationships we think we see among the things we measure.
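To make the point about effect sizes concrete, here is a minimal simulation (the numbers are invented for illustration, not drawn from any study). With a large enough sample, a correlation can be “statistically significant” while explaining a fraction of one percent of the variation we care about:

```python
# Hypothetical illustration: statistical significance with a tiny effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000                            # large samples make tiny effects "significant"

x = rng.normal(size=n)                # some measured factor
y = 0.05 * x + rng.normal(size=n)     # true effect is small; most variation is noise

r, p = stats.pearsonr(x, y)
print(f"correlation r = {r:.3f}")     # about 0.05
print(f"p-value       = {p:.2g}")     # far below 0.05, hence "statistically significant"
print(f"R-squared     = {r**2:.4f}")  # about 0.0025: roughly 0.25% of variance explained
```

The p-value answers only a narrow question (could this association be chance?); the R-squared shows how little of the outcome the finding actually accounts for.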

All of this is well known and may be thoughtfully presented in the “limitations” section of a published paper. When carefully and cautiously read, such a paper can be very helpful. But the professional’s temptation is to focus on a statistically significant, published result even if its practical import and relevance are low. Besides, it is rarely the author of a paper who tries to influence a practical discussion. Often professionals have not even read the original paper that influences them. They rely instead on their graduate training, or on abstracts and second-hand summaries of more recent research. The caveats in the original studies tend to be lost.

Of course, people with professional credentials can be excellent observers and assessors of whole objects like schools, neighborhoods, or firms. In some affluent communities, practically everyone holds an advanced degree and is therefore a “professional.” But their judgments of whole objects and situations are best when they think as experienced laypeople, not as specialists. They should draw on professional expertise, but only as one source of insight (and should not rely on only one profession).

Arguments about the proper role of generalization take place within professions, not just between professionals and laypeople. Physicians, for example, are being pressed to adopt “evidence-based medicine,” which deprecates doctors’ intuitions and personal experiences in favor of general scientific findings, especially those supported by randomized experiments. Some medical doctors are pushing back, arguing that experimental findings never yield reliable guidance for complex, particular cases. What matters is the whole story of the particular patient.

The same argument plays out in education. The No Child Left Behind Act of 2001 favors forms of instruction proven in “scientifically-based research,” and the gold standard is a randomized experiment. (The frequently accepted second-best is a statistical model, which can be understood as an estimate of what would be found in a randomized experiment.) Like physicians, some educators resist this pressure, on the ground that an experienced teacher can and should make decisions about individual students and classrooms that are heavily influenced by context and only marginally guided by scientific findings.
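The parenthetical deserves a brief illustration. Below is a hedged sketch (invented numbers, and it assumes the only confounder is measured, which no real study can guarantee) of why a statistical model can be read as an estimate of an experiment’s result: a naive comparison of means is biased by self-selection, while a regression that adjusts for the confounder lands near the answer randomization would give.

```python
# Hypothetical sketch: a statistical model as an estimate of an experimental result.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(size=n)                                  # confounder: prior preparation
treated = (ability + rng.normal(size=n) > 0).astype(float)    # better-prepared students opt in
outcome = 1.0 * treated + 2.0 * ability + rng.normal(size=n)  # true effect of treatment = 1.0

# Naive comparison of means is biased upward: the treated group started out stronger.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# A regression that adjusts for the confounder approximates what a
# randomized experiment would estimate.
X = np.column_stack([np.ones(n), treated, ability])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference in means:    {naive:.2f}")    # well above 1.0
print(f"adjusted regression estimate: {coef[1]:.2f}")  # close to 1.0
```

The catch, of course, is the assumption: in real schools the confounders are rarely all measured, which is why the randomized experiment remains the gold standard and the model only second-best.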

This debate will never be fully resolved, but there is a logic to the idea that if we are going to train people in expensive graduate schools and rely on their guidance to shape general policies, they should be the bearers of “scientifically-based research.” In other words, the most optimistic claims about the value of expertise rely on a notion of the expert as an abstract and general thinker. When professionals are seen instead as experienced and wise craftspeople, their role in public life is more modest. The physician who is a seasoned healer is left to treat his or her patients; it is the medical researcher with general findings who is invited to influence policy. My claim is that we err when we give such research too much credence.