in defense of (some) implicit bias

I hope that if there were an implicit bias test for Nazism, I would demonstrate a strong negative bias. Shown rapid-fire images of swastikas and Nazi leaders, I would be unable to associate them with positive words without strenuous effort. The reason is that I learned a deep aversion to National Socialism, based originally on reasons and evidence. It is now no longer efficient for me to use conscious effort to assess Nazis, their pros and cons. I have rightly translated a very well-founded judgment into a habit, which works like a constructed instinct. That way, I can reserve my limited attention and cognitive capacity for other issues.

In 1970, Charles Fried proposed as a philosophical thought-experiment a situation in which two people are drowning, one of whom happens to be your spouse. It was “absurd,” said Fried, that you should be impartial about which one to save. Fried was developing an argument against pure impartiality. But Bernard Williams famously replied that you shouldn’t even have to think about which person to save. That would be “one thought too many.” If you must reason about whether to save your spouse as opposed to someone else, you do not love your spouse. The problem with having to think in this case is not mere inefficiency (it might slow you down and increase your spouse’s chance of drowning). It’s more basic than that. You do not have a “deep attachment” to another person unless—here I extend Williams’ argument—you have turned your preference for that person into an acquired instinct. Your ability to act on that instinct instead of reasoning is proof of a process that we call love.

In a really interesting new paper, “Rationalization is Rational,” Fiery Cushman argues that human beings, since we have limited cognitive resources, have evolved several different modes of representing things in our environment: reason and planning, habit, instinct, and norms. These modes require varying amounts of cognitive attention. Cushman also proposes that we have evolved mechanisms for shifting representations from one mode to another for efficiency’s sake. For instance, we intentionally learn the way home and then form a habit of walking home so that we no longer have to think about it. But we can also make a habit conscious and practice until we change it.

Many people are currently worried about two specific “representational exchanges,” in Cushman’s terms. One is rationalization. We think that we are making a conscious and reasoned choice, but we have actually formed an instinctive reaction that we then merely rationalize with explicit words. This phenomenon is widely taken to be evidence of human unreason and inability to deliberate. But Cushman sees it as an efficient process. We can’t go through life assessing everything explicitly, so we develop habits of reacting to categories of things and then justify our reactions when reasons are needed. So long as the learned habit was based on good thinking in the first place, it is an efficiency measure rather than a limitation. In turn, rationalization (giving reasons for something we have already decided) serves a useful purpose: it puts a habit into verbal form so that it can be debated.

The other problem that worries many of us is implicit bias, particularly in the form of racial stereotyping. Tests of implicit bias show that various forms of it are common in the population as a whole.

Implicit bias research sometimes seems to flatten crucial moral differences. A subject might have a 3% bias against African Americans and a 24% bias against Millennials. This does not mean that generational bias is eight times more important, even in this individual’s case. Racism is structural, historical, connected to laws and institutions, and literally deadly. Generational bias is just one of those things we should probably think about. To assess the empirical data about bias, we need judgments about what is just and unjust.

Applying Cushman’s insight, I would go further. An implicit bias is not necessarily bad at all. It is actually a virtue (in the Aristotelian sense) if it reflects a process of reasoning and learning that we have stored as a habit. Being biased against Nazis and in favor of your spouse are virtues. Being biased against people of color is a vice. The difference lies in the content of the judgment, not the form.

It’s true that any bias can mislead. For instance, your appropriate abhorrence of Nazism might distort your views of justice in the current Middle East. Your appropriate bias in favor of your dearly beloved family members might cause you to treat strangers in unjust ways. It is characteristic of virtues that each is insufficient; we need a whole suite of them. And one important task is to bring even our best biases into conversation with other ideas and principles. But it wouldn’t be progress to temper your bias against Nazis or in favor of your spouse. That would just weaken your virtues. Progress means combating bad biases, developing good biases, and combining your good biases with more abstract principles of judgment.

See also: the era of cognitive bias; marginalizing views in a time of polarization; Empathy and Justice; Jonathan Haidt’s six foundations of morality; and don’t confuse bias and judgment (which is incompatible with this post).

This entry was posted in philosophy.

About Peter

Associate Dean for Research and the Lincoln Filene Professor of Citizenship and Public Affairs at Tufts University's Tisch College of Civic Life. Concerned about civic education, civic engagement, and democratic reform in the United States and elsewhere.