neuroscience and morality

I recently had occasion to poke around in the growing literature on neuroscience and morality.* I have not had time to read some of the big and important books on this subject, so the following are just preliminary notes, largely untutored.

Some evidence from brain science suggests that people need emotions in order to reason effectively about human behavior. Patients with damage to certain brain regions are able to think clearly about many matters but cannot make smart practical judgments, even in their own self-interest. An old example was Phineas Gage, the nineteenth-century American railroad foreman who lost a portion of his brain in a freak accident and could think perfectly well about everything except human behavior. He also lacked emotions. Often patients with similar brain damage are devoid of all empathy and guilt; they act like sociopaths. It seems that moral emotions (such as care) are biologically connected to all reasoning about human beings.


These studies support Aristotle’s view, according to which an emotion is always a combination of desire and cognition. Anger, for example, is “desire with distress, where what is desired is retribution for a seeming slight, the slight being improper” (Rhetoric 1378a). I could get red in the face and have a high pulse rate, but I wouldn’t be “angry” unless I believed that someone had done wrong. Empirical beliefs and moral interpretations form part of my emotional state.

Some leading brain researchers hypothesize that human beings evolved to respond emotionally to categories of situations. These instinctive responses allowed people to read one another and generated the limited altruism necessary for group survival in prehistory. Moral reasoning and theory then arose post facto. Today, we use principles to rationalize judgments that we make on the basis of instinctive emotions. For example, the Golden Rule developed as a generalization from our emotional reactions to concrete cases. However, our intuitions are often inconsistent. For example, we oppose actively killing one person to save several others, yet we accept standing by when inaction would have the same effect. We regret an involuntary action that harms other people, but we don’t feel bad when we do the same thing without any consequences. (For example, as Michael Slote observed, if you stray across the median and kill someone, you will feel terrible for a long time; but if you make the same driving mistake and nothing happens, you will soon forget about it.) These responses make little sense within most moral theories, but they can be explained as the result of an emotional aversion to active killing that arose in prehistoric times.

In essence, the brain researchers believe in an Aristotelian theory of the emotions plus a Darwinian theory of morality. The Darwinian part of their account strikes me as bad news, because it suggests that our moral intuitions are instincts that developed so that our ancestors could preserve their genes. Our instincts are biased in various ways: for example, in favor of our genetic relatives. Therefore, we cannot rely on our intuitions as guides to truly good behavior, yet we are so “wired” that instincts powerfully influence us. Fortunately, the Darwinian explanation seems empirically less certain than the basic finding that emotions and cognitions are interdependent.

The neuroscience raises but doesn’t answer important normative questions: What kind of moral reasoning can we reasonably expect of human beings? Are we at our best when we rely openly and fully on emotions and/or narratives, or when we try to use moral theories or rules? Which features of human practical reasoning are good, and which are bad?

*See, for example, Joshua Greene and Jonathan Haidt, “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences, vol. 6, no. 12 (2002), pp. 517-523; Steven W. Anderson, Antoine Bechara, Hanna Damasio, Daniel Tranel and Antonio R. Damasio, “Impairment of Social and Moral Behavior Related to Early Damage in Human Prefrontal Cortex,” Nature Neuroscience, vol. 2, no. 11 (Nov. 1999), pp. 1032-1037.