In a series of posts, I have been developing the idea that anyone’s moral thinking can be modeled as a network: the nodes are beliefs, and the links are various kinds of connections (implications, generalizations, perceived similarities).
This modeling method would work for a Kantian, a Buddhist, a Marxist, a Thomist–it is morally neutral, not a substantive position. However, I have been arguing that certain formal structures are better than others. By bringing your own network more into line with those standards, you can move from your starting point toward an improved moral position.
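To make the model concrete, here is a minimal sketch of the kind of structure I have in mind. The class name, the link kinds, and the example beliefs are my own illustrations, not part of any particular moral theory; any graph representation with typed links would serve.

```python
from dataclasses import dataclass, field

@dataclass
class BeliefNetwork:
    """Nodes are beliefs; links are typed connections between them."""
    beliefs: set = field(default_factory=set)
    links: list = field(default_factory=list)  # (a, b, kind) triples

    def connect(self, a, b, kind):
        # "kind" labels the connection: e.g. "implies",
        # "generalizes", or "resembles" (a perceived similarity)
        self.beliefs.update((a, b))
        self.links.append((a, b, kind))

net = BeliefNetwork()
net.connect("suffering is bad", "cruelty to animals is wrong", "implies")
net.connect("cruelty to animals is wrong", "eating animals is questionable",
            "implies")
print(sorted(net.beliefs))
```

The point of the sketch is only that the same container holds a Kantian's network or a Buddhist's; nothing about the structure dictates its content.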
This is fundamentally a social process because the nodes in your network are shared with other people, and a good network is one that interacts well with theirs. Thus I am implying a theory of collaborative moral learning. It is not a psychological theory about how people do learn, but a moral theory about how we should learn. I am interested in getting this theory right and avoiding philosophical errors. But my goal is to develop an actual method for moral introspection. That would be a contribution to the long tradition of moral exercises that go back to the ancient Stoics and classical Indian thinkers.
Below are some notes about the learning theory. I think it avoids several significant pitfalls. It does not make learning from experience seem automatically beneficial, because people can learn very bad ideas. It does not imply that better educated people–those who have more formal learning–are more moral, which is clearly false. And it addresses the fact that many important moral issues are “socially constructed,” yet there are real differences between good and evil.
Why morality involves continuous learning
If we want to assess the moral beliefs of an individual, a natural method is to list the important ones and decide whether each belief is good or right. For example, if anyone’s list includes racist beliefs, that is worse than if it does not. You should get rid of racism.
We might hope to be able to generate the right list of beliefs (or at least a good enough list) by means of an algorithm: a set of clear instructions. For instance, we might start with Kant’s Categorical Imperative or the principle of utility, add empirical information about the situation that confronts the individual, and derive the moral conclusions that rightly follow. Using any reasonable method of that kind, racism would not end up on one’s list, because–why should it? It is not suggested by science or any other empirical information, and it contradicts all plausible general moral principles.
If we tried to construct a computer program that would model the moral beliefs (although perhaps not the moral sentiments and motivations) of a good person, we would not have to worry about the computer ending up as a racist. The thought would not occur to it, so to speak. That is good, because I have no doubt that racism is morally wrong, any more than I doubt that it is a sunny day or that the Allies won World War II.
So far, so good. But modern Americans should not merely refrain from endorsing racism; they should actively oppose it. Their lists of moral beliefs should include anti-racist principles. The reasons that they must confront racism are historically contingent and, we hope, temporary. At some point in the future, it will no longer be necessary to hold anti-racist ideas, because other people will not feel or endorse racism any more, and its effects will not linger.
But even then, a good list of moral principles will include many historically contingent beliefs. They will be beliefs about things that have developed over time as a result of many people’s behavior and adopted complex and sometimes ambiguous forms. Examples of the “socially constructed” objects that I should form opinions about today are the United States, romantic love, the police, eating animals, and the profession of teaching in college.
In evaluating these topics, general moral principles are often relevant. When I think about whether I should really be a college professor, the principle of utility comes appropriately to mind, and I ask, “Am I having the best possible effects on the most possible people?” (The results are troubling.) If a few general principles could generate the important moral conclusions, we could learn them quickly. That is the hope behind teaching children the Golden Rule and other such algorithms. Such principles can handle certain kinds of stylized choices: May I commit murder? May I lie?
But these principles will not tell me whether to live an active or a contemplative life, whether to engage in politics, whether to spend a limited amount of energy opposing racism or boosting economic growth, whether to have children, or how to settle a whole range of other significant and troubling questions. Moral principles will not tell me which objects are morally salient, troubling, promising, possible, or inevitable.1
Nor will principles banish skepticism, because I can always doubt the principles. For example, I am very confident that racism is bad. But if I had been born a white person two centuries ago, chances are good that I would have intuited the rightness and naturalness of white superiority. I grew up intuiting that it was fine to kill and eat animals, but that may seem a barbarous assumption to my grandchildren. In the moral realm, dispositive arguments are rare; for the most part, we just have refutations of substantive views that prove internally inconsistent. How we can vindicate moral positions is not easy to say.
In the case of racism, we have begun to learn that it is wrong. When I was a young child, I first learned the very existence and salience of race from other people. Other Americans learn to recognize race in much less than a second and frequently allow it to influence their behavior; and for that reason, we must pay attention to it. I also began to learn early on that I should not associate feelings of superiority or inferiority with race—because I was told not to. The people (parents, teachers, books, and famous leaders) who told me to oppose racism had learned it from experience: their own and others’. The Civil Rights movement had taught white Americans that racism was wrong, something that its members knew from both personal and vicarious experience. Meanwhile, I was learning many other moral lessons about things that are socially constructed, from marriage to school to the Internal Revenue Service.
It is easy to see that we must learn morality, because we come into the world knowing practically nothing and have very little time to develop moral theories of the whole complex world that confronts us. Besides, a considerable amount of the important information is historical and so not available to be perceived directly. You cannot understand racism, for example, without knowing something about institutionalized slavery, and that knowledge must be vicarious.
Alas, learning from experience is not reliably beneficial. In 1800, no one was a Nazi. By 1930, millions of people had learned to be Nazis. They had learned by listening to other people and amassing and interpreting new information—about Germany’s defeat in the Great War, the economic crises of the Western democracies, the perceived power of cosmopolitan Jews, and so on. If we define “reasoning” in a reasonably neutral fashion, it can be disastrous. Being open to new experiences and perspectives can make us worse than we were.
If we wanted to build a computer to model the process of developing a good list of moral beliefs, it would have to be interactive: harvesting and interpreting masses of data and perspectives on the data. Its results would vary depending on the historical circumstances. It would not merely do what most people suggest doing, because that could lead it in the wrong direction. It would have to be programmed to reach better, rather than worse, conclusions from what it learned. And, I have argued, we could not program it to reach adequate conclusions by building into its instructions general principles like the Golden Rule. Those rules might help, especially by preventing certain egregious mistakes. They would not suffice.
So far, I have written about lists of moral beliefs, each one of which is either right or wrong. But how can we tell, in the ambiguous cases? It is helpful to think not about lists but about overall structures of beliefs, in which one belief is linked to another when the first implies the second. A coherence theory of moral truth assesses the quality of the overall structure of moral beliefs instead of asking whether each belief is right or wrong on its own. If a whole moral worldview hangs together well and contains many plausible but not self-evident beliefs and arguments, it is persuasive even if one can doubt each element.
But there are two problems with standard coherence theories in the realm of ethics. First, they make internal consistency the main criterion. But many different worldviews can be morally consistent, including evil ones. It is easy to avoid a moral tension between A and B by just ignoring A, but that does not make you a better person.
Second, they tend to favor systematic thinking. Every idea is supposed to be connected to a more general and abstract one, until you reach an apex in the “summum bonum, or, what is the same thing, … the foundation of morality” (J.S. Mill). Of course, not everyone shares Mill’s monism. W.D. Ross thought that there were several fundamental moral ideas that could not be reduced to a single one, and perhaps that was also true of Aristotle and Aquinas. But even Ross believed that the prima facie duties were “very few in number and very general in character.”2 The tendency is quite widespread to favor ideas of broad application as a means of achieving coherence. That is all very well if we know that those ideas are true. If we doubt their truth, the whole edifice rests on shaky foundations.
But if morality is a network, then “foundationalist” theories are just examples of structures with certain formal properties. In a foundationalist theory, all the nodes connect to all the others through one or a few nodes that represent very general beliefs. These beliefs are not really foundations (that is just a metaphor), but they are highly central nodes. It is not clear why you would want your network to look like this, unless you had very high confidence in the truth of those nodes. I would much rather spread my bets.
Instead, a network should have features that encourage learning through dialogue with other people. Those include: having lots of nodes, not relying too much on nodes that are fixed and nonnegotiable beliefs, deliberately including nodes that are shared by other people you know, and being able to connect each of your beliefs to others so that a conversation that begins with Belief A can move to Belief B, and your interlocutor can explore your views. With all due respect to Emerson, he was wrong to write in “Experience”:
All private sympathy is partial. Two human beings are like globes, which can touch only in a point, and, whilst they remain in contact, all other points of each of the spheres are inert; their turn must also come, and the longer a particular union lasts, the more energy of appetency the parts not in union acquire.
Actually, we can touch at many points–and are so much the better for it.
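One of the features listed above (connectivity, so that a conversation beginning at Belief A can move to Belief B) can even be checked mechanically. Here is a sketch: the function name and the example beliefs are my own illustrations, and the links are treated as traversable in both directions, which is an assumption the essay does not make explicit.

```python
from collections import deque

def conversation_path(links, start, goal):
    """Return a chain of linked beliefs from start to goal, or None."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # a conversation could follow this chain
        for nxt in neighbors.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # disconnected: the conversation cannot get there

links = [
    ("racism is wrong", "the history of slavery matters"),
    ("the history of slavery matters", "institutions can embody injustice"),
]
print(conversation_path(links, "racism is wrong",
                        "institutions can embody injustice"))
```

A network with no such path between two of your beliefs is one where an interlocutor who starts at the first can never reach the second; that is the formal sense in which the network fails to support dialogue.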
Why education does not guarantee moral progress
If it is better to have more beliefs rather than fewer, for all your beliefs to be connected, and for your whole moral worldview to be influenced by many other people’s ideas, then isn’t it better to be highly educated? For education exposes you to a range of perspectives, requires you to develop ideas on many subjects, and encourages you to explore their connections.
Indeed, liberal education has a moral purpose, and the intention behind it should be honored. But I have not observed that people with more book-learning are better, morally. Why not?
First, there are many ways to learn from other people. I am particularly enthusiastic about learning from working together, from collaboration, which can be done by people without formal education.
Second, there may be diminishing moral returns from learning–the moral curve still climbs, but slowly after a while.
But mainly, the people who read and study the most are paid to do so (unless they are still preparing for paid employment in intellectual work). The jobs that involve reading and studying are not designed to challenge you morally or to equip you with thoughts that will enrich your moral dialogue with other people. Rather, you are paid to collect information that supports or tests positions that you already hold. You read with increasingly instrumental purposes. That gives the additional reading little moral value and may actually make it counter-productive morally, unless you fight to keep your mind open.
Another interpretation is Tolstoy’s. In late works like The Death of Ivan Ilyich, Tolstoy maintained that the moral truth is simple, easily obscured by fancy culture and learning. Ilyich and all his educated friends are unable to love. Their lives are living deaths, in contrast to the simple peasant servant Gerasim, who happily does what is right because he has only a few ideas in his head.
Tolstoy hated Shakespeare and thought that other people’s admiration for the playwright was “a great evil, as is every untruth.” That is because Shakespeare loved the world in all its variety and complexity. Keats had found in Shakespeare the quality that he called “Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact and reason.” Other critics have noted Shakespeare’s remarkable ability not to speak on his own behalf, from his own perspective, or in support of his own positions. Coleridge called this skill “myriad-mindedness,” and Matthew Arnold said that Shakespeare was “free from our questions.” Hazlitt said that the “striking peculiarity of [Shakespeare’s] mind was its generic quality, its power of communication with all other minds–so that it contained a universe of feeling within itself, and had no one peculiar bias, or exclusive excellence more than another. He was just like any other man, but that he was like all other men.”
In short, we have a choice about what kind of ethical network to build. Tolstoy said: a simple one, with the Truth at its heart. Keats said: a very complex one without any identifiable center. Formal education does not necessarily promote either type of network–that depends on how the education is conducted. I argue that Keats was right because there is no single Truth, even though there is an objective moral difference between a good network of beliefs and an evil one.
1 In a chapter entitled “How to be a Moral Realist” (1988), Richard N. Boyd writes, “Much [moral] knowledge is genuinely experimental knowledge and the relevant experiments are (“naturally” occurring) political and social experiments whose occurrence and whose interpretation depends both on “external” factors and upon the current state of our moral understanding. Thus, for example, we would not have been able to explore the dimensions of our needs for artistic expression and appreciation had not social and technological developments made possible cultures in which, for some classes at least, there was the leisure to produce and consume art. We would not have understood the role of political democracy in [advancing the] good had the conditions not arisen in which the first limited democracies developed. Only after the moral insights gained from the first democratic experiments were in hand, were we equipped to see the depth of the moral peculiarity of slavery.” (p. 205)
2 W.D. Ross, The Foundations of Ethics (Oxford, 1939), p. 190.