
toward a theory of moral learning

In a series of posts, I have been developing the idea that anyone’s moral thinking can be modeled as a network: the nodes are beliefs, and the links are various kinds of connections (implications, generalizations, perceived similarities).

This modeling method would work for a Kantian, a Buddhist, a Marxist, a Thomist–it is morally neutral, not a substantive position. However, I have been arguing that certain formal structures are better than others. By bringing your own network more into line with those standards, you can move from your starting point toward an improved moral position.

This is fundamentally a social process because the nodes in your network are shared with other people, and a good network is one that interacts well with theirs. Thus I am implying a theory of collaborative moral learning. It is not a psychological theory about how people do learn, but a moral theory about how we should learn. I am interested in getting this theory right and avoiding philosophical errors. But my goal is to develop an actual method for moral introspection. That would be a contribution to the long tradition of moral exercises that go back to the ancient Stoics and classical Indian thinkers.

Below are some notes about the learning theory. I think it avoids several significant pitfalls. It does not make learning from experience seem automatically beneficial, because people can learn very bad ideas. It does not imply that better educated people–those who have more formal learning–are more moral, which is clearly false. And it addresses the fact that many important moral issues are “socially constructed,” yet there are real differences between good and evil.


do fixed beliefs prevent reasonable deliberation?

In Reasoning: A Social Picture, Anthony Simon Laden (who’s visiting Tufts today) argues that there’s a “standard picture of reasoning” in which the goal is to reach conclusions. You can reason alone, but when people reason together, they assert propositions, back them up with reasons, and strive for assent. Success means reaching a conclusion as reliably and efficiently as possible.

Laden argues that this picture cannot make sense of a valuable and pervasive kind of reasoning that is quite different. Sometimes when we reason, it is not to reach a conclusion but to engage in an activity with someone else that strengthens the relationship, whether that is a civic bond, a friendship, or a romantic or familial tie. I reason with you to learn what you think, to share my views with you, to seek whatever common ground we can, and to see if we can live together rather than in parallel. A characteristic activity of this kind of reasoning is not asserting that P, but inviting a response: What do you think–can we agree that P? Reasoning is “responsive engagement with others as we attune ourselves to one another and the world around us.”

Just as there are good and bad arguments that P, so there are good and bad ways of reasoning together in Laden’s sense. But some of the norms are quite different. A particularly strong argument is one that ends the discussion. But a particularly skillful conversational invitation is one that prolongs the discussion in ways that satisfy both parties. Laden says that his view is anti-foundationalist, because one unhelpful kind of reasoning involves asserting beliefs as bedrock, self-evident, or transcendent. If someone asserts that Jesus is his personal savior, that all history is the history of the class struggle, or that science produces the only truths, that ends the conversation and so makes the individual less reasonable, on Laden’s view.

Here is where I prefer my own account of moral thought as a network of beliefs connected by various kinds of ties (not just logical entailments, but also empirical generalizations, rules-of-thumb, and family resemblances). I don’t think we can declare a person unreasonable because he holds strong commitments. None of the beliefs I listed above happen to persuade me, but I am equally fixed on other beliefs, such as my love for my own family. The fact that I am not really open to discussing that doesn’t render me unreasonable.

But to converse reasonably with others, it does help to have a moral network map with certain features:

1. You shouldn’t keep returning to a few nonnegotiable principles. It’s fine, for example, to hold religious beliefs as a matter of faith. But if you constantly and immediately cite those beliefs, it’s impossible to reason with you. To put the point in network terms, your map can include nodes that are fixed points, but they ought not carry all the traffic. It should be possible to route around them.

2. You should have more beliefs rather than fewer. It is easier to converse with someone who has many interests, commitments, and ideas, because these are points of contact. Such a person is like an organic molecule with lots of surfaces for other molecules to bond to. Yet,

3. Each belief should connect to others in ways that you can explain. That way, to engage me on my belief A provides an entree to discussing my beliefs B and C. If many of my beliefs and commitments are singletons (in network terms), the conversation quickly dies.
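To make these three features concrete, here is a minimal sketch in Python. The belief names and links are invented for illustration, not anything from the post itself. It represents a belief network as an adjacency map, checks whether conversation can “route around” a fixed node (point 1), and looks for isolated singletons (point 3).

```python
from collections import deque

# Hypothetical belief network: nodes are beliefs, edges are the looser
# connections discussed above (analogy, precedent, etc.). All names here
# are made-up examples.
network = {
    "faith": {"charity", "family"},
    "charity": {"faith", "fairness"},
    "fairness": {"charity", "family", "honesty"},
    "family": {"faith", "fairness"},
    "honesty": {"fairness"},
}

def connected_without(network, excluded):
    """True if the remaining beliefs still form one connected piece when
    `excluded` is set aside -- i.e., conversation can route around that
    fixed point instead of always passing through it."""
    remaining = set(network) - {excluded}
    if not remaining:
        return True
    start = next(iter(remaining))
    seen = {start}
    queue = deque([start])
    while queue:  # breadth-first search restricted to the remaining nodes
        node = queue.popleft()
        for neighbor in network[node]:
            if neighbor in remaining and neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == remaining

def singletons(network):
    """Beliefs with no connections at all (where point 3 fails)."""
    return {belief for belief, links in network.items() if not links}
```

In this toy map, setting “faith” aside still leaves everything connected, but “fairness” carries all the traffic to “honesty”: remove it and the conversation dies, which is the structural flaw point 1 warns against.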

Network analysis helps us see what makes a good conversationalist, but it does not show that being a reasonable conversation-partner is morally valuable. Perhaps it is more important to be good and right. That is what people typically say when they have neatly organized and simple mental networks that revolve around a few nodes, whether God, science, liberty, nature, or something else. They say: “It’s all very well to reason civilly and responsively with other people, but what really matters is my belief P.”

Well, it could be true that P. For instance, it could be true that God has laid out all the commandments for how to live, and we’ll see that for certain in the next life. But in this life, I don’t know how we can know that P except by reasoning with other people. Conclusive arguments are scarce in the moral domain; mainly, they are refutations of particular views that turn out to be inconsistent. But although conclusive arguments are scarce, we come to believe things collectively by discussing them. When the discussion is inclusive, fallibilist, and responsive to everyone’s contributions, the results are better. I am not quite sure whether reasonable conversations increase the odds that we reach the right conclusions, or rather that we create something desirable when we make our moral world by reasoning together. (In other words, I am not sure whether to characterize the activity as discovery or creation.) But the practical conclusion is the same either way. It is most important to be a reasonable interlocutor. That does not rule out holding fixed beliefs, but they must supply just some of the nodes in your moral network, and they cannot be overly central.

envisioning morality as a network

Let us posit that a person’s moral worldview is a network. The nodes are commitments, beliefs, principles, and other moral ideas. The connections are logical. But, since we are thinking about morality and not mathematics, our criteria for valid connections must be loosened. Two moral ideas are connected not only in cases when A implies B, but also if A sets a precedent for B, A is analogous to B, A is a compelling example of B, or A and B came together harmoniously in the life of a person we admire. We can examine any moral network map and ask whether it has desirable features as a whole. Improving our own map becomes a method of introspective self-improvement.
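One way to picture such a map concretely is to represent each connection as a labeled tie, where the label names the kind of link (implication, precedent, analogy, compelling example, and so on). The beliefs and links below are invented for illustration only.

```python
# A minimal sketch of a moral-network map with typed connections.
# The connection kinds follow the text; the beliefs are placeholders.
edges = [
    ("honesty is owed to friends", "honesty is owed to strangers", "precedent"),
    ("cruelty to animals is wrong", "cruelty to people is wrong", "analogy"),
    ("lying is wrong", "perjury is wrong", "implication"),
]

def neighbors(edges, belief):
    """All beliefs linked to `belief`, paired with the kind of link.
    Ties are undirected: a precedent connects both nodes."""
    linked = []
    for a, b, kind in edges:
        if a == belief:
            linked.append((b, kind))
        elif b == belief:
            linked.append((a, kind))
    return linked
```

Asking for the neighbors of any one belief then yields not just what it connects to but how, which is exactly the kind of structure one can inspect when assessing a whole map.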

Many works in the philosophical tradition push us to ask whether all the nodes in our own maps are mutually consistent and whether all the connections are logically tight. But those are only two virtues of a moral worldview, and they are easily overrated. A moral monster could have a consistent network in which all his concrete beliefs followed from a few premises by adamantine chains of logic. This evil person might hold horrible premises, or his assumptions might be wonderful (e.g., “Freedom for everyone!”), yet his whole system could be fanatically simplistic. Although moral networks vary in quality, and we should strive to improve them, coherence is not their main virtue.


putting facts, values, and strategies together: the case of the Human Development Index

Amartya Sen, the Nobel laureate economist and philosopher who spoke recently at Tufts, helped design the Human Development Index, which ranks all countries on a single list based on life expectancy at birth, years of schooling, and gross national income per capita. Sen seemed a bit chagrined that he is famous for this. The work took him only a few hours, he said. The formula was extremely simple. He called it a “vulgar index,” because it lumps together diverse variables in a potentially misleading way. He said that he agreed to do it mainly at the urging of his very old friend Mahbub-ul-Haq, who believed that an ordinal ranking for all nations would win media attention and help to undermine the tyranny of GNP growth, too often treated as the only measure of development. Mahbub-ul-Haq was correct, because the HDI gets global attention and has even been a central issue in some countries’ election campaigns. A set of separate indicators wouldn’t get much notice.
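For illustration only, here is a toy version of such a “vulgar index”: normalize each component between chosen goalposts and average the results. The goalposts and the simple arithmetic mean are my assumptions for the sketch; the official HDI methodology differs in detail and has changed over the years.

```python
# Toy "vulgar index": lump diverse variables into one number.
# The goalposts below are made-up placeholders, not the UN's figures.

def normalize(value, lo, hi):
    """Map a raw value onto a 0-1 scale between chosen goalposts."""
    return (value - lo) / (hi - lo)

def vulgar_index(life_expectancy, schooling_years, income):
    """Average three normalized components into a single score."""
    components = [
        normalize(life_expectancy, 20, 85),   # years
        normalize(schooling_years, 0, 18),    # years
        normalize(income, 100, 75000),        # per-capita income
    ]
    return sum(components) / len(components)
```

The point of the sketch is how little machinery is involved: the persuasive power of such an index comes from producing one ordinal ranking, not from any subtlety in the formula.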

In my own small way, I have tried to do something similar by creating the Index of National Civic Health (INCH) for the National Commission on Civic Renewal in the 1990s, which led to the Civic Health Index, which continues today. Our idea was to challenge the dominance of economic growth by adding a measure of civic engagement that could also be tracked. Like the HDI, it was a “vulgar” measure, designed for subversive purposes–or, to put our objective more positively, to provoke a good discussion.

One interpretation of such efforts would go like this: There are facts about the world. A full picture of the world would be very complicated, but we can strive for it. Once we have “the data,” we can choose what to emphasize and whether to use positive or negative adjectives to describe reality. That is a matter of imposing values, opinions, or preferences on the data. Finally, once we have an informed opinion about what to do, we can try to change the world by persuading other people to agree with us. Creating an index is an example of a rhetorical tactic that may prove persuasive. This, then, is the “positivistic” model:

facts > interpreted by opinions > transmitted by strategies > changes in the world

I assume Sen would reject this model. He knows that one can reason about values as well as data, so selecting and morally evaluating information is not just a matter of imposing subjective preferences or opinions on reality. For instance, it is right to see an increase in lifespan as a good thing (all else being equal). Further, what we call “data” is always imbued with norms. Education, for example, is a component of the HDI–but what is education? Years spent in school looks like a hard number, but no one believes it’s worth measuring unless it is a proxy for education, rightly understood. In fact, you can’t even tell what counts as “school” without some basic value-judgments. Defining education requires a moral theory of the human good.

Sen knows all of the above, and I interpret his model like this:

reasoning about facts and values (taken together) >> transmitted by strategies >> changes in the world

For instance, Sen reasoned for a long time about human development–a rich and complicated topic–before Mahbub-ul-Haq gave him a strategy to influence the public debate: generating a “vulgar index.” The index changed the world, at least modestly.

I would push the critique of positivism further. A moral theory is no good unless it has beneficial strategic consequences. We can announce that everyone should be equal, but unless we have a plan for making everyone more equal without producing a tyranny or chaos, that statement is worse than no theory at all. Further, the information and ideas (including moral ideas) with which we reason come from somewhere. They are produced by people and institutions. By communicating strategically, we influence the process that produces the data and arguments with which we reason. Thus I would connect all of the following with two-headed arrows: facts, values, and strategies. And I think people in influential positions, like Amartya Sen, should be held accountable for having good strategies, not just good values and data.

(see also Why political recommendations often disappoint: an argument for reflexive social science, Is all truth scientific truth?, Bent Flyvbjerg and social science as phronesis, A real alternative to ideal theory on philosophy and Abe Lincoln the surveyor, or the essential role of strategy)

my evolving thoughts on animal rights and welfare

We can wrong other sentient beings in three ways: by reducing their happiness or causing them to suffer (moving them down the happiness scale), by violating their rights, or by exploiting them, which means using them as mere means to our ends. I am not convinced that the second and third kinds of wrong apply to animals as they do to human beings. But we cause much suffering in animals–for example, through factory farming and the destruction of habitats–and we are obligated to address that. Reducing the consumption of meat is obligatory to the degree that it affects the supply of factory-farmed animals. But it is not the case that any killing or eating of animals is immoral. Those are my current views, and I will try to explain below.

I think about the relationship between happiness and rights in the following way. You should not cause me to suffer or reduce my happiness. But instantly killing me would not harm me in that way. I’ve had a happy life, and you would be freezing my current happiness score at its high net level. I would then suffer no more. Yet obviously you would have violated my right to life, which must be different from my interest in being happy. Why do I have a right to life? Mainly because I have plans that make sense of my actions. By suddenly killing me, you ruin my plans and make many of my past actions pointless. You also harm other people and violate their rights by removing me from their lives (or so I hope they would feel).

Now, if our beloved dog suddenly and painlessly died, his long-term plans would not be frustrated, and his recent actions would not be rendered meaningless. He has plans, such as stealing the treats out of the closet and snuggling with his human companions; but these plans are short-lived. His past treats and snuggling sessions would still represent successes even if his life suddenly ended. We would be sad, and you would violate our rights if you took him away. But I am not convinced his rights would be affected.

Likewise, our dog would be very sad to lose me and my family; suddenly killing us would cause him harm. But if this happened while he was with his dog-sitter, whom he loves, he would not be sad. The ties among animals, although profound, only matter morally insofar as they cause happiness or suffering. In contrast, human relationships give our actions purpose, and thus wrecking other people’s relationships can violate their rights even if they aren’t unhappy about it.

As for exploitation, this also violates other people’s rights because it frustrates their plans or substitutes our plans for theirs–even if it causes them no unhappiness or indeed makes them happier. I am not convinced that this concern applies to dogs and other mammals. Whether our dog is happy is the issue, not whether we treat him as an end in himself. If we train him to do the right thing by giving him treats, we view him as a means to our ends. That just makes him happy, and why not?

If the sole moral issue with animals is their happiness, we are in the realm that philosophers call “consequentialist,” where you add up all the benefits and subtract the harms. You don’t worry as much about bright lines. For example, eating less meat may enhance animal welfare if it reduces financial support for factory farming. But zero pounds of meat is just a number, like any other. Reducing your consumption from 50 lbs to 40 lbs is ten times more important than getting it down from 1 lb to zero. The same is not true of eating human flesh, which we regard as a matter of transgression and pollution. Even if cannibalism is merely a taboo, killing other people is truly wrong, and you’re a killer even if you have only one victim. I don’t think that’s the case with animals.
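That linear, consequentialist arithmetic can be stated in a few lines. The per-pound harm figure below is a made-up placeholder, and the whole thing is only a sketch of the reasoning, not a real welfare estimate.

```python
# Toy consequentialist accounting: welfare impact is linear in pounds of
# meat avoided, with no special moral weight attached to reaching zero.
# The harm rate is an arbitrary placeholder for illustration.
HARM_PER_LB = 1.0  # suffering averted per pound of factory-farmed meat avoided

def welfare_effect(lbs_before, lbs_after, offset_donation_units=0.0):
    """Total benefit: meat reduction plus any offset, on one linear scale."""
    return (lbs_before - lbs_after) * HARM_PER_LB + offset_donation_units
```

On this model, cutting from 50 lbs to 40 lbs does exactly ten times the good of cutting from 1 lb to zero, and an offset donation enters the same ledger as dietary change, which is the point taken up in the next paragraph.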

As long as our reasoning is consequentialist, offsets seem appropriate. It could be much better to eat a steak and contribute to an animal-welfare organization than to shun the meat but do nothing about public policy. Offsets and compensatory payments do not excuse violations of human rights, but they make sense with respect to animals (and nature more generally).