In a random network, like (a) below, each node has an equal chance of being linked to any other, and the number of links per node will cluster around the average in a roughly bell-shaped distribution. However, in most real networks, like (b), a few nodes hog most of the links; the distribution is skewed. A rule of thumb, not a law of nature, is that 20% of the nodes draw 80% of the links in a naturally occurring network.
Two phenomena, among others, explain the tendency for links to cluster. First, some nodes are simply more important than others for reasons independent of the network’s structure. For instance, if an asteroid hit South Dakota, web pages devoted to asteroids would get more incoming links because the topic would be timely.
Second, the rich get richer. A node that already has a lot of links is easy to find and provides short pathways to other nodes, so there are reasons to link to it. In the case of websites, that phenomenon may be artificially exaggerated by Google, which uses the number of links to rank search results. But in a naturally occurring human network, it is smart to connect to people who are already well-connected. Regardless of their intrinsic merits, they draw more attention because they have more attention.
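The rich-get-richer rule, often called preferential attachment, is easy to simulate. The sketch below (the network sizes, link counts, and seed are arbitrary choices of mine, not from the text) grows one network by attaching each new node to existing nodes in proportion to their current links, builds a random network of the same size for comparison, and then measures how much of the total linkage the best-connected 20% of nodes capture in each.

```python
import random
from collections import Counter

def random_graph(n, m):
    """Random network: each of m links joins two nodes chosen uniformly."""
    degree = Counter()
    for _ in range(m):
        a, b = random.sample(range(n), 2)
        degree[a] += 1
        degree[b] += 1
    return degree

def preferential_graph(n, links_per_node=2):
    """Rich-get-richer growth: each new node attaches to existing nodes
    with probability proportional to their current number of links."""
    targets = [0, 1]                    # each node appears once per link it has,
    degree = Counter({0: 1, 1: 1})      # so uniform draws from this list are
    for node in range(2, n):            # automatically degree-weighted
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(random.choice(targets))
        for t in chosen:
            degree[t] += 1
            degree[node] += 1
            targets.extend([t, node])
    return degree

def share_of_top_20pct(degree, n):
    """Fraction of all links held by the best-connected fifth of the nodes."""
    ranked = sorted(degree.values(), reverse=True)
    return sum(ranked[: n // 5]) / sum(ranked)

random.seed(0)
n = 1000
rich = preferential_graph(n)
flat = random_graph(n, m=2 * n)

print(f"random graph:       top 20% hold {share_of_top_20pct(flat, n):.0%} of links")
print(f"preferential graph: top 20% hold {share_of_top_20pct(rich, n):.0%} of links")
```

In runs like this, the preferential network's top fifth holds a visibly larger share of the links than the random network's, echoing the 80/20 rule of thumb; the exact figures depend on the parameters.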
Now, consider one’s worldview as a set of ideas connected in various ways to each other. This network changes constantly. It tends to grow as you learn new things. You also forget or reject things that you once knew, but growth is the main tendency, at least for the first 25 years of life. Every time you are confronted with a new idea (from other people or direct observation), you will be inclined to connect it to existing ideas.
The two phenomena introduced earlier will encourage you to link the new idea to nodes that are already well-linked.
First, you will believe (rightly or wrongly) that some of your existing ideas are important, and you will link your new ideas to those. For instance, if you believe in God, that’s pretty important, and you will be inclined to ask of any new idea whether it connects to God. Perhaps it is evidence of His will or a sign of His glory.
Also, you will prefer ideas that you have already used to support other ideas. In network terms, you will look first to your high-traffic nodes as potential links to the new nodes that you are bringing into your map. They are more salient, and they allow you to connect the new idea to many old ideas.
This tendency to cluster has its dangers. It can be a cognitive bias, limitation, or “heuristic” in the bad sense of that word. It locks people into their current views. A fancy term for one relevant form of bias is asymmetric Bayesianism. Whenever a new idea or observation seems relevant to one of your favorite beliefs, you connect them and make the original belief even more central to your network. Whenever a new idea conflicts with an existing belief, you find reasons to shunt it off to the edge of the network. All your experiences reinforce your original idea.
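One toy way to see why asymmetric updating is self-reinforcing: a fair Bayesian who weighs confirming and conflicting evidence equally ends up where the evidence warrants, while an updater who discounts conflicting evidence drifts toward certainty even when the evidence is perfectly balanced. The likelihood numbers below are invented for illustration.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Standard Bayes rule for a single binary hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

def run(observations, discount=1.0):
    """Update a belief on a stream of evidence.

    discount=1.0 treats conflicting evidence at full strength (fair updating);
    smaller values pull its likelihoods toward 0.5, i.e. toward looking
    uninformative -- the asymmetric habit described in the text.
    """
    belief = 0.5                        # start undecided
    for supports in observations:
        if supports:
            lt, lf = 0.7, 0.3           # confirming datum, absorbed fully
        else:
            lt = 0.3 * discount + 0.5 * (1 - discount)
            lf = 0.7 * discount + 0.5 * (1 - discount)
        belief = bayes_update(belief, lt, lf)
    return belief

evidence = [True, False] * 10           # perfectly balanced evidence

print(round(run(evidence, discount=1.0), 3))   # fair updater stays at 0.5
print(round(run(evidence, discount=0.3), 3))   # asymmetric updater drifts far above 0.5
```

The asymmetric updater ends up nearly certain of its favorite belief despite seeing exactly as much evidence against it as for it, which is the "all your experiences reinforce your original idea" effect in miniature.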
But although clustering has dangers, I would defend it to a degree. For one thing, some ideas deserve more links than others. Whether a given moral belief deserves a lot of links is an important question. For example, it is true and bad that millions of children are hungry. But it is a different question whether that idea is linked to enough of our other beliefs. Their hunger should be relevant to many other questions, such as what I do with my own surplus income. To take a different example: the Holocaust was unthinkably bad. And it is relevant to the existence of the state of Israel. But I believe that the Holocaust is connected too often to other issues involving the contemporary Middle East, such as Israel’s relationship with Palestinians. It is not that each link is false or illegitimate, but the network is centered in the wrong place.
So your moral network should skew in favor of the right things. That is not question-begging; rather, it raises an important question: which beliefs should be central nodes?
Your moral network will also skew because of the rich-get-richer principle: ideas that you have already linked to many other ideas will attract new connections because of their prominence. I would like to challenge the premise that this is pure bias, a mere limitation.
If morality could be truly rational, then one of its hallmarks would be a lack of bias toward existing beliefs. All your ideas would also be mutually consistent. And there would be a reason for everything. You would not just believe P; you would always be able to give a reason for P.
I am afraid that I see morality differently from that. I think it is a tissue of beliefs and commitments that is relatively hard to construct and sustain. Each piece is easy to reject if we ask “Why?” But if we tear away at the tissue, we have nothing keeping us from just doing what we want. Morality is “faith-based,” whether the faith is in God or in the equality of human beings (a moral assumption not at all suggested by science).
Morality is also a means of building up a common worldview with other people. It is “socially constructed,” and constructing it allows us to live together, not merely in parallel. Again, if we ask “Why?” about each component of morality, we will just weaken the common tissue that we have spun together.
This does not mean that any moral beliefs will do, or that we needn’t be concerned about justification, consistency, logic, and other hallmarks of rationality. An inconsistency should be a source of concern and reflection. Automatically returning to a few well-traveled ideas is not satisfactory; we should strive to broaden our minds. On the other hand, we know that strongly clustered networks are robust. They work better and last longer than random-looking networks. Thus, even if two people endorse the same list of moral beliefs, I would wager that the one whose beliefs cluster will act better. I hope that my moral worldview does not center on false or bad nodes, but I do seek beliefs to which I can frequently turn. Those centers of my network map define my character or moral identity.