Category Archives: epistemic networks

we treat facts and values alike when we reason

Years ago, Justin McBrayer found this sign hanging in his son’s second-grade classroom:

Opinion: What someone thinks, feels, or believes.

Fact: Something that is true about a subject and can be tested or proven.

This distinction is embedded in significant aspects of our culture and society. For example, science aspires to be about facts, not opinions. And values are often assigned to the category of opinions. But this distinction doesn’t describe the way people actually reason.

After you utter any standard sentence, another person can ask two questions: “Why did you say that?” And, “What does it imply?” Any standard sentence has premises that entail it and consequences that it, in turn, implies. Any sentence is in the middle of a network of related thoughts, and you can be asked to make those relationships explicit (Brandom 2000).

Imagine a rooster who wakes you up by crowing at dawn, and a parent who wakes her child in time for school. Both have brains, perceptions, and desires. But only the parent shares a language with another party. As a result, the child can ask, “Why are we waking up now?” or “What do I have to do next?” These are upstream and downstream implications of the sentence: “Wake up!”

Upon receiving an answer, the child can ask further questions. “Why do I have to go to school?” “Why is learning good?” The parent’s patience for this kind of discussion is bound to be finite, but the very structure of language implies that it could go on virtually forever.

The same process works for sentences that are about facts and for those that are more about values. A child asks, “Why do I have to go to school?” The answer, “Because it is 8 am,” is factual. The answer, “Because it’s important to learn” involves values. Either response can, in turn, prompt further “why” questions that can be answered.

The positivist assumption that values are opinions rather than facts suggests that values are conversationally inert, connected to the speaker but not to any other sentences. When you say that you value something, a positivist understands this as a fact about yourself, not as a claim that you could justify. However, we do justify value-claims. We state additional sentences about what implies our values or what our values imply.

In real life, people sooner or later choose to halt the exchange of reasons. “Why do you think that?” “I saw it with my own eyes.” “Why do you believe your eyes?” At this point, most people will opt out of the conversation, and I don’t blame them.

Note, however, that the respondent probably could give reasons other than “I saw it with my eyes.” Statements typically have multiple premises, not just one. Further, a person could explain why we typically believe what we see. There is much to be said about eyes, mental processes connected to vision, and so on. I realize that discussing such matters is for specialists, and most people should not bother going into them. But the point is that the network of reasons could almost always be extended further, if one chose.

And the same is true for value-claims. “Why do you support that?” “Because it’s fair.” “What makes it fair?” “It treats everyone equally.” “Why do you favor equality?” At this point, many people may say, “I just do,” which is rather like saying, “I saw it with my own eyes.” But again, the conversation could continue. There is a great deal to be said about premises that imply the value of equality and consequences that equality entails if it’s defined in various specific ways. By spelling out more of this network, we make ourselves accountable for our positions.

Drawing a sharp distinction between opinions/values and facts would artificially prevent us from connecting our value-laden claims to other sentences, which we naturally–and rightly–do.

Source: Robert R. Brandom, Articulating Reasons: An Introduction to Inferentialism (Harvard, 2000). See also: listeners, not speakers, are the main reasoners; how intuitions relate to reasons: a social approach; we are for social justice, but what is it?; making our models explicit; introducing Habermas; and “Just teach the facts.”

[Additional note, Oct 18: David Hume originated the fact-value distinction. For him, reasoning was essentially about perceiving things. The mind formed representations, especially visualizations. As Hilary Putnam writes (p. 15), Hume had a “pictorial semantics.” But you can’t see values. Nor can you see the self or causation. If we use visual metaphors–lenses, paintings, or images–for the mind, then the mind seems unable to reason about values.

Nowadays, we think of reasoning mainly in terms of symbols that are combined and manipulated. The reigning metaphor is not a lens but a computer. We absolutely can compute sentences that include values. It’s true that a mind that manipulates and combines symbols must ultimately touch the world beyond itself, and there remains a role for sensation. Computers have input devices. But the connection between a mind and the world cannot be a matter of separate and distinct representations, since many things that we reason about–not only values, but also neutrinos, diseases, and economies–do not appear to our eyes. Source: Hilary Putnam (2002) The collapse of the fact/value dichotomy and other essays. Harvard.]

a collective model of the ethics of AI in higher education

Hannah Cox, James Fisher, and I have published a short piece in an outlet called eCampus News. The whole text is here, and I’ll paste the beginning below:

AI is difficult to understand, and its future is even harder to predict. Whenever we face complex and uncertain change, we need mental models to make preliminary sense of what is happening.

So far, many of the models that people are using for AI are metaphors, referring to things that we understand better, such as talking birds, the printing press, a monster, conventional corporations, or the Industrial Revolution. Such metaphors are really shorthand for elaborate models that incorporate factual assumptions, predictions, and value-judgments. No one can be sure which model is wisest, but we should be forming explicit models so that we can share them with other people, test them against new information, and revise them accordingly.

“Forming models” may not be exactly how a group of Tufts undergraduates understood their task when they chose to hold discussions of AI in education, but they certainly believed that they should form and exchange ideas about this topic. For an hour, these students considered the implications of using AI as a research and educational tool, academic dishonesty, big tech companies, attempts to regulate AI, and related issues. They allowed us to observe and record their discussion, and we derived a visual model from what they said.

We present this model [see above] as a starting point for anyone else’s reflections on AI in education. The Tufts students are not necessarily representative of college students in general, nor are they exceptionally expert on AI. But they are thoughtful people active in higher education who can help others to enter a critical conversation.

Our method for deriving a diagram from their discussion is unusual and requires an explanation. In almost every comment that a student made, at least two ideas were linked together. For instance, one student said: “If not regulated correctly, AI tools might lead students to abuse the technology in dishonest ways.” We interpret that comment as a link between two ideas: lack of regulation and academic dishonesty. When the three of us analyzed their whole conversation, we found 32 such ideas and 175 connections among them.

The graphic shows the 12 ideas that were most commonly mentioned and linked to others. The size of each dot reflects the number of times that idea was linked to another. The direction of each arrow indicates which factor caused or explained the other.
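For readers curious about the mechanics, here is a minimal sketch of this coding step in Python using the networkx library. The idea labels and links below are illustrative placeholders that I have invented for the example; they are not the students’ actual codes.

```python
import networkx as nx

# Each tuple is one coded link from a comment: (cause/premise, effect/consequence).
# These labels are hypothetical stand-ins, not codes from the actual transcript.
coded_links = [
    ("lack of regulation", "academic dishonesty"),
    ("lack of regulation", "distrust of big tech"),
    ("AI as research tool", "academic dishonesty"),
    ("AI as research tool", "better learning"),
    ("distrust of big tech", "attempts to regulate AI"),
]

G = nx.DiGraph()
for source, target in coded_links:
    # If the same pair of ideas is linked repeatedly, increment the edge weight.
    if G.has_edge(source, target):
        G[source][target]["weight"] += 1
    else:
        G.add_edge(source, target, weight=1)

# Dot size in the diagram reflects how often each idea was linked to another,
# which here is simply the node's total degree (links in plus links out).
sizes = {node: G.degree(node) for node in G.nodes}
```

From a structure like this, the full analysis would tally all 32 ideas and 175 connections, then keep only the most-linked nodes for display.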

The rest of the published article explores the content and meaning of the diagram a bit.

I am interested in the methodology that we employed here, for two reasons.

First, it’s a form of qualitative research–drawing on Epistemic Network Analysis (ENA) and related methods. As such, it yields a representation of a body of text and a description of what the participants said.

Second, it’s a way for a group to co-create a shared framework for understanding any issue. The graphic doesn’t represent their agreement but rather a common space for disagreement and dialogue. As such, it resembles forms of participatory modeling (Voinov et al, 2018). These techniques can be practically useful for groups that discuss what to do.

Our method was not dramatically innovative, but we did something a bit novel by coding ideas as nodes and the relationships between pairs of ideas as links.

Source: Alexey Voinov et al., “Tools and methods in participatory modeling: Selecting the right tool for the job,” Environmental Modelling & Software, vol. 109 (2018), pp. 232-255. See also: what I would advise students about ChatGPT; People are not Points in Space; different kinds of social models; social education as learning to improve models

People are not Points in Space

Newly published: Levine, P. (2024). People are not Points in Space: Network Models of Beliefs and Discussions. Critical Review, 1–27. https://doi.org/10.1080/08913811.2024.2344994 (Or a free pre-print version)

Abstract:

Metaphors of positions, spectrums, perspectives, viewpoints, and polarization reflect the same model, which treats beliefs—and the people who hold them—as points in space. This model is deeply rooted in quantitative research methods and influential traditions of Continental philosophy, and it is evident in some qualitative research. It can suggest that deliberation is difficult and rare because many people are located far apart ideologically, and their respective positions can be explained as dependent variables of factors like personality, partisanship, and demographics. An alternative model treats a given person’s beliefs as connected by reasons to form networks. People disclose the connections among their respective beliefs when they discuss issues. This model offers insights about specific cases, such as discussions conducted on two US college campuses, which are represented here as belief-networks. The model also supports a more optimistic view of the public’s capacity to deliberate.

An Association as a Belief Network and Social Network

This is a paper that I presented at the Midwest Political Science Association on April 6, 2024. I hope to reproduce this study with another organization before publishing the results as a comparison. I am open to investigating groups that you may be involved with–a Rotary Club like the one in this study, a religious congregation, or something else. Please contact me if you are interested in exploring such a study.

Abstract

A social network is composed of individuals who may have various relationships with one another. Each member of such a network may hold relevant beliefs and may connect each belief to other beliefs. A connection between two beliefs is a reason. Each member’s beliefs and reasons form a more-or-less connected network. As members of a group interact, they share some of their respective beliefs and reasons with peers and form a belief-network that represents their common view. However, either the social network or the belief network can be disconnected if the group is divided.

This study mapped both the social network and the belief-network of a Rotary Club in the US Midwest. The Club’s leadership found the results useful for diagnostic and planning purposes. This study also piloted a methodology that may be useful for social scientists who analyze organizations and associations of various kinds.

Beliefs and connections among beliefs in a club

An Association as a Belief Network and Social Network

I will present a paper entitled “An Association as a Belief Network and Social Network” at next week’s Midwest Political Science Association meeting (remotely). This is the paper.


Two illustrative graphs …

Below is the social network of the organization. A link indicates that someone named another person as a significant influence. The size of each dot reflects the number of people who named that individual. The network is connected, not balkanized. However, there are definitely some insiders, who have lots of connections, and a periphery.
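The properties described above (dot size as the number of nominations received, and connectedness despite an insider/periphery structure) can be computed directly. Here is a minimal sketch, again assuming Python and networkx; the member names are hypothetical placeholders, not actual club members.

```python
import networkx as nx

# Each pair means the first member named the second as a significant influence.
# These names are invented for illustration only.
nominations = [
    ("Ann", "Dana"),
    ("Ben", "Dana"),
    ("Cal", "Dana"),
    ("Dana", "Ben"),
    ("Eve", "Ann"),
]

G = nx.DiGraph(nominations)

# Size of each dot: how many people named that individual (in-degree).
dot_sizes = dict(G.in_degree())

# "Connected, not balkanized": ignoring edge direction, does every member
# belong to a single component?
connected = nx.is_weakly_connected(G)
```

In this toy example, “Dana” is an insider (named by three members) while “Eve” sits on the periphery, yet the network forms one connected component.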

The belief-network is shown above this post. The nodes are beliefs held by members of the group. A link indicates that some members connect one belief to another as a reason, e.g., “I appreciate friendships in the club” and therefore, “I enjoy the meetings” (or vice-versa). Nodes with more connections are larger and placed nearer the center.

One takeaway is that members disagree about certain matters, such as the state of the local economy, but those contested beliefs do not serve as reasons for other beliefs, which prevents the group from fragmenting.

I would be interested in replicating this method with other organizations. I can share practical takeaways with a group while learning more from the additional case.

See also: a method for analyzing organizations