how people estimate their own life expectancies

How long people expect to live could be important for at least two reasons.

First, individuals may have information about their own circumstances that affects their predictions. Maybe they know that they are sick or in frequent danger from gun violence. In that case, their prediction of their own life expectancy might be a proxy for their social circumstances.

Second, their prediction may change some of their own cost/benefit calculations. There is, for example, no economic or other extrinsic reason to pursue education if you fear that your life is nearly over. Then again, you might procrastinate on getting more education if you think you have a very long time left to live.

Therefore I am interested in who makes optimistic or pessimistic estimates of their own life-expectancies. Of course, younger people will expect to live for more years, on average, than older people, and that doesn’t reflect optimism. So I adjusted for age by looking at the difference between how long people expect to live and how long the Social Security Administration (SSA) predicts that someone of their age and gender will live. If people give a higher number than the SSA, they are optimistic; a lower number reflects pessimism. Pessimism may be entirely warranted and reasonable, but it could still have negative effects on some important behaviors.
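The age and gender adjustment described above can be sketched in a few lines. This is a minimal illustration, not the actual analysis: the lookup table uses invented placeholder numbers, not real SSA actuarial figures.

```python
# "Optimism" is the gap between a respondent's own life-expectancy estimate
# and the SSA actuarial prediction for someone of their age and gender.
# The values below are placeholders for illustration only, NOT actual SSA data.
SSA_REMAINING_YEARS = {
    ("male", 25): 52.0,
    ("male", 65): 17.0,
    ("female", 25): 56.0,
    ("female", 65): 20.0,
}

def optimism(gender: str, age: int, own_estimate_years: float) -> float:
    """Positive = optimistic (expects more years than the SSA predicts);
    negative = pessimistic."""
    ssa_prediction = SSA_REMAINING_YEARS[(gender, age)]
    return own_estimate_years - ssa_prediction

# A 25-year-old man who expects 60 more years is optimistic by 8 years
# relative to the placeholder table:
print(optimism("male", 25, 60.0))  # 8.0
```

The same subtraction, applied to each survey respondent, yields the optimism scores plotted against the SSA predictions in the scatterplot.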

Data: Tufts Equity Research Group. Analysis: Peter Levine

The scatterplot shows that individuals’ predictions correlate with what the SSA would say, but there is a lot of variation. One young dude expects to live for another 110 years, whereas the SSA would give him 58 years. Several people expect to die very soon. What accounts for these differences?

Using the Tufts equity research survey, I looked first at the factors that are incorporated in prominent actuarial models–the things that we’re told actually lengthen or shorten our lives. If you go to a “longevity calculator” like this one, it will ask you to enter your own year of birth, gender, race, education level, body mass index, income, whether you are retired, and your habits of exercise, smoking, and drinking alcohol. It will tell you how many years you probably have left to live, based on your answers and a significant body of research.

Some of those factors affect optimism, but some do not. Reporting results from a regression model that are significant at p<.05:

  • Younger people are more optimistic, meaning not that they expect to live more years than older people but that their predictions for their lives are higher in comparison to the SSA’s predictions for them.
  • White people are more optimistic than people of color, and they have a basis for that.
  • Higher body mass index (BMI) correlates with pessimism.
  • Regular vigorous exercise predicts more optimism.
  • Education, gender, marital status, income, employment status, and drinking and smoking are not related to optimism, even though they are significant predictors of life expectancy in actuarial models.
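A stripped-down sketch of this kind of regression, run on synthetic data rather than the Tufts survey (the variables, coefficients, and sample are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for three of the predictors discussed above.
age = rng.uniform(18, 80, n)
bmi = rng.normal(27, 5, n)
exercise = rng.integers(0, 2, n).astype(float)  # 1 = regular vigorous exercise

# Simulated outcome: optimism falls with age and BMI, rises with exercise.
# These effect sizes are invented, not estimates from the survey.
optimism = 10 - 0.1 * age - 0.3 * bmi + 2.0 * exercise + rng.normal(0, 2, n)

# Ordinary least squares via least-squares fit (intercept column first).
X = np.column_stack([np.ones(n), age, bmi, exercise])
beta, *_ = np.linalg.lstsq(X, optimism, rcond=None)
print(dict(zip(["intercept", "age", "bmi", "exercise"], beta.round(2))))
```

With enough data, the fitted coefficients recover the simulated effects; the actual analysis would also report standard errors and p-values to apply the p<.05 cutoff.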

I also went looking for measures that might predict optimism even though they are not in the actuarial models that I see online. Some of them are quite significant. When added to a regression model with the variables listed above …

  • Frequency of attending religious services predicts optimism.
  • People with better overall health (per their self-report) are much more optimistic.
  • Stress about climate change comes close to predicting pessimism (p=.054).
  • Whether you own a gun and whether you planned to vote for Trump or Biden are not related to optimism.

To some extent, people seem to be making accurate predictions based on life circumstances. For instance, they are right to worry about high BMIs. They seem to be missing some important factors, such as smoking and drinking. The correlation with religious participation could reflect the beneficial results of participating in communities, or perhaps religion makes one optimistic about one’s own life, or perhaps people who think they have a long time to live are motivated to attend services.

See also: how predictable is the rest of your life?; the aspiration curve from youth to old age

how predictable is the rest of your life?

Last year, I had a chance to add this question to the Tufts national survey of equity:

Imagine that someone summarized your life, long after your lifetime. To what extent do you already know what that summary would say?

I was interested in how people’s life circumstances might lead them to answer that question differently, and what different answers might mean. For instance, you might be a successful young student for whom the unpredictability of the rest of your life is a sign of broad options and unlimited possibility. You might hate your current situation and feel depressed because you don’t believe it will ever change. You might feel precarious, so that your uncertainty about the remainder of your life is mostly stressful.

I hope to investigate how various subgroups answer the question. In the meantime, I ran a very simple regression to try to predict answers based on people’s demographics (age, race, gender), their perception of their own economic trajectory (Are you better off than your parents, will you be better off next year, and will your children be better off than you?), their sense of civic or political efficacy (Can you make a change in your community by working with others?), a measure of stability at work (How far in advance do you know how many hours you will be working per day?), and a measure of stress about climate change (to see whether worries about the climate were making some people uncertain about their lives).

The results are below. (A positive coefficient indicates less certainty.) I’ll summarize the results that are statistically significant (p<.005):

  • Certainty about the story of one’s whole life rises with age, but the coefficient is small. People tend to get just a tiny bit more certain with each passing year. I am more interested in the small relationship than its statistical significance.
  • Certainty rises with more education. At least if you put the whole sample together, it seems that people who have more education don’t feel greater uncertainty because their options are expanded. Rather, they feel more certain, perhaps because they are more secure or feel more control over their lives.
  • Certainty falls with civic efficacy. Apparently, if you think you can make a difference in the world around you, you are less confident that you know the whole story of your life. I hope this is because you believe that unexpected good options might open up.
  • Certainty is lower for people who see their own families on a positive economic trajectory. Maybe perceiving that you are getting wealthier makes you hope for unexpected futures. I find it interesting that economic optimism and education have the opposite relationship to this outcome.
  • The demographic measures, stability at work, and climate stress are not related to this outcome.

As always, I would welcome any thoughts about these very preliminary findings.

See also: youth, midlife & old-age as states of mind; Kieran Setiya on midlife: reviving philosophy as a way of life; to what extent do you already know the story of your life?; the aspiration curve from youth to old age

scholasticism in global context

In The Sound of Two Hands Clapping, Georges B.J. Dreyfus describes Tibetan monasteries as homes for “scholasticism,” using a word originally coined to describe a form of Catholic thought and practice that was most influential in the thirteenth and fourteenth centuries–later to be mocked and repudiated by both Protestants and Catholic Humanists. As Dreyfus notes, this word has also been used to describe specific traditions in Islam, and more recently in Hinduism and Buddhism. In his book, he explores strong parallels in Judaism.

It could be that scholasticism is an option within any heavily organized and sustained tradition of thought, whether we classify it as a religion or as something else.

One core component is a belief in argument–not just discussion and disagreement, but contentious, often competitive pro/con debate. Debates in Tibetan monasteries are high-pressure, competitive affairs conducted before active audiences. The same was true in medieval universities, where students paid the lecturers individually and enjoyed competitive showdowns. King and Arlig write that Abelard’s “quick wit, sharp tongue, perfect memory and boundless arrogance made him unbeatable in debate—he was said by supporter and detractor alike never to have lost an argument.” Dreyfus recalls the Jewish practice of havruta, learning in pairs, and emphasizes that these pairs debate each other.

In scholastic traditions, debate is not seen as a temporary necessity while we sort out important topics once and for all. Instead, it is a form of religious practice, comparable to meditation or ritual and something like an end in itself.

Martin Luther hated it for just that reason. Luther was a formidable debater, but he was trying to defeat heresy. He would have been deeply disappointed to learn that people are still debating theology centuries later. In contrast, I think that Tibetan monks work to keep the debate going. They see it as a good way of life.

Debating what is actually said in the most revered texts of any tradition is risky. While arguing about such texts, it is hard to avoid arguing with them. Therefore, an interesting pattern in scholasticism is a tendency to argue about the previous commentators. According to Dreyfus, “Tibetans emphasize less the inspirational words of the founder (the sutras) and more the study of their content as summarized by the great Indian treatises.” In theory, “the authority of the Indian commentaries is extremely important; practically, they are used in Tibetan education relatively rarely by teachers and students.” Instead, Tibetan monks memorize and debate Tibetan commentaries on the Indian summaries of the sutras that are attributed to the Buddha. My sense is that Catholic commentaries on Aristotle, Jewish Talmudic study, and Islamic jurisprudence have a similar flavor.

Again, this style drove Luther crazy. The truth was in the original Word of God (sola scriptura) not in pedantic commentaries. Erasmus opposed scholasticism for a different but compatible reason. For him, the ancient texts–including but not limited to the Bible–made better literature than the ponderous tomes of the scholastics. The classics had style and form. However, if you want to keep on debating forever, then it makes sense to focus on the commentaries and let them accumulate, layer upon layer.

Another common feature is a focus on law–not necessarily in the literal sense of state-enforced rules and punishments, but at least the question of what counts as the right action in all kinds of circumstances; call it casuistry, jurisprudence, or applied ethics. I’m guessing this is a fruitful focus because we can invent new ethical questions endlessly. Besides, if the real purpose of the debate is self-improvement, then good behavior makes an ideal topic.

Social stratification often emerges in these traditions, to the point where the scholastic authorities can be quasi-hereditary. Yet the traditions offer stories about talented teachers who came up from nowhere. That is the point of the opening story of the Platform Sutra, when an illiterate monk grasps the point that the educated ones have missed and becomes a great authority. (This is my example, not Dreyfus’, and it might not be germane.) Jean Gerson, who became the most senior scholar in Paris, was born as one of twelve children of pious peasants. Of course, meritocratic anecdotes serve as great justifications for hierarchical systems.

I share this generic definition of scholasticism without a value-judgment. I am not sure how much I admire these traditions or resonate with them. Presumably, they are best assessed as parts of much larger social orders that offer other options as well. In any case, it seems valuable to recognize a form of life that recurs so widely.

See also: Foucault’s spiritual exercises; does focusing philosophy on how to live broaden or narrow it?; Hannah Arendt and philosophy as a way of life; avoiding the labels of East and West; Owen Flanagan, The Bodhisattva’s Brain: Buddhism Naturalized; is everyone religious?; etc.

“Just teach the facts”

Apparently, at public meetings about social studies curricula, some people are saying: “Just teach facts.” Insofar as this call is coming from people incensed about Critical Race Theory in our K-12 schools, the irony is hard to ignore. CRT is very rarely, if ever, taught, and some of the ideas being attributed to it are factual. Yet I think there is also something else going on. Across many issues and in many political subcultures, it’s common to demand facts instead of opinions, as if the facts are all on our side and the other side is the opinionated one. I have encountered liberals who make versions of this argument, whether about COVID-19 or about history and politics.

In their 2002 book Stealth Democracy, John R. Hibbing and Elizabeth Theiss-Morse argue that about 70% of Americans are drawn to the idea that gives their book its title. These people basically see disagreement as a sign of corruption. It should not be necessary to disagree about matters of political or moral importance. People who express contrasting opinions must have bad motives or be sadly misguided. Since disagreement is rife, it would be “better if decisions were left up to nonelected, independent experts.”

In a great 2010 paper, Michael Neblo, Kevin Esterling, Ryan Kennedy, David Lazer, and Anand Sokhey showed that fewer people probably held the stealth democracy position than Hibbing and Theiss-Morse had found, and Americans were more tolerant of disagreement. However, Neblo and colleagues didn’t find zero support for stealth democracy, and I think it pops up fairly often.

It may reflect frustration about opinions that one strongly dislikes: Why can’t those misguided people just acknowledge the facts? But it may also reflect a deeper problem.

In an era when science (as popularly defined) has enormous prestige and purports to distinguish facts sharply from values, people don’t know what to make of value-laden disagreements. Justin McBrayer found this sign hanging in his son’s second-grade classroom:

Fact: Something that is true about a subject and can be tested or proven.

Opinion: What someone thinks, feels, or believes.

McBrayer attributes this distinction to the Common Core. I think the text of the Common Core is actually a bit subtler, and the sign reflects a widespread view. In any case, the distinction is untenable.

First of all, we must select which facts to investigate. We could teach George Washington’s achievements or slavery in colonial America–or neither, or both–but the facts themselves can’t tell us which of those things to study.

Second, the information we possess always reflects other people’s interests and concerns. American historians, for example, study marginalized and oppressed people more than they did a half century ago. This shift reflects ethical principles. Historians do not, and cannot, pursue all facts indiscriminately. You might dispute their emphasis, but then you’re arguing for different values, not rejecting their facts.

Third, it is very hard to identify a fact that is free of value-judgments or a value-judgment that does not encompass empirical beliefs about the way the world works.

Fourth, many of the most important facts about history are the opinions people held. Lincoln’s response to secession was his opinion, but attributing a position to him is either correct or incorrect. You cannot teach history without teaching–and spending a lot of your time teaching–opinions.

Perhaps most importantly, not all values are just opinions that people happen to hold. Valuing chocolate ice cream over vanilla ice cream is subjective, in this sense. Believing that genocide is evil is not. It isn’t a fact “that can be tested or proven,” but it also isn’t just something I happen to feel. It is something we are all obliged to feel.

Education inevitably involves choices about what to teach and how to talk about and interpret information. It inevitably conveys values and causes students to make judgments–whether as intended or in reaction to what the school wants them to think. Education is better when it helps students to develop political and intellectual virtues. But adults disagree about virtues, and our disagreements reflect our freedom, our diversity, and our nature as finite, embodied, fallible creatures. Therefore, disagreement about what and how to teach is inevitable, permanent, and a sign that free people care about the future. “Just teach the facts” is a call to stop this debate, when what we need is more and better.

See also: first year college students and moral relativism; science, democracy, and civic life; is science republican (with a little r)?; some thoughts on natural law; is all truth scientific truth?; etc.

public opinion on Critical Race Theory

The Economist/YouGov has released a survey of 1,500 U.S. adult citizens (fielded June 13–15, 2021) that asks some questions about Critical Race Theory (CRT). This is their summary.

This issue is deeply partisan and breaks in Republicans’ favor. Eighty-five percent of Republicans are very unfavorable to CRT, whereas 58% of Democrats are very favorable. But the public as a whole breaks against CRT, 58%-38%, due to Independents’ opposition (71% are very unfavorable) and Democrats’ somewhat mixed support.

Party ID appears more significant than demographics. For instance, a slight majority of Blacks (52%) are very favorable to CRT, but 16% are very unfavorable: a less positive balance than we see among Democrats. Women, college graduates, and young people are a bit more favorable than others, but those differences are small. (With access only to the printed report, I can’t run a regression to see how these variables may interact.)

Fifty-four percent of Americans say they have a very good idea what CRT is. The remainder are split between not being sure whether they know and being sure that they do not know what it is: 23% each. Thirty-five percent have heard nothing at all about CRT, 38% a little, and 26% a lot.

I think most of the people who say they know what CRT is are giving themselves too much credit. It names a rather specific academic movement that few of us understand. I would not claim that I have reliable knowledge of CRT (when knowledge = justified true belief) even though I study this general topic. But 54% of Americans are confident that they know what it is.

Although almost half of people are not sure what CRT is, 96% of respondents state a favorable or unfavorable view of it, and a total of 78% hold either a very favorable or a very unfavorable view. In other words, many people have opinions–even strong ones–about CRT even though they do not believe they know what it is and have heard nothing at all about it.

A mainstream position in political science these days is that Americans lack well-justified and autonomous opinions about most political issues. Achen and Bartels argue that even politically conscious citizens usually display “just a rather mechanical reflection of what their favorite group and party leaders have instructed them to think” (Achen and Bartels 2017, p. 12).

I dissent from this general view and have spent the past week on a methodological paper that aims to show that individuals hold more complex and individualized structures of opinions than one can glean from standard survey research. Yet the nature of public opinion depends on the issue, and especially on whether political professionals are exploiting it.

CRT is a great example of an issue on which public opinion reflects partisan heuristics and cues from leaders rather than careful thought. It’s bound to stay near the top of the national agenda, not only because it serves as a proxy for deeper issues related to race, but also because of the partisan politics. Republicans aren’t going to drop an issue that polls so well for them, but Democratic leaders–even if they wanted to–can’t strongly oppose CRT while 58% of their voters strongly favor it.