
taking a break

I was reading Arthur C. Brooks’ Lenten article, “What You Gain When You Give Things Up” without any premonition when suddenly I thought: I should give up blogging for a month or so. I was persuaded by Brooks’ argument that you shouldn’t just take breaks from things you regret or that cause you stress or other kinds of harm. From time to time, you should also stop doing things that you like and take pride in. He says, “sacrificing something for a short period effectively resets your senses to give you more pleasure from smaller servings of the things you love.” Even more strongly: “to enjoy [pleasures] optimally, we need time away from them.” (Note that he’s talking about activities, not people.)

I’ve been blogging here since 2002. Since about 2008, I’ve put almost all posts on Facebook and then also tweeted them. For more than a decade, rather obsessively, I posted every single work day. I still enjoy blogging and get satisfaction from it. I don’t think I need a break from it. But Brooks’ argument applies, and I’m going to give it a try.

I don’t plan to post again until early April–with one likely exception. In my world, the Educating for American Democracy (EAD) National Forum on March 2, at which we will release our EAD Roadmap, is a big deal. I may post about that or participate in debates that come out of it. It’s a free, open event, so please register to attend, and I’ll see you there.

the weirdness of the higher ed marketplace

Princeton has $2.86 million in endowment per student, which would yield about $140k per year in interest for every undergraduate. Princeton could charge no tuition if its goal were to maximize accessibility. On the other hand, Princeton received 32,000 applications last year, so it could easily fill a highly-qualified class with people who could pay its full tuition price–or much more than that–if the university’s goal were to maximize tuition income. Princeton could also double or triple its size or create a new campus in another state or country if it wanted to maximize both accessibility and revenue.
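
To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The endowment-per-student and yield figures come from the paragraph above; the roughly 5% payout rate is my assumption (a typical endowment spending rate), used only to show how the ~$140k figure follows.

```python
# Back-of-the-envelope check of the Princeton figures above.
# The ~4.9% payout rate is an assumed spending rate, not a figure from the post;
# it is simply the rate implied by ~$140k/year on $2.86M per student.
endowment_per_student = 2_860_000   # dollars per undergraduate, per the post
assumed_payout_rate = 0.049         # assumption: a typical endowment spending rate

annual_yield_per_student = endowment_per_student * assumed_payout_rate
print(f"Annual yield per student: ${annual_yield_per_student:,.0f}")
# -> about $140,000, i.e., enough to charge no tuition at all
```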

Of course, a university sees itself as doing more than providing education. It also generates research, arts and culture, public service, etc. Every dollar that it collects from a student can go to those other purposes. But any endowment money that it spends on those other purposes could have gone to financial aid.

Meanwhile, prospective students also want a bundle of things, including prestige. Prestige comes with selectivity and sticker price. Imagine you have a choice between paying the basic in-state tuition at UW-Madison ($10,725) and paying the same amount to Stanford after receiving financial aid. Stanford might look like a better deal, since its tuition sticker price is $55,473. It seems as though you are being given about $44k. However, it is unlikely that the marginal cost per student at Stanford is really more than five times higher than the cost at UW-Madison. More likely, Stanford knows it can charge a base tuition of $55k because many families will pay that much, and asking for less would be leaving cash on the table. Stanford with financial aid is arguably a better deal than UW-Madison’s full price, not because Stanford offers a better (or more costly) education in the classroom but because attending a college that is extremely selective and expensive looks better.
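
For readers who want the numbers spelled out, here is a minimal sketch of the comparison, using only the two sticker prices quoted above. The point is simply that the apparent “gift” and the roughly five-fold price ratio come from sticker prices, not from any claim about the underlying cost of instruction.

```python
# Rough arithmetic behind the Stanford vs. UW-Madison comparison above.
stanford_sticker = 55_473      # dollars, Stanford tuition sticker price (per the post)
uw_madison_in_state = 10_725   # dollars, basic in-state tuition (per the post)

apparent_gift = stanford_sticker - uw_madison_in_state
sticker_ratio = stanford_sticker / uw_madison_in_state

print(f"Apparent financial-aid 'gift': ${apparent_gift:,}")   # about $44,748
print(f"Sticker-price ratio: {sticker_ratio:.1f}x")           # about 5.2x
```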

In short, demand rises with price. The more other people pay for a given college, the more valuable it is to you, holding your own costs and the quality of the education constant.

It sounds as if college is a Veblen good, one that rises in perceived value the more it costs. But that logic does not exactly apply. If a college that regularly turns away 94% of its applicants decided to fill its seats with people who could pay full price, it would look less academically selective (as well as less diverse), and it would become less desirable. Many of the people who could pay to attend would now try to go elsewhere. In other words, if you pay full price, you are better off the more of your fellow students do not. Financial aid demonstrates that the college is selecting on criteria other than wealth. This is not typical of a Veblen good, which looks more desirable if only rich people buy it.

As Frank Bruni amusingly postulated, a college could win the prestige sweepstakes by deciding that no one was qualified for admission. Bruni imagines the day when Stanford finally admits zero applicants:

At first blush, Stanford’s decision would seem to jeopardize its fund-raising. The thousands of rejected applicants included hundreds of children of alumni who’d donated lavishly over the years. …

But over recent years, Stanford administrators noticed that as the school rejected more and more comers, it received bigger and bigger donations, its endowment rising in tandem with its exclusivity, its luster a magnet for Silicon Valley lucre.

In fact just 12 hours after the university’s rejection of all comers, an alumnus stepped forward with a financial gift prodigious enough for Stanford to begin construction on its long-planned Center for Social Justice, a first-ever collaboration of Renzo Piano and Santiago Calatrava, who also designed the pedestrian bridge that will connect it to the student napping meadows.

One of the anomalies that Bruni is pointing to is that people who attended a college in the past now underwrite it voluntarily with their donations. Usually, you pay in order to obtain a good. Here, you get the good and then you pay for others to get it–in part so that it can be withheld from as many applicants as possible, thus raising its value even more on your own resume.

These are strange incentives ….

a survey about technology for hybrid public meetings

I am working with Tufts colleagues who have backgrounds in civic engagement and engineering to investigate the use of technology in public meetings. Our goal is to develop software that can facilitate video coverage of meetings, enabling different levels of participation for remote participants. The software will be cheap and scalable–it will allow multiple people to use their phones to film the same meeting. Initially, the software will produce one automatically edited, live video stream of the meeting, which is much cheaper than hiring a professional videographer/editor. Over time, the software will incorporate other features: ways for people not physically at the meeting to speak, links to documents, simultaneous translation, and possibly even fact-checking.

We are interested in understanding how meetings are run (before and during COVID-19), your experiences (if any) with using technology (e.g., Zoom, Facebook) for meetings, and how future technologies can benefit people.

If you would be willing to take an 11-minute survey to inform this project, please click here: https://tufts.qualtrics.com/jfe/form/SV_3t97H86qQy95GpE

The survey is meant for people who have some role in organizing or staffing public/community meetings. You must be at least 18 years old to participate. It does not matter whether you are a US citizen, but you must be in the USA when you take the survey.

Thanks for your advice and ideas!

the ethical meanings of indigeneity

Quentin Gausset, Justin Kenrick, and Robert Gibb note that there are two separate conversations within their own discipline (anthropology) that involve different scholars and different families of examples.

In one conversation, the keyword is “indigenous,” and it applies either to “hunter-gatherers and nomads whose livelihood and culture is threatened by encroachment from their neighbours and state … or to groups who occupied a territory before it was forcibly settled by colonising powers and have struggled ever since to maintain some control over what was left of their resources.”

For instance, I am sitting on land where the Wampanoag are indigenous, a few miles from the offices of a federally recognized Wampanoag tribe.

In the other conversation, the keyword is “autochthonous” (born in the place) and it refers to large populations–often the majority in a given country–who “believe that their resources, culture or power are threatened by ‘migrants’.”

Anthropologists have had opposite reactions to these two families of cases:

[They] have tended to display sympathy and support for indigenous peoples (such as marginalised nomads) while often being highly critical of those advancing autochthonous claims (for example, extreme right-wing parties in European countries…). While indigenous movements are often idealised as innocent victims, or even as globally concerned and ecologically sound, autochthonous movements are, on the contrary, demonised and their agenda is reduced to ‘the exclusion of supposed “strangers” and the unmasking of “fake” autochthonous, who are often citizens of the same nation-state.’

As these authors note, a dictionary treats the two words as synonyms. Thus the existence of parallel discourses is noteworthy. We could add a third conversation about “irredentism,” a belief that a given nation should regain control over all of its former territory. Irredentist claims are usually seen as bellicose and nationalistic. Fascism is often autochthonous and irredentist. We don’t typically describe fascists as the “indigenous” populations of their countries–although they may see themselves that way.

Given the availability of these three terms–with overlapping meanings but different ethical valences–all kinds of intriguing uses emerge.

Erich Fox Tree observes that migrants to the USA from Central America increasingly identify as indigenous within the United States. Their claim is “somewhat irredentist, by asserting a super-territorial homeland” that spans the continent. However, in my view, they are expressing an understandable Latino/Native solidarity and opening possibilities for powerful coalitions within the USA.

According to Cheryl L. Daytec-Yañgot, “Tribal Peoples in Africa, such as the San or Maasai, self identify as indigenous to participate in indigenous discourses in the UN, even though their occupation of the region they inhabit does not predate those of other groups.” Meanwhile, “white Afrikaners from South Africa claimed indigeneity and attempted to forward their agenda to the UN Working Group on Indigenous Populations.”

Daytec-Yañgot notes that the discourse of indigeneity is “Eurocentric.” To put it a slightly different way, I would say that concerns about the oppression of indigenous minorities arise in settler countries–places, like the USA, Australia, or Argentina, where European conquerors came in very large numbers and numerically overwhelmed the original inhabitants. This model does not fit well in much of Asia and Africa, where imperialism was also devastating but the imperialists were limited in number and have mostly gone back home. It also doesn’t fit contexts like the Caribbean, where the majority population was transported against their will to replace the older inhabitants. In at least some important cases, the most threatened groups are minorities who migrated in and are accused of being interlopers. For instance, Hindu Nationalism often presents adherents of the dharmic religions as indigenous, and Muslims (as well as Christians) as the legacy of imperialism. But Muslims are now the threatened group in India.

There is nothing wrong with the mixed affective responses of anthropologists and others. It seems right to sympathize with indigenous groups in places like Massachusetts and to criticize autochthonous majorities who want migrants to “go home” (even though the words indigenous and autochthonous are synonyms). These judgments can be consistent with appropriate theories of justice, ones that take account of past injustices, current patterns of inequality and domination, the intrinsic value of cultures, the equal rights of all human beings, and ecological considerations.

It is a curiosity that we have two sets of vocabulary for different categories, but the ethical variation is not surprising. As always, the empirical study of human beings is inseparable from value-judgments, and the objective is to get our judgments (as well as our facts) right. Being explicit about the basis of our judgments helps: it allows us to test them in dialogue with other people. But explicitness is not sufficient: the point is to improve our judgments.

See also these posts about ethical judgments embedded in social science: when is cultural appropriation good or bad? and what is cultural appropriation?; social justice should not be a cliché; science, law, and microaggressions; morality in psychotherapy; insanity and evil: two paradigms; protecting authentic human interaction; is all truth scientific truth?; and don’t confuse bias and judgment.

what secular people can get out of theology

I’m teaching a course on the thought of Martin Luther King, who obtained two graduate degrees in theology and contributed substantially to that discipline. I happen not to be a Christian, and I am teaching in a predominantly secular context with students who have diverse faith commitments that they rarely bring explicitly into the conversation.

It’s easy to say why we should read theology if we want to understand the thought of MLK, which is a historian’s task. But why should we want to understand the theological aspects of his thought if we are in a secular context and our interests are politics and justice?

Of course, I welcome all responses to these questions from my students, including opposition to religion (although I have not actually heard that lately). These are my own, personal thoughts.

First, it is not self-evident how to distinguish religious beliefs from other beliefs. MLK believed that all human beings are created by God in God’s image. I believe that all human beings have infinite intrinsic moral worth. What is the basis for saying that he is religious and I am not?

Second, we all think with the materials we find at hand. We cannot view the world completely anew. But we can make better or worse selections and enhance (or spoil) the things we select. Christian thinkers will start with Christian materials. We can learn from how they use those ideas and add to them. It’s as if you don’t want to be a biologist, but you can still improve your thinking by learning some biology. (Or change the analogy to ceramics if a craft seems more apt than a science.)

Third, it is illuminating to think in a hypothetical vein. Two Christian thinkers are on my mind this week. One is Howard Washington Thurman (1899 – 1981), whom I assigned. The other is Fox News pundit Laura Ingraham, who shows up in my social media feed denouncing homosexuality on biblical grounds.

Thurman notes that God could have expressed the divine in any form–for instance, as a Roman. “But the fact is he did not.” God chose instead that the only-begotten Son would be a poor Jew “in a sad and desolate time for the people”; “a member of a minority group in the midst of a larger dominant and controlling group”; and a non-citizen, someone lacking “that quiet sense of security which comes from knowing you belong and the general climate of confidence which it inspires. If a Roman soldier pushed Jesus into the ditch, he could not appeal to Caesar; he would be just another Jew in the ditch.” The Son of God was then tortured to death for nonviolently resisting the state.

I happen not to be able to think, “So it was,” but I can think: “If there is/were a God, this is how that God would act and feel.” And I can gain ethical insight–as well as inspiration–from this reasoning. At the same time, I am sure that if there is/were a God, God would not command and act the way that Laura Ingraham assumes.

I happen not to agree with the whole story that either Thurman or Ingraham believes. However, when I move into a hypothetical mode, I am confident that Thurman is right and that Ingraham is badly wrong. And making this distinction feels like a valid way to explore ethical and political issues.

Moving further away from specific authors, I can find value (and pitfalls) in each of the great world religions without happening to agree with some of the core metaphysics of any of them. For instance, I can compare Christianity to Judaism or Islam, or to Buddhism and Hinduism, without ever leaving the hypothetical level.

Source: Howard Thurman, Jesus and the Disinherited (1949), pp. 17, 18, 33. See also: Martin Luther King’s philosophy of time; Martin Luther King as a philosopher; Martin Luther and Martin Luther King; notes on the metaphysics of Gandhi and King; and Jesus was a person of color.