Category Archives: philosophy

philosophy of boredom

This article is in production and should appear soon: Levine P (2023) Boredom at the border of philosophy: conceptual and ethical issues. Frontiers in Sociology 8:1178053. doi: 10.3389/fsoc.2023.1178053.

(And yes, I anticipate and appreciate jokes about writing yet another boring article–this time, about boredom.)

Abstract:

Boredom is a topic in philosophy. Philosophers have offered close descriptions of the experience of boredom that should inform measurement and analysis of empirical results. Notable historical authors include Seneca, Martin Heidegger, and Theodor Adorno; current philosophers have also contributed to the literature. Philosophical accounts differ in significant ways, because each theory of boredom is embedded in a broader understanding of institutions, ethics, and social justice. Empirical research and interventions to combat boredom should be conscious of those frameworks. Philosophy can also inform responses to normative questions, such as whether and when boredom is bad and whether the solution to boredom should involve changing the institutions that are perceived as boring, the ways that these institutions present themselves, or individuals’ attitudes and choices.

An excerpt:

It is worth asking whether boredom is intrinsically undesirable or wrong, not merely linked to bad outcomes (or good ones, such as realizing that one’s current activity is meaningless). One reason to ask this question is existential: we should investigate how to live well as individuals. Are we obliged not to be bored? Another reason is more pragmatic. If being bored is wrong, we might look for effective ways to express that fact, which might influence people’s behaviors. For instance, children are often scolded for being bored. If being bored is not wrong, then we shouldn’t—and probably cannot—change behavior by telling people that it’s wrong to be bored. Relatedly, when is it a valid critique of an organization or institution to claim that it causes boredom or is boring? Might it be necessary and appropriate for some institutions … to be boring?

I have not done my own original work on this topic. I wrote this piece because I was asked to. I tried to review the literature, and a peer reviewer helped me improve that overview substantially.

I especially appreciate extensive and persuasive work by Andreas Elpidorou. He strikes me as an example of a positive trend in recent academic philosophy, also exemplified by Amia Srinivasan and others of their generation. These younger philosophers (whom I do not know personally) address important and thorny questions, such as whether and when it’s OK to be bored and whether one has a right to sex under various circumstances. They are deeply immersed in relevant social science. They also read widely in literature and philosophy and find insights in unexpected places. Srinivasan likes nineteenth-century utopian socialists and feminists; Elpidorou is an analytical philosopher who can also offer insightful close readings of Heidegger.

Maybe it was a bias on my part–or the result of being taught by specific professors–but I didn’t believe that these combinations were possible while I pursued my BA and doctorate in philosophy. In those days, analytical moral and political philosophers paid some attention to macroeconomic theory but otherwise tended not to notice current social science. Certainly, they didn’t address details of measurement and method, as Elpidorou does. Continental moral and political philosophers wrote about the past, but they understood history very abstractly, and their main sources were canonical classics. Most philosophers addressed either the design of overall political and economic systems or else individual dilemmas, such as whether to have an abortion (or which people to kill with an out-of-control trolley).

To me, important issues almost always combine complex and unresolved empirical questions with several–often conflicting–normative principles. Specific problems cannot be abstracted from other issues, both individual and social. Causes and consequences vary, depending on circumstances and chance; they don’t follow universal laws.

My interest in the empirical aspects of certain topics, such as civic education and campaign finance, gradually drew me from philosophy into political science. I am now a full professor of the latter discipline, also regularly involved with the American Political Science Association. However, my original training often reminds me that normative and conceptual issues are relevant and that positivist social science cannot stand alone.

Perhaps the main lesson you learn by studying philosophy is that it’s possible to offer rigorous answers to normative questions (such as whether an individual or an institution should change when that individual is bored), and not merely to express opinions about these matters. I don’t have exciting answers of my own to specific questions about boredom, but I have reviewed current philosophers who do.

Learning to be a social scientist means not only gaining proficiency with the kinds of methods and techniques that can be described in textbooks, but also knowing how to pitch a proposed study so that it attracts funding, how to navigate a specific IRB board, how to find collaborators and share work and credit with them, and what currently interests relevant specialists. These highly practical skills are essential but hard to learn in a classroom.

If I could convey advice to my 20-year-old self, I might suggest switching to political science in order to gain a more systematic and rigorous training in the empirical methods and practical know-how that I have learned–incompletely and slowly–during decades on the job. But if I were 20 now, I might stick with philosophy, seeing that it is again possible to combine normative analysis, empirical research, and insights from diverse historical sources to address a wide range of vital human problems.

See also: analytical moral philosophy as a way of life; six types of claim: descriptive, causal, conceptual, classificatory, interpretive, and normative; is all truth scientific truth? etc.

when does a narrower range of opinions reflect learning?

John Stuart Mill’s On Liberty is the classic argument that all views should be freely expressed–by people who sincerely hold them–because unfettered debate contributes to public reasoning and learning. For Mill, controversy is good. However, he acknowledges a complication:

The cessation, on one question after another, of serious controversy, is one of the necessary incidents of the consolidation of opinion; a consolidation as salutary in the case of true opinions, as it is dangerous and noxious when the opinions are erroneous (Mill 1859/2011, 81).

In other words, as people reason together, they may discard or marginalize some views, leaving a narrower range to be considered. Whether such narrowing is desirable depends on whether the range of views that remains is (to quote Mill) “true.” His invocation of truth–as opposed to the procedural value of free speech–creates some complications for Mill’s philosophical position. But the challenge he poses is highly relevant to our current debates about speech in academia.

I think one influential view is that discussion is mostly the expression of beliefs or opinions, and more of that is better. When the range of opinions in a particular context becomes narrow, this can indicate a lack of freedom and diversity. For instance, the liberal/progressive tilt in some reaches of academia might represent a lack of viewpoint diversity.

A different prevalent view is that inquiry is meant to resolve issues, and therefore, the existence of multiple opinions about the same topic indicates a deficit. It means that an intellectual problem has not yet been resolved. To be sure, the pursuit of knowledge is permanent–disagreement is always to be expected–but we should generally celebrate when any given thesis achieves consensus.

Relatedly, some people see college as something like a debate club or editorial page, in which the main activity is expressing diverse opinions. Others see it as more like a laboratory, which is mainly a place for applying rigorous methods to get answers. (Of course, it could be a bit of both, or something entirely different.)

In 2015, we organized simultaneous student discussions of the same issue–the causes of health disparities–at Kansas State University and Tufts University. The results are here. At Kansas State, students discussed–and disagreed about–whether structural issues like race and class and/or personal behavioral choices explain health disparities. At Tufts, students quickly rejected the behavioral explanations and spent their time on the structural ones. Our graphic representation of the discussions shows a broader conversation at K-State and what Mill would call a “consolidated” one at Tufts.

A complication is that Tufts students happened to hear a professional lecture about the structural causes of health disparities before they discussed the issue, and we didn’t mirror that experience at K-State. Some Tufts students explicitly cited this lecture when rejecting individual/behavioral explanations of health disparities in their discussion.

Here are two competing reactions to this experiment.

First, Kansas State students demonstrated more ideological diversity and had a better conversation than the one at Tufts because it was broader. They also explicitly considered a claim that is prominently made in public–that individuals are responsible for their own poor health. Debating that thesis would prepare them for public engagement, regardless of where they stand on the issue. The Tufts conversation, on the other hand, was constrained, possibly due to the excessive influence of professors who hold contentious views of their own. The Tufts classroom was in a “bubble.”

Alternatively, the Tufts students happened to have a better opportunity to learn than their K-State peers because they heard an expert share the current state of research, and they chose to reject certain views as erroneous. It’s not that they were better citizens or that they know more (in general) than their counterparts at KSU, but simply that their discussion of this topic was better informed. Insofar as the lecture on public health found a receptive audience in the Tufts classroom, it was because these students had previously absorbed valid lessons about structural inequality from other sources.

I am not sure how to adjudicate these interpretations without independently evaluating the thesis that health disparities are caused by structural factors. If that thesis is true, then the narrowing reflected at Tufts is “salutary.” If it is false, then the narrowing is “dangerous and noxious.”

I don’t think it’s satisfactory to say that we can never tell, because then we can never believe that anything is true. But it can be hard to be sure …

See also: modeling a political discussion; “Analyzing Political Opinions and Discussions as Networks of Ideas“; right and left on campus today; academic freedom for individuals and for groups; marginalizing odious views: a strategy; vaccination, masking, political polarization, and the authority of science etc.

the difference between human and artificial intelligence: relationships

A large-language model (LLM) like ChatGPT works by identifying trends and patterns in huge bodies of text previously generated by human beings.

For instance, we are currently staying in Cornwall. If I ask ChatGPT what I should see around here, it suggests St Ives, Land’s End, St Michael’s Mount, and seven other highlights. It derives these ideas from frequent mentions in relevant texts. The phrases “Cornwall,” “recommended” (or synonyms thereof), “St Ives,” “charming,” “art scene,” and “cobbled streets” probably occur frequently in close proximity, which is why ChatGPT uses them to construct a sentence for my edification.

We human beings behave in a somewhat similar way. We also listen to or read a lot of human-generated text, look for trends and patterns in it, and repeat what we glean. But if that is what it means to think, then an LLM has clear advantages over us. A computer can scan much more language than we can and can apply statistics rigorously. Our generalizations suffer from notorious biases: we are more likely to recall ideas we have seen most recently, those that are most upsetting, those that confirm our prior assumptions, and so on. That is why we have been using artificial means to improve our statistical inferences ever since we started recording possessions and tallying them by category thousands of years ago.
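The kind of pattern-finding described above can be sketched in a few lines of code. This is only a toy illustration, not how an LLM actually works (an LLM trains a neural network to predict the next token rather than counting co-occurrences), and the miniature “corpus” below is invented for the example:

```python
from collections import Counter
from itertools import combinations

# A tiny invented corpus standing in for "huge bodies of text."
corpus = [
    "cornwall is recommended for its charming art scene in st ives",
    "visitors to cornwall praise st ives and its charming cobbled streets",
    "the art scene in st ives cornwall is often recommended",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

# Which words co-occur most often with "cornwall"?
neighbors = Counter()
for (a, b), n in pair_counts.items():
    if a == "cornwall":
        neighbors[b] += n
    elif b == "cornwall":
        neighbors[a] += n

print(neighbors.most_common(5))
```

Even this crude count surfaces “st” and “ives” as the words most associated with “cornwall” in the corpus, which is the statistical intuition behind the Cornwall recommendations above, scaled down enormously.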

But we also think in other ways. Specifically, as intensely social and judgmental primates, we frequently scan our environments for fellow human beings whom we can trust in specific domains. A lot of what we believe comes from what a relatively small number of trusted sources have simply told us.

In fact, to choose what to see in Cornwall, I looked at the recommendations in The Lonely Planet and Rough Guide. I have come to trust those sources over the years–not for every kind of guidance (they are not deeply scholarly), but for suggestions about what to see and how to get to those places. Indeed, both publications offer lists of Cornwall’s highlights that resemble ChatGPT’s.

How did these publishers obtain their knowledge? First, they hired individuals whom they trusted to write about specific places. These authors had relevant bodily experience. They knew what it feels like to walk along a cliff in Cornwall. That kind of knowledge is impossible for a computer. But these authors didn’t randomly walk around the county, recording their level of enjoyment and reporting the places with the highest scores. Even if they had done that, the sites they would have enjoyed most would have been the ones that they had previously learned to understand and value. They were qualified as authors because they had learned from other people: artists, writers, and local informants on the ground. Thus, by reading their lists of recommendations, I gain the benefit of a chain of interpersonal relationships: trusted individuals who have shared specific advice with other individuals, ending with the guidebook authors whom I have chosen to consult.

In our first two decades of life, we manage to learn enough that we can go from not being able to speak at all to writing books about Cornwall or helping to build LLMs. Notably, we do not accomplish all this learning by storing billions of words in our memories so that we can analyze the corpus for patterns. Rather, we have specific teachers, living or dead.

This method for learning and thinking has drawbacks. For instance, consider the world’s five biggest religions. You probably think that either four or five of them are wrong about some of their core beliefs, which means that you see many billions of human beings as misguided about some ideas that they would call very important. Explaining why they are wrong, from an outsider’s perspective, you might cite their mistaken faith in a few deeply trusted sources. In your opinion, they would be better off not trusting their scriptures, clergy, or people like parents who told them what to believe.

(Or perhaps you think that everyone sees the same truth in their own way. That’s a benign attitude and perhaps the best one to hold, but it’s incompatible with what billions of people think about the status of their own beliefs.)

Our tendency to believe select people may be an excellent characteristic, since the meaning of life is more about caring for specific other humans than obtaining accurate information. But we do benefit from knowing truths, and our reliance on fallible human sources introduces error. However, LLMs can’t fully avoid that problem, because they use text generated by people who have interests and commitments.

If I ask ChatGPT “Who is Jesus Christ?” I get a response that draws exclusively from normative Christianity but hedges it with this opening language: “Jesus Christ is a central figure in Christianity. He is believed to be … According to Christian belief. …” I suspect that ChatGPT’s answers about religious topics have been hard-coded to include this kind of disclaimer and to exclude skeptical views. Otherwise, a statistical analysis of text about Jesus might present the Christian view as true or else incorporate frequent critiques of Christianity, either of which would offend some readers.

In contrast, my query about Cornwall yields confident and unchallenged assessments, starting with this: “Cornwall is a beautiful region located in southwestern England, known for its stunning coastline, picturesque villages, and rich cultural heritage.” This result could be prefaced with a disclaimer, e.g., “According to many English people and Anglophiles who choose to write about the region, Cornwall is …” A ChatGPT result is always a summary of what a biased sample of people have thought, because choosing to write about something makes you unusual.

For human beings who want to learn the truth, having new tools that are especially good at scanning large bodies of text for statistical patterns should prove useful. (Those who benefit will probably include people who have selfish or even downright malicious goals.) But we have already learned a fantastic amount without LLMs. The secret of our success is that our brains have always been networked, even when we have lived in small groups of hunter-gatherers. We intentionally pass ideas to other people and are often pretty good at deciding whom to believe about what.

Moreover, we have invented incredibly complex and powerful techniques for improving how many brains are connected. Posing a question to someone you know is helpful, but attending a school, reading an assigned book, finding other books in the library, reading books translated from other languages, reading books that summarize previous books, reading those summaries on your phone–these and many other techniques dramatically extend our reach. Prices send signals about supply and demand; peer-review favors more reliable findings; judicial decisions allow precedents to accumulate; scientific instruments extend our senses. These are not natural phenomena; we have invented them.

Seen in that context, LLMs are the latest in a long line of inventions that help human beings share what they know with each other, both for better and for worse.

See also: the design choice to make ChatGPT sound like a human; artificial intelligence and problems of collective action; how intuitions relate to reasons: a social approach; the progress of science.

analytical moral philosophy as a way of life

(These thoughts are prompted by Stephen Mulhall’s review of David Edmonds’ book, Parfit: A Philosopher and His Mission to Save Morality, but I have not read that biography or ever made a serious study of Derek Parfit.)

The word “philosophy” is ancient and contested and has labeled many activities and ways of life. Socrates practiced philosophy when he went around asking critical questions about the basis of people’s beliefs. Marcus Aurelius practiced philosophy when he meditated daily on well-worn Stoic doctrines of which he had made a personal collection. The Analects of Confucius may be “a record of how a group of men gathered around a teacher with the power to elevate [and] created a culture in which goals of self-transformation were treated as collaborative projects. These people not only discussed the nature of self-cultivation but enacted it as a relational process in which they supported one another, reinforced their common goals, and served as checks on each other in case they went off the path, the dao” (David Wong).

To practice philosophy, you don’t need a degree (Parfit didn’t complete his), and you needn’t be hired and paid to be a philosopher. However, it’s a waste of the word to use it for activities that aren’t hard and serious.

Today, most actual moral philosophers are basically humanities educators. We teach undergraduates how to read, write, and discuss texts at a relatively high level. Most of us also become involved in administration, seeking and allocating resources for our programs, advocating for our discipline and institutions, and serving as mentors.

Those are not, however, the activities implied by the ideal of analytic moral philosophy. In that context, being a “philosopher” means making arguments in print or oral presentations. A philosophical argument is credited to a specific individual (or, rarely, a small team of co-authors). It must be original: no points for restating what has already been said. It should be general. Philosophy does not encompass exercises of practical reasoning (deciding what to do about a thorny problem). Instead, it requires justifying claims about abstract nouns, like “justice,” “happiness,” or “freedom.” And an argument should take into consideration all the relevant previous points published by philosophers in peer-reviewed venues. The resulting text or lecture is primarily meant for philosophers and students of philosophy, although it may reach other audiences as well.

Derek Parfit held a perfect job for this purpose. As a fellow of All Souls College, he had hardly any responsibilities other than to write philosophical arguments and was entitled to his position until his mandatory retirement. He did not have to obtain support or resources for his work. He did not have to deliberate with other people and then decide what to say collectively. Nor did he have to listen to undergraduates and laypeople express their opinions about philosophical issues. (Maybe he did listen to them–I would have to read the biography to find out–but I know that he was not obliged to do so. He could choose to interact only with highly prestigious peers.)

Very few other people hold similar roles: the permanent faculty of the Institute for Advanced Study, the professors of the Collège de France, and a few others. Such opportunities could be expanded. In fact, in a robust social welfare state, anyone can opt not to hold a job and can instead read and write philosophy, although whether others will publish or read their work is a different story. But whether this form of life is worthy of admiration and social support is a good question–and one that Parfit was not obliged to address. He certainly did not have to defend his role in a way that was effective, persuading a real audience. His fellowship was endowed.

Mulhall argues that Parfit’s way of living a philosophical life biased him toward certain views of moral problems. Parfit’s thought experiments “strongly suggest that morality is solely or essentially a matter of evaluating the outcomes of individual actions–as opposed to, say, critiquing the social structures that deeply shape the options between which individuals find themselves having to choose. … In other words, although Parfit’s favoured method for pursuing and refining ethical thinking presents itself as open to all whatever their ethical stance, it actually incorporates a subtle but pervasive bias against approaches to ethics that don’t focus exclusively or primarily on the outcomes of individual actions.”

Another way to put this point is that power, persuasion, compromise, and strategy are absent in Parfit’s thought, which is instead a record of what one free individual believed about what other free individuals should do.

I am quite pluralistic and inclined to be glad that Parfit lived the life he did, even as most other people–including most other moral philosophers–live and think in other ways. Even if Parfit was biased (due to his circumstances, his chosen methods and influences, and his personal proclivities) in favor of certain kinds of questions, we can learn from his work.

But I would mention other ways of deeply thinking about moral matters that are also worthy and that may yield different kinds of insights.

You can think on your own about concrete problems rather than highly abstract ones. Typically the main difficulty is not defining the relevant categories, such as freedom or happiness, but rather determining what is going on, what various people want, and what will happen if they do various things.

You can introduce ethical and conceptual considerations to elaborate empirical discussions of important issues.

You can deliberate with other people about real decisions, trying to persuade your peers, hearing what they say, and deciding whether to remain loyal to the group or to exit from it if you disagree with its main direction.

You can help to build communities and institutions of various kinds that enable their members to think and decide together over time.

You can identify a general and relatively vague goal and then develop arguments that might persuade people to move in that direction.

You can strive to practice the wisdom contained in clichés: ideas that are unoriginal yet often repeated because they are valid. You can try to build better habits alone or in a group of people who hold each other accountable.

You can tentatively derive generalizations from each of these activities, whether or not you choose to publish them.

Again, as a pluralist, I do not want to suppress or marginalize the style that Parfit exemplified. I would prefer to learn from his work. But my judgment is that we have much more to learn from the other approaches if our goal is to improve the world. That is because the hard question is usually not “How should things be?” but rather “What should we do?”

See also: Cuttings: A book about happiness; the sociology of the analytic/continental divide in philosophy; does doubting the existence of the self tame the will?

defining state, nation, regime, government

As a political philosopher by training, and now political scientist by appointment, I have long been privately embarrassed that I am not sure how to define “state,” “government,” “regime,” and “nation.” On reflection, these words are used differently in various academic contexts. To make things more complicated, the discussion is international, and we are often dealing with translations of words that don’t quite match up across languages.

For instance, probably the most famous definition of “the state” is from Max Weber’s Politics as Vocation (1919). He writes:

Staat ist diejenige menschliche Gemeinschaft, welche innerhalb eines bestimmten Gebietes – dies: das „Gebiet“, gehört zum Merkmal – das Monopol legitimer physischer Gewaltsamkeit für sich (mit Erfolg) beansprucht.

[The] state is the sole human community that, within a certain territory–thus: territory is intrinsic to the concept–claims a monopoly of legitimate physical violence for itself (successfully).

Everyone translates the keyword here as “state,” not “government.” But this is a good example of how words do not precisely match across languages. The English word “government” typically means the apparatus that governs a society. The German word commonly translated as “government” (die Regierung) means an administration, such as “die Regierung von Joe Biden” or a Tory government in the UK. (In fact, later in the same essay, Weber uses the word Regierung that way while discussing the “typical figure of the ‘grand vizier’” in the Middle East.) Since “government” has a wider range of meanings in English, it wouldn’t be wrong to use it to translate Weber’s Staat.

Another complication is Weber’s use of the word Gemeinschaft inside his definition of “the State.” This is a word with such specific associations that it is occasionally used in English in place of our vaguer word “community.” A population is not a Gemeinschaft, but a tight association can be. Thus to translate Weber’s phrase as “A state is a community …” is misleading.

For Americans, a “state” naturally means one of our fifty subnational units, but in Germany those are Länder (cognate with “lands”). The word “state” derives from the Latin status, which is as “vague a word as ratio, res, causa” (Paul Shorey, 1910) but can sometimes mean a constitution or system of government. Cognates of that Latin word end up as L’État, el Estado and similar terms that have a range of meanings, including the subnational units of Mexico and Brazil. In 1927, Mussolini said, “Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato” (“Everything in the State, nothing outside the State, nothing against the State”). I think he basically meant that he was in charge of everything he could get his hands on. Louis XIV is supposed to have said “L’État c’est moi,” implying that he was the government (or the nation?), but that phrase may be apocryphal; an early documented use of L’État to mean the national government dates to 1799. In both cases, the word’s ambiguity is probably one reason it was chosen.

“Regime” can have a negative connotation in English, but political theorists typically use it to mean any government plus such closely related entities as the press and parties and prevailing political norms and traditions. Regimes can be legitimate, even excellent.

If these words are used inconsistently in different contexts, then we can define them for ourselves, as long as we are clear about our usage. I would tend to use the words as follows:

  • A government: either the legislative, executive, and judicial authority of any entity that wields significant autonomous political power (whether it’s territorial or not), or else a specific group that controls that authority for a time. By this definition, a municipality, the European Union, and maybe even the World Bank each count as a government.

(A definitional challenge is deciding what counts as “political” power. A company, a church, a college, an insurgent army, or a criminal syndicate can wield power and can use some combination of legislative, executive, and/or judicial processes to make its own decisions. Think of canon law in the Catholic Church or an HR appeals process inside a corporation. Weber would say that the fundamental question is whether an entity’s power depends on its own monopolistic use of legitimate violence. For instance, kidnapping is a violent way to extract money, but it does not pretend to be legitimate and it does not monopolize violence. Taxation is a political power because not paying your taxes can ultimately land you, against your will, in a prison that presents itself as an instrument of justice. Not paying a private bill can also land you in jail, but that’s because the government chooses to enforce contracts. Your creditor is not a political entity; the police force is. However, when relationships between a government and private entities are close, or when legitimacy is controversial, or when–as is typical–governments overlap, these distinctions can be hard to maintain and defend.)

  • A state: a government plus the entities that it directly controls, such as the military, police, or public schools. For example, it seems most natural to say that a US government controls the public schools, but not that a given school is part of the government. Instead, it is part of the state. Likewise, an army can be in tension with the government, yet both are components of the state.
  • A regime: the state plus all non-state entities that are closely related to it, e.g., political parties, the political media, and sometimes the national clergy, powerful industries, etc. We can also talk about abstract characteristics, such as political culture and values, as components of a regime. A single state may change its regime, abruptly or gradually.
  • A country: a territory (not necessarily contiguous) that has one sovereign state. It may have smaller components that also count as governments but not as countries.
  • A nation: a category of people who are claimed (by the person who is using this word) to deserve a single state that reflects their common identity and interests. Individuals can be assigned to different nations by different speakers.
  • A nation-state: a country with a single functioning and autonomous state whose citizens widely see themselves as constituting a single nation. Some countries are not nations, and vice versa. People may disagree about whether a given country is a nation-state, depending on which people they perceive to form a nation.

See also: defining capitalism; avoiding a sharp distinction between the state and the private sphere; the regime that may be crumbling; what republic means, revisited etc.