Monthly Archives: July 2016

on inhabiting earth with inaccessibly beautiful things

I unfortunately know no Chinese. The sounds, resonances, allusions, and calligraphy of traditional Chinese poetry can reach me only through paraphrase or as abstract patterns, each character looking not much different from the next. However, Perry Link writes,

Should we compare poetry across civilizations? If we do, classical Chinese poetry wins easily. The contest is almost unfair, because, as my students of Chinese language eventually come to see, the fundaments of language are different.

Indo-European languages, with their requirements that tense, number, gender, and part of speech be specified, and with the mandatory word inflections that the specifications entail, and with the extra syllables that the inflections add, just can’t achieve the same purity—a sense of terseness and expanse at the same time—that tenseless, numberless, voiceless, uninflected, and uninflectible Chinese characters can achieve. In a contest, one person has a butterfly net and the other a window screen.

I thought of this passage during a recent, brief visit to the Sackler Gallery in Washington, which is showing “Painting with Words: Gentleman Artists of the Ming Dynasty.” The highlights are vast, wall-sized hanging scrolls that display poems in the original authors’ calligraphy. The setting is abstract, modern, respectfully dark. In the background, a recording of a classical Chinese zither plays. The English translations by a Sackler curator, Stephen D. Allee, produce what I would call good poetry. The language is moving and sometimes surprising. For instance:

My friends are scattered few and far apart and the rain just drizzles on.
Fragrance fades from the incense burner and the teacups have toppled over;
I composed a poem on plum blossoms, but I’m sorry it is not well done.

I trust that these English lines convey the sense of the end of a poem by Wen Zhengming (composed ca. 1500), but they bear only a distant relationship to the scroll he painted and the sounds that his intended audience would hear as they read it. It’s strange to think that I will never be able to experience a deeply valuable art form–in Link’s estimation, the best tradition of poetry in the world–even though I can stand in the same room with it.

(See also: nostalgia for now and Ito Jakuchu at the National Gallery)

Hillary Clinton on spending for infrastructure

There’s an important exchange about government spending in Ezra Klein’s long, wonky interview with Hillary Clinton.

Klein notes that the government can currently borrow very cheaply, paying virtually no interest. The US has grave infrastructure needs. Businesses normally borrow in order to invest: they don’t pay for a new factory the same year they open it. So why shouldn’t the feds accept the markets’ offer of “free money”? “Shouldn’t we be doing more deficit spending for infrastructure, for middle-class tax cuts—and worrying less in the near-term about deficits?”

Paul Krugman recently put the same case even more forcefully. “Policy makers should be … accepting the markets’ offer of incredibly cheap financing. … America’s aging infrastructure is legendary. … So why not borrow money at these low, low rates and do some much-needed repair and renovation?”

Clinton responds to Klein that our infrastructure needs are great, and we should “look for ways to pay for our investments. … But I’m not going to commit myself to [borrowing] … because I think we’ve had a period when the gains have gone to the wealthy. … I think we can pay for what we need to do though raising taxes on the wealthy.”

Klein summarizes her answer: “I’ve not heard you say it that way before. So part of the argument of doing pay-fors in the near term is not just balancing the budget or reducing the deficit but also bringing distributional fairness to the aftermath of the recession.”

If liberals could design and implement a coherent policy on their own, they would borrow now to take advantage of the rock-bottom interest rates and structure the repayment so that upper-income people bear the costs over time. But Clinton is not in a position to write and implement a multi-year policy all by herself. If she can do anything at all, it will have to be a compromise with Republicans in Congress. Her view is that she can get more infrastructure spending and tax equity by paying for everything right away, with some kind of surtax on the rich.

I respect her expertise and don’t have any desire to argue with her about economics, but I wonder: 1) How much revenue can really come from upper-income tax increases next year, given the political balance? Couldn’t we get a lot more money by borrowing? 2) Politically, will voters support a tax-and-spend program, given their extremely low trust in government to create jobs? And 3) Shouldn’t we be challenging the widespread assumption that good government requires never borrowing to make investments?

(See also “why Hillary Clinton appears untrustworthy,” in which I proposed that her failure to argue for infrastructure spending exemplifies a general tendency among technocratic liberals to refuse to say what they believe because they don’t trust the American people to understand or accept their reasons.)

progress on civics

I’m back from a very inspiring meeting of civic activists, civic educators, and students in the White House. It was convened by the Domestic Policy Council along with Civic Nation and the Beeck Center, with support from Tisch College and others. Committed and skillful teachers and students attended from selected schools across the country. These students are not just learning about civic engagement, practicing to be citizens, or developing civic skills. They are at the forefront, right now, of addressing the most serious issues in their communities. We need them, and many more like them, to govern the republic better than it has been governed and to achieve justice that has eluded us so far.

We learned that US Secretary of Education John King is sending a “letter of guidance” to all state education agencies about new federal support for the humanities as part of the “well-rounded” education mandate of the Every Student Succeeds Act. As a humanities person, I am glad about that direction, and as a civics advocate, I’m pleased that civics is included among the humanities. 

The US Department of Education has also made civics a core component of the Department’s “Blue Ribbon Schools” program, which recognizes excellence. 

Meanwhile, the House Appropriations Subcommittee on Labor/HHS and Education recently passed the 2017 appropriations bill, which includes the first funding for civics and American History in years. The appropriation includes $6.5 million for competitive grants to improve instruction in American history, civics, and geography, particularly for schools in under-served rural and urban communities. It also includes nearly $2 million for American History and Civics Academies: professional development opportunities for teachers of these disciplines. For now, it’s just a House appropriations bill, but it’s an important step toward actual dollars for civics.

the metaphysics of latent variables in psychology

(Washington, DC) The search for “latent variables” is so common in psychology that I would almost call it definitive of the discipline today. Other disciplines also study people’s thoughts and actions, but the distinctive contribution of psychology seems to be the use of variables that are not directly observed but rather inferred from data. Latent variables have been “so useful … that they pervade … psychology and the social sciences” (Bollen, 2002, p. 606).

But what are they? This is a metaphysical question, in the sense that contemporary, professional, Anglophone philosophers use the word “metaphysics.” It doesn’t mean that latent variables are spooky or illusory, but rather that it’s worth trying to figure out what kinds of things they are and how they relate to other sorts of things, such as beliefs, observations, numbers, mental states, processes, physical brains, etc. (Cf. why social scientists should pay attention to metaphysics.)

It turns out (Mulaik, 1987) that the main tools of psychometrics were invented by early-20th century thinkers who were explicitly interested in philosophical issues. For instance, Karl Pearson, who invented P-values, chi-square tests, and Principal Components Analysis–and who first used a histogram–wrote a book about philosophy of science before he developed these tools in order to implement his philosophy. He sounds like an awful man–an active proponent of racism–but that doesn’t invalidate his contributions to statistics. Their origin in his philosophical thought does, however, reinforce the point that latent factors need a philosophical explanation.

In very general terms, a latent variable is a number derived from several direct observations (the manifest variables) and used to say something meaningful about the subject. A history test provides a simple example. The student’s answers to each question are manifest variables. The student’s grade is derived from them, usually by just calculating the percentage of correct answers, and it is supposed to measure “knowledge of history,” which is latent. Only if the test is designed according to the best statistical principles is the overall grade indeed a valid measure of knowledge.
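To make the distinction concrete, here is a minimal sketch in Python, using a hypothetical student’s answers, of how the manifest variables become a single stand-in for the latent trait:

```python
# Manifest variables: a hypothetical student's right/wrong answers to each item.
answers = {"q1": True, "q2": False, "q3": True, "q4": True}

# The grade is derived from the manifest variables by a simple rule
# (percentage correct) and is treated as a proxy for the latent
# "knowledge of history."
grade = 100 * sum(answers.values()) / len(answers)
print(f"History grade: {grade:.0f}%")  # 75%
```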

The same example can be used to illustrate a more sophisticated tool, factor analysis. Suppose that any student’s chance of answering a given question on the history test can be predicted fairly well by a function of several measured variables (the student’s family income, the teacher’s background, the amount of time studying history, etc.) plus X, plus Y. X and Y correlate with the answers, but X and Y are not correlated with each other, and they remain constant for each student.

That much might be a mathematical result: a function that roughly matches the actual data. The question then arises: what do X and Y mean? Suppose that X has a very strong correlation with students’ performance on questions that involve difficult reading assignments, such as original source material. And Y has a very strong correlation with students’ performance on questions that involve concrete factual information, such as the dates of the Civil War. Assuming that X and Y are not correlated, we can conclude that history test scores involve two “factors”: reading ability and memorization of concrete factual information. That interpretation would likely be presented as a meaningful finding, with implications for how educators should teach history.*
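As an illustration of the general kind of analysis described above (not any specific study), here is a sketch using scikit-learn’s FactorAnalysis on simulated data: six item scores are generated from two hypothetical, uncorrelated traits, and the labels “reading” and “recall” are interpretive names I am supplying, not something the math produces.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_students = 500

# Two hypothetical, uncorrelated latent traits per student.
reading = rng.normal(size=n_students)   # "reading ability"
recall = rng.normal(size=n_students)    # "memorization of concrete facts"

# Six continuous item scores: the first three depend mostly on reading,
# the last three mostly on recall. (Real test items are usually binary,
# which calls for item response theory; continuous scores keep this simple.)
items = np.column_stack(
    [0.8 * reading + 0.3 * rng.normal(size=n_students) for _ in range(3)]
    + [0.8 * recall + 0.3 * rng.normal(size=n_students) for _ in range(3)]
)

# Exploratory factor analysis with two factors.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)

# Each row of components_ holds one factor's loadings on the six items.
# Seeing that one factor loads on items 1-3 and the other on items 4-6
# is a mathematical result; calling them "reading" and "recall" is not.
print(np.round(fa.components_, 2))
```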

I don’t disagree. I am involved in this kind of research myself (albeit usually contributing less than my fair share of the math). But what kind of a thing is “reading ability” or “memorization of concrete factual information” in this example?

They are not exactly causes of the students’ actual answers to questions, for four reasons.

First, it is often (always?) possible to describe any given set of data with multiple functions. (A small numerical illustration follows the fourth reason, below.)

Second, given a mathematical function that well describes a given set of data–such as the kids’ specific answers to Mr. Brown’s AP history test–it doesn’t follow that the same factors would also describe another set of data. The next 10 kids who took Mr. Brown’s test might not fit the function at all. This is an example of the general problem of induction.

Third, we can often switch the direction of the explanatory arrow. Instead of using the student’s latent ability in reading to explain or predict her answers to specific test questions, we could use her answers to those questions to explain or predict her reading ability. If you can switch the direction of an explanation, it doesn’t seem like a causal thesis.

Finally, we don’t usually describe a “cause” as something that is derived mathematically from the effects. A student’s family income might be postulated as a cause of her test scores–although it would require an experiment to assess this hypothesis–but a variable that is derived from the test data itself doesn’t seem to be a cause of it. Mulaik (p. 300) writes, “causes generally are not strictly determinate from effects but rather must be distinct from what they explain.”
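To illustrate the first reason, here is a sketch of the classic rotation problem in factor analysis: two different sets of loadings, one an orthogonal rotation of the other, imply exactly the same covariances among the manifest variables, so the data cannot decide between them. The numbers are made up for the example.

```python
import numpy as np

# Hypothetical two-factor loadings for six test items, plus item-specific noise.
loadings = np.array([
    [0.8, 0.1],
    [0.7, 0.2],
    [0.6, 0.0],
    [0.1, 0.9],
    [0.2, 0.7],
    [0.0, 0.6],
])
uniquenesses = np.diag([0.3] * 6)

# Covariance matrix implied by the factor model: Sigma = Lambda Lambda' + Psi.
sigma = loadings @ loadings.T + uniquenesses

# Rotate the loadings by 30 degrees: a genuinely different "solution" ...
theta = np.pi / 6
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rotated_loadings = loadings @ rotation

# ... that implies exactly the same covariances, hence fits the data equally well.
sigma_rotated = rotated_loadings @ rotated_loadings.T + uniquenesses
print(np.allclose(sigma, sigma_rotated))  # True
```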

If you are a strict inductive empiricist, in the tradition of David Hume, you don’t believe that anything is real except for direct observations. That means there are no causes. But it is possible to generalize based on what you have observed so far. Statistics is just a more refined toolkit for the kind of generalizations that we perform naturally when we observe, for instance, that kids tend to perform better on a test if they study for it. This is one way to make sense of a latent variable. It is a sophisticated version of ordinary induction. However, pure inductivism has been criticized on numerous grounds.

A different view is that some kind of mental process or activity causes people to do things like score well on a given history test question. For instance, memorizing dates increases your odds of correctly answering questions on a history test. We can tell a causal story: the information enters the brain, is stored, and is then retrieved to answer the question. The latent variable that correlates with test scores is an indication of this process. (But see Robert Epstein arguing in Aeon against the storage metaphor for human memory.)

In any case, the mathematics of factor analysis would not explain that this is what’s going on. It would only very roughly suggest a phenomenon that requires causal explanation. And although it is fairly straightforward to infer a causal relationship in this case–you should study in order to do well on a test–it is much less plausible that other factors are causal. For instance, do the Big Five Personality traits “cause” answers to concrete questions about emotions and behavior?  In 1939, Wilson and Worcester (quoted in Mulaik) asked, “Why should there be any particular significance psychologically to that vector of the mind which has the property that the sum of squares of the projections of a set of unit vectors (tests) along it be maximum?”
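For readers who want the mathematics behind that quotation: the “vector of the mind” that Wilson and Worcester describe is, roughly and in modern terms, the first principal component, the unit vector that maximizes the sum of squared projections of the test vectors onto it.

```latex
% In modern notation: the first principal component v_1 is the unit vector
% that maximizes the sum of squared projections of the (centered) test
% vectors x_1, ..., x_n onto it.
v_1 = \arg\max_{\lVert v \rVert = 1} \; \sum_{i=1}^{n} \left( x_i^{\top} v \right)^{2}
```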

Another level of challenge is that the data for any latent variable come from observations that someone has designed and selected. For instance, that history test could have included entirely different questions. Or we could give tests on reading but not on history. The resulting factors would look different. Some conception of what’s important underlies the design of the test in the first place.

This is what I’m inclined to propose: latent variables are numbers inferred from data. We give them names that refer to actual things that are very heterogeneous, metaphysically. Sometimes latent variables suggest causal theories, although causation requires other kinds of evidence to test. Sometimes they are descriptions of patterns in the accumulated data that are not causal at all. Sometimes they are just tools that are useful for practical reasons–for instance, a kid needs one grade in history instead of a whole bunch of numbers. Whether that grade is appropriate is partly a question of fairness, partly a question about what is valuable to learn, and partly a question of the pragmatic consequences (e.g. does this kind of test cause kids to learn well?). It is only partly a statistical question.

*The example I am informally describing here involves exploratory factor analysis. You identify factors based on pure math and name them based on a theory. On the other hand, in confirmatory factor analysis, you hypothesize a relationship based on a theory and look for patterns in the data that support or reject it. The math is somewhat different, as is the theoretical framework. I don’t want to go too deeply into that contrast because my topic here is broader than factor analysis. I am interested in uses of all latent variables.

Sources: Bollen, Kenneth A. (2002). “Latent Variables in Psychology and the Social Sciences.” Annual Review of Psychology, 53, 605–634; Mulaik, Stanley A. (1987). “A Brief History of the Philosophical Foundations of Exploratory Factor Analysis.” Multivariate Behavioral Research, 22(3), 267–305.


generational differences in attitudes toward racism

(New York City) As the nation grapples with racism and deep divisions over race, it is important to understand trends in opinions on these issues. Here is a small contribution to that topic.

In 1977, and then consistently since 1985, the General Social Survey has asked a representative sample of Americans this question: “On the average [negroes/blacks/African-Americans] have worse jobs, income, and housing than white people. Do you think these differences are mainly due to discrimination?”

The first graph shows the trends for Whites, Blacks, and all others.

[Figure: GSS racial discrimination measure, by racial group]

Between the early 1990s and 2012, Blacks became less likely to agree that discrimination causes unequal outcomes. In fact, the share of African Americans answering “yes” dipped below 50% in 2012. Blacks have become more likely to answer “yes” since then. There hasn’t been much change in Whites’ responses since 1977, although a moderate decline is evident.

The second graph shows answers by generation. One important complication is that each generation has had a different racial composition from the others. In particular, Latinos and Asian Americans have become much more numerous as the Xers and then the Millennials have arrived. By itself, that demographic change would raise the positive response rate to this question for the youngest generations. To control for that, I show only White respondents in this graph.
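In code, that restriction and breakdown might look roughly like the sketch below (not the actual analysis behind the graph). It assumes a GSS extract with columns named year, race, cohort (respondent’s birth year), and racdif1 (the “mainly due to discrimination” item coded “yes”/“no”); real extracts vary in variable names and codings, and the generation cutoffs are the conventional approximate ones.

```python
import pandas as pd

# Load a hypothetical GSS extract (file name and column names are assumptions).
gss = pd.read_csv("gss_extract.csv")

# Restrict to White respondents to control for the changing racial
# composition of the younger generations.
whites = gss[gss["race"] == "white"].copy()

# Assign generation by birth year, using conventional approximate cutoffs.
def generation(birth_year):
    if birth_year >= 1981:
        return "Millennial"
    if birth_year >= 1965:
        return "Gen X"
    if birth_year >= 1946:
        return "Boomer"
    return "Silent or earlier"

whites["generation"] = whites["cohort"].apply(generation)
whites["blames_discrimination"] = whites["racdif1"] == "yes"

# Share answering "yes" in each survey year, by generation.
trend = (
    whites.groupby(["year", "generation"])["blames_discrimination"]
    .mean()
    .unstack("generation")
)
print(trend.round(3))
```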

[Figure: GSS racial discrimination measure, White respondents by generation]

White Millennials are currently more likely to blame inequality on racial discrimination than the older groups are. That reflects a rather rapid change, since only a third of their cohort agreed in 2006. Nevertheless, less than half of them (44.5%) agreed with the statement in 2014. In 2012, according to a different survey, 58% of White Millennials said, “discrimination against whites has become as big a problem as discrimination against blacks and other minorities.”

Xers, by the way, have become substantially less likely over the course of their lives so far to attribute unequal outcomes to anti-Black discrimination. More than half did when they were young, but just 27% did in 2014.

I think that Black Lives Matter reflects and contributes to a substantial increase in concern about racial discrimination since 2012. That concern has by no means captured a majority of White people, or even of White youth. However, the increase has been rapid among White youth and also among African Americans. The result is a movement that has a generational element, and a base in the Black community, but that also faces a lot of backlash.

See also: in what ways are Millennials distinctive?; tolerance and generational change; and the most educated Americans are liberal but not egalitarian (2).