
CIRCLE’s Growing Voters framework

I am digging into CIRCLE’s monumental Growing Voters report, which is subtitled, “Building Institutions and Community Ecosystems for Equitable Election Participation.” It is really the first original and comprehensive framework that CIRCLE has produced in its 21-year history. It builds on work that we conducted while I worked there, but all credit is due to the team that has succeeded me.

Youth political turnout is far too low and unequal. Often, people who care (at all) about this situation advocate reforms in election laws, get-out-the-vote drives during election season (usually only in competitive districts), or catchy policy proposals like forgiving college debt. The evidence strongly suggests that none of these approaches will come anywhere close to solving the problem, and they are all too transactional and tactical. They assume that young people can be induced, by tinkering with the incentives, to act as political leaders wish.

CIRCLE has uncovered evidence that many things do work: from laws that allow young people to pre-register to vote before they turn 18, to state civics tests, to school climates that make students feel like valued members of the community, to explanatory journalism. There are no silver bullets, and we need a shift in the overall attitude–a shift from electoral mobilization to “growing voters.” Youth themselves have an essential role to play in accomplishing this shift, which must involve many different kinds of institutions.

The full report is detailed and careful and deserves to be read in full.

the sublime is social–with notes on Wordsworth’s Lines Above Tintern Abbey

In secular (and probably upscale) reaches of our society, two suggestions are common for restoring mental health and equanimity: we should experience nature and reconnect to our bodies through meditation or exercise.

Of course, prayer is also an option, and activities such as walking in the woods and yoga have roots and analogues in religious traditions. Here, however, I focus on practices that are open to non-believers.

Such experiences are supposed to be authentic, personal, and at least somewhat distinct from the everyday world of conscious thoughts, words, social roles, organizations, and transactions. Although you can have these experiences alongside other people, an important aspect is inward and often literally silent. Something like the pure or raw self is thought to emerge.

This post is a modest contribution to the argument–which others have also made–that it is a mistake to understand such experiences individualistically. Other people are always integrally involved, and it is wise to be maximally conscious of them.

Although practices like hiking and meditation can be routine or even trivial, they bear at least a distant relationship to notions of the sublime. That word has been defined in diverse and incompatible ways–producing an interesting debate–but a common feature seems to be an aesthetic experience that lastingly improves the self and that would be difficult, if not impossible, to convey in ordinary words. Either a sublime experience exceeds human language or else it requires particularly excellent words (such as verse by Homer or Wordsworth) to convey. The natural or religious sublime is sometimes presented as beyond speech, while the literary or rhetorical sublime defines superior speech.

The premise that a sublime experience cannot be shared using ordinary language contains the germ of the conclusion that we do not need other people to experience it. That conclusion is especially problematic in a consumerist culture with relatively loose social ties and high levels of inequality–a society that generates headlines like this one from Wired in 2013, “In Silicon Valley, Meditation Is No Fad. It Could Make Your Career: Meditation and mindfulness are the new rage in Silicon Valley. And it’s not just about inner peace—it’s about getting ahead.”

Most thoughtful analysts are aware that words, conscious thoughts, and other people do not go away when one experiences the sublime. For one thing, we are always morally indebted to other people. We can’t go for a walk in the woods unless someone has preserved that forest and built those trails. In the Americas, the land was previously conquered from indigenous people. The shoes on our feet and the food in our stomachs were made by other human beings. In many cases, the aesthetic experience was skillfully shaped by people: landscapers and foresters, yoga instructors, or whoever else is relevant. It is wise to thank those who made the sublime possible, yet empty expressions of thanks can be worse than nothing.

In addition, we acquire our tastes, our aesthetic values, and our ability to process experience from other people. As I wrote in a previous post, “I do not simply see the snow; I see it with things already in my mind, like Christmas decorations, paper snowflakes on second-grade bulletin boards, Ezra Jack Keats’ The Snowy Day, Pieter Bruegel the Elder’s ‘Hunters in the Snow,’ Han-shan’s Cold Mountain lyrics, Robert Frost’s ‘lovely, dark and deep’ woods, Hiroshige’s woodblock prints of wintry Japan, Rosemary Clooney with Bing Crosby. In short, I have been taught to appreciate a winter wonderland, a marshmallow world, and a whipped cream day.” None of us understands all the ways that our experiences have been shaped by our predecessors, but we all absorb the development of our societies as we develop from infants into adults. There is no raw self.

In fact, many have sought to combine explicit records of past human experiences with direct experiences of nature and one’s body. Thoreau says of his time alone at Walden Pond, “My residence was more favorable, not only to thought, but to serious reading, than a university; and though I was beyond the range of the ordinary circulating library, I had more than ever come within the influence of those books which circulate round the world.” And David Morris reminds us, “Whether meditating by the sea, contemplating the night sky, or crossing the Alps, eighteenth-century enthusiasts for nature rarely forgot their reading: the classics were Addison’s guidebook to Italy, while Joseph Warton’s vision of unspoiled nature comes straight from Lucretius and Shaftesbury” (Morris 1972, p. 7).

Whether to combine introspective experience with literature is a personal choice; it is surely not the only path. But I do believe that the current tendency to see the sublime as purely personal is self-centered, in a consumerist way, and we must bring other people in.

With that in mind, let’s consider one of the most famous depictions of a restorative experience in nature, Wordsworth’s “Lines Composed a Few Miles above Tintern Abbey”. The narrator says that his memories of this spot on the Wye River have given him a “gift / Of aspect more sublime,” a “blessed mood” …

In which the heavy and the weary weight
Of all this unintelligible world,
Is lightened:—that serene and blessed mood,
In which the affections gently lead us on,—
Until, the breath of this corporeal frame
And even the motion of our human blood
Almost suspended, we are laid asleep
In body, and become a living soul:
While with an eye made quiet by the power
Of harmony, and the deep power of joy,
We see into the life of things.

This seems like a perfect example of what people nowadays might call self-care through nature.

The story told in the poem is a little complicated. We learn, but not in chronological order, that Wordsworth first came to the Wye River Valley “in the hour / Of thoughtless youth,” when he could still enjoy nature spontaneously, almost as a part of it, “bound[ing] o’er the mountains … wherever nature led” with “animal movements.” In those days, he did not need concepts or words (“a remoter charm, / By thought supplied”) to filter his experience.

Memories of those “boyish days” sustained the narrator while he was busy in the human world of “joyless daylight; when the fretful stir / Unprofitable, and the fever of the world” hung over him. Now he has returned to the same spot and finds that he cannot again feel the “aching joys” and “dizzy raptures” of the place, but he is compensated by a new insight. Now in nature he hears “the still sad music of humanity” and realizes that “the mind of man, a motion and a spirit … rolls through all things.”

At the outset, he is eager to portray what he sees as nature, not as culture. For instance, he mentions “hedge-rows” (which are planted and maintained by human beings) and then corrects himself: “hardly hedge-rows, little lines / Of sportive wood run wild,” as if they were nature’s free creations. But by the middle of the poem, he acknowledges that mind and nature are “deeply interfused.”

And then a third person appears in the poem (counting the narrator as one and the reader as two). This is a “you” who is “with me here upon the banks.” This “dearest Friend” emerges as his sister, Dorothy. The poet’s objective becomes to record her experience of the natural scene so that she can better recall it as her life proceeds, and so that she can vividly remember sharing this experience of nature with her brother. His poem will be a mnemonic (see Rexroth 2021) to give her “healing thoughts” amid the “dreary intercourse of daily life.”

Thus Wordsworth’s sublime is not private or individualistic in a simple sense. But perhaps it is not admirably social, either. In an influential 1986 article, Marjorie Levinson noted that Wordsworth not only chose not to describe Tintern Abbey in this poem (even briefly), but he also omitted many obvious features of the Wye River at that time: “prominent signs of commercial activity” such as “coal mines, transport barges noisily plying the river [and] miners’ hovels.” Tintern was a mining village, and the woods were full of “vagrants” who “lived by charcoal-burning” or begging from tourists (Levinson, pp. 29-30). The abbey was a literal ruin–albeit picturesque to some–because it had been suppressed in the Reformation and sold to landlords who had dispossessed the agricultural population, creating whatever unpopulated vistas one could see in 1798.

Levinson argues that Wordsworth knew all this well, and that “the primary poetic action” of the whole poem “is the suppression of the social.” It “achieves its fiercely private vision by directing a continuous energy toward the nonrepresentation of objects and points of view expressive of a public–we would say, ideological–dimension” (pp. 37-8). The poem is a sign of Wordsworth’s retreat from political engagement in the late 1790s.

Levinson’s 1986 article has provoked some responses in defense of Wordsworth; I have not tried to assess the controversy. For me, these are the key points: Wordsworth exemplifies a currently popular way of addressing discontent or even anguish–enjoying nature–and he is conscious that “humanity,” human language, and human relationships are part of that experience. Yet he is notably apolitical. An analysis of why nature looks as it does–who has profited and suffered from it–is missing from the poem. And this seems to foreshadow many contemporary versions of the sublime.

Sources: Marjorie Levinson, “Insight and Oversight: Reading ‘Tintern Abbey,’” in Wordsworth’s Great Period Poems (1986), pp. 14-57; David B. Morris, The Religious Sublime: Christian Poetry and Critical Tradition in 18th-Century England (University Press of Kentucky, 1972); and Grace Rexroth, “Wordsworth’s Poetic Memoria Technica: What ‘Tintern Abbey’ Remembers,” Studies in Romanticism 60.2 (2021): 153-174. See also: the sublime and other people; unhappiness and injustice are different problems; the I and the we: civic insights from Christian theology; Foucault’s spiritual exercises; and when you know, but cannot feel, beauty.

are Americans ‘innocent of ideology’?

Hardly a day goes by without news about polarization. Americans are said to be divided into hostile camps on the left and right.

That observation contradicts a line of political science research launched in 1964 by Philip E. Converse. In “The Nature of Belief Systems in Mass Publics,” Converse argued that the vast majority of Americans lacked organized systems of beliefs that could explain or predict their views of candidates or their political behavior. He wrote, “The political ‘belief systems’ of ordinary people are generally thin, disorganized, and ideologically incoherent.” Most Americans were not recognizably liberal or conservative–or anything else. Mainly because they did not spend much time thinking about politics, and especially not in abstract ways, most people were not influenced by the ideas that concerned pundits, intellectuals, and politicians.

Even though many observers assume that the US has become more ideologically polarized since that time, it remains entirely possible to defend Converse’s case. That is the task of Neither Liberal nor Conservative: Ideological Innocence in the American Public by Donald R. Kinder and Nathan P. Kalmoe (University of Chicago Press, 2017). This book is dedicated to “Philip E. Converse, Scholar Unsurpassed” and it ably updates his argument. Some of the key points:

  • People’s opinions about various issues correlate with each other only weakly (an average pairwise correlation of .16 in the American National Election Studies from 1972-2012). If most people held organized systems of belief, many pairs of issues would correlate strongly: for instance, those who wanted lower taxes would also want to cut spending. The low correlations in the ANES indicate a lack of organization, and this measure has changed little over time. (A toy illustration of this kind of calculation, using simulated data, appears after this list.)
  • Most people do not identify as liberals or conservatives, and those who identify as moderates have low information and are relatively unlikely to participate. Few Americans are principled and active centrists, but many are just not engaged.
  • Changes in the majority coalition (such as Reagan’s victories in the 1980s) are unrelated to changes in public opinion. Individuals also seem to change their opinions about most surveyed issues in random ways (notwithstanding some interesting exceptions, such as abortion).
  • Partisanship predicts voters’ choices much better than ideology does. Consistent with that finding, there are still considerable numbers of liberal Republicans and conservative Democrats, and they vote for their parties.

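To make the first bullet concrete, here is a minimal sketch of the kind of calculation behind that .16 figure. It uses simulated data, not the actual ANES files; the single shared “ideology” factor and the 0.45 loading are illustrative assumptions chosen so that the weakly organized case lands near the reported average.

```python
# Hypothetical illustration (simulated data, not the ANES): the average pairwise
# correlation among issue opinions as a rough index of ideological "constraint."
import numpy as np

rng = np.random.default_rng(0)

# 1,000 simulated respondents answer 6 issue questions. Opinions mix a weak
# shared "ideology" component with mostly issue-specific noise.
n_respondents, n_issues = 1000, 6
ideology = rng.normal(size=(n_respondents, 1))        # shared component
noise = rng.normal(size=(n_respondents, n_issues))    # issue-specific noise
opinions = 0.45 * ideology + noise                    # weakly "constrained" opinions

corr = np.corrcoef(opinions, rowvar=False)            # issue-by-issue correlation matrix
pairs = corr[np.triu_indices(n_issues, k=1)]          # unique off-diagonal pairs
print(f"average pairwise correlation: {pairs.mean():.2f}")  # typically near .16-.17
```

If most respondents held tightly organized belief systems, the shared component would dominate and the same calculation would return averages several times higher.
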
How could this be true in a world of Fox News and MSNBC? Well, Fox News’ average audience is about 1.5 million viewers, and there are about 258 million adult Americans, so Fox speaks to–and possibly for–fewer than one in a hundred people.

In some ways, my colleagues and I have found similar results. For instance, in a 2012 survey, CIRCLE reported that just “22% of Americans between the ages of 18 and 24 could choose the issue of greatest importance to themselves and answer two (out of two) factual questions about the candidates’ positions on that issue.” Scholars in the tradition of Converse would say that this was not evidence of anything especially wrong with civic education, Millennials, or the 2012 campaign. Instead, for Kinder & Kalmoe, it is an international and transhistorical reality that most people lack organized thoughts about politics.

I think we must take this argument seriously, but I would raise two main doubts.

First, Kinder & Kalmoe conclude that groups, not ideologies, drive politics. They explicitly mention race, gender, and religion (p. 137). “Scores of studies show that public opinion on matters of politics is … shaped in powerful ways by the attitudes citizens harbor toward the social groups they see as the principal beneficiaries or victims in play” (p. 138). For some people, these attitudes are well-developed and stable. For instance, an “ardent feminist” is one who consistently sees gender as a basis for injustice (p. 138). But most of us can be influenced by events or political leaders to make different identities salient.

In her review of Kinder & Kalmoe, Samara Klar writes, “Group identities are a fundamental informational source in the course of preference formation. But must ideology be cast aside? Perhaps we can instead consider how ideology is intertwined in our identity politics.” In fact, this point seems fundamental to me. Each of the groups that Kinder & Kalmoe offer as an example reflects a complex mix of ideas and material realities.

For instance, race is not simply an idea. In the USA, people can be assigned a race at birth because it is seen as an inherited trait. And even if space aliens arrived and erased all awareness of race from everyone’s brains, it would remain the case that White families have 10 times as much net wealth as Black families because of historical injustices.

Yet race is also about ideas. The whole concept was invented at specific times for specific reasons and has been imbued with meanings. To think of people as having racial identities is surely a form of ideology, and then to add notions of white supremacy, or ostensible color-blindness, or opposition to inequality, or pride in a minority racial status–these are powerful ideological additions. Racial identities offer politicians opportunities and challenges but are hardly created by current political leaders. They persist and recur. It would be odd to describe Americans as “innocent” of ideology if Americans see society in racialized terms (albeit with a variety of value judgments).

Religion is different in detail but similar insofar as it involves both ideas and material facts. For instance, the Catholic Church summarizes its core ideas in its creed and catechism, although it also encompasses a rich diversity of thought. Some Catholics are devout believers. For some, their ex-Catholic identity is important. Although Catholicism is not genetic, it runs in families due to socialization; and even renouncing the faith indicates that the religion is important. The American Catholic church owns tangible resources, from parochial schools and soup kitchens to cathedrals, but it also often holds a subordinate position compared to mainline Protestantism. Catholics have the Shrine of the Immaculate Conception, but Protestants have the National Cathedral. Catholics founded Boston College, but Protestants had Harvard.

Again, the point is that material circumstances and ideas are tightly connected. (And this is true for gender as well.) Thus ideology is powerfully active and omnipresent. And just as an opinion about race or religion combines ideas with material interests, so does an opinion about a classic policy-wonk question, such as whether the government should provide health insurance. If people lack opinions about health insurance but hold opinions about race, I don’t see why that makes them “innocent of ideology.”

Second, the kinds of questions fielded on surveys like the ANES are designed to assess where people stand on the kinds of issues debated by Democratic and Republican politicians. When people are asked about ideas outside this official mainstream, relatively few express support. For instance, although about 40 percent express a positive view of socialism, few of those seem to define it in a radical way that would imply substantial changes in current policies. But this is not evidence of a lack of ideology. It is a sign that there is a dominant ideology in the USA, one that contradicts many alternative ideologies available in the world or on paper. Not many Americans are theocratic Shiites, Maoists, or anarchists, and that is an important fact about America. The country is ideological, even if ideology does not explain the outcome of electoral contests between Democrats and Republicans.

History tells us that the dominant ideologies of whole societies can shift, sometimes surprisingly quickly. But such ruptures are not predictable with time series like the ANES.

Reading work in the tradition of Philip Converse can be a bit dispiriting. The data undermine what Achen & Bartels call the “folk theory” of democracy, according to which we debate policies and values, form opinions, vote based on our opinions, and influence policy. The data suggest that most people are not part of this process.

At the same time, this tradition is also basically complacent about the political system. If people demonstrate a surprising lack of ideological awareness in 2022, that is because they always have. It is even the case in other countries, according to Kinder & Kalmoe. Concerns about shifts toward polarization or extremism are overblown, because the actual trends move at a “glacial pace” (p. 87). For instance, the proportions of extreme liberals and extreme conservatives doubled from a very low base between 1972 and 2012. If that pace continued, it would take more than a century for those groups to predominate (p. 176).
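
To make the “glacial pace” point concrete, here is a back-of-the-envelope extrapolation. The 4 percent starting share is an assumption for illustration (Kinder & Kalmoe’s exact figures are not reproduced here), and the single doubling over 1972-2012 is read from the trend described above; the arithmetic simply shows why one doubling every forty years leaves such groups short of a majority for well over a century.

```python
# Back-of-the-envelope extrapolation (illustrative only; the 4% starting share
# is an assumption, and one doubling over 1972-2012 is taken from the text above).
import math

start_share = 0.04        # assumed combined share of ideological "extremes"
doubling_years = 40       # one doubling observed across 1972-2012

doublings_needed = math.log2(0.5 / start_share)    # doublings required to reach 50%
years_needed = doublings_needed * doubling_years
print(f"about {years_needed:.0f} years to reach a majority")   # roughly 146 years
```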

The ultimate message seems to be that we should abandon romantic notions of an informed, deliberating electorate and yet not worry about the fundamental condition of our polity, which is stable and “innocent” of ideology.

Theodore Lowi concludes his great book The End of Liberalism (1969, revised in 1979) by saying:

Realistic political science is a rationalization of the present. The political scientist is not necessarily a defender of the status quo, but the result is too often the same, because those who are trying to describe reality tend to reaffirm it. Focus on the group, for example, is a commitment to one of the more rigidified aspects of the social process. Stress upon the incremental is apologetic as well. The separation of facts from values is apologetic.

There is no denying that modern pluralistic political science brought science to politics. And that is a good thing. But it did not have to come at the cost of making political science an apologetic discipline. But that is exactly what happened. … In embracing facts alone about the process, modern political science embraced the ever-present. In so doing, political science took rigor over relevance.

Political science that is both relevant and rigorous takes seriously the evidence about human cognitive limitations but is also serious about moral critiques of the current society and aims to help people change it.

See also: what if people’s political opinions are very heterogeneous?; US polarization in context; affective polarization is symmetrical; why political science dismissed Trump and political theory predicted him.

Russia in the larger history of decolonization

In the first half of the 1900s, empires were headquartered in London, Paris, Vienna/Budapest, Istanbul, St. Petersburg, Amsterdam, Lisbon, Berlin, Rome, Tokyo, and Washington. They were not quite simultaneous. For instance, the apogee of the Japanese empire came in 1942, well after the Ottoman and Hapsburg empires had disintegrated. However, during a single human lifespan, most of the world was dominated by competing empires, most of which fought in the two world wars.

To varying degrees, these capitals and metropoles have had to confront moral issues as their empires have been denounced from outside and within. And they have had to confront deep practical challenges as they have lost the capacity to dominate far-flung countries.

They have handled these moral and pragmatic issues in various ways and to varying degrees, none of them completely well. Perhaps Germany, Japan, and Italy made the cleanest breaks as a result of their defeats in 1945. I know the German case best, and it reflects a strong repudiation of imperialism. However, the moral introspection followed the military disaster–and not immediately.

The two imperial powers that were able to delay the reckoning longest were the US and the USSR, because they emerged from WWII with their military power intact, not defeated or exhausted. Also, both had ideologies that persuaded many of their own people–if few others–that they had never been empires in the first place. The US had the Declaration of Independence and the Monroe Doctrine and called itself the leader of the free world. The USSR was supposedly a union of equal republics united by universalist ideals; Lenin had been a trenchant critic of imperialism.

Today, the USA still has plenty of work to do to decolonize. We must equalize power and reckon with past wrongs, above all the conquest and slavery that built the 50 states as well as the remaining territories. And we must recalibrate our relations with the rest of the world. The Afghan and Iraq wars were not only morally untenable but also humiliating defeats for the USA. They echoed the Suez crisis, which taught London and Paris that their imperial days were numbered. There is even a plausible argument that AR-15s are being purchased in huge numbers–and some are being used in mass murders–because of the post-9/11 wars. I suspect that the catastrophes of Iraq and Afghanistan bubble just below the surface of many of our current controversies. Americans feel betrayed by elites. We define our elite enemies in somewhat different ways, but it’s a fact that bipartisan elites supported these wars. Still, we are having a robust conversation about these themes–including an ugly backlash–and our neocolonialists seem to be in retreat in foreign policy.

As for Russia: there may well be a better internal conversation about colonialism there than I am aware of. Indeed, I am in no position to assess current Russian culture. I am sure that the Russian conversation about colonialism should be robust, fully acknowledging that the Tsarist empire was an example of European colonialism, the Soviet Union was a Russian-dominated empire, the Russian Federation is still 20% non-Russian, the “near-abroad” consists of sovereign states, and recent interventions in countries like Syria and Mali have been morally repugnant (but not unusually so–for instance, Russia’s involvement in Mali directly follows France’s involvement there).

These are moral points. Meanwhile, as a pragmatic matter, Russians must acknowledge that their GDP (even before the current war) was $1.7 trillion: half the size of Germany’s, a tenth of China’s, and less than a thirteenth of the USA’s. The Russian economy is not only relatively small but depends on unsustainable carbon extraction.

The Russian Federation should find its way to being a mid-sized federal republic with a distinctive and diverse cultural heritage, remarkable natural resources, a post-carbon economy, and decentralized power.

Seen in global perspective, it is not actually surprising that Russia hasn’t made this journey yet. It never faced the crises that confronted most of the other empires of 1900-1950. The collapse of the USSR in 1989-91 could be interpreted as a temporary weakness and betrayal, not as a delayed and incomplete conclusion of Russian imperialism.

In The Atlantic recently, Casey Michel wrote, “The West must complete the project that began in 1991. It must seek to fully decolonize Russia.” That statement strikes me as colonialist in itself, replicating the moral superiority and pragmatic hubris that countries like the USA must learn to overcome. Why would “we” succeed in decolonizing a region on the other side of the planet, even if doing so were our business? Citizens of the Russian Federation must decolonize their own country or else continue to decline, both morally and pragmatically. Ukrainians may assist, but their role is to save Ukraine, not to reform their neighbor. And we are right to support Ukraine–for the sake of that country.

I am skeptical that large-scale moral self-criticism is an engine of social change. (See “alerting people to their privilege,” for some evidence.) However, defeat can be an effective teacher.

See also: Putin’s cultural nationalism; why I stand with Ukraine (from 2015); and Ukraine means borderland (2017)

growing up with computers

Ethan Zuckerman’s review of Kevin Driscoll’s The Modem World: A Prehistory of Social Media made me think back to my own early years and how I experienced computers and digital culture. I was never an early adopter or a power user, but I grew up in a college town at a pivotal time (high school class of 1985). As a nerd, I was proximate to the emerging tech culture even though I inclined more to the humanities. I can certainly remember what Ethan calls the “mythos of the rebellious, antisocial, political computer hacker that dominated media depictions until it was displaced by the hacker entrepreneur backed by venture capital.”

  • ca. 1977 (age 10): My mom, who’d had a previous career as a statistician, took my friend and me to see and use the punchcard machines at Syracuse University. I recall their whirring and speed. Around the same time, a friend of my aunt owned an independent store in New York City that sold components for computer enthusiasts. I think he was also into CB radio.
  • ca. 1980: Our middle school had a work station that was connected to a mainframe downtown; it ran some kind of simple educational software. The university library was turning its catalogue into a digital database, but I recall that the physical cards still worked better.
  • 1982-85: Several friends and I owned Atari or other brands of “home computers.” I remember printed books with the BASIC code for games that you could type in, modify, and play. We wrote some BASIC of our own–other people were better at that than I was. I think you could insert cartridges to play games. The TV was your monitor. I remember someone telling me about computer viruses. One friend wrote code that ran on the school system’s mainframe. A friend and I did a science fair project that involved forecasting elections based on the median-voter theorem.
  • 1983: At a summer program at Cornell, I used a word processor. I also recall a color monitor.
  • 1985: We spent a summer in Edinburgh in a rented house with a desktop that played video games, better than any I had seen. I have since read that there was an extraordinary Scottish video game culture in that era.
  • 1985-9: I went to college with a portable, manual typewriter, and for at least the first year I hand-wrote my papers before typing them. The university began offering banks of shared PCs and Macs where I would edit, type, and print drafts that I had first written by hand. (You couldn’t usually get enough time at a computer to write your draft there, and very few people owned their own machines.) We had laser printers and loved playing with fonts and layouts. During my freshman year, a friend whose dad was a Big Ten professor communicated with him using some kind of synchronous chat from our dorm’s basement; that may have been my first sight of an email. A different dorm neighbor spent lots of time on AOL. My senior year, a visiting professor from Ireland managed to get a large document sent to him electronically, but that required a lot of tech support. My resume was saved on a disk, and I continuously edited that file until it migrated to this website in the late 1990s.
  • 1989-91: I used money from a prize at graduation to purchase a Toshiba laptop, which ran DOS and WordPerfect, on which I wrote my dissertation. The laptop was not connected to anything, and its processing power must have been tiny, but it had the same fundamental design as my current Mac. Oxford had very few phones but a system called “pigeon post”: hand-written notes would be delivered to anyone in the university within hours. Apparently, some nerds at Cambridge had set up the world’s first webcam to allow them to see live video of the office coffee machine, but I only heard about this much later.
  • 1991-3: My work desktop ran Windows. During a summer job for USAID, we sent some kind of weekly electronic message to US embassies.
  • 1993-5: We had email in my office at the University of Maryland. I still have my first emails because I keep migrating all the saved files. I purchased this website and used it for static content. My home computer was connected to the Internet via a dial-up modem. You could still buy printed books that suggested cool websites to visit. I made my first visit to California and saw friends from college who were involved with the dot-com bubble.
  • 2007: I had a smart phone and a Facebook account.

It’s always hard to assess the pace of change retrospectively. One’s own life trajectory interferes with any objective sense of how fast the outside world was changing. But my impression is that the pace of change was far faster from 1977-1993 (from punchcard readers to the World Wide Web) than it has been since 2008.