Category Archives: Internet and public issues

growing up with computers

Ethan Zuckerman’s review of Kevin Driscoll’s The Modem World: A Prehistory of Social Media made me think back to my own early years and how I experienced computers and digital culture. I was never an early adopter or a power user, but I grew up in a college town at a pivotal time (high school class of 1985). As a nerd, I was proximate to the emerging tech culture even though I inclined more to the humanities. I can certainly remember what Ethan calls the “mythos of the rebellious, antisocial, political computer hacker that dominated media depictions until it was displaced by the hacker entrepreneur backed by venture capital.”

  • ca. 1977 (age 10): My mom, who’d had a previous career as a statistician, took my friend and me to see and use the punchcard machines at Syracuse University. I recall their whirring and speed. Around the same time, a friend of my aunt owned an independent store in New York City that sold components for computer enthusiasts. I think he was also into CB radio.
  • ca. 1980: Our middle school had a workstation that was connected to a mainframe downtown; it ran some kind of simple educational software. The university library was turning its catalogue into a digital database, but I recall that the physical cards still worked better.
  • 1982-85: Several friends and I owned Atari or other brands of “home computers.” I remember printed books with the BASIC code for games that you could type in, modify, and play. We wrote some BASIC of our own–other people were better at that than I was. I think you could insert cartridges to play games. The TV was your monitor. I remember someone telling me about computer viruses. One friend wrote code that ran on the school system’s mainframe. A friend and I did a science fair project that involved forecasting elections based on the median-voter theorem.
  • 1983: At a summer program at Cornell, I used a word processor. I also recall a color monitor.
  • 1985: We spent a summer in Edinburgh in a rented house with a desktop that played video games, better than any I had seen. I have since read that there was an extraordinary Scottish video game culture in that era.
  • 1985-9: I went to college with a portable, manual typewriter, and for at least the first year I hand-wrote my papers before typing them. The university began offering banks of shared PCs and Macs where I would edit, type, and print drafts that I had first written by hand. (You couldn’t usually get enough time at a computer to write your draft there, and very few people owned their own machines.) We had laser printers and loved playing with fonts and layouts. During my freshman year, a friend whose dad was a Big Ten professor communicated with him using some kind of synchronous chat from our dorm’s basement; that may have been my first sight of an email. A different dorm neighbor spent lots of time on AOL. My senior year, a visiting professor from Ireland managed to get a large document sent to him electronically, but that required a lot of tech support. My resume was saved on a disk, and I continuously edited that file until it migrated to this website in the late 1990s.
  • 1989-91: I used money from a prize at graduation to purchase a Toshiba laptop, which ran DOS and WordPerfect, on which I wrote my dissertation. The laptop was not connected to anything, and its processing power must have been tiny, but it had the same fundamental design as my current Mac. Oxford had very few phones but a system called “pigeon post”: hand-written notes would be delivered to anyone in the university within hours. Apparently, some nerds at Cambridge had set up the world’s first webcam so they could watch live video of their office coffee pot, but I only heard about this much later.
  • 1991-3: My work desktop ran Windows. During a summer job for USAID, we sent some kind of weekly electronic message to US embassies.
  • 1993-5: We had email in my office at the University of Maryland. I still have my first emails because I keep migrating all the saved files. I purchased this website and used it for static content. My home computer was connected to the Internet via a dial-up modem. You could still buy printed books that suggested cool websites to visit. I made my first visit to California and saw friends from college who were involved with the dot-com bubble.
  • 2007: I had a smart phone and a Facebook account.

It’s always hard to assess the pace of change retrospectively. One’s own life trajectory interferes with any objective sense of how fast the outside world was changing. But my impression is that the pace of change was far faster from 1977 to 1993 (from punchcard readers to the World Wide Web) than it has been since 2008.

dealing with the big tech platforms

We can hold several ideas in our minds, even though they’re in tension, and try to work through to a better solution.

On one hand …

  • Any platform for discussion and communication needs rules. It won’t work if it’s wide open.
  • A privately owned platform is free to make up its own rules, and even to enforce them at will (except as governed by contracts that it has freely entered). A private actor is not bound to permit speech it dislikes or to use due process to regulate speech. It enjoys freedom of the press.
  • Donald Trump was doing great damage on Twitter and Facebook. It’s good that he’s gone.

Yet …

  • It is highly problematic that a few companies own vastly influential global platforms for communication without being accountable to any public. The First Amendment is a dead letter if the public sphere is a small set of forums owned by private companies.
  • Twitter’s reasons for banning Trump seem pretty arbitrary. The company refers to how Trump’s tweets were “received” by unnamed “followers” and invokes the broad “context” of his comments. But speakers don’t control the reception of their words or the contexts of their speech. A well-designed public forum would have rules, but probably not these rules.
  • If a US-based company can ban a political leader in any given country (including any competitive democracy), then democratic governance is threatened.
  • Facebook, Twitter, and Google profit from news consumption, denying profits to the companies that provide shoe-leather reporting. Fewer than half as many people are employed as journalists today as were a decade ago. This is at the heart of the current, very interesting battle between the Australian government and the big tech companies.
  • These companies deploy algorithms and other design features to maximize people’s time on their platforms, which encourages addictiveness, outrageous content, and filter bubbles and polarization.

Regulation is certainly one option, but it must overcome several challenges:

  • Private communications companies have genuine free speech rights.
  • Forcing a powerful company to make really good choices is hard; externally imposed rules can be ignored or distorted.
  • The fact that there are 193 countries creates major coordination problems. (I wouldn’t mind if a patchwork of inconsistent rules hurt the big companies–I think these firms do more net harm than good. But it’s not clear that the resulting mix of rules would be good for the various countries themselves.)
  • The major companies are very powerful and may be able to defeat attempts to regulate them. For instance, they are simply threatening to withdraw from Australia.
  • There is a high potential for regulatory capture: major incumbent businesses influencing the regulators and even using complicated regulatory regimes to create barriers to entry for new competitors. Imagine, for example, that laws require content moderation. Who would be able to hire enough moderators to compete with Facebook?

Antitrust is worth considering. If the big companies were broken up, there might be more competitors. But you must believe very strongly in the advantages of a competitive marketplace to assume that the results would be better, rather than worse, than the status quo. Metcalfe’s Law also tends to concentrate communication networks, whatever we do with antitrust.
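A toy calculation makes the Metcalfe’s Law point concrete. Using the common pairwise-connections form of the law (value proportional to the number of possible user pairs, roughly n²), a single unified network is worth about twice as much as the same users split evenly across two rival networks, which is one reason breakups tend to be undone by re-concentration. This is my own illustrative sketch, not from the original post:

```python
# Metcalfe's Law (pairwise form): a network of n users is valued in
# proportion to the number of distinct user pairs, n*(n-1)/2 ~ n**2.
def metcalfe_value(n: int) -> int:
    return n * (n - 1) // 2  # distinct pairwise connections

# One unified network of 1,000 users...
unified = metcalfe_value(1000)

# ...versus the same 1,000 users split across two rival networks.
split = 2 * metcalfe_value(500)

print(unified)          # 499500
print(split)            # 249500
print(unified / split)  # ~2.0: the unified network is twice as valuable
```

The ratio approaches exactly 2 as n grows, so under this (much-debated) valuation rule, users always have an incentive to migrate to the larger network.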

Another approach is to try to build new platforms with better rules and designs. The economic challenge–not having enough capital to compete with Google and Facebook–could be addressed. Governments could fund these platforms, on the model of the BBC. I think the bigger problem is that the platforms would have to draw lots of avid users, or else they would be irrelevant. They would have to be attractive without being addictive, compelling without hyping sensational content, trustworthy yet also free and diverse.

Those are tough design challenges–but surely worth trying.

See also: why didn’t the internet save democracy?; the online world looks dark; democracy in the digital age; what sustains free speech?; a civic approach to free speech, etc.

Rewiring Democracy

Matt Leighninger and Quixada Moore-Vissing have published “Rewiring Democracy: Subconscious Technologies, Conscious Engagement, and the Future of Politics” (Public Agenda 2018).

I would pick out one major contrast from this complex, 68-page document.

  • On one hand, technologies are being used ubiquitously to influence individuals and the political world without our conscious awareness. Examples include tools that allow organizations to predict what individuals want without having to ask them, techniques for microtargeting messages, and methods of surveillance.
  • On the other hand, people are deliberately inventing and using new tools for civic purposes, i.e., for free and intentional self-governance. Examples include tools for collecting contributions of money or time and techniques for circulating information in geographical communities.

Much depends on which force prevails, and that depends on us.

The report ends with 3-page case studies of civic innovations. Public Agenda is also publishing those examples separately, starting with a nice piece on the changing role of tech in social movements. It explores how contemporary social movements share photos and collaboratively produce maps, among other developments.

See also: democracy in the digital age; the new manipulative politics: behavioral economics, microtargeting, and the choice confronting Organizing for Action; qualms about Behavioral Economics; when society becomes fully transparent to the state

Defending the Truth: An Activist’s Guide to Fighting Foreign Disinformation Warfare

(Dayton, OH) I recommend Maciej Bartkowski’s Defending the Truth: An Activist’s Guide to Fighting Foreign Disinformation Warfare from the International Center on Nonviolent Conflict. It’s free, concise, practical, and inspiring.

Some examples of advice:

Establish local networks that can be rapidly activated to verify accuracy of shared information or legitimacy of online personas that call for certain actions in the community.

Educate, drill, and practice. … Teach how to identify a deep fake and conspiracy theories and ways to react to them.

Be aware of anonymous interlocutors who attempt to draw you to causes that seemingly align with your own activism goals. Ask them to reveal their identities first before committing to anything. … Do your homework by vetting your potential partners. Perform due diligence by asking the following questions: Who are these anonymous personas asking me to join an online protest group or a live street protest? Do they know anything about my community? Who do they represent? …

Insist on a degree of self-control in community interactions. Civility does not preclude a conflict, but conflict must always be carried out through disciplined, nonviolent means.

Declare your commitment to truth and verifiable facts, including making public and honest corrections if you inadvertently shared inaccurate information or joined actions set up by fake personas. Praise those who adhere to truth or publicly retract untruthful information that they might have previously shared.

Stress the importance of truth in community as a matter of inviolable human rights. There are no human rights without state institutions being truthful to citizens. There is no public truth without respect for human rights.


why didn’t the internet save democracy?

I don’t always like this format, but Dylan Matthews’ short interviews with Clay Shirky, Jeff Jarvis, David Weinberger, and Alec Ross add up to a useful overview of the question that Matthews poses to all four: “The internet was supposed to save democracy. … What went wrong?”

The only interviewee who really objects to the framing is Ross, who asserts that his predictions were always value-neutral. He didn’t predict that the good guys would win, only that the weak would chasten the strong. So when Putin’s Russia took Obama’s America down a peg, that fulfilled his prophecy (Russia being the weaker).

Some highlights, for me:

Clay Shirky:

I underestimated two things, and both of them make pessimism more warranted. The first is the near-total victory of the “social graph” as the ideal organizational form for social media, to the point that we now use “social media” to mean “media that links you to your friends’ friends,” rather than the broader 2000s use of “media that supports group interaction.”

The second thing I underestimated was the explosive improvement in the effectiveness of behavioral economics and its real-world consequences of making advertising work as advertised.

Taken together, these forces have marginalized the earlier model of the public sphere characterized by voluntary association (which is to say a public sphere that followed [Jürgen] Habermas’s conception), rather than as a more loosely knit fabric for viral ideas to flow through.

Shirky adds that he wrote (in 2008) much more about Meetup than Facebook, when both were still startups. Facebook rules the world and Meetup is marginal. Meetup would better embody a Habermasian theory of the public sphere. (See my post Habermas and critical theory: a primer but also saving Habermas from the deliberative democrats.)


Jeff Jarvis:

I was rather a dogmatist about the value of openness. I still value openness. But as Twitter, Blogger, and Medium co-founder Ev Williams said at [South by Southwest] recently, he and we did not account for the extent of the bad behavior that would follow. These companies accounted and compensated for dark-hat SEO, spam, and other economically motivated behavior. They did not see the extent of the actions of political bad actors and trolls who would destroy for the sake of destruction.


David Weinberger:

It’s a tragedy that while the web connects pages via an open protocol, the connections among people are managed by closed, for-profit corporations. A lot of our political problems come from that: The interests of those corporations and of its users and citizens are not always aligned.

Weinberger wants to emphasize the positive, as well, and to remind us that “applications can be adjusted so that they serve us better.”

See also the online world looks dark (2017) and democracy in the digital age.