the age of cybernetics

A pivotal period in the development of our current world was the first decade after WWII. Much happened then, including the first great wave of decolonization and the solidification of democratic welfare states in Europe, but I’m especially interested in the intellectual and technological developments that bore the (now obsolete) label of “cybernetics.”

I’ve been influenced by reading Francisco Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (first ed. 1991, revised ed. 2017), but I’d tell the story in a somewhat different way.

The War itself saw the rapid development of entities that seemed analogous to human brains. Those included the first computers, radar, and mechanisms for directing artillery. They also included extremely complex organizations for manufacturing and deploying arms and materiel. Accompanying these pragmatic breakthroughs were successful new techniques for modeling complex processes mathematically, plus intellectual innovations such as artificial neurons (McCulloch & Pitts 1943), feedback (Rosenblueth, Wiener, and Bigelow 1943), game theory (von Neumann & Morgenstern 1944), stored-program computers (Turing 1946), information theory (Shannon 1948), systems engineering (Bell Labs, 1940s), and related work in economic theory (e.g., Schumpeter 1942) and anthropology (Mead 1942).

Perhaps these developments were overshadowed by nuclear physics and the Bomb, but even the Manhattan Project was a massive application of systems engineering. Concepts, people, money, minerals, and energy were organized for a common task.

After the War, some of the contributors recognized that these developments were related. The Macy Conferences, held regularly from 1942-1960, drew a Who’s Who of scientists, clinicians, philosophers, and social scientists. The topics of the first post-War Macy Conference (March 1946) included “Self-regulating and teleological mechanisms,” “Simulated neural networks emulating the calculus of propositional logic,” “Anthropology and how computers might learn how to learn,” “Object perception’s feedback mechanisms,” and “Deriving ethics from science.” Participants demonstrated notably diverse intellectual interests and orientations. For example, both Margaret Mead (a qualitative and socially critical anthropologist) and Norbert Wiener (a mathematician) were influential.

Wiener (who had graduated from Tufts in 1909 at age 14) argued that the central issue could be labeled “cybernetics” (Wiener & Rosenblueth 1947). He and his colleagues derived this term from the ancient Greek word for the person who steers a boat. For Wiener, the basic question was how a person, another animal, a machine, or a society attempts to direct itself while receiving feedback.

According to Varela, Thompson, and Rosch, the ferment and diversity of the first wave of cybernetics were lost when a single model became temporarily dominant. This was the idea of the von Neumann machine:

Such a machine stores data that may symbolize something about the world. Human beings write elaborate and intentional instructions (software) for how those data will be changed (computation) in response to new input. There is an input device, such as a punchcard reader or keyboard, and an output mechanism, such as a screen or printer. You type something, the processor computes, and out comes a result.
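To make that picture concrete, here is a minimal sketch in Python of the kind of machine just described. The toy instruction set, the memory keys, and the run function are my own illustrative inventions, not any historical design; the point is only the shape of the architecture: a human writes the instructions in advance, data and instructions sit in a store, input goes in, computation happens, and a result comes out.

```python
# A toy illustration (not any real machine): data and instructions share one
# store, a processor steps through the instructions, and input flows to output
# through computation that a human specified in advance.

def run(program, memory, user_input):
    """Execute a tiny stored program against memory and one piece of input."""
    memory["input"] = user_input          # the input device writes into the store
    for op, key, value in program:        # the processor fetches instructions in order
        if op == "add":
            memory[key] = memory.get(key, 0) + value
        elif op == "copy":
            memory[key] = memory[value]
    return memory.get("output")           # the output mechanism reads the result back out

# The "software" is written by a person beforehand; the machine only follows it.
program = [("copy", "output", "input"), ("add", "output", 1)]
print(run(program, {}, 41))               # type something in, out comes 42
```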

One can imagine human beings, other animals, and large organizations working like von Neumann machines. For instance, we get input from vision, we store memories, we reason about what we experience, and we say and do things as a result. But there is no evident connection between this architecture and the design of the actual human brain. (Where in our heads is all that complicated software stored?) Besides, computers designed in this way made disappointing progress on artificial intelligence between 1945 and 1970. The 1968 movie 2001: A Space Odyssey envisioned a computer with a human personality by the turn of our century, but real technology has lagged far behind that.

The term “cybernetics” had named a truly interdisciplinary field. After about 1956, the word faded as the intellectual community split into separate disciplines, including computer science.

This was also the period when behaviorism was dominant in psychology (presuming that all we do is act in ways that independent observers can see–there is nothing meaningful “inside” us). It was perhaps the peak of what James C. Scott calls “high modernism” (the idea that a state can accurately see and reorganize the whole society). And it was the heyday of “pluralism” in political science (which assumes that each group that is part of a polity automatically pursues its own interests). All of these movements have a certain kinship with the von Neumann architecture.

An alternative was already considered in the era of cybernetics: emergence from networks. Instead of designing a complex system to follow instructions, one can connect numerous simple components into a network and give them simple rules for changing their connections in response to feedback. The dramatic changes in our digital world since ca. 1980 have used this approach rather than any central design, and now the analogy of machine intelligence to neural networks is dominant. Emergent order can operate at several levels at once; for example, we can envision individuals whose brains are neural networks connecting via electronic networks (such as the Internet) to form social networks and culture.
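For contrast with the stored-program sketch above, here is an equally minimal Python sketch of the network idea. The specific rule (a classic perceptron update) and the toy task (learning logical OR) are my choices for illustration; what matters is that nobody writes the answer into the system. A few simple units adjust their connections by a local rule in response to feedback, and competent behavior emerges from repetition.

```python
# A toy illustration of emergence from a network: simple connections adjust
# themselves in response to feedback until the right behavior appears.
import random

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]                        # the feedback signal: logical OR
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0

for _ in range(100):                          # repeat the same simple rule many times
    for (x1, x2), target in zip(inputs, targets):
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output               # feedback: how wrong was this output?
        weights[0] += 0.1 * error * x1        # each connection nudges itself locally
        weights[1] += 0.1 * error * x2
        bias += 0.1 * error

print([1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
       for x1, x2 in inputs])                 # typically [0, 1, 1, 1] after training
```

No central designer told the network what OR means; the pattern settles out of many small corrections, which is the sense in which this kind of order is emergent rather than programmed.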

I have sketched this history–briefly and unreliably, because it’s not my expertise–without intending value-judgments. I am not sure to what extent these developments have been beneficial or destructive. But it seems important to understand where we’ve come from to know where we should go from here.

See also: growing up with computers; ideologies and complex systems; The truth in Hayek; the progress of science; the human coordination involved in AI; the difference between human and artificial intelligence: relationships