Category Archives: science, technology and society

Cézanne, Portrait of Gustave Geffroy

Cézanne’s portrait of Gustave Geffroy

In “Cézanne’s Doubt” (1946), Maurice Merleau-Ponty discusses Paul Cézanne’s portrait of the critic Gustave Geffroy (1895-96), which led me to some congruent reflections.

Merleau-Ponty notes that the table “stretches, contrary to the laws of perspective, into the lower part of the picture.” In a photograph of M. Geffroy, the table’s receding edges would converge toward a single vanishing point, and the whole object would be more foreshortened. That is how an artist who followed what we call “scientific perspective” would depict the table. Why does Cézanne show it otherwise?
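Before turning to Cézanne’s answer, it is worth spelling out what “scientific perspective” actually computes. In a simple pinhole model (my gloss, not anything in Merleau-Ponty’s text), a scene point at $(X, Y, Z)$ projects to the image point

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z},$$

where $f$ is the focal length and $Z$ is depth. Along a receding edge $(X_0 + tD_x,\ Y_0 + tD_y,\ Z_0 + tD_z)$, the projected point approaches $(f D_x/D_z,\ f D_y/D_z)$ as $t$ grows, a limit that depends only on the edge’s direction. That is why all parallel edges of the table converge on one vanishing point, and why equal lengths at greater depth occupy ever smaller stretches of the image (foreshortening).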

Imagine that you actually stood before Gustave Geffroy in his study. You would not instantly see the whole scene. Your eye might settle on your host’s face, then jump to the intriguing statuette next to him. The shelves would at first form a vague pattern in the background. Objects for which you have names, such as books, would appear outlined, as borders filled with color. On the other hand, areas of the fireplace or wall would blend into other areas.

You would know that you could move forward toward M. Geffroy, in which case the table would begin to move below you. Just as you see a flying ball as something moving–not as a round zone of color surrounded by other colors–so you might see the table as something that could shift if you moved your body forward.

A photograph of this real-world scene would be a representation of it, very useful for knowing how M. Geffroy looked in his study, and possibly an attractive object in its own right. But the photo would not represent anyone’s experience of the scene. Instead, it would be something that you could experience, rather like the scene itself, by letting your eye move around it, identifying objects of interest, and gradually adding information. You would experience the photograph somewhat differently from the actual scene because you would know that everything was fixed and your body could not move into the space.

A representation of this scene using perspective’s “laws” would make the image useful for certain purposes–for instance, for estimating the size of the table. Michael Baxandall (1978) argued that Renaissance perspective originated in a commercial culture in which patrons enjoyed estimating the size, weight, and value of objects represented in paintings.

But other systems have different benefits. Here is a print in which Toyohara Kunichika (1835-1900) uses European perspective for the upper floor and a traditional Chinese system (with lines that remain parallel and objects placed higher if they are further away) for the lower floor. As Toshidama writes, this combination is useful for allowing us to see as many people and events as possible.

Print by Toyohara Kunichika from Toshidama Japanese Prints

Perspective does not tell us how the world is–not in any simple way. The moon is not actually the size of a window, although it is represented as such in a perspectival picture (East Asian or European). Perspective is a way of representing how we experience the world. And in that respect, it is partial and sometimes even misleading. It overlooks that for us, important things seem bolder; objects can look soft, cold or painful as well as large or small; and some things appear in motion or likely to move, while others seem fixed. We can see a whole subject (such as a French intellectual in his study) and parts of it (his beard), at once and as connected to each other.

Merleau-Ponty writes:

Gustave Geoffrey’s [sic] table stretches into the bottom of the picture, and indeed, when our eye runs over a large surface, the images it successively receives are taken from different points of view, and the whole surface is warped. It is true that I freeze these distortions in repainting them on the canvas; I stop the spontaneous movement in which they pile up in perception and in which they tend toward the geometric perspective. This is also what happens with colors. Pink upon gray paper colors the background green. Academic painting shows the background as gray, assuming that the picture will produce the same effect of contrast as the real object. Impressionist painting uses green in the background in order to achieve a contrast as brilliant as that of objects in nature. Doesn’t this falsify the color relationship? It would if it stopped there, but the painter’s task is to modify all the other colors in the picture so that they take away from the green background its characteristics of a real color. Similarly, it is Cézanne’s genius that when the over-all composition of the picture is seen globally, perspectival distortions are no longer visible in their own right but rather contribute, as they do in natural vision, to the impression of an emerging order, of an object in the act of appearing, organizing itself before our eyes.

The deeper point is that a science of nature is not a science of human experience. Third-person descriptions or models of physical reality are not accounts of how we experience things. And even when we are presented with a scientific description, it is something that we experience. For instance, we actively interpret a photograph or a diagram; we do not automatically imprint all of its pixels. And we listen to a person lecture about science; we do not simply absorb the content.

There are truths that can be expressed in third-person form–for example, that human eyes and brains work in certain ways. But there are also truths about how we experience everything, including scientific claims.

And Cézanne is a scientist of experience.


Quotations from Maurice Merleau-Ponty, “Cézanne’s Doubt” (1946), in Sense and Non-Sense, translated by Hubert L. Dreyfus and Patricia Allen Dreyfus (Northwestern University Press, 1964); image by Paul Cézanne, public domain, via Wikimedia Commons. The image on the Musée d’Orsay’s website suggests a warmer palette, but I don’t know whether it’s openly licensed. I also refer to Michael Baxandall, Painting and Experience in Fifteenth Century Italy: A Primer in the Social History of Pictorial Style (Oxford, 1978).

See also: Svetlana Alpers, The Art of Describing; trying to look at Las Meninas; Wallace Stevens’ idea of order; an accelerating cascade of pearls (on Galileo and Tintoretto); and Rilke, “The Grownup.” My interactive novel, The Anachronist, is about perspective.

how thinking about causality affects the inner life

For many centuries, hugely influential thinkers in each of the Abrahamic faiths combined their foundational belief in an omnipotent deity with Aristotle’s framework of four kinds of causes. Many believers found solace when they discerned a divine role in the four causes.

Aristotle’s framework ran afoul of the Scientific Revolution. Today, there are still ways to be an Abrahamic believer who accepts science, and classical Indian thought offers some alternatives. Nevertheless, the reduction of causes from Aristotle’s four to the two of modern science poses a spiritual and ethical challenge.

(This point is widely understood–and by no means my original contribution–but I thought the following summary might be useful for some readers.)

To illustrate Aristotle’s four causes, consider my hands, which are currently typing this blog post. Why are they doing that?

  • Efficient cause: Electric signals are passing along nerves and triggering muscles to contract or relax. In turn, prior electrical and mechanical events caused those signals to flow–and so on, back through time.
  • Material cause: My hand is made of muscles, nerves, skin, bones, and other materials, which, when so configured and stimulated, move. A statue’s hand that was made of marble would not move.
  • Formal cause: A hand is defined as “the terminal part of the vertebrate forelimb when modified (as in humans) as a grasping organ” (Webster’s dictionary). I do things like grasp, point, and touch with my hand because it is a hand. Some hands do not do these things–for instance, because of disabilities–but those are exceptions (caused by efficient causes) that interfere with the definitive form of a hand.
  • Final cause: I am typing in order to communicate certain points about Aristotle. I behave in this way because I see myself as a scholar and teacher whose words might educate others. In turn, educated people may live better. Therefore, I move my fingers for the end (telos, in Greek) of a good life.

Aristotle acknowledges that some events occur only because of efficient and material causes; these accidents lack ends. However, the four causes apply widely. For example, not only my hand but also the keyboard that I am using could be analyzed in terms of all four causes.

The Abrahamic thinkers who read Aristotle related the Creator to all the causes, but especially to the final cause (see Maimonides, Guide for the Perplexed, 2:1, or Aquinas, Summa Theologiae I, Q44). In a well-ordered, divinely created universe, everything important ultimately happens for a purpose that is good. Dante concludes his Divine Comedy by invoking the final cause of everything, “the love that moves the sun and other stars.”

These Jewish and Christian thinkers follow the Muslim philosopher Avicenna, who even considers cases–like scratching one’s beard–that seem to have only efficient causes and not to happen for any end. “Against this objection, Avicenna maintains that apparently trivial human actions are motivated by unconscious desire for pleasure, the good of the animal soul” (Richardson 2020), which, in turn, is due to the creator.

However, writing in the early 1600s, Francis Bacon criticizes this whole tradition. He assigns efficient and material causes to physics, and formal and final causes to metaphysics. He gestures at the value of metaphysics for religion and ethics, but he doubts that knowledge can advance in those domains. His mission is to improve our understanding and control of the natural world. And for that purpose, he recommends that we keep formal and final causes out of our analysis and practice only what he calls “physics.”

It is rightly laid down that true knowledge is that which is deduced from causes. The division of four causes also is not amiss: matter, form, the efficient, and end or final cause. Of these, however, the latter is so far from being beneficial, that it even corrupts the sciences, except in the intercourse of man with man (Bacon, Novum Organum, P. F. Collier, 1620, II:2).

In this passage and others related to it, Bacon proved prescient. Although plenty of scientists after Bacon have believed in final causes, including divine ends, they only investigate efficient and material causes. Perhaps love moves all the stars, but in Newtonian physics, we strive to explain physical motion in terms of prior events and materials. This is a methodological commitment that yields what Bacon foresaw, the advancement of science.

The last redoubt of final causes was the biological world. My hand moves because of electrical signals, but it seemed that an object as complicated as a hand must have come into existence to serve an end. As Kant writes, “it is quite certain that in terms of purely mechanical principles of nature we cannot even adequately become familiar with, much less explain, organized beings and how they are internally possible.” Kant says that no Isaac Newton could ever arise who would be able to explain “how even a mere blade of grass is produced” using only “natural laws unordered by intention” (Critique of Judgment §75, Pluhar trans.). But then along came just such a Newton in the form of Charles Darwin, who showed that efficient and material explanations suffice in biology, too. A combination of random mutation plus natural selection ultimately yields objects like blades of grass and human hands.
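A toy simulation in the spirit of Dawkins’ well-known “weasel” program (my illustration, not anything in Kant or Darwin; the target string merely plays the role of environmental selection pressure) shows how copying errors plus cumulative selection produce designed-looking order with no designer in the loop:

```python
# Random mutation plus cumulative selection, with no goal inside the mechanism.
# Modeled on Dawkins' "weasel" demonstration; the TARGET string stands in for
# the environment, which only filters variants and issues no instructions.
import random

TARGET = "a mere blade of grass"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s: str) -> int:
    # How many characters happen to survive the environmental filter.
    return sum(a == b for a, b in zip(s, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while best != TARGET:
    generation += 1
    # Efficient and material causes only: copy the current survivor with
    # random errors, then keep whichever copy fits the environment best.
    offspring = [
        "".join(c if random.random() > 0.05 else random.choice(ALPHABET) for c in best)
        for _ in range(100)
    ]
    best = max(offspring, key=fitness)

print(generation, best)  # typically converges within some dozens of generations
```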

A world without final causes–without ends–seems cold and pointless if one begins where Avicenna, Maimonides, and Aquinas did. One option is to follow Bacon (and Kant) by separating physics from metaphysics, aesthetics, and ethics and assigning the final causes to the latter subjects. Indeed, we see this distinction in the modern university, where the STEM departments deal with efficient causes, and final causes are discussed in some of the humanities. Plenty of scientists continue to use final-cause explanations when they think about religion, ethics, or beauty–they just don’t do that as part of their jobs.

However, Bacon’s warning still resonates. He suspects that progress is only possible when we analyze efficient and material causes. We may already know the final causes relevant to human life, but we cannot learn more about them. This is fine if everyone is convinced about the purpose of life. However, if we find ourselves disagreeing about ethics, religion, and aesthetics, then an inability to make progress becomes an inability to know what is right, and the result can be deep skepticism.

Michael Rosen (2022) reads both Rousseau and Kant as “moral unanimists”–philosophers who believe that everyone already knows the right answer about moral issues. But today hardly anyone is a “moral unanimist,” because we are more aware of diversity. Nietzsche describes the outcome (here, in a discussion of history that has become a science):

Its noblest claim nowadays is that it is a mirror, it rejects all teleology, it does not want to ‘prove’ anything any more; it scorns playing the judge, and shows good taste there, – it affirms as little as it denies, it asserts and ‘describes’ . . . All this is ascetic to a high degree; but to an even higher degree it is nihilistic, make no mistake about it! You see a sad, hard but determined gaze, – an eye peers out, like a lone explorer at the North Pole (perhaps so as not to peer in? or peer back? . . .). Here there is snow, here life is silenced; the last crows heard here are called ‘what for?’, ‘in vain’, ‘nada’ (Genealogy of Morals, Kaufmann trans., 3:26)

Earlier in the same book, Nietzsche recounts how, as a young man, he was shaped by Schopenhauer’s argument that life has no purpose or design. But Nietzsche says he detected a harmful psychological consequence:

Precisely here I saw the great danger to mankind, its most sublime temptation and seduction – temptation to what? to nothingness? – precisely here I saw the beginning of the end, standstill, mankind looking back wearily, turning its will against life, and the onset of the final sickness becoming gently, sadly manifest: I understood the morality of compassion [Mitleid], casting around ever wider to catch even philosophers and make them ill, as the most uncanny symptom of our European culture which has itself become uncanny, as its detour to a new Buddhism? to a new Euro-Buddhism? to – nihilism? (Genealogy of Morals, Preface:6)

After mentioning Buddhism, Nietzsche critically explores the recent popularity of the great Buddhist virtue–compassion–in Europe.

Indeed, one of the oldest and most widely shared philosophical premises in Buddhism is “dependent origination,” which is the idea that everything happens because of efficient causes alone and not for teleological reasons. (I think that formal causes persist in Theravada texts but are rejected in Mahayana.)

Dependent origination is taken as good news. By realizing that everything we believe and wish for is the automatic result of previous accidental events, we free ourselves from these mental states. And by believing the same about everyone else’s beliefs and desires, we gain unlimited compassion for those creatures. Calm benevolence fills the mind and excludes the desires that brought suffering while we still believed in their intrinsic value. A very ancient verse which goes by the short title ye dharma hetu says (roughly): “Of all the things that have causes, the enlightened one has shown what causes them, and thereby the great renouncer has shown how they cease.”

I mention this argument not necessarily to endorse it. Much classical Buddhist thought presumes that a total release from the world of causation is possible, whether instantly or over aeons. If one doubts that possibility, as I do, then the news that there are no final causes is no longer consoling.


Secondary sources: Richardson, Kara, “Causation in Arabic and Islamic Thought”, The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.); Michael Rosen, The Shadow of God: Kant, Hegel, and the Passage from Heaven to History, Harvard University Press, 2022. See also how we use Kant today; does skepticism promote a tranquil mind?; does doubting the existence of the self tame the will?; spirituality and science; and the progress of science.

using a model to explain a single case

Charles Sanders Peirce introduced the logic of what he called “abduction” — a complement to both deduction and induction — with this example:

The surprising fact, C, is observed;
But if A were true, C would be a matter of course,
Hence, there is reason to suspect that A is true.

At least since Harry Frankfurt in 1958, many readers have been skeptical. Can’t we make up an infinite number of premises that could explain any surprising fact?

For instance, Kamala Harris has gained in the polls compared to Joe Biden. If it were true that voters generally prefer female presidential candidates, then her rise would be a “matter of course.” But it is a mistake to infer that Harris has gained because she is a woman. Other explanations are possible and, indeed, more plausible.

Note that “voters prefer women candidates” is an empirical generalization. Generalizations cannot be derived from any single case. If that is what abduction means, then it seems shaky. Its only role might be to suggest hypotheses that should then be tested with representative samples or controlled experiments.
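Seen this way, one modest job for abduction can be made concrete. Here is a minimal sketch (my illustration, with invented hypotheses and numbers) that treats abduction as ranking candidate explanations by how much of a “matter of course” each would make the surprising fact, weighted by prior plausibility:

```python
# A toy sketch of abductive ranking; all hypotheses and numbers are invented.
# Surprising fact C: Harris has gained in the polls.
# Each candidate A gets (prior plausibility, P(C given A)); rank by the product.

candidates = {
    "voters generally prefer female candidates": (0.05, 0.90),
    "a fresh nominee energizes the party":       (0.45, 0.80),
    "ordinary polling noise":                    (0.50, 0.15),
}

def abduce(candidates):
    # Score each hypothesis by prior * likelihood (an unnormalized posterior).
    return max(candidates, key=lambda h: candidates[h][0] * candidates[h][1])

print(abduce(candidates))  # -> "a fresh nominee energizes the party"
```

On this reading, abduction proves nothing; it selects the hypothesis most worth testing next, which is exactly the hypothesis-suggesting role just described.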

But what if A (the premise) is not an empirical generalization but rather a model? For instance, a model might posit that Harris’ current position in the polls is the combined result of eight different factors, some of them general (voters usually follow partisan cues) and some of them quite unrepeatable (the incumbent president has suddenly bowed out).

Positing a model to explain a single case has risks of its own. Perhaps we add no insight by contriving an elaborate model just to fit the observed reality. And we might be tempted to treat the various components of the model as general patterns and apply them elsewhere, even though one case should give us no basis for generalizing.

But let’s look at this example from a different perspective–a pragmatic one, as Peirce would recommend. After all, Peirce calls his topic “Abductive Judgment” (Peirce 1903), suggesting a connection to practical reason or phronesis.

The question is: what should (someone) do? For instance, a month ago, should Joe Biden have dropped out and endorsed Harris? Right now, should Harris accentuate her gender or try to balance it with a male vice-presidential candidate?

Inductive logic might offer some insights. Research suggests that the choice of vice-president has never affected the outcome of a presidential election, and this general inference would suggest that Harris needn’t pay attention to the gender of her VP. But induction cannot answer other key questions, such as what to do when you replace the nominee 100 days before the election. (There is no data on this matter because it hasn’t happened before.)

Besides, various factors can interrelate. The general pattern that vice-presidents do not matter might be reversed in a situation where the nominee had herself been the second person on the ticket until last week.

And the important questions are inescapably normative. For Harris, one good goal is to win the election, but she must attend to other values as well. For instance, I think she should adopt positions that would benefit working-class voters of all races. Possibly this would help her win by restoring some of Biden’s working-class coalition from 2020. Polling data would help us assess that claim. But I favor a worker-oriented strategy for reasons of justice, and I think the important question is how (not whether) to campaign that way.

Models of social phenomena typically incorporate descriptive elements (Harris is down by two points today), causal claims (Trump is still benefitting from a minor convention bump), and normative premises (Harris must win)–all combined for the purpose of guiding action.
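As a schematic illustration (all names and numbers invented; this is not a real campaign model), such a model might be written down so that each kind of element can be challenged separately:

```python
# A sketch of a social model combining descriptive, causal, and normative
# elements to guide action. Every name and number here is hypothetical.
from dataclasses import dataclass

@dataclass
class CampaignModel:
    deficit_pts: float = 2.0          # descriptive: down by two points today
    convention_bump_pts: float = 1.5  # causal claim: opponent's bump is transient
    must_win: bool = True             # normative premise

    def recommend(self) -> str:
        # The causal claim discounts the descriptive deficit; the normative
        # premise decides what to do about whatever remains.
        effective_deficit = self.deficit_pts - self.convention_bump_pts
        if self.must_win and effective_deficit > 0:
            return "adopt a worker-oriented strategy to close the gap"
        return "hold course and re-check the model against new polls"

print(CampaignModel().recommend())
```

Writing the elements out this way lets the polling number, the causal discount, and the goal each be tested or contested on its own as events unfold.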

Arguably, we cannot do better than abduction when we are trying to decide what to do next. Beginning with a surprising fact, C (and almost anything can be seen as “surprising”), we must come up with something, A, that we can rely on to guide our next steps. A should not be a single sentence, but rather a model composed of various elements.

It is worthwhile to consider evidence from other cases that may validate or challenge components of A. But it is not possible to prove or disprove A. As the pioneering statistician Georg Rasch said, “Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must of course be investigated. This also means that a model is never accepted finally, only on trial.”

If a model cannot be true, why should we make it explicit? It lays out what we are assuming so that we can test the assumptions as we act. It promotes learning from error. And it can help us to hold decision-makers accountable. When evaluating leaders, we should not assess the outcomes, which are beyond anyone’s control, but rather the quality of their models and their ability to adjust in the light of new experience.

Sources: Peirce, C.S. 1903. Lectures on Pragmatism, Lecture 1: Pragmatism: The Normative Sciences; Frankfurt, Harry G. “Peirce’s notion of abduction.” The Journal of Philosophy 55.14 (1958): 593-597. See also: choosing models that illuminate issues–on the logic of abduction in the social sciences and policy; modeling social reality; different kinds of social models

analytic and holistic reasoning about social questions

“President Biden’s student loan cancellations will bring relief.” “Retrospectively forgiving loans creates a moral hazard.” “At this college, students study the liberal arts.” “An unexamined life is not worth living for humans.”

These claims are, respectively, about a specific act (a policy announced yesterday), a pattern that applies across many cases, an assessment of an institution, and a universal principle.

These statements may be related. An ambitious defense of Biden’s decision to forgive student loans might connect that act to liberal education and thence to a good life, whereas a critique might tie the loan cancellation to cost increases. A good model of a social issue or question often combines several such components.

In this post, I will contrast two ways of thinking about models and their components.

  1. Analytic reasoning

Analytic reasoning seeks characteristics that apply across cases. We can define government aid, moral hazard, and education, defend these definitions against alternatives, and then expect them generally to have the same significance wherever they apply. For example, becoming more self-aware is desirable, all else being equal or as far as that goes (ceteris paribus or pro tanto). We need a definition of self-awareness that allows us to understand what tends to produce it and what good it does. The same goes for loans, loan-forgiveness, and so on.

Methods like controlled field experiments and regression models require analysis, and they demonstrate that it has value. Ethical arguments that depend on sharply defined universals are quintessentially analytic. Qualitative research is often quite analytic, too, particularly when either the researcher or the research subjects employ general concepts.

Analytic reasoning offers the promise of generalizable solutions to social problems. For instance, let’s say you believe that we should spend more money on schools in poor communities. In that case, you are thinking analytically: you view money as an identifiable factor with predictable impact. Note that you might advocate increasing the national education budget while also being sensitive to local differences about things other than money.

  2. Holistic reasoning

Holistic reasoning need not be any less rigorous, precise, or tough-minded than analytic reasoning, but it works with different objects: whole things. For example, we can describe a college as an entity. Doing that requires saying many specific things about the institution, but each claim is not meant to generalize to other places.

At a given (imaginary) institution, the interplay between a rural setting, an affluent student body, an applied-science curriculum, a modest endowment, and a recent crisis of leadership could produce unexpected results, and those are only some of the factors that would explain the particular ethos that emerges at that college.

Holistic reasoning is wise if each factor is closely related to others in its specific context. For instance, often a statistic about a place is the result of decisions about what and how to measure, which (in turn) depend on who’s in charge and what incentives they face, which depends on prior political decisions, and so on–indefinitely. From a holistic perspective, a statistic lacks meaning except in conjunction with many other facts.

There is a link here to holistic theories of meaning and/or language, e.g., Robert Brandom: “one cannot have any concepts unless one has many concepts. For the content of each concept is articulated by its inferential relations to other concepts. Concepts, then, must come in packages” (Brandom 2000).

Holistic reasoning is also wise if values change their significance depending on the context, a view labeled as “holism” in metaethics. We are familiar with the idea that lying is bad–except in circumstances when it is good, or even courageous and necessary. An ethical holist believes that there are good and bad value-judgments, but they are not about abstract categories (such as lying). They are about wholes.

Finally, holistic reasoning is wise if we can gain insights about meaning that would be lost in analysis. In Clifford Geertz’ classic interpretation of a Balinese cockfight (Geertz 1972), he successively describes that phenomenon as “a chicken hacking another mindless to bits” (p. 84); “deep play” (p. 71), or an activity that has intrinsic interest for those involved; “fundamentally a dramatization of status concerns” (p. 74); an “encompassing structure” that presents a coherent vision of “death, masculinity, rage, pride, loss, beneficence, chance” (p. 79); and “a kind of sentimental education” from which a Balinese man “learns what his culture’s ethos and his private sensibility (or, anyway, certain aspects of them) look like when spelled out externally in a collective text” (p. 83).

Geertz offers lots of specific empirical data and uses concepts that would apply across cases, including terms like “chickens” and “betting” that arise globally. However, he is not primarily interested in what causes cockfights in Bali or what they cause. His main question is: What is this thing? Since Balinese cockfighting is a human activity, what it is is what it means. And it has meaning as a whole thing, not as a collection of parts.

Conclusion

This discussion may suggest that holistic reasoning is more sensitive and thoughtful than analytic reasoning. But recall that ambitious social reform proposals depend on analytic claims. If everything is contextual, then there is no basis for changing policies or priorities that apply across cases. Holistic reasoning may be conservative, in a Burkean sense–for better or for worse.

Then again, a “whole” need not be something small and local, like a cockfight in Bali or a college in the USA. A nation-state can also be analyzed and interpreted holistically and changed as a result.

It is trite to say that we need both analytic and holistic reasoning about policy, but we do. Instead of jumping to that conclusion, I’ve tried to draw a contrast that suggests some real disadvantages of each.

References: Brandom, Robert B., Articulating Reasons: An Introduction to Inferentialism, Harvard University Press, 2000; Geertz, Clifford, “Deep play: Notes on the Balinese cockfight,” Daedalus 134.4 (2005): 56-86. See also: against methodological individualism; applied ethics need not mean applying ethical systems; what must we believe?; modeling social reality; choosing models that illuminate issues–on the logic of abduction in the social sciences and policy; and different kinds of social models.

the age of cybernetics

A pivotal period in the development of our current world was the first decade after WWII. Much happened then, including the first great wave of decolonization and the solidification of democratic welfare states in Europe, but I’m especially interested in the intellectual and technological developments that bore the (now obsolete) label of “cybernetics.”

I’ve been influenced by reading Francisco Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (first ed. 1991, revised ed., 2017), but I’d tell the story in a somewhat different way.

The War itself saw the rapid development of entities that seemed analogous to human brains. Those included the first computers, radar, and mechanisms for directing artillery. They also included extremely complex organizations for manufacturing and deploying arms and materiel. Accompanying these pragmatic breakthroughs were successful new techniques for modeling complex processes mathematically, plus intellectual innovations such as artificial neurons (McCulloch & Pitts 1943), feedback (Rosenblueth, Wiener, and Bigelow 1943), game theory (von Neumann & Morgenstern, 1944), stored-program computers (Turing 1946), information theory (Shannon 1948), systems engineering (Bell Labs, 1940s), and related work in economic theory (e.g., Schumpeter 1942) and anthropology (Mead 1942).

Perhaps these developments were overshadowed by nuclear physics and the Bomb, but even the Manhattan Project was a massive application of systems engineering. Concepts, people, money, minerals, and energy were organized for a common task.

After the War, some of the contributors recognized that these developments were related. The Macy Conferences, held regularly from 1942 to 1960, drew a Who’s Who of scientists, clinicians, philosophers, and social scientists. The topics of the first post-War Macy Conference (March 1946) included “Self-regulating and teleological mechanisms,” “Simulated neural networks emulating the calculus of propositional logic,” “Anthropology and how computers might learn how to learn,” “Object perception’s feedback mechanisms,” and “Deriving ethics from science.” Participants demonstrated notably diverse intellectual interests and orientations. For example, both Margaret Mead (a qualitative and socially critical anthropologist) and Norbert Wiener (a mathematician) were influential.

Wiener (who had graduated from Tufts in 1909 at age 14) argued that the central issue could be labeled “cybernetics” (Wiener & Rosenblueth 1947). He and his colleagues derived this term from the ancient Greek word for the person who steers a boat. For Wiener, the basic question was how any person, another animal, a machine, or a society attempts to direct itself while receiving feedback.

According to Varela, Thompson, and Rosch, the ferment and diversity of the first wave of cybernetics was lost when a single model became temporarily dominant. This was the idea of the von Neumann machine:

Such a machine stores data that may symbolize something about the world. Human beings write elaborate and intentional instructions (software) for how those data will be changed (computation) in response to new input. There is an input device, such as a punchcard reader or keyboard, and an output mechanism, such as a screen or printer. You type something, the processor computes, and out comes a result.
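As a concrete sketch (my toy example, not drawn from Varela, Thompson, and Rosch), the essence of the architecture fits in a few lines: a single memory holds both program and data, and a processor loops through fetch, decode, execute:

```python
# A toy von Neumann machine: one memory stores both the instructions and the
# data they operate on; a loop fetches, decodes, and executes in sequence.
memory = [
    ("LOAD", 9),    # put memory[9] in the accumulator
    ("ADD", 10),    # add memory[10] to the accumulator
    ("STORE", 11),  # write the accumulator to memory[11]
    ("PRINT", 11),  # output memory[11]
    ("HALT", 0),
    None, None, None, None,
    2, 3, 0,        # data lives in the same memory as the program
]

pc, acc = 0, 0      # program counter and accumulator
while True:
    op, arg = memory[pc]  # fetch and decode
    pc += 1
    if op == "LOAD":
        acc = memory[arg]
    elif op == "ADD":
        acc += memory[arg]
    elif op == "STORE":
        memory[arg] = acc
    elif op == "PRINT":
        print(memory[arg])  # the output device
    elif op == "HALT":
        break
```

Every step here was written out in advance by a programmer; nothing about the machine’s organization emerges on its own.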

One can imagine human beings, other animals, and large organizations working like von Neumann machines. For instance, we get input from vision, we store memories, we reason about what we experience, and we say and do things as a result. But there is no evident connection between this architecture and the design of the actual human brain. (Where in our head is all that complicated software stored?) Besides, computers designed in this way made disappointing progress on artificial intelligence between 1945 and 1970. The 1968 movie 2001: A Space Odyssey envisioned a computer with a human personality by the turn of our century, but real technology has lagged far behind that.

The term “cybernetics” had named a truly interdisciplinary field. After about 1956, the word faded as the intellectual community split into separate disciplines, including computer science.

This was also the period when behaviorism was dominant in psychology (presuming that all we do is to act in ways that independent observers can see–there is nothing meaningful “inside” us). It was perhaps the peak of what James C. Scott calls “high modernism” (the idea that a state can accurately see and reorganize the whole society). And it was the heyday of “pluralism” in political science (which assumes that each group that is part of a polity automatically pursues its own interests). All of these movements have a certain kinship with the von Neumann architecture.

An alternative was already considered in the era of cybernetics: emergence from networks. Instead of designing a complex system to follow instructions, one can connect numerous simple components into a network and give them simple rules for changing their connections in response to feedback. The dramatic changes in our digital world since ca. 1980 have used this approach rather than any central design, and now the analogy of machine intelligence to neural networks is dominant. Emergent order can operate at several levels at once; for example, we can envision individuals whose brains are neural networks connecting via electronic networks (such as the Internet) to form social networks and culture.
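Here is a minimal sketch of that contrast (all parameters invented for illustration): identical simple units, random initial wiring, and one local feedback rule, with no global design anywhere:

```python
# Emergence from a network: simple units, random wiring, one local rule.
# Links between co-active units strengthen; all other links slowly decay.
import random

N = 20
w = [[random.uniform(0.0, 0.1) for _ in range(N)] for _ in range(N)]  # weights
state = [random.choice([0, 1]) for _ in range(N)]

for step in range(100):
    # Each unit fires if its weighted input crosses a threshold.
    new_state = [
        1 if sum(w[i][j] * state[j] for j in range(N)) > 0.5 else 0
        for i in range(N)
    ]
    # Local feedback (Hebbian-style): no component sees the whole network.
    for i in range(N):
        for j in range(N):
            if new_state[i] and state[j]:
                w[i][j] = min(1.0, w[i][j] + 0.01)
            else:
                w[i][j] *= 0.99
    state = new_state

# Whatever clusters of strong links now exist were never designed; they
# emerged from local rules responding to feedback.
```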

I have sketched this history–briefly and unreliably, because it’s not my expertise–without intending value-judgments. I am not sure to what extent these developments have been beneficial or destructive. But it seems important to understand where we’ve come from to know where we should go from here.

See also: growing up with computers; ideologies and complex systems; The truth in Hayek; the progress of science; the human coordination involved in AI; the difference between human and artificial intelligence: relationships