the ACM brief on AI

The Association for Computing Machinery (ACM) has 110,000 members. As artificial intelligence rapidly acquires users and uses, some ACM members see an analogy to nuclear physics in the 1940s. Their profession is responsible for technological developments that can do considerable good but that also pose grave dangers. Like physicists in the era of Einstein and Oppenheimer, computer scientists have developed ideas that are now in the hands of governments and companies that the scientists themselves cannot control.

The ACM’s Technology Policy Council has published a brief by David Leslie and Francesca Rossi with the following problem statement: “The rapid commercialization of generative AI (GenAI) poses multiple large-scale risks to individuals, society, and the planet that require a rapid, internationally coordinated response to mitigate.”

Considering that this brief is only three pages long (plus notes), I think it offers a good statement of the issue. It is vague about solutions, but that may be inevitable for this type of document. The question is what should happen next.

One rule of thumb is that legislatures won't act on demands (let alone friendly suggestions) unless someone asks them to adopt specific legislation. In general, legislators lack the time, expertise, and degrees of freedom necessary to develop responses to the huge range of issues that come before them.

This passage from the brief is an example of a first step, but it won’t generate legislation without a lot more elaboration:

Policymakers confronting this range of risks face complex challenges. AI law and policy thus should incorporate end-to-end governance approaches that address risks comprehensively and “by design.” Specifically, they must address how to govern the multiphase character of GenAI systems and the foundation models used to construct them. For instance, liability and accountability for lawfully acquiring and using initial training data should be a focus of regulations tailored to the FM training phase.

The last quoted sentence begins to move in the right direction, but which policymakers should change which laws about which kinds of liability for whom?

The brief repeatedly calls on “policymakers” to act. I am guessing the authors mean governmental policymakers: legislators, regulators, and judges. Indeed, governmental action is warranted. But governments are best seen as complex assemblages of institutions and actors that are in the midst of other social processes, not as the prime movers. For instance, each legislator is influenced by a different set of constituents, donors, movements, and information. If a whole legislature manages to pass a law (which requires coordination), the new legislation will affect constituents, but only to a limited extent. And the degree to which the law is effective will depend on the behavior of many other actors inside of government who are responsible for implementation and enforcement and who have interests of their own.

This means that “the government” is not a potential target for demands: specific governmental actors are. And they are not always the most promising targets, because sometimes they are highly constrained by other parties.

In turn, the ACM is a complex entity, reputed to be quite decentralized and democratic. If I were an ACM member, I would ask: What should policymakers do about AI? But that would be only one question. I would also ask: What should the ACM do to influence various policymakers and other leaders, institutions, and the public? What should my committee or subgroup within the ACM do to influence the ACM? And: Which groups should I be part of?

In advocating a role for the ACM, it would be worth canvassing its assets: 110,000 expert members employed in industry, academia, and governments; 76 years of work so far; and structures for studying issues and taking action. It would also be worth canvassing its deficits. For instance, the ACM may not have deep expertise on some matters, such as politics, culture, social ethics, and economics. And it may lack credibility with the diverse grassroots constituencies and interest groups that should be considered and consulted. Thus an additional question is: Who should be working on the social impact of AI, and how should these activists be configured?

I welcome the brief by David Leslie and Francesca Rossi and wouldn’t expect a three-page document to accomplish more than it does. But I hope it is just a start.

See also: can AI help governments and corporations identify political opponents?; the design choice to make ChatGPT sound like a human; what I would advise students about ChatGPT; the major shift in climate strategy (also about governments as midstream actors).