the ACM brief on AI

The Association for Computing Machinery (ACM) has 110,000 members. As artificial intelligence rapidly acquires users and uses, some ACM members see an analogy to nuclear physics in the 1940s. Their profession is responsible for technological developments that can do considerable good but that also pose grave dangers. Like physicists in the era of Einstein and Oppenheimer, computer scientists have developed ideas that are now in the hands of governments and companies beyond their control.

The ACM’s Technology Policy Council has published a brief by David Leslie and Francesca Rossi with the following problem statement: “The rapid commercialization of generative AI (GenAI) poses multiple large-scale risks to individuals, society, and the planet that require a rapid, internationally coordinated response to mitigate.”

Considering that this brief is only three pages long (plus notes), I think it offers a good statement of the issue. It is vague about solutions, but that may be inevitable for this type of document. The question is what should happen next.

One rule of thumb is that legislatures won’t act on demands (let alone friendly suggestions) unless someone asks them to adopt specific legislation. In general, legislators lack the time, expertise, and degrees of freedom necessary to develop responses to the huge range of issues that come before them.

This passage from the brief is an example of a first step, but it won’t generate legislation without a lot more elaboration:

Policymakers confronting this range of risks face complex challenges. AI law and policy thus should incorporate end-to-end governance approaches that address risks comprehensively and “by design.” Specifically, they must address how to govern the multiphase character of GenAI systems and the foundation models used to construct them. For instance, liability and accountability for lawfully acquiring and using initial training data should be a focus of regulations tailored to the FM training phase.

The last quoted sentence begins to move in the right direction, but which policymakers should change which laws about which kinds of liability for whom?

The brief repeatedly calls on “policymakers” to act. I am guessing the authors mean governmental policymakers: legislators, regulators, and judges. Indeed, governmental action is warranted. But governments are best seen as complex assemblages of institutions and actors that are in the midst of other social processes, not as the prime movers. For instance, each legislator is influenced by a different set of constituents, donors, movements, and information. If a whole legislature manages to pass a law (which requires coordination), the new legislation will affect constituents, but only to a limited extent. And the degree to which the law is effective will depend on the behavior of many other actors inside of government who are responsible for implementation and enforcement and who have interests of their own.

This means that “the government” is not a potential target for demands: specific governmental actors are. And they are not always the most promising targets, because sometimes they are highly constrained by other parties.

In turn, the ACM is a complex entity, reputed to be quite decentralized and democratic. If I were an ACM member, I would ask: What should policymakers do about AI? But that would only be one question. I would also ask: What should the ACM do to influence various policymakers and other leaders, institutions, and the public? What should my committee or subgroup within the ACM do to influence the ACM? And: Which groups should I be part of?

In advocating a role for the ACM, it would be worth canvassing its assets: 110,000 expert members who are employed in industry, academia, and governments; 76 years of work so far; and structures for studying issues and taking action. It would also be worth canvassing its deficits. For instance, the ACM may not have deep expertise on some matters, such as politics, culture, social ethics, and economics. And it may lack credibility with the diverse grassroots constituencies and interest groups that should be considered and consulted. Thus an additional question is: Who should be working on the social impact of AI, and how should these activists be configured?

I welcome the brief by David Leslie and Francesca Rossi and wouldn’t expect a three-page document to accomplish more than it does. But I hope it is just a start.

See also: can AI help governments and corporations identify political opponents?; the design choice to make ChatGPT sound like a human; what I would advise students about ChatGPT; the major shift in climate strategy (also about governments as midstream actors).

can AI help governments and corporations identify political opponents?

In “Large Language Model Soft Ideologization via AI-Self-Consciousness,” Xiaotian Zhou, Qian Wang, Xiaofeng Wang, Haixu Tang, and Xiaozhong Liu use ChatGPT to identify the signatures of “three distinct and influential ideologies: ‘Trumplism’ (entwined with US politics), ‘BLM (Black Lives Matter)’ (a prominent social movement), and ‘China-US harmonious co-existence is of great significance’ (propaganda from the Chinese Communist Party).” They unpack each of these ideologies as a connected network of thousands of specific topics, each one having a positive or negative valence. For instance, someone who endorses the Chinese government’s line may mention US-China relationships and the Nixon-Mao summit as a pair of linked positive ideas.

The authors raise the concern that this method would be a cheap way to predict the ideological leanings of millions of individuals, whether or not they choose to express their core ideas. A government or company that wanted to keep an eye on potential opponents wouldn’t have to search social media for explicit references to their issues of concern. It could infer an oppositional stance from the pattern of topics that the individuals choose to mention.
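To make the concern concrete, here is a minimal sketch of the underlying idea. It is mine, not the authors’ pipeline, and every topic name, valence, and number in it is invented for illustration. It represents one ideology’s signature as a set of topics with valences and scores a person by the topics they happen to mention:

```python
# Illustrative sketch only -- not the method of Zhou et al.
# An ideology's "signature" maps topics to valences:
# +1 if the ideology treats the topic positively, -1 if negatively.
SIGNATURE = {
    "US-China relationship": +1,  # hypothetical entries
    "Nixon-Mao summit": +1,
    "trade sanctions": -1,
}

def alignment(mentions: dict) -> float:
    """Average the person's stance on each mentioned topic, weighted by
    the ideology's valence for that topic. Topics outside the signature
    are ignored; positive output suggests affinity, negative opposition."""
    overlap = {t: s for t, s in mentions.items() if t in SIGNATURE}
    if not overlap:
        return 0.0
    return sum(s * SIGNATURE[t] for t, s in overlap.items()) / len(overlap)

# The person never states a political position; they merely discuss
# two linked topics favorably, and the signature does the rest.
person = {"US-China relationship": 0.8, "Nixon-Mao summit": 0.9}
print(alignment(person))  # 0.85: inferred affinity with the ideology
```

Nothing in `person` expresses an explicit ideological claim; the inference comes entirely from which topics co-occur and how they are valued, which is what makes this kind of profiling cheap to run at scale.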

I saw this article because the authors cite my piece, “Mapping ideologies as networks of ideas,” Journal of Political Ideologies (2022): 1-28. (Google Scholar notified me of the reference.) Along with many others, I am developing methods for analyzing people’s political views as belief-networks.

I have a benign motivation: I take seriously how people explicitly articulate and connect their own ideas and seek to reveal the highly heterogeneous ways that we reason. I am critical of methods that reduce people’s views to widely shared, unconscious psychological factors.

However, I can see that a similar method could be exploited to identify individuals as targets for surveillance and discrimination. Whereas I am interested in the whole of an individual’s stated belief-network, a powerful government or company might use the same data to infer whether a person would endorse an idea that it finds threatening, such as support for unions or affinity for a foreign country. If the individual chose to keep that particular idea private, the company or government could still infer it and take punitive action.

I’m pretty confident that my technical acumen is so limited that I will never contribute to effective monitoring. If I have anything to contribute, it’s in the domain of political theory. But this is something (yet another thing) to worry about.

See also: Mapping Ideologies as Networks of Ideas (talk); Mapping Ideologies as Networks of Ideas (paper); what if people’s political opinions are very heterogeneous?; how intuitions relate to reasons: a social approach; the difference between human and artificial intelligence: relationships

ideological pluralism as an antidote to cliché

Although a group of like-minded people can be precise and intellectually rigorous, the combination of consensus plus rigor is relatively rare. When we disagree fundamentally, we face more pressure to define our terms and specify our assumptions, predictions, generalizations, and other aspects of our mental models.

For instance, I often find myself in conversations in which almost everyone shares a general distaste for what they call “capitalism” and wants to blame it for specific problems. Capitalism can mean a combination of: commodification (treating categories of things as exchangeable), property rights, market exchanges, debt, inheritance, financial instruments, capital markets, incorporated entities, state enforcement of certain kinds of contracts, general-purpose business corporations, bureaucratic corporations, professions such as law and accounting, economies in which the state sector is relatively small, international trade and foreign direct investment, and norms such as materialism, competition, or individualism (or conformity and subservience). Most of these components are matters of degree; for instance, a society can have a smaller or larger capital market. The various components can go together (and some thoughtful people see them as closely interconnected), but it is also possible for them to come apart. For instance, there have been many market economies without capital markets.

If capitalism is responsible for bad (or good) outcomes, we should be able to say which components are relevant and why. In a room full of people who dislike capitalism, it is often possible and tempting to avoid such specificity.

I’ve written before against the idea of viewpoint diversity, because I think that is the wrong way to conceptualize and defend pluralism. A better way may be to see disagreement as an antidote to clichés.

See also: trying to keep myself honest; defining capitalism; social justice should not be a cliché; on the proper use of moral clichés