can AI help governments and corporations identify political opponents?

In “Large Language Model Soft Ideologization via AI-Self-Consciousness,” Xiaotian Zhou, Qian Wang, Xiaofeng Wang, Haixu Tang, and Xiaozhong Liu use ChatGPT to identify the signatures of “three distinct and influential ideologies: ‘Trumplism’ (entwined with US politics), ‘BLM (Black Lives Matter)’ (a prominent social movement), and ‘China-US harmonious co-existence is of great significance’ (propaganda from the Chinese Communist Party).” They unpack each of these ideologies as a connected network of thousands of specific topics, each one having a positive or negative valence. For instance, someone who endorses the Chinese government’s line may mention US-China relations and the Nixon-Mao summit as a pair of linked positive ideas.

The authors raise the concern that this method would be a cheap way to predict the ideological leanings of millions of individuals, whether or not they choose to express their core ideas. A government or company that wanted to keep an eye on potential opponents wouldn’t have to search social media for explicit references to their issues of concern. It could infer an oppositional stance from the pattern of topics that the individuals choose to mention.
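
Purely to make that inference concrete, here is a toy sketch of my own (the topics, valences, edges, and scoring rule are all invented for illustration; this is not the authors’ method): an ideology is treated as a signed network of topics, and an individual’s leaning is guessed from which linked topics they happen to mention, even if they never state the core idea.

```python
# Illustrative sketch only: a toy version of the general idea, not the
# pipeline from Zhou et al. All topics, valences, and edges are invented.

from itertools import combinations

# A belief network: topics as nodes, each with a valence (+1 or -1),
# and edges linking topics that tend to be endorsed together.
signature = {
    "valence": {"us-china relations": +1, "nixon-mao summit": +1, "decoupling": -1},
    "edges": {("us-china relations", "nixon-mao summit")},
}

def mentions_to_score(mentioned_topics, signature):
    """Crude similarity: count signature edges whose two topics both appear
    in an individual's mentioned topics, weighted by their valence signs."""
    score = 0
    for a, b in combinations(sorted(mentioned_topics), 2):
        if (a, b) in signature["edges"] or (b, a) in signature["edges"]:
            score += signature["valence"].get(a, 0) * signature["valence"].get(b, 0)
    return score

# A hypothetical user who never states the "core" idea but mentions linked topics:
print(mentions_to_score({"us-china relations", "nixon-mao summit"}, signature))  # 1
```

Even this crude version shows why the approach is cheap: it needs only lists of topics that people mention, not explicit statements of allegiance.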

I saw this article because the authors cite my piece, “Mapping ideologies as networks of ideas,” Journal of Political Ideologies (2022): 1-28. (Google Scholar notified me of the reference.) Along with many others, I am developing methods for analyzing people’s political views as belief-networks.

I have a benign motivation: I take seriously how people explicitly articulate and connect their own ideas and seek to reveal the highly heterogeneous ways that we reason. I am critical of methods that reduce people’s views to widely shared, unconscious psychological factors.

However, I can see that a similar method could be exploited to identify individuals as targets for surveillance and discrimination. Whereas I am interested in the whole of an individual’s stated belief-network, a powerful government or company might use the same data to infer whether a person would endorse an idea that it finds threatening, such as support for unions or affinity for a foreign country. If the individual chose to keep that particular idea private, the company or government could still infer it and take punitive action.

I’m pretty confident that my technical acumen is so limited that I will never contribute to effective monitoring. If I have anything to contribute, it’s in the domain of political theory. But this is something–yet another thing–to worry about.

See also: Mapping Ideologies as Networks of Ideas (talk); Mapping Ideologies as Networks of Ideas (paper); what if people’s political opinions are very heterogeneous?; how intuitions relate to reasons: a social approach; the difference between human and artificial intelligence: relationships

the age of cybernetics

A pivotal period in the development of our current world was the first decade after WWII. Much happened then, including the first great wave of decolonization and the solidification of democratic welfare states in Europe, but I’m especially interested in the intellectual and technological developments that bore the (now obsolete) label of “cybernetics.”

I’ve been influenced by reading Francisco Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (first ed. 1991, revised ed., 2017), but I’d tell the story in a somewhat different way.

The War itself saw the rapid development of entities that seemed analogous to human brains. Those included the first computers, radar, and mechanisms for directing artillery. They also included extremely complex organizations for manufacturing and deploying arms and materiel. Accompanying these pragmatic breakthroughs were successful new techniques for modeling complex processes mathematically, plus intellectual innovations such as artificial neurons (McCulloch & Pitts 1943), feedback (Rosenblueth, Wiener, and Bigelow 1943), game theory (von Neumann & Morgenstern, 1944), stored-program computers (Turing 1946), information theory (Shannon 1948), systems engineering (Bell Labs, 1940s), and related work in economic theory (e.g., Schumpeter 1942) and anthropology (Mead 1942).

Perhaps these developments were overshadowed by nuclear physics and the Bomb, but even the Manhattan Project was a massive application of systems engineering. Concepts, people, money, minerals, and energy were organized for a common task.

After the War, some of the contributors recognized that these developments were related. The Macy Conferences, held regularly from 1942-1960, drew a Who’s Who of scientists, clinicians, philosophers, and social scientists. The topics of the first post-War Macy Conference (March 1946) included “Self-regulating and teleological mechanisms,” “Simulated neural networks emulating the calculus of propositional logic,” “Anthropology and how computers might learn how to learn,” “Object perception’s feedback mechanisms,” and “Deriving ethics from science.” Participants demonstrated notably diverse intellectual interests and orientations. For example, both Margaret Mead (a qualitative and socially critical anthropologist) and Norbert Wiener (a mathematician) were influential.

Wiener (who had graduated from Tufts in 1909 at age 14) argued that the central issue could be labeled “cybernetics” (Wiener & Rosenblueth 1947). He and his colleagues derived this term from the ancient Greek word for the person who steers a boat. For Wiener, the basic question was how a person, another animal, a machine, or a society attempts to direct itself while receiving feedback.

According to Varela, Thompson, and Rosch, the ferment and diversity of the first wave of cybernetics was lost when a single model became temporarily dominant. This was the idea of the von Neumann machine:

Such a machine stores data that may symbolize something about the world. Human beings write elaborate and intentional instructions (software) for how those data will be changed (computation) in response to new input. There is an input device, such as a punchcard reader or keyboard, and an output mechanism, such as a screen or printer. You type something, the processor computes, and out comes a result.
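
To make that architecture concrete, here is a toy sketch of my own (not a model of any actual machine; the “instructions” and values are invented for illustration): data and instructions share one memory, and a simple processor loop turns keyboard-style input into printed output.

```python
# A toy sketch of the stored-program idea (my own simplification): data and
# instructions sit together in memory, a processor steps through the
# instructions, input arrives from a device, and a result comes out the end.

keyboard = [21]            # stand-in for an input device
memory = {"x": 0, "y": 0}  # data

program = [                # instructions: also just stored symbols
    ("read", "x"),         # take a value from the input device
    ("double", "x", "y"),  # compute: y = 2 * x
    ("print", "y"),        # send the result to the output device
]

def run(program, memory):
    for instr in program:  # the fetch-and-execute loop
        op = instr[0]
        if op == "read":
            memory[instr[1]] = keyboard.pop(0)
        elif op == "double":
            memory[instr[2]] = 2 * memory[instr[1]]
        elif op == "print":
            print(memory[instr[1]])

run(program, memory)       # you "type" 21, the processor computes, out comes 42
```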

One can imagine human beings, other animals, and large organizations working like von Neumann machines. For instance, we get input from vision, we store memories, we reason about what we experience, and we say and do things as a result. But there is no evident connection between this architecture and the design of the actual human brain. (Where in our head is all that complicated software stored?) Besides, computers designed in this way made disappointing progress on artificial intelligence between 1945 and 1970. The 1968 movie 2001: A Space Odyssey envisioned a computer with a human personality by the turn of our century, but real technology has lagged far behind that.

The term “cybernetics” had named a truly interdisciplinary field. After about 1956, the word faded as the intellectual community split into separate disciplines, including computer science.

This was also the period when behaviorism was dominant in psychology (presuming that all we do is to act in ways that independent observers can see–there is nothing meaningful “inside” us). It was perhaps the peak of what James C. Scott calls “high modernism” (the idea that a state can accurately see and reorganize the whole society). And it was the heyday of “pluralism” in political science (which assumes that each group that is part of a polity automatically pursues its own interests). All of these movements have a certain kinship with the von Neumann architecture.

An alternative was already considered in the era of cybernetics: emergence from networks. Instead of designing a complex system to follow instructions, one can connect numerous simple components into a network and give them simple rules for changing their connections in response to feedback. The dramatic changes in our digital world since ca. 1980 have used this approach rather than any central design, and now the analogy of machine intelligence to neural networks is dominant. Emergent order can operate at several levels at once; for example, we can envision individuals whose brains are neural networks connecting via electronic networks (such as the Internet) to form social networks and culture.
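
As a minimal illustration of this contrast (my own toy example, not a historical system), the sketch below connects a few simple units whose connection weights change only in response to feedback, yet the network ends up computing the logical AND function without anyone writing out explicit instructions for it.

```python
# A minimal sketch: simple units whose connection weights change in response
# to feedback, with no central program dictating the outcome. Here a
# single-layer "network" learns the logical AND function.

import random

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]  # two inputs plus a bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + weights[2]
    return 1 if s > 0 else 0

for _ in range(50):                       # repeat simple local updates
    for x, target in data:
        error = target - predict(x)       # feedback signal
        weights[0] += 0.1 * error * x[0]  # adjust connections, nothing more
        weights[1] += 0.1 * error * x[1]
        weights[2] += 0.1 * error

print([predict(x) for x, _ in data])      # [0, 0, 0, 1] after training
```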

I have sketched this history–briefly and unreliably, because it’s not my expertise–without intending value-judgments. I am not sure to what extent these developments have been beneficial or destructive. But it seems important to understand where we’ve come from to know where we should go from here.

See also: growing up with computers; ideologies and complex systems; The truth in Hayek; the progress of science; the human coordination involved in AI; the difference between human and artificial intelligence: relationships

what I would advise students about ChatGPT

I’d like to be able to advise students who are interested in learning but are not sure whether or how to use ChatGPT. I realize there may also be students who want to use AI tools to save effort, even if they learn less as a result. I don’t yet know how to address that problem. Here I am assuming good intentions on the part of the students. These are tentative notes: I expect my stance to evolve based on experience and other perspectives. …

We ask you to learn by reading, discussing, and writing about selected texts. By investing effort in those tasks, you can derive information and insights, challenge your expectations, develop skills, grasp the formal qualities of writing (as well as the main point), and experience someone else’s mind.

Searching for facts and scanning the environment for opinions can also be valuable, but they do not afford the same opportunities for mental and spiritual growth. If we never stretch our own ideas by experiencing others’ organized thinking, our minds will be impoverished.

ChatGPT can assist us in the tasks of reading, discussing, and writing about texts. It can generate text that is itself worth reading and discussing. But we must be careful about at least three temptations:

  • Saving effort in a way that prevents us from using our own minds.
  • Being misled or misinformed, because ChatGPT can be unreliable and even biased.
  • Violating the relationship with the people who hear or read our words by presenting our ideas as our own when they were actually generated by AI. This is not merely wrong because it suggests we did work that we didn’t do. It also prevents the audience from tracing our ideas to their sources in order to assess them critically. (Similarly, we cite sources not only to give credit and avoid plagiarism but also to allow others to follow our research and improve it.)

I can imagine using ChatGPT in some of these ways. …

First, I’m reading an assigned text that refers to a previous author who is new to me. I ask ChatGPT what that earlier author thought. This is like Google-searching for that person or looking her up on Wikipedia. It is educational. It provides valuable context. The main concern is that ChatGPT’s response could be wrong or tilted in some way. That could be the case with any source. However, ChatGPT appears more trustworthy than it is because it generates text in the first-person singular–as if it were thinking–when it is really offering a statistical summary of existing online text about a topic. An unidentified set of human beings wrote the text that the AI algorithm summarizes–imperfectly. We must be especially cautious about the invisible bias this introduces. For the same reason, we should be especially quick to disclose that we have learned something from ChatGPT.

Second, I have been assigned a long and hard text to read, so I ask ChatGPT what it says (or what the author says in general), as a substitute for reading the assignment. This is like having a Cliff’s Notes version for any given work. Using it is not absolutely wrong. It saves time that I might be able to spend well–for instance, in reading something different. But I will miss the nuances and complexities, the stylistic and cognitive uniqueness, and the formal aspects of the original assignment. If I do that regularly, I will miss the opportunity to grow intellectually, spiritually, and aesthetically.

Such shortcuts have been possible for a long time. Already in the 1500s, Erasmus wrote Biblical “paraphrases” as popular summaries of scripture, and King Edward VI ordered a copy for every parish church in England. Some entries on this blog are probably being used to replace longer readings. In 2022, 3,500 people found my short post on “different kinds of freedom,” and perhaps many were students searching for a shortcut to their assigned texts. Our growing–and, I think, acute–problem is the temptation to replace all hard reading with quick and easy scanning.

A third scenario: I have been assigned a long and hard text to read. I have struggled with it, I am confused, and I ask ChatGPT what the author meant. This is like asking a friend. It is understandable and even helpful–to the extent that the response is good. In other words, the main question is whether the AI is reliable, since it may look better than it is.

Fourth, I have been assigned to write about a text, so I ask ChatGPT about it and copy the response as my own essay. This is plagiarism. I might get away with it because ChatGPT generates unique text every time it is queried, but I have not only lied to my teacher, I have also denied myself the opportunity to learn. My brain was unaffected by the assignment. If I keep doing that, I will have an unimpressive brain.

Fifth, I have been assigned to write about a text, I ask ChatGPT about it, I critically evaluate the results, I follow up with another query, I consult the originally assigned text to see if I can find quotes that substantiate ChatGPT’s interpretation, and I write something somewhat different in my own words. Here I am using ChatGPT to learn, and the question is whether it augments my experience or distracts from it. We might also ask whether the AI is better or worse than other resources, including various primers, encyclopedia entries, abstracts, and so on. Note that it may be better.

We could easily multiply these examples, and there are many intermediate cases. I think it is worth keeping the three main temptations in mind and asking whether we have fallen prey to any of them.

Because I regularly teach Elinor Ostrom, today I asked ChatGPT what Ostrom thought. It offered a summary with an interesting caveat that (I’m sure) was written by an individual human being: “Remember that these are general concepts associated with Elinor Ostrom’s work, and her actual writings and speeches would provide more nuanced and detailed insights into her ideas. If you’re looking for specific quotes, I recommend reading her original works and publications.”

That is good advice. As for the summary: I found it accurate. It is highly consistent with my own interpretation of Ostrom, which, in turn, owes a lot to Paul Dragos Aligica and a few others. Although many have written about Ostrom, it is possible that ChatGPT is actually paraphrasing me. That is not necessarily bad. The problem is that you cannot tell where these ideas are coming from. Indeed, ChatGPT begins its response: “While I can’t provide verbatim quotes, I can summarize some key ideas and principles associated with Elinor Ostrom’s work.” There is no “I” in AI. Or if there is, it isn’t a computer. The underlying author might be Peter Levine plus a few others.

Caveat emptor.

See also: the design choice to make ChatGPT sound like a human; the difference between human and artificial intelligence: relationships

trying Mastodon

As of today, I am @peterlevine@mastodon.sdf.org on the decentralized Mastodon network (https://mastodon.social/).

I would report good experiences with Twitter (as @peterlevine). I never attract enough attention there to be targeted by malicious or mean users. I don’t see fake news. (Of course, everyone thinks that’s true of themselves; maybe I’m naive.) I do enjoy following 750 accounts that tend to be specialized and rigorous in their respective domains. I read an ideologically diverse array of tweets and benefit from conservative, left-radical, religious, and culturally distant perspectives that I would otherwise miss–yet I curate my list for quality and don’t follow anyone unless I find the content useful. A bit of levity is also appreciated.

Notwithstanding my own positive experiences, I understand that Twitter does damage. At best, it’s far from optimal as a major instantiation of the global public sphere. We’d all be better off engaging somewhere else that was better designed and managed.

However, making the transition is a collective-action problem. Networks are valuable in proportion to the square of the number of users (Metcalfe’s Law). Twitter has been helpful to me because so many people are also on it, from defense logistics nerds posting about Ukrainian drones to election nerds tweeting about early ballots to political-economy nerds writing about Elinor Ostrom. For everyone to switch platforms at the same time and end up in the same place is a classic coordination dilemma.
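
A back-of-envelope way to see the dilemma, taking Metcalfe’s heuristic literally (it is only a rule of thumb, and the symbols here are mine): if a fraction f of the users migrates, value is split across two smaller networks, and most of it evaporates until nearly everyone lands in the same place.

```latex
% Back-of-envelope only: V(n) = k n^2 is Metcalfe's rule of thumb, f is the
% fraction of n users who migrate. Splitting users across two networks gives
% a total value that equals the original k n^2 only at f = 0 or f = 1 and
% bottoms out at half that value when the users are evenly divided.
\[
  V_{\text{split}}(f) \;=\; k\,n^{2}\bigl[(1-f)^{2} + f^{2}\bigr],
  \qquad
  \min_{f} V_{\text{split}} \;=\; \tfrac{1}{2}\,k\,n^{2} \ \text{at}\ f = \tfrac{1}{2}.
\]
```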

Elon Musk may provide the solution by encouraging enough Twitter users to try the same alternative platform simultaneously. I perceive that a migration to Mastodon is underway. Joining Mastodon may offer positive externalities by helping to make it a competitive alternative. Starting anew is also pretty fun, even though the Mastodon interface isn’t too intuitive. So far, I have four followers, and the future is promising.

dealing with the big tech platforms

We can hold several ideas in our minds, even though they’re in tension, and try to work through to a better solution.

On one hand …

  • Any platform for discussion and communication needs rules. It won’t work if it’s wide open.
  • A privately owned platform is free to make up its own rules, and even to enforce them at will (except as governed by contracts that it has freely entered). A private actor is not bound to permit speech it dislikes or to use due process to regulate speech. It enjoys freedom of the press.
  • Donald Trump was doing great damage on Twitter and Facebook. It’s good that he’s gone.

Yet …

  • It is highly problematic that a few companies own vastly influential global platforms for communication without being accountable to any public. The First Amendment is a dead letter if the public sphere is a small set of forums owned by private companies.
  • Twitter’s reasons for banning Trump seem pretty arbitrary. The company refers to how Trump’s tweets were “received” by unnamed “followers” and invokes the broad “context” of his comments. But speakers don’t control the reception of their words or the contexts of their speech. A well-designed public forum would have rules, but probably not these rules.
  • If a US-based company can ban a political leader in any given country (including any competitive democracy), then democratic governance is threatened.
  • Facebook, Twitter, and Google profit from news consumption, denying profits to the companies that provide shoe-leather reporting. Fewer than half as many people are employed as journalists today, compared to 10 years ago. This is at the heart of the current, very interesting battle between the Australian government and the big tech companies.
  • These companies deploy algorithms and other design features to maximize people’s time on their platforms, which encourages addictiveness, outrageous content, and filter bubbles and polarization.

Regulation is certainly one option, but it must overcome these challenges: 1) Private communications companies have genuine free speech rights. 2) Forcing a powerful company to make really good choices is hard; externally imposed rules can be ignored or distorted. 3) The fact that there are 193 countries creates major coordination problems. (I wouldn’t mind if a patchwork of inconsistent rules hurt the big companies–I think these firms do more net harm than good. But it’s not clear that the resulting mix of rules would be good for the various countries themselves.) 4) The major companies are very powerful and may be able to defeat attempts to regulate them. For instance, they are simply threatening to withdraw from Australia. 5) There is a high potential for regulatory capture–major incumbent businesses influencing the regulators and even using complicated regulatory regimes to create barriers to entry for new competitors. Imagine, for example, that laws require content-moderation. Who would be able to hire enough moderators to compete with Facebook?

Antitrust is worth considering. If the big companies were broken up, there might be more competitors. But you must believe very strongly in the advantages of a competitive marketplace to assume that the results would be better, rather than worse, than the status quo. Metcalfe’s Law also tends to concentrate communication networks, whatever we do with antitrust.

Another approach is to try to build new platforms with better rules and designs. The economic challenge–not having enough capital to compete with Google and Facebook–could be addressed. Governments could fund these platforms, on the model of the BBC. I think the bigger problem is that the platforms would have to draw lots of avid users, or else they would be irrelevant. They would have to be attractive without being addictive, compelling without hyping sensational content, trustworthy yet also free and diverse.

Those are tough design challenges–but surely worth trying.

See also: why didn’t the internet save democracy?; the online world looks dark; democracy in the digital age; what sustains free speech?; a civic approach to free speech, etc.

Rewiring Democracy

Matt Leighninger and Quixada Moore-Vissing have published “Rewiring Democracy: Subconscious Technologies, Conscious Engagement, and the Future of Politics” (Public Agenda 2018).

I would pick out this major contrast from the complex, 68-page document.

  • On one hand, technologies are being used ubiquitously to influence individuals and the political world without our conscious awareness. Examples include tools that allow organizations to predict what individuals want without having to ask them, techniques for microtargeting messages, and methods of surveillance.
  • On the other hand, people are deliberately inventing and using new tools for civic purposes, i.e., for free and intentional self-governance. Examples include tools for collecting contributions of money or time and techniques for circulating information in geographical communities.

Much depends on which force prevails, and that depends on us.

The report ends with 3-page case studies of civic innovations. Public Agenda is also publishing those examples separately, starting with a nice piece on the changing role of tech in social movements. It explores how contemporary social movements share photos and collaboratively produce maps, among other developments.

See also: democracy in the digital age; the new manipulative politics: behavioral economics, microtargeting, and the choice confronting Organizing for Action; qualms about Behavioral Economics; when society becomes fully transparent to the state

Defending the Truth: An Activist’s Guide to Fighting Foreign Disinformation Warfare

(Dayton, OH) I recommend Maciej Bartkowski’s Defending the Truth: An Activist’s Guide to Fighting Foreign Disinformation Warfare from the International Center on Nonviolent Conflict. It’s free, concise, practical, and inspiring.

Some examples of advice:

Establish local networks that can be rapidly activated to verify accuracy of shared information or legitimacy of online personas that call for certain actions in the community.

Educate, drill, and practice. … Teach how to identify a deep fake and conspiracy theories and ways to react to them.

Be aware of anonymous interlocutors who attempt to draw you to causes that seemingly align with your own activism goals. Ask them to reveal their identities first before committing to anything. … Do your homework by vetting your potential partners. Perform due diligence by asking the following questions: Who are these anonymous personas asking me to join an online protest group or a live street protest? Do they know anything about my community? Who do they represent? …

Insist on a degree of self-control in community interactions. Civility does not preclude a conflict, but conflict must always be carried out through disciplined, nonviolent means.

Declare your commitment to truth and verifiable facts, including making public and honest corrections if you inadvertently shared inaccurate information or joined actions set up by fake personas. Praise those who adhere to truth or publicly retract untruthful information that they might have previously shared.

Stress the importance of truth in community as a matter of inviolable human rights. There are no human rights without state institutions being truthful to citizens. There is no public truth without respect for human rights.

 

why didn’t the internet save democracy?

I don’t always like this format, but Dylan Matthews’ short interviews with Clay Shirky, Jeff Jarvis, David Weinberger, and Alec Ross add up to a useful overview of the question that Matthews poses to all four: “The internet was supposed to save democracy. … What went wrong?”

The only interviewee who really objects to the framing is Ross, who asserts that his predictions were always value-neutral. He didn’t predict that the good guys would win, only that the weak would chasten the strong. So when Putin’s Russia took Obama’s America down a peg, that fulfilled his prophecy (Russia being weaker).

Some highlights, for me:

Clay Shirky:

I underestimated two things, and both of them make pessimism more warranted. The first is the near-total victory of the “social graph” as the ideal organizational form for social media, to the point that we now use “social media” to mean “media that links you to your friends’ friends,” rather than the broader 2000s use of “media that supports group interaction.”

The second thing I underestimated was the explosive improvement in the effectiveness of behavioral economics and its real-world consequences of making advertising work as advertised.

Taken together, these forces have marginalized the earlier model of the public sphere characterized by voluntary association (which is to say a public sphere that followed [Jürgen] Habermas’s conception), rather than as a more loosely knit fabric for viral ideas to flow through.

Shirky adds that he wrote (in 2008) much more about Meetup than Facebook, when both were still startups. Facebook rules the world and Meetup is marginal. Meetup would better embody a Habermasian theory of the public sphere. (See my post Habermas and critical theory: a primer but also saving Habermas from the deliberative democrats.)

Jarvis:

I was rather a dogmatist about the value of openness. I still value openness. But as Twitter, Blogger, and Medium co-founder Ev Williams said at [South by Southwest] recently, he and we did not account for the extent of the bad behavior that would follow. These companies accounted and compensated for dark-hat SEO, spam, and other economically motivated behavior. They did not see the extent of the actions of political bad actors and trolls who would destroy for the sake of destruction.

Weinberger:

It’s a tragedy that while the web connects pages via an open protocol, the connections among people are managed by closed, for-profit corporations. A lot of our political problems come from that: The interests of those corporations and of its users and citizens are not always aligned.

Weinberger wants to emphasize the positive, as well, and to remind us that “applications can be adjusted so that they serve us better.”

See also the online world looks dark (2017) and democracy in the digital age.

the online world looks dark

(Chicago) I’m at the #ObamaSummit, much of which can be followed online.

In the opening plenary, several speakers (including President Obama) noted the drawbacks of social media: psychological isolation, manipulation by powerful companies and governments, fake news, balkanization, and deep incivility.

I remember when discussions of civic tech were generally optimistic: people saw the Internet and social media as creative and democratic forces.

I went to the specialized breakout session with “civic media” entrepreneurs and asked them whether they shared the dark picture painted by the plenary speakers. Each gave an interesting and nuanced answer, but in short, they said Yes. The reason they build and use digital tools is basically to combat the larger trends in social media, which, for the most part, they see as harmful. Even Adrian Reyna of United We Dream, a leader of one of the best social movements that has used online tools, emphasized that relying on civic tech can disempower people and alienate communities.

This is no reason to give up on improving the civic impact of digital media. The work remains as important as ever. It’s just that the atmosphere now feels very sober; the heady days of cyber-optimism have passed, at least for people concerned about politics and civic culture.

[See also democracy in the digital age and four questions about social media and politics]

democracy in the digital age

New chapter: “Democracy in the Digital Age,” The Civic Media Reader, edited by Eric Gordon and Paul Mihailidis (Cambridge, MA: MIT Press, 2016), pp. 29-47

Abstract: Digital media change rapidly, but democracy presents perennial challenges. It is not in people’s individual interests to participate, yet we need them to participate ethically and wisely. It’s easier for more advantaged people to participate. And the ethical values that guide personal relationships tend to vanish in large-scale interactions. The digital era brings special versions of those challenges: choice has been massively disaggregated, sovereignty is ambiguous, states can collect intrusive information about people, and states no longer need much support from their own citizens. I argue that these underlying conditions make democracy difficult in the digital age.