Underestimated effects of AI on democracy, and a gloomy scenario

A few years ago, Tom Steinberg and I discussed the potential risks posed by AI bots in influencing citizen engagement processes and manipulating public consultations. With the rapid advancement of AI technology, these risks have only intensified. This escalating concern has even elicited an official response from the White House.

A recent executive order has tasked the Office of Information and Regulatory Affairs (OIRA) at the White House with considering the implementation of guidance or tools to address mass comments, computer-generated remarks, and falsely attributed comments. This directive comes in response to growing concerns about the impact of AI on the regulatory process, including the potential for generative chatbots to lead mass campaigns or flood the federal agency rule-making process with spam comments.

The threat of manipulation becomes even more pronounced when content generated by bots is viewed by policymakers as being on par with human-created content. There's evidence to suggest that this may already be occurring in certain scenarios. For example, a recent experiment was designed to measure whether language models could divert legislative attention by generating a constant stream of unique emails to legislators. Both human writers and GPT-3 were employed in the study. Emails were randomly sent to over 7,000 state legislators throughout the country, after which response rates were compared. The results showed a mere 2% difference in response rates between the two, and for some of the policy topics studied, the response rates were indistinguishable.

Now, the real trouble begins when governments jump on the bot bandwagon and start using their own bots to respond, and we, the humans, are left out of the conversation entirely. It’s like being the third wheel on a digital date that we didn’t even know was happening. That’s a gloomy scenario.

Three Thoughts on Iowa

  • I made a series of predictions on the eve of the caucuses that turned out to be wrong. I predicted that Sanders and Trump would win; I placed some small bets on that basis. I was roundly proven wrong, even though some pundits are calling the outcome a “virtual tie” and a few delegates were apparently allocated by coin flip. I respect the Sanders campaign for trying to spin the loss as a victory, but I don’t get to collect on the bet for a virtual tie, for the same reason you don’t get to move into the White House on the basis of a virtual tie.

Now, I wasn’t really confident in either prediction (I say after the fact). I was swayed by a late poll by Ann Selzer, whose polls have a history of being quite accurate. So I’m again struck by the value of making probability forecasts rather than flat predictions: at best that poll shifted my uncertainty on Cruz/Trump and Clinton/Sanders a little bit towards certainty. But it’s also the case that the right attitude before the event really should have been uncertainty: some outcomes were impossible, but several outcomes were live possibilities. The goal really shouldn’t be to gloat or mope after the fact: the goal should be to update your forecasting abilities, to get better at making future predictions.
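The difference between a flat prediction and a probability forecast can be made concrete with a scoring rule. Here is a minimal sketch (the numbers are hypothetical, not my actual pre-caucus credences) using the Brier score, which penalizes overconfident misses more heavily than honestly hedged ones:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better; a perfectly calibrated, perfectly confident
    forecaster scores 0.0, and confident wrongness scores near 1.0.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Suppose the event (your candidate winning) fails to happen (outcome 0).
# A confident "prediction" dressed up as p=0.9 scores much worse than
# an honestly uncertain forecast of p=0.6.
confident = brier_score([0.9], [0])   # (0.9 - 0)^2 = 0.81
hedged = brier_score([0.6], [0])      # (0.6 - 0)^2 = 0.36
print(confident, hedged)
```

Tracking a score like this over many forecasts is one way to "update your forecasting abilities" rather than just gloating or moping about any single outcome.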

  • The caucus format is deliberative. (More so for the Democrats than the Republicans, but still.) That makes polling somewhat less predictive, because polling can only measure pre-deliberative attitudes. We published a really good account of the issues with polling as a measure of “public opinion” in The Good Society a few years back: Liz Turner’s “Penal Populism, Deliberative Methods, and the Production of ‘Public Opinion’ on Crime and Punishment.” Turner argues that surveys produce only one version of the “hypothetical public” which is aggregative, generalized, individualized, and passive. It can (when properly massaged) produce a good prediction about electoral outcomes, since voting ballots, too, have become aggregative, generalized, individualized, and passive. But even mildly deliberative moments like the Iowa caucuses can lead to surprising outcomes because a very different public (no longer hypothetical) is constituted by the caucus form.
  • Finally, the real problem throughout the (Republican) race has been the number of candidates who had some claim to viability. The larger the number of candidates running, the more likely you are to have a Condorcet loser (a candidate who would lose the majority of head-to-head matchups) winning the election. Large numbers of (viable) candidates make voting irrational. In Iowa, there were at least six viable Republican candidates measured by delegates, and eleven candidates received at least 1% of the vote. We can see this problem on a much smaller scale in the way the Clinton campaign planned to use Martin O’Malley as a spoiler, to prevent Sanders from picking up delegates at the margins. That said, I haven’t seen any evidence that this actually happened; if anything, the reverse.
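The Condorcet-loser pathology is easy to demonstrate with a toy electorate (the ballots below are invented for illustration, not Iowa data): a candidate can top a plurality of first-choice ballots while losing every head-to-head matchup.

```python
from collections import Counter

def plurality_winner(ballots):
    # Each ballot is a list of candidates in preference order;
    # plurality counts only first choices.
    first_choices = Counter(b[0] for b in ballots)
    return first_choices.most_common(1)[0][0]

def is_condorcet_loser(candidate, ballots, candidates):
    # A Condorcet loser loses every pairwise matchup to a strict majority.
    for other in candidates:
        if other == candidate:
            continue
        prefer = sum(1 for b in ballots if b.index(candidate) < b.index(other))
        if 2 * prefer >= len(ballots):
            return False  # wins or ties this matchup, so not a Condorcet loser
    return True

# Hypothetical three-way race: C tops 4 of 10 ballots but is ranked
# last by the other 6 voters, who split between A and B.
ballots = (
    [["C", "A", "B"]] * 4
    + [["A", "B", "C"]] * 3
    + [["B", "A", "C"]] * 3
)
candidates = ["A", "B", "C"]

winner = plurality_winner(ballots)
print(winner)                                        # "C"
print(is_condorcet_loser(winner, ballots, candidates))  # True
```

With eleven candidates on a ballot instead of three, the odds of this kind of split producing a plurality winner whom most voters rank near the bottom only grow.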