How AI will win your vote in the next election

Omar Adan

Global Courant

Could organizations use artificial intelligence language models like ChatGPT to push voters to behave in a certain way?

Senator Josh Hawley posed this question to OpenAI CEO Sam Altman at a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman responded that he was indeed concerned that some people would use language models to manipulate voters, persuade them and engage in one-on-one interactions with them.

Altman didn’t elaborate, but he may have had something like this scenario in mind. Imagine that political technologists soon develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one goal: maximizing the chances that its candidate – the campaign that buys Clogger Inc.’s services – wins the election.


While platforms such as Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different goal: to change people’s voting behavior.

How Clogger would work

As a political scientist and a lawyer who study the intersection of technology and democracy, we believe that something like Clogger could use automation to increase the scale, and possibly the effectiveness, of the behavioral-manipulation and microtargeting techniques that political campaigns have used since the early 2000s.

Just as advertisers now use your browsing and social media history to target commercial and political ads individually, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages — texts, social media posts and emails, perhaps including images and videos — tailored to you personally.

Where advertisers strategically place a relatively small number of ads, language models like ChatGPT can generate countless unique messages for you personally – and millions for others – during a campaign.
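To make that concrete, here is a minimal sketch of what per-voter message generation might look like, assuming access to some language-model API. The voter profile fields, the prompt wording and the `call_language_model` stand-in are all hypothetical illustrations, not taken from any real campaign system.

```python
# A minimal sketch of per-voter message generation, assuming a generic
# language-model API. All names (voter fields, prompt wording) are
# illustrative and hypothetical.

def call_language_model(prompt: str) -> str:
    """Stand-in for any chat-completion API; returns a canned reply here."""
    return f"[model output for: {prompt[:60]}...]"

# Hypothetical voter profiles built from browsing and social media history.
voters = [
    {"id": 1, "interests": ["baseball", "local schools"], "channel": "email"},
    {"id": 2, "interests": ["gardening"], "channel": "social media"},
]

def personalized_message(voter: dict, candidate: str) -> str:
    # One unique prompt per voter yields one unique message per voter.
    prompt = (
        f"Write a short {voter['channel']} message for candidate {candidate}, "
        f"tailored to someone interested in {', '.join(voter['interests'])}."
    )
    return call_language_model(prompt)

for v in voters:
    print(personalized_message(v, "Sam Jones"))
```

Looped over a voter file with millions of rows, the same few lines would emit millions of distinct messages – which is the scale difference the paragraph above describes.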


Second, Clogger would use a technique called reinforcement learning to generate a sequence of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning approach in which the computer takes actions and gets feedback on what works better, learning by trial and error how to accomplish a goal.

Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
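The simplest instance of that trial-and-error loop is a multi-armed bandit that learns which of several message variants draws the best response. Below is a toy epsilon-greedy version; the engagement rates are simulated stand-ins for the feedback a real system would gather, and a Clogger-style system would learn over whole sequences of messages, not single ones.

```python
import random

# Toy epsilon-greedy bandit: each "arm" is a message variant, and the
# reward (did the recipient engage?) is simulated here. The true_rates
# are made up and hidden from the learner, which must estimate them.

variants = ["message A", "message B", "message C"]
true_rates = {"message A": 0.02, "message B": 0.05, "message C": 0.01}

counts = {v: 0 for v in variants}    # times each variant was sent
values = {v: 0.0 for v in variants}  # running estimate of each success rate
epsilon = 0.1                        # fraction of sends spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(variants)        # explore a random variant
    else:
        choice = max(variants, key=values.get)  # exploit the best estimate
    reward = 1.0 if random.random() < true_rates[choice] else 0.0
    counts[choice] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    values[choice] += (reward - values[choice]) / counts[choice]

print({v: round(est, 4) for v, est in values.items()})
```

After enough trials the estimates concentrate on the variant with the highest response rate; that feedback-driven selection, scaled up and extended to sequences, is the loop described above.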



Third, Clogger’s messages can evolve over the course of a campaign to take into account your reactions to previous messages from the machine and what it has learned about changing the minds of others. Clogger could have dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you on various websites and social media.

The nature of AI

Three other features – or bugs – are worth mentioning.

First, the messages Clogger sends may or may not be political in content. The machine’s sole purpose is to maximize its candidate’s vote share, and it would probably devise strategies to achieve that goal that no human campaigner would have thought of.

One possibility is to send likely opposing voters information about their nonpolitical passions – in sports or entertainment, say – to bury the political messages they receive.

Another possibility is to send off-putting messages – incontinence advertisements, for example – timed to coincide with opponents’ messaging. Yet another is to manipulate voters’ social media friend groups to give the sense that their social circles support its candidate.

Second, Clogger has no respect for the truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because the purpose is to change your vote, not provide accurate information.

Third, because it is a black-box type of artificial intelligence, people would have no way of knowing what strategies it is using.

The field of explainable AI aims to open the black box of many machine learning models so that humans can understand how they work.

Clogocracy

If the Republican presidential campaign deployed Clogger in 2024, the Democratic campaign would likely be forced to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought these machines were effective, perhaps the presidential contest would come down to Clogger vs. Dogger, and the winner would be the customer of the more effective machine.

Political scientists and pundits would have much to say about why one or the other AI prevailed, but probably no one would really know. The president would have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – will have taken place.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party’s ideas may have had little to do with why people voted the way they did — Clogger and Dogger don’t care about policy positions — the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another way is for the president to pursue the messages, behaviors, and policies that the machine predicts will maximize reelection chances. On this path, the president would have no specific platform or agenda beyond maintaining power. The president’s actions, led by Clogger, would most likely manipulate voters rather than serve their genuine interests or even the president’s own ideology.

Avoiding clogocracy

It would be possible to prevent AI election manipulation if candidates, campaigns and advisers all renounced the use of such political AI.

We think that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Political advisers might see these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents can hardly be expected to resist by disarming unilaterally.

Enhanced privacy protection would help. Clogger would rely on accessing massive amounts of personal data to target individuals, tailor messages to persuade or manipulate them, and track and retarget them over the course of a campaign. Any bit of that information that companies or policymakers deny to the machine would make it less effective.

Strict data privacy laws could help blunt such manipulation.

Another solution lies with election commissions. They could try to ban or tightly regulate these machines. There is a heated debate about whether such “replicating” speech, even if political in nature, can be regulated. The extreme free-speech tradition in the United States leads many leading academics to say that it cannot.

But there is no reason to automatically extend First Amendment protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in today’s challenges, not the misguided assumption that James Madison’s views in 1789 were intended to apply to AI.

European Union regulators are moving in this direction. Policymakers there have revised the European Parliament’s draft Artificial Intelligence Act to designate “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny.

A constitutionally safer, albeit smaller, step, already partially adopted by European internet regulators and in California, is to prevent bots from impersonating humans. For example, regulations may require campaign messages to come with disclaimers when the content they contain is generated by machines rather than humans.

This would be much like the ad disclaimer requirements — “Paid for by the Sam Jones for Congress Committee” — but modified to reflect its AI origin: “This AI-generated ad was paid for by the Sam Jones for Congress Committee.”

A stronger version might require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that your chances of voting for Sam Jones will increase by 0.0002%.” At the very least, we think voters deserve to know when it’s a bot speaking to them, and they should know why.

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants armed with powerful new tools that can effectively push the many buttons of millions of people.

Archon Fung, professor of citizenship and self-government, Harvard Kennedy School, and Lawrence Lessig, professor of law and leadership, Harvard University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
