Actually, AI could be good for democracy

Omar Adan

Global Courant

It is fashionable to regard artificial intelligence (AI) as an inherently dehumanizing technology: a ruthless power of automation that has unleashed legions of anonymous virtual skilled workers.

But what if AI turns out to be the only tool that can identify what makes your ideas special, recognizing your unique perspective and potential on the issues where it matters most?

You’d be forgiven for being distraught about society’s ability to handle this new technology. So far there is no shortage of forecasts about the democratic demise that AI could bring, wreaking havoc on the American system of government.

There are legitimate reasons to be concerned that AI could spread misinformation, disrupt public comment processes on regulation, overwhelm legislators with artificial constituent outreach, help automate corporate lobbying, or even generate legislation tailored to benefit narrow interests.

But there are also reasons to be more optimistic. Many groups have begun demonstrating the potential advantages of using AI for governance. One important constructive use case for AI in democratic processes is to serve as a moderator and consensus builder.

To help democracy scale better in the face of growing, increasingly interconnected populations – as well as the widespread availability of AI language tools that can generate large volumes of text at the touch of a button – the US will need to harness AI’s ability to quickly process, interpret and summarize this content.

An old problem

There are two different ways to approach using generative AI to improve citizen participation and governance. Each is likely to lead to drastically different experiences for public policy advocates and other people trying to make their voices heard in a future system where AI chatbots are both the dominant readers and writers of public comments.

For example, consider individual letters to a representative or comments as part of a regulatory process. In both cases, we tell the people and the government what we think and want.

For more than half a century, agencies have used human power to review all comments received and generate summaries and responses to their key themes. Certainly, digital technology has helped.

Processing public comments has been a challenge for representatives and their staff for decades. Photo: AP via The Conversation

In 2021, the Council of Federal Chief Data Officers recommended modernizing the comment review process by implementing natural language processing tools to remove duplicates and cluster similar comments in government processes. These tools are simplistic by 2023 AI standards.

They work by assessing the semantic similarity of comments based on metrics such as word frequency (how often did you say “personality”?), clustering similar comments together, and giving reviewers an idea of the topics they relate to.
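As a rough illustration of how such tools work, here is a minimal sketch of word-frequency clustering, assuming a simple bag-of-words representation and a greedy grouping strategy. The function names, the example comments and the similarity threshold are all illustrative, not any agency's actual pipeline:

```python
from collections import Counter
import math

def word_vector(text):
    # Bag-of-words frequency vector for one comment.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster_comments(comments, threshold=0.5):
    # Greedy clustering: assign each comment to the first cluster
    # whose representative is similar enough, else start a new cluster.
    clusters = []
    for comment in comments:
        vec = word_vector(comment)
        for cluster in clusters:
            if cosine_similarity(vec, cluster["vector"]) >= threshold:
                cluster["members"].append(comment)
                break
        else:
            clusters.append({"vector": vec, "members": [comment]})
    return clusters

comments = [
    "Please protect local wetlands from development",
    "Protect wetlands from development please",
    "Raise the speed limit on rural highways",
]
clusters = cluster_comments(comments)
print(len(clusters))  # → 2: the two wetland comments share a cluster
```

Real systems use far richer representations than raw word counts, but the core move is the same: near-duplicate comments collapse into one pile a reviewer can skim.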

Understanding the essence

Think of this approach as collapsing public opinion. It takes a large, shaggy mass of comments from thousands of people and condenses them into a neat set of essential reading that is generally sufficient to reflect the broad themes of community feedback.

This is much easier for a small agency staff or legislative office than it would be for staffers to actually read that many individual perspectives.

But what is lost in this collapse is individuality, personality and relationships. The reviewer of the condensed comments may overlook the personal circumstances that led so many commenters to write from a common point of view, and may miss the arguments and anecdotes that could be the most compelling content of the testimony.

Most importantly, the reviewers may miss the opportunity to recognize dedicated and knowledgeable advocates, whether advocacy groups or individuals, who could have long-lasting, productive relationships with the agency.

These drawbacks have real implications for the potential effectiveness of those thousands of individual messages, undermining the very reason all those people wrote in. Yet practicality tips the balance towards some kind of summarization. Passionate advocacy is of no value if regulators or legislators simply don’t have time to read it.

Finding the signals and the noise

There is another approach. In addition to collapsing testimony through summarization, government employees can use modern AI techniques to explode it: automatically extracting and highlighting a distinctive argument in one piece of testimony that appears in none of the thousands of others received.

They can discover the kinds of constituent stories and experiences that legislators like to repeat at hearings, town halls and campaign events. This approach can preserve the potential of individual public comments to shape legislation, even as the volume of testimony grows exponentially.

AI could help constituents make their voices heard by elected representatives. Photo: AP via The Conversation/Patrick Semansky

In computer science, there is a rich history of automating this kind of task, known as outlier detection.

Traditional methods usually fit a simple model that explains most of the data in question, such as a set of topics that describes the vast majority of submitted comments well. But then they take it a step further by isolating the data points that fall outside the mold: comments making arguments that don’t fit into the neat little clusters.
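A minimal sketch of this idea, assuming the same bag-of-words similarity used by simple clustering tools: a comment counts as an outlier when it is dissimilar to every other comment received. The helper names, example comments and threshold are illustrative only:

```python
from collections import Counter
import math

def bow(text):
    # Bag-of-words frequency vector for one comment.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse frequency vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def find_outliers(comments, threshold=0.3):
    # A comment is an outlier if its best similarity to any
    # other comment falls below the threshold: it "falls
    # outside the mold" that explains the rest of the data.
    vectors = [bow(c) for c in comments]
    outliers = []
    for i, v in enumerate(vectors):
        best = max(cosine(v, w) for j, w in enumerate(vectors) if j != i)
        if best < threshold:
            outliers.append(comments[i])
    return outliers

comments = [
    "Ban single use plastics in city parks",
    "Please ban single use plastics in parks",
    "Ban plastics in all city parks now",
    "My small farm lost its well water after the pipeline leak",
]
print(find_outliers(comments))  # only the farm comment is flagged
```

Production outlier detectors use more robust models than pairwise word overlap, but the shape of the computation is the same: model the majority, then surface what the model fails to explain.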

State-of-the-art AI language models are not strictly necessary for identifying outliers in datasets of text documents, but using them can bring greater sophistication and flexibility to the procedure. AI language models can be tasked with identifying novel perspectives within a large body of text simply by asking: you just tell the model to find them.
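In practice, "just asking" amounts to building a prompt that puts the comments in front of a language model along with the instruction. The sketch below only constructs such a prompt; the function name and wording are hypothetical, and wiring it to any particular model API is deliberately left out:

```python
def novelty_prompt(comments):
    # Build a single prompt asking a language model to surface
    # perspectives that appear in only one comment. The prompt
    # wording here is illustrative, not a tested recipe.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        "Below are public comments submitted on a proposed rule.\n"
        "Identify any comment that raises an argument or personal "
        "experience not found in any of the others, and explain "
        "what makes it distinctive.\n\n" + numbered
    )

prompt = novelty_prompt([
    "Lower the permit fee for food trucks",
    "Food truck permit fees are too high, lower them",
    "As a disabled veteran, the permit office has no wheelchair access",
])
print(prompt)
```

Sent to a capable chat model, a prompt like this would be expected to single out the third comment, since it is the only one raising an accessibility issue.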

In the absence of that ability to extract distinctive comments, legislators and regulators have no choice but to prioritize by other factors. For want of anything better, “who donated the most to our campaign” or “which company employs most of my former staffers” become reasonable heuristics for prioritizing public comments. AI can help elected representatives do much better.

If Americans want AI to help revive the country’s ailing democracy, they need to think about how to align the incentives of elected leaders with those of individuals. At the moment, as much as 90% of communications from constituents are mass emails organized by interest groups, and they are largely ignored by staffers.

People send their passions to huge digital warehouses where algorithms pigeonhole their expressions so they don’t need to be read. This creates an incentive for citizens and interest groups to fill that box to the brim, so that someone notices that it is overflowing.

A talented, informed, engaged citizen should be able to articulate their ideas, personal experiences and distinctive points of view in a way that both contributes to the overall summary alongside everyone else’s comments and can be recognized individually among them.

An effective process for summarizing comments would pull those unique views out of the pile and put them in the hands of lawmakers.

Bruce Schneier is Adjunct Lecturer in Public Policy at the Harvard Kennedy School, and Nathan Sanders is an Affiliate of the Berkman Klein Center for Internet & Society at Harvard University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
