Biden talks to experts about dangers of AI

Nabil Anas

Global Courant

President Biden will meet with researchers and lawyers with artificial intelligence expertise in San Francisco on Tuesday as his administration tries to address the possible dangers of a technology that can fuel misinformation, job losses, discrimination and privacy violations.

The meeting comes as Biden ramps up his efforts to raise money for his 2024 re-election bid, including from tech billionaires. While visiting Silicon Valley on Monday, he attended two fundraisers, including one co-hosted by entrepreneur Reid Hoffman, who has numerous ties to AI companies. The venture capitalist was an early investor in OpenAI, which built the popular ChatGPT app, and sits on the boards of technology companies, including Microsoft, that invest heavily in AI.

The experts Biden is expected to meet on Tuesday include some of Big Tech’s loudest critics. The list includes child advocate Jim Steyer, founder and leader of Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute.


Some experts have experience working for large technology companies. Harris, a former Google product manager and design ethicist, has spoken out about how social media companies like Facebook and Twitter can harm people’s mental health and amplify misinformation.

Biden’s meetings with AI researchers and tech staffers underline how the president is playing both sides: his campaign is courting wealthy donors even as his administration examines the risks of the burgeoning technology. While Biden has been critical of tech giants, executives and employees of companies like Apple, Microsoft, Google and Meta also contributed millions of dollars to his campaign in the 2020 election cycle.

“AI is a top priority for the president and his team. Generative AI tools have increased significantly in recent months and we do not want to solve yesterday’s problem,” a White House official said in a statement.

The Biden administration has focused on the risks of AI. Last year, the administration released a “Blueprint for an AI Bill of Rights,” outlining five principles developers should keep in mind before releasing new AI-powered tools. The administration has also met with tech CEOs, announced steps the federal government has taken to address AI risks, and continued other efforts to “foster responsible American innovation.”

Lina Khan, the chairperson of the Federal Trade Commission who was appointed by Biden, said in a May op-ed published in the New York Times that the rise of technology platforms such as Facebook and Google is costing users their privacy and security.


“As the use of AI becomes more widespread, government officials have a responsibility to ensure that this hard-won history does not repeat itself,” said Khan.

Tech giants are using AI in various products to recommend videos, control virtual assistants and transcribe audio. While AI has been around for decades, the popularity of an AI chatbot known as ChatGPT has intensified a race between big tech players like Microsoft, Google, and Facebook’s parent company Meta. Launched in 2022 by OpenAI, ChatGPT can answer questions, generate text and perform various tasks.

The rush to advance AI technology has left tech workers, researchers, legislators and regulators worried about whether new products will be released before they are safe. In March, Elon Musk, CEO of Tesla, SpaceX and Twitter, Apple co-founder Steve Wozniak and other technology leaders called for AI labs to pause the training of advanced AI systems and urged developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so that he could speak more openly about the risks of AI.


As the technology advances rapidly, legislators and regulators are struggling to keep up. In California, Governor Gavin Newsom has said he wants to be cautious about AI regulation at the state level. Newsom said at a Los Angeles conference in May that “the biggest mistake” politicians can make is to assert themselves “without trying to understand it first.” California lawmakers have put forward several ideas, including legislation that would combat algorithmic discrimination, establish an artificial intelligence agency, and create a working group to provide the legislature with an AI report.

Writers and artists are also concerned that companies could use AI to replace workers. Using AI to generate text and art raises ethical questions, including about plagiarism and copyright infringement. The Writers Guild of America, still on strike, proposed rules in March about how Hollywood studios can use AI. Text generated by AI chatbots, for example, “cannot be considered in determining writing credits,” according to the proposed rules.

The potential misuse of AI to spread political propaganda and conspiracy theories, an issue that has plagued social media, is another major concern among disinformation researchers. They fear that AI tools that can spew out text and images will make it easier and cheaper for bad actors to spread misleading information.

AI is already being deployed in some mainstream political advertising. The Republican National Committee posted an AI-generated video ad depicting a dystopian future if Biden wins his 2024 re-election bid. AI tools have also been used to fake audio clips of politicians and celebrities making comments they never actually said. The campaign of GOP presidential candidate and Florida Governor Ron DeSantis shared a video containing what appeared to be AI-generated footage of former President Trump hugging Dr. Anthony Fauci.

Tech companies are not against guardrails. They welcome regulation but are also trying to shape it. In May, Microsoft released a 42-page report on governing AI, noting that no company is above the law. The report includes a “blueprint for the public governance of AI” that outlines five points, including creating “safety brakes” for AI systems that control the power grid, water systems and other critical infrastructure.

That same month, OpenAI CEO Sam Altman testified before Congress calling for AI regulation.

“My biggest fear is that we, the technology industry, will do significant harm to the world,” Altman told lawmakers. “If this technology goes wrong, it can go quite wrong.” Altman, who has met with leaders in Europe, Asia, Africa and the Middle East, also signed a one-sentence letter in May with scientists and other leaders warning of the “risk of extinction” AI poses.
