Global Courant 2023-05-22 11:00:58
On Tuesday, May 16, Mr. Altman went to Washington. And today the world feels a little scarier.
There is so much movement, so much talk and so much concern about the proliferation of artificial intelligence (AI) in every area of our lives. Rarely a day goes by without a new report on the groundbreaking impact – and potential danger – of this technology. Large language models like ChatGPT have stunned the world with the speed at which they learn and what they can already do.
So it was only a matter of time before the government intervened. Anything that moves this fast, with such a major impact on society, will inevitably raise questions about risk and regulation. That’s why Sam Altman, CEO of OpenAI, the company behind ChatGPT, went to Washington this week to testify at a congressional hearing on the oversight and regulation of generative AI.
It was an unsettling discussion, more like something out of a sci-fi series. Consider some of the language we’re hearing, both on Capitol Hill and from companies concerned about AI:
Sam Altman, CEO of OpenAI, takes his seat before the start of the Senate Judiciary Subcommittee on Privacy, Technology and Legislation hearing on “Oversight of AI: Rules for Artificial Intelligence” on Tuesday, May 16, 2023. (Bill Clark/CQ-Roll Call, Inc via Getty Images)
Language of doom
Altman admitted that AI could do “significant damage to the world” if the technology goes wrong, warning of its potential to be “destructive to humanity”: “I think if this technology goes wrong, it could go pretty wrong.”
Language about speed
AI technology moves forward “as fast as possible.” “It evolves by the minute.”
Language around the progressive/aggressive nature of the technology
“Shows signs of human reasoning.” “Getting smarter than humans.”
The fact that Congress is moving in a bipartisan way to regulate AI, and the fact that the technology’s inventors and those with skin in the game like Elon Musk are on the front line, leading the cry of warning and calling for regulation, should give us reason to sit up and take notice.
There is clearly a need for regulation, as there is for other potentially harmful industries, from cigarettes to nuclear power.
ChatGPT co-wrote an episode of the TV comedy series South Park in March 2023. (Marco Bertorello/AFP via Getty Images)
But in the heat of battle, amid the worry and fear, let’s not lose sight of the exciting potential of AI. Whether you love it, hate it or fear it, AI is here to stay. And it already touches your life in some way.
In the wake of Altman’s visit to Capitol Hill, it’s a good time to rethink and possibly reframe some of the perceptions and positions around AI, without disputing that it needs regulation. Here are four quick things to consider, or possible ways to reframe the debate over this amazing technology:
AI: a danger or a welcome innovation? Throughout history, every century has had a revolution that moves us forward. The printing press. Mass production. The Internet. Now there is AI. We can treat it as a threat to freedom of expression, or to humanity in general. Or we can embrace it as a great new frontier and do what America does best: lead the world in innovation. Is it coming for our jobs, or is it making our lives easier? There is no doubt that generative AI will significantly impact the job market. According to Goldman Sachs economists, “the job market could face significant disruptions,” with as many as 300 million full-time jobs around the world potentially automated in some way by the latest wave of AI like ChatGPT.
But AI isn’t just coming after us to replace our jobs. Instead of seeing AI as a job thief, why not think of it as a potential productivity booster? Throughout history, technological innovations that initially displaced workers have also driven long-term employment growth.
According to the Goldman Sachs report, widespread adoption of AI could ultimately increase labor productivity and lift global GDP by 7% over a 10-year period: “The combination of significant labor cost savings, new job creation, and an increase in productivity for non-displaced workers raises the possibility of a labor productivity boom like those that followed the emergence of earlier general-purpose technologies such as the electric motor and personal computer.”
Regulating what matters: Regulations are coming. Most people want them. The AI industry itself is asking for them. But, as with so many issues, few people believe government is equipped to regulate well. Regulation should be defined on our terms, with a framework that gives people certainty about their top concerns, such as bias, privacy and misinformation.
Finally, don’t be dismissive of the technology: remember that we’re still basically at version 1.0 of AI, hard as that may be to believe. As with so many emerging technologies and breakthroughs, there are many weaknesses today that will not exist tomorrow. We can focus on those current shortcomings, or we can think of the technology as an amazing work in progress, something that is here to stay and will get better and better, fast.
In our own company, we are exploring ways to use generative AI to support and enhance our work. And we already see great potential to improve our productivity. Instead of fearing it, we should embrace it. And our language should reflect that change of mindset.
Lee Carter is the chairman and partner of maslansky + partners, a language strategy firm based on the idea that “it’s not what you say, it’s what they hear,” and author of “Persuasion: Convincing Others When Facts Don’t Seem to Matter.” Follow her on Twitter at @lh_carter.