Researchers can’t say if they can fully remove AI hallucinations: ‘inherent’ mismatch between technology and use case

Harris Marley

Global Courant

Some researchers are increasingly convinced they will not be able to remove hallucinations from artificial intelligence (AI) models, which remain a considerable hurdle for large-scale public acceptance. 

“We currently do not understand a lot of the black box nature of how machine learning comes to its conclusions,” Kevin Kane, CEO of quantum encryption company American Binary, told Fox News Digital. “Under the current approach to walking this path of AI, it’s not clear how we would do that. We’d have to change how they work a lot.”

Hallucinations, a name for the inaccurate information or nonsense text AI can produce, have plagued large language models such as ChatGPT for almost the entirety of their public exposure. 


Critics of AI immediately focused on hallucinations as a reason to doubt the usefulness of the various platforms, arguing hallucinations could exacerbate already serious issues with misinformation. 


An illustration from May 4, 2023 (Reuters/Dado Ruvic/Illustration)

Researchers quickly pursued efforts to remove hallucinations and improve this “known issue,” but the “data processing issue” may never go away entirely because of the “use case,” meaning AI may struggle with certain topics, said Christopher Alexander, chief analytics officer of Pioneer Development Group. 

“I think it’s as absurd to say that you can solve every problem ever as it is to say you can never fix it,” Alexander told Fox News Digital. “I think the truth lies somewhere in between, and I think it’s going to vary greatly by case.



“And if you can document a problem, I find it hard to believe that you can’t fix it.”

Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told The Associated Press hallucinations may be “unfixable” because they arise from an “inherent” mismatch between “technology and the proposed use case.”


That mismatch exists because researchers have looked to apply AI to multiple use cases and situations, according to Alexander.

ChatGPT is a large language model that functions by analyzing massive data sets from available information on the internet. (Leon Neal/Getty Images)

While developing an AI to tackle a specific problem, Alexander’s team looked at existing models it could repurpose for the task instead of building a full model from scratch. He claimed his team knew the program wouldn’t produce ideal results, but he suspected many groups take a similar approach without accepting that limited performance will follow. 

“[Researchers] put together pieces of something, and it wasn’t necessarily made to do that, and now what’s the AI going to do when it’s put in that circumstance? They probably don’t fully know,” Alexander explained, suggesting that researchers may try to refine AI or custom-build models for specific tasks or industries in the future. “So, I don’t think it’s universal. I think it’s very much a case-by-case basis.” 


Kane said setting a goal of getting rid of hallucinations is “dangerous” since researchers don’t fully understand how the algorithms behind AI function, but part of that comes down to a flaw in the understanding of how AI functions overall. 

“A lot of the machine learning is sort of in our image,” Kane explained. “We want it to talk to us the way we talk to each other.

The Nvidia logo displayed on a phone screen and a microchip and are seen in this illustration from Krakow, Poland, July 19, 2023. (Jakub Porzycki/NurPhoto via Getty Images)

“We generally try to design systems that sort of mimic how humans understand intelligence, right?” he added. “Humans are also a black box, and they also have some of the same phenomena. So, the question is, if we want to develop artificial intelligence, it means we want to be like humans.

“Well, if we want to be like humans, we have to then be willing to live with the pitfalls of that.”

Researchers from MIT suggested one way to deal with the issue is to allow multiple models to argue with each other, a method known as “society of minds,” forcing the models to wear each other down until “factual” answers win, The Washington Post reported. 
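
In rough terms, the “society of minds” approach has several model instances answer the same question, read one another’s answers, revise, and then settle on a consensus. The sketch below illustrates that loop in Python; the query_model function is a hypothetical stand-in for any chat-model API and this is not the MIT researchers’ actual code.

    # Illustrative sketch of a multi-model "debate" loop; query_model() is a
    # hypothetical placeholder for a real chat-model API call.
    def query_model(prompt: str) -> str:
        raise NotImplementedError("connect this to an actual language model")

    def debate(question: str, agents: int = 2, rounds: int = 3) -> str:
        # Each agent first answers independently.
        answers = [query_model(f"Answer concisely: {question}") for _ in range(agents)]
        # In each round, every agent sees the others' answers and may revise its own.
        for _ in range(rounds):
            revised = []
            for i, own in enumerate(answers):
                others = "\n".join(a for j, a in enumerate(answers) if j != i)
                revised.append(query_model(
                    f"Question: {question}\nYour answer: {own}\n"
                    f"Other answers:\n{others}\n"
                    "Flag any factual errors and give a revised answer."
                ))
            answers = revised
        # Finally, consolidate whatever survived the debate into one answer.
        return query_model(
            f"Question: {question}\nCandidate answers:\n" + "\n".join(answers)
            + "\nReturn the single best-supported answer."
        )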


Part of the issue arises from the fact that AI is built to predict “the next word” in a sequence and is not necessarily trained to “tell people they don’t know what they’re doing” or to fact-check itself. 
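
That next-word behavior is easy to see directly. The snippet below is a minimal sketch using the open-source GPT-2 model through Hugging Face’s transformers library: the model scores every possible next token and the most likely one is kept, with no step that checks whether the continuation is true.

    # Minimal next-token prediction sketch with GPT-2 (Hugging Face transformers).
    # The model ranks possible next words; nothing here verifies factual accuracy.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of Australia is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits        # a score for every vocabulary token
    next_id = int(logits[0, -1].argmax())      # keep the single most likely next token
    print(tokenizer.decode([next_id]))         # the model's guess, not a checked fact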

Nvidia tackled the issue with software called NeMo Guardrails, which aims to keep large language models “accurate, appropriate, on topic and secure,” but the technology mainly keeps the program on topic rather than fact-checking its output, ZDNET reported. 
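
For readers curious what that looks like in practice, the sketch below shows roughly how NeMo Guardrails is wrapped around a chatbot through its Python package. The ./config directory, which would hold the model settings and the rules defining what the bot may discuss, is assumed here, and the exact API may differ between library versions.

    # Rough sketch of wrapping a chat model with NVIDIA's NeMo Guardrails package.
    # Assumes a ./config directory containing model settings and Colang rail rules;
    # exact API details may vary between library versions.
    from nemoguardrails import LLMRails, RailsConfig

    config = RailsConfig.from_path("./config")   # load the guardrail definitions
    rails = LLMRails(config)                     # wrap the underlying language model

    # The rails steer the reply toward permitted topics; they do not fact-check it.
    reply = rails.generate(messages=[
        {"role": "user", "content": "Tell me about your return policy."}
    ])
    print(reply["content"])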

Alexander acknowledged that, in some respects, researchers don’t fully understand — on a case-by-case basis — how AI can do some of the things it has done.


In one example, Alexander described a conversation with researchers who told him AI models had exceeded expectations for how fast they would learn and develop. When Alexander asked them how that happened, the researchers admitted, “We don’t know.” 

The Associated Press contributed to this report. 

Peter Aitken is a Fox News Digital reporter with a focus on national and global news. 


