Artificial intelligence, particularly large language models like ChatGPT, could theoretically give criminals the information they need to cover their tracks before and after a crime, and then erase the evidence of those queries, an expert warns.
Large language models, or LLMs, are a segment of AI technology that uses algorithms that can recognize, summarize, translate, predict and generate text and other content based on knowledge obtained from massive data sets.
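To make the "predict and generate" part of that definition concrete, here is a minimal sketch of text generation using the small, open-source GPT-2 model through the Hugging Face transformers library. The model choice, prompt and settings are illustrative only and are unrelated to ChatGPT itself.

```python
# A minimal text-generation sketch using the open-source GPT-2 model via the
# Hugging Face transformers library. GPT-2 is a small, older LLM; it is used
# here only to illustrate the predict-and-generate loop described above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts the most likely next token given
# everything it has seen so far, extending the prompt into new text.
prompt = "Large language models are"
output = generator(prompt, max_new_tokens=25, num_return_sequences=1)
print(output[0]["generated_text"])
```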
ChatGPT is the most well-known LLM, and its rapid, successful development has caused concern among some experts and prompted a Senate hearing at which Sam Altman, the CEO of ChatGPT maker OpenAI, pushed for oversight.
Companies such as Google and Microsoft are developing AI at a rapid pace. But when it comes to crime, those companies are not what worry Dr. Harvey Castro, a board-certified emergency medicine physician and national artificial intelligence speaker who has created his own LLM called "Sherlock."
WORLD’S FIRST AI UNIVERSITY PRESIDENT SAYS TECHNOLOGY WILL DISTURB EDUCATION TENETS AND CREATE ‘RENAISSANCE SCHOLARS’
Samuel Altman, CEO of OpenAI, testifies before the Senate Subcommittee on Privacy, Technology and the Law on May 16, 2023, in Washington, D.C. The subcommittee held an oversight hearing to examine AI, with a focus on rules for artificial intelligence. (Photo by Win McNamee/Getty Images)
The bigger threat, he said, is the "unscrupulous 18-year-old" who can create his own LLM without guardrails and protections and sell it to would-be criminals.
“One of my biggest concerns isn’t really the big guys, like Microsoft or Google or OpenAI ChatGPT,” Castro said. “I’m not really too worried about them because I feel like they’re regulating themselves, and the government is watching and the world is watching and everyone is going to regulate them.
"I'm actually more concerned about those teenagers, or someone just out there, who are able to create their own large language model that doesn't follow the rules, and they can even sell it on the black market. I'm really concerned about that as a possibility in the future."
WHAT IS AI?
On April 25, OpenAI announced that ChatGPT now gives users the ability to disable chat history.
"If chat history is disabled, we keep new conversations for 30 days and review them only when necessary to check for abuse before permanently deleting them," OpenAI said in its announcement.
WATCH DR. HARVEY CASTRO EXPLAIN AND DEMONSTRATE HIS LLM "SHERLOCK"
The ability to use that kind of technology with chat history disabled could be beneficial for criminals and problematic for investigators, Castro warned. Two pending criminal cases, in Idaho and Massachusetts, turn the concept into real-world scenarios.
OPENAI CHIEF ALTMAN DESCRIBED WHAT ‘SCARY’ AI MEANS TO HIM, BUT CHATGPT HAS ITS OWN EXAMPLES
Bryan Kohberger was pursuing a Ph.D. in criminology when he allegedly murdered four University of Idaho students in November 2022. Friends and acquaintances have described him as a "genius" and "genuinely intelligent" in previous interviews with Fox News Digital.
In Massachusetts, there is the case of Brian Walshe, who allegedly killed his wife, Ana Walshe, in January and disposed of her body. The murder case against him is based on circumstantial evidence, including a laundry list of alleged Google searches, such as how to dispose of a body.
BRYAN KOHBERGER CHARGED IN IDAHO STUDENT MURDERS
Castro fears that someone with more expertise than Kohberger could build an AI chatbot and erase the kind of search history that could serve as vital evidence in a case like the one against Walshe.
"Usually people can be caught using Google in their history," Castro said. "But if someone has created their own LLM and allows the user to ask questions while telling it not to keep any history, then they can get information on how to kill a person and how to dispose of the body."
At the moment, ChatGPT refuses to answer such questions. It blocks "certain types of unsafe content" and doesn't respond to "inappropriate requests," according to OpenAI.
WHAT IS THE HISTORY OF AI?
Dr. Harvey Castro, a board-certified emergency medicine physician and national artificial intelligence speaker who has created his own LLM called “Sherlock,” talks to Fox News Digital about possible criminal uses of AI. (Chris Eberhart)
During Senate testimony last week, Altman told lawmakers that GPT-4, the latest model, will refuse harmful requests, such as those for violent content, content encouraging self-harm, and adult content.
"Not that we think adult content is inherently harmful, but there are things that can be associated with it that we can't reliably discern. That's why we reject all of it," said Altman, who also discussed other safeguards, such as age restrictions.
"I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations," Altman said in response to a senator's question about what rules should be implemented.
AI TOOLS ARE USED BY POLICE WHO ‘DO NOT UNDERSTAND HOW THESE TECHNOLOGIES WORK’: STUDY
"An example we've used in the past is seeing whether a model can replicate itself and self-exfiltrate into the wild. We can give your office a long list of the other things we think are important there, but specific tests a model must pass before it can be deployed in the world.
"And third, I would require independent audits. So not just from the company or the agency, but from experts who can say whether or not the model meets these stated safety thresholds and these performance rates on question X or Y."
To put the concepts and theory into perspective, Castro said, “I’m guessing 95% of Americans don’t know what LLMs or ChatGPT are,” and he’d prefer it that way.
ARTIFICIAL INTELLIGENCE: FREQUENTLY ASKED QUESTIONS ABOUT AI
An illustration of artificial intelligence hacking data. (iStock)
But there’s a possibility that Castro’s theory could become reality in the not-so-distant future.
He alluded to a now-discontinued AI research project at Stanford University, which was nicknamed “Alpaca.”
A group of computer scientists created a product that cost less than $600 to build, had "very similar performance" to OpenAI's GPT-3.5 model, according to the university's initial announcement, and could run on Raspberry Pi computers and a Pixel 6 smartphone.
WHAT ARE THE DANGERS OF AI? FIND OUT WHY PEOPLE ARE SCARED OF ARTIFICIAL INTELLIGENCE
Despite its success, the researchers ended the project, citing licensing and safety concerns. The product was not "designed with adequate safety measures," they said in a press release.
“We emphasize that Alpaca is only intended for academic research and any commercial use is prohibited,” the researchers said. “There are three factors in this decision. First, Alpaca is based on LLaMA, which has a non-commercial license, so we necessarily inherit this decision.”
CLICK HERE TO GET THE FOX NEWS APP
The researchers continued that the instruction data is based on OpenAI's text-davinci-003, whose "terms of use prohibit developing models that compete with OpenAI. Finally, we have not designed adequate safety measures, so Alpaca is not ready to be deployed for general use."
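The recipe the researchers describe, taking an openly available base model and fine-tuning it on instruction-and-response examples, can be sketched roughly as follows. This is a toy illustration only: it substitutes the small GPT-2 model for LLaMA and two made-up training pairs for Alpaca's 52,000 machine-generated examples, and none of the names here come from Stanford's actual code.

```python
# A sketch of the Alpaca-style recipe: supervised fine-tuning of a small
# causal language model on instruction/response pairs. GPT-2 and the toy
# pairs below are placeholders, not Stanford's model or data.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy instruction-following data in an Alpaca-like prompt format.
pairs = [
    ("Summarize: The cat sat on the mat all day.", "A cat lounged on a mat."),
    ("Translate to French: Good morning.", "Bonjour."),
]

class InstructionDataset(torch.utils.data.Dataset):
    """Tokenizes instruction/response pairs for causal-LM fine-tuning."""
    def __init__(self, pairs):
        self.items = []
        for instruction, response in pairs:
            text = (f"### Instruction:\n{instruction}\n"
                    f"### Response:\n{response}{tokenizer.eos_token}")
            enc = tokenizer(text, truncation=True, max_length=128,
                            padding="max_length", return_tensors="pt")
            labels = enc["input_ids"][0].clone()
            labels[enc["attention_mask"][0] == 0] = -100  # ignore padding
            self.items.append({"input_ids": enc["input_ids"][0],
                               "attention_mask": enc["attention_mask"][0],
                               "labels": labels})
    def __len__(self):
        return len(self.items)
    def __getitem__(self, i):
        return self.items[i]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=InstructionDataset(pairs),
)
trainer.train()  # the supervised fine-tuning step
```

The point of the sketch is how little machinery is involved: with an open base model and a pile of instruction data, the fine-tuning loop itself is a few dozen lines, which is what makes Castro's "unscrupulous 18-year-old" scenario plausible.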
But Stanford's successful creation sparks fear in Castro's otherwise glass-half-full view of how OpenAI and LLMs could potentially change humanity.
“I tend to be a positive thinker,” Castro said, “and I think this is all going to happen for good. And I hope that big companies start putting up their own guardrails and start regulating themselves.”
Chris Eberhart is a crime and U.S. news reporter for Fox News Digital. Email tips to chris.eberhart@fox.com or find him on Twitter @ChrisEberhart48