Global Courant 2023-05-04 11:21:30
Rapidly evolving artificial intelligence technology might have prevented the Boston Marathon bombing, but it could also become law enforcement's latest nightmare.
That was the message from Ed Davis, who was Boston's police commissioner during the deadly terrorist attack on April 15, 2013. Ten years after the plot that killed three people and injured hundreds, he told Fox News Digital that AI could mean "finally closing the investigations and allow[ing] many dangerous criminals to be brought to justice."
“The use of artificial intelligence systems applied to classified and top-secret databases could very well have prevented the Boston Marathon bombing,” he said.
“However, it could be years before this happens. But at this point, the government needs to be aware of the drawbacks and mitigate the risks of AI.”
Medical personnel help injured people after bombs exploded near the finish line of the Boston Marathon on April 15, 2013. (AP)
Davis, who has since retired and founded a security consultancy, testified last week at a Senate subcommittee hearing on emerging national security threats about how new technology, particularly AI, has changed policing over the past 10 years.
“One of the key benefits of new AI-driven imaging devices is the ability to transform traditional video surveillance systems into real-time data sources and proactive investigative tools,” Davis said at the hearing.
“Today’s cameras and coordinated systems have the potential to provide real-time analytics, identify potentially dangerous items, and respond and pivot based on crowd dynamics, such as abnormal movement patterns or crowding.”
The seeds to create a proactive law enforcement system already exist.
Boston Police Commissioner Ed Davis testifies at the Senate Homeland Security and Government Affairs Committee hearing on July 10, 2013, to discuss the lessons learned from the Boston Marathon bombings. (AP Photo/J. Scott Applewhite)
Davis mentioned a number of companies using AI software that essentially analyzes a wealth of video data to anticipate what is likely to happen in a crowd.
For example, the video analysis company Genetec uses high-tech cameras and the AI software Vintra, which can learn "from normal activity" to notify operators of "impending threats and anomalies."
Vintra IQ can review live or recorded videos 20 times faster than an average human, identify all individuals a suspect has come into contact with, and then rank those interactions to identify the most recurring relationships, the company said on its website.
The AI software re-creates the suspect's "separate" patterns, matches events to those patterns and finds the "anomalies where known patterns are violated" in real time, according to Vintra's website.
That is the kind of search law enforcement agencies performed by hand when they hunted the bombers, but AI-driven technology "solves the manual search of the overwhelming amount of data produced" by thousands of cameras, Davis said.
In the case of the Boston Marathon bombing, photos and videos of witnesses were posted all over social media, and armchair detectives came up with all sorts of conspiracies and theories that turned the manhunt into a runaway train.
There was internal debate over whether to release the blurry security footage showing Tamerlan and Dzhokhar Tsarnaev as the prime suspects; Davis supported releasing it.
"I was convinced that if we got [the photos] out in a matter of hours, people would know who committed these crimes and would help us," Davis said in an interview for Netflix's recent documentary about the bombing.
Boston Marathon bombers Tamerlan Tsarnaev, left, and Dzhokhar Tsarnaev (The Associated Press / Lowell Sun & Robin Young / File)
The photos were eventually released, but not before they were leaked to the press.
Within 24 hours of the photos being released, 19-year-old Dzhokhar was identified as one of the bombers, and detectives retraced the teen’s steps, which led them to his older brother, the mastermind behind the plot.
“Since 2013, the government has made significant improvements in security measures, including cybersecurity, border security and emergency planning,” Davis told lawmakers.
“These improvements include more advanced technologies, more comprehensive planning, and greater public education and awareness, supported by many private-public relationships and innovative companies.”
Joselyn Perez of Methuen, Massachusetts, who attended the 2013 Boston Marathon and survived the bombing, hugs her mother, Sara Valverde Perez, who was hospitalized as a result of the bombing that day, in Boston on April 15, 2021. Joselyn’s brother, Yoelin Perez, hugs them as church bells ring to mark the moment the bombs went off eight years earlier. (Jessica Rinaldi/Boston Globe via Getty Images)
But what happens when the same technology is in the hands of bad guys?
The same data could be used to disrupt an ongoing investigation and send police on a wild goose chase, Davis said.
In the aftermath of the Boston bombing, Davis said, investigators had to sift through thousands of photos to authenticate them because members of the public had used Photoshop to doctor images.
“They photoshopped a suspicious person on a roof near the attacks and photoshopped a bag at the site of the attack into another photo,” he said. “These edited photos added an extra challenge, requiring us to verify and rule out public fakes, complicating the monumental tasks that were already at hand.”
And that’s technology that existed 10 years ago.
Now “deepfakes” use AI to create realistic fake images of people and replicate their voices. The technology is already being used in ransom scams and other schemes.
“These ‘deepfakes,’ when used to distort or disrupt an investigation, present a clear challenge for law enforcement that Congress and legislation must anticipate and prepare for,” Davis said.
“Legislation and regulation must be put in place to protect this profound technological advancement as it continues to expand. Nefarious use of AI poses a clear and present threat to the safety of the American public.”
“Technology will save lives,” Davis said. But “as new technology becomes available to law enforcement, it is also available to criminals and terrorists.”
Davis said the private sector is already using the latest AI technology, but police “still lag miserably behind,” in part due to a lack of resources and information. (Reuters / Dado Ruvic / illustration)
But there is also a general reluctance among law enforcement officials to adopt “controversial techniques,” the former Boston commissioner said.
Therefore, he argues, it is vital that government officials get involved to provide a framework for “acceptable police procedures.”
But right now, advances in AI technology seem to be moving faster than congressional debate, and the White House hasn’t discussed the topic much publicly since it released its “Blueprint for an AI Bill of Rights” last October.
Chris Eberhart is a crime and U.S. news reporter for Fox News Digital. Email tips to [email protected] or find him on Twitter @ChrisEberhart48.