An Instagram logo is displayed on a smartphone. (SOPA Images | LightRocket | Getty Images)
According to an investigation published Wednesday, Instagram’s recommendation algorithms have linked and promoted accounts that enable and sell child sexual abuse content.
Meta’s photo-sharing service differs from other social media platforms and “appears to have a particularly serious problem” with accounts displaying self-generated child sexual abuse material, or SG-CSAM, Stanford University researchers wrote in an accompanying paper. Such accounts claim to be operated by minors.
“Because of the widespread use of hashtags, the relatively long life span of seller accounts, and most importantly, its effective recommendation algorithm, Instagram serves as the primary discovery mechanism for this particular community of buyers and sellers,” said the study, which was cited in a joint investigation by The Wall Street Journal, Stanford University’s Internet Observatory Cyber Policy Center, and the University of Massachusetts Amherst.
While the accounts could be found by any user who searched for explicit hashtags, the researchers found that Instagram’s recommendation algorithms also promoted them “to users viewing an account in the network, enabling account discovery without keyword searching.”
A Meta spokesperson said in a statement that the company has taken several steps to address the issue and has “set up an internal task force” to investigate the claims.
“Child exploitation is a heinous crime,” the spokesperson said. “We are working aggressively to combat it on and off our platforms and support law enforcement in their efforts to apprehend and prosecute the criminals behind it.”
Alex Stamos, Facebook’s former chief security officer and one of the paper’s authors, said in a tweet Wednesday that the researchers focused on Instagram because its “position as the most popular platform for teens worldwide makes it a critical part of this ecosystem.” However, he added: “Twitter continues to have serious problems with the exploitation of children.”
Stamos, who is now director of the Stanford Internet Observatory, said the problem persisted after Elon Musk took over Twitter late last year.
“What we found is that Twitter’s basic scan for known CSAM broke after Mr. Musk’s acquisition and wasn’t fixed until we notified them,” Stamos wrote.
“They then cut off our API access,” he added, referring to the software that allows researchers to access Twitter data to conduct their studies.
Earlier this year, NBC News reported that multiple Twitter accounts offering or selling CSAM had remained available for months, even after Musk promised to address child exploitation issues on the social messaging service.
Twitter has not commented on this story.