Her voice thick with anger, an American mother worries about what the future holds for her teenage daughter, one of dozens of girls targeted in yet another AI-enabled pornography scandal that has rocked an American school.
The controversy that erupted last year at Lancaster Country Day School in Pennsylvania highlights a new normal for students and teachers struggling to keep up with a boom in cheap, readily available artificial intelligence tools that make creating startlingly realistic deepfakes easy.
One parent, who spoke to AFP on condition of anonymity, said her 14-year-old daughter came to her “desperately crying” after AI-generated nude photos of her were circulated among her peers last summer.
“What will be the long-term effects of that?” asked the mother, who fears the doctored photos could resurface when her daughter applies to college, starts dating or enters the job market.
“You can’t tell they’re fake.”
Multiple charges were filed last month — including child molestation and possession of child pornography — against two teenage boys who authorities accused of making the photos.
Investigators uncovered 347 images and videos on messaging app Discord affecting a total of 60 victims, most of whom were private school students.
All but one were under 18 years of age.
‘Trouble’
The scandal is the latest in a series of similar incidents in schools across US states from California to New Jersey, prompting the FBI last year to warn that such child sexual abuse material, including realistic AI-generated images, is illegal.
“The rise of generative AI collides with a long-standing problem in schools: the act of sharing non-consensual intimate images,” said Alexandra Reeve Givens, chief executive of the nonprofit Center for Democracy and Technology (CDT).
“In the digital age, children desperately need help to deal with technology-enabled harassment.”
A CDT survey of public schools last September found that 15 percent of students and 11 percent of teachers knew of at least one “deepfake” depicting someone associated with their school in a sexually explicit or intimate manner.
Such non-consensual imagery can lead to harassment, bullying or blackmail, sometimes with devastating mental health consequences.
The mother who spoke to AFP said she knew victims who didn’t go to school, had trouble eating or needed medical care and counseling to cope.
She said she and other parents who were brought to the office of a detective investigating the deepfakes were shocked to find the printed images stacked “a foot and a half high.”
“I wanted to see pictures of my daughter,” she said.
“If someone sees it, they’ll think it’s real, so it’s even more hurtful.”
‘Exploitation’
The alleged perpetrators, whose names have not been released, are accused of lifting photos from social media, altering them using AI applications and sharing them on Discord.
The mother told AFP that the fake photos of her daughter were created primarily from public photos posted on the school’s Instagram page, as well as screenshots of FaceTime calls.
A simple online search brings up dozens of apps and websites that allow users to create “deepnudes”, digitally remove clothes or superimpose selected faces over pornographic images.
“While the results may not be as realistic or compelling as a professional production, these services mean no technical skills are required to produce deepfake content,” Roberta Duffield, director of intelligence at Blackbird.AI, told AFP.
Only a handful of US states have passed laws to combat sexually explicit deepfakes, including Pennsylvania late last year.
Top leadership at the Pennsylvania school resigned after parents of victims filed a lawsuit accusing the administration of failing to report the activity when it was first alerted to it in the fall of 2023.
Researchers say schools are inadequately equipped to deal with the threat of rapidly evolving AI technology because legislation is still catching up.
“Young girls are being deeply exploited by their friends, by their peers, by their schoolmates,” Duffield said.
“Education authorities should urgently develop clear, comprehensive policies regarding the use of AI and digital technologies.”
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)