Fake nude photos of 30 minors in New Jersey raise alarms about AI-generated content

Francesca Mani, one of the affected high school students, is demanding that Westfield High School authorities and legislators take the initiative to protect minors.

"No girl knew what was happening, but we knew the boys knew something," said Westfield High School student Francesca Mani after leaving the principal's office crying. A classmate had used artificial intelligence (AI) to generate and distribute nude photos of her.

Like her, more than 30 girls were victims of the deepfake scandal that rocked New Jersey. Francesca and her mother, Dorota Mani, have been campaigning in the national media to publicize the case and, to prevent it from happening again, to call on lawmakers to regulate the use of artificial intelligence.

In her most recent interview, broadcast on CNN, Francesca Mani said she knew who was responsible and that he had not faced any punishment. The young woman asked the school to help her classmates feel "more comfortable, cause many girls don't feel comfortable knowing that he's walking our hallways."

The school says it has opened an investigation. "We made counseling available for all affected students and encouraged them to return to class when they felt able to do so," school authorities assured the parents of the victims in an email obtained by the New Jersey Digest.

"I wanted to make you aware of the situation because, in addition to harming the students involved and disrupting the school day, it makes it critically important to talk with your children about their use of technology and what they are posting, saving and sharing on social media," the email continued. "New technologies have made it possible to falsify images, and students need to know the impact and damage those actions can cause to others."

Rise of deepfake porn

The problem of pornographic content generated with AI has been accelerating in recent months. Deepfake videos on websites that host this type of content increased by 54% in the first nine months of the year, according to an independent analysis published by Wired. At this rate, more deepfake videos will have been produced in 2023 than in all previous years combined.

"In the wrong hands, generative AI could do untold damage," wrote Brendan Walker-Munro, a researcher at the University of Queensland in Australia, for The Conversation. "There’s a lot we stand to lose, should laws and regulation fail to keep up."

In the absence of federal regulation, several states have created their own rules; California, Texas, Virginia and New York are some examples. However, the anonymity of the creators, the difficulty of detecting AI-generated content and a general lack of technical knowledge present barriers for legislators, in addition to the challenge of regulating the tool without restricting freedom of expression.

The laws are not the only thing that has failed to keep pace. Users cannot reliably distinguish real content from fake, yet they tend to believe they can. That was the finding of the study "Fooled Twice: People Cannot Detect Deepfakes But Think They Can," which also reports that deepfakes are tricky to identify: "People are biased toward mistaking deepfakes as authentic videos (rather than vice versa)."

"These results suggest that people adopt a 'seeing-is-believing' heuristic for deepfake detection while being overconfident in their (low) detection abilities," wrote researchers Nils Köbis, Barbora Doležalová and Ivan Soraperra. "The combination renders people particularly susceptible to be influenced by deepfake content."