AI: Fears hundreds of children globally used in naked images

Miriam Al Adib's daughter had AI-generated indecent images made of her

The mother of a girl whose photo was used in AI-generated naked images says hundreds of parents have told her their children are also victims.

Miriam Al Adib's daughter was one of several children from a Spanish village who had indecent images created using photos of them fully clothed.

She says parents around the world claim their children have also been targeted.

One Welsh teacher said schools needed to play a role in explaining the dangers of AI to children.

The Internet Watch Foundation said it was "not surprising" the practice was so widespread.

The town of Almendralejo hit the headlines in September after more than 20 girls, aged between 11 and 17, had AI-generated indecent images shared online without their knowledge.

Ms Al Adib was among a group of parents who created a support group for those affected, which she said led to many other parents contacting her with their own concerns.

"Hundreds of people have written to me saying 'how lucky you have been [to have support] because this same thing has happened to us, it happened to my daughter, or it happened to me, and I haven't had any support'," she told Wales Live.

"If any girl is affected, please tell your parents."

Spanish authorities have launched an investigation into the images

Ms Al Adib said the group had helped the mothers and fathers of those affected to support each other and their children.

She added: "This helped many girls to come forward to also say what had happened to them. It is important to know, because many girls do not dare to talk about this with their parents."

She said the combination of access to social networks, pornography and artificial intelligence was a "weapon of destruction".

The UK's first AI safety summit last week heard Home Secretary Suella Braverman commit to clamping down on AI-generated child sexual abuse material.

The UK government said: "AI-generated child sexual exploitation and abuse content is illegal, regardless of whether it depicts a real child or not.

"The Online Safety Act will require companies to take proactive action in tackling all forms of online child sexual abuse - including grooming, live-streaming, child sexual abuse material and prohibited images of children - or face huge fines."

Susie Hargreaves, chief executive of the Internet Watch Foundation, said child sexual abuse material generated through AI needs to be addressed "urgently".

She said she was concerned there could be a "tsunami" of images created in the future.

"That's because it's not something that's about to happen. It is happening," she said.

In its October 2023 report, the foundation said that in just one month more than 20,000 AI-generated images had been found on one forum which shares child sexual abuse material.

Comments included congratulations for creators on the realism of pictures, and users saying they had created images from pictures they'd taken of children in a park.

Dr Tamasine Preece, who leads health and wellbeing at Bryntirion Comprehensive School in Bridgend, said social media and smartphones meant her role had changed "immeasurably" since she started teaching.

She said it was "absolutely vital schools play a pivotal role" in teaching children about topics like the dangers of AI.

Wales Live showed her an advert which claims an app can generate nude photos, which she described as "heart-breaking".

"We as adults can bring them out into the foreground in a safe way rather than these subjects being taboo and discussed amongst themselves sharing misinformation," she added.

The Lucy Faithfull Foundation, which works with offenders to tackle child sexual abuse, said it was bracing itself for an "explosion" of child sexual abuse material created by AI.

What is AI and how does it work?

AI (artificial intelligence) allows computers to learn and solve problems almost like a person.

AI systems are trained on huge amounts of information and learn to identify the patterns in it, in order to carry out tasks such as having human-like conversations, or predicting a product an online shopper might buy.

The technology is behind the voice-controlled virtual assistants Siri and Alexa, and helps Facebook and X, formerly known as Twitter, decide which social media posts to show users.

Many experts are surprised by how quickly AI has developed, and fear its rapid growth could be dangerous. Some have even said AI research should be halted.

In October, the UK government published a report which said AI might soon assist hackers to launch cyberattacks or help terrorists plan chemical attacks.

What rules are in place at the moment about AI?

In the EU, the Artificial Intelligence Act, when it becomes law, will impose strict controls on high-risk systems.

The UK government previously ruled out setting up a dedicated AI watchdog.

But Prime Minister Rishi Sunak wants the UK to be a leader in AI safety, and hosted a global summit at Bletchley Park where firms and governments discussed how to tackle the risks posed by the technology.