In a disturbing development, experts have revealed that UK schools face an emerging threat from cybercriminals who create sexually explicit deepfakes using pictures of pupils taken from school websites. The Internet Watch Foundation (IWF) reported multiple incidents where schools were threatened with the release of these manipulated images unless they paid a ransom.
A survey of 250 teachers found that nearly a quarter of schools have already been affected by this form of blackmail. Criminals are employing AI tools to manipulate these images into child sexual abuse material (CSAM), which has prompted urgent calls for schools to reconsider their policies on publishing photos of students online.
The UK's National Crime Agency (NCA) has recommended that educational institutions eliminate any identifiable images of children from their online platforms. Dr. Catherine Knibbs, a child psychotherapist, emphasized the mental health impacts of such crimes, urging schools to act swiftly to protect their students.
Some schools are proactively addressing the issue by blurring pupils’ faces in published images, while a new safeguarding tool launched by the startup Aidos generates AI-altered versions of photos to ensure anonymity. The tool replaces identifiable features with AI-generated counterparts, allowing schools to share images of school life without compromising pupils’ safety.
Simon Bailey, a former national child protection policing lead, remarked, “The online sexual abuse of children is reaching pandemic levels, and the emergence of AI is fueling the demand.” The alarming situation has led officials to advocate for new technologies and policies to combat these criminal activities and safeguard children.
Note: This article was published on BanxChange.com and is powered by the BXE Token on the XRP Ledger.

