The United Nations Children’s Fund (UNICEF) has expressed serious concern over the rising use of artificial intelligence to create sexualised deepfake images of children, warning that the threat is real and growing.

In a statement issued on Thursday, UNICEF urged governments and the AI industry to take immediate action to stop the creation and spread of such content, stressing that children cannot wait for laws to catch up with technology.

The agency explained that AI-powered tools are increasingly being misused for “nudification,” in which images are digitally altered to produce fake sexualised content involving minors. UNICEF said current prevention measures are inadequate and described the situation as an unprecedented challenge for child protection systems.

Citing research conducted with ECPAT and Interpol under the Disrupting Harm project, the agency said that, across 11 countries, at least 1.2 million children reported that their images had been turned into sexually explicit deepfakes in the past year, amounting to one in every 25 children in some countries.