On Wednesday, OpenAI unveiled a policy blueprint aimed at addressing the growing problem of AI-enabled child sexual exploitation. The framework proposes new industry safety measures to deter the creation of child sexual abuse material using AI technologies.
OpenAI’s plan includes legal, operational, and technical strategies designed to strengthen protections against AI-facilitated abuse and foster closer collaboration between tech companies and law enforcement agencies. The company said child sexual exploitation remains a significant challenge in the digital era, and that AI is changing both how such abuse emerges and how it can be countered at scale.
The proposal integrates insights from organizations specializing in child protection and online safety, such as the National Center for Missing & Exploited Children and the Attorney General Alliance’s AI task force. Michelle DeLaune, President & CEO of the National Center for Missing & Exploited Children, warned that generative AI is exacerbating online child exploitation by lowering barriers to abuse and enabling new forms of harm. However, she expressed optimism about companies like OpenAI designing tools more responsibly, with safeguards built in.
OpenAI’s framework combines legal standards, industry reporting mechanisms, and technical safeguards within AI models to identify risks early and strengthen accountability across digital platforms. It outlines actions such as updating laws to cover AI-generated child abuse material, improving how online providers report abuse signals, and building safeguards into AI systems to prevent misuse.
“No singular solution can tackle this issue alone,” OpenAI stated. “This framework combines legal, operational, and technical methods to better identify risks, expedite responses, and support accountability, while maintaining robust enforcement as technology progresses.” This initiative emerges amid concerns from child safety advocates about generative AI systems that could produce manipulated or synthetic images of minors. In February, UNICEF urged governments worldwide to criminalize AI-generated child abuse material.
In January, the European Commission initiated a formal probe into whether X, formerly known as Twitter, breached EU digital regulations by not preventing its native AI model, Grok, from generating illegal content, with similar investigations launched in the UK and Australia. OpenAI acknowledged that legislation alone is insufficient to curb AI-generated abuse material, stressing the need for stronger industry standards as AI capabilities expand.
“By disrupting exploitation attempts earlier, enhancing signals sent to law enforcement, and fortifying ecosystem accountability, this framework aims to prevent harm proactively and ensure quicker child protection when risks arise,” OpenAI concluded.