TikTok and Snapchat were among 30 companies and government entities to sign a joint statement vowing to clamp down on the spread of AI-generated child sexual abuse images.

The statement, issued by the UK government, was also signed by government agencies from the US, Germany, South Korea and Australia.

Those involved resolved “to sustain the dialogue and technical innovation around tackling child sexual abuse in the age of AI” to ensure risks “do not become insurmountable”.

The group cited Internet Watch Foundation (IWF) data showing almost 3,000 of a little more than 11,000 AI-generated images on one dark web forum depicted child sexual abuse.

UK home secretary Suella Braverman stated the “pace at which these images have spread online is shocking and that’s why we have convened such a wide group of organisations to tackle this issue head-on. We cannot let this go on unchecked”.

IWF chief executive Susie Hargreaves noted the charity had raised concerns about the problem in July, arguing "we have seen all our worst fears about AI realised" since.

“It is essential, now, we set an example and stamp out the abuse of this emerging technology before it has a chance to fully take root.”

The joint statement was issued ahead of a UK government AI Safety Summit scheduled to begin on 2 November.