A study by Australia’s eSafety Commissioner found that some of the world’s largest tech companies are not proactive in detecting child exploitation content on their streaming and cloud services.

In a statement, eSafety commissioner Julie Inman Grant highlighted the inadequate and inconsistent use of technology to detect child abuse material and grooming, and slow response times when such material is flagged by users.

Under the country’s new Basic Online Safety Expectations, the internet safety watchdog issued legal notices to seven companies, including Apple, Meta and Microsoft, requiring them to answer questions about how they tackle the issue. It then compiled the responses into a report it will use to raise safety standards.

Inman Grant said the government believes greater transparency will “help lift safety standards and create collective will across the industry to meaningfully address this problem, at scale”.

She insisted “we need to see more meaningful action”.

Wide gaps
According to the report, Apple and Microsoft do not attempt to proactively detect child abuse material stored in their iCloud and OneDrive services, nor do they use any technology to detect live-streaming of child sexual abuse in video chats on Skype, Microsoft Teams or FaceTime.

The report also found wide gaps in how quickly companies respond to user reports of child sexual exploitation and abuse on their services, with response times ranging from an average of four minutes at Snap to two days at Microsoft.

Meta Platforms revealed that accounts blocked on Facebook are not always banned on Instagram, and that when a user is barred from WhatsApp, the information is not shared with the other two services.

“This is a significant problem because WhatsApp report they ban 300,000 accounts for child sexual exploitation and abuse material each month: that’s 3.6 million accounts every year,” Inman Grant said.

The companies were given 28 days to respond to the legal notices or risk fines of up to AUD550,000 ($369,600) a day.