Meta Platforms outlined plans to label images generated by competitors’ AI services, part of a push by industry players to align on common technical standards that signal when content has been created using the technology.

In a blog post, the Facebook and Instagram owner stated it would apply labels in the coming months to inform users when an image has been created using AI services run by OpenAI, Microsoft, Google and others.

The company already labels images posted on its platforms that were generated using its in-house AI tools, and now plans to extend the labelling to third-party services.

Nick Clegg, president of Global Affairs at Meta Platforms, explained the difference between human-created and synthetic content continues to blur, and people want to know where the boundary lies.

He said users are often coming across AI-generated content for the first time and are keen to have transparency around the technology.

“This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” said Clegg. “People and organisations that actively want to deceive people with AI-generated content will look [for] ways around safeguards that are put in place to detect it.”

Clegg, however, added it was more difficult to mark and identify AI-generated audio and video content, with solutions still being developed.

The company is also unable to label written text generated by platforms including ChatGPT, with Clegg telling Reuters “that ship has sailed”.