Alphabet-owned YouTube has unveiled a labelling tool that requires content creators to disclose when videos contain AI-generated or synthetic material, part of an effort to improve transparency and build trust with viewers.

When uploading videos to the platform, creators must disclose whether “altered or synthetic” content, including material made with generative AI (GenAI), is used to depict something viewers could mistake for a real person, place or event.

The new policy doesn’t require creators to identify content “that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance”.

YouTube stated it also won’t require creators to disclose when GenAI is used for productivity tasks, such as generating scripts, content ideas or automatic captions.

The tool in YouTube’s Creator Studio generates labels that appear in a video’s expanded description. For videos on more sensitive topics, such as health, news, elections or finance, YouTube stated it will also place a more prominent label on the video player itself.

YouTube stated viewers will start to see the labels over the coming weeks, first on its mobile app and soon after on desktop and TV.

Last month, rival Meta Platforms also outlined plans to label images posted to its platforms that were generated by other companies’ AI services, part of a wider industry push to align on common technical standards that signal when content has been created using AI.