YouTube has unveiled a new tool within Creator Studio that requires creators to disclose when content is produced using altered or synthetic media, including generative AI.
The announcement comes amid growing demand from viewers for clarity about the authenticity of the content they watch.
The new tool requires creators to disclose when their content is realistic, defined as content that a viewer could easily mistake for a real person, place, or event. Such content includes digitally altered depictions of individuals, modified footage of real events or locations, and realistic scenes of fictional major events.
“The new label is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube announced.
These disclosure labels will be displayed either in the expanded description or prominently on the video player, ensuring viewers are informed about the nature of the content they are watching.
However, YouTube clarified that not all content produced with generative AI requires disclosure. Creators are not obliged to disclose content that is clearly unrealistic, animated, includes special effects, or has used generative AI for production assistance, such as generating scripts or automatic captions.
The platform will roll out the labels across all YouTube surfaces and formats in the coming weeks, starting from the YouTube app on mobile devices and expanding to desktop and TV platforms.
While YouTube aims to give creators time to adjust to the new process, the platform hinted at potential enforcement measures for creators who consistently fail to disclose relevant information.

Additionally, YouTube may add disclosure labels independently if the altered or synthetic content has the potential to confuse or mislead viewers.
Along with these efforts, YouTube continues to collaborate with industry stakeholders, including its involvement as a steering member of the Coalition for Content Provenance and Authenticity (C2PA).
YouTube is also working on an updated privacy process that will allow individuals to request the removal of AI-generated or synthetic content that simulates identifiable people.
As AI gathers momentum, it also opens up troubling possibilities, such as the misleading content generated by Gemini a few days ago. In response, Google paused Gemini's image generation capabilities and programmed the chatbot to avoid answering election-related queries.