
YouTube has announced a series of measures to address the risks associated with generative AI on its platform.
Jennifer Flannery O’Connor and Emily Moxley, vice-presidents of product management at YouTube, shared insights into the company’s evolving approach.
Generative AI, touted for its potential to unlock new dimensions of creativity on YouTube, is also recognised for introducing risks that demand careful consideration.

One of the key changes is the introduction of disclosure requirements and new content labels. Over the coming months, YouTube will require creators to disclose when they have produced realistic altered or synthetic content, especially when it involves sensitive topics such as elections, conflicts, public health crises, or public officials. Creators who fail to comply may face content removal, suspension from the YouTube Partner Program, or other penalties.
To keep viewers informed, YouTube will introduce labels in two forms: a new label in the description panel and a more prominent label on the video player for certain sensitive topics. Notably, some synthetic media, regardless of labelling, may be removed if it violates community guidelines, particularly content aimed at shocking or disgusting viewers.

In response to community feedback, YouTube will enable users to request the removal of AI-generated or synthetic content that simulates identifiable individuals, including their face or voice. The privacy request process will consider factors such as parody, satire, unique identification, and the involvement of public officials or well-known individuals.
Music partners will also gain the ability to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice, with considerations for factors such as news reporting, analysis, or critique.
Additionally, YouTube will leverage AI technology for content moderation, enhancing the speed and accuracy of its systems in identifying potentially violative content. The company emphasises a responsible approach to AI innovation, prioritising the development of guardrails to prevent the generation of inappropriate content and actively seeking user feedback to improve protections.