Summary: YouTube is rolling out a new AI-powered tool that helps creators in its Partner Program detect unauthorized videos using their likeness, including deepfakes. After verifying their identity, creators can review flagged videos and request removal of any AI-generated content that misuses their image. This feature is gradually becoming available and aims to help creators manage their presence on the platform amid growing AI-generated content.
How the Likeness Detection Tool Works
Starting today, creators in YouTube’s Partner Program can access a new AI detection feature designed to identify videos that use their likeness without permission. Once creators verify their identity, they can visit the Content Detection tab in YouTube Studio to review videos flagged by the system. If a video appears to be unauthorized AI-generated content—such as deepfakes—creators can submit a request to have it removed.
Early Access and Rollout
The first group of eligible creators received email notifications this morning, and YouTube plans to gradually expand access to more creators over the coming months. Because the tool is still in development, YouTube notes that it may sometimes flag videos featuring a creator’s actual face rather than an altered or synthetic version. The approach is similar to YouTube’s existing Content ID system, which detects copyrighted audio and video in uploads.
Background and Development
YouTube initially announced this feature last year and began testing it in December through a pilot program with talent represented by Creative Artists Agency (CAA). At the time, YouTube stated, “Through this collaboration, several of the world’s most influential figures will have access to early-stage technology designed to identify and manage AI-generated content that features their likeness, including their face, on YouTube at scale.”
YouTube’s Broader AI Content Policies
YouTube and its parent company Google are actively developing AI video generation and editing tools. The likeness detection tool is just one part of YouTube’s efforts to address AI-generated content on the platform. For example, last March, YouTube began requiring creators to label uploads that include AI-generated or altered content. Additionally, YouTube introduced a strict policy targeting AI-generated music that mimics an artist’s unique singing or rapping voice.