Summary: YouTube is rolling out a new AI-powered “likeness detection” feature for creators in its Partner Program. This tool helps creators identify and report unauthorized videos that use their face or likeness, especially deepfakes. After verifying their identity, creators can review flagged videos in YouTube Studio and request removal of any AI-generated content they didn’t authorize. The feature is in its early stages and will expand to more creators over the coming months.
Introducing YouTube’s AI Likeness Detection Tool
Starting today, creators in YouTube’s Partner Program can access a new AI detection feature designed to help them find and report unauthorized videos that use their likeness. This tool aims to give creators more control over how their image is used on the platform, especially as AI-generated deepfakes become more common.
How the Tool Works for Creators
Once creators verify their identity, they can visit the Content Detection tab in YouTube Studio to review videos flagged by the AI. If a video appears to be an unauthorized AI-generated deepfake or altered content featuring their face, creators can submit a removal request directly through the platform.
YouTube notes that since the feature is still in development, it may also flag videos featuring the creator’s actual face—such as clips from their own content—not just synthetic or altered versions. This system works similarly to YouTube’s existing Content ID technology, which detects copyrighted audio and video.
Early Access and Rollout Plans
The first group of eligible creators received email notifications about the new feature this morning. YouTube plans to gradually roll it out to more creators over the next few months. The tool was initially announced last year and began pilot testing in December with talent represented by Creative Artists Agency (CAA).
In a blog post at the time, YouTube explained, “Through this collaboration, several of the world’s most influential figures will have access to early-stage technology designed to identify and manage AI-generated content that features their likeness, including their face, on YouTube at scale.”
YouTube’s Broader Efforts Against AI-Generated Content
YouTube and its parent company Google are actively developing tools to address the rise of AI-generated videos, and this likeness detection feature is just one part of that strategy. Last March, YouTube began requiring creators to label videos that include AI-generated or altered content. YouTube also introduced a strict policy against AI-generated music that imitates an artist’s unique singing or rapping voice.