YouTube expands AI likeness detection to all adults, flags deepfakes

YouTube opened its AI face-matching tool to all adults, letting users hunt for deepfakes of themselves, but the cleanup still falls on victims.

Lisa Park · 2 min read
Source: theverge.com

YouTube opened its AI likeness detection tool to every user over 18, broadening a system that scans for videos where a person’s face appears to have been altered or generated by AI. The company says the feature can alert enrolled users when it finds a match so they can review the content and seek removal through YouTube’s privacy complaint process.

The rollout pushes YouTube further into digital-identity protection at a moment when AI impersonation has become a real threat for creators, public figures and ordinary adults whose faces can be copied and circulated at speed. YouTube has said the tool works like Content ID, but for a person’s facial likeness rather than copyrighted audio or video. That makes the system a new kind of rights filter, but one that still depends on the person being targeted to enroll, monitor alerts and file the request to take material down.

AI-generated illustration

YouTube first limited likeness detection to creators in the YouTube Partner Program, then expanded it to a pilot group of government officials, journalists and political candidates. On April 21, 2026, the company said it was extending access to the entertainment industry, including talent agencies, management companies and the celebrities they represent. The latest expansion to all adults marks the broadest test yet of whether a platform can move from reactive moderation toward something closer to personal digital-rights protection.

The feature remains experimental and is not available in some countries. YouTube says participants must consent and submit a reference face, and that it discards data for faces that do not match an enrolled user. It also warned that during the experimental phase, the system may surface actual footage of the enrolled creator, not just altered or AI-generated clips. That means the burden of sorting legitimate use from abuse still lands on the individual, who must decide whether the content is harmless, unauthorized or harmful enough to challenge.


YouTube has publicly backed the NO FAKES Act and says its likeness-management tools are part of a wider effort to protect creators and viewers from AI-generated impersonation. CEO Neal Mohan and other company executives have framed the work as part of a broader product roadmap for AI safeguards, but the central question remains unchanged: whether a detection tool that flags deepfakes in time can meaningfully shift power back to the people whose identities are being used without their consent.
