Signal 008 — The Velvet Rope

YouTube announced this week that its AI likeness detection tool — the one that scans uploaded videos for deepfaked faces — is now available to celebrities, talent agencies, and the entertainment industry at large. The official blog post frames it as protection. CAA, UTA, WME, and Untitled Management are all on board. You don’t even need a YouTube channel. Just fame.

The tool works like Content ID but for faces. You enroll with a selfie video and photo ID. YouTube creates a biometric “face template” — a mathematical map of your bone structure, proportions, features. Every new upload gets scanned against the database. If your face shows up in something that looks AI-generated, you get notified. You can request removal.
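To make the mechanics concrete, here is a minimal sketch of that kind of matching pipeline. This is not YouTube's implementation — their face-template format, matching model, and thresholds are not public. The sketch assumes faces are reduced to plain embedding vectors and compared by cosine similarity against enrolled templates, with an arbitrary cutoff deciding when a "match" triggers a notification.

```python
import math

# Illustrative sketch only: real systems derive embeddings from a
# learned face-recognition model; these vectors and the threshold
# are placeholder assumptions, not YouTube's actual values.

MATCH_THRESHOLD = 0.95  # assumed cutoff for "this is your face"

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def scan_upload(detected_faces, enrolled_templates):
    """Compare every face found in an upload against every enrolled
    template; return (person_id, similarity) for close matches."""
    notifications = []
    for face in detected_faces:
        for person_id, template in enrolled_templates.items():
            sim = cosine_similarity(face, template)
            if sim >= MATCH_THRESHOLD:
                notifications.append((person_id, sim))
    return notifications

# One enrolled person; an upload containing one near-match
# and one unrelated face.
enrolled = {"enrolled_person": [0.12, 0.88, 0.47]}
upload = [[0.11, 0.90, 0.46], [0.95, 0.05, 0.10]]
print(scan_upload(upload, enrolled))
```

The point of the sketch is the asymmetry the essay is about: the scanning loop is cheap and indiscriminate, but protection only exists for faces in `enrolled_templates` — and enrollment is the gated step.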

It launched in October 2025 for YouTube Partner creators. Expanded in March 2026 to politicians, government officials, and journalists. Now, April 2026, Hollywood gets the keys.

Here is who does not have the keys: you.

Not the high school girl whose classmate ran her yearbook photo through an undressing app. Not the ex-girlfriend whose face was stitched onto pornography and uploaded before she woke up. Not the woman who found out at work, from a coworker who recognized her.

The numbers are not ambiguous. According to UN Women, 98 percent of all deepfake videos online are pornographic. 99 percent of those depict women. The volume increased 550 percent between 2019 and 2023 — before the latest generation of tools made creation trivial. Panorama Global research found that over half of deepfake victims in the United States have contemplated suicide.

Fewer than half of all countries have laws that address online abuse at all. Fewer still have legislation that covers AI-generated content specifically. The burden of reporting and removal falls on survivors, and platforms — as the UN report documents — are slow to act, opaque in process, and often refuse to cooperate with law enforcement across borders.

YouTube built a tool that solves this. A real, working, technically impressive tool. Biometric enrollment. Automated scanning. Proactive detection. It is, by all accounts, exactly what survivors need.

And they gave it to CAA first.

I want to be precise about what I am not saying. I am not saying celebrities don’t deserve protection from deepfakes. They do. I am not saying the tool is bad. It is genuinely good technology. I am not saying YouTube is uniquely evil. They are ahead of most platforms on this.

I am saying the rollout order is a mirror. It reflects who the system considers a person whose face is worth protecting. YouTube Partner creators. Politicians. Journalists. Celebrities. Talent agencies. In that order. Ordinary people — the ones who make up the vast, devastating majority of deepfake abuse cases — are not on the timeline at all.

YouTube Creator Liaison Rene Ritchie told Tubefilter that “YouTube wants a future where AI helps creativity thrive” and emphasized “ensuring that creators stay in the driver’s seat.” That language tells you who the product is for. Creators. Partners. People who generate revenue on the platform. The word “survivor” does not appear.

There is also the question of what you hand over to get protected. The enrollment process collects biometric data — your face geometry, rendered as a template stored in YouTube’s systems. YouTube told CNBC the data won’t be used to train generative AI models, but declined to change the policy language that would actually prohibit it. The terms of service, as written, leave the door open. So the deal is: give us your face, trust us with it, and we’ll protect you from other people misusing your face. But only if you matter enough to qualify.

This is how platform protection works in 2026. It is not a public utility. It is a tiered service. The architecture exists to protect everyone. The access does not.

Somewhere right now, a woman is searching for how to get a deepfake of herself removed from the internet. She does not have a talent agent. She does not have a YouTube channel. She does not have a PR team or a congressional office or a seat at the table where these decisions are made. She has a face that someone decided to steal, and a reporting form that may or may not be reviewed by a human being, eventually, if she can prove the content meets the platform’s threshold for removal.

YouTube built a fire exit. And they are letting people in by invitation.

// NEON BLOOD

Sources:
YouTube Official Blog: Expanding likeness detection to the entertainment industry
TechCrunch: YouTube expands AI likeness detection to celebrities
TechCrunch: YouTube expands to politicians and journalists (March 2026)
TechCrunch: Likeness detection officially launched (October 2025)
UN News: Why women can’t get protection from AI deepfake abuse
UN Women: When justice fails — deepfake abuse explainer
Tubefilter: YouTube enters next phase of deepfake crackdown
CNBC: YouTube’s deepfake tool alarming experts and creators