Summary: As AI technology advances, the use of people’s faces and voices in AI-generated content raises complex legal and cultural questions. While copyright laws don’t fully cover these issues, states like California and Tennessee have begun expanding protections. OpenAI’s launch of Sora, an AI video platform, has intensified debates over likeness rights, leading to calls for nationwide legislation like the NO FAKES Act. Meanwhile, social norms around AI-generated likenesses are still evolving, posing new challenges for creators, platforms, and individuals alike.

A New Era of AI and Likeness Rights

Welcome to The Stepback, a weekly newsletter that breaks down one essential story from the tech world. This week, we’re diving into the emerging legal and cultural battles surrounding AI’s use of people’s faces and voices.

The Rise of AI-Generated Deepfakes

Back in 2023, an AI-generated song called Heart on My Sleeve mimicked Drake's voice so convincingly that it drew widespread attention. Streaming platforms removed it on copyright technicalities, but the deeper issue was likeness rights: AI's ability to imitate a person's distinctive voice or image without directly copying any protected work.

Legal Challenges and State-Level Protections

Unlike copyright, which is governed by federal laws like the Digital Millennium Copyright Act, likeness rights are governed by a patchwork of state laws that weren't designed with AI in mind. In 2024, Tennessee (with its ELVIS Act) and California, both home to major media industries, passed laws expanding protections against unauthorized digital replicas of entertainers.

OpenAI’s Sora and the Challenges of Likeness Policies

Last month, OpenAI launched Sora, an AI video generation platform designed to capture and remix real people’s likenesses. This opened the floodgates to realistic deepfakes, including unauthorized ones. OpenAI has implemented likeness policies and guardrails, but challenges remain. For example, after complaints from Martin Luther King Jr.’s estate about disrespectful depictions, OpenAI revised its policies on historical figures. Similarly, unauthorized celebrity likenesses led to strengthened safeguards following concerns from SAG-AFTRA.

Even authorized users have expressed discomfort, especially women, who found some AI-generated content fetishizing or offensive. OpenAI CEO Sam Altman acknowledged that people might have mixed feelings about how their likenesses are used, even with permission.

The Cultural Impact and Political Use of AI Videos

AI-generated videos have become tools in political and social conflicts. For instance, President Donald Trump shared a video depicting a figure resembling a liberal influencer in a negative light, while New York City mayoral candidate Andrew Cuomo posted (and quickly deleted) a controversial AI video targeting his opponent. These examples show how AI deepfakes are seeping into both influencer drama and political discourse.

The NO FAKES Act: Nationwide Likeness Protection

Amid these developments, SAG-AFTRA has supported the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which aims to establish nationwide rights to control unauthorized digital replicas of living or deceased individuals. The bill would also hold online platforms liable if they knowingly allow such content.

Balancing Free Speech and Likeness Rights

The NO FAKES Act has faced criticism from free speech advocates like the Electronic Frontier Foundation (EFF), which warns it could lead to overbroad content filtering and unintended censorship. Although the bill includes exceptions for parody, satire, and commentary, these protections may not be sufficient for those unable to afford legal battles.

The Future of Likeness Laws and Social Norms

While federal legislation remains uncertain, especially amid the ongoing government shutdown and efforts to preempt state AI regulations, platforms like YouTube are taking steps to let creators remove unauthorized content that uses their likeness. As AI-generated videos become more prevalent, society is still working out when and how it's appropriate to create and share them.

We’re entering a new world where generating videos of almost anyone doing almost anything is easy—but deciding when it should be done is a question still up for debate.

By Manish Singh Manithia

Manish Singh is a Data Scientist and technology analyst with hands-on experience in AI and emerging technologies. He is trusted for making complex tech topics simple, reliable, and useful for readers. His work focuses on AI, digital policy, and the innovations shaping our future.
