Summary: The rapid advancement of AI technology, especially in generating realistic videos and audio of real people, is sparking a new wave of legal and cultural debates. Unlike copyright law, which is well-established, laws protecting individuals’ likenesses are fragmented and outdated. AI platforms like OpenAI’s Sora have pushed these issues into the spotlight, raising questions about consent, misuse, and regulation. Efforts like the NO FAKES Act aim to create nationwide protections, but concerns about free speech and enforcement remain. As AI-generated content becomes more common, society is still figuring out the rules and ethics around using someone’s face or voice.
A New Era of AI-Generated Likeness
Back in 2023, an AI-generated song called “Heart on My Sleeve” mimicked Drake’s voice so closely that it sparked a new legal and cultural battle. Streaming services removed the track on copyright grounds, but the creator hadn’t copied any original recording; the vocals were simply a very close imitation. The episode highlighted the limits of copyright law and shifted attention to likeness law, which governs unauthorized use of a person’s image or voice.
The Legal Landscape: Likeness Law vs. Copyright
Unlike copyright, which is governed by federal statutes like the Digital Millennium Copyright Act and by international treaties, likeness law is a patchwork of state regulations. These laws were originally designed to protect celebrities from unauthorized commercial exploitation, such as fake endorsements, and weren’t written with AI in mind. Recently, states like Tennessee (with its 2024 ELVIS Act) and California have expanded protections against unauthorized digital replicas of entertainers, reflecting growing concern over AI-generated likenesses.
OpenAI’s Sora and the Challenges of AI Video Generation
In late 2025, OpenAI launched the Sora app, a social video platform built around capturing and remixing real people’s likenesses. It opened the floodgates to realistic deepfakes, including some created without consent. OpenAI has policies meant to restrict unauthorized use, but users have found ways around those guardrails, prompting complaints from industry groups like SAG-AFTRA. Even authorized likenesses have raised concerns, especially when AI-generated videos portray people in offensive or fetishized ways.
The Rise of Deepfakes in Politics and Social Media
AI-generated videos have become a tool for political and social commentary, sometimes crossing into offensive or racist territory. President Donald Trump’s administration and various politicians have used AI videos in controversial ways, and influencers have been pulled into AI-fueled disputes as deepfakes stoke online conflicts. These developments underscore how difficult it is to regulate AI-generated likenesses in a fast-moving digital landscape.
Legal Responses and the NO FAKES Act
While celebrities like Scarlett Johansson have taken legal action over unauthorized likeness use, the broader legal framework is still evolving. The NO FAKES Act, supported by SAG-AFTRA and YouTube, seeks to establish nationwide rights to control the use of highly realistic digital replicas of living or deceased individuals. However, free speech advocates like the Electronic Frontier Foundation warn that the bill could lead to overbroad censorship and unintended takedowns, creating a “heckler’s veto” effect online.
Evolving Social Norms Around AI-Generated Content
Despite the legal uncertainties, platforms like YouTube are taking steps to empower creators to remove unauthorized content featuring their likenesses. As AI makes it easier than ever to generate videos of almost anyone doing almost anything, society is still grappling with when and how such content should be used. The rules and expectations around AI-generated likenesses are still being written, both legally and culturally.