Summary: As AI technology advances, the use of people’s faces and voices in AI-generated content raises complex legal and ethical questions. With no federal law governing likeness rights, states are stepping in, and companies like OpenAI are creating their own policies. Meanwhile, proposed legislation like the NO FAKES Act aims to protect individuals from unauthorized digital replicas, sparking debate over free speech and censorship. This evolving landscape challenges us to rethink how we regulate and respect personal likeness in the digital age.
A New Era of AI and Likeness
Imagine hearing a song called “Heart on My Sleeve” and thinking it sounds just like Drake. But it’s not actually him—it’s an AI-generated track mimicking his voice. This was the reality in 2023, marking the start of a new legal and cultural battle over how AI can use people’s faces and voices, and how platforms should respond.
The Rise of AI-Generated Deepfakes
Back then, the AI-generated Drake song was a novelty, but it raised clear issues. Musicians were unsettled by the close imitation, and streaming services pulled the track on copyright grounds. Yet the creator hadn't actually copied any recording; the song was simply a very convincing imitation, which made copyright an awkward fit. Attention shifted instead to likeness law, traditionally used by celebrities to fight unauthorized endorsements or parodies. As deepfake audio and video became more common, likeness law emerged as one of the few legal tools available to regulate them.
Legal Challenges and State Laws
Unlike copyright, which is governed by federal laws like the Digital Millennium Copyright Act and international treaties, there’s no federal law specifically about likeness. Instead, there’s a patchwork of state laws, none originally designed with AI in mind. Recently, states like Tennessee and California—both with significant media industries—have expanded protections against unauthorized replicas of entertainers.
OpenAI’s Sora and the Likeness Debate
Despite these legal efforts, technology often moves faster than the law. In late 2025, OpenAI launched the Sora app, an AI video generation platform built around capturing and remixing real people's likenesses. The launch produced a surge of realistic deepfakes, many created without the subject's consent. OpenAI and other companies have responded by implementing their own likeness policies, which could become the internet's de facto rules of the road in the absence of clear laws.
OpenAI CEO Sam Altman defended Sora, arguing that its guardrails were, if anything, "way too restrictive." Still, the platform drew complaints about what it allowed. It initially placed few restrictions on depictions of historical figures, then reversed course after Martin Luther King Jr.'s estate protested disrespectful videos of him. And although Sora barred unauthorized use of living people's likenesses, users found ways around the rules, generating videos of celebrities like Bryan Cranston in scenarios they had never agreed to. Complaints from SAG-AFTRA followed, and OpenAI tightened the guardrails further.
Even people who had authorized the use of their likenesses sometimes felt uneasy about the results, especially women who found their images used in fetish content. Altman acknowledged that people might have "in-between" feelings about authorized likenesses, such as not wanting their public cameos to say offensive or problematic things.
The NO FAKES Act and Its Controversy
Amid these challenges, SAG-AFTRA threw its support behind the NO FAKES Act, a bill that would establish a nationwide right to control the use of highly realistic digital replicas of living or deceased individuals. The act would also hold online services liable if they knowingly host unauthorized digital replicas.
However, the bill has faced criticism from free speech advocates like the Electronic Frontier Foundation (EFF), which warns it could create a “new censorship infrastructure” forcing platforms to broadly filter content, leading to unintentional takedowns and a “heckler’s veto” online. While the bill includes exceptions for parody, satire, and commentary, the EFF notes these protections may not help those who cannot afford costly legal battles.
Given the current political climate, with a government shutdown and efforts to preempt state AI regulation, the NO FAKES Act's future remains uncertain. Still, likeness rules are gradually taking shape. YouTube, for example, recently announced that creators in its Partner Program can search for unauthorized uploads that use their likeness and request removal, expanding on existing policies that protect artists' unique voices.
Evolving Social Norms and Future Outlook
As AI makes it easier to generate videos of almost anyone doing almost anything, society is still figuring out when and how such content should be created and shared. Legal frameworks are catching up, but social norms remain in flux. The question remains: just because we can create these digital replicas, should we?