Summary: As AI technology advances, the ability to generate realistic videos and audio of people, living or deceased, raises complex legal and ethical questions. While copyright law offers some protections, likeness laws vary by state and have struggled to keep pace with the technology. OpenAI’s recent launch of Sora, an AI video platform, has intensified debates about unauthorized use of faces and voices, prompting new legislative efforts like the NO FAKES Act and evolving platform policies. Meanwhile, social norms around the creation and use of AI-generated likenesses remain unsettled.

A New Era of AI-Generated Likeness

In 2023, an AI-generated song called “Heart on My Sleeve” mimicked Drake’s style so closely that it sparked a new cultural and legal debate. Streaming services removed the track on a copyright technicality, but the creator hadn’t copied any existing recording directly; the song was a near-perfect stylistic imitation. This raised questions about how AI services should handle people’s faces and voices, and how platforms should regulate such content.

The Legal Landscape: Likeness Law vs. Copyright

Unlike copyright, which is governed by federal laws like the Digital Millennium Copyright Act and international treaties, there is no overarching federal law protecting an individual’s likeness. Instead, a patchwork of state laws applies, many of which were not designed with AI in mind. In 2024, states like Tennessee and California—both with significant media industries—passed laws expanding protections against unauthorized digital replicas of entertainers.

OpenAI’s Sora and the Challenges of Deepfakes

Last month, OpenAI launched Sora, an AI video generation platform built around capturing and remixing real people’s likenesses. The launch opened the floodgates to highly realistic deepfakes, many of them created without the subjects’ consent. OpenAI implemented likeness policies to address these concerns, but challenges remain. For example, after complaints from Martin Luther King Jr.’s estate about disrespectful depictions, OpenAI revised its policies on historical figures. Similarly, unauthorized celebrity likenesses, such as Bryan Cranston appearing in videos with Michael Jackson, led to strengthened guardrails following pressure from SAG-AFTRA.

Even users who authorized use of their likenesses have expressed discomfort, women in particular, over the creation of fetishized content. OpenAI CEO Sam Altman acknowledged that people might have mixed feelings when authorized likenesses are used in ways they find offensive or problematic.

The Rise of AI in Politics and Social Media

AI-generated videos have also become tools in political and social media conflicts. For instance, former President Donald Trump shared a video depicting a figure resembling liberal influencer Harry Sisson in a derogatory manner. New York City mayoral candidate Andrew Cuomo posted (and quickly deleted) a video showing his opponent in an unflattering light. As noted by Kat Tenbarge in Spitfire News, AI videos are increasingly fueling influencer drama.

Legal Responses and the NO FAKES Act

While celebrities like Scarlett Johansson have taken legal action over unauthorized use of their likeness, few cases have gone to court, in part because the legal landscape remains so unsettled. SAG-AFTRA has supported the NO FAKES Act, which aims to establish a nationwide right to control the use of highly realistic digital replicas of living or deceased individuals. The act would also hold online services liable if they knowingly host unauthorized digital replicas.

However, the Electronic Frontier Foundation (EFF) criticizes the NO FAKES Act as a potential “new censorship infrastructure” that could lead to overbroad content takedowns and suppress free speech. Although the bill includes exceptions for parody, satire, and commentary, these protections may not be accessible to those unable to afford litigation.

Despite legislative challenges, platforms are taking steps. YouTube recently announced that creators in its Partner Program can search for unauthorized uploads using their likeness and request removal, expanding existing protections against content that mimics an artist’s unique voice.

Evolving Social Norms Around AI Likeness

As AI makes it easier than ever to generate videos of almost anyone doing almost anything, society is grappling with when and how such content should be created and shared. Legal frameworks are still developing, and social expectations remain fluid, making this a complex and evolving frontier.

By Manish Singh Manithia

Manish Singh is a Data Scientist and technology analyst with hands-on experience in AI and emerging technologies. He is trusted for making complex tech topics simple, reliable, and useful for readers. His work focuses on AI, digital policy, and the innovations shaping our future.
