Hollywood is no stranger to artificial intelligence, or AI. Filmmakers have relied on AI for decades to enhance and accelerate their audiovisual productions. However, recent advances in CGI, VFX, and AI technology have combined to produce hyper-realistic, AI-generated digital humans that are both wowing audiences and alarming performers across the entertainment industry. AI has become a major sticking point in the stalled SAG-AFTRA negotiations, and celebrities like Tom Hanks find themselves battling a growing flurry of deepfakes they neither created nor authorized. Using AI to duplicate the voice or likeness of actors and musicians is testing the traditional boundaries of copyright and right of publicity law.

The Technology

What is a deepfake? A “deepfake” is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” The term originated in 2017 when a Reddit user named “deepfakes” created a subreddit called r/deepfakes, where users posted pornographic videos starring famous celebrities whose faces were swapped in without their consent.

  • GANs. “Deepfake” denotes both the “deep learning” AI techniques used and the “fake” nature of the content produced. More specifically, deepfake technology relies on what are called generative adversarial networks (GANs). First introduced in 2014, GANs consist of two neural networks: a generator and a discriminator. The generator produces synthetic data, while the discriminator tries to distinguish that synthetic data from real data. The two networks compete, each improving in response to the other, until the generator’s synthetic output closely resembles authentic data.
  • Dubbing. AI is already disrupting the way in which we dub audio and video into different languages. With advances in natural language processing and machine learning algorithms, AI-powered translation has moved from its earlier text-to-speech version to today’s speech-to-speech capabilities. David Beckham only needed to record his malaria PSA once, in English. New AI tools were able to not only quickly dub his message into nine additional languages but also to manipulate his mouth movements for a more authentic-looking lip sync.
  • Aging and de-aging. AI can not only generate a near-perfect digital double of what you look like today; it can rummage through large archives full of images and videos of your younger self and generate a super-realistic digital twin of a younger you. AI has pushed de-aging technology far beyond the hair and make-up department. When Martin Scorsese needed to de-age three of the most legendary stars in show business—Joe Pesci, Robert De Niro, and Al Pacino—in The Irishman, he wanted to shoot the way he always does and avoid having them wear headgear or tracking dots. Powered by AI, the de-aging system he used catalogued and referenced thousands of frames from earlier movies, like Goodfellas and Casino, to help match the current frames with earlier video actually performed by the actors themselves.
  • Voice cloning. Voice cloning is the “creation of an artificial simulation of a person’s voice using artificial intelligence technology.” The first voice cloning system appeared back in 1998, but only in recent years has the technology advanced enough to capture speech patterns, accents, inflection, and tone based on samples as short as three seconds. However, while fans welcomed hearing Val Kilmer’s revived voice in Top Gun: Maverick, public reaction was mixed when a documentary released after Anthony Bourdain’s death contained three lines of dialogue never uttered by him when he was alive.
  • Music cloning. Advances in voice cloning technology are generating a host of vocal deepfakes that sound a lot like some of our favorite musicians. The viral sensation “Heart On My Sleeve” shook the music industry earlier this year when it turned out the sound-alike vocals of The Weeknd and Drake were generated by AI. Fans and amateur musicians use stem separation tools to isolate their favorite vocals, run those vocals through an open-source voice cloning system, and layer that cloned voice into their favorite song, which might even be one they wrote themselves.
  • Digital humans. For SAG-AFTRA performers, AI represents an “existential threat” to their livelihood, especially in the case of background performers who could be scanned once, for one day’s pay, and have their digital replicas used in perpetuity on any project, all without them ever having a say or receiving a dime. On the other hand, some actors are taking steps to capitalize on this AI watershed moment. Why not have Jen AI (not to be confused with the real Jennifer Lopez) invite your team aboard a Virgin Voyages cruise, or send Messi Messages to your friends? Big-screen and small-screen celebrities and influencers are making time for their 3D photogrammetry scans, collaborating with “digital human” companies, and tasking their AI digital twins to do some of the hustling for them.
  • Face swapping. The OG of deepfakes—face swapping—made it big in the pornography industry, going on to amuse and alarm fans with any number of hilarious and horrifying celebrity face swaps. Movie fans in China went crazy over a face swapping app called Zao that let them replace celebrity faces with their own in their favorite movie scenes.
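For readers curious how the generator and discriminator described above actually interact, the adversarial loop can be sketched in a few dozen lines. This is a toy illustration on one-dimensional data, not production deepfake code: the “networks” are single-parameter linear models, the “real” data is just numbers clustered around 4.0, and the learning rates and step counts are illustrative assumptions.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples clustered around 4.0 (a stand-in for genuine footage).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator: turns random noise z into a synthetic sample, x = w*z + b.
w, b = 1.0, 0.0
# Discriminator: estimates P(sample is real) = sigmoid(a*x + c).
a, c = 0.0, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 32

for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    ga = gc = 0.0
    for _ in range(batch):
        xr = real_sample()
        sr = sigmoid(a * xr + c)
        ga += (1 - sr) * xr          # gradient of log D(real) w.r.t. a
        gc += (1 - sr)
        z = random.gauss(0, 1)
        xf = w * z + b
        sf = sigmoid(a * xf + c)
        ga += -sf * xf               # gradient of log(1 - D(fake)) w.r.t. a
        gc += -sf
    a += lr_d * ga / batch
    c += lr_d * gc / batch

    # --- Generator update: push D(fake) toward 1 (fool the discriminator) ---
    gw = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        xf = w * z + b
        sf = sigmoid(a * xf + c)
        gw += (1 - sf) * a * z       # gradient of log D(fake) w.r.t. w
        gb += (1 - sf) * a           # gradient of log D(fake) w.r.t. b
    w += lr_g * gw / batch
    b += lr_g * gb / batch

# After the contest, the generator's output should sit near the real data.
fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 2))
```

The same dynamic, scaled up to deep convolutional networks and millions of face images rather than two parameters and a bell curve, is what lets deepfake systems produce synthetic faces the discriminator, and eventually the human eye, cannot tell from real ones.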

The Law

The unauthorized use of AI to replicate a performer’s likeness or mimic an artist’s style can not only deprive them of the appropriate remuneration for their work and talent but also irreparably damage their reputation, brand, and future earning potential. However, the protections traditionally afforded to artists and musicians under copyright and right of publicity law may not stretch to every aspect of these AI-generated digital humans and their human originals.

  • Copyright Law. Copyright protects original works of authorship and secures the exclusive rights for creators to copy, display, perform, distribute, and create derivatives of their copyrighted works. However, while copyright extends protection to the creator of the copyrighted work (e.g., the journalist who broke the story or the paparazzo who took the photo), it does not cover the subject of that work (e.g., the celebrity featured in the story or photo).
    • Names. Copyright does not protect names, titles, slogans, ideas, concepts, systems, or methods. Under the bedrock copyright principle known as the idea-expression dichotomy, ideas are not protectable; only the expressions of those ideas, when fixed in a tangible medium, are copyrightable.
    • Faces. Plastic surgery aside, your face is a natural phenomenon and “human authorship is an essential part of a valid copyright claim.” So, while your face is not copyrightable, the expression of your face fixed in a hand-painted portrait or photo portrait might be. However, a near-perfect AI-generated digital replica of your face might not have that “minimal spark of creativity” required for copyright protection. The CEO of Metaphysic begs to differ, becoming the first person to submit an AI-generated likeness of his face for copyright registration with the U.S. Copyright Office.
    • Voice. Your voice cannot be copyrighted. Vocalists can certainly register copyrights in their musical compositions, sound recordings, and other performances, but copyright law has yet to extend to the tone, timbre, or style of any given vocalist. Voice, if protected at all, tends to be captured under state right of publicity laws.
    • Fair Use. Fair use is a legal doctrine that promotes freedom of expression by allowing for the unlicensed use of copyrighted works for educational and other noncommercial purposes and for certain “transformative” uses. Authors and artists claim, including in a number of class action lawsuits filed earlier this year, that ingesting their copyrighted works to train AI amounts to “systemic theft on a massive scale.” AI companies argue that copyright law does not protect “facts or the syntactical, structural, and linguistic information” extracted from the copyrighted works, copying copyrighted works to train AI constitutes fair use, and using AI to create new expressions is surely transformative and not an unauthorized derivative work.
  • Right of publicity. The right of publicity is “an intellectual property right that protects against the misappropriation of a person’s name, likeness, or other indicia of personal identity—such as nickname, pseudonym, voice, signature, likeness, or photograph—for commercial benefit.” Unlike copyright, trademark, and patent law, right of publicity is governed not by federal law, but by a patchwork of state laws. More than 30 states recognize a right of publicity, 25 by way of statute.
    • NIL. Name, image, and likeness (NIL) rights help actors and athletes capitalize on the value of their celebrity in the form of sponsorships, endorsements, social media marketing, and personal appearances. NIL rights vary from state to state and often require the rights holder to establish that their name, image, voice, or likeness is recognizable and has commercial value.
    • Voice. While Bette Midler and Tom Waits were able to stop the use of sound-alikes of their voices in commercials, in the case of AI-generated vocals, courts may be reluctant to extend right of publicity protection if the voices are not sufficiently distinctive or if the use is noncommercial or could be viewed as transformative.
    • Style. In one of several class action lawsuits brought against generative AI companies earlier this year, a group of artists claim that the scraping of billions of images to train AI amounts to copyright infringement, and the resulting AI-generated works constitute unauthorized derivative works. Of particular note is the plaintiffs’ claim that by invoking the names of artists in “in the style of” prompts, the defendants violated their right of publicity. However, neither copyright law nor right of publicity law appears to protect the elusive attribute of a person’s style.
    • Postmortem rights. The right of publicity is unique among intellectual property (IP) rights in that it has its roots in the individual right to privacy under state law. Accordingly, while the right of publicity can be licensed during the rights holder’s lifetime like any other property right, in some states, the right to exploit a person’s name, image, or likeness does not survive the death of the personality involved and is not transferable or descendible to their heirs. New York became the first state to recognize a postmortem right of publicity applicable to “digital replicas” of dead performers.
    • Platforms. While the right of publicity is often described as an IP right, it diverges from IP when it comes to platform liability for user-generated content (UGC). Section 230 of the Communications Decency Act shields online platforms from liability as the “speaker” or “publisher” of UGC, with an important exception for IP infringement claims. If the UGC contains unauthorized copyrighted materials, the online platform is incentivized to take that content down to avoid a copyright claim by the creator of that content because Section 230 immunity would not apply. However, if the UGC contains unauthorized NIL materials, courts have differed on whether the exception for IP infringement claims extends to right of publicity claims. Accordingly, it could prove harder to get a social media platform to take down a deepfake that looks like you than to take down a painting that looks a lot like one you painted.

The Future

Against a growing chorus of authors, artists, and musicians demanding consent, credit, and compensation for AI’s use of their name, image, likeness, and creative works, policymakers are looking for answers. Deepfakes and digital humans do not fit neatly under federal copyright law or state right of publicity laws, and some advocates are pushing for new regulations that are specific to AI.

  • Federal right of publicity. With the rise of deepfakes, calls for a federal right of publicity have gotten louder in recent years. However, for free speech advocates, a broad federal right of publicity could stifle creativity and innovation, and for copyright traditionalists, a federal right of publicity could topple the delicate balance provided under copyright law.
  • NO FAKES Act. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act aims to “protect the voice and visual likeness of all individuals from unauthorized recreations from generative artificial intelligence” and attaches liability to any individual, company, or platform that produces or hosts a digital replica of an individual without the subject’s consent. Importantly, this proposal, as well as an earlier proposal Adobe had been floating called the Federal Anti-Impersonation Right (FAIR) Act, does not seek to overhaul right of publicity law across all 50 states, but rather targets the specific harms that arise from AI’s ability to generate “nearly indistinguishable digital replicas” of a “person’s voice or visual likeness.”
  • EU AI Act. In what would be the world’s first comprehensive AI regulation, the European Union Artificial Intelligence Act is expected to become law later this year and to go into force in 2025. The AI Act attaches different sets of regulations to AI applications based on the level of risk they pose to users. High-risk applications that affect safety or fundamental rights would require approval before going to market and testing throughout their life cycle. Generative AI applications would need to comply with certain transparency requirements. Limited-risk applications would have to comply with minimal transparency requirements that allow users to make informed decisions about whether to continue using them. Finally, AI systems that engage in cognitive behavior manipulation, social scoring, or real-time biometrics are classified as an unacceptable risk and would be banned.
  • Curated data sets. While we wait for courts to set the parameters of copyright, fair use, and NIL rights, the practice of training models on unfiltered data scraped from the open internet will likely fall away as AI system providers and users look to improve or secure their output by training on curated and proprietary data sets.


AI has become a major game changer in the entertainment industry, transforming how content is created, produced, distributed, and monetized. With class action lawsuits pending, the continued SAG-AFTRA strike, and competing approaches to AI regulation, the future of AI-generated digital doubles and the rights of their human subjects hangs in the balance.

Follow us on social media @PerkinsCoieLLP, and if you have any questions or comments, contact us here. We invite you to learn more about our Digital Media & Entertainment, Gaming & Sports industry group and check out our podcast: Innovation Unlocked: The Future of Entertainment.

Meeka Bondy

Meeka Bondy’s practice spans the content lifecycle, from the ways that such innovations as AI, AR, VR, and MR influence content creation and development, through to the impact of emerging platforms, networks, devices and apps on content acquisition, licensing and distribution. Serving as a strategic business partner to clients at the intersection of media and technology, she draws on nearly 20 years of executive experience guiding entrepreneurial ventures and innovative transactions at global media and entertainment companies.