As artificial intelligence (AI) technology becomes ubiquitous, news stories regarding the use (and abuse) of deepfakes—that is, AI-generated media used to impersonate real individuals—are increasingly common.

For example, in January, sexually explicit deepfakes of Taylor Swift proliferated on social media, prompting X (formerly Twitter) to temporarily lock all searches for the singer’s name on its platform to prevent user access to such deepfakes.

A high school in Westfield, New Jersey, recently found itself in the middle of a deepfake porn scandal when a student used AI to paste the faces of 30 female students onto pornographic images and then uploaded those images to a website.

In what is being called a record-breaking year for elections worldwide, deepfakes are being used in an effort to influence voting and election outcomes. For example, robocalls designed to sound like President Joe Biden urged New Hampshire voters not to cast their ballots in that state’s Democratic primary. The Federal Communications Commission swiftly declared the use of AI-generated voices in unsolicited robocalls illegal, and the League of Women Voters has since filed suit against the entities responsible for these robocalls under the Voting Rights Act, Telephone Consumer Protection Act, and New Hampshire state law.

Further, deepfakes impersonating Tom Hanks, Gayle King, and other celebrities are being used in advertisements and media without authorization, and, as we explored in a recent blog post, reality TV star Kyland Young has sued the developer of an AI-fueled app allowing users to swap their faces for Young’s face within photos and videos featuring Young. Deceased celebrities are also being targeted; for example, George Carlin’s estate recently settled a lawsuit over a one-hour comedy special featuring an AI-generated imitation of the comedian.

In the wake of these and other high-profile incidents, state and federal legislators are scrambling to address the unauthorized and nonconsensual use of deepfakes. Much of the current legislative activity regarding deepfakes focuses on specific sets of issues: (1) the nonconsensual dissemination of sexually explicit deepfakes; (2) the use of deepfakes to influence elections; and (3) the right of publicity as it relates to AI-generated digital replicas.

Laws Addressing Nonconsensual Sexually Explicit Deepfakes

While there is no U.S. federal legislation specifically addressing the nonconsensual dissemination of sexually explicit deepfakes, a minority of states have adopted such laws. However, following the Taylor Swift, Westfield High School, and other pornographic deepfake scandals (which almost always target women and girls), a flurry of proposed legislation at both the federal and state levels is taking aim at sexually explicit deepfakes.

  • State Laws. Although many states have implemented laws against the nonconsensual distribution or creation of sexually explicit deepfakes, the details of these laws can vary significantly from state to state.
    • Criminal Penalties. Many states have specifically criminalized the distribution of nonconsensual sexually explicit deepfakes. Several states, including Georgia, Hawaii, New York, and Virginia, did so by amending existing “revenge porn” laws to expressly include deepfakes. Other states, such as Florida, Minnesota, and Texas, created stand-alone statutes and amendments.
    • Civil Remedies. In conjunction with or independent of specific anti-deepfake criminal statutes, California, Florida, Illinois, and Minnesota have also implemented new civil remedies and private rights of action arising from the distribution of nonconsensual sexually explicit deepfakes.
    • How They Compare. Although these laws have many similarities, material variations exist from state to state on provisions such as (1) the definition of what qualifies as a nonconsensual sexually explicit deepfake; (2) the specific intent required for a violation; (3) exempted activities; (4) the class of offense and punishment; and (5) available remedies.
  • Federal Law. To date, no federal law exists to specifically target nonconsensual sexually explicit deepfakes. But efforts are underway in Congress to change this.
    • 2024 Senate Bill. The U.S. Senate is considering several proposals to address nonconsensual sexually explicit deepfakes. One such bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), was introduced on January 30, 2024. The bill proposes a civil remedy for “digital forgeries” that falsely appear to be authentic and that depict an identifiable individual, without that individual’s consent, in certain states of nudity or engaged in sexually explicit conduct or certain sexual scenarios.
    • 2023 House Bill. Another bill, the Preventing Deepfakes of Intimate Images Act, which was introduced in the U.S. House of Representatives on May 5, 2023, similarly includes a civil remedy but also imposes criminal liability for individuals who disclose or threaten to disclose certain nonconsensual sexually explicit deepfakes if such actions involve specific intent, actual knowledge, or reckless disregard toward the depicted individual.

Even in the absence of laws specifically targeting pornographic deepfakes, other laws are in place—including privacy, defamation, and copyright laws—that provide tools for combatting such deepfakes. However, advocates have asserted that such existing laws are limited in their ability to effectively protect victims of obscene deepfakes.

Laws Addressing Election-Specific Deepfakes

With the approaching U.S. presidential election, the rise of deceptive political deepfakes, and general concerns around the integrity of elections, several states have implemented or are proposing legislation to tackle the spread of misleading election-related deepfakes. No federal law targeting election-related deepfakes has been enacted to date, but several bills have been proposed in Congress to address these issues.

  • State Law. Some states have implemented laws targeting the use of deepfakes against candidates running for public office, with some restrictions applying during a certain time window before an election. California, Michigan, and Washington allow some exemptions or an affirmative defense to the use of deepfakes if certain disclaimers are displayed. However, other states, including Minnesota and Texas, do not have such disclaimer exemptions. The specific intent required for a violation and the liability that attaches also vary across states, with some laws attaching criminal liability while others only provide for civil or injunctive relief.
  • Proposed State Legislation. In addition, state legislators across the political spectrum have proposed a large number of election-specific deepfake bills to counter the proliferation of political deepfakes. Alaska, Arizona, Colorado, Florida, Hawaii, Idaho, Indiana, Kentucky, Massachusetts, Nebraska, New Hampshire, Oklahoma, South Dakota, Virginia, and Wyoming are all currently considering such bills.
  • Proposed Federal Legislation. Several bills have also been proposed in the U.S. Senate to specifically address election interference through the use of deepfakes. These include the Require the Exposure of AI-Led Political Advertisements Act (the REAL Political Advertisements Act) and the Protect Elections from Deceptive AI Act.

Although laws aimed at regulating political deepfakes may raise First Amendment issues, courts have yet to tackle such concerns in connection with the existing state laws targeting election-specific deepfakes—but those challenges are likely to come as such deepfakes become increasingly common. 

Right of Publicity Laws Addressing Deepfakes

Although much of the movement in deepfakes has centered specifically on deepfake porn and deepfake politics, some state and federal legislators have looked to expand right of publicity law to make clear that such rights extend to the use of unauthorized deepfakes more generally. The right of publicity is an intellectual property right that protects against the misappropriation of a person’s name, image, or likeness, as well as other indicia of identity such as nickname, pseudonym, voice, signature, or photographs, for commercial benefit. (For more on the right of publicity as it relates to deepfakes, see our prior blog post here.)

  • New York. In 2021, New York implemented a limited update to its right of publicity statute that, in addition to extending certain post-mortem rights to “deceased personalities” (e.g., celebrities and other personalities with commercial value), also provided “deceased performers” (e.g., actors, singers, dancers, and musicians) with post-mortem rights specific to unauthorized “digital replicas.” The amendment protects against the unauthorized use of the name, voice, signature, photograph, or likeness of deceased performers in scripted audiovisual works (as a fictional character), or in live performances of a musical work, if the use is likely to deceive the public into thinking it was authorized, unless there is a disclaimer.
  • Tennessee. Tennessee recently passed, with significant music industry support, the Ensuring Likeness Voice and Image Security Act (ELVIS Act), which significantly expands that state’s right of publicity law to protect against the unauthorized use of an individual’s voice, including from unauthorized use resulting from simulations of an individual’s voice.
    • The fact that Tennessee—a key center of the U.S. music industry—has taken the lead in extending publicity rights to expressly cover AI-generated voices highlights the priority the music business has placed on combatting AI-generated voices; it is less clear whether other sectors of the entertainment industry, such as film and television (both centered outside Tennessee), share the same level of concern.
  • Federal. The United States has no federal right of publicity law, but two related bills have been proposed to create a federal right of publicity for digital depictions of a person’s voice or likeness: (1) the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) and (2) the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act (NO AI FRAUD Act).
  • The Bills. The NO FAKES Act was introduced by a bipartisan group of U.S. senators on October 12, 2023. The NO AI FRAUD Act was based on the NO FAKES Act and was introduced by a bipartisan group within the U.S. House of Representatives on January 10, 2024. These bills would generally create a federal right of publicity for individuals as it relates to their voice and likeness, focused particularly on the unauthorized distribution of digital replicas or depictions of voice and likeness. In introducing these bills, federal legislators cited the unauthorized creation of “Heart on My Sleeve,” a song using AI-generated voice replicas of Drake and The Weeknd, as well as the unauthorized use of Tom Hanks’ image to sell dental plans. The NO AI FRAUD Act, in particular, and similar to Tennessee’s ELVIS Act, has gained significant support from the music industry.
  • First Amendment Concerns. Both of these federal bills have received significant pushback from several notable civil liberties organizations, such as the Electronic Frontier Foundation and the American Civil Liberties Union, which object to the broad nature of the federal publicity rights and the broad definition of voice and likeness as unconstitutional encroachments on free speech.
  • Section 230 Concerns. It should also be noted that both bills classify the right of publicity as an intellectual property right for purposes of Section 230 of the Communications Decency Act, which could threaten the safe harbor protections currently enjoyed by Internet service providers and online platforms and introduce new liability risks for unauthorized digital replicas that they unknowingly host or transmit.

Final Thoughts

With AI-generated deepfakes becoming more common, more realistic, and more harmful, legislators will likely feel increased pressure to adopt laws regulating deepfakes. Although the likelihood of a comprehensive, omnibus deepfakes law at either the federal or state level in the near future seems low, we are already seeing, as discussed above, the adoption of state laws more narrowly focused on curbing deepfakes that are sexually explicit, seek to disrupt elections, or misappropriate celebrity voices or personas. Whether, in the absence of federal deepfakes legislation, these piecemeal state laws will be effective in curtailing problematic deepfakes remains to be seen. 

Follow us on social media @PerkinsCoieLLP, and if you have any questions or comments, contact us here. We invite you to learn more about our Digital Media & Entertainment, Gaming & Sports industry group and check out our podcast: Innovation Unlocked: The Future of Entertainment.

Meeka Bondy

Meeka Bondy’s practice spans the content lifecycle, from the ways that such innovations as AI, AR, VR, and MR influence content creation and development, through to the impact of emerging platforms, networks, devices and apps on content acquisition, licensing and distribution. Serving as a strategic business partner to clients at the intersection of media and technology, she draws on nearly 20 years of executive experience guiding entrepreneurial ventures and innovative transactions at global media and entertainment companies.

John Delaney

John Delaney advises clients ranging from startups to Fortune 500 companies on licensing, intellectual property and technology-related matters. John routinely assists clients on matters relating to social media, mobile apps, cloud computing, AI, big data analytics, smart contracts, blockchain, AR, VR, and other emerging technologies. John’s experience also includes negotiating music and other media deals, software and content development agreements and licenses, complex outsourcing arrangements, joint ventures, and similar transactions.

Jeff Ong

Jeff has experience advising emerging companies and venture capital clients on intellectual property (IP) and data privacy matters in connection with financings, mergers and acquisitions, and general commercial undertakings. He has experience drafting, revising, and reviewing terms of use, privacy policies, and service agreements, as well as IP assignment and licensing agreements.