Artificial Intelligence (AI)-generated robocalls may trick some consumers into thinking they are being called by a human being, but the Federal Communications Commission (FCC) clarified in a recent AI Declaratory Ruling that it will not be fooled. Moving forward, all AI-generated robocalls will be treated as artificial or prerecorded voice calls for purposes of the Telephone Consumer Protection Act (TCPA) and will require a called party’s prior express consent. The AI Declaratory Ruling reflects a first step by the FCC in crafting a new record on AI’s implications for consumers’ rights under the TCPA. That record began last November when the FCC released an AI-focused Notice of Inquiry (NOI), which sought industry and stakeholder comments on the potential benefits and risks of AI for consumers. By confirming that AI technologies used in robocalls are artificial or prerecorded voice calls under the TCPA, the FCC hopes to stem the rapid proliferation of AI-generated robocall scams, including “deepfake” or voice-cloning scams that solicit money by mimicking the voices of popular celebrities or even family members.
 
Admittedly, the FCC’s conclusion that AI-generated voice calls are “artificial” seems, at face value, axiomatic. But prior to the AI Declaratory Ruling, there was at least a colorable argument that an AI-generated, human-sounding voice capable of engaging in a live, interactive conversation qualified for the TCPA’s longstanding prior-consent exception for live calls from human beings. That theory has been shut down, at least as to current AI technologies. Continue Reading FCC Declares AI-Generated Robocalls Unlawful

Safety risk assessments are becoming a preferred regulatory tool around the world. Online safety laws in Australia, Ireland, the United Kingdom, and the United States will require a range of providers to evaluate the safety and user-generated content risks associated with their online services.

While the specific assessment requirements vary across jurisdictions, the common thread is that providers will need to establish routine processes to assess, document, and mitigate safety risks resulting from user-generated content and product design. This Update offers practical steps for providers looking to develop a consolidated assessment process that can be easily adapted to meet the needs of laws around the world. Continue Reading Online Safety Risk Assessments Have Arrived: Five Steps for Building a Globally Adaptable Process


Earlier this January, I was an industry attendee and speaker at Digital Hollywood at CES 2024, a one-day, in-person conference that kicks off the Consumer Electronics Show in Las Vegas. The event showcased the opportunities and challenges facing companies across the media, entertainment, and technology landscape, covering the future of TV, streaming, AI, XR, and advertising. Here are my key takeaways from this year’s 10 sessions. Continue Reading Notes from the Field: Digital Hollywood at CES 2024

The last few months have seen a flurry of activity in cases involving artificial intelligence (AI), including some of the first major rulings involving generative AI. 

Andersen et al. v. Stability AI Ltd.

As we have previously discussed, this case arose in January 2023, when a collective of artists filed a class action lawsuit involving three AI-powered image generation tools that produce images in response to text inputs: Stable Diffusion (developed by Stability AI), Midjourney (developed by Midjourney), and DreamUp (developed by DeviantArt). The plaintiffs asserted that the models powering these tools were trained using copyrighted images scraped from the internet (including their copyrighted works) without consent. The defendants filed motions to dismiss, and the U.S. District Court for the Northern District of California recently issued a ruling on these motions. The court dismissed most of the plaintiffs’ claims, with only one plaintiff’s direct copyright infringement claim against Stability AI surviving. The court granted leave to amend the complaint on most counts, and the plaintiffs have since filed an amended complaint. Continue Reading Recent Rulings in AI Copyright Lawsuits Shed Some Light, but Leave Many Questions

The White House recently issued its most extensive policy directive yet concerning the development and use of artificial intelligence (AI) through a 100-plus-page Executive Order (EO) titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and accompanying “Fact Sheet” summary.

Following in the footsteps of last year’s Blueprint for an AI Bill of Rights and updates to the National Artificial Intelligence Research and Development Strategic Plan published earlier this year, the EO represents the most significant step yet from the Biden administration regarding AI. Like these previous efforts, the EO acknowledges both the potential and the challenges associated with AI while setting a policy framework aimed at the safe and responsible use of the technology, with implications for a wide variety of companies. The EO also signals the government’s intention to use its purchasing power to advance Responsible AI and other initiatives, with significance for government contractors. Continue Reading White House Issues Comprehensive Executive Order on Artificial Intelligence

Sometimes the best learning experiences are local. After a quick subway ride on the 2 to Borough Hall, I walked into Brooklyn Law School (BLS) for its third annual Sports Law Symposium, presented each year by the Brooklyn Entertainment Sports Law Students and Intellectual Property Law Association. As a speaker and attendee, I was impressed by the substance and caliber of the completely student-organized program. Here are my takeaways from this year’s symposium, Sports Tech: A Sports Lawyer’s Playbook. Continue Reading Notes from the Field: Sports Tech: A Sports Lawyer’s Playbook, Brooklyn Law School Third Annual Sports Law Symposium

With all of the hubbub surrounding the growing wave of generative artificial intelligence (AI) lawsuits, a recent court decision involving a generative AI-powered app has received surprisingly little attention, despite addressing issues that will be relevant in other, higher-profile AI litigation.

The case, Kyland Young v. NeoCortext, Inc., involved a photo-editing app, called Reface, that uses generative AI technology to allow users to manipulate photos and videos, including to swap faces with celebrities within photos and videos. A celebrity sued, and, in rejecting the app developer’s motion to dismiss, the U.S. District Court for the Central District of California held that the developer’s use of generative AI to superimpose user faces onto celebrity images could violate California’s right of publicity law. While this case is ongoing, Young illustrates the potential liability companies face when developing and using generative AI based on images and videos of celebrities. Continue Reading Reface/Off? Animating the Right of Publicity in the Dawn of Generative AI

Perkins Coie presented at Digital Hollywood’s “AI Bill of Rights, Ethics & the Law” Summit, a one-day virtual conference that seeks to advance the conversation around the establishment of a national regulatory policy for artificial intelligence (AI). The October 19 event highlighted the tension between efforts to unleash a once-in-a-generation burst of innovation and the need to safeguard against the dangers and risks inherent in complex and still-developing technologies.

Over the course of the summit, panelists discussed a wide range of topics, including government regulation versus industry self-regulation, generative AI and intellectual property (IP) rights, human interaction with AI, and balancing the benefits and risks of deepfakes, among others.

Marc Martin moderated the panel “US and EU Regulation of AI: What To Expect and How To Prepare.” The panelists included Cass Matthews from Microsoft’s Office of Responsible AI and Benoit Barre, a partner at Le16 Law in Paris. Continue Reading Notes From the Field: AI Virtual Summit: New AI Regulation in the EU and US: What To Expect and How To Prepare

The generative AI revolution has arrived. Will copyright law snuff it out?

Despite all the excitement surrounding generative AI tools, a cloud darkens the horizon. These tools need to be trained on massive amounts of ingested content and, according to press reports, this content is often scraped without authorization from third-party websites, raising significant copyright law issues. Continue Reading Known Unknowns: Key Unanswered Copyright Questions Raised by Generative AI

Hollywood is no stranger to artificial intelligence, or AI. Filmmakers have relied on AI for decades to enhance and accelerate their audiovisual productions. However, recent advances in CGI, VFX, and AI technology have combined to produce hyper-realistic, AI-generated digital humans that are both wowing audiences and alarming performers across the entertainment industry. AI has become a major sticking point in the stalled SAG-AFTRA negotiations, and celebrities like Tom Hanks find themselves battling a growing flurry of deepfakes they neither created nor authorized. Using AI to duplicate the voice or likeness of actors and musicians is testing the traditional boundaries of copyright and right of publicity law. Continue Reading Deepfakes, Digital Humans, and the Future of Entertainment in the Age of AI