Artificial Intelligence

The explosive growth of generative AI has been accompanied by a corresponding rise in contractual provisions addressing generative AI issues.

Website operators in particular are increasingly seeking to use their online terms of service to prohibit the use of content and information hosted on their sites to train AI systems. Disney, for example, recently updated its online Subscriber Agreement for its Disney+ service to clarify that content from the service may not be accessed, copied, or extracted “for the purposes of creating or developing any AI Tool.”

Continue Reading: Does Copyright Law Preempt Contractual Provisions Imposing AI-Related Usage Restrictions on Content?

In the fall of 2023, the Writers Guild of America (WGA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) each ratified new agreements amending and building upon their collective bargaining agreements with the Alliance of Motion Picture and Television Producers (AMPTP). The WGA, a union representing film and TV writers, and SAG-AFTRA, a union representing actors and performers, sought to protect their members from replacement by generative and non-generative artificial intelligence (AI). The negotiations followed months of strikes by both unions that effectively halted the making of movies and TV shows for much of 2023. The new agreements take somewhat different approaches, in part because of the nature of what each union is trying to protect: a member’s voice and likeness for SAG-AFTRA versus written content for the WGA. But both agreements contain provisions aimed at protecting the jobs and income of their members. This blog post provides an overview of the key AI provisions in both agreements and explains how they will apply to the writers, performers, and producers covered by these guild agreements.

Continue Reading: Generative AI in Movies and TV: How the 2023 SAG-AFTRA and WGA Contracts Address Generative AI

What a great opportunity to speak and learn about today’s hot topics in sports law at the New York University School of Law Sports Law Association’s 13th Annual Sports Law Colloquium on April 5, 2024. Like Brooklyn Law School’s third annual Sports Law Symposium, Sports Tech: A Sports Lawyer’s Playbook, NYU Law’s Colloquium covered the impact of artificial intelligence (AI) on sports, but it also delved into sportswashing and the future of college sports. Here are my takeaways from this year’s NYU Sports Law Colloquium.

Continue Reading: Notes from the Field: NYU Law, Sports Law Association, 13th Annual Sports Law Colloquium

As artificial intelligence (AI) technology becomes ubiquitous, news stories regarding the use (and abuse) of deepfakes—that is, AI-generated media used to impersonate real individuals—are increasingly common.

For example, in January, sexually explicit deepfakes of Taylor Swift proliferated on social media, prompting X (formerly Twitter) to temporarily lock all searches for the singer’s name on its platform to prevent user access to such deepfakes.

Continue Reading: AI-Generated Deepfakes and the Emerging Legal Landscape

2023 was a breakout year for generative artificial intelligence (AI), but it was a rough year for protecting the content generated using such technology. The U.S. Copyright Office issued several rulings last year on the question of when works generated using AI technology are protected under U.S. copyright law, and so far, applicants have not been able to convince the Copyright Office that the AI-generated components of their works are protectable.

Continue Reading: Human Authorship Requirement Continues To Pose Difficulties for AI-Generated Works

Welcome back to Today’s Most Disruptive Technologies! We turn from quantum computing to a spotlight on multimodal AI. Artificial intelligence (AI) continues to dominate the news and the markets, and while some of us are still mulling over existential questions of what it means to be human, AI is about to take yet another fast turn: AI is going multimodal. The next generation of AI will look to connect through the very real, and very human, five senses of sight, sound, touch, smell, and taste, not to mention a whole new set of modalities, from thermal radiation and haptics to brain waves and who knows what else.

Continue Reading: Today’s Most Disruptive Technologies: Spotlight on Multimodal AI

Artificial Intelligence (AI)-generated robocalls may trick some consumers into thinking they are being called by a human being, but the Federal Communications Commission clarified in a recent AI Declaratory Ruling that it will not be fooled. Moving forward, all AI-generated robocalls will be treated as artificial or prerecorded voice calls for purposes of the Telephone Consumer Protection Act (TCPA) and will require a called party’s prior express consent. The AI Declaratory Ruling reflects a first step by the FCC in crafting a new record on AI’s implications for consumers’ rights under the TCPA. That record began last November when the FCC released an AI-focused Notice of Inquiry (NOI), which sought industry and stakeholder comments on the potential benefits and risks of AI for consumers. By confirming that AI technologies used in robocalls are artificial or prerecorded voice calls under the TCPA, the FCC hopes to stem the rapid proliferation of AI-generated robocall scams, including popular deepfake or voice cloning scams that solicit money by mimicking the voices of popular celebrities or even family members.
 
Admittedly, the FCC’s conclusion that AI-generated voice calls are “artificial” seems, at face value, axiomatic. But prior to the AI Declaratory Ruling, there was at least a colorable argument that an AI-generated, human-sounding voice capable of engaging in a live, interactive conversation qualified for the TCPA’s longstanding exception to the prior-consent requirement for live calls from human beings. That theory has been shut down, at least as applied to current AI technologies.

Continue Reading: FCC Declares AI-Generated Robocalls Unlawful

In his prescient 1994 book, Copyright’s Highway, Professor Paul Goldstein of Stanford Law School popularized the term “the celestial jukebox” for his prediction of a future in which consumers could stream any music, film, TV show, or other entertainment work on demand over the Internet. Professor Goldstein’s foresight anticipated the rise of massive cloud streaming platforms like Facebook, Netflix, Spotify, and YouTube, well before their inception.

The celestial jukebox has been the governing metaphor for the media landscape’s transformation over the past two decades. With the recent explosive advances in generative AI technologies, however, we are on the cusp of a new era. It’s time to introduce a fresh metaphor that better captures the forthcoming wave of disruption in content consumption: the infinite loom.

Continue Reading: Will the Infinite Loom Displace the Celestial Jukebox?

The last few months have seen a flurry of activity in cases involving artificial intelligence (AI), including some of the first major rulings on generative AI.

Andersen et al. v. Stability AI Ltd.

As we have previously discussed, this case arose in January 2023, when a collective of artists filed a class action lawsuit involving three AI-powered image generation tools that produce images in response to text inputs: Stable Diffusion (developed by Stability AI), Midjourney (developed by Midjourney), and DreamUp (developed by DeviantArt). The plaintiffs asserted that the models powering these tools were trained using copyrighted images scraped from the internet (including their copyrighted works) without consent. The defendants filed motions to dismiss, and the U.S. District Court for the Northern District of California recently issued a ruling on these motions. The court dismissed most of the plaintiffs’ claims, with only one plaintiff’s direct copyright infringement claim against Stability AI surviving. The court granted leave to amend the complaint on most counts, and the plaintiffs have since filed an amended complaint.

Continue Reading: Recent Rulings in AI Copyright Lawsuits Shed Some Light, but Leave Many Questions

The White House recently issued its most extensive policy directive yet concerning the development and use of artificial intelligence (AI) through a 100-plus-page Executive Order (EO) titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” and accompanying “Fact Sheet” summary.

Following in the footsteps of last year’s Blueprint for an AI Bill of Rights and updates to the National Artificial Intelligence Research and Development Strategic Plan published earlier this year, the EO represents the most significant step yet from the Biden administration regarding AI. Like those previous efforts, the EO acknowledges both the potential and the challenges associated with AI while setting a policy framework aimed at the safe and responsible use of the technology, with implications for a wide variety of companies. The EO also signals the government’s intention to use its purchasing power to promote Responsible AI and other initiatives, a point of particular significance for government contractors.

Continue Reading: White House Issues Comprehensive Executive Order on Artificial Intelligence