Document Type

Article

Publication Date

2-14-2025

Abstract

With the rise of generative artificial intelligence (AI), there has been an influx of “voice clones”—deep-learning algorithms that create synthetic speech to realistically mimic human voices. Celebrities, and music artists in particular, have been subjected to the proliferation of AI voice clones on social media platforms like TikTok and streaming platforms such as Spotify. Although music featuring AI voice clones has amassed considerable popularity, this technology can be harmful and highly invasive to musicians whose livelihoods often depend on their distinct voices. While legal scholars have attempted to articulate various rights that could protect a person’s voice, individuals are largely left with minimal protection against AI voice clones and have few options for redress. Some legal scholars suggest a variety of tort actions that could be applied in this context; however, torts such as the right of publicity, defamation, and false light ultimately fall short. This Note argues that a patchwork approach is necessary to regulate and combat the harms of AI voice clones, including action at the state and federal levels, as well as self-regulation in the private sector by streaming platforms and musicians themselves. This approach, which incorporates input from all actors affected by AI voice clones, should balance promoting creativity and the continued development of AI with protecting individuals’ interests in how their voice and likeness are used by others.