The world’s biggest music company is now in the AI business. Last year, Universal Music Group (UMG), alongside labels including Warner Records and Sony Music Entertainment, sued two AI music startups for allegedly using their recordings to train text-to-music models without permission.
But last month, UMG announced a deal with one of the defendants, Udio, to create an AI music platform. Their joint press release offered assurances that the label will commit to “do what’s right by [UMG’s] artists”. However, one advocacy group, the Music Artists Coalition, responded with the statement: “We’ve seen this before – everyone talks about ‘partnership’, but artists end up on the sidelines with scraps.”
The lawsuit is one of dozens across US courts. As artists, publishers and studios argue that the use of their material in AI training is copyright infringement, judges are struggling to reconcile copyright law with a technology that undermines the very concept of authorship. For many, this is as much a question of justice as a legal one. In Andersen v Stability AI, one of the first class-action lawsuits over an AI image generator, artists allege that the use of their artwork to train AI models without credit, compensation or consent “violat[es] the rights of millions of artists”.
That creative workers bear the brunt of the AI boom is not in question – generative AI is already displacing creative labour. In January 2024, more than a third of illustrators who responded to a Society of Authors survey said they had lost income due to AI, and one study projects a 21% revenue loss for audiovisual creators by 2028.
In response, a new wave of activism has united entertainment executives and artists to take on the tech industry with social media campaigns, crowdfunded lobbying, and lawsuits. The Human Artistry Campaign, an industry-artist coalition founded on the principle that “AI can never replace human expression and artistry”, rallies creatives and executives to jointly endorse legislation protecting artists from AI and big tech. But some artists, creators and civil liberties groups warn of another danger: big content.
What might the consequences be of well-intentioned working creatives taking the side of large media conglomerates that have long exploited their labour and aggressively expanded copyright against the public interest? While some artists insist that an “enemy of my enemy” approach justifies joining big content’s side, that strategy falls apart when big content and big tech go from enemies to lovers.
Dave Hansen, copyright lawyer and executive director of the non-profit Authors Alliance, argues that copyright lawsuits won’t protect artists against AI. Instead, they’ll lead to exclusive licensing deals between large media and tech companies while “everybody else gets sort of left out in the cold”. History supports the cynics. When the tech and entertainment industries negotiated licensing during the rise of streaming, labels and studios pocketed the profits and left musicians, writers and actors behind. Will AI licensing deals be any different? When the AI company Runway and Lionsgate struck a licensing deal, the CEO of United Talent Agency, Jeremy Zimmer, said: “If I’m an artist and I’ve made a Lionsgate movie, now suddenly that Lionsgate movie is going to be used to help build out an LLM for an AI company, am I going to be compensated for that?” In some multimillion-dollar deals between publishers and AI companies, authors were given neither compensation nor the choice to opt out of datasets.
Even if US courts rule that tech companies must pay for AI training data, working artists are unlikely to benefit. Creating a licensing regime under the current imbalance of power could embolden media companies to pressure artists into signing away their training rights as a condition of employment. Voice actors have already been asked to sign such contracts. Nor would mandatory licensing rein in big tech. Companies such as Google and OpenAI can afford to pay the cost of licensing this data; smaller, open-source AI developers cannot. Ironically, the quest to take down big tech through copyright is further consolidating power in its hands.
Too many of the solutions proposed in the name of “protecting artists” would not only fail to do so, but potentially hurt artists and the public at large. In the US, the proposed NO FAKES Act, supported by major entertainment coalitions, would create a federal “digital replication right” to regulate deepfakes: nonconsensual AI replicas of a person’s voice or likeness. However, civil liberties groups, including the Center for Democracy and Technology and the American Civil Liberties Union, have criticised the bill’s vague language, weak protections for free speech and potential for abuse. The NO FAKES Act would allow individuals – including children – to license and transfer their digital replica rights for up to 10 years (five for children). It’s easy to imagine studio executives salivating over the prospect of pressuring young artists into signing away control of their own faces and voices.
Why do these solutions fall so short? Because many of these copyright lawsuits, licensing solutions and digital replica rights are Trojan horses, inside of which sits big content. The Copyright Alliance, an influential non-profit advocating for the interests of the “copyright community”, argues for strong copyright solutions to generative AI. While it claims to “advocate for individual creators”, its board of directors is stacked with industry executives from media giants such as Paramount, NBC Universal, Disney and Warner Bros.
But why all the fanfare of coalition-building when the entertainment industry could just quietly pocket billions in deals with tech companies? Because big content needs artists. Its media empires need artists’ labour to profit, its lobbying needs artist support to seem legitimate and its new AI business partners need artists’ art.
This fact points to a strategy that entertainment executives fear far more than AI, one that would empower artists to challenge the status quo across big content and big tech: organised labour. Unionised creative workers, such as those in the Writers Guild and Screen Actors Guild–American Federation of Television and Radio Artists, have secured meaningful protections against AI through strikes and collective bargaining. Copyright is a tool too antiquated, too static and too indelicate to bear the task of deciding the future of an already precarious creative labour force. If big content truly cared about protecting artists from AI, it would stop trying to sell their voices as training data and start listening to them.
Alexander Avila is a video essayist, writer and researcher