Celebrities face a huge and rapidly growing problem: generative AI is facilitating the unauthorized use of their names, images and likenesses in fake content distributed across the internet.
But as exclusive VIP+ data measuring the sheer volume of illegal content detected online since the second half of 2022 shows, pirated content is rising so rapidly that mitigating the problem feels like an eternal game of whack-a-mole.
Conversations with talent agencies and with companies specializing in curbing the rise of infringement provided deeper insight into the problem itself and into new approaches to dealing with infringing content.
Aggressive mitigation measures are underway, but rollout is still in its early stages and has been unevenly applied across talent and their teams. Sources also said legal uncertainty makes it harder to know the best course of action.
Traditionally, discovering NIL infringements and issuing takedown requests under the Digital Millennium Copyright Act has been a primarily manual process, often conducted by talent representatives (usually lawyers, managers, publicists, agents, etc.). These teams typically send weekly or monthly maintenance reports to their clients documenting infringements and takedowns.
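For a sense of what each notice in that manual process involves, the sketch below assembles the elements a takedown notice under the DMCA (17 U.S.C. § 512(c)(3)) is generally expected to contain. The function name and template wording are hypothetical illustrations, not any firm's actual form or legal advice.

```python
# Hypothetical helper illustrating the elements a DMCA takedown notice
# generally must contain under 17 U.S.C. § 512(c)(3). The wording and
# function name are illustrative assumptions, not a real firm's template.
def draft_dmca_notice(work: str, infringing_url: str,
                      rights_holder: str, contact_email: str) -> str:
    """Assemble the statutory elements of a takedown notice as plain text."""
    return "\n".join([
        f"Identification of the protected work: {work}",
        f"Location of the allegedly infringing material: {infringing_url}",
        f"Contact: {rights_holder} <{contact_email}>",
        ("I have a good-faith belief that the use described above is not "
         "authorized by the rights holder, its agent, or the law."),
        ("The information in this notice is accurate, and under penalty of "
         "perjury, I am authorized to act on behalf of the rights holder."),
        f"/s/ {rights_holder}",  # signature of the authorized sender
    ])

# One notice per infringing URL is the unit of work that makes
# the manual process so labor-intensive at scale.
print(draft_dmca_notice(
    "Original photograph of the client",
    "https://example.com/infringing-page",
    "Talent Representative LLP",
    "legal@example.com",
))
```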
But given the sheer scale of the breaches, manual takedown procedures may not be enough. One agency source told VIP+ that while some clients have started hiring cybersecurity firms to mitigate the onslaught of NIL deepfakes, even this defense is slow to make headway. As a result, a lot of illegal content will either slip through the cracks or require agencies to expand their teams and billable hours, inflating talent costs.
“Even when legal teams or cybersecurity companies find these breaches, so many of them slip through the cracks because it’s so manual,” one agency source told VIP+. “It’s really baffling the amount of breaches that clients are seeing, spending hundreds of thousands of dollars in legal fees on this very manual process.”
Scale is not the only challenge with AI-generated fake content. Another is the viral, evasive way in which the content spreads across the internet. Sources explained that a variety of infringement scenarios are becoming common; for example, one agency source shared that a talent's likeness appeared in a fake ad distributed on a pornographic website.
A further challenge for talent may be the growing complexity of their own teams as the roster of responders tasked with finding violating content and issuing deepfake takedown requests expands.
As the problem grows, some talent is turning to new automated detection and removal services offered by companies such as Vermillio and Loti, sources said. WME has partnered with both companies to make the services available to clients for an additional fee if they wish.
Agency officials expected more talent would start adopting such solutions because they are automated and “more all-in-one,” rather than requiring numerous teams to manually tackle a seemingly impossible problem.
Solutions such as Loti and Vermillio are part of a new and fast-growing category of companies offering data protection services, some of which focus specifically on the entertainment sector; agency sources cited meetings with “dozens” of startups found through referrals and active research.
But not all are equally effective, especially given the scale and multifaceted nature of talent and intellectual property issues. Few can not only detect and reliably identify AI-generated content but also match it to specific identities, analyze whether the content is appropriate and automatically issue takedown requests. Fewer still pair that with the requisite understanding of the entertainment industry and the ability to work with talent.
One agency source said vetting a solution means checking its functionality (e.g., evidence of demonstrated success), its ability to deliver at the required scale (managing more than 1,000 clients), its cybersecurity (ensuring client data is safe when shared) and its legal posture (ensuring the solution is compliant and fits your needs).
Both Vermillio and Loti scan large swaths of the public internet to detect copyright-infringing content that misuses specific individuals’ NILs or rights holders’ IP, then automatically issue DMCA takedown requests to remove it.
These solutions are technically complex, involving dense technology stacks: systems that ingest massive amounts of content every day, machine learning models that analyze and detect AI-generated content, facial recognition that finds and positively identifies client faces, and automation that issues takedown requests across numerous platforms.
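To make the shape of such a stack concrete, here is a minimal sketch of the core detect-match-queue loop. Every name in it (Item, Client, run_pipeline, the thresholds, the stub models) is an illustrative assumption, not Vermillio’s or Loti’s actual architecture or API.

```python
# Minimal sketch of a detect-and-takedown pipeline of the kind described
# above. All names, thresholds, and stubs are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Item:
    url: str
    media: bytes  # crawled image/video payload

@dataclass
class Client:
    client_id: str
    reference_faces: list  # enrollment photos for face matching

def run_pipeline(
    items: Iterable[Item],
    clients: list,
    ai_detector: Callable[[bytes], float],         # P(content is AI-generated)
    face_matcher: Callable[[bytes, list], float],  # identity match confidence
    ai_threshold: float = 0.9,
    face_threshold: float = 0.85,
) -> list:
    """Return (url, client_id) pairs queued for DMCA takedown requests."""
    queue = []
    for item in items:                        # 1. ingest crawled content
        if ai_detector(item.media) < ai_threshold:
            continue                          # 2. skip likely-authentic media
        for client in clients:                # 3. match to a known identity
            if face_matcher(item.media, client.reference_faces) >= face_threshold:
                queue.append((item.url, client.client_id))
    return queue                              # 4. hand off to notice automation

# Tiny demo with stub models standing in for real ML components.
if __name__ == "__main__":
    items = [Item("https://example.com/fake-ad", b"...")]
    clients = [Client("client-001", [])]
    queue = run_pipeline(items, clients,
                         ai_detector=lambda m: 0.97,
                         face_matcher=lambda m, refs: 0.91)
    print(queue)  # [('https://example.com/fake-ad', 'client-001')]
```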
For agencies, addressing this talent issue also means communicating and working more directly with social media companies.
“Agencies are in ongoing discussions with the major platforms, YouTube and Meta,” an agency source told VIP+. “We believe the biggest players in preventing and detecting all of this are the platforms. Who’s better at taking things down than YouTube?”
Meanwhile, social media companies have a strong incentive to build their own detection, identification, and moderation tools. Earlier this month, YouTube announced that it had developed synthetic singing identification technology within Content ID to help partners automatically detect and moderate content that impersonates them. It also said it plans to release similar tools for people working in other industries, including creators, actors, musicians, and athletes.
But even the most vigilant social platforms can only do so much. One source explained that synthetic content commonly jumps between platforms, with links embedded in social media posts directing users to other platforms or site pages (e.g., Patreon, product pages, adult content sites, interactive chatbot apps) to lure victims to places where transactions can be made.
This cross-platform “path to purchase” means that even if social media companies detect AI content on their own platforms, they may lack the visibility to determine whether a post that merely links out is funneling users to harmful content off-platform.
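One way to picture the tracing this requires: a scanner can extract outbound links from a flagged post and check whether they resolve to known monetization endpoints. The sketch below is a hypothetical illustration; the domain list and helper name are assumptions, and a production system would also resolve link shorteners and redirects.

```python
# Hypothetical sketch of tracing a flagged post's off-platform "path to
# purchase": extract outbound links and flag destinations on platforms
# where transactions can occur. The domain list is an assumed example.
import re
from urllib.parse import urlparse

# Example destination categories drawn from the article (assumed list).
MONETIZATION_DOMAINS = {"patreon.com", "onlyfans.com", "gumroad.com"}

LINK_RE = re.compile(r"https?://\S+")

def offplatform_links(post_text: str) -> list:
    """Return outbound links whose destination domain suggests a
    monetization endpoint outside the host platform."""
    flagged = []
    for url in LINK_RE.findall(post_text):
        host = urlparse(url).netloc.lower().removeprefix("www.")
        if host in MONETIZATION_DOMAINS:
            flagged.append(url)
    return flagged

post = "New AI pics of your favorite star! Full set at https://patreon.com/fakepage"
print(offplatform_links(post))  # ['https://patreon.com/fakepage']
```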
As a result, third-party tech solutions that scan broader swaths of the internet, such as those offered by Loti and Vermillio, have become essential for social media companies and can even be integrated directly into their platforms to enhance content moderation in ways that wouldn’t be possible otherwise.
Integrating third-party services into social platforms through partnerships gives talent teams and representatives better support for removal requests and helps ensure the platform honors them.