AI has been undermining people’s ability to trust what they see, hear, and read for years. The Republican National Committee released a provocative ad in which “AI predicts the country’s future if Joe Biden is re-elected,” showing apocalyptic machine-generated images of devastated cityscapes and border chaos. A fake robocall posing as Biden urged New Hampshire residents not to vote in the 2024 primary. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group that was using ChatGPT to generate fake social media comments.
While it’s not entirely clear how much damage these activities have caused, the reason for concern is obvious: AI makes it far easier for bad actors to produce highly persuasive and misleading content. With that risk in mind, there have been some moves to limit AI’s use, but progress has been painfully slow in the arena where it could matter most: the 2024 election.
Two years ago, the Biden administration released a blueprint for an AI Bill of Rights aimed at addressing “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices.” Last year, Biden issued an executive order on AI that built on that blueprint. Also in 2023, Senate Majority Leader Chuck Schumer hosted an AI summit in Washington attended by billionaires such as Bill Gates, Mark Zuckerberg, and Elon Musk. A few weeks later, the UK hosted an international AI safety summit that produced the serious-sounding “Bletchley Declaration,” encouraging international cooperation on AI regulation. The risk of AI-enabled fraud in elections has not gone unnoticed.
But none of these efforts has translated into meaningful change in how AI can be used in U.S. political campaigns. To make matters worse, the two federal agencies that had a chance to address the issue have punted, possibly until after the election.
On July 25, the Federal Communications Commission announced a proposal to require disclosure of whether television and radio political ads use AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) While this seems like progress, there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting begins for this year’s election. Second, the proposal quickly devolved into a fierce partisan fight. A Republican FCC commissioner argued that the Democratic National Committee was orchestrating the rule change because Democrats are lagging behind Republicans in the use of AI in elections. Moreover, he argued that this is the job of the Federal Election Commission.
But last month the FEC announced that it would not even attempt new rules against using AI to impersonate candidates in campaign ads through deepfake audio or video. The commission said it lacked the legal authority to enact such rules and lamented that it lacked the technical expertise to do so in any case. Then, last week, the FEC split the difference, announcing that it would enforce its existing rules against deceptive misrepresentation no matter what technology is used to produce it. Groups such as Public Citizen, which advocate for stricter rules on AI in election ads, said this falls far short, characterizing it as a “wait-and-see approach” to “election disruption.”
Perhaps this is to be expected: the First Amendment’s free speech guarantee generally allows lying in political ads. But Americans have signaled that they want some rules governing AI’s use in election campaigns. In 2023, more than half of Americans surveyed said the federal government should ban all AI-generated content in political ads. And in 2024, about half of surveyed Americans said that political candidates who intentionally manipulate audio, images, or video should be barred from holding public office or removed from office if they win an election. Only 4% thought there should be no penalty at all.
The fundamental problem is that Congress has not explicitly given any agency responsibility for ensuring that political ads are truthful, whether the threat comes from AI or from old-fashioned disinformation. The FTC has jurisdiction over truth in advertising, but political ads are largely exempt; this, too, is part of the First Amendment tradition. The FEC’s mandate is campaign finance, but the Supreme Court has gradually stripped away its powers, and even when the commission can act, it is often hamstrung by partisan gridlock. The FCC has a more explicit mandate to regulate political advertising, but only in certain media: broadcast, robocalls, and text messages. To make matters worse, the FCC’s rules are not always stringent. In fact, the FCC has loosened its rules on political spam over time, leading to the deluge of messages many people receive today (though in February, the FCC did unanimously rule that robocalls using AI voice-cloning technology, like the fake Biden call in New Hampshire, were already illegal under a 30-year-old law).
It’s a fragmented system, with many important activities falling through gaps in statutory authority and into turf wars between federal agencies. And as political campaigns have gone digital, they have moved into online spaces with even fewer disclosure requirements or other regulations. No one seems to agree on whether AI falls under these agencies’ jurisdiction, or whether it should. In the absence of broad federal regulation, some states are making their own decisions. In 2019, California became the first state in the nation to ban the use of deceptively manipulated media in elections, and it strengthened those protections this fall with a series of newly passed laws. Nineteen states have now passed laws regulating the use of deepfakes in elections.
One issue regulators will have to confront is the broad applicability of AI: the technology can be used for many different purposes, each of which may demand its own intervention. It might be acceptable for candidates to digitally touch up their own photos, but not to do the same to make their opponents look worse. We’re used to receiving campaign mailers and letters signed by a candidate; will we be okay with receiving robocalls in which a clone of that candidate’s voice addresses us by name? And what should we make of the AI-generated election memes shared by figures like Musk and Donald Trump?
Despite the gridlock in Congress, these concerns are bipartisan, so it’s conceivable that something could be done, though perhaps not until after the 2024 election and only if lawmakers overcome major obstacles. One bill under consideration, the AI Transparency in Elections Act, would direct the FEC to require disclosure when political ads use media substantially generated by AI. Critics claim, implausibly, that such disclosure would be burdensome and would increase the cost of political advertising. The Honest Ads Act would modernize campaign finance law and explicitly extend the FEC’s authority to digital advertising, but it has been stalled for years, reportedly because of opposition from the tech industry. The Protecting Elections from Deceptive AI Act would ban substantially deceptive AI-generated content from federal elections, as California and other states have done. These are promising proposals, but libertarian and civil rights groups are already challenging all of them on First Amendment grounds. And, troublingly, at least one FEC commissioner has cited the very fact that these bills are pending in Congress as a reason for the FEC not to act on AI for now.
One group benefits from this chaos: the tech platforms. With few or no clear rules regulating online political spending or the use of new technologies like AI, tech companies have maximum freedom to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts and the self-imposed policy restrictions they sometimes trumpet to convince the public that stricter regulation is unnecessary.
Big Tech has demonstrated that it will honor these voluntary pledges only when doing so benefits the industry. Facebook once briefly banned political ads on its platform, but it no longer does, and it even allows ads that baselessly deny the results of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are easy to circumvent. Several companies have volunteered to add watermarks to AI-generated content, but the watermarks are easily evaded. Watermarking could even make disinformation worse by giving the false impression that non-watermarked images are authentic.
These important public policy decisions shouldn’t be left to corporations, yet Congress seems resigned to not acting before the election. Schumer suggested to NBC News in August that Congress might attach deepfake regulations to a must-pass budget or defense bill this month to ensure they become law before the election. More recently, he has cited the need for action “after the 2024 election.”
The three bills above are worthy, but they are only a start. The FEC and the FCC should not be left pointing fingers at each other over which issues belong to which agency, and the FEC needs deeper structural reform to become less partisan and more effective. We also need transparency into, and control over, the algorithmic amplification of misinformation on social media platforms. That will require stronger lobbying and campaign finance rules to limit the pervasive influence of tech companies and their billionaire investors.
Campaign regulations have not kept up with AOL, let alone social media and AI. And deceptive videos harm our democratic process whether they are created by AI or by actors on a soundstage. But the urgent concern over AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to hold back the tide of election disinformation. It needs to act boldly to reshape the landscape of political campaign regulation.