Young Boss News
Americans

America needs better laws on AI in political ads

By admin | September 24, 2024 | 9 min read


AI has been undermining people’s ability to trust what they see, hear, and read for years. The Republican National Committee released a provocative ad in which “AI predicts the country’s future if Joe Biden is re-elected,” showing apocalyptic machine-generated images of devastated cityscapes and border chaos. A fake robocall posing as Biden urged New Hampshire residents not to vote in the 2024 primary. This summer, the Department of Justice cracked down on a Russian bot farm that was using AI to impersonate Americans on social media, and OpenAI disrupted an Iranian group that was using ChatGPT to generate fake social media comments.

While it’s not entirely clear what damage AI itself could cause, the reason for concern is clear: AI technology makes it easier for bad actors to create highly persuasive and misleading content. With that risk in mind, there have been some moves to limit AI’s use, but progress has been extremely slow in the area where AI could matter most: the 2024 election.

Two years ago, the Biden administration released a blueprint for an AI Bill of Rights aimed at addressing “unsafe or ineffective systems,” “algorithmic discrimination,” and “abusive data practices.” Last year, Biden issued an executive order on AI based on that document. In 2023, Senate Majority Leader Chuck Schumer hosted an AI summit in Washington attended by billionaires such as Bill Gates, Mark Zuckerberg, and Elon Musk. A few weeks later, the U.K. hosted an international AI safety summit, which produced the serious-sounding “Bletchley Declaration” encouraging international cooperation on AI regulation. The risk of AI fraud in elections has not gone unnoticed.

But none of these efforts have translated into any meaningful change to address the use of AI in U.S. political campaigns. To make matters worse, two federal agencies that had a chance to address the issue have held off until possibly after the election.


On July 25, the Federal Communications Commission announced a proposal to require disclosure of whether television and radio political ads use AI. (The FCC has no jurisdiction over streaming, social media, or web ads.) While this seems like progress, there are two big problems. First, the proposed rules, even if enacted, are unlikely to take effect before early voting begins for this year’s election. Second, the proposal quickly devolved into a fierce partisan fight. A Republican FCC commissioner argued that the Democratic National Committee was orchestrating the rule change because Democrats are lagging behind Republicans in the use of AI in elections. Moreover, he argued that this is the job of the Federal Election Commission.

But last month, the FEC announced it would not even try to enact new rules to ban the use of AI to impersonate candidates in election ads using deepfake audio or video. The FEC also said it lacked the legal authority to enact rules on deepfake audio or video misrepresentations, and lamented that it lacked the technical expertise to do so in the first place. Then last week, the FEC compromised, announcing it would enforce existing rules against deceptive misrepresentations, no matter what technology is used. Groups such as Public Citizen, which advocate for stricter rules on AI in election ads, said this is far from enough and characterized it as a “wait-and-see approach” to address “election disruption.”

Perhaps this is to be expected: the First Amendment’s free-speech guarantee generally permits lying in political ads. But polls suggest Americans want some rules governing the use of AI in election campaigns. In 2023, more than half of Americans surveyed said the federal government should ban all use of AI-generated content in political ads. In 2024, about half said political candidates who knowingly manipulate audio, images, or video should be barred from holding public office, or removed from office if they win. Only 4 percent thought there should be no penalties at all.

The fundamental problem is that Congress has not explicitly given any agency the responsibility to keep political advertisements grounded in reality, whether in response to AI or to old-fashioned disinformation. The Federal Trade Commission has jurisdiction over truth in advertising, but political ads are largely exempt; this, too, is part of the First Amendment tradition. The Federal Election Commission’s remit is campaign finance, but the Supreme Court has gradually stripped away its authority, and even when the commission can act, it is often hamstrung by political gridlock. The Federal Communications Commission has a more explicit mandate to regulate political ads, but only in certain media: broadcast, robocalls, and text messages. To make matters worse, the FCC’s rules are not always robust. In fact, the FCC has loosened its rules on political spam over time, leading to the deluge of messages many people receive today (though in February, the FCC unanimously ruled that robocalls using AI voice-cloning technology, like the fake Biden call in New Hampshire, were already illegal under a 30-year-old law).

It’s a fragmented system, with many important activities falling victim to gaps in statutory authority and turf wars between federal agencies. And as political campaigns have gone digital, they have made inroads into online spaces with even fewer disclosure requirements and other regulations. No one seems to agree on whether AI falls under these agencies’ jurisdiction, or whether it should. In the absence of widespread regulation, some states are making their own decisions. In 2019, California became the first state in the nation to ban the use of deceptively manipulated media in elections, and this fall it strengthened those protections with a series of newly passed laws. Currently, 19 states have passed laws regulating the use of deepfakes in elections.

One issue regulators will have to grapple with is AI’s broad applicability. The technology can be used for many purposes, each of which may call for its own rules. It may be acceptable for candidates to digitally touch up their own photos, but not for them to do the same to make an opponent look worse. We are accustomed to receiving campaign messages and letters signed by the candidate; will we be comfortable receiving robocalls in which a clone of that politician’s voice speaks our name? And what should we make of the AI-generated campaign memes shared by figures like Musk and Donald Trump?


Despite the stalemate in Congress, these are bipartisan concerns, so it is conceivable that something could be done, though perhaps not until after the 2024 election, and only if lawmakers overcome major obstacles. One bill under consideration, the AI Transparency in Elections Act, would direct the FEC to require disclosure when political ads use media substantially generated by AI. Critics implausibly claim that such disclosure would be burdensome and would increase the cost of political advertising. The Honest Ads Act would modernize campaign finance law and explicitly extend the FEC’s authority to digital advertising, but it has been stalled for years, reportedly because of opposition from the tech industry. The Protecting Elections from Deceptive AI Act would ban substantially deceptive AI-generated content in federal elections, as California and other states do. These are promising proposals, but libertarian and civil liberties groups are already challenging all of them on First Amendment grounds. And, troublingly, at least one FEC commissioner has directly cited some of these pending bills as a reason for the FEC not to act on AI for the time being.

One group benefits from this chaos: the tech platforms. With few or no clear rules regulating online political spending or the use of new technologies like AI, tech companies have maximum freedom to sell ads, services, and personal data to campaigns. This is reflected in their lobbying efforts and the self-imposed policy restrictions they sometimes trumpet to convince the public that stricter regulation is unnecessary.

Big tech companies have demonstrated that they will honor these voluntary pledges only when it benefits the industry. Facebook once briefly banned political ads on its platform, but no longer does so, and it even allows ads that baselessly deny the results of the 2020 presidential election. OpenAI’s policies have long prohibited political campaigns from using ChatGPT, but those restrictions are easy to get around. Several companies have voluntarily offered to watermark AI-generated content, but the watermarks are easily circumvented. Watermarks could even exacerbate disinformation by giving the false impression that unwatermarked images are authentic.

This important public policy should not be left to the corporations, yet Congress seems resigned to not acting before the election. Schumer suggested to NBC News in August that Congress might attach deepfake regulations to a must-pass budget or defense bill this month to ensure they become law before the election. More recently, however, he has spoken of the need for action “after the 2024 election.”


The three bills above are worthy, but they are only a start. The FEC and the FCC should not be left pointing fingers at each other over which areas belong to which agency. The FEC also needs deeper structural reform to become less partisan and more effective. We likewise need transparency into, and control over, the algorithmic amplification of misinformation on social media platforms. That will require stronger regulation of lobbying and campaign finance to limit the pervasive influence of tech companies and their billionaire investors.

Regulation of political advertising has not kept up with AOL, let alone social media and AI. And deceptive videos harm our democratic process, whether they are created by AI or by actors on a soundstage. But the urgency of concerns about AI should be harnessed to advance legislative reform. Congress needs to do more than stick a few fingers in the dike to hold back the tide of election disinformation. It needs to act more boldly to reshape the landscape of political campaign regulation.


