By Matt O'Brien

Deepfakes generated by artificial intelligence are having their moment this year, at least when it comes to making it look, or sound, like celebrities did something uncanny. Tom Hanks hawking a dental plan. Pope Francis wearing a stylish puffer jacket. U.S. Sen. Rand Paul sitting on the Capitol steps in a red bathrobe.

But what happens next year ahead of a U.S. presidential election?

Google was the first big tech company to say it would impose new labels on deceptive AI-generated political advertisements that could fake a candidate's voice or actions. Now some U.S. lawmakers are calling on social media platforms X, Facebook and Instagram to explain why they aren't doing the same.

Two Democratic members of Congress sent a letter Thursday to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino expressing “serious concerns” about the emergence of AI-generated political ads on their platforms and asking each to explain any rules they're crafting to curb the harms to free and fair elections.

“They are two of the largest platforms and voters deserve to know what guardrails are being put in place,” said U.S. Sen. Amy Klobuchar of Minnesota in an interview with The Associated Press. “We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”

The letter to the executives from Klobuchar and U.S. Rep. Yvette Clarke of New York warns: “With the 2024 elections quickly approaching, a lack of transparency about this type of content in political ads could lead to a dangerous deluge of election-related misinformation and disinformation across your platforms – where voters often turn to learn about candidates and issues."

X, formerly Twitter, and Meta, the parent company of Facebook and Instagram, didn't respond to requests for comment Thursday. Clarke and Klobuchar asked the executives to respond to their questions by Oct. 27.

The pressure on the social media companies comes as both lawmakers are helping to lead a charge to regulate AI-generated political ads. A House bill introduced by Clarke earlier this year would amend a federal election law to require labels when election advertisements contain AI-generated images or video.

“I think that folks have a First Amendment right to put whatever content on social media platforms that they’re moved to place there,” Clarke said in an interview Thursday. “All I’m saying is that you have to make sure that you put a disclaimer and make sure that the American people are aware that it’s fabricated.”

For Klobuchar, who is sponsoring companion legislation in the Senate that she aims to get passed before the end of the year, “that's like the bare minimum” of what is needed. In the meantime, both lawmakers said they hope that major platforms take the lead on their own, especially given the disarray that has left the House of Representatives without an elected speaker.

Google has already said that starting in mid-November it will require a clear disclaimer on any AI-generated election ads that alter people or events on YouTube and other Google products. Google's policy applies both in the U.S. and in other countries where the company verifies election ads. Facebook and Instagram parent Meta doesn’t have a rule specific to AI-generated political ads but has a policy restricting “faked, manipulated or transformed” audio and imagery used for misinformation.

A more recent bipartisan Senate bill, co-sponsored by Klobuchar, Republican Sen. Josh Hawley of Missouri and others, would go further, banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire.

AI-generated ads are already part of the 2024 election, including one aired by the Republican National Committee in April meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.

Klobuchar said such an ad would likely be banned under the rules proposed in the Senate bill. So would a fake image of Donald Trump hugging infectious disease expert Dr. Anthony Fauci that was shown in an attack ad from Trump's GOP primary opponent and Florida Gov. Ron DeSantis.

As another example, Klobuchar cited a deepfake video from earlier this year purporting to show Democratic Sen. Elizabeth Warren in a TV interview suggesting restrictions on Republicans voting.

“That is going to be so misleading if you, in a presidential race, have either the candidate you like or the candidate you don’t like actually saying things that aren’t true,” said Klobuchar, who ran for president in 2020. “How are you ever going to know the difference?”

Klobuchar, who chairs the Senate Rules and Administration Committee, presided over a Sept. 27 hearing on AI and the future of elections that brought witnesses including Minnesota's secretary of state, a civil rights advocate and some skeptics. Republicans and some of the witnesses they asked to testify have been wary about rules seen as intruding into free speech protections.

Ari Cohn, an attorney at the think tank TechFreedom, told senators that the deepfakes that have so far appeared ahead of the 2024 election have attracted “immense scrutiny, even ridicule,” and haven't played much role in misleading voters or affecting their behavior. He questioned whether new rules were needed.

“Even false speech is protected by the First Amendment,” Cohn said. “Indeed, the determination of truth and falsity in politics is properly the domain of the voters.”

Some Democrats are also reluctant to support an outright ban on political deepfakes. “I don't know that that would be successful, particularly when it gets to First Amendment rights and the potential for lawsuits,” said Clarke, who represents parts of Brooklyn in Congress.

But her bill, if passed, would empower the Federal Election Commission to start enforcing a disclaimer requirement on AI-generated election ads similar to what Google is already doing on its own.

The FEC in August took a procedural step toward potentially regulating AI-generated deepfakes in political ads, opening to public comment a petition that asked it to develop rules on misleading images, videos and audio clips.

The public comment period for the petition, brought by the advocacy group Public Citizen, ends Oct. 16.

Associated Press writer Ali Swenson contributed to this report.
