WASHINGTON (AP) — The phone rings. It’s the secretary of state calling. Or is it?

For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump’s administration.

Digital fakes are coming for corporate America, too, as criminal gangs and hackers associated with adversaries including North Korea use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.

Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and private individuals and making trust the most valuable currency of the digital age.

Responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.

“As humans, we are remarkably susceptible to deception,” said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes solutions to the challenge of deepfakes may be within reach: “We are going to fight back.”

AI deepfakes become a national security threat

This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, a U.S. senator and a governor over text, voice mail and the Signal messaging app.

In May someone impersonated Trump’s chief of staff, Susie Wiles.

Another phony Rubio had popped up in a deepfake earlier this year, saying he wanted to cut off Ukraine’s access to Elon Musk’s Starlink internet service. Ukraine’s government later rebutted the false claim.

The national security implications are huge: People who think they’re chatting with Rubio or Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy.

“You’re either trying to extract sensitive secrets or competitive information or you’re going after access, to an email server or other sensitive network,” Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.

Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state’s upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.

Their ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.

Steven Kramer, the political consultant who admitted sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate.

“I did what I did for $500,” Kramer said. “Can you imagine what would happen if the Chinese government decided to do this?”

Scammers target the financial industry with deepfakes

The greater availability and sophistication of the programs mean deepfakes are increasingly used for corporate espionage and garden-variety fraud.

“The financial industry is right in the crosshairs,” said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. “Even individuals who know each other have been convinced to transfer vast sums of money.”

In the context of corporate espionage, deepfakes can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.

Deepfakes can also allow scammers to apply for jobs — and even do them — under an assumed or fake identity. For some this is a way to access sensitive networks, to steal secrets or to install ransomware. Others just want the work and may be working a few similar jobs at different companies at the same time.

Authorities in the U.S. have said that thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech firms in the U.S. and elsewhere. The workers get access to company networks as well as a paycheck. In some cases, the workers install ransomware that can later be used to extort even more money.

The schemes have generated billions of dollars for the North Korean government.

Within three years, as many as 1 in 4 job applications is expected to be fake, according to research from Adaptive Security, a cybersecurity company.

“We’ve entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person,” said Brian Long, Adaptive’s CEO. “It’s no longer about hacking systems — it’s about hacking trust.”

Experts deploy AI to fight back against AI

Researchers, public policy experts and technology companies are now investigating the best ways of addressing the economic, political and social challenges posed by deepfakes.

New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others — if they can be caught.

Greater investments in digital literacy could also boost people’s immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.

The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person.

Systems like Pindrop’s analyze millions of data points in any person’s speech to quickly identify irregularities. The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance.

Similar programs may one day be commonplace, running in the background as people chat with colleagues and loved ones online. Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop’s CEO.

“You can take the defeatist view and say we’re going to be subservient to disinformation,” he said. “But that’s not going to happen.”
