In a two-sentence footnote tucked into a voluminous court opinion, a federal judge recently called out immigration agents for using artificial intelligence to write use-of-force reports, warning that the practice could lead to inaccuracies and further erode public confidence in how police have handled the immigration crackdown in the Chicago area and the ensuing protests.

U.S. District Judge Sara Ellis wrote the footnote in a 223-page opinion issued last week, noting that the practice of using ChatGPT to write use-of-force reports undermines agents’ credibility and “may explain the inaccuracy of these reports.” She described what she saw in at least one body camera video, writing that an agent asks ChatGPT to compile a narrative for a report after giving the program a brief description and several images.

The judge noted factual discrepancies between the official narratives of those law enforcement responses and what body camera footage showed. But experts say using AI to write a report that is supposed to reflect an officer’s specific perspective, without drawing on that officer’s actual experience, is the worst possible use of the technology and raises serious concerns about accuracy and privacy.

An officer’s needed perspective

Law enforcement agencies across the country have been grappling with how to create guardrails that allow officers to use the increasingly available AI technology while maintaining accuracy, privacy and professionalism. Experts said the example recounted in the opinion didn’t meet that challenge.

“What this guy did is the worst of all worlds. Giving it a single sentence and a few pictures — if that’s true, if that’s what happened here — that goes against every bit of advice we have out there. It’s a nightmare scenario,” said Ian Adams, an assistant professor of criminology at the University of South Carolina who serves on an artificial intelligence task force at the Council on Criminal Justice, a nonpartisan think tank.

The Department of Homeland Security did not respond to requests for comment, and it was unclear if the agency had guidelines or policies on the use of AI by agents. The body camera footage cited in the order has not yet been released.

Adams said few departments have put policies in place, but those that have often prohibit the use of predictive AI when writing reports justifying law enforcement decisions, especially use-of-force reports. Courts have established a standard referred to as “objective reasonableness” when considering whether a use of force was justified, relying heavily on the perspective of the officer.

“We need the specific articulated events of that event and the specific thoughts of that specific officer to let us know if this was a justified use of force,” Adams said. “That is the worst-case scenario, other than explicitly telling it to make up facts, because you’re begging it to make up facts in this high-stakes situation.”

Private information and evidence

Beyond the concern that an AI-generated report could inaccurately characterize what happened, the use of AI also raises potential privacy issues.

Katie Kinsey, chief of staff and tech policy counsel at the Policing Project at NYU School of Law, said that if the agent in the order was using a public version of ChatGPT, he probably didn’t understand that he lost control of the images the moment he uploaded them, effectively placing them in the public domain where bad actors could potentially use them.

Kinsey said that, from a technology standpoint, most departments are building the plane as it’s being flown when it comes to AI. She said law enforcement often waits until new technologies are already in use, and in some cases until mistakes have already been made, before talking about putting guidelines or policies in place.

“You would rather do things the other way around, where you understand the risks and develop guardrails around the risks,” Kinsey said. “Even if they aren’t studying best practices, there’s some lower-hanging fruit that could help. We can start from transparency.”

Kinsey said that while federal law enforcement considers how the technology should and shouldn’t be used, it could adopt policies like those recently put in place in Utah and California, which require police reports or communications written using AI to be labeled.

Careful use of new tools

The photographs the agent used to generate a narrative also raised accuracy concerns for some experts.

Well-known tech companies like Axon have begun offering AI tools with their body cameras to assist in writing incident reports. Those AI programs marketed to police operate on closed systems and largely limit themselves to using audio from body cameras to produce narratives because, the companies have said, programs that attempt to use visuals are not yet reliable enough.

“There are many different ways to describe a color, or a facial expression or any visual component. You could ask any AI expert and they would tell you prompts return very different results between different AI applications, and that gets complicated with a visual component,” said Andrew Guthrie Ferguson, a law professor at George Washington University Law School.

“There’s also a professionalism question. Are we OK with police officers using predictive analytics?” he added. “It’s about what the model thinks should have happened, but might not be what actually happened. You don’t want it to be what ends up in court, to justify your actions.”
