Google’s efforts to use artificial intelligence to more quickly process data requests from government and law enforcement agencies aren’t going as well as the company had hoped.
Swamped by a deluge of demands from police across the world (there were 236,000 in the first six months of 2024 alone), the company has increasingly been looking to AI to wrangle the mounting backlog of court orders and data requests faced by its long-suffering Legal Investigations Support (LIS) team. Current and former members of that team told Forbes that Google engineers had been working on tools that could ingest court orders, subpoenas and other official requests, then find the relevant data for an LIS member to review. In theory, the tools would significantly speed up the manual work of an LIS staffer. One source familiar with the matter told Forbes the backlog of requests is in the thousands.
But those tools have so far failed to do what is needed of them, sources said. Though the AI was trained on the work done by the LIS team, it has not yet been able to replicate it. And now 10 engineers charged with developing the AI have been sacked, and the fate of the project has been thrown into doubt, staff told Forbes.
One staffer said the AI hasn’t yet been deployed and that the layoffs would delay it further. Another added, “Calling any of our current tooling ‘AI’ feels like a stretch.” As Forbes previously reported, a trial of the technology actually created more work, because any requests processed by AI had to be double-checked and often redone by humans before being released to law enforcement.
Google declined to comment on the departures and would not answer questions about AI as a solution to its law enforcement request management problem. Google spokesperson Alex Krasov would say only that the company continued to make changes to operate more efficiently without “changing the way we receive or assess law enforcement requests.”
Cooper Quintin, senior public interest technologist at the Electronic Frontier Foundation, told Forbes he thinks it’s a “bad idea” to use AI for any kind of legal process because of models’ tendency to “hallucinate,” making up information. He pointed to a bevy of recent cases where judges warned lawyers about using AI to write up their filings after the software fabricated legal citations. “Clearly the solution is to hire more people to deal with these requests, but Google is deploying AI slop instead,” Quintin said.
If the AI isn’t capable of properly parsing lawful police requests, Quintin asked, how can we trust it to detect fraudulent ones? There have been numerous cases in which criminals have pretended to work for police departments, forging court orders and emergency requests to pilfer personal information that could be used to locate, stalk or harass individuals. In November last year, the FBI warned of an uptick in hackers using compromised government email accounts to make such fraudulent requests.
“Google already has a problem with responding to fake orders and reports,” Quintin added. “I think an AI system like this will exacerbate that issue.”