The Prompt is a weekly rundown of AI’s buzziest startups, biggest breakthroughs, and business deals. To get it in your inbox, subscribe here.
Welcome back to The Prompt.
Another AI startup is (partially) being swallowed up by a tech giant.
On Friday, Amazon announced that it is hiring the cofounders and about a quarter of the employees of AI robotics company Covariant. The e-commerce giant has also obtained a non-exclusive license to the company’s AI models, which it plans to integrate into its fleet of industrial robots. Founded in 2017, Covariant has raised more than $240 million in funding from backers like Index Ventures and Radical Ventures.
The announcement follows a string of similar deals over the past few months, in which big tech companies have hired the founders and teams of buzzy AI startups like Inflection, Adept and Character AI.
Now let’s get into the headlines.
ETHICS + LAW
Facial recognition company Clearview AI has been fined $30 million by a Netherlands-based data privacy authority for scraping billions of images of people from the internet without their knowledge or consent and building an “illegal database” of photos. Clearview’s Chief Legal Officer Jack Mulcaire said that the company does not have any customers in the EU and that the decision is “unlawful.” The company’s facial recognition tools have been used by law enforcement agencies in hundreds of child exploitation cases, Forbes reported last year.
Two voice actors, Karissa Vacker and Mark Boyett, have sued AI voice generation startup ElevenLabs, alleging that the company used hours of their copyrighted audiobook narrations to train its foundational AI model and to produce customized synthetic voices that sound similar to their own. According to the filing, the company removed one of the AI-generated voices from its platform last year after the actor reached out, but for months was unable to remove the voice from its API due to a “technical challenge,” which allowed other websites to make duplicates of the voice. The company did not respond to Forbes’ request for comment.
POLITICS + ELECTION
Two convicted fraudsters and conspiracy theorists, Jacob Wohl and Jack Burkman, used fake names to secretly launch an AI lobbying firm called LobbyMatic, Politico reported. The duo also falsely claimed in demo screenshots that firms like Microsoft, Pfizer and Palantir used the AI platform to generate insights and analyze legislation, according to 404 Media. Late last year, the company also created a fake profile to publish blogs on Medium.
AI DEAL OF THE WEEK
ChatGPT maker OpenAI is in talks to raise several billion dollars in a round that would value the AI behemoth at $100 billion, the Wall Street Journal reported last week. Investment firm Thrive Capital, founded by billionaire Josh Kushner, is leading the round and plans to infuse $1 billion into the company. Tech giants like Apple, Nvidia and Microsoft are also reportedly participating.
Also notable: AI coding startup Codeium, featured on the Next Billion Dollar Startups list in August, raised $150 million at a $1.25 billion valuation.
DEEP DIVE
For many children visiting Disney World in Orlando, Florida, it was the trip of a lifetime. For the man who filmed them on a GoPro, it was something more nefarious: an opportunity to create child exploitation imagery.
The man, Justin Culmo, who was arrested in mid-2023, admitted to creating thousands of illegal images of children taken at the amusement park and at least one middle school, using a version of the AI model Stable Diffusion, according to federal agents who presented the case to a group of law enforcement officials in Australia earlier this month. Forbes obtained details of the presentation from a source close to the investigation.
Culmo has been indicted for a range of child exploitation crimes in Florida, including allegations he abused his two daughters, secretly filmed minors and distributed child sexual abuse imagery (CSAM) on the dark web. He has not been charged with AI CSAM production, which is a crime under U.S. law. At the time of publication, his lawyers had not responded to requests for comment. He entered a not guilty plea last year. A jury trial has been set for October.
“This is not just a gross violation of privacy, it’s a targeted attack on the safety of children in our communities,” said Jim Cole, a former Department of Homeland Security agent who tracked the defendant’s online activities during his 25 years as a child exploitation investigator. “This case starkly highlights the ruthless exploitation that AI can enable when wielded by someone with the intent to harm.”
The case is one of a growing number in which AI is used to transform photos of real children into realistic images of abuse. In August, the DOJ unsealed charges against army soldier Seth Herrera, accusing him of using generative AI tools to produce sexualized images of children. Earlier this year, Forbes reported that Wisconsin resident Steven Anderegg had been accused of using Stable Diffusion to produce CSAM from images of children solicited over Instagram. In July, the U.K.-based nonprofit Internet Watch Foundation (IWF) said it had detected over 3,500 AI CSAM images online this year.
Read the full story on Forbes.
WEEKLY DEMO
AI-generated reviews with five-star ratings are flooding mobile and smart TV app stores, according to media transparency company DoubleVerify, making it harder to decide which apps are worth downloading. Scammers are using AI tools to give high ratings to fraudulent apps that run ads endlessly, even when the phone is switched off, to earn revenue. But some telltale signs, such as unusual formatting and similar writing styles across different reviews, can help you spot fake app reviews.
AI INDEX
200 million
People use ChatGPT at least once a week, OpenAI said. That’s double the number it announced last November.
MODEL BEHAVIOR
An AI assistant called Lindy AI recently “rickrolled” a customer who asked for a video tutorial on setting up the assistant. In an email reply, the chatbot hallucinated and played the prank, instead directing the customer to the music video for Rick Astley’s 1987 song “Never Gonna Give You Up.”