The new year is not yet two weeks old, and already the AI threat landscape is proving as dangerous as feared. Multiple cyber firms warned this would define 2025, and the FBI issued a specific warning on these fast-advancing AI threats, which include more sophisticated, personalized phishing and AI-tuned malware designed to bypass defenses. Now Microsoft has warned that attackers may be hijacking some of the most powerful AI tools available to power some of their campaigns.
On Friday, the tech giant confirmed it was “taking legal action to protect the public from abusive AI-generated content.” In a post from Steven Masada of its Digital Crimes Unit, the company warned it has discovered “a foreign-based threat actor” that scraped “exposed customer credentials” to gain access to “generative AI services and purposely alter the capabilities of those services.”
Microsoft says the foreign cybercriminals then exploited those AI services and even “resold access to other malicious actors… to generate harmful and illicit content.” The company has revoked all known access and put in place “countermeasures,” which it says include “enhanced safeguards to further block such malicious activity.”
The specific threat here came from powerful AI tools — including Microsoft’s own — deployed to power attacks against third-party organizations. But the wider context is much more critical. Just a week ago, the Financial Times reported on generative AI being used to create malicious phishing campaigns, with content and tone tailored to each target by scraping social media and other sources for the specific attributes of the person the messages purported to come from.
“Every day,” Microsoft says, “individuals leverage generative AI tools to enhance their creative expression and productivity. Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes.”
Last year, the company issued an advisory on “protecting the public from abusive AI-generated content,” warning that “AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation – especially to target kids and seniors.”
As McAfee warns, “as AI continues to mature and become increasingly accessible, cybercriminals are using it to create scams that are more convincing, personalized, and harder to detect… The risks to trust and safety online have never been greater.” Now we have further indications as to how some of that AI access is taking place.
Fasten your seatbelts, because 2025 is only going to get worse from here.