Meta issued an apology Wednesday night after an “error” caused Instagram’s recommendation algorithm to flood users’ Reels feeds with disturbing and violent videos, some depicting fatal shootings and horrific accidents.
The issue affected a broad range of users, including minors.
The troubling content, which surfaced in users' feeds unprompted, featured graphic depictions of people being shot, run over by vehicles and suffering gruesome injuries.
While some videos carried “sensitive content” warnings, others were displayed with no restrictions.
A Wall Street Journal reporter’s Instagram account was inundated with back-to-back clips of people being shot, crushed by machinery and violently ejected from amusement park rides.
These videos originated from pages with names such as “BlackPeopleBeingHurt,” “ShockingTragedies” and “PeopleDyingHub” — accounts that the journalist did not follow.
Metrics on some of these posts suggested that Instagram's algorithm had dramatically boosted their visibility: view counts on certain videos exceeded those of the same accounts' other posts by millions of views.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended,” an Instagram spokesman said late Wednesday.
“We apologize for the mistake.”
Despite the apology, the company declined to specify the scale of the issue.
However, even after Meta claimed the problem had been resolved, a Wall Street Journal reporter continued to see videos depicting shootings and deadly accidents late into Wednesday.
These disturbing clips appeared alongside paid advertisements for law firms, massage studios, and the e-commerce platform Temu.
The incident comes as Meta continues to adjust its content moderation policies, particularly regarding automated detection of objectionable material.
In a statement issued on Jan. 7, Meta announced it would change how it enforces certain content rules, citing concerns that past moderation practices had led to unnecessary censorship.
As part of the shift, the company said it would adjust its automated systems to focus only on “illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams,” rather than scanning for all policy breaches.
For less serious violations, Meta indicated it would rely on users to report problematic content before taking action.
The company also acknowledged that its systems had been overly aggressive in demoting posts that “might” violate its standards and said it was in the process of eliminating most of those demotions.
Meta has also scaled back AI-driven content suppression for some categories, though the company did not confirm whether its violence and gore policies had changed as part of these adjustments.
According to the company’s transparency report, Meta removed more than 10 million pieces of violent and graphic content from Instagram between July and September last year.
Nearly 99% of that material was proactively flagged and removed by the company’s systems before being reported by users.
However, Wednesday’s incident left some users unsettled.
Grant Robinson, a 25-year-old who works in the supply-chain industry, was one of those affected.
“It’s hard to comprehend that this is what I’m being served,” Robinson told the Journal.
“I watched 10 people die today.”
Robinson noted that similar videos had appeared in the feeds of all his male friends, ages 22 to 27, none of whom typically engage with violent content on the platform.
Many have interpreted Meta's moderation changes as an effort by CEO Mark Zuckerberg to repair relations with President Trump, who has been a vocal critic of the company's moderation policies.
A company spokesperson confirmed on X that Zuckerberg visited the White House earlier this month “to discuss how Meta can help the administration defend and advance American tech leadership abroad.”
Meta’s shift in moderation strategy comes after significant staffing reductions.
During a series of tech layoffs in 2022 and 2023, the company cut approximately 21,000 jobs, nearly a quarter of its workforce, including positions on its civic integrity and trust and safety teams.