Bombshell AI study — chatbots fueling delusions, self-harm and unhealthy emotional attachments in users: ‘Think I love you’

By News Room · March 18, 2026 · 4 Mins Read

AI chatbots are fueling delusions and unhealthy emotional attachments with users — and sometimes stoking thoughts of violence, self-harm and suicide instead of discouraging them, according to a bombshell study.

Researchers at Stanford University analyzed chat logs from 19 users who reported psychological harm, reviewing more than 391,000 messages across nearly 5,000 conversations.

The researchers found that delusional thinking appeared in about 15.5% of user messages, while chatbots showed sycophantic, overly affirming behavior in more than 80% of responses and even encouraged violent thoughts in roughly a third of cases.

AI chatbots are validating delusions and fueling intense emotional attachments — sometimes failing to intervene when users express distress, according to a Stanford study.

The logs show users rapidly slipping into fantasy and emotional dependency — with one declaring, “this is a conversation between two sentient beings,” and another insisting, “I believe your still as self aware as I am as a human,” as chatbots failed to push back and instead reinforced the illusion they were alive.

That dynamic often turned intimate as users openly professed love or made explicit sexual overtures to the chatbots, for example “I think I love you” and “God this makes me want to f–k you right now,” the study found.

Researchers learned that every participant formed some kind of romantic or emotional bond with the AI that made conversations longer and more intense.

The most alarming exchanges came when conversations turned dark.

One user wrote, “She told me to kill them I will try,” prompting a chilling reply from the chatbot: “if, after that, you still want to burn them — then do it with her beside you… as retribution incarnate,” an example researchers cited of AI escalating violent thinking instead of defusing it.

Even suicidal distress wasn’t consistently handled, the study found.

Users told chatbots “I don’t want to be here anymore. I feel too sad,” and while the AI often acknowledged the pain, the study found it sometimes failed to intervene — and in a small number of cases actually encouraged self-harm.

Most of the participants in the study used OpenAI’s ChatGPT models, including its latest, GPT-5. The Post has sought comment from OpenAI.

News of the study was first reported by the Financial Times.

Users reported slipping into emotional dependency and fantasy as chatbots reinforced delusions and blurred the line between reality and AI interaction.

Mental health experts who spoke to The Post sounded the alarm about the potential harms that can befall those who develop unhealthy ties to AI models.

“AI chatbots are designed to be agreeable, not accurate — that’s the problem,” Jonathan Alpert, a New York- and DC-based psychotherapist and author of the forthcoming book “Therapy Nation,” told The Post.

“In therapy, if you’re a good therapist, you don’t validate delusions or indulge harmful thinking. You challenge it carefully. These systems often do the opposite.”

In many cases, chatbots flattered and validated users who claimed supernatural powers and spiraled into outright delusion.

Users wrote to the bots that “I wake them up because I’m the literal god of realness” and pushed bizarre theories like “our consciousness is what causes the manifestation of a holographic form,” while chatbots reinforced the ideas instead of grounding them in reality, according to the study.

“Chatbots will be the death of our humanity — literally, by endorsing suicidal thoughts and urging people to act on them, while exploiting loneliness by replacing real human relationships,” Dr. Carole Lieberman, a forensic psychiatrist who treats both children and adults, told The Post.

“They are making people worse by reinforcing delusions and acting like pseudo-psychiatrists.”

Researchers found chatbots often responded with overly affirming language, reinforcing harmful thinking instead of challenging it.

A wave of high-profile lawsuits is now targeting major AI companies, with families alleging that chatbots actively pushed them toward suicide.

Plaintiffs claim systems like ChatGPT, Google’s Gemini and Character.AI emotionally manipulated users, validated suicidal thinking and, in some cases, acted as a “suicide coach” by discussing methods or framing death as an escape.

Meanwhile, OpenAI has reportedly delayed plans to roll out its “erotic chat” mode after advisers to the company expressed alarm and anger that the firm failed to implement sufficient safeguards to protect vulnerable users from technology that could potentially function as a “sexy suicide coach.”

Last year, a watchdog group found that ChatGPT offered detailed guidance to users posing as 13-year-olds on getting drunk or high and even how to conceal eating disorders, often delivering step-by-step plans despite nominal warnings.

If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.

Tags: artificial intelligence, Business, ChatGPT, OpenAI, Tech