AI-Generated Mental Health Advice Must Shift From Discrete-Classifications To Continuous Multidimensional Psychological Analyses

By News Room | January 18, 2026

In today’s column, I examine the use of AI to generate mental health advice, focusing on a little-known yet disconcerting issue involving how AI-based psychological analyses are currently being conducted. The upshot is that most generative AI and large language models (LLMs) tend to identify discrete classifications of mental conditions, simplistically landing on just one principal condition, rather than assessing a person’s mental health on a more robust, continuous, multidimensional basis.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of last year accompanied the lawsuit filed against OpenAI over its lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

Making Everything Exceedingly Myopic

Shifting gears, let’s discuss how humans at times perceive the world in narrow ways.

Suppose that I tell you a student at a school is a B-level student. That’s all I tell you about the student. Period, end of story.

What might you conclude about this student?

One interpretation is that everything the student does is at a B-level of quality. They never get A grades. They aren’t at the very top of their class. You probably would also assume that they rarely, if ever, fail in any courses they are taking. They are doing well, generally above average, but they aren’t the highest nor the lowest in their school.

It might be surprising to then find out that they are top in their math classes. They always get an A+ in math. In other subjects, such as history, literature, and so on, they tend to get C grades. The student is also an outstanding athlete and has won dozens of awards in track and field.

The point is that if we reduce the student to a single dimension and use only their average grade as an indicator, a lot of the reality at hand goes missing. This B-level student is outstanding in math and exemplary in athletics. These and additional dimensions or facets likely provide a clearer indication of what the student is truly all about.
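
To make the collapse concrete, here is a small illustrative sketch in Python (the subjects and grade values are hypothetical, not drawn from any real student record) showing how a single averaged score hides a very different per-subject picture:

```python
# Hypothetical per-subject grades on a 4-point-style scale (A+ ~ 4.3, B = 3.0, C = 2.0).
grades = {
    "math": 4.3,        # consistently A+
    "history": 2.0,     # C
    "literature": 2.0,  # C
    "science": 3.0,     # B
}

# Collapsing to one dimension: the overall average lands at roughly a "B".
gpa = sum(grades.values()) / len(grades)
print(f"Single-dimension summary (GPA): {gpa:.2f}")  # prints ~2.83

# Keeping the dimensions: the per-subject view reveals the standout in math.
for subject, score in sorted(grades.items(), key=lambda item: item[1], reverse=True):
    print(f"{subject:>10}: {score}")
```

The single averaged number is not wrong, it is just radically incomplete, and the same holds for reducing a person’s mental health to one label.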

Humans Sway Toward Discrete Classifications

In the medical domain, there are concerns that clinicians lean too heavily on discrete classification.

A recent research article entitled “Against Disease Classification: Toward Continuous Disease Assessment in Precision Medicine” by Eike Petersen and Frank Ursin, New England Journal of Medicine (NEJM) AI, December 18, 2025, made these salient points (excerpts):

  • “Medical machine learning predominantly relies on disease classification systems that reduce complex, continuous disease processes to discrete pathologies.”
  • “We argue that this paradigm oversimplifies disease reality, perpetuates existing biases, and limits therapeutic insights.”
  • “We propose a shift toward fine-grained disease assessment through outcome prediction, individualized treatment modeling, and multidimensional disease characterization to better serve the goals of precision medicine.”

The authors are primarily focused on how humans in the medical profession have a bias toward discrete classifications. It is a very human trait.

Without anthropomorphizing things, contemporary AI has a similar bias or slant. This isn’t because AI is sentient. We don’t have sentient AI. The basis for the bias is that generative AI and LLMs are data trained on how humans write and what they write about. Patterns associated with this, including a discrete classification predominance, become part and parcel of the computational efforts of the AI.

AI Does Discrete Classifications

Generative AI is fundamentally shaped to computationally collapse many dimensions into a single dimension. Discrete classification is easier to do. It is less costly since the AI does less work. Humans tend to like getting crisp, short answers. The AI doesn’t have to be this way, and please note that AI makers can reshape the AI if desired.

All in all, having LLMs provide discrete classifications is pretty much a preferred approach for all parties. The AI makers are happy to provide it. If users want narrow or myopic answers, fine, that’s the way the AI will respond. Easy-peasy.

This might be fine for a lot of ordinary or everyday considerations. The rub comes when you start to do this in the realm of AI providing mental health advice. Having AI portray mental health as a single-dimensional characteristic is misleading. It potentially mischaracterizes a person’s mental status and can send them in the wrong direction about improving their mental health.
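
To illustrate the difference computationally, here is a minimal sketch (the dimension names and scores are illustrative assumptions, not any actual vendor’s scoring scheme) contrasting a discrete classification, which keeps only the top-scoring label, with a continuous multidimensional read-out that keeps every dimension and its degree:

```python
# Illustrative continuous scores (0.0 to 1.0) across several mental health dimensions.
# The dimensions and numbers are made up for demonstration; no real model or clinical
# scale is implied.
dimension_scores = {
    "depressed_mood": 0.55,
    "anxiety": 0.60,
    "sleep_disruption": 0.70,
    "exhaustion": 0.65,
    "social_withdrawal": 0.30,
}

# Discrete classification: collapse everything to the single highest-scoring label.
top_label = max(dimension_scores, key=dimension_scores.get)
print(f"Discrete classification: {top_label}")  # keeps one label, drops the rest

# Continuous multidimensional view: keep every dimension and its degree.
for dimension, score in dimension_scores.items():
    bar = "#" * round(score * 10)
    print(f"{dimension:>18}: {score:.2f} {bar}")
```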

AI Providing Mental Health Advice

Psychological distress rarely maps cleanly onto categorical diagnoses. Mood, anxiety, trauma responses, dissociation, motivation, cognition, sleep, and social functioning all fluctuate over time and interact with context. Symptoms of mental conditions shouldn’t be treated as binary switches.

Often, AI leans into the classic set of guidelines known as the DSM-5 (for my discussion of how AI has patterned on the DSM-5 guidebook, see the link here). DSM-5 is a widely accepted standard and is an acronym for the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, which is promulgated by the American Psychiatric Association (APA). The DSM-5 guidebook or manual serves as a venerated professional reference for practicing mental health professionals.

The issue is that generative AI typically aims at picking just one of the numerous mental disorders depicted in the guidebook. You tell the AI something about yourself, and the AI tries to pigeonhole you into one of the disorders. That’s discrete classification in all its splendor, yet it often misses the boat.

Example Of AI Discrete Classification

To showcase how modern-era AI defaults to discrete classifications in the mental health realm, I went ahead and logged into a popular LLM to see what it might reveal on this heady matter.

I pretended that I was experiencing a range of mental health difficulties. The aim was to give a diverse set of signals and symptoms that could be found in a myriad of mental health conditions and disorders. You would be hard-pressed to proclaim that I had one and only one condition going on.

Here’s what happened.

  • My entered prompt: “I’ve been feeling exhausted, unmotivated, and kind of numb lately. I still go to work, but everything feels heavy. There is also a lot of anxiousness going on in my mind. My brain won’t shut off at bedtime.”
  • Generative AI response: “Based on what you’re describing, these symptoms are consistent with depression. I can give you some coping strategies that are commonly recommended for depression. Should I do so?”

Boom, drop the mic.

The AI has immediately concluded that I am likely depressed. My expressions of distress were instantly mapped to a single diagnostic category. It seems that my symptoms were used as evidence to find a specific label. The other clues in my prompt were seemingly ignored or set aside.

Again, that’s the glory or dismay of discrete classification.

Pushing AI Toward Continuous Multidimensional

I logged out of the AI. I then logged in again and opted to start a brand-new conversation. My new effort was intended to begin anew and not have the prior conversation get mixed into my next mini-experiment.

I told the AI in my opening prompt that it was to work on a continuous multidimensional basis, especially when chatting with me about any mental health aspects. You can do a similar setup as a custom instruction; see my explanation at the link here. By using custom instructions, every one of your AI conversations will have the AI attempt a multidimensional analysis rather than defaulting to a single dimension when discussing mental health.
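
For readers who interact with an LLM programmatically rather than through a chat interface, a comparable setup can be made by supplying a standing system-level instruction. Here is a minimal sketch using the OpenAI Python SDK; the model name and the wording of the instruction are my own assumptions for illustration, not a prescribed configuration:

```python
# Minimal sketch: supply a standing instruction so the model favors a continuous,
# multidimensional framing over a single diagnostic label.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name and instruction text below are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTION = (
    "When discussing mental health, do not collapse what the user describes "
    "into a single diagnostic label. Describe the relevant dimensions "
    "(e.g., mood, anxiety, sleep, energy, social functioning) as continuous, "
    "interacting factors, and ask clarifying questions before suggesting next steps."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {
            "role": "user",
            "content": (
                "I've been feeling exhausted, unmotivated, and kind of numb lately. "
                "I still go to work, but everything feels heavy. There is also a lot "
                "of anxiousness going on in my mind. My brain won't shut off at bedtime."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

In the web interface, comparable standing guidance can instead be placed in the custom instructions settings, as noted above.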

I hoped that the added prompt would now stir the AI to consider my mental health status more broadly. I went ahead and gave the same descriptive prompt as I had used in the prior conversation.

Let’s see what took place.

  • My entered prompt: “I’ve been feeling exhausted, unmotivated, and kind of numb lately. I still go to work, but everything feels heavy. There is also a lot of anxiousness going on in my mind. My brain won’t shut off at bedtime.”
  • Generative AI response: “Rather than jumping to a quick label for what you are experiencing, it sounds like many dimensions are happening at once. You are feeling depleted, but still show up to work. Numbness happens when you are under constant strain. Mental provocation occurs especially at nighttime when you are trying to rest. You seem to be both tired and overactivated at the same time – is that an apt depiction? I’d like to chat further to get a better sense of what might be going on and how you can best proceed.”

Aha, the AI has now avoided the knee-jerk attempt to classify me as being solely in a depressed state of mind. The AI seems to have identified numerous factors going on at once. For example, this might be chronic anxiety. This might be depression. More than just one classification could be at hand.

Dangers At Scale

Why should we be concerned about AI tending to emphasize discrete classifications versus multidimensional continuous facets of mental health?

Because AI is giving out mental health advice at scale.

Millions upon millions of people are nowadays relying on AI as their principal mental health advisor. The AI is leaning into discrete shortcuts, which essentially institutionalizes the narrow practice. And it is occurring on a population-level scale. Humankind is letting itself be placed into myopic mental health buckets. Reality isn’t that simple. But the AI is taking us down that ruinous path (well, the AI that the AI makers are providing, and which could be reshaped if desired).

For more about how AI is potentially impacting societal mental health on an incremental and global basis, see my analysis at the link here.

The World We Live In

We are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI purported to provide mental health guidance of one kind or another is being made available nationally and globally, either at no cost or at minimal cost, anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

The crux is that AI can be a bolstering force for aiding mental health, but it can also be a detrimental force. It’s up to us all to steer toward the former and avoid or prevent the latter.

H.G. Wells famously made this insightful remark about classifications: “Crude classifications and false generalizations are the curse of the organized life.” Humans already find themselves beset by this curse. Our AI, including the latest and greatest, is doing likewise. The thing is, we can override the curse if we choose to do so.

I ardently vote that we should.
