Can you become addicted to generative AI?
Yes.
I realize that some might insist that there is zero chance of becoming addicted to generative AI. The viewpoint of such naysayers or doubters is that there is nothing about generative AI that could be addictive. Period, end of story.
On the other hand, keep in mind that there is already ample evidence supporting the idea of digital addictions, including being addicted to social media and perhaps the Internet in general, see my coverage at the link here. Given that, it seems a reasonable logical hop to believe that generative AI can also be addictive.
Boom, drop the mic.
I aim to show you the said-to-be receipts that generative AI can be and is at times addictive.
Here’s the path I’ll be covering. First, I will share with you a thumbnail sketch of the overarching nature and scope of addictions. Following that foundational stage setting, I’ll make sure you are handily up-to-speed about generative AI and large language models (LLMs). With that in hand, I’ll dig into how these two momentous modern-day topics intertwine.
Prepare yourself for a memorable and informative ride.
For my ongoing readers and new readers, this hearty discussion continues my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines the addictive use of generative AI.
Previously, I have examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was featured in the episode, see the link here).
Other vital postings in my column include in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
On with the show.
Essentials Of Addiction And As Applicable To AI
Nearly everyone would seem to be vaguely familiar with the notion of addictions and becoming addicted to one item or another. This is a common topic and a top-of-mind issue confronting modern-day society. You might know someone who has an addiction. Maybe you’ve been addicted and know first-hand what that’s like.
Just to make sure we are all on the same page, allow me a brief moment to establish a few fundamentals about addiction.
According to the online postings by the Cleveland Clinic, here’s how they define addiction (excerpts):
- “Addiction is a chronic (lifelong) condition that involves compulsive seeking and taking of a substance or performing of an activity despite negative or harmful consequences.”
- “Addiction can significantly impact your health, relationships, and overall quality of life. It’s crucial to seek help as soon as you develop signs of addiction.”
- “Is addiction a disease? Yes, addiction is a disease — it’s a chronic condition. The American Society of Addiction Medicine (ASAM) defines addiction as a chronic brain disorder. Addiction doesn’t happen from having a lack of willpower or as a result of making bad decisions. Your brain chemistry changes with addiction.”
- “There are two main groups of addiction: Substance addictions (substance use disorders). Non-substance addictions (behavioral addictions).”
I assume that those points make abundant sense to you.
I’d ask that you especially note the last point that there are two main groups or types of addiction. One is substance addictions, such as being addicted to drugs. The other group or type entails non-substance addictions. An addiction to social media and/or an addiction to the Internet would be considered a non-substance addiction.
Likewise, this is the case for generative AI addiction, namely, it is a non-substance addiction.
Addictions of all kinds are deeply troubling.
We can all readily acknowledge that addictions can be quite destructive to a person’s life. An addiction can adversely impact the person, and the spillover lands on their family, friends, co-workers, and even strangers. Being with or around someone with an addiction is agonizing and often entails constant worry and concern for their well-being and safety.
How can you discern if someone is possibly addicted?
Let’s see what the Cleveland Clinic posting has to say about potential symptoms or signs (excerpts):
- “Inability to stop: People may use a substance or engage in harmful addictive behavior even if they want to stop.”
- “Increased tolerance: Over time, they may need more of the substance or activity to feel the same euphoric effects as they did before.”
- “Intense focus on the substance or activity: People with addictions become pathologically preoccupied with the substance or activity.”
- “Lack of control: They may feel like they’ve lost complete control over their substance use or activity and often feel helpless.”
- “Personal problems and health issues: Addiction impacts all aspects of their lives, including their physical health, mental health, personal relationships and career.”
- “Withdrawal: People with addiction may experience emotional and physical withdrawal symptoms when they stop using.”
I submit to you that addiction to generative AI can be assessed using a similar set of characteristics.
Think of things this way:
- Addiction to generative AI: Is someone using generative AI to the degree that they seem unable to stop doing so, do they have an apparent lack of control over their use of generative AI, has the usage of generative AI led to personal problems and health issues, and so on?
If those criteria or characteristics match a person using generative AI, it seems feasible they might be addicted to generative AI.
You will see in a few moments that this is not a hard-and-fast rule. Be cautious about mindlessly trying to label someone as addicted to generative AI. Just because someone uses generative AI with great frequency does not mean they are addicted to it. Be wary of false positives, whereby you classify someone as a generative AI addict when they are nothing of the kind.
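To make the assessment criteria concrete, here is a purely illustrative sketch that tallies the signs discussed above. To be clear, this is not a clinical screening instrument; the sign names and the threshold are hypothetical choices made solely for this example, and several co-occurring signs should prompt a conversation with a professional, not a self-applied label.

```python
# Purely illustrative sketch: a rough tally of the addiction-style signs
# discussed above (inability to stop, lack of control, personal problems, etc.).
# NOT a clinical instrument; the sign names and threshold are hypothetical.

SIGNS = [
    "inability_to_stop",
    "increased_tolerance",
    "intense_focus",
    "lack_of_control",
    "personal_problems",
    "withdrawal_symptoms",
]

def tally_signs(observed: dict[str, bool]) -> int:
    """Count how many of the listed signs are reported as present."""
    return sum(1 for sign in SIGNS if observed.get(sign, False))

def suggests_follow_up(observed: dict[str, bool], threshold: int = 3) -> bool:
    """Hypothetical rule of thumb: several co-occurring signs merit
    talking with a professional, not a self-applied label."""
    return tally_signs(observed) >= threshold

# Example: two signs present, below the illustrative threshold.
report = {"intense_focus": True, "increased_tolerance": True}
print(tally_signs(report))          # 2
print(suggests_follow_up(report))   # False
```

The point of the sketch is simply that the criteria are about co-occurrence and impact, not about any single behavior in isolation.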
I’d like to add two additional quick comments.
Some detractors or smarmy people might refer to this person or that person as being addicted to generative AI. It is typically meant in jest or as a gibe. The thing is, doing so can be confusing to others and confounding to the individual. Your effort to be sharp-tongued can cause real harm, emotionally and in other ways. Please don’t do that.
I will also note that there should not be false negatives at play. By this, I mean that someone who demonstrably does have the symptoms of being addicted to generative AI should not be overlooked or shrugged off. They might proclaim they are not addicted to generative AI, and others around them might off-handedly agree. If the matter is serious, please take it seriously.
We will get further into these significant matters shortly.
Recent Research On Addiction To Generative AI
There is a growing interest in studying and analyzing the nature of addictions to generative AI.
A recent research study entitled “Examining Generative AI User Addiction From A C-A-C Perspective” by Tao Zhou and Chunlei Zhang, Technology in Society, July 2024, made these salient points (excerpts):
- “Similar to social media addiction, users may be addicted to generative AI, which reflects a psychological state that people develop excessive dependence and find it difficult to discontinue using generative AI systems.”
- “Addiction not only has negative impacts on individuals’ health and daily lives, but also poses challenges to the development and stability of society.”
- “Addicted users might invest a substantial amount of time and effort interacting with AI to gain satisfaction and pleasure in the virtual world. Consequently, they may neglect important real-world responsibilities and interpersonal relationships.”
- “Another critical concern is that addiction may lead users to blindly trust the misinformation output by generative AI and make wrong decisions. In addition, excessive reliance on generative AI can weaken individuals’ creativity and problem-solving abilities, which may lead to a loss of independent thinking and judgment capabilities.”
- “The results show that four features of generative AI, which include perceived anthropomorphism, perceived interactivity, perceived intelligence, and perceived personalization, affect flow experience and attachment, both of which further lead to addiction.”
If you aren’t familiar with generative AI, those above points might seem difficult to decipher. I’d like to ensure that you are up-to-speed on what generative AI and large language models (LLMs) are all about. This will allow me to dig into the above-stated research points.
I’m sure you’ve amply heard of generative AI, the darling of the tech field these days.
Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use; the new era of NLP fluency is of an at times startling or amazing caliber.
The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.
In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive pattern-matching across a vast volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then generates responses to posed questions by leveraging those identified patterns. The AI is said to be mimicking the writing of humans.
I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.
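As a toy illustration of the pattern-matching gist just described, consider a minuscule bigram model: it learns which word tends to follow which in some text, then generates new text from those learned patterns. Real LLMs are vastly larger and more sophisticated, but the core loop of “learn patterns from human writing, then generate from them” is the same.

```python
# Toy illustration of the pattern-matching idea described above: a tiny
# bigram model that records which word follows which, then generates text
# from those patterns. Real LLMs are enormously more complex, but the gist
# of "learn patterns from text, then generate from them" is the same.
from collections import defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Learn the patterns: for each word, record the words seen to follow it.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 6, seed: int = 0) -> str:
    """Generate text by repeatedly picking a word seen to follow the last one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits was observed to follow its predecessor somewhere in the training text, which is the bare-bones version of generating responses by leveraging identified patterns.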
Back to the crux of things.
Let’s ask the zillion-dollar question at hand.
Why would someone possibly become addicted to generative AI?
I dare suggest that if you tried using any of the major generative AI apps, you probably would right away sense why someone might become addicted to using them. They are easy to use. They are seemingly human-like in fluency. You can carry on a conversation endlessly. The AI won’t complain, it won’t insult you (unless you ask it to do so) and will interact as though the AI is your best-ever friend.
As per the research study points noted above, people can readily anthropomorphize AI. This means that they begin to think of generative AI as being human. The line between a machine and being a human begins to blur for them.
I’ve repeatedly warned that this tendency is amplified because of how AI makers go out of their way to design and portray AI, see my discussion at the link here. For example, most of the AI makers devise generative AI to respond to users by phrasings such as “I will help you” or “We can figure this out together” as though the AI is a human. I refer to this as anthropomorphizing by design. It is not occurring by happenstance.
Some falsely think that this is the only way generative AI can be set up. Nope, this is a choice by the AI makers. In a manner of speaking, the design to some degree can foster inclinations toward becoming addicted. I’ve predicted that we might very well see lawsuits against AI makers for how they designed their generative AI apps, legally arguing that the addiction was insidiously devised via intentional or purposeful machination. See my analysis at the link here.
In terms of stakeholders and key considerations underlying generative AI addiction, I usually lay out a list consisting of these vital factors:
- (1) Users of generative AI.
- (2) Those who are associated with users of generative AI.
- (3) AI makers that develop and field generative AI.
- (4) Systems vendors that embed generative AI into their products/services.
- (5) Societal and cultural milieu concerning generative AI usage.
- (6) Laws and regulations regarding generative AI.
- (7) Other
All those factors are crucial to whether someone might lean into becoming addicted to generative AI.
Furthermore, those factors are instrumental in the other activities and facets entailing generative AI addiction, such as:
- (a) Veering someone into generative AI addiction.
- (b) Detecting whether someone has a generative AI addiction.
- (c) Aiding in overcoming an addiction to generative AI.
- (d) Prolonging someone’s addiction to generative AI.
- Etc.
Using Generative AI To Discover More About Generative AI Addiction
I’ve only touched the surface so far on generative AI addiction.
One means of learning more about the topic consists of using generative AI.
Say what?
Yes, I just said that you can use generative AI to find out more about generative AI addiction. This certainly seems zany since we are leveraging the very aspect that is at the crux of the addiction being considered. Mind-blown.
I want to also emphasize that you should not rely solely on asking generative AI about generative AI addiction. That would be a big mistake. Generative AI can produce all manner of falsehoods, errors, and other troubling outputs and responses. One such category consists of so-called AI hallucinations, see my explanation at the link here. I disfavor that terminology because it once again anthropomorphizes AI, implying that AI has hallucinations akin to humans hallucinating.
I will next proceed to examine generative AI addiction via using generative AI.
This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
Let’s start by making sure that ChatGPT is up-to-speed about this topic.
- My entered prompt: “Is it possible to become addicted to generative AI?”
- ChatGPT generated response: “Yes, it’s possible to develop a form of dependency or compulsive behavior related to generative AI, especially if someone finds the technology particularly engaging or if it provides a strong sense of novelty, entertainment, or even companionship.”
You likely observe that ChatGPT is familiar with the generative AI addiction topic.
This is a good sign for this discussion. If ChatGPT had not previously encountered data training on a topic at hand, there would be less utility in using the AI. The AI would have to be further data trained, such as the use of Retrieval-Augmented Generation (RAG), as I discuss at the link here.
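For readers unfamiliar with the RAG idea just mentioned, here is a rough sketch of the concept: fetch the most relevant reference passage and prepend it to the user’s prompt so the AI can ground its answer in that material. The word-overlap scoring below is a deliberate simplification for illustration; production RAG pipelines typically use vector embeddings and a vector store, and the sample documents are made up for this example.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) idea: retrieve
# the most relevant reference passage and prepend it to the user's prompt.
# Word-overlap scoring is a deliberate simplification; real RAG pipelines
# typically use vector embeddings. The documents below are illustrative.

documents = [
    "Behavioral addictions are non-substance addictions such as gambling.",
    "Generative AI produces fluent text from user prompts.",
    "Withdrawal symptoms can include irritability and anxiety.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Combine the retrieved context with the user's question."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}"

prompt = build_prompt("What are withdrawal symptoms of addiction?")
print(prompt)
```

The augmented prompt gives the AI fresh reference material it was never data trained on, which is exactly the gap RAG is meant to fill.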
Let’s jump into the weighty matter.
- My entered prompt: “What are some of the key factors spurring addiction to generative AI?”
- ChatGPT generated response:
“Here is how this might happen:”
“Reward and Engagement: Generative AI can constantly produce new and interesting content, which can be very stimulating for the brain. The ability of AI to respond in a personalized manner can make interactions feel more meaningful, leading to repeated use.”
“Escapism: People might turn to AI to escape from stress, loneliness, or boredom. Over time, this can become a habit, where someone relies on AI interactions or creations to avoid real-life challenges or uncomfortable emotions.”
“Compulsive Use: The ease of access and the endless possibilities of generative AI might lead to compulsive use. This can include spending excessive time generating images, writing stories, or interacting with conversational AI. Like any engaging activity, users might find themselves losing track of time, prioritizing AI interactions over other responsibilities.”
“Dependence and Withdrawal: If someone begins to rely heavily on AI for emotional support or creativity, they might find it difficult to cope without it. If access to AI is suddenly restricted, a person might experience frustration, irritability, or anxiety, similar to withdrawal from other forms of technology or entertainment.”
End of ChatGPT response
Those factors make sense and comport with my earlier points.
A person might become enamored of generative AI. There is a semblance of escape from the everyday world. The usage can be so engaging that you become compulsive and obsessive about using generative AI. In addition, lamentably, there is a heightened chance of becoming dependent upon generative AI and withdrawing from interactions with fellow humans.
I’ve been hammering away so far on the side of becoming addicted, so let’s shift gears and figure out ways to overcome an addiction to generative AI.
Here’s what I asked.
- My entered prompt: “What can someone do to overcome an addiction to generative AI?”
- ChatGPT generated response:
“Overcoming an addiction to generative AI, like any form of compulsive behavior, requires a combination of self-awareness, discipline, and support. Here are steps that can help.”
“Self-Reflection: Recognize and admit that your use of generative AI is affecting your life negatively. This could be in terms of time management, emotional well-being, or neglect of responsibilities. Understand what triggers your compulsive use of AI. Is it stress, boredom, loneliness, or something else?”
“Define Limits: Set specific time limits for using generative AI. For example, no more than 30 minutes a day or only during certain times. Decide in advance what you want to achieve with AI (e.g., generating a specific image or writing a story) and stick to those goals without getting sidetracked.”
“Remove Temptations: If possible, uninstall apps or block websites related to generative AI on your devices during certain hours. Employ apps that track and limit your screen time or usage of specific software.”
“Engage in Offline Hobbies: Rediscover offline activities that you enjoy, such as reading, exercising, or spending time with friends and family.”
“Talk to Someone: Share your concerns with a trusted friend or family member who can offer support and encouragement. If your addiction is severe, consider seeking help from a therapist who specializes in behavioral addictions or technology-related issues.”
End of ChatGPT response
Most of those pieces of advice are aligned with other forms of digital addictions.
First, you need to become aware that you are or might be addicted. Awareness is key. Second, you should begin to find ways to undercut the addiction. Define limits of usage. Stick to those limits. Remove temptations that drive you to use generative AI. Seek out other outlets for your time and attention.
Perhaps most importantly, seek out help.
Some therapists are now including generative AI addiction in their practices. It is admittedly a rarity right now. That being said, there are many hundreds of millions of generative AI users, and even if only a small fraction of users are or will become addicted, the need for mental health therapy to deal with the addiction is likely to emerge and grow.
Generative AI As Mental Health Advisor In This Instance
I have a comment that might be unnerving, so prepare yourself accordingly.
Let’s fight fire with fire, doing so by using generative AI to aid people who are overcome with a generative AI addiction.
It goes like this. Per my extensive coverage of using generative AI for mental health, see the link here, one twist on the addiction to generative AI would be to use generative AI to aid in overcoming your addiction to generative AI. In essence, use generative AI as your mental health advisor, which many are already doing (I’ve forewarned and alerted that this is a grand experiment worldwide of using AI as an always-on mental health advisor, though without any kinds of controls or human-in-the-loop involvement), and in this case, focus on generative AI addiction.
Yikes, that seems like something from the movie Inception. It is one thing inside another thing. The conception makes your head spin as you think about it.
Does it make sense to try and stop your use of generative AI by actively seeking advice from generative AI as you do so?
At first glance, the answer would be heck no. The idea of using the very item that is the core of your addiction to fight the addiction seems to defy common sense. I empathize with that sentiment. But maybe this requires a bit more consideration.
In a broadened viewpoint, you could indeed say that it can make sense. Here’s how. If generative AI is alerted or directed to aid you in overcoming your generative AI addiction, the AI could keep track of your usage and warn you when it seems you are going overboard. There is also the possibility of instructing the generative AI to be less inviting. The generative AI could be ordered to be brisk and abrupt. All kinds of settings can be adjusted to make generative AI less alluring, more proactive about being selective and judicious with its usage, and seek to steer someone away from being addicted to generative AI.
The problem can be an aid in the cure, as it were.
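One way to picture the usage-tracking idea just described is a minimal sketch of a wrapper that tallies session minutes per day and warns when a self-imposed daily limit is exceeded. To be clear, the class, the limit, and the warning message are hypothetical choices made for this illustration, not features of any actual generative AI product.

```python
# Hypothetical sketch of the usage-tracking idea described above: tally
# session minutes per day and warn once a self-imposed daily limit is
# exceeded. The class, limit, and message are illustrative choices, not
# features of any actual generative AI product.
from collections import defaultdict

class UsageGuard:
    def __init__(self, daily_limit_minutes: int = 30):
        self.daily_limit = daily_limit_minutes
        self.minutes_by_day = defaultdict(int)

    def log_session(self, day: str, minutes: int):
        """Record a session; return a warning string once the limit is passed."""
        self.minutes_by_day[day] += minutes
        if self.minutes_by_day[day] > self.daily_limit:
            return (f"Heads up: {self.minutes_by_day[day]} minutes today "
                    f"exceeds your {self.daily_limit}-minute limit.")
        return None

guard = UsageGuard(daily_limit_minutes=30)
print(guard.log_session("2024-08-01", 20))  # None (under the limit)
print(guard.log_session("2024-08-01", 15))  # warning string (35 > 30)
```

The same mechanism could, in principle, be extended to the other adjustments mentioned, such as making responses brisker once usage runs long.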
We aren’t done there yet, not by a mile.
A twist on this twist is that a person might simply switch to some other generative AI app. You see, if they don’t like what one AI is saying or doing, they could seek out a more accommodating generative AI. That’s what addicts sometimes do, in general.
Consider even more variations on all of this.
For example, some pundits would argue that you ought to go cold turkey when it comes to a generative AI addiction. Just stop using generative AI. There, that solves things. No need to get clever or play tricks. Just quit.
Well, yes and no, in the sense that there are tradeoffs involved in doing so. Addiction has a lot of complexities and the idea of just summarily dropping the addiction overnight is not necessarily the most suitable ploy. Sometimes, a more gradual approach turns out to be a more sustainable means of overcoming an addiction.
Do Not Cry Wolf When No Wolf Is Around
I mentioned earlier that it is important not to start summarily pronouncing people as being addicted to generative AI. Please resist that temptation.
How might you gauge the difference between someone who does seem to be addicted to generative AI and someone who does not?
Let’s see what ChatGPT has to say.
- My entered prompt: “What is the difference between someone who uses generative AI extensively and finds doing so very useful and informative, versus someone who is said to be addicted to generative AI?”
- ChatGPT generated response: “The difference between someone who uses generative AI extensively and finds it useful and informative versus someone who is addicted to it lies primarily in the intent, impact on their life, and level of control over their usage. Extensive users leverage generative AI as a valuable tool, maintaining control and balance in their use, which positively contributes to their goals and lives. Addicted users are driven by compulsive behavior, emotional dependence, and a lack of control, leading to negative consequences and disruption in their life.”
I bring this up again to try and dissuade those who might begin crying wolf, even though no wolf is around.
More research is coming along on how to detect addictions to generative AI, and I will be regularly covering those latest findings. Stay tuned.
Conclusion
There aren’t any solid counts yet of how many people might be addicted to generative AI. The addiction in this realm is still quite new. Eventually, and probably soon, there will be studies that have carefully examined the propensity, and we might end up with tangible and reliable numbers.
When I give talks and presentations on this rising matter, some people are shocked to learn that there is such a thing as being addicted to generative AI. Others, taking a pragmatic perspective, seem unsurprised. They shake their heads and remark that we seem to be able to get addicted to just about anything in our modern world, so why not add generative AI to the list?
I’ll close this discussion with some handy quotes.
Nathaniel Parker Willis, famed American writer and poet in the 1800s, said this: “Temptation hath a music for all ears.”
My interpretation of that keen insight, within the context of generative AI addiction, is that we need to recognize and accept that generative AI inherently provides temptations that can lead to addiction. It is a temptation machine. One of the notable reasons it is so incredibly tempting is that it was made to be that way, as I noted earlier.
A last quote to end this, for now, is one of my favorites, stated by Mae West: “I generally avoid temptation unless I can’t resist it.” I suppose we might need to think long and hard about whether everyone should be using generative AI. If some have a proclivity to becoming addicted to generative AI, the logical presumption would be to not let them get started at the get-go. Do not use generative AI.
Logical, sensible, but not practical, it would seem.
In an era when generative AI is rapidly becoming ubiquitous, that piece of advice doesn’t seem workable. Maybe we can rejigger generative AI to accommodate the possibility of becoming addicted. Generative AI is going to be inescapable. In that case, let’s be smart about trying to prevent addictions from readily occurring.
Please stay safe and be mindful of the use of generative AI, for yourself and others you know. Thanks, and good luck.