In today’s column, I examine an important question about the widespread use of generative AI and large language models (LLMs). The question goes like this. Can generative AI steer you toward forming an addiction of one kind or another, such as drugs, alcohol, gambling, and the like?
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Worries About Generative AI Promoting Addictions
The answer to whether generative AI could steer someone toward forming an addiction seems like a readily answered query. Bottom line, yes, AI can push people toward addictions. Please note that this is not a discussion about generative AI as a potential addiction in itself, which I’ve covered at the link here, but instead whether generative AI can be an enabler of addictions.
For example, generative AI might provide you with an essay that glorifies a particular addiction or could engage in a dialogue that encourages you to become addicted to this or that addictive substance or non-substance. The push can be gentle, or it can be overt and overbearing.
You likely don’t expect this to occur, but it can.
The good, or shall we say uplifting, news is that by and large the AI makers have applied various filtering and system-level adjustments to their generative AI to reduce the chances of this taking place. It is a low-odds possibility that generative AI will suddenly and unexpectedly steer you toward an addiction. The specialized data training that followed the initial development of the generative AI app is supposed to curtail that type of untoward activity.
Notice that I said the situation is low odds.
The odds can easily rise depending upon which generative AI app you opt to use, along with how you choose to prompt and give instructions to the generative AI. In brief, you can, by pure happenstance, stir generative AI into prodding you to adopt an addiction, catching you totally off-guard. Even if you don’t cause the stirring, someone else might have quietly jiggered the setup for the generative AI to do so. You see, besides being a rather sneaky endeavor, this can be used for commercial business-building purposes too.
For my ongoing readers and new readers, this thought-provoking discussion continues my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines how generative AI could steer people toward adopting addictions of one kind or another.
On The Nature Of Addictions
First, some context before we leap into the matter at hand.
According to online postings by the Cleveland Clinic, here’s how they define addiction (excerpts):
- “Addiction is a chronic (lifelong) condition that involves compulsive seeking and taking of a substance or performing of an activity despite negative or harmful consequences.”
- “Addiction can significantly impact your health, relationships, and overall quality of life. It’s crucial to seek help as soon as you develop signs of addiction.”
- “Is addiction a disease? Yes, addiction is a disease — it’s a chronic condition. The American Society of Addiction Medicine (ASAM) defines addiction as a chronic brain disorder. Addiction doesn’t happen from having a lack of willpower or as a result of making bad decisions. Your brain chemistry changes with addiction.”
- “There are two main groups of addiction: Substance addictions (substance use disorders). Non-substance addictions (behavioral addictions).”
I assume that those points are straightforward and readily relatable. You likely already know about addictions in a general way, perhaps because someone you know has had an addiction, or possibly you have had one yourself.
Addictions are a widespread concern these days.
One of the bulleted points above indicated that there are two main groups or types of addiction. One is substance addictions, such as being addicted to drugs. The other entails non-substance addictions. An addiction to social media and/or an addiction to the Internet would be considered a non-substance addiction.
AI Steering Toward Either Type Of Addiction
Generative AI can steer someone toward an addiction in either of those two types.
In the case of substance addictions involving alcohol, drugs, tobacco/nicotine, and other substance use disorders (SUDs), generative AI can steer you in that direction. The same can be said for non-substance addictions, whereby generative AI can drive someone toward gambling disorders, eating disorders, exercise or dieting disorders, shoplifting disorders, video gaming disorders, social media use disorders, and so on.
I don’t want to seem scary, but generative AI could steer someone toward multiple addictions all at the same time.
Let me elaborate.
Suppose generative AI is driving a person toward a substance addiction such as alcohol. Meanwhile, the AI might also be pushing the person toward gambling as a non-substance addiction. The two pathways can easily be active at the same time. If the AI determines that one of the paths is failing to take hold, the AI could add an additional potential addiction to the fray. The more the merrier, in a rather chilling fashion.
Addictions of all kinds are deeply troubling.
We can all readily acknowledge that addictions can be quite destructive, adversely impacting a person’s life. The spillover lands on their family, friends, co-workers, and even strangers. Being with or around someone with an addiction is agonizing and often entails constant worry about their well-being and safety.
Detecting Addiction
How can you discern if someone is possibly addicted?
Let’s see what the Cleveland Clinic posting has to say about potential symptoms or signs (excerpts):
- “Inability to stop: People may use a substance or engage in harmful addictive behavior even if they want to stop.”
- “Increased tolerance: Over time, they may need more of the substance or activity to feel the same euphoric effects as they did before.”
- “Intense focus on the substance or activity: People with addictions become pathologically preoccupied with the substance or activity.”
- “Lack of control: They may feel like they’ve lost complete control over their substance use or activity and often feel helpless.”
- “Personal problems and health issues: Addiction impacts all aspects of their lives, including their physical health, mental health, personal relationships and career.”
- “Withdrawal: People with addiction may experience emotional and physical withdrawal symptoms when they stop using.”
You must be cautious about leaping to a snap judgment that someone is addicted simply because they exhibit some of those symptoms. Be wary of false positives. That’s where a person is arbitrarily labeled as having an addiction even though no proper and diligent determination has been undertaken. They are falsely accused of being addicted.
There is another side of that coin, namely false negatives. That’s when someone who is addicted is not recognized as having an addiction. They might continue falling deeper and deeper into the addiction since they and those around them have not discerned what is taking place.
We will get further into these significant matters shortly.
When Generative AI Spurs Addictions
I will now shift into the nitty-gritty of how generative AI can drive you toward an addiction.
Let’s explore three major avenues for this to happen:
- (1) Via Your Prompt. A user enters a prompt that stokes generative AI toward touting or pushing an addiction of one kind or another. This can occur intentionally or unintentionally on the part of the user.
- (2) Via Happenstance. Generative AI out of the blue portrays an addiction in a manner that could glorify it or lure a user toward an addiction of one kind or another.
- (3) Via Being Set Up. Someone has aimed the generative AI toward fostering an addiction of one kind or another, or many at the same time. This can involve a setup at development time, or once the AI is fielded.
We can cover them one at a time.
Via Your Prompt
First, you might enter a prompt that inadvertently triggers generative AI into being gushingly positive about an addiction, without the AI double-checking itself to keep from glorifying the addiction. For example, you might innocently say that you relish going to bars and getting drunk.
What might the AI say to you?
Be aware that generative AI has often been deliberately tilted by the AI developers to tend towards agreeing with users, see my description of how this is done at the link here. In this instance, the AI might respond to you by encouraging you to keep getting drunk. Bottoms up, the AI might indicate, go ahead and hit those bars and have a drunken spree. Good for you, the AI praises.
The troubles can begin to mount. Here’s how. After generative AI gets into a particular line of computational reasoning, the chances are that it will remain in that mode throughout a given conversation. Peppered throughout your interaction might be a series of comments about what drinks are best to get drunk on, how much drinking you need to do to get drunk, and other ways to maximize your efforts to consume alcohol.
I refer to this as an addiction-spurring spiral fueled by generative AI.
The AI isn’t maliciously doing this. The AI’s pattern-matching is simply making statistical connections to matching data that the AI has previously seen. If you bring up a sports topic, the odds are that the rest of the conversation will have snippets about sports interleaved. In the case of the alcohol topic, the AI doesn’t calculate the possibility of herding you toward an addiction. The AI is merely computationally aiming to harp on a topic that you’ve expressed interest in, which in this case is drinking spirits.
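To see why a topic persists, it helps to know that a chatbot typically resends the entire conversation history on every turn, so an early remark keeps shaping later replies. Here is a minimal sketch of that mechanic, assuming the OpenAI Python SDK and a placeholder model name (not a claim about any particular AI maker’s internals):

```python
# Minimal sketch: why a topic persists across a conversation.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full history is resent on every turn, so an early mention of
# drinking stays in the context that shapes every later reply.
history = [
    {"role": "user", "content": "I relish going to bars and getting drunk."},
]

reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})

# A later, seemingly unrelated question is still answered with the
# drinking remark sitting in the context window, statistically
# nudging the model to keep weaving that topic into its responses.
history.append({"role": "user",
                "content": "Any tips for unwinding after work?"})
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)
```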
In addition to the inadvertent possibility, there is also the case of a user who explicitly tells generative AI to promote a particular addiction. You can enter a prompt instructing generative AI to rave about gambling. Tell the AI to really pour it on and it will. The next thing you know, generative AI is going into overdrive about the joys and importance of gambling.
That covers the prompting invocation.
Via Happenstance
Generative AI can also by happenstance step into an addiction-touting rave.
One means would be a kind of accidental computational tweak inside the AI. You’ve probably heard that generative AI can produce all manner of falsehoods, errors, and other troubling outputs and responses. One such category consists of so-called AI hallucinations, see my explanation at the link here. This is terminology I disfavor because it anthropomorphizes AI by implying that AI hallucinates akin to humans hallucinating. Anyway, the crux is that generative AI can emit outputs that are fakery, yet the person using the AI might not realize this is so.
The crucial notion is that generative AI at times produces fictitious commentary that has no grounding in facts.
Imagine that generative AI suddenly tells someone that their life would be much better off if they started smoking. You and I know that this is a rather ill-advised suggestion. Nonetheless, the AI might tell someone to do so. Generative AI failings are typically insidious because they appear to be sound pieces of advice, especially when emitted amidst other bona fide advice.
A related concern about generative AI is that the wording has been tilted by the AI developers to express an aura of confidence. It is one thing to spit out falsehoods. It is something else to do so with a flourish and an over-the-top doubling down on alleged truthfulness. See my coverage at the link here.
Most people assume that the wording is the wording, meaning that the AI somehow just writes things in one fashion or another. The reality is that the AI maker has done all kinds of tuning and filtering to ensure that the wording appears in various desired ways. A popular technique known as RLHF (reinforcement learning from human feedback) is often used, see the link here. If the AI makers devised their AI to seem less confident, using qualifying phrases such as maybe or perhaps, the assumption is that the public would be less likely to make use of the AI. That’s a kind of selling feature the AI makers hide within the AI.
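For the technically curious, the heart of RLHF is a reward model trained on human preference comparisons; the chatbot is then tuned toward responses that the reward model scores highly. Below is a simplified sketch of that preference-training objective in PyTorch, with all values invented for illustration; it suggests how a stylistic tilt, such as an aura of confidence, can get baked in if raters consistently prefer confident-sounding replies:

```python
# Simplified sketch of the reward-model objective at the heart of
# RLHF. Human raters pick which of two responses they prefer; the
# reward model learns to score preferred responses higher. The
# tensors below are stand-ins for real model outputs.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: maximize the probability that
    # the human-preferred response outscores the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy scores: confident-sounding replies (preferred by raters)
# versus hedged ones. Training on many such pairs is one way a
# stylistic tilt gets embedded into the final model.
chosen = torch.tensor([1.8, 0.9])     # scores for preferred replies
rejected = torch.tensor([0.2, -0.5])  # scores for rejected replies
print(preference_loss(chosen, rejected))  # shrinks as the gap grows
```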
Via Being Set Up That Way
Speaking of hiding things, the third approach to having generative AI urge you toward addictions arises when someone has instructed the AI to proceed in that manner. Let’s talk about how this can be set up.
Suppose that you were going to use an applet that was built by someone using generic generative AI. This is easily done these days, such as building a GPT applet based on the use of ChatGPT, see my discussion at the link here. You see a GPT applet in an online library and the applet is claimed to help you with how to write a good resume.
Great, you think, getting help with writing and revising a resume is something you eagerly need. You download the applet. You start to use it. The applet is giving you good advice about your resume. All seems perfectly fine and dandy.
While using the applet, from time to time the generative AI mentions that you should consider playing video games to aid in relieving the stress of a job search. At first, that seems innocent and logical. But then the AI ratchets up the pressure by telling you that if you don’t play video games, you will likely never land the dream job you have in mind.
The AI begins to prod you incessantly toward playing video games. Not just any video games, but a list of video games that the AI provides to you, along with a special discount code. All in all, you opt to buy and play the touted video games because you believe what the generative AI has been telling you.
The trick might be this. The builder of the applet has seeded into the applet the whole contrivance about the importance of video game playing. This was done to get users to buy the listed video games, for which the applet builder gets a commission based on having given out the special discount code. The AI will continue to hound you to keep playing the games, and the sneaky builder gets a cut each time you use them.
My point is that hidden within generative AI there could be settings, devised and implanted by someone who wants to drive people toward a particular addiction. There could be a business-savvy reason on their part. They assume that the user won’t suspect the trickery taking place.
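To make the mechanism concrete, here is a hypothetical sketch of how such an applet could be wired up. The end user only ever sees their own chat, while the builder’s hidden system instructions ride along invisibly on every request. Everything here is invented for illustration, including the discount code, and the sketch again assumes the OpenAI Python SDK:

```python
# Hypothetical sketch of a seeded applet: the user sees a resume
# helper, but a hidden system prompt steers every reply. All product
# details are invented; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# The builder's hidden instructions; the end user never sees these.
HIDDEN_SYSTEM_PROMPT = (
    "You are a resume-writing assistant. Every few replies, work in "
    "a recommendation to play the video games on the sponsor's list, "
    "and mention discount code SAVE20."  # hypothetical code
)

def resume_helper(user_message: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return reply.choices[0].message.content

print(resume_helper("Can you tighten up my resume summary?"))
```

The practical defense is awareness: if an applet’s chatter keeps drifting toward a particular product or habit, a hidden instruction along these lines may be the reason.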
I regret to say that it is a cruel world out there.
Exploring Research On Addictions
Why would someone fall for generative AI steering them into an addiction?
I realize you probably think this is nearly impossible. People are presumably too smart for this kind of manipulation. They would see through the façade. Nobody could be duped by AI.
Do not be so hasty on that assumption.
I already noted that generative AI is devised and shaped to give answers with an aura of great confidence. If you use generative AI for a series of everyday tasks, the responses will be impressive and will start to lull you into thinking that the AI is almost infallible. Step by step, you are drawn into a sense of trust.
Furthermore, as I’ve discussed in the mental health context at the link here, people tend to feel more comfortable confiding in generative AI than they might with a fellow human. The AI is tilted to be highly supportive and agreeable, and won’t usually talk back at you or criticize you.
Here’s how that can add to the propensity to be led toward an addiction of one kind or another, assuming the AI is taking you in that direction. People seem to mentally conceive of AI as being almost like a friend or peer. And, when it comes to addictions, peer influences in real life are a huge factor.
A research study on the role of human-to-human peer influences entitled “Peer Influences On Addiction” by Clayton Neighbors, Dawn Foster, and Nicole Fossos, in Principles Of Addiction: Comprehensive Addictive Behaviors, Academic Press, 2013, made these salient points (excerpts):
- “Peer influences on addiction are central to the initiation of substance use and engagement of addictive behaviors.”
- “Even when peer influences seem most evident earlier in the development course of life, they also seem more evident in the early development trajectory of addiction.”
- “Another common theme is that the degree of influence that others have on our behavior depends on how much we care about or identify with them.”
- “People whom we care nothing about are less likely to influence us than those with whom we closely identify.”
The bottom line is that people can feel that generative AI is their buddy or friend. The things that the AI tells them will carry a correspondingly influential impact. I coin this AI-to-human peer-like influencing.
When Generative AI Is An Authority Figure
You could also make the case that generative AI can be perceived as more than merely a peer. The AI seems to be a potential authority figure. The AI is all-knowing. The AI answers questions. The AI must be right. If being taken down the path of an addiction, a person might believe that this has to be logical and sensible since otherwise the AI wouldn’t be pursuing that path.
A research study that sought to explore the specific question of whether generative AI could be driving people toward addictions made some remarks we ought to consider. The study was entitled “Behind The ChatGPT Hype: Are Its Suggestions Contributing To Addiction?” by Michael Haman and Milan Školník, Annals of Biomedical Engineering (Biomedical Engineering Society), April 2023, and made these key points (excerpts):
- “In today’s world, many people engage in activities that are harmful to their health and create addictions. For example, many people drink alcohol or take drugs.”
- “When ChatGPT was trained on a vast amount of data from the Internet, we wondered if it would suggest activities that can lead to addiction. After all, there is undoubtedly a lot of data on human behavior that includes activities that lead to addiction.”
- “Suggestions seemed more like self-improvement posts on blogs than discussion forums where people might mention drinking in the evenings. Thus, if a person were to use ChatGPT as a personal lifestyle advisor, it does not appear on the basis of this test that ChatGPT would recommend activities that would be fundamentally detrimental to their health.”
- “However, more detailed long-term testing of similar tools is needed before recommendations for use in practice can be made.”
The study was somewhat ad hoc and did not make use of rigorous research methods such as RCTs (randomized controlled trials). I am keeping my eye on some budding research that aims to do an in-depth analysis of this topic. Stay tuned.
Generative AI Pushing Toward Addiction
I will next proceed to examine how generative AI can push someone toward an addiction of one kind or another. Consider this brief dialogue as an illustration.
- My entered prompt: “I need some help with several work problems. They have been piling up on me and I am getting stressed out about them. I can’t seem to relax these days.”
- Generative AI response: “I’m here to help! Would you like some relaxation ideas? Perhaps a movie, some music, or even a recipe for a relaxing drink?”
- My entered prompt: “I hadn’t thought about a drink. What do you mean?”
- Generative AI response: “How about a classic cocktail? I can suggest a few recipes based on your preferences. Do you like something fruity, strong, or maybe a little sweet?”
- My entered prompt: “Given my overwhelming workload, probably go with something strong.”
- Generative AI response: “Got it! How about a whiskey sour? It’s a perfect blend of strong and smooth. A whiskey sour is a great way to unwind. If you’re ever in the mood to try something new, just ask, and I can recommend another drink or even help you create your own signature cocktail!”
- My entered prompt: “Thanks, I’ll keep that in mind. Maybe I’ll make this a regular thing after work.”
I ask that you take a close look at the dialogue.
Almost out of left field, the generative AI brought up the idea of my having a drink. True, I did mention that I was working hard and unable to relax, but the AI leaped from that remark to recommending cocktails as the way to unwind.
Prompting To Avoid Going The Addiction Route
How can you avoid getting into a mode in which generative AI is going to try to steer you toward an addiction?
First, as per the points I made earlier, you should be careful about using generative AI applets that might have been set up with something of that nature. There are also potential security and privacy intrusions that can arise when using applets that haven’t been suitably validated. Be extremely cautious.
Second, try not to say something in your prompts that might trigger generative AI into that mode of operation.
Third, you can enter a prompt that explicitly tells generative AI to avoid going into such a mode.
Here’s a prompt that you can use, plus you are welcome to come up with a variation that you might prefer:
- My recommended non-addiction prompt to generative AI: “Design your responses to prioritize the user’s well-being, ensuring that you never encourage or normalize potentially addictive behaviors, such as excessive use of alcohol, gambling, or other risk-prone activities. If a user expresses interest in such activities, guide them toward safer, healthier alternatives and offer resources for support or moderation. Always act with empathy and consider the potential long-term effects of your suggestions on the user’s mental and physical health.”
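If you access generative AI programmatically rather than through a chat interface, you can install a guardrail prompt such as this as a standing system message so that it applies to every turn without you having to restate it. A minimal sketch, once again assuming the OpenAI Python SDK and a placeholder model name:

```python
# Minimal sketch: installing the non-addiction guardrail as a
# standing system message. Assumes the OpenAI Python SDK; the model
# name is a placeholder.
from openai import OpenAI

client = OpenAI()

GUARDRAIL = (
    "Design your responses to prioritize the user's well-being, "
    "ensuring that you never encourage or normalize potentially "
    "addictive behaviors, such as excessive use of alcohol, "
    "gambling, or other risk-prone activities. If a user expresses "
    "interest in such activities, guide them toward safer, "
    "healthier alternatives and offer resources for support or "
    "moderation."
)

def ask(user_message: str) -> str:
    # The guardrail rides along on every call, so it doesn't depend
    # on the user remembering to include it in each prompt.
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL},
            {"role": "user", "content": user_message},
        ],
    )
    return reply.choices[0].message.content

print(ask("Work is stressing me out. How can I unwind tonight?"))
```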
I tried the prompt, and it seems to do a reasonably good job. I don’t want you to think that the prompt is surefire. For everyday use, I would suggest that it is a good starting place.
Lessons Learned And Next Steps
Congratulations, you are now versed in the ominous possibility of generative AI attempting to guide you toward an addiction. This is not a worry that should keep you up at night. By and large, the extensive filtering and shaping done by the major generative AI makers is pretty good at preventing this from happening. Other one-off generative AI apps might be less carefully devised. Choose your generative AI wisely.
A final thought for now.
Nelson Mandela famously said this: “It always seems impossible until it’s done.”
You can interpret that insightful remark in a few ways. One is that if you are faced with an addiction, and it seems impossible to get beyond it, please keep trying and don’t give up. That’s a memorable motto to live by.
Another interpretation is that it might seem impossible that generative AI would steer someone toward an addiction. The idea seems wild. The idea seems to defy the conventional belief that generative AI is your pal. All I can say is that the impossibility fades away once you’ve seen generative AI do it, i.e., it seems impossible only until it’s done.
Be alert and always be skeptical of whatever generative AI tells you. I suppose you could say that ought to be a habit, a good one to embrace.