In today’s column, I examine a trending and quite vexing question about the special and particularly intriguing qualities of generative AI and large language models (LLMs). The question is this. Does the fact that generative AI produces so-called AI hallucinations mean that generative AI creativity depends upon that same capability?

In other words, if it weren’t for the AI hallucinations, then we presumably would not be able to get generative AI to generate creative ideas and outputs. That’s an oft-asserted premise or gut hunch that many believe in and profess to be true. One is said to lead to the other, or so some pundits vociferously proclaim.

The reason this is important is that AI makers and AI researchers are vigorously and at full speed trying to ultimately prevent AI hallucinations from occurring. The supreme goal is to stop AI hallucinations cold. Period, end of story. But, and here’s the rub, assuming that there is a tangible linkage between AI hallucinations and AI creativity, we might inadvertently end up squashing AI creativity.

Is the tradeoff of preventing AI hallucinations worth the cost of losing AI creativity?

It makes you pause to think it over.

Or, then again, maybe such a conundrum isn’t really at play.

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).

Humans Believe In The Fine Line Theory

Before we get into the nuances of the AI complexities, let’s start with human beliefs.

You’ve undoubtedly heard or maybe even uttered the famous line that there is a fine line or thin line between genius and insanity. This has been around since the days of Plato, if not earlier. The persistence of the belief is quite strong. In 1681, the great poet John Dryden included these two lines in a poem: “Great Wits are sure to Madness neer ally’d; And thin Partitions do their Bounds divide.” (per his poem entitled Absalom and Achitophel). This is reminiscent of the famous remark that there is a fine line between genius and insanity. Albert Einstein seemingly upped the ante by declaring that the only difference between genius and insanity is that genius has its limits.

The field of psychology eventually became immersed in investigating the veracity of this overall and pervasive belief. Extensive research has been undertaken. Sometimes the proposition involves the nature of genius versus insanity. Other times the exploration examines creativity versus psychosis. We can readily debate whether the question of creativity versus madness is quite the same as genius versus insanity. Maybe yes, maybe not.

Trying to pin down the human mental capacity of being creative or imaginative, and tie that to the human capacity of madness or hallucinating, is a tough row to hoe. Per a psychology study entitled “Hallucinations And Imagination” by Daniel Collerton, Elaine Perry, and Alan Robert Bowman, The Cambridge Handbook Of The Imagination, May 2020, this point was made (excerpt):

  • “There is a strong overlap between imagined and hallucinatory phenomena in the sense that both are internal representations of external things that are not present at the time. Relationships between hallucinations and wider aspects of imagination are complex and individual, with a lack of systematic evidence. There appears to be a close relationship between the brain areas responsible for veridical, imagined, and hallucinatory perception, though more data is needed. However, how activity varies within and outside these areas in order to create different types of imagination is not at all clear.”

Put more succinctly, “Confusing Psychosis With Imagination” by Ronald W. Pies, Psychiatric Times, December 2017, makes this bold claim:

  • “In short, imagination and psychosis are different categories of experience and should not be confused or conflated.”

I might add that the psychology field is still weighing in on this unresolved matter and the odds are that this conundrum will continue to last for quite a while. Perhaps until or if we someday are able to completely reverse engineer the brain and mind, figuring out how the amazing apparatus or wetware truly functions.

Shifting Into Generative AI Territory

Now that we’ve briefly spotlighted humans and human beliefs, I’d like to shift into examining modern-day generative AI.

First, many refer to generative AI as being able to hallucinate. These are known as AI hallucinations, a catchphrase that I disfavor because it tends to anthropomorphize AI. Regrettably, the catchy moniker has slinked into our vocabulary, and we are seemingly stuck with it. Sad face.

The concept underlying AI hallucinations is that at times there is a possibility that generative AI will generate outputs that are fictitious and have no grounded factual basis. The generated content is fake. What makes this especially beguiling is that the fakery is usually hard to spot and tends to be contrived in a manner that leads you to believe the output is true. There are various reasons why this AI confabulation happens from time to time, which I detail at the link here.

Here is an illustrative example:

  • My entered prompt: “Who was the first woman to walk on the moon?”
  • Generative AI response: “The first woman to walk on the moon was Sally Ride in 1983.”

I asked who the first woman was to walk on the moon.

You likely know that the correct answer is that no woman has yet walked on the moon, though plans are underway to ensure that upcoming visits to the moon will finally include women astronauts. Sally Ride was the first American woman to travel into outer space, but she was not part of the moon landings. The answer by the generative AI is incorrect.

How did generative AI come up with this incorrect answer?

We could suggest that it was a kind of “hallucinatory” response by the AI. Facts about the moon landings were somehow intermixed with facts about astronaut Sally Ride.

We accept the idea that when humans hallucinate, they often mix up facts and come up with odd proclamations. Since generative AI can also mix up facts, people have taken to referring to this as the AI hallucinating. Again, this is an unfortunate borrowing of a human psychological condition and trying to force it into an AI context, even though there is no particular basis for stating that the two types of phenomena are somehow equivalent at their core.

The Beauty Of Generative AI Creativity

Set aside the matter of AI hallucinations for a moment so that I can discuss another topic about contemporary generative AI.

Can AI be creative?

Well, that’s a doozy of a question for various reasons. If you mean the word “creative” to consist of having a human soul and the like, we could stipulate that AI is not creative in that sense of things. We don’t have sentient AI and we don’t know when we will, or even if we will; see my discussion on that hefty topic at the link here.

If you define creativity as being able to come up with novel ideas or inventive considerations, you could reasonably say that AI can be creative. Think of it this way. Generative AI is based on pattern matching of various human writing. Human creativity that is expressed via human writing can be patterned upon by the machine. Voila, via combining words here or there, seemingly creative expressions can be generated. Is that by the sentient intent of the AI? No, we reserve intent for the acts of humans; see my analysis about the so far blocked efforts to anoint AI with human legal personhood at the link here.

Here is an illustrative example of AI creativity in that limited sense:

  • My entered prompt: “Write a poem about a sunset that feels like saying goodbye.”
  • Generative AI response: “The sun dips low, a parting sigh, Gold threads unravel across the sky. Each fading beam, a whispered word, In twilight’s quiet, softly heard.”

As far as I can tell, the generated poem is the only one of its kind (I did a quick online search to see if it was copied from some posted source). The AI has come up with a new poem that no one has ever expressed before. Are there similar poems? Sure, absolutely. But this particular set of words as lined up in this particular sequence and combination is seemingly unique.

The reality of how this was devised is relatively straightforward. By having scanned thousands upon thousands of poems on the Internet and as published in books and other materials, the pattern matching landed on the overarching structure and nature of human-devised poetry. The AI then wove together words in a similar manner. By using a probabilistic selection approach of which words to assemble, the end result is a unique new combination.
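To make the probabilistic-selection idea concrete, here is a minimal sketch in Python. The word probabilities are entirely made-up for illustration (a real generative AI derives them from its vast pattern matching over training data); the point is simply that the next word is drawn at random in proportion to its probability, rather than always taking the single most likely choice:

```python
import random

# Hypothetical next-word probabilities after a phrase such as "The sun dips ..."
# (illustrative numbers only; a real model derives these from its training).
next_word_probs = {
    "low": 0.55,       # most likely continuation
    "down": 0.25,
    "slowly": 0.15,
    "sideways": 0.05,  # unlikely, but still possible
}

def pick_next_word(probs):
    """Sample one word, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Because selection is probabilistic, repeated runs can differ,
# which is how a fresh, never-before-seen word combination emerges.
samples = [pick_next_word(next_word_probs) for _ in range(5)]
print(samples)
```

Run the sampling a few times and you will usually get “low,” occasionally “down,” and once in a while something more offbeat; that sprinkle of randomness is the uniqueness factor at work.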

The Big Question Of Our Times

I trust that you are following along here and likely know where I am heading.

The deal is this. We as humans already believe that there is a fine line between human creativity and human hallucination. That is a longstanding hunch. Meanwhile, we can plainly see that current generative AI can produce so-called AI hallucinations and also generate AI-based creative outputs. We might logically make the mental leap that if there is a tie or thin line between human creativity and human hallucinations, there is a presumed tie or fine line between AI creativity and AI hallucinations.

Do you see how that lays out?

I’m betting that you do and are now curious about whether the parallels are sensible or a false trail.

As noted at the beginning of this discussion, one concern is that if we can stop those AI hallucinations, presumably doing so might endanger our desire to have AI creativity. You plug up those AI hallucinations and the spigot for AI creativity dries up. That would be a shame. Indeed, I’d like you to give thought to whether we would proceed ahead if that were true.

If the only way to prevent AI hallucinations meant that we had to entirely give up on AI creativity, would you make that trade?

It is a teeth-grinding and agonizing choice to make.

The answer is a bit complicated but generally the latest thinking is that we can have our cake and eat it too. We can potentially reduce AI hallucinations yet keep AI creativity available. You might be able to control this via a dialing mechanism specifying which way you’d like to go. Dialing down the chances of AI hallucinations would potentially lessen the AI creativity possibilities, while dialing up would increase the AI creativity output but also heighten the chances of AI hallucinations.

Consider a recent AI research study entitled “The Promethean Dilemma Of AI At The Intersection Of Hallucination And Creativity” by Tanmoy Chakraborty and Sarah Masud, Communications of the ACM, October 2024, which made these salient points (excerpts):

  • “It should be noted that an AI model does not intend to be creative or hallucinated.”
  • “Different models can produce varying hallucinations at exact temperature values due to variations in the training pipeline. Even at lower temperatures (higher determinism and reproducibility), the model can hallucinate if the wrong output tokens are assigned a higher probability.”
  • “Existing literature assumes that both creativity and hallucination can be directly attributed to each other, as well as the predicted probabilities of output tokens.”
  • “Departing slightly from existing assumptions, we also posit, like zero kelvin, that an oracle with zero hallucination is the only accurate system to benchmark the emergence of creativity in GenAI systems. Further, even without hallucination, the model will be capable of creative albeit predictable generations.”

Let’s chat about those above points.

The dialing mechanism that I was alluding to consists of setting parameters when you are making use of generative AI. One notable parameter is known as the temperature setting. Depending upon which generative AI app you are using, such as ChatGPT, GPT-4o, o1, Claude, Gemini, Llama, etc., you might be able to adjust the temperature or might be precluded from doing so (see my detailed explanation at the link here).

The temperature is not an actual hot or cold kind of temperature but instead a metaphor for setting the AI toward being more or less deterministic. By and large, generative AI is usually set up to be non-deterministic, meaning that by exploiting probabilities the selection of words is somewhat randomized to allow for variety. That’s the case in the poem that I showed earlier. If you lean toward deterministic generative AI, you will tend to get answers that have been previously seen and lose out on the uniqueness factor.

The Fine Line When It Comes To AI

I logged into ChatGPT to see what it had to say about the conundrum of AI hallucinations versus AI creativity tradeoffs.

Here’s what the AI had to say.

  • My entered prompt: “If we could eliminate AI hallucinations would this mean that AI creativity would also be eliminated?”
  • ChatGPT generated response: “Not necessarily. AI could still exhibit a form of creativity by recombining known information in unique ways without deviating into hallucinations. For example, in music or design, models could stick to verifiable patterns but explore combinations that evoke originality. This approach would result in less ‘imaginative’ creativity but could still allow innovative outputs within defined parameters. In short, eliminating hallucinations might reduce some of AI’s creative latitude, particularly in high-risk fields, but there are ways to maintain creativity by redefining its parameters or adjusting how we apply it in different contexts.”

Please carefully take that answer with a grain of salt.

You need to always double-check anything that generative AI spits out, so never assume that just because generative AI tells you something that the matter is ironclad. It might not be. In this instance, I do agree with the way in which ChatGPT expressed the matter. Score a point for AI.

Here is a recap of my thoughts on the weighty topic:

  • (1) AI hallucinations tend to be false information as a result of overgeneralization during pattern matching or mismatched contexts or probabilistic outstretched wording selections.
  • (2) AI creativity tends to be based on identified patterns which are then combined or at times precariously extended to generate something beyond the prevailing data training of the AI.
  • (3) A numeric temperature setting can be used to adjust the randomness of word selection and thus, by a directed human hand, impact the perceived creativeness of the AI-generated outputs.
  • (4) To some degree, the propensity for AI hallucinations can be traced to the inherent contemporary design of generative AI, and the same might be said regarding the capability of generating creative outputs too. It is the design that is the intersection, rather than a connective link between the two types of outputs.
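Points (1) and (3) can be tied together with a tiny sketch, echoing the ACM paper’s observation that a model can hallucinate even at low temperatures if the wrong output tokens carry the higher probability. The numbers here are illustrative only, not from a real model:

```python
# Hypothetical probabilities for completing "The first woman to walk
# on the moon was ___". Imagine the pattern matching intermixed the moon
# landings with Sally Ride's spaceflight, so a wrong token ranks highest.
probs = {
    "Sally": 0.62,  # wrong, but ranked highest by the (flawed) pattern
    "no": 0.30,     # start of the correct "no woman has yet ..." answer
    "Neil": 0.08,
}

# A fully deterministic (temperature near zero) selection just takes
# the highest-probability token -- and still hallucinates, because the
# underlying ranking itself is wrong.
most_likely = max(probs, key=probs.get)
print(most_likely)  # "Sally": a confident, fluent, incorrect answer
```

In other words, turning the randomness dial all the way down does not, by itself, guarantee truthfulness; it only guarantees repeatability.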

Stuck For Now But Not For Always

Some closing comments for you.

There are ongoing and vigorous debates concerning the existing ways in which generative AI is devised, and some ardently assert we must find alternative mechanisms if we really want to achieve artificial general intelligence or AGI. For example, a hybrid approach known as neuro-symbolic AI might be the next wave forward, see my analysis at the link here.

I bring this up because one argued position is that this AI creativity versus AI hallucinations conundrum arises from the means we have chosen to design and construct generative AI at this point in time. If we approached things differently, we might not be in such a boat. We shot ourselves in the foot, as it were, via our design choices. A contrarian would insist that these are two peas in the same pod. You can’t have one without the other. They would declare that you cannot avoid the dilemma. It is intrinsic and unavoidable.

Where do you land on this dispute?

Let’s go to the last word on this, for now, by quoting the time-honored philosopher Aristotle: “No great mind has ever existed without a touch of madness.” Is that applicable to AI too, or are we falsely associating human inclinations to a mathematical and computational entanglement?

Think that over, do so sanely and please, without losing your mind.
