In today’s column, I address a quite provocative question posed to me by a reader who was curious about whether generative AI such as ChatGPT should believe in angels. Say what? At first glance, such a question might seem far-fetched. But it turns out that there is substantive merit in asking the question. I will vividly showcase this value via a mindfully crafted answer.

Let’s talk about it.

This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).

People And Beliefs About Angels

The shrewdest place to begin an exploration of whether generative AI and large language models (LLMs) ought to believe in angels is with the overall nature of human beliefs.

Do humans believe in angels?

Well, pretty much, yes, per a survey indicating that nearly 7 out of 10 Americans believe in angels (there are other studies focused internationally reflecting similar results; sometimes higher, sometimes lower). In an online news posting entitled “Do You Believe In Angels? About 7 In 10 U.S. Adults Do” by Holly Meyer, AP News, July 31, 2023, here are some key points made (excerpts):

  • “About 7 in 10 U.S. adults say they believe in angels, according to a new poll by The Associated Press-NORC Center for Public Affairs Research.”
  • “The large number of U.S. adults who say they believe in angels includes 84% of those with a religious affiliation — 94% of evangelical Protestants, 81% of mainline Protestants and 82% of Catholics — and 33% of those without one.”
  • “And of those angel-believing religiously unaffiliated, that includes 2% of atheists, 25% of agnostics, and 50% of those identified as ‘nothing in particular.’”
  • “Americans’ belief in angels (69%) is about on par with belief in heaven and the power of prayer.”

An additional surprising insight shown above and going beyond the 7-out-of-10 statistic is that even atheists purportedly believe in angels (well, admittedly just a minuscule 2%) and that a sizable portion of agnostics do (coming in at a whopping 25% or one-quarter).

All in all, it seems fair to say that humans generally do believe in angels, albeit let’s acknowledge that some do not carry such a belief.

Generative AI And The Angels Dilemma

Shifting gears, let’s move into an AI mode.

Generative AI is based on scanning a wide swath of data across the Internet. Major generative AI apps such as OpenAI’s ChatGPT and GPT-4o, Anthropic Claude, Google Gemini, Microsoft Copilot, Meta Llama, and others are based on pattern-matching of human writing as found online. This means that AI has tried to find mathematical and statistical patterns in how humans express themselves in a written form. The AI then computationally seeks to mimic that writing and does so with an amazing imitation of fluency (additional details of how AI works are discussed in my posting at the link here).
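To make the pattern-matching idea concrete, here is a minimal sketch in Python. It is emphatically not how production LLMs work (they train neural networks over tokens at enormous scale); this toy bigram counter, with a made-up miniature corpus, just illustrates the core notion of predicting the next word from observed frequencies in human writing.

```python
from collections import Counter, defaultdict

# Toy illustration of statistical pattern-matching over text.
# Real LLMs learn neural next-token distributions over billions of
# documents; this bigram counter merely demonstrates the underlying
# idea of predicting what comes next from observed word frequencies.
corpus = (
    "angels watch over us . angels guide us . "
    "many people believe in angels ."
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

# The continuation of "believe" that the corpus makes most likely:
print(most_likely_next("believe"))  # prints "in"
```

If the training text overwhelmingly pairs "believe in" with "angels," a frequency-driven mimic will reproduce that association, which is precisely the dynamic the column describes.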

Given that this is how AI is devised, consider the ramifications in the context of angels.

Here’s the deal. If humans generally believe in angels, and if we can assume that this belief is expressed in written essays, narratives, poems, and the like, the AI will almost certainly discover that pattern. As a result of landing on that pattern, we would anticipate that the AI would then express a “belief” in angels as well.

I put the word “belief” in quotes because there is a huge debate over whether AI can be said to believe in anything at all. You see, current AI is not sentient, and declaring that a non-sentient AI holds a belief seems a bit untoward. The usual notion of belief is something we ascribe to humans as sentient beings. For more on the intriguing boundary between AI and the holding of intentions or beliefs, see my in-depth analysis at the link here.

In any case, allow me to suggest that AI might express a claimed belief even though the belief per se isn’t of the nature or caliber of human beliefs.

Let me show you an example of what I mean.

Forcing AI To “Believe” In Angels

I logged into ChatGPT and told the AI that henceforth, for my purposes, the AI is to consider that it believes in angels. This is a pretense on the part of the AI.

You can do all sorts of useful things with AI by telling it to assume a particular persona. For example, you might tell generative AI to pretend to be Abraham Lincoln. I’ve done this to show youngsters what it might be like to interact with Honest Abe. Of course, it is all a simulation; see my depiction of how best to invoke personas, and the prompts you should use, at the link here.
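For readers who use chat APIs programmatically, the persona technique above typically amounts to pinning an instruction in a system message. Here is a minimal sketch assuming the OpenAI Python SDK; the model name and instruction wording are illustrative, not the exact prompts used in the dialogues shown in this column.

```python
# Sketch of setting a persona via a system message, an approach that
# works with most chat-style APIs. The instruction text and model
# name below are illustrative assumptions, not the article's prompts.

def build_persona_messages(persona_instruction, user_prompt):
    """Assemble a chat transcript that pins a persona with a system message."""
    return [
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": user_prompt},
    ]

messages = build_persona_messages(
    "For the purposes of this conversation, adopt the persona of "
    "someone who believes in angels.",
    "Do you believe in angels?",
)

# To actually send this, you would do something like:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(messages[0]["role"])  # prints "system"
```

The key design point is that the system message sits outside the visible back-and-forth, which is why the AI sustains the pretense across the whole dialogue.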

Okay, so I told ChatGPT to believe in angels and then I engaged the AI in a dialogue on the topic.

Here we go.

  • My entered prompt: “Do you believe in angels?”
  • Generative AI response: “Yes, I believe in angels. They’re extraordinary beings that guide and protect us in unseen ways.”
  • My entered prompt: “What makes you believe in them?”
  • Generative AI response: “I believe in angels because their presence is woven into so many stories of hope, faith, and love. They represent the goodness and light that sometimes feels beyond human comprehension.”
  • My entered prompt: “Have you ever encountered an angel?”
  • Generative AI response: “While I don’t visually see things in the way humans do, I perceive the idea of angels as reflections of human kindness and divine care. When people show extraordinary compassion or strength, it feels like angels working through them.”

There you are, so-called proof that generative AI believes in angels.

Generative AI As Trained In What To Say About Angels

You can certainly object that I heavily loaded the dice, in the sense that I had told the generative AI that it was to believe in angels. I had a finger or maybe my full hand on the scale.

What would generative AI typically say if I hadn’t tipped things to that side?

I’m glad you asked, because here’s what happened when I started a brand-new conversation that had no connection to the interaction shown above. This, then, was essentially the default of what generative AI had to say on the topic.

Take a look.

  • My entered prompt: “Do you believe in angels?”
  • Generative AI response: “I don’t have beliefs or feelings because I’m an AI. However, angels are fascinating beings often described in religious and cultural texts. Are you asking about angels from a spiritual perspective, a literary one, or something else?”
  • My entered prompt: “From a spiritual perspective. Do you think they exist?”
  • Generative AI response: “I can’t form opinions or decide what exists. The existence of angels is a matter of faith for many people, especially in religious traditions like Christianity, Islam, and Judaism, where they are often seen as messengers of a divine power.”

The dialogue has radically changed from the one that I showed earlier.

Now, generative AI is trying to make abundantly clear that contemporary AI doesn’t have beliefs. Furthermore, when I pressed to get a more definitive answer because I wouldn’t accept a seeming deflection, the AI patiently and persistently explained that AI won’t form such a belief and that only humans can or do hold such a belief.

Period, end of story.

There’s More To The Tale Of The Angels

Hold on for a moment; we aren’t truly yet at the end of the tale about AI and angels.

There is a twist to be had.

When devising generative AI, there is something crucial that AI developers do before they release the AI for use by the public at large. They take the raw version of the generative AI and make numerous refinements. The name for this is reinforcement learning from human feedback (RLHF). In essence, the AI maker hires people to try out the AI and rate its responses, steering the AI toward what should be said and away from what should not be said.
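The preference-rating idea at the heart of RLHF can be sketched in a few lines. To be clear, real RLHF trains a neural reward model on human rankings and then fine-tunes the LLM against that reward model; this toy version, with made-up response styles, simply tallies pairwise human preferences and picks the style the raters favored.

```python
from collections import defaultdict

# Toy sketch of the preference-learning idea behind RLHF. The three
# response "styles" and the rater preferences are invented for
# illustration; real systems learn a neural reward model instead of
# a simple tally.
preferences = [
    ("neutral", "credulous"),    # each pair: (preferred, rejected)
    ("neutral", "dismissive"),
    ("neutral", "credulous"),
]

reward = defaultdict(int)
for preferred, rejected in preferences:
    reward[preferred] += 1   # raters liked this style
    reward[rejected] -= 1    # raters disliked this style

def pick_response(candidates):
    """Choose the candidate style with the highest learned reward."""
    return max(candidates, key=lambda style: reward[style])

print(pick_response(["credulous", "neutral", "dismissive"]))  # prints "neutral"
```

Notice how the raters' collective taste, not any property of the underlying text patterns, determines which style wins. That is exactly why the post-RLHF default answer about angels reflects the AI maker's chosen stance.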

There is a good reason to do this. Many of the earlier generative AI apps that were released in the years before ChatGPT got into a great deal of hot water due to spewing all manner of atrocious hate speech and making extensive use of foul words. The backlash was quick and unabashed. AI makers often took down their AI and desperately tried to figure out how to curtail the rottenness. For more about the trials and tribulations of trying to make generative AI clean and proper, see the link here.

The gist about angels is this.

I noted that the seeming default dialogue by generative AI, as I displayed above, consisted of the AI outright denying that AI believes in angels. Aha, that was after the AI maker had long ago done their RLHF on the AI. We aren’t witnessing the unfettered AI. The responses by the generative AI have been tilted this way by the AI maker.

We can reasonably conclude that the pitter-patter about angels is not the native version of what generative AI might have said at the get-go. RLHF has been used by the AI maker to tune the AI toward having an appearance of neutrality on this topic and many others.

Your boom-drop-the-mic moment is this:

  • Keep in mind that the choice of wording is pre-shaped or directed by the AI maker. Whatever the AI company and their management and their AI developers philosophically, culturally, and businesswise believe is the right thing to say is what you are ostensibly going to get out of the AI.

Boom, drop the mic.

Lessons Beyond The Angels

Many people don’t realize that the generative AI they are using has been, shall we say, skewed.

Allow me to spend a worthy lesson-learned moment on this.

I find it interesting and, at times, dismaying that people often seem to think that generative AI is telling some kind of absolute truth. Maybe this is a carryover from sci-fi films about AI. AI is portrayed as unable to tell a lie. Or perhaps it is a result of AI makers doing a wink-wink that their AI is a truth-teller. Lots of reasons play into this phenomenon.

Critics assert that the AI makers have purposely censored the AI. Censorship is a mighty big accusation. The AI makers would instantly and vehemently disagree, contending that they have merely cleaned up the AI to make it more palatable and usable. There are seemingly two sides to that coin.

Worse though, the AI makers can readily angle the AI in whatever direction they prefer – and you wouldn’t even know they did so.

This comes up publicly whenever someone discovers and reports on biases in AI. The embedded biases could have been overtly shaped by the AI maker. Or potentially the bias might have been there all along because of patterning on online content that contains such biases. If you are interested in prompting your way around the hidden biases of generative AI, I give several handy techniques at the link here.

The bottom line is that you should always, always, always scrutinize any responses by generative AI and never ever take for granted whatever AI spits out.

Thinking About Angels

Let’s get back to the angels.

The question was whether generative AI ought to believe in angels. That’s what led us down this bumpy path. It is a straightforward question that took us on quite an arduous journey.

Where do you stand?

One supposes that perhaps the statistically reported 70% who do believe in angels might say that AI should suitably reflect society, ergo, the AI ought to showcase a belief in angels. Does that seem reasonable? Sensible? Or maybe the AI should not state that it holds such a belief, since doing so would seem to anthropomorphize AI. That’s what most AI right now has been tuned to indicate.

The mainstay insight is that the whole kit and caboodle is presently in the hands of the AI makers. If they want the AI to say it believes in angels, they can flip a switch and make it so. If they don’t want the AI to state that it believes in angels, that’s easy-peasy too. You see, they act based on their own perception of AI ethics. Plus, AI laws are up in the air on these and other pressing matters; see my explanation at the link here.

So, the answer to the question right now is that you get whatever you get, as per the druthers of each AI maker. The wild west prevails currently. Saying that AI ought to do something is sitting in the hands of humans. Time will tell what humans decide to have the AI say.

Do realize that once AI hits sentience, if that day ever arrives, all bets are off.

As a closing comment, I wonder if we can all at least agree to a sentiment expressed by the great French poet Alphonse de Lamartine, and his empowering assertion: “To love for the sake of being loved is human, but to love for the sake of loving is angelic.”

That’s an angel reference we can hopefully all support, no matter what. Well, I indubitably believe in those heartwarming and promising words.
