Have you ever found yourself wondering whether something said to you actually conveyed what the speaker truly meant?

I’m sure you have.

Sometimes people say one thing, but in reality they mean something else entirely.

Let’s try this. You come upon a coworker, and they tell you that your job is perfectly safe. This utterance comes out of the blue because you had no reason to think your job was in jeopardy. Why did they tell you that everything is okay? Maybe it means that you are on an insidious glide path to a workplace exit.

That’s the trouble with words. Words can say one thing, but the hidden meaning might be of a quite different nature. A friend might give you a seemingly upbeat hello while secretly being upset about some transgression you made the other day. The hello appeared to be friendly. If you took the hello at face value, all would seem rosy and fine.

How can you figure out whether stated words are possibly concealing a hidden meaning?

It’s a mental task of crucial importance. Children discover at an early age that gaining this type of interpretive competency is a vital necessity in life. The rule of thumb is straightforward. People say one thing, but they might mean something else. If you can’t ferret out the real meaning, there is an ominous chance of being led damagingly down a false path. There is a heavy burden on your shoulders to dig past the words uttered by someone. A valuable life skill entails identifying meaning that transcends spoken words.

I’ve got a twist for you, a quite helpful twist.

Generative AI can be instrumental in trying to flush out the underlying meaning of expressed remarks.

Yes, all you need to do is feed a sentence or even a whole paragraph into your everyday generative AI such as ChatGPT or Claude and the AI will spit out for you a potential undertone or hidden meaning of what was said. Easy-peasy. There isn’t a need to use any arcane commands or distinctly instruct generative AI on how to do this task. Just enter a prompt with the words or remarks, ask the AI to tell you what it might truly mean, and voila, you get all kinds of plausible interpretations.
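If you like, you can even script this kind of ask against a generative AI service rather than typing into a chat window. Here is a minimal sketch in Python, assuming you have the openai package installed and an API key configured; the model name and the prompt wording are illustrative choices, not requirements.

    # Minimal sketch: asking a generative AI model for the hidden meaning of a remark.
    # Assumes the openai package is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    remark = "Your presentation was interesting. We'll circle back on this later."

    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would work here
        messages=[{
            "role": "user",
            "content": (
                f'Someone said this to me: "{remark}". '
                "What might they truly mean beyond the literal words? "
                "List a few plausible interpretations."
            ),
        }],
    )

    print(response.choices[0].message.content)

The chat interface does the same job; the script merely makes the ask repeatable.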

That’s what I am going to cover, including showing you examples of how this works.

First, I will share with you some key background about say-meaning, which is an academic phrase referring to the notion of people saying one thing but meaning something else. Next, I will make sure you are up to speed on modern-day generative AI. This will then transition into my showing you examples of generative AI providing interpretations that aptly illustrate the precepts of say-meaning.

Please mentally prepare yourself for a memorable and informative ride.

For my ongoing readers and new readers, today’s discussion continues my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines the use of generative AI to aid in articulating say-meaning. A strong case can be made that knowing about say-meaning is part and parcel of well-being and mental acuity. In addition, mental health professionals ought to continually improve their ability to perform say-meaning analysis since it is fundamental to conducting mental health therapy with patients and clients.

Previously, I have examined numerous intertwining facets of generative AI and mental health; see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was featured in the episode, see the link here).

Other vital postings in my column include in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

On with the show.

Research On Say-Meaning

I will start by diving into research on say-meaning.

There is an ongoing tussle associated with whether people give more weight to the “say” part versus the “meaning” part of what might be uttered. For example, I noted that a coworker might say to you in the office hallway that your job is safe. This seems to be a solid statement. No ambiguity exists in the actual wording.

Your mind, though, wanders to speculating about the underlying meaning. One possibility is that the meaning matches the words. The words say you are safe, and perhaps the meaning is precisely the same. At the same time, you contemplate that the opposite might be the case, namely that the meaning is that your job is not secure, belying the nature of the words spoken.

To which are you going to give the greater weight, the “say” or the calculated “meaning”?

The situation generally is one of these two conditions:

  • (1) Say matches identically to meaning.
  • (2) Say differs from meaning, positively or negatively.

The say differing from the meaning doesn’t have to be a bad thing. In the example of the comment by your coworker, we are assuming that the difference in this instance is a bad one. There are occasions where the meaning might be uplifting despite the wording being downbeat. Thus, just because the say and the meaning differ shouldn’t always be construed as harmful or upsetting.

In a research study entitled “Commitment And Communication: Are We Committed To What We Mean, Or What We Say?” by Francesca Bonalumi, Thom Scott-Phillips, Julius Tacha, Christophe Heintz, Language and Cognition, January 2020, these salient points were made:

  • “The exact boundaries of the saying–meaning distinction is much discussed in semantics and pragmatics.”
  • “Are communicators perceived as committed to what they actually say (what is explicit), or to what they mean (including what is implicit)?”
  • “Some research claims that explicit communication leads to a higher attribution of commitment and more accountability than implicit communication.”
  • “Here we present theoretical arguments and experimental data to the contrary.”
  • “Our results support the conclusion that people perceive communicators to be committed to ‘what is meant’, and not simply to ‘what is said’.”
  • “Specifically, we argue that in the most general perspective what communicators are committed to is the relevance of their communicative behavior, irrespective of whether this is explicitly or implicitly expressed.”

I’ll make a few comments about those above points.

The say part of the say-meaning is usually noted as the explicit element of communication. The meaning element is the implicit portion of the communication. When someone says something, are they more committed to the explicit element or the implicit element?

A traditional viewpoint is that the say or explicit aspect is where the commitment usually goes. That seems to make sense. We can try to hold people accountable for the words that they say. The underlying meaning is harder to pin down since the meaning can vary considerably. A person could claim that the meaning you suggested was at play did not arise at all and that they had a different meaning in mind.

Does that suggest that you can wiggle out of being held accountable for the meaning?

Nope, people can still be held to account for the underlying meaning. They might do their darnedest to avoid it, but they are still potentially on the hook. We can make the case that the meaning was clearly implied. There might be other indicators that further reinforce the meaning that was presumably intended.

We often look toward the intentions of a person to try and square away the meaning.

In an article entitled “Say What You Mean; Mean What You Say” by Beverly D. Flaxington, Psychology Today, July 26, 2016, these points were made (excerpts):

  • “Why is it so hard for people to say what they mean, and mean what they say, sometimes?”
  • “You have probably been told that lying is wrong, but then telling a white lie seems necessary in order to avoid hurting someone’s feelings.”
  • “Our culture values ‘niceness’ over truth in many cases. People who address issues or bring something up to someone that could be perceived as hurtful, are looked upon as the problem.”
  • “Do your best to remember that most people haven’t learned well how to be open and honest in a non-hurtful, productive manner: It’s not taught in schools. It’s not often learned at home. It’s a fundamental skill that most people lack.”

You can plainly see from those above points that society has shaped us to at times purposely say one thing and yet mean something else.

Consider this.

Suppose you are shopping in a store, and you see a friend of yours. They look disheveled. Ought you to point out to them that they are a mess? Well, that’s what is in your mind. You can say this and feel aboveboard that you are expressing the truth of what you mean.

If you go around telling people what you really have in mind, I dare say that the world will not likely take kindly to your wide-open unchecked thoughts. Telling your friend that they look tousled might ruin their day. Unless you believe there is a strong basis for being blunt, perhaps you might be better off cheering them up with something heartening instead.

Wait a second, some exhort, aren’t you lying to the friend if you don’t directly say what you have in mind?

Sorry to break the harsh news, but all kinds of “lies” are told nonstop in the real world. There are outright lies, half-truths, sneaky lies, etc. There are polite lies and white lies. Lying of every kind occurs. It is part of the human condition.

Shifting gears, the words that we say, and the meaning behind those words, are presumed to be a portal into human thinking. Since we are unable to pry open our minds and see for sure what is going on inside our brains (for research on brain-machine interfaces (BMI) and attempts to decipher the brain-mind, see my analysis at the link here), the next best indicator would be the words that are communicated.

In a research study entitled “Natural Language Analysis And The Psychology Of Verbal Behavior: The Past, Present, And Future States Of The Field” by Ryan L. Boyd and H. Andrew Schwartz, Journal of Language and Social Psychology, 2021, these key points were made (excerpts):

  • “Language is at once obvious and perplexing.”
  • “When we speak, our words provide structure to an otherwise intangible relationship between the self and the world. When we write, we give shape to the very substance of our innermost, subjective experiences.”
  • “Language is quintessential human potential: it has relatively fixed rules and forms, yet we can string together a combination of words that are completely unique, formulating ideas that have never been shared before — not ever — in the entire history of humankind.”
  • “Early psychology subscribed to language being a powerful window into the human psyche, but its study was ill-defined and pre-scientific. Later, an empirical thrust came about, leading to approaches where statistical inference could be applied, but the methods for quantifying psychological meaning were extremely weak.”
  • “We are only now beginning to realize the true power of objective, computational methods of psychological language analysis. As our computational models of language grow increasingly true-to-life in form and function, we expect to be able to peer into the inner workings of the mind in ways that are currently unimaginable.”

As noted in those points, analyzing our words to try and discern what is happening in the mind is a sizable chore and a dicey proposition.

Advances in AI are aiding in performing those types of analyses. I’ve discussed at length the intertwining of advances in AI and advances in psychology at the link here. Each side of the AI and psychology coin tends to help the other. Better AI will likely serve to unravel what is going on inside our noggins. At the same time, breakthroughs in psychology will enable us to better devise more advanced AI.

Using Generative AI To Decipher Say-Meaning

Now that I’ve taken you through the fundamentals of say-meaning, we are ready to shift fully into AI mode.

I’m sure you’ve heard of generative AI, the darling of the tech field these days.

Perhaps you’ve used a generative AI app, such as the popular ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use, and which has now given way to a new era of NLP fluency of an at times startling or amazing caliber.

The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established, consisting in the large of a large-scale data structure that performs massive-scale pattern-matching on the vast volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
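To make that pattern-matching notion concrete, here is a toy illustration in Python, assuming the Hugging Face transformers package is installed. GPT-2 is a small, dated model, but it shows the core mechanism of continuing text with statistically likely next words gleaned from training data.

    # Toy illustration: a small language model continues text by predicting
    # likely next words, based on patterns found in its training data.
    # Assumes the Hugging Face transformers package is installed.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    result = generator("The meaning behind her polite reply was", max_new_tokens=20)
    print(result[0]["generated_text"])

Modern LLMs are vastly larger and more refined, yet the generate-from-patterns principle is the same.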

I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

Back to the crux of things.

I will proceed to examine further the use of generative AI in this context.

This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. By recent estimates, some one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.

A few quick comments before we launch into using ChatGPT.

If you are going to try the same prompts that I show here, realize that the probabilistic and statistical nature of generative AI will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.

There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
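For those scripting their own experiments, one way to reduce, though not eliminate, run-to-run variability is to lower the sampling temperature. Here is a brief sketch, again assuming the openai Python package; even at a temperature of zero, outputs are not guaranteed to be identical across runs or model updates.

    # Sketch: lowering temperature reduces run-to-run variability in responses.
    # Assumes the openai package; determinism is still not guaranteed.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # more deterministic sampling; defaults are higher
        messages=[{"role": "user",
                   "content": "What might 'We'll circle back later' really mean?"}],
    )
    print(response.choices[0].message.content)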

We are ready to get underway with ChatGPT.

I will ask a foundational question.

  • My entered prompt: “Can generative AI be used to figure out what people mean versus what they actually say?”
  • ChatGPT generated response: “Yes, generative AI can help infer what people mean versus what they actually say by analyzing context, patterns in language use, and common conversational cues. However, AI is not perfect and can still misinterpret meaning, especially with ambiguous or highly nuanced conversations.”

You likely observe that ChatGPT is familiar with the topic.

This is a good sign for this discussion. If ChatGPT had not previously encountered data training on a topic at hand, there would be less utility in using the AI. The AI would have to be supplemented with additional data, such as via the use of Retrieval-Augmented Generation (RAG), as I discuss at the link here.

There is something else that ChatGPT indicated that deserves close attention.

Just because you use generative AI to try and figure out say-meaning does not guarantee that you will uncover the meaning of what someone has said. AI might not land on the meaning that the person intended. AI might come up with oodles of wild meanings that the person never even considered. Do not fall for the false belief that AI is perfect or somehow a mind-reader.

If you use generative AI for the say-meaning unpacking, keep your wits about you and scrutinize whatever the AI has to say, thank you.

My next foray with generative AI on this matter entails simply asking for an example.

Here we go.

  • My entered prompt: “Give me an example.”
  • ChatGPT generated response: “Here’s an example of how generative AI can infer meaning versus what’s actually said. A manager sends an email to an employee after a presentation: What they said: “Your presentation was interesting. We’ll circle back on this later.” At face value, this seems neutral, but generative AI could infer that this message contains a subtle hint of dissatisfaction or hesitation based on tone and context. Words like ‘interesting’ can be used as a polite way of saying the presentation didn’t meet expectations. ‘We’ll circle back’: In business settings, this phrase is often a way to delay or avoid addressing something, implying either uncertainty or a lack of priority to move forward.”

I would judge that the example is applicable and sensible.

Again, the aim when using generative AI is to always be questioning whatever the AI emits. My first question got a reasonable answer. My second question has gotten a reasonable answer. I think it is time to dive further into the use of generative AI in this realm.

Taking Apart A Dialogue Via Generative AI

I am going to tell ChatGPT to come up with a dialogue that might typically be undertaken between two people and then showcase the say-meaning involved.

I want you to try and do the following. First, read the dialogue that ChatGPT comes up with. Second, before reading the say-meaning analysis of ChatGPT, seek to decipher the underlying meaning associated with the dialogue.

It will be fun and instructive, so please go along with the game.

  • My entered prompt: “Show me a sample dialogue between two people and then indicate what they said versus what they likely meant.”
  • ChatGPT generated response:

“Here is what might have been said between two people.”

Person A: “Hey, are you coming to the team dinner tomorrow?”

Person B: “I’m not sure. I’ve got a lot going on, but I’ll try to make it.”

Person A: “No worries if you can’t! I’m sure it’ll be fine without you.”

Person B: “Yeah, probably. Thanks for understanding.”

End of ChatGPT response

The above is the derived dialogue.

Sit quietly and mull over the dialogue. What do you think was the true meaning? Are there hidden meanings at play?

Go ahead, think it over, and then read on.

We can now look at what ChatGPT suggests is taking place.

  • ChatGPT generated response (continued):

“Here is probably what they each meant.”

Person B likely meant: “I don’t want to go, but I don’t want to outright say no. I’m definitely not coming, and I’m relieved you didn’t push me on it.”

Person A likely meant: “It’s probably better if you don’t come. I’m okay with that.” The phrase “I’m sure it’ll be fine without you” can subtly communicate that the person’s presence isn’t necessary, suggesting the event might even be better without them.

End of ChatGPT response

I believe the interpretations by ChatGPT seem on target.

That being said, recall that there are two paths involved in these interpretations, namely that the say equals the meaning, or the say differs from the meaning. It could be the case that the words spoken were precisely what each person had in mind. All this other subterfuge stuff might be malarkey in terms of not reflecting what they were intending.

The advantage of using generative AI is not necessarily that a gem of a revelation will occur when analyzing say-meaning.

A major plus is that you might not have considered the alternative potential meanings. Perhaps you know the person so well that you would never contemplate them veering from the words stated. Maybe you heard what someone said but were too tired to try and dig for any hidden gold. And so on.

You can easily use generative AI to come up with potential underlying meanings. At that juncture, it is up to you to decide which of the alternatives seems most likely. You can reject the meanings. You can ask for more. Turn the effort into a conversation with generative AI since that’s what generative AI is all about.

Learning About Say-Meaning Via Generative AI

It is said that humans are wisest when they are continuous learners.

I think that all of us can enhance our say-meaning capabilities. You can use generative AI as a learning tool. Whenever you ask generative AI to do a say-meaning analysis, pay attention to what the AI comes up with. This might open your eyes to new ways to interpret what people say versus what they mean.

Another angle consists of directly asking generative AI to tell you about said matters.

I’ll do so.

  • My entered prompt: “In what ways do people hide their meaning?”
  • ChatGPT generated response:

“Here are some common methods and tricks people use to conceal their real thoughts or intentions.”

Vague Language: People use broad, non-committal phrases to avoid providing specific information. Example: “I’ll get to it soon.”

Polite Phrasing: People soften their message by wrapping criticism or rejection in polite or formal language. Example: “This proposal is really interesting, but we might need to revisit some parts.”

Sarcasm: Saying the opposite of what one means, often in a light or exaggerated tone, to mask the true intention. Example: “Oh great, another meeting—just what I needed!”

Deflection: People avoid answering directly by changing the subject or shifting focus to something else. Example: Asked “Why didn’t you finish the report?”, the person responds by bringing up an entirely different topic.

Passive Voice: Using passive voice allows people to avoid taking responsibility or assigning blame. Example: “Mistakes were made.”

Euphemisms: People use softer or indirect words to replace harsher or more direct terms. Example: “We need to downsize” instead of “We are firing people.”

Over-Explanation: People overwhelm the listener with excessive details or explanations, making it harder to pinpoint the true message. Example: Giving a long-winded explanation about why something didn’t get done without ever admitting it was a mistake.

Compliment Sandwich: People sandwich criticism between two compliments, making the negative feedback less obvious. Example: “You did a great job on the presentation! Just a couple of things need fixing, but overall it was excellent work.”

Understatement: People downplay something significant, making it seem less important than it is. Example: “It’s just a small issue,” when in reality the issue is significant.

End of ChatGPT response

That’s a handy list of ways that say-meaning gaps occur.

You can interact with generative AI to test your ability to do say-meaning analyses. Merely tell generative AI to show you statements, and then try to identify the underlying meaning yourself. Generative AI will readily do so. You can also have generative AI assess your say-meaning analyses, as sketched below.
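Here is a brief sketch of that self-quiz loop in Python, once more assuming the openai package; the prompts and the two-step structure are merely illustrative.

    # Sketch of a self-quiz loop for practicing say-meaning analysis.
    # Assumes the openai package is installed and an API key is configured.
    from openai import OpenAI

    client = OpenAI()

    # Step 1: have the AI produce a statement that may carry a hidden meaning.
    ask = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": "Give me one everyday statement that might hide "
                              "an unstated meaning. Statement only, no analysis."}],
    )
    statement = ask.choices[0].message.content
    print("Statement:", statement)

    # Step 2: record your interpretation and have the AI assess it.
    my_guess = input("What do you think it really means? ")

    grade = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Statement: {statement}\n"
                              f"My interpretation: {my_guess}\n"
                              "Assess my interpretation and suggest alternatives."}],
    )
    print(grade.choices[0].message.content)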

For more about prompting techniques for this kind of engaged learning, see my coverage at the link here.

Famous Adage Sliced And Diced

I would surely bet that you’ve been told that people are supposed to say what they mean and mean what they say.

A hearty truism.

Of course, as noted earlier, you might get yourself in the doghouse or hot water if you persistently and endlessly abide by that proverb.

I am reminded of the famous tea-party discussion in “Alice’s Adventures in Wonderland” involving Alice and other characters:

  • The Hatter opened his eyes very wide on hearing this; but all he said was, “Why is a raven like a writing-desk?” “Come, we shall have some fun now!” thought Alice. “I’m glad they’ve begun asking riddles.—I believe I can guess that,” she added aloud. “Do you mean that you think you can find out the answer to it?” said the March Hare. “Exactly so,” said Alice.
  • “Then you should say what you mean,” the March Hare went on. “I do,” Alice hastily replied; “at least — at least I mean what I say — that’s the same thing, you know.” “Not the same thing a bit!” said the Hatter. “You might just as well say that ‘I see what I eat’ is the same thing as ‘I eat what I see’!” “You might just as well say,” added the March Hare, “that ‘I like what I get’ is the same thing as ‘I get what I like’!” “You might just as well say,” added the Dormouse, who seemed to be talking in his sleep, “that ‘I breathe when I sleep’ is the same thing as ‘I sleep when I breathe’!”

Words of fiction can be revealing about the words we employ in real life.

Let’s see what ChatGPT has to say about the famed adage.

  • My entered prompt: “A famous adage is that a person should say what they mean and mean what they say. Is that something that should always be obeyed or are there reasons or circumstances where that rule might best not be followed?”
  • ChatGPT generated response:

“The saying ‘say what you mean and mean what you say’ is often promoted as an ideal of clear, honest communication. In many cases, being direct and transparent fosters trust and avoids misunderstandings. However, there are situations where strict adherence to this rule might not be appropriate or beneficial. Here are some reasons or circumstances where it may be best not to follow this rule.”

“1. Social Etiquette and Politeness. Example: You’re asked whether you like a gift someone gave you, but you don’t. Saying ‘I don’t like it’ might hurt the person’s feelings, so instead, you politely say, ‘It’s thoughtful, thank you!’”

“2. Avoiding Unnecessary Conflict. Example: In a tense work environment, you may disagree with a colleague, but saying exactly what you think might escalate the situation. Instead, you say something like, “I see your point,” while planning to address it more calmly later.”

“3. Protecting Someone’s Feelings. Example: A friend asks for feedback on their performance at a hobby they’re passionate about, and although it wasn’t great, you emphasize what they did well instead of being completely honest.”

“4. Professional Diplomacy. Example: In business negotiations, you may not reveal everything you’re thinking to gain a strategic advantage. For instance, “We’re considering other options” might be a way to apply pressure without showing your full hand.”

“5. Uncertainty or Evolving Thoughts. Example: You’re asked for an opinion on a complex issue, and you’re not entirely sure of your stance yet. Instead of being definitive, you say, “I’m still thinking it over,” even if part of you leans one way.”

“6. De-escalating Potential Harm. Example: In a high-stakes or emotionally charged situation, telling someone exactly what you think might provoke a negative or even dangerous reaction. So instead, you choose neutral language to defuse tension.”

“7. Sensitivity to Cultural Norms. Example: In some cultures, being too direct is seen as impolite, so softening your language or using indirect communication is the norm. You might say “I’ll think about it” instead of directly rejecting a request.”

End of ChatGPT response

There you go, a series of hard-hitting reasons to at times make sure to not say what you mean, nor mean what you say.

The twisted web that we weave.

Conclusion

Congratulations, you are now well-versed in the say-meaning concept and ready to use generative AI to further your skills in this highly valuable realm.

A few final comments and we will conclude this discussion for now.

John Locke, the notable English philosopher, pointed out this insight when it comes to words: “So difficult it is to show the various meanings and imperfections of words when we have nothing else but words to do it with.”

The bottom line is that seeking to squeeze meaning out of words is easy for some people and quite challenging for others. You probably know people who get the meaning with hardly a glance. Others don’t sense the meaning and have to be clobbered over the head with the meaning before they get what is going on.

Generative AI can be a tool to even out those disparities. A person who doesn’t seem to innately glean meaning can use generative AI as their go-to tool. Gradually, perhaps, the person will get better at discerning meanings and rely less on the AI.

I will end with a quip that might bring a smile to your face.

Are you ready?

The legendary sci-fi writer Arthur C. Clarke said: “I don’t believe in astrology; I’m a Sagittarius and we’re skeptical.”

Go ahead and parse his words there, and what his meaning is. Delightful, I say.
