Do you want to appear to be wicked smart?

I believe most people do.

In case you aren’t familiar with the phrase wicked smart, it was popularized by the 1997 film Good Will Hunting, in which Ben Affleck’s character declares that his friend is wicked smart. The expression is common among Bostonians and indicates that a person is highly intelligent. This local catchphrase has since entered widespread use and rolls right off the tongue.

One of the ways that someone comes across as notably bright is via the questions they ask.

Yes, that’s right, the mere act of asking clever questions can create an aura of grand intelligence. We normally expect that answers alone are supposed to be the intelligent part of speaking and thinking. But it turns out that questions count too. A lot. Someone who rattles off a bunch of awe-inspiring questions is almost certainly going to get accolades and be lauded as beaming with brilliance.

Well, the good news is that you too can rattle off lots of superior questions.

How would you come up with such questions?

Easy-peasy.

Just use generative AI to do so.

I want to emphasize that intelligent questions aren’t only used to boost your standing amongst friends, colleagues, and family. Generative AI can arm you with probing questions on all sorts of topics and circumstances. Imagine that you are aiming to buy a used car and do not know what questions to ask at a car dealership. There’s no reason you should somehow magically know what questions ought to be asked. Thus, you can lean into generative AI to come up with rock-solid questions.

One significant point before getting underway. I want to gladly acknowledge and thank a great blog post that inspired this discussion, namely “LLMs Are Better For Their Questions Than Answers” by Michael Eliot, September 1, 2024, available at the link here. Make sure to look at his other postings too. Good stuff.

Why Questions Are So Important

Modern-day generative AI can whip out handy and intelligence-bursting questions that you can call your own. I suppose there are ethical qualms about taking credit for what generative AI generates for you, so you’ll need to wrestle with revealing the source of your questions. Mull that over. You decide.

The questions that you ask generative AI to produce can be of high quality. I mention this because trying to ferret out questions on your own, such as doing an Internet search for useful questions, might not surface the most enlightening questions to be asked. With generative AI, you can specify whether you want typical questions or extraordinary ones.

I recently discussed in my column the importance of asking yourself questions during a life review, a type of mental health therapy that is gaining traction for all ages and during all stages of life, see the link here. That’s yet another example of where questions are weighty and vital.

At the opening above, I opted to garner attention by pointing out that good questions can make you look smart, but a whole slew of other benefits arise by being able to ask top-notch questions. You can use questions to figure things out that might otherwise be unknown to you, good questions can help you start a conversation, insightful questions can aid in exploring new areas of interest, and so on.

Plus, by using generative AI to create questions, you can also learn from generative AI by observing the questions produced. In that sense, you might improve your own ability to derive engaging and enthralling questions. The more questions you see or encounter, the better the odds that the exposure will rub off on your personal capability to devise especially useful questions.

Questions make the world go round.

Please mentally prepare yourself for a memorable and informative ride on the use of generative AI to derive questions.

For my ongoing readers and new readers, today’s discussion continues my in-depth series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines the use of generative AI to devise questions. A strong case can be made that knowing about questions and doing questioning is part and parcel of well-being and mental acuity. In addition, mental health professionals ought to continually improve their ability to ask questions since it is a fundamental precept for performing mental health therapy with patients and clients.

Previously, I have examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was featured in the episode, see the link here).

Other vital postings in my column include in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

On with the show.

Essentials Of Good Questions

I will start by diving into research on developing and utilizing questions.

First, do you think that composing questions is an art or a science?

My question is somewhat of a trick question, I sheepishly admit. The viewpoint historically was that asking questions was artful and required a sense of creativity and inventiveness. Later, a realization arose that coming up with questions can be a type of scientific phenomenon. There are systematic ways to compose and assess questions. It isn’t exclusively an art form.

Thus, the contemporary answer is that questions are devised by dipping into both art and science.

Some would convincingly argue that one of the most important arenas requiring close attention to devising questions entails the creation of surveys or polls. The vexing issue about those types of questions is that they must stand on their own. Usually, surveys or polls are a take-it-or-leave-it affair. The person reading the question does not have any means of discussing or debating the question. They must interpret and understand the question at face value.

In addition, those who write the questions have to be careful to word them so they can be understood without any added interaction. A question must be worded in a relatively iron-tight way. If those answering the questions are all over the map in interpreting what a question says, the survey or poll results will almost certainly be undercut and possibly a waste.

In a pertinent research study entitled “The Science of Asking Questions” by Nora Cate Schaeffer and Stanley Presser, Annual Review of Sociology, 2003, these key points were made (excerpts):

  • “Research on the wording of survey questions flourished in the first two decades after the modern sample survey was invented, culminating in Stanley Payne’s 1951 classic, The Art of Asking Questions.”
  • “For many years, there was little basis for quarreling with the title of Stanley Payne’s 1951 classic. Asking questions was an art.”
  • “Now, however, a body of work has accumulated that lays a foundation for the science of asking questions.”
  • “Researchers can make decisions about some aspects of question-wording informed by the results of theoretically motivated experimental comparisons.”

I bring this up to highlight a bit of a warning for you.

It goes like this.

If you come up with questions that are devised by generative AI, you had better hope and seek to ensure that the questions won’t themselves be questioned by whomever you pitch them to. I say this because unless you are comfortable explaining a question, you are going to be on the hook for what it means. This in turn could expose you in the sense that you might have to admit that you didn’t come up with the questions at hand.

I suppose you could try to dance around any questions about the questions. A ploy some use is to mockingly state that if the person doesn’t grasp the question, they obviously are incapable of answering the question. You then quickly change the topic and hope that no further interrogation takes place.

If you are willing to fess up that it isn’t your self-derived question, you can then urge the person to try and make sense of the question. Perhaps they can. One possible hiccup could be that you didn’t deliver the question as it was fully composed. Or perhaps the question requires a person to contemplate the question heavily rather than trying to answer off the top of their head. Tell the person to sleep on it.

Survey questions don’t typically involve a back-and-forth. The gist here is that by striving toward questions that could stand on a survey, you are increasing the odds that the person you pose the questions to will readily comprehend them. This reduces the chances of them bouncing the question back at you, imploring you to explain it.

A somewhat terse retort is to tell the person that the question is the question, i.e., live with it and do not ask any questions. That might quiet them down (or enrage them, be cautious in such sneaky schemes).

The Structure Of Questions

Now that we are talking about questions, I’d like to dig into the structure of questions.

In a relatively modern analysis of questions undertaken in 2020 and entitled “Advances in the Science of Asking Questions” by Nora Cate Schaeffer and Jennifer Dykema, Annual Review Of Sociology, 2020, the researchers made these notable points (excerpts):

  • “We distinguish several broad classes of question characteristics, including question topic, conceptualization and operationalization of the target object and response dimension, question type and response dimension, the response format (or question form), response categories, question structure, question specifications, question-wording, and question implementation.”
  • “Question wording encompasses question length and measures of the complexity of language, including vocabulary and grammatical structure.”
  • “Question specifications: features that aid respondents in comprehension or recall, such as task instructions, definitions, examples, parenthetical statements, and reference periods.”
  • “Question stem: the part of a question in a battery that is repeated; repetition may be implied or explicit in a given battery item.”

I know that the points made by the researchers might seem rather daunting.

The good news is that you don’t need to know how to compose good questions since you’ll be using generative AI to do the heavy lifting for you. As noted, of course, you might want to pick up some tips and techniques along the way.

What you will need to keep in mind is the manner in which you tell generative AI to devise questions for you. In the parlance of the generative AI field, crafting your instructions or directives for entry into generative AI is known as prompt engineering. If you are interested in prompt engineering overall, see my extensive analysis of prompting techniques and tactics at the link here.

This is how things work.

You enter a prompt to get generative AI to do something. If your prompt is confusing or vague, you might not get what you expect to see. All in all, during the classes I conduct on prompt engineering, I point out that using generative AI is somewhat like a box of chocolates: you never know for sure what you might get.

I will in a moment share with you the core elements worthy of potentially mentioning in your prompt when asking generative AI to generate questions for you. The crux is that there is a kind of structured approach that will improve the odds of your getting quality questions. There is a science to it.

The Setting For The Questions Is Crucial

A crucial ingredient entails the setting or circumstances under which you aim to use questions.

I’d like to briefly explore when you might opt to use questions.

Here are some likely relevant circumstances:

  • Questions to be used in work settings at the office or when remote.
  • Questions in social settings such as at parties or gatherings.
  • Questions that can be used at school.
  • Questions covering particular domains.
  • Questions when buying an item.
  • Questions for use at conferences and events.
  • Questions for interviews (for candidates and also interviewers).
  • Questions to bring up at an organized debate.
  • Etc.

Think about those various settings and situations.

Which settings or circumstances might be coming up for you in which you would like to make use of intelligent questions?

There are two ways to consider the crafting of questions for those circumstances:

  • (1) Prepare beforehand. Use generative AI to prepare questions so that you are ready when needed.
  • (2) Devise on-the-spot. Use generative AI at the moment to come up with questions that you can immediately deploy.

Let’s unpack those two approaches.

You can prepare beforehand by using generative AI to craft a handful of questions that you will have in your back pocket. When the right opening arises, you spring forth with your in-hand questions. I noted earlier that you might be shopping for a used car and want to have questions ready for use. Doing your preparation upfront can make things more productive versus having to scramble for questions on the fly.

Suppose you don’t prepare beforehand or that a setting arises unexpectedly, and you dearly wish you had intelligent questions for the asking. You can shift into real-time mode. Generative AI apps are usually available 24×7 and accessed just about anywhere that you can get an Internet connection. Using your smartphone, you can readily have generative AI derive questions in real-time.

I dare say that the real-time option is somewhat dicey. You might end up wrestling with generative AI to come up with suitable questions. You might be delayed or confounded due to a lousy Internet connection, or the AI server gets bogged down. If you are trying to hide the fact that you are relying on generative AI, you will need to come up with a devil of a tale as to why you are pounding away on your smartphone right in front of the person you are presumably going to launch questions at.

As the old saying goes, preparation is the key to success.

Specifying The Questions That You Want

When you tell generative AI to make questions for you, this can be done in a slim fashion or a more detailed manner.

The slim fashion merely consists of indicating what the circumstances are. For example, you might tell generative AI that you are looking to rent an apartment and need questions about what to ask of a landlord. That’s it. The AI will then produce a list of questions. Period, end of story.

A difficulty with the slim approach is that you might get all sorts of responses from the generative AI. Perhaps the AI will opt to list twenty questions, though maybe you just wanted a handful. The questions might be run-of-the-mill and not seem very insightful. There is also the possibility that the AI will try to answer the questions in addition to posing the questions. Sometimes all you want is questions, but the default parameters within generative AI often trigger the production of answers too.

The gist is that you might be better off being more specific about what you want.

Here is my handy indication of aspects you might want to specify or mention in a more detailed prompt that you might enter:

  • (a) Explicitly and directly indicate that you want questions to be generated.
  • (b) Indicate the setting, circumstances, domain, or other contextual requirements.
  • (c) If you don’t want answers too, say that you only want questions and do not want answers.
  • (d) Specify how many questions you want.
  • (e) Specify whether you want short or long questions.
  • (f) Stipulate the tone of the questions such as serious versus lighthearted.
  • (g) Indicate the hardness of the questions, easy or challenging.
  • (h) State if you want multi-layered questions that would have subordinated questions.
  • (i) Mention whether the questions are to be unique or can be commonplace.
  • (j) Provide any additional customizations or particulars.

That might seem excessive and a boatload of work simply to get some questions. No worries. You can be loosey-goosey about having to specify those facets. There isn’t a need to be extremely precise and overly detailed.

The good thing about generative AI is that usually, the overall semblance of what you want is sufficient to get the AI to pretty much do what you have in mind. Just make sure the AI is aware of the ballpark and that ought to do it.
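
To make that concrete, here is a minimal sketch of how you might assemble such a detailed prompt programmatically and send it to a generative AI app. This is purely illustrative: it assumes the OpenAI Python client and an API key in your environment, and the model name and helper function are my own inventions rather than anything prescribed.

```python
# A minimal sketch of turning the (a)-(j) checklist into a reusable prompt.
# Assumes the OpenAI Python client (pip install openai) with an API key set in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def build_question_prompt(setting: str, count: int = 5, tone: str = "serious",
                          difficulty: str = "challenging", unique: bool = True) -> str:
    """Assemble a detailed question-generation prompt from the checklist items."""
    prompt = (
        f"Generate exactly {count} questions for this setting: {setting}. "   # items (a), (b), (d)
        "Provide only the questions and do not provide answers. "             # item (c)
        f"Keep each question short, with a {tone} tone, "                     # items (e), (f)
        f"at a {difficulty} level of difficulty. "                            # item (g)
    )
    prompt += ("Make the questions unique rather than commonplace."           # item (i)
               if unique else "Commonplace questions are acceptable.")
    return prompt

prompt = build_question_prompt("negotiating the price of a used car at a dealership")
response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name; use whatever you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is simply that each checklist item becomes one clause of the prompt. You can be as loose or as precise with those clauses as the situation warrants.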

Generative AI As Question Generator

Now that I’ve taken you through the fundamentals of questions, we are ready to shift fully into AI mode.

I’m sure you’ve heard of generative AI, the darling of the tech field these days.

Perhaps you’ve used a generative AI app, such as the popular ones of ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use, into a new era of NLP fluency that is at times of startling or amazing caliber.

The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive pattern-matching across the vast volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and henceforth generates responses to posed prompts by leveraging those identified patterns. It is said to be mimicking the writing of humans.

I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

Back to the crux of things.

I will proceed to examine further the use of generative AI in this context.

This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.

A few quick comments before we launch into using ChatGPT.

If you are going to try to do the same prompts that I show here, realize that the probabilistic and statistical properties will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.

There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.

We are ready to get underway with ChatGPT.

I will ask a foundational question.

  • My entered prompt: “Can generative AI be used to come up with questions, and if so, why would someone use generative AI in that way?”
  • ChatGPT generated response: “Yes, generative AI can be used to come up with questions, and there are several reasons why someone might use it in that way. Using generative AI to come up with questions can save time, inspire creativity, and ensure that no important aspects are overlooked when exploring a topic or preparing for a task. Types of uses include: (1) Brainstorming and Ideation, (2) Personalized Learning, (3) Research and Exploration, (4) Survey and Data Collection, (5) Content Creation and Engagement, (6) Problem-Solving, and (7) Interview Preparation.”

You likely observe that ChatGPT is familiar with the topic.

This is a good sign for this discussion. If ChatGPT had not previously encountered data training on a topic at hand, there would be less utility in using the AI. The AI would have to be further data trained, such as the use of Retrieval-Augmented Generation (RAG), as I discuss at the link here.

One aspect that wasn’t mentioned in that response, but that I think is worth highlighting, is that you can use a prompting technique known as flip-the-script (see my coverage at the link here), which is related to this concept of getting generative AI to produce questions. In brief, you tell the AI to start asking you questions, and you then try answering them. This is construed as flipping the script in the sense that usually you are the one asking the questions and generative AI is answering them.

Why would you do this?

Suppose you are preparing for a job interview. You can have generative AI pretend to be the interviewer. The AI will proceed to ask you questions. You aim to answer the questions. The AI will then rate and assess your answers.
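
If you like to tinker, here is a minimal sketch of that flip-the-script interview practice, under the same assumptions as my earlier checklist sketch (the OpenAI Python client, an illustrative model name, and a made-up system instruction).

```python
# A minimal flip-the-script sketch: the AI plays interviewer, asks questions,
# and rates your answers. Same assumptions as the earlier sketch (OpenAI
# Python client, illustrative model name, and a made-up system instruction).
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": (
        "You are a job interviewer for a data analyst role. Ask me one interview "
        "question at a time. After I answer, briefly rate my answer and then ask "
        "the next question."
    )}
]

for _ in range(3):  # three rounds of practice questions and answers
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    ai_text = reply.choices[0].message.content
    print(f"\nAI interviewer: {ai_text}")
    messages.append({"role": "assistant", "content": ai_text})
    my_answer = input("Your answer: ")
    messages.append({"role": "user", "content": my_answer})
```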

That is different than what I’ve been discussing here. Here, you are not going to interact per se with generative AI and instead merely ask for a list of questions. You are also not trying to answer the questions. The crux is that you want questions that you can ask of someone else.

Let me show you an example of this.

  • My entered prompt: “I am thinking about buying a used car and I want you to come up with questions that I should be asking. Make sure they are good questions and not merely obvious or usual questions. I want a set of high-quality questions that are beyond just looking up a list online.”
  • ChatGPT generated response:

“Here’s a set of thoughtful, high-quality questions you can ask when considering buying a used car.”

Ownership History: “What was the primary use of the car by previous owners (e.g., commuting, long-distance travel, city driving)?”

Maintenance and Repairs: “Can you provide detailed records of any recurring issues or repairs?”

Modifications: “Has the car been modified in any way, either mechanically or cosmetically?”

Environmental Exposure: “Has the car been exposed to extreme climates or conditions (e.g., very cold winters, coastal air, extreme heat)?”

Usage Patterns: “What was the typical driving environment (e.g., rural, suburban, urban)?”

Pending Repairs or Maintenance: “Is there any upcoming maintenance or repairs that the current owner is aware of but hasn’t yet addressed?”

Reason for Selling: “What is the primary reason for selling the car now?”

End of ChatGPT response

Those appear to be reasonable questions.

I suppose with a moment of reflective thought, you could have come up with those same questions. In that sense, they don’t seem to be especially out of the ordinary. Almost ho-hum.

I don’t think we can really ding AI for this. There are only so many questions that one might ask when buying a used car. This isn’t rocket science.

You can at least agree that rather than having to come up with questions from scratch, using AI helped provide a useful set of starter questions. Based on those questions, you can modify them as needed, add more, and otherwise proceed.

Keep in mind that you can tell generative AI to refine the questions, add more, and all told continue to make sure you end up with questions that meet your needs. This is not a one-and-done path. Generative AI is intended to be conversational.

Going Further With Generative AI On Generating Questions

To illustrate the practice of telling generative AI to refine some derived questions, I will show you an example that came up while I was performing a series of mini-experiments associated with getting AI to produce questions.

I had told generative AI to generate questions because I was going to a medical convention and wanted to impress some cardiologists that I would be meeting with.

Here’s what happened on my first round of getting questions.

  • My entered prompt: “I am going to a medical convention. I want to ask some cardiologists a few questions that they will find interesting and thought-provoking. Provide me with such questions. I only need four questions. Keep the questions relatively short so I can memorize them. The tone of the questions should be positive and not seem to denigrate or insult them.”
  • ChatGPT generated response:

“Here are four short, thought-provoking questions you can ask cardiologists at the convention.”

(1) “How do you see AI and machine learning shaping the future of personalized cardiac care?”

(2) “What recent advancements in non-invasive imaging are most exciting for early heart disease detection?”

(3) “Can you share your thoughts on the role of lifestyle medicine in preventing cardiovascular conditions?”

(4) “How are emerging technologies, like wearable devices, impacting patient monitoring and outcomes in cardiology?”

End of ChatGPT response

Those questions seem reasonable, and I could potentially use them.

My disappointment with the questions was that they didn’t seem very technical. The questions are readily understandable by just about anyone. I was hoping for questions that were steeped in the field of cardiology. I wanted something that would knock their socks off and be comprehensible only to an experienced cardiologist.

I went ahead and told generative AI to try again.

  • My entered prompt: “Those questions do not seem very technical. I want questions that contain lots of cardiology technical terminology and that are clever or hard for even an experienced cardiologist to answer.”
  • ChatGPT generated response:

“Here are four technically challenging and clever questions for cardiologists.”

(1) “How do you balance the risk of increased thromboembolism with the benefits of left atrial appendage closure devices in patients with atrial fibrillation who are contraindicated for long-term anticoagulation?”

(2) “In cases of refractory heart failure with preserved ejection fraction (HFpEF), how do you approach the decision between advanced diuretic therapy and potential candidacy for atrioventricular nodal ablation with biventricular pacing?”

(3) “With the growing evidence supporting the role of coronary microvascular dysfunction in ischemia with non-obstructive coronary arteries (INOCA), how do you integrate coronary flow reserve measurement into your diagnostic and treatment protocols?”

(4) “Given the complexities in managing patients with hypertrophic cardiomyopathy and concomitant arrhythmias, what are your criteria for opting for septal myectomy versus alcohol septal ablation, particularly in patients with borderline gradients?”

End of ChatGPT response

Now that’s what I had in mind.

Looks awesome.

I have no idea if those questions make any sense, but they sure look complex. There are probably cardiologists reading this right now who are either impressed at the level of the questions or possibly laughing their heads off because maybe those questions are cloaked in jargon but not especially challenging.

Watch Out For Generative AI Going Off The Rails

Speaking of sensibility, I can’t readily discern whether the medical vernacular produced by generative AI in the above example is valid or not. It certainly looks remarkable. The problem though is that I might look foolish by presenting those questions to someone with expertise in that realm.

Some of you might be thinking that generative AI would always produce ironclad questions that are perfectly on target. Thus, if that were the case, it doesn’t matter whether I understand the questions. The questions are presumably ready for use with specialists in the given domain.

Sorry to say that you’ll need to be cautious and retain a whiff of doubt about the questions produced by generative AI.

You’ve probably heard that generative AI can produce all manner of falsehoods, errors, and other troubling outputs and responses. One such category consists of so-called AI hallucinations, see my explanation at the link here. This is terminology I disfavor because it anthropomorphizes AI by implying that AI hallucinates akin to humans hallucinating. Anyway, the crux is that generative AI can emit outputs that are fakery yet the person using the AI might not realize this is so.

The crucial notion is that generative AI at times produces fictitious commentary that has no grounding in facts.

Can this happen when generative AI is merely producing questions?

Yes, absolutely.

A question might be based on faulty data training. A question might contain biases and discriminatory language or assumptions. A question could be utterly out of this world and have zero validity. Etc.

Generally, the odds are probably low that this will happen. Part of the reason is that questions are typically short in length, ergo, the roll of the dice is not quite as risky as it would be for a lengthy essay. Keep your fingers crossed at all times and you might be lucky.

The dilemma is that when you employ the questions there could be problematic wording embedded but you didn’t realize it. You might be walking around with a question that contains a wording timebomb. It will blow up on you when you decide to use it in the heat of battle. Splattered egg might end up on your face.

One means to try and avoid this unpleasant fate would be to tell generative AI to double-check that the questions are apt and correctly worded. I’ve covered that type of prompting technique at the link here. You can either ask for a double-check at the get-go of your prompt or after seeing the listed questions, you can ask the AI if it is sure about the validity of those questions.

Another approach consists of asking generative AI to answer the questions. There is a chance that in the act of answering the questions, the AI will detect that a question was worded improperly and alert you. You could even consider feeding the questions into a different generative AI app, serving as a kind of second opinion.
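
For the programmatically inclined, here is a minimal sketch of that second-opinion idea, again assuming the OpenAI Python client; the reviewing model name is illustrative and could just as well be a different vendor’s model entirely, and the sample questions are abridged stand-ins rather than the full wording from above.

```python
# A minimal second-opinion sketch: hand the generated questions to a separate
# model call and ask it to flag anything misworded or factually shaky.
# Same assumptions as before (OpenAI Python client, illustrative model names);
# the example questions are abridged stand-ins, not the full wording from above.
from openai import OpenAI

client = OpenAI()

generated_questions = [
    "How do you integrate coronary flow reserve measurement into your diagnostic protocols?",
    "What are your criteria for septal myectomy versus alcohol septal ablation?",
]

review_prompt = (
    "Review the following questions for correct terminology, clear wording, and any "
    "embedded assumptions that might be wrong. For each one, say whether it is sound "
    "or needs rewording, and explain briefly:\n\n"
    + "\n".join(f"{i + 1}. {q}" for i, q in enumerate(generated_questions))
)

second_opinion = client.chat.completions.create(
    model="gpt-4o",   # illustrative; ideally a different model or vendor than the one that wrote the questions
    messages=[{"role": "user", "content": review_prompt}],
)
print(second_opinion.choices[0].message.content)
```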

The bottom line is to never blindly believe anything that generative AI spits out. Make sure to verify, double-check, and ensure you are relying on proper and correct indications. That is a rule of thumb for whatever type of use you undertake with generative AI.

Live by that piece of sage advice.

A Fun And Engaging Example Of Obtaining Questions

Let’s do one more example and then I’ll provide some concluding remarks.

This one is fun.

Take a look.

  • My entered prompt: “I am going on a first date. I want to impress my date by asking intelligent questions. Prepare one set that has somewhat conventional questions. Prepare a second set of questions that are more unusual and might catch the eye of my date as being above and beyond the norm.”
  • ChatGPT generated response:

“Here are some thoughtful and impressive questions you can ask on a first date.”

1.1 “What’s something you’re passionate about that you could talk about for hours?”

1.2 “If you could learn any new skill instantly, what would it be and why?”

1.3 “What’s the most interesting book or article you’ve read recently?”

1.4 “If you could have dinner with any historical figure, who would it be and what would you want to ask them?”

“Here are some more innovative and less typical questions you can ask on your first date.”

2.1 “If you could create a museum dedicated to any one thing, what would it be, and what would the main exhibit look like?”

2.2 “What’s a seemingly insignificant moment in your life that ended up having a huge impact on who you are today?”

2.3 “If emotions were colors, how would you paint your happiest memory?”

2.4 “Imagine you’re tasked with designing a city from scratch. What would be the first thing you’d prioritize, and why?”

End of ChatGPT response

I trust that you noticed how I asked ChatGPT to provide two sets of questions.

I wanted you to realize that when you ask for questions, you can do various sets at the same time. In this instance, I wanted somewhat ordinary questions that I could use without taking a risk. I also wanted questions that were a stretch and might be advantageous if I felt daring.

If you were on a first date, which of those questions would you use?

If you were on a first date and the other person used those questions, what would be your reaction?

I’m sure that some of you might be thinking that anyone who has to use generative AI to come up with questions for a first date must not have any rizz (that’s present-day slang for charisma or charm).

For those of you who are smarmy or snooty about this, consider that the people you’ve been on first dates with might have been using generative AI to prep for the date without your knowing it. And, assuming you thought the person was a charmer, maybe you fell for it.

Don’t be quite so judgmental, thanks.

Conclusion

Congratulations, you are now versed in coming up with questions, including how to do so via generative AI. You ought to pat yourself on the back.

The famous scientist and philosopher Francis Bacon said this about questions: “A prudent question is one-half of wisdom.” That reinforces the earlier point that being able to artfully and skillfully come up with questions is vital for daily existence.

I realize that most of us can devise good questions without resorting to the use of AI. Can we feel free to lean into generative AI from time to time? Sure. Might you glean something notable and lifelong about asking questions due to using generative AI for this purpose? I believe so.

Let’s let Einstein have the last word on this.

According to Albert Einstein: “Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.”

I think that makes abundant sense, though if you want to question it, do so. It is right to keep questioning all that can be questioned.
