In today’s column, I examine a longstanding expression that we all have undoubtedly used from time to time, namely that when making a decision you sometimes say that you will sleep on it. The gist of this common remark is that you want more time to think about the decision that needs to be made.
Humans say this quite frequently.
Does the same notion of sleeping on it apply to generative AI?
I say yes, it does.
This analysis of an intriguing proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various AI complexities (see the link here).
Unpacking The Human Version Of Sleep-On-It
Some background about the human meaning of sleep-on-it will help set the stage for bringing AI into the picture.
In a research article entitled “Think, Blink Or Sleep On It? The Impact Of Modes Of Thought On Complex Decision Making” by Ben Newell, Kwan Yao Wong, Jeremy Cheung, and Tim Rakow, Quarterly Journal of Experimental Psychology, April 2009, these key points were made (excerpts):
- “Decision makers have been encouraged to make ‘snap’ decisions (‘blink’) or to leave complex choices to the powers of unconscious thought (‘sleep on it’).”
- “The unconscious thought theory (UTT) states that optimal weighting of attributes occurs naturally during periods of unconscious thought and that this optimal weighting leads to a closer connection between idiosyncratic preferences and objective choices for unconscious than for conscious or immediate thinkers.”
- “There was little evidence for the superiority of choices made ‘unconsciously’, but some evidence that conscious deliberation can lead to better choices.”
The core of the matter is that by saying you want to sleep on it, you do so in lieu of making a snap judgment. You want more time to mull over your options.
When someone says that they need to sleep on a weighty consideration, it gives them a kind of polite or deft sidestep from having to instantly render a decision. Sometimes this is acknowledged and agreed to. They are granted a reprieve. In other cases, such as a pressing in-the-moment circumstance, they aren’t allowed more time and must decide on the spot or in the blink of an eye.
Thus, one notable element is that the sleep-on-it proclamation buys you added time to decide.
There is another facet that we need to ponder.
Do you really need to sleep on the matter at hand?
In other words, if you say that you will sleep on it, in one sense this is just saying that you want a day or so to figure things out. It doesn’t necessarily mean that the sleeping itself will do the deciding. The sleeping contrivance is merely a quick way to convey that you want more time, such as overnight, and will have a response the next day.
Controversy Over The Sleep Element
Sticklers might insist that if you say you will sleep on it, you must indeed make your decision while being asleep. You put your head on your pillow, fall into slumber, and voila, you wake up with your decision.
This seems a bit silly as a stipulation and altogether questionable. I’d dare suggest that much of the time we generally accept that you aren’t truly going to decide while asleep. Again, the utterance is more about gaining time to decide.
In any case, much research has been undertaken about what seems to happen while you are essentially thinking unconsciously during sleep. I say unconsciously because when asleep you don’t seem to be conscious in the normal way we think of it. You aren’t knocked out per se, nor are you likely abundantly aware of your surroundings.
To recap, sleep-on-it consists of:
- (1) Gets you additional time to make a decision in contrast to a snap judgment.
- (2) Usually suggests at least an overnight delay, possibly longer, possibly shorter.
- (3) Might truly involve a decision-making process while you are asleep.
- (4) Does not necessarily require that a decision be rendered during sleep.
- (5) Sleep might merely refresh your mind, making it easier for you to make a decision.
- (6) You might then decide while fully awake, benefiting from the time delay and the granted sleeping effort.
Bringing AI Into The Big Picture
What does sleep-on-it mean in an AI context?
I’ll share my thoughts on this but first want to engage you in something that spurred my resolve to write about the matter.
There is a great deal of postulation about what AI is or isn’t, and what AI can and cannot do, oftentimes stated by AI experts and non-AI experts alike. I’ve been extensively covering the role of generative AI in the medical and healthcare sectors, especially in the realm of AI and mental health (see my comprehensive analysis at the link here). Sometimes, I witness remarks about AI that stir me to provide vital commentary.
Here’s what raised my eyebrows. A write-up about a medical doctor who, after seeing a patient, was able the next day to make a further effort because of sleeping on the matter, suggested that AI presumably could not do anything of a similar kind. Ergo, we are led to believe that AI, even as we know it today, is categorically inferior.
The topic came up in a recent article on medical decision-making entitled “Echoes of Concern: AI and Moral Agency in Medicine” by Sarah Hull and Joseph J. Fins, JAMA Cardiology, September 2, 2024, which made these points (excerpts):
- “Although this case underscores the ongoing relevance of physical examination skills even in this era of advanced imaging, perhaps its more salient lesson is the importance of clinical rumination and worry.”
- “If the task of diagnostic evaluation had been delegated to a futuristic stethoscope turbocharged with AI, its bell and diaphragm would sit quietly in a drawer overnight, unworried about what it ‘heard.’”
- “Although it might have algorithmically recommended an echocardiogram without delay — or perhaps not, depending on the fidelity and reliability of its acoustic capabilities and software – it certainly would not have the moral awareness to lose proverbial sleep over whether it made the right decision.”
- “Does this matter? Should we care that AI is intrinsically incapable of worrying about us?”
The crux is that I often hear or read that AI lacks a sense of moral agency and that therefore we are seemingly supposed to condemn AI or assume that AI cannot do this or that.
Let’s get this cleared up.
Laying The Facts Straight Out
The circumstance of a human mentally percolating on something and coming up with additional ideas or a sense of resolve can arise after sleeping on it, as I noted above. This might entail the sleep itself, or it might simply be that the extra time was used to do deeper thinking.
In a manner of speaking, we can certainly do likewise with AI.
For example, I’ve recently covered that the latest ChatGPT generative AI by OpenAI, known as o1, can potentially arrive at improved answers partially due to an added approach referred to as chain-of-thought processing (see the link here). The chain-of-thought approach forces the AI to take more computational cycles, working stepwise on a systematic basis, and to calculate answers to a more extensive degree. You need to wait a bit longer to get a reply, but hopefully the wait is worth the benefit of better responses.
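To make that concrete, here is a minimal sketch, using the OpenAI Python SDK, of nudging a model to spend more effort simply by asking for stepwise reasoning in the prompt. This is not OpenAI’s internal o1 mechanism; the model name, prompts, and fast-versus-deliberate framing are illustrative assumptions on my part.

```python
# A minimal sketch (not OpenAI's internal o1 mechanism) of asking a model to
# "take more time" by requesting step-by-step reasoning in the prompt itself.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set; the
# model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

question = "Which of these two treatment plans is riskier, and why?"

# Fast mode: ask for an immediate, concise answer.
quick = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Answer briefly: {question}"}],
)

# Deliberate mode: ask the model to reason stepwise before answering, which
# consumes more tokens (and thus more compute, more time, and more cost).
deliberate = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Work through this step by step, listing each consideration "
            f"before giving a final answer: {question}"
        ),
    }],
)

print(quick.choices[0].message.content)
print(deliberate.choices[0].message.content)
```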
I liken this to playing chess. If you are time-pressured and must make split-second chess moves, the amount of mental exploration is highly limited. Playing at a measured pace allows you to examine many more moves and to try to look ahead at what might happen on the chessboard. AI-based chess-playing systems face a similar conundrum, namely that if less time is available, the search space must be narrowed, while if time is plentiful, the search space can be more extensively assessed.
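For readers who like to see the mechanics, here is a toy sketch of that tradeoff: an iterative-deepening search that simply keeps looking one ply deeper until its time budget runs out. The stand-in game, fixed branching factor, and random evaluation function are placeholders of my own, not a real chess engine.

```python
# A toy sketch of time-bounded game-tree search. Iterative deepening keeps
# searching one ply deeper until the time budget is exhausted, so more time
# translates directly into deeper lookahead. The "game" here is a made-up
# placeholder, not chess.
import random
import time

BRANCHING_FACTOR = 4

def evaluate(state):
    # Placeholder evaluation of a position.
    return random.uniform(-1.0, 1.0)

def moves(state):
    # Placeholder move generator: every position has the same number of moves.
    return [state + (i,) for i in range(BRANCHING_FACTOR)]

def negamax(state, depth):
    # Standard negamax: best achievable score for the side to move.
    if depth == 0:
        return evaluate(state)
    return max(-negamax(child, depth - 1) for child in moves(state))

def deepest_search_within(budget_seconds, start_state=()):
    # Iterative deepening: finish one ply, then try the next if time remains.
    deadline = time.monotonic() + budget_seconds
    completed_depth = 0
    depth = 1
    while time.monotonic() < deadline:
        negamax(start_state, depth)  # the final iteration may overshoot a bit
        completed_depth = depth
        depth += 1
    return completed_depth

print("Plies searched with 0.05 seconds:", deepest_search_within(0.05))
print("Plies searched with 0.5 seconds:", deepest_search_within(0.5))
```

The design point is the same one the chess analogy makes: the search is cut off by the clock, not by running out of ideas.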
My point is that with generative AI, we can readily tell the AI to take more time and computationally explore further than it might otherwise have done. This can be set up to automatically take place. In essence, you can simply grant that more time is always allowed, and henceforth the AI will consume additional computational cycles at your behest to find a deeper answer (of course, this is likely to cost you something extra if paying for the computational usage, but that’s a story for another day).
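As a rough illustration of making the extra time the default, here is a small wrapper in which every request automatically carries a deliberate-mode instruction and a generous token allowance. The system prompt, model name, and token limit are assumptions for illustration, not a documented OpenAI setting.

```python
# A minimal sketch of making "take more time" the default: every request is
# automatically wrapped with an instruction to deliberate before answering.
# The system prompt, model name, and token limit are illustrative assumptions.
from openai import OpenAI

ALWAYS_DELIBERATE = (
    "Before answering, consider the question from several angles and weigh "
    "the tradeoffs, then give your best final answer."
)

client = OpenAI()

def ask(prompt: str) -> str:
    # Every call gets the deliberate-mode instruction and a large output budget,
    # trading extra compute (and cost) for a potentially deeper answer.
    response = client.chat.completions.create(
        model="gpt-4o",
        max_tokens=2000,
        messages=[
            {"role": "system", "content": ALWAYS_DELIBERATE},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a bold tagline for an eco-friendly coffee brand."))
```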
Example Using ChatGPT To Sleep On It
Go with me that we are setting aside the sleeping aspects and focusing on the idea of getting more time to make a decision. I logged into the widely popular ChatGPT and decided to see if the generative AI could help me come up with a marketing tagline.
Here’s the dialogue:
- My entered prompt: “I need a catchy tagline for my eco-friendly coffee brand. Can you generate a few options?”
- ChatGPT generated response: “Sure! Here are three options: ‘Brew Better, Live Greener.’; ‘Sip Sustainably, Thrive Naturally.’; ‘Eco Brews for a Better Tomorrow.’”
- My entered prompt: “I want something more unique and bold. These feel a bit too standard. Can you come up with something different?”
- ChatGPT generated response: “Let me process that. Would you like me to try again immediately, or would you like to come back after a break to refine the creative direction (in a sense, sleep on it)?”
- My entered prompt: “Take a break, use more time.”
I then logged out.
The next day, I logged back in.
- My entered prompt: “What did you come up with about my catchy taglines?”
- ChatGPT generated response: “After thinking it over, here’s a new batch of taglines, considering the adventurous and bold tone you’re going for: ‘Fuel Your Fire, Save the Earth.’; ‘Dare to Brew, Defy Waste.’; ‘Sustainable Sips, Bold Moves.’”
- My entered prompt: “These are much better! I like ‘Dare to Brew, Defy Waste’ a lot. It captures the rebellious spirit I wanted.”
- ChatGPT generated response: “Glad you like it! Taking some time allowed me to adjust the creative direction and offer something more aligned with your vision.”
As you can see, the additional computational time appeared to yield an improved response.
One aside that I feel compelled to mention. I abhor the wording of most generative AI responses that ostensibly anthropomorphizes the AI. This is unnecessary and egregiously misleading. I’ve repeatedly noted that the AI makers can easily change the wording that comes out of generative AI to not mislead in this fashion (see my discussion at the link here).
Back to the matter at hand.
You might quibble about whether the proposed taglines were that much better than the initial set, but my emphasis is that the AI was granted additional time to work up a response. Sometimes the response might be improved, but admittedly, other times the improvement might seem marginal or minimal.
That’s the way the ball bounces.
Benjamin Franklin Knew To Sleep On It
A few final thoughts.
An AI-powered system, even one in a medical context, can be established to make one-time instantaneous responses or be set up to provide both immediate and longer-term answers. You can tell the AI to give an instant answer because you want the quickest response right away. Plus, you can tell the AI to consume more computational cycles, work, let’s say, overnight, and take a deeper exploration, akin to the chess playing that I mentioned before.
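As a structural sketch of that dual-mode setup, the snippet below returns an immediate answer and then schedules a deeper pass to run later, say overnight. The placeholder functions and the thread-timer scheduling are my own illustrative assumptions, not a description of any particular medical AI product.

```python
# A minimal sketch of the dual-mode setup described above: the system gives an
# immediate answer right away and queues a deeper "overnight" pass on the same
# case. The model-calling functions are placeholders for whatever AI service is
# actually used; scheduling uses a plain thread timer rather than a production
# job queue.
import threading

def quick_answer(case_notes: str) -> str:
    # Placeholder: a fast, shallow pass (e.g., a single short model call).
    return f"Immediate read on: {case_notes[:40]}..."

def deep_answer(case_notes: str) -> str:
    # Placeholder: a slower pass with more compute (longer reasoning, more
    # retrieval, or an ensemble of models).
    return f"Overnight re-analysis of: {case_notes[:40]}..."

def handle_case(case_notes: str, overnight_delay_seconds: float = 8 * 3600):
    print(quick_answer(case_notes))           # give the clinician something now
    timer = threading.Timer(                  # schedule the "sleep on it" pass
        overnight_delay_seconds,
        lambda: print(deep_answer(case_notes)),
    )
    timer.start()
    return timer

# Example usage (short delay so the demo finishes quickly):
t = handle_case("Patient with faint diastolic murmur, normal ECG.", 2.0)
t.join()
```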
This brings up a slew of complex AI ethical questions and AI legal questions.
Suppose that a vendor providing AI has opted to shortcut the allowed computational processing time. The AI spits out an answer. The answer is a roughly calculated one. Had the AI been given a nudge to process more deeply, perhaps a different or better answer might have been found. What are the proper limits for this kind of setup? If life-or-death decisions are at stake, you can undoubtedly discern how important these design and fielding decisions are.
I want to clarify too that I am not suggesting that today’s AI has any semblance of moral agency. Modern AI is not sentient. We seem to be heading toward artificial general intelligence, or AGI, which, if we get there, will indubitably change our perspective on AI moral considerations (see my discussions at the link here and the link here, for example). We aren’t there yet.
Generative AI is a means of mimicking human writing. The AI has been computationally data-trained on human written works found on the Internet. When you interact with generative AI, it is essentially parroting, to some degree, the patterns found in how humans express themselves.
I mention this because we can do a similar form of mimicry with the human sleep-on-it phenomenon. AI can be readily guided to expend additional computational cycles to seek a deeper or better response. This is not a moral choice made by the AI itself; it is a choice made by the humans who build the AI and/or make use of the AI.
One last comment.
There are tradeoffs in the way that humans do things and the way that AI does things.
The idea that a human might sleep on it is pretty much an idiosyncratic matter. Sometimes a doctor might sleep on it, but other times they might decide that their first decision was sound and there isn’t any need to give it further thought (despite the possibility that their initial judgment could be improved if they did sleep on it). Or they might be so overwhelmed with patient cases that they simply do not have the mental bandwidth to sleep on it, certainly not across the myriad cases they see each and every day.
For an AI system, you can essentially guarantee a sleep-on-it will occur, if that’s what you want to do (and can afford to do so). All cases and all patients. Each and every day. No exceptions. This would be an ironclad proposition.
I’m not saying that this makes AI better than humans. Nope. So please don’t troll me. I am noting that humans involve a great deal of variability, though we do get the human mind at play. AI of today is not on par with the human mind, but we can get a form of consistency and extensiveness from automation if willing to do so.
In situations where both humans and AI work together, we can possibly get the best of both worlds. A human doctor who sometimes sleeps on it, coupled with an AI-based system that always does, could work hand-in-hand toward providing the highest possible quality of care. An aspirational goal.
Let’s go to the last word for now.
As the famed Benjamin Franklin noted in 1772 about his own version of a sleep-on-it regimen:
- “I find at length where the balance lies; and if, after a day or two of further consideration, nothing new that is of importance occurs on either side, I come to a determination accordingly. I think I can judge better and am less liable to make a rash step.”
True words then, still true words today.