In today’s column, I examine two recent tweets posted by OpenAI CEO Sam Altman that have caused quite a heated stir. The upshot is that his postings suggest we might be at, near, or even passing through a momentous juncture of AI advancement referred to as the AI singularity. The implication is that artificial general intelligence (AGI) or possibly artificial superintelligence (ASI) is essentially staring us directly in the face.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

About The Nature Of Intelligence

Before I get to the provocative tweets, I’d like to set some foundational considerations.

The place to begin is by thinking about intelligence. The deal is this: A compelling case can be made that intelligence begets intelligence. In other words, it is possible to produce even more intelligence by accumulating and interplaying intelligence with intelligence. That seems like a reasonable assertion and has an intuitive air about it.

If you buy into that assumption, we can start to toss around various offshoot theorems.

The most prominent offshoot is that there is a chance of instigating a chain reaction of intelligence. Think of this as akin to a nuclear explosion. A nuclear chain reaction ignites atomic activity, which then fuels its own further expansion. Perhaps the same can be said of intelligence. There might be a phenomenon coined as an intelligence explosion. Intelligence might seemingly “ignite” and foster the expansion of more and more intelligence, proceeding at an incredible pace.

You might know that when nuclear chain reactions were first devised during World War II, some concern was raised that once a reaction began, it might proceed nearly indefinitely. The thought was that the earth’s atmosphere might get swept into the reaction, and the next thing you know, the entire planet would be engulfed in flames and ruin. This famous moment of grave concern has been portrayed in movies such as the blockbuster 2023 film Oppenheimer.

Reflecting on that potentially destructive outcome gives rise to a similar question in the context of intelligence.

What might happen if there is an unfettered intelligence explosion?

The answer is that no one can say for sure what will happen. Abundant theories are floating around. Some take the upbeat viewpoint and declare that this would be the best thing ever. Others are downbeat and worry that utter obliteration could be afoot.

The Impetus For An Intelligence Explosion

It seems unlikely that any individual human is going to somehow have an intelligence explosion in their brain that miraculously produces immense intelligence far beyond anything we’ve ever seen. For those of you waiting on that possibility, sorry, it seems highly doubtful. I’ve met doting parents who think their beloved prodigy is going to have that occur. Best of luck with that.

Okay, where then would an intelligence explosion have a modicum of a chance of occurring?

Aha, the answer for that is via a computer system that is running AI. Maybe an AI system that is operating on servers in the cloud would lean into an intelligence explosion. The AI would fuel itself and produce vast quantities of artificially produced intelligence.

Let’s go with that concept and see where it takes us.

First, contemplate how this AI intelligence explosion is going to get underway. One possibility is that humans such as AI researchers and AI developers provide the spark for this to happen. There they are, playing around with the AI and tweaking it, when bam, an intelligence explosion gets launched. It is conceivable that those humans did this knowingly and intentionally, while the other possibility is that it happens by accident. For further assessment on this, see my discussion at the link here.

Second, it could be that the AI stirs itself to initiate an intelligence explosion.

Maybe the AI has some embedded element that perchance spurs the rest of the AI to start multiplying in terms of increasingly accumulating computational-based intelligence. Since this seems like something we don’t want AI to do of its own accord, various AI containment techniques exist (see my coverage at the link here), and numerous AI human-values alignment approaches are being adopted (see my analysis at the link here).

Third, the eyebrow-raising question is how far the intelligence explosion would go. Is there no limit to how much AI-based intelligence there might be? Would the intelligence fill up whatever computer servers were accessible and stop expanding at that point? Or might the AI grab up other computer servers, as many as could be found, and keep expanding?

Does the server constraint even matter, since perhaps intelligence isn’t bound by the underlying computing and can keep going anyway?

The AI Singularity Is On Our Minds

By and large, this speculated AI-driven intelligence explosion is referred to as the AI singularity.

A prevailing hunch is that AI is going to reach a juncture where it will start to “explode” into more and more intelligence. Some theorize that the AI singularity will take place in the briefest of split seconds, happening so rapidly that no human can somehow watch it occur. Not everyone agrees with that supposition. Maybe it will take minutes, hours, days, weeks, months, years, decades, etc. Conjecture abounds.

Will humankind be able to do anything about the AI singularity?

That’s also quite a sticking point.

One viewpoint is that we ought to slow down AI development until we have figured out how to deal with the AI singularity. Political proposals exist to ban certain types of AI in hopes that we won’t unknowingly fall into the AI singularity; see my discussion at the link here.

You can certainly imagine why the AI singularity sends chills up humankind’s spine. We don’t know what the outcome will be. Perhaps after the AI singularity, we will have AI that can cure cancer and save humanity from all sorts of maladies. A gloomy view is that such AI is an existential risk and will indubitably enslave us or wipe us out totally.

The other hair-raising issue is that we aren’t sure if we can stop it. Would it happen so quickly that we are caught off-guard and can’t pull the plug? Perhaps it happens at a measured pace, but we want to garner the hoped-for benefits such as aiding humanity, so we let it keep going. The downside is that the AI opts to trick us by playing dumb, the so-called AI dimwit ploy (see my description at the link here), and we let the AI singularity continue until AI does the mighty takeover.

Which Era Are We In Now

For the sake of discussion, I shall divide the singularity into three main AI eras:

  • (1) Pre-Singularity AI era. The AI singularity hasn’t yet happened, and we are presumably making our way toward it.
  • (2) Underway-Singularity AI era. The AI singularity gets underway; we don’t know how long it will last (instantaneous, seconds, minutes, days, weeks, months, years, etc.).
  • (3) Post-Singularity AI era. At some point, the AI singularity is said to have been roughly completed, assuming it isn’t never-ending, and we find ourselves in a post-singularity circumstance.

I’d like you to take a moment, pour yourself a glass of fine wine to sip, and sit down attentively for my next question to you.

Which of those three AI singularity eras are we in right now?

Go ahead, take a reflective moment then announce aloud which era we are in. I’ll wait, thanks.

I’d bet that most people would say that the answer is we are in the first era, pre-singularity. It seems obvious and indisputable. There is no apparent evidence to indicate that we are in the midst of the second era, the underway-singularity, and furthermore, absolutely no evidence to support that we are in the third and final era, the post-singularity.

Some brazenly claim your eyes deceive you.

Let’s dive into that supposition next.

The Simulation Theory Or Hypothesis

It turns out that the assumed certainty of our being in the first era, or pre-singularity, rests on a rather mind-bending reason: you see, it is because that’s what you’ve been told to believe.

Boom, drop the mic.

It could be that we are all immersed in a simulation that is being run by AI. The AI singularity already has taken place. AI then established a massive simulation to house humankind. Within that simulation, the AI is making us all believe that the AI singularity hasn’t yet taken place. Alternatively, maybe evildoer humans have done this in conjunction with AI. Lots of permutations and combinations come to mind.

It is an incredible ruse that you, I, and nearly everyone else have fallen for.

I am guessing that your mind is invoking thoughts of (spoiler alert!) the famous movie The Matrix. The gist of suggesting that we aren’t in the first, pre-singularity era of AI and that we are instead in the second or third era is a popular sci-fi plotline. Been there, done that.

Maybe it’s true.

I hear you scoffing. Why would the AI allow us to make movies that reveal the truth? Doesn’t seem to add up. The retort is that by making it a fictional account, the AI hopes that humankind sets aside the overarching premise as absurd and merely a made-up tall tale. That way, if any humans start to figure out that the AI singularity has indeed occurred, those humans will be ridiculed as overly imaginative fools chasing a sci-fi contrivance.

It is a head-spinner.

Sam Altman Tweets A Storm

Speaking of spinning heads, we can now drop into the tweets of Sam Altman. You likely know that he is the CEO of OpenAI, the AI company that makes the widely and wildly popular ChatGPT. There are an estimated 300 million weekly active users of ChatGPT around the globe. It is a staggering statistic.

All in all, Sam Altman has become a kind of informal spokesperson about the status of AI and the newest ongoing advances in AI. When he speaks or writes, his words are given weighty consideration. Since OpenAI operates on a secretive, proprietary basis regarding its AI, it is difficult to know where things stand in terms of its AI advancements. Thus, reading the tea leaves, such as tweets by Sam Altman, is a prevalent pastime.

On January 4, 2025, Sam Altman, CEO of OpenAI, posted these two separate tweets on X:

  • Posted at 10:00 a.m.: “I always wanted to write a six-word story. here it is: near the singularity; unclear which side.”
  • Posted at 10:08 a.m.: “(it’s supposed to either be about 1. the simulation hypothesis or 2. the impossibility of knowing when the critical moment in the takeoff actually happens, but i like that it works in a lot of other ways too.)”

Carefully examine those tweets.

What do you make of the remarks?

One interpretation of those remarks is that we are near enough to the AI singularity that it is no longer a vague, far-off futuristic conception. We are presumably still in the first era, pre-singularity, but are now butting up against the second era. It is within our ready grasp. Maybe Sam Altman has seen things within OpenAI’s latest AI tech that prompt him to genuinely believe that the AI singularity is right around the next corner (how near or far: days, weeks, months, years?).

This has generated much controversy, since many others within the AI community do not see whatever he seems to be seeing. There is no consensus that we are on the cusp of the AI singularity. In that case, is there something going on at OpenAI that the rest of the world doesn’t know about?

If so, should Sam Altman and OpenAI be societally obligated to let everyone else see what appears to be the nearing of the AI singularity? In essence, the AI singularity, as noted earlier, has such immense consequences that a company landing near it ought to be a responsible citizen of humankind and ensure that humanity can partake in anticipating and dealing with the AI singularity.

For my coverage of numerous AI ethics and AI law considerations about outsized advances in AI, see the link here.

Being Unclear About Which Side

An additional interpretation has to do with the comment in the first tweet that it is unclear which side of the AI singularity we are currently on. Give that a mental chewing. Plus, combine that comment with the second tweet about the simulation hypothesis.

Here’s what might be deduced.

It could be that we are past the first era and have already entered the AI singularity, maybe even zoomed into the post-singularity. Perhaps everything around us is part of an elaborate AI simulation. Thus, we are now on the other side. We aren’t on the pre-singularity side of things; we are beyond that stage.

The bonus remark in the latter half of the second tweet, that we presumably won’t know when the critical moment happens, seems to further cement the idea that whether we are near the AI singularity is up for grabs. Maybe we are, maybe we are not. We might have slid past it and not know that we did.

Reactions Are Aplenty

It might seem surprising that those two rather brisk tweets caused a bit of a firestorm in the AI community. A keystone reason is that this topic overall has become quite serious business, and daily there is hand-wringing concerning the existential risk of AI. Some were upset that the remarks were overly ambiguous and couched as a mystery or riddle. Come out and take a firm stand, some exhort. Say what you mean. Don’t be so cryptic.

Another expressed qualm was that someone of such top stature in the AI community ought to be less casual about hand-waving when it comes to the AI singularity. Further, putting aside the idea that we don’t necessarily know what side of the singularity we are on, at least be more specific about how it is that we are nearing the AI singularity. Provide tangible direct evidence so that others can double-check it to gauge the veracity of the claim being made.

Well, there you have it, two tweets at the beginning of the new year and already lots of provocative AI considerations underway. Let’s give the revered Albert Einstein the last word on this for now: “Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.”

Yes, indeed, keep learning, living, and questioning as AI advances since we all have a mighty big stake in what the outcome will be.
