Today’s AI is amazing—tools like ChatGPT can do things that seemed impossible just a few years ago.

But those of us who grew up watching Star Trek, Blade Runner, or 2001: A Space Odyssey know it’s just the beginning.

Unlike the AIs in those fictional worlds, or indeed humans, today’s AI can’t fully explore, interact with, and learn from the world. If it could, then just like the super-useful android Data in Star Trek (or a human), it could learn how to solve any problem or do any job, not just whatever it had initially been trained to do.

Some of the world’s top AI researchers, including ChatGPT creator OpenAI, believe that building machines this smart, a capability known as Artificial General Intelligence (AGI), is the holy grail of AI development. AGI would allow machines to “generalize” knowledge and handle virtually any task a human can perform.

There are some pretty big problems we have to solve before we get there, though. Further breakthroughs in AI, huge amounts of investment, and widespread societal change will all be needed.

So here’s my rundown of the five biggest obstacles we have to overcome if we want to build the bright, fully automated, AI-powered future we were promised in movies (what could go wrong?).

1. Common Sense And Intuition

Today’s AI lacks the capacity to fully explore and exploit the world it exists in. As humans, we’ve adapted via evolution to be good at solving real-world problems, using whatever tools and data we can. Machines haven’t – they learn about the world through digital data distilled, at whatever level of fidelity is possible, from the real world.

As humans, we build up a “map” of the world that informs our understanding and, therefore, our ability to succeed at tasks. This map is informed by all of our senses, everything we learn, our innate beliefs and prejudices, and everything we experience. Machines, analyzing digital data moving over networks, or collecting it with sensors, can’t yet bring this depth of understanding.

For example, with computer vision, an AI can watch videos of birds in flight and learn a lot about them – maybe their size, shape, species, and behavior. But it’s unlikely to realize that, by studying their behavior, it could work out the principles of flight and apply that learning to building flying machines, as humans did.

Common sense and intuition are two aspects of intelligence that are still exclusively human and vital to our ability to navigate ambiguity, chaos and opportunity. We will probably need to work out their relationship to machine intelligence in far greater depth before we arrive at AGI.

2. Transferability Of Learning

One of the abilities we’ve developed through the sheer breadth of our worldly interactions is taking knowledge learned from one task and applying it to another.

Today’s AI is built for narrow tasks. A medical AI may be able to analyze scans, consult with patients, assess symptoms, and prescribe treatment. But ask it to diagnose a broken refrigerator, and it will be clueless. Although both tasks rely on pattern recognition and logical reasoning, today’s AI simply can’t process data in a way that helps it solve problems beyond those it was explicitly trained to solve.

Humans, on the other hand, can adapt problem-solving, reasoning and creative thinking skills across entirely different domains. So, a human doctor, for example, might use their diagnostic reasoning to troubleshoot a faulty fridge, even without formal training.

For AGI to exist, AI must develop this ability to apply knowledge across fields without complete retraining. When machines can make those connections without being retrained on an entirely new dataset, we’ll be one step closer to true general intelligence.
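
To make the limit concrete, here’s a minimal sketch of how “transfer” works in today’s narrow AI, using PyTorch’s torchvision models: a network pretrained on one task (ImageNet classification) is reused for a new one by swapping only its final layer. The 12-class bird-species task is a made-up example, and note the constraint the article points to – this trick works between closely related tasks in the same domain, not across genuinely different ones.

```python
# A minimal sketch of transfer learning in today's narrow AI.
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (1,000 everyday object classes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so only the new task head will learn.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a hypothetical new task,
# e.g. distinguishing 12 bird species.
model.fc = nn.Linear(model.fc.in_features, 12)

# From here, standard training on the new dataset updates only model.fc.
# The backbone's "knowledge" transfers to related vision tasks, but it
# cannot carry over to an unrelated domain (say, fixing refrigerators).
```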

3. The Phygital Divide

We humans interface with the world through our senses. Machines have to use sensors. The difference comes down to evolution again, which has honed our ability to see, hear, touch, smell and taste over millions of years.

Machines, on the other hand, rely on the tools we give them. These may or may not be the best way to gather the data they need to solve a given problem. They can interface with external systems in the ways we allow them to – whether that’s digitally through APIs or physically via robotics. But they don’t have a general-purpose set of tools they can adapt to interact with any aspect of the world, in the way that we have hands and feet.

Interacting with the physical world in as sophisticated a way as we can – to assist with manual labor, for example, or to operate a computer system it wasn’t specifically connected to – will require AI that is able to bridge this divide. We can see this shaping up in early iterations of agentic AI tools like OpenAI’s Operator, which uses computer vision to understand websites and access external tools. However, more work will have to be done to enable machines to independently explore, understand, and interface with physical and digital systems before AGI becomes more than a dream.
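
As a toy illustration of the point (not how Operator actually works), here’s the tool-calling pattern that underpins agentic AI: the model can only touch the outside world through whatever adapters we register for it. All tool names here are hypothetical stand-ins.

```python
# A toy sketch of the tool-calling pattern behind agentic AI.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as a capability the agent is allowed to use."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("read_webpage")
def read_webpage(url: str) -> str:
    return f"<contents of {url}>"   # stand-in for a real HTTP + vision step

@tool("press_button")
def press_button(label: str) -> str:
    return f"pressed '{label}'"     # stand-in for a real robotics/UI action

def run_agent_step(action: str, argument: str) -> str:
    # The agent's reach ends at this registry: no adapter, no access.
    if action not in TOOLS:
        return f"error: no tool named '{action}'"
    return TOOLS[action](argument)

print(run_agent_step("read_webpage", "https://example.com"))
print(run_agent_step("open_fridge", ""))  # fails: outside its given tools
```

Humans improvise new tools on the fly; an agent like this is bounded by the adapters its developers thought to provide, which is exactly the divide the section describes.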

4. The Scalability Dilemma

The amount of data and processing power needed to train and then deploy even today’s AI models is enormous. But the amount needed to achieve AGI could, according to our current understanding, be orders of magnitude larger. There are already concerns over the energy footprint of AI, and increasingly large infrastructure projects will be needed to support this ambition. Whether there is a willingness to invest to the necessary extent will largely depend on AI companies proving they can deliver ROI with prior generations of AI technology (such as the genAI wave many companies are surfing right now).

According to some experts, we are already seeing diminishing returns from simply throwing more processing power and data at the problem of building smarter AI. The most recent updates to ChatGPT – OpenAI’s “o” series of reasoning models – as well as the recently unveiled challenger DeepSeek, have focused on adding reasoning and logic capabilities instead. The trade-off is that these models require more power during the inference phase, when the tool is in the hands of a user, rather than at the training stage. Whatever the solution, the fact that AGI is likely to require processing power orders of magnitude greater than what’s available now is another reason it isn’t here already.
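
For a rough sense of the numbers, here’s a back-of-envelope calculation using the common “6 × parameters × tokens” approximation for transformer training compute. The GPT-3 figures (175 billion parameters, roughly 300 billion training tokens) are publicly reported; the “AGI-scale” multipliers are purely illustrative assumptions.

```python
# Back-of-envelope training compute, using the common 6*N*D approximation
# (FLOPs roughly equal 6 x parameter count x training tokens).
def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

gpt3 = training_flops(175e9, 300e9)  # ~3.15e23 FLOPs
print(f"GPT-3-scale training: ~{gpt3:.2e} FLOPs")

# Hypothetical: 100x the parameters trained on 100x the data.
bigger = training_flops(175e11, 300e11)
print(f"100x params, 100x data: ~{bigger:.2e} FLOPs "
      f"({bigger / gpt3:,.0f}x more compute)")
```

Scaling both parameters and data by 100× multiplies training compute by 10,000× – a hint at why “just build it bigger” runs into energy and infrastructure walls.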

5. Trust Issues

This is a non-technological obstacle, but that doesn’t in any way make it less of a problem. The question is, even if the technology is ready, is society ready to accept humans being replaced by machines as the most capable, intelligent and adaptable entities on the planet?

One very good reason it might not is that machines (or those creating them) haven’t yet earned the required level of trust. Think about how the emergence of natural-language genAI chatbots caused shockwaves as we came to terms with the implications for everything from jobs to human creativity. Now imagine how much more fear and concern there will be when machines arrive that can think for themselves and beat us at just about anything.

Today, many AI systems are “black boxes,” meaning we have very little idea of what goes on inside them or how they operate. For society to trust AI enough to let it make decisions for us, AGI systems will have to be both explainable and accountable to a degree far beyond the AI systems of today.
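
For a flavor of what “explainable” can mean in practice, here’s a minimal sketch of one established technique, permutation importance, using scikit-learn. It shuffles each input feature and measures how much the model’s accuracy drops, hinting at what a “black box” actually relies on. It’s one simple tool among many, not a full answer to AGI-level accountability.

```python
# Permutation importance: a simple peek inside a "black box" model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and record the drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The five features the model leans on most heavily.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```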

So, Will We Ever Get To AGI?

These are the five most significant challenges that the world’s best AI researchers are trying to crack today as AI companies race toward the goal of AGI. We don’t know how long it will take them to get there, and the winners might not be those who are in the lead today, close to the start of the race. Other emerging technologies, such as quantum computing or new energy solutions, could provide some of the answers. But there will be a need for human cooperation and oversight at a level beyond what we’ve seen so far if AGI is going to safely usher in a new age of more powerful and useful AI.
