One way to get a sense of the recipe for stronger artificial intelligence is to talk to the experts convening at today’s conferences and trade shows to brainstorm about what the near future is likely to look like.

There was CES early this month, and there are other industry presentations throughout the year, but there are also symposia and smaller conference events where people close to the industry talk about probabilities, priorities, and solutions.

As I’ve been talking to some of these people informally, and listening to formal presentations, some common themes are emerging about what we will need in order to usher in the next generation of AI – artificial intelligence systems that are more vibrant and more capable than what we have right now.

Here’s some of the secret sauce people are talking about for making advances in AI in 2025.

Physics-Aware Systems

In order to really be impressive, AI systems need to understand the world around them. That’s hard, because they don’t have biological bodies that come naturally endowed with all kinds of gear and equipment for navigating three-dimensional space.

However, AI systems are learning physics the same way they learn everything else – through enormous amounts of training data, and deep neural networks that home in on the right results through processes like backpropagation and stochastic gradient descent.
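To make that loop concrete, here is a toy, pure-Python sketch of stochastic gradient descent. The one-parameter “model” and the numbers are my own illustration, not any real physics system: the “law” to be learned is simply y = 2x, and the training loop nudges a single weight toward it, one randomly sampled example at a time.

```python
import random

# Toy training data: the "physics" to learn is y = 2 * x.
data = [(x, 2.0 * x) for x in range(1, 11)]

w = 0.0      # single model parameter, starts out wrong
lr = 0.005   # learning rate

# Stochastic gradient descent: pick one example at random,
# compute the gradient of the squared error, and nudge the weight.
for step in range(1000):
    x, y = random.choice(data)
    pred = w * x
    grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
    w -= lr * grad

print(w)  # ends up near 2.0 -- the system has "learned" the law
```

A real physics-aware model does the same thing with billions of parameters and video or sensor data instead of number pairs, but the update rule is conceptually identical.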

So we’re getting closer to that part of strong artificial general intelligence or AGI.

Persistent Memory

Another big element of AI getting stronger is systems becoming better able to remember what they experienced in the past.

That comes in many forms – records of prior interactions with humans, sensory data from the world around you, and other data that is either experiential or informs the machine’s experience.

For example, when you ask ChatGPT about these concepts, it defines dynamic memory as “storage and retrieval of information over long periods” and lifelong learning as “the ability to continuously acquire, retain, and refine knowledge without catastrophic forgetting.”

Catastrophic forgetting?

That’s sort of poetic, in a way.
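Poetic, but also easy to demonstrate. Here is a toy sketch (my own illustration, using the same kind of one-parameter model and made-up tasks) of why sequential training causes catastrophic forgetting: a single shared weight is trained on an old task, then on a new one, and the new training simply overwrites the old knowledge.

```python
import random

def sgd_fit(w, data, lr=0.005, steps=500):
    """Fit the one-parameter model y = w * x by stochastic gradient descent."""
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(x, 2.0 * x) for x in range(1, 11)]  # old knowledge: y = 2x
task_b = [(x, 5.0 * x) for x in range(1, 11)]  # new knowledge: y = 5x

w = sgd_fit(0.0, task_a)   # learn task A: w ends up near 2
w = sgd_fit(w, task_b)     # then learn task B: w gets dragged to 5

# The shared parameter now encodes only task B; task A has been forgotten.
error_on_a = abs(w * 3 - 2.0 * 3)  # prediction error on a task A input
print(w, error_on_a)
```

Lifelong-learning research is largely about making that second training run add knowledge without wiping out the first.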

Physical Interaction and Sensorimotor Skills

AI is only as strong as its hardware and its footprint in the physical world.

In other words, there’s not a lot a computer can do from a desktop. It has to be able to move and interact with physical systems. That means having complex sensory systems, but it also means having a physics-aware robotic body that can navigate three-dimensional space.

When we talk about robot dexterity, this is the category that we’re tackling.

Access to Training Data

And then, also, AI is only as good as its training data. It has to have accurate input in order to produce useful output. Here’s where people are concerned with “hallucination,” or AI confidently making false statements.

This is also where you can talk about inherent bias, and problems with miscalibration in AI systems.

That might not be a big deal if you’re making recommendations on music, but it can be a very big deal if the AI is responsible for, say, approving loans, or helping people to get jobs.

Multidimensional AI

Here’s another idea I picked up recently from some experts who were talking about the path toward AGI itself.

AI, they argue, is not linear. It’s multidimensional. It goes on not just one trajectory, but several, which combine to form the elements of what we see as the artificial intelligence frontier.

This is where I think the work of Marvin Minsky comes into play. In his book “The Society of Mind,” Minsky was quite specific in his theory that the human brain is not one computer, but a collection of collaborating components that work together to perform human cognition in real time.

You could also call them “agents.”

This year, we have people talking about “agentic AI” and multi-agent collaborative intelligence systems. When computers can share the work and delegate tasks, they can begin to build complex systems that function more like the human brain.
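The delegation pattern itself is simple enough to sketch. The agent names and tasks below are hypothetical stand-ins of my own, not any particular agentic framework: a coordinator routes each sub-task to a matching specialist and collects the results, much like Minsky’s collaborating agents.

```python
def research_agent(task: str) -> str:
    # Stand-in for an agent specialized in gathering information.
    return f"[research] notes on {task!r}"

def writing_agent(task: str) -> str:
    # Stand-in for an agent specialized in drafting text.
    return f"[writing] draft for {task!r}"

# Registry mapping a skill name to the agent that handles it.
AGENTS = {"research": research_agent, "writing": writing_agent}

def coordinator(plan):
    """Delegate each (skill, task) step to the matching specialist
    agent and collect the results in order."""
    return [AGENTS[skill](task) for skill, task in plan]

report = coordinator([
    ("research", "robot dexterity"),
    ("writing", "summary of findings"),
])
print(report)
```

In real agentic systems the “specialists” are themselves language models with tools, but the coordinator-and-registry shape is the same.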

Early science fiction writers had it wrong – it’s not about linearly scanning human brain activity and replicating it. It’s about artificial systems evolving to the point where they work like a human brain and deliver much of the same performance.

That’s some of what I’ve been hearing over the past few weeks as we get ready for a banner year in AI. Keep an eye on this space – because there’s going to be a lot going on, not just in terms of the models and the hardware, but also in terms of planning, and hopefully, a regulatory framework. We have to reckon with the power of AI to harness it in the right ways. And that will take work.
