For a while now we’ve been talking about transformers, the neural network architecture behind today’s frontier models, as a transformative technology, no pun intended.
But these attention-based systems now face competing methodologies that might push AI’s capabilities even further ahead, toward what people are already calling AGI, or artificial general intelligence, where AI matches or surpasses human performance across a wide range of tasks.
To illustrate one of these new alternatives, I’ll draw liberally from a recent presentation by Konstantin Rusch at TEDxMIT, and also get some help from ChatGPT along the way to put some of the more technical material into plain English.
Using Oscillatory State-Space Models
“New models are all powered by either transformers or liquid foundation models,” Rusch said, “but what if I told you there’s a completely radically new approach, a new AI system that is more efficient, more expressive, and more capable?”
Then Rusch explained oscillatory state-space models, where you take a mass-spring system, plot its position over time, and observe what he called “beautiful harmonic oscillations.”
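To make that concrete, here’s a minimal sketch (my own, not from the talk) that simulates a mass-spring system and traces its position over time; the mass, spring constant, and starting conditions are arbitrary choices:

```python
# Illustrative only: a minimal mass-spring simulation of the kind Rusch
# describes, whose position traces a harmonic oscillation over time.
import math

def simulate_spring(m=1.0, k=1.0, x0=1.0, v0=0.0, dt=0.01, steps=1000):
    """Integrate x'' = -(k/m) x with semi-implicit (symplectic) Euler."""
    x, v = x0, v0
    positions = []
    for _ in range(steps):
        v += -(k / m) * x * dt   # update velocity from the spring force
        x += v * dt              # then update position
        positions.append(x)
    return positions

positions = simulate_spring()
# The exact solution here is x(t) = cos(t), so the simulated position
# should swing back and forth between roughly -1 and 1.
print(min(positions), max(positions))
```

Plot those positions against time and you get exactly the kind of harmonic oscillation Rusch shows.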
This type of thing, he said, is abundant in biological neuroscience.
“If we look at a single biological neuron, the firing of the action potential is in fact nothing else than some sort of relaxation oscillator,” he noted. “But even if you look at whole networks, we find that those oscillatory dynamics emerge.”
He cited the example of the EEG invented by Hans Berger in 1924.
“Can we somehow abstract the essence, the core idea of oscillators, and build an AI system based on that?” he asked.
After sitting through this, I still wasn’t completely clear on oscillatory state-space models, so I asked ChatGPT to explain the concept in a simple way, in two paragraphs.
Here’s what I got:
“Imagine you’re watching a spring bounce up and down or tracking temperatures that rise and fall with the seasons. An oscillatory state-space model is a math tool that helps us describe and predict these kinds of back-and-forth patterns. It breaks the motion or changes into ‘states’ – hidden parts of a system that evolve step by step – and shows how they circle around in a regular, repetitive way. Instead of steadily going up or down, these states swing like a pendulum, capturing the idea that something is repeating over time.

“By using state-space models, we can build equations that keep track of what’s happening inside a system (like how stretched a spring is or how high or low the temperature might be), as well as what we can see or measure from the outside. When these equations describe recurring ups and downs, we call them oscillatory. This approach helps us forecast future behavior – like figuring out when a spring will be at its lowest point or a climate cycle will peak – so that we can plan or control things effectively, whether that’s in engineering, economics, or even brain research.”
So this methodology helps keep track of what’s happening inside a system – not just something like a spring, but maybe a neural network…
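To make the idea of oscillatory hidden states concrete, here’s a toy sketch (my own illustration, not Rusch’s actual model): a two-state linear state-space model whose transition matrix is a rotation, so the hidden state swings like a pendulum rather than decaying or blowing up.

```python
# Toy illustration: a linear state-space recurrence x[t+1] = A x[t],
# where A is a rotation matrix. A rotation's eigenvalues lie on the
# unit circle, so the hidden state oscillates forever instead of
# shrinking to zero or exploding.
import math

theta = 0.1                       # rotation per time step (arbitrary)
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def step(x):
    """Advance the hidden state one time step (no external input)."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

x = [1.0, 0.0]                    # initial hidden state
trace = []
for _ in range(200):
    x = step(x)
    trace.append(x[0])            # "observe" the first state component

# The observed component traces out a cosine wave: a pure oscillation.
```

The “states” ChatGPT describes are the two numbers in `x`; watching only `trace` from the outside, you see the repeating up-and-down pattern the hidden dynamics produce.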
Emerging Neurological Patterns
Rusch continued to talk about how these systems mimic the human brain.
“We would expect the system to be very, very expressive, but then it would also further bridge the gap between artificial and biological intelligence,” he said.
Contrasting oscillatory state-space models with recurrent neural networks, he pointed out how the relationships between weighted connections work, and how there’s “dense connectivity” in the system. He also addressed the principle of sparse representation. All of that adds up to one thing: faster and more powerful systems.
“If you have a standard recurrent model, the computational time is proportional to the sequence length,” he explained. “For our approach, we can show that the computational time is actually proportional to log 2 of N. This is a tremendous speedup.”
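For a sense of where that log 2 of N comes from, here’s a sketch (my own, using the classic Hillis–Steele scan rather than anything specific to Rusch’s system): each round doubles the span of the partial sums, so about log₂(N) rounds cover the whole sequence. The rounds are simulated one after another here, but every update within a round is independent and could run simultaneously on parallel hardware.

```python
# Hillis-Steele inclusive scan: each round lets element i absorb the
# partial sum ending 2**r positions to its left, doubling the reach of
# the sums every round. ceil(log2(N)) rounds suffice for N elements.
import math

def parallel_scan(values):
    out = list(values)
    n = len(out)
    rounds = 0
    offset = 1
    while offset < n:
        # All n updates in this round read only the previous round's
        # values, so they are independent of each other (parallelizable).
        out = [out[i] + (out[i - offset] if i >= offset else 0)
               for i in range(n)]
        offset *= 2
        rounds += 1
    return out, rounds

sums, rounds = parallel_scan(list(range(1, 17)))   # the numbers 1..16
print(sums[-1], rounds)   # total 136 in 4 rounds, since log2(16) = 4
```

A sequential running total over 16 numbers takes 16 steps; here the same result emerges after only 4 parallel rounds, which is the kind of depth reduction the quote is pointing at.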
Using Associative Parallel Scans
Midway through the explanation, Rusch mentioned a term that’s useful in understanding how these models make things a lot faster.
Again, if you go and ask ChatGPT, you find that an associative parallel scan is a method in computing that combines the elements of a sequence with an associative operation, such as addition, so that partial results can be computed simultaneously. The model tells us this, succinctly:
“Imagine you have a list of numbers, like the distances your car traveled each hour on a road trip. A parallel scan could quickly figure out the running total at every hour—hour 1, hour 2, hour 3, and so on—by splitting the list into chunks and having each chunk do its own cumulative work. Afterward, it stitches those chunks together into one final list that shows the total distance you’ve covered at each step along the way. Because this happens in parallel, it can be way faster than making a single processor do every step one at a time.”
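That road-trip description can be sketched in a few lines of Python; the chunk size and mileage numbers below are made up for illustration:

```python
# Chunked running totals, as in the road-trip example: each chunk
# computes its own cumulative sums (these could run in parallel), then
# the chunks are stitched together by adding each chunk's starting
# offset from the chunks before it.
def chunked_running_total(values, chunk_size=4):
    # Phase 1: local running totals within each chunk (parallelizable).
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    local = []
    for chunk in chunks:
        totals, acc = [], 0
        for v in chunk:
            acc += v
            totals.append(acc)
        local.append(totals)
    # Phase 2: stitch the chunks into one final list of running totals.
    result, offset = [], 0
    for totals in local:
        result.extend(t + offset for t in totals)
        offset = result[-1]
    return result

miles = [60, 55, 70, 40, 65, 50]      # distance driven each hour
print(chunked_running_total(miles))   # running total at every hour
```

Phase 1 is where the parallelism lives: every chunk does its cumulative work independently, and only the cheap stitching step at the end is sequential.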
That helps explain some of the math behind these concepts, but toward the end of his presentation, Rusch moves into very different territory.
Universality and Task Robots
Universality is essentially the idea that a single universal Turing machine can simulate any other Turing machine – that one system of computation can do the work of all the others.
Rusch mentions this idea in transitioning from the theory to actual lab experiments.
“We trained a humanoid robot in our lab to perform some kitchen chores,” he explained. “Oscillating dynamics emerge …physically meaningful representations, representations which were extremely close to human trajectories.”
This in itself points to a solution for some of the biggest challenges that people note about modern robotics.
The argument often runs like this – yes, AI is supremely intelligent in processing information and creating things, but what about physical dexterity?
Too many people have misconceptions about what it would take for AI, placed into a robot, to mimic human motions. This work starts to lay out how that would happen in detail: robots capably doing the physical work we take for granted as our exclusive domain – washing the dishes, taking out the garbage, caring for a person’s hygiene and other personal needs, or cooking your favorite meal with whatever you have in the refrigerator.
In a way, it’s that last jump that we haven’t yet seen. Our computers are super-smart, but don’t have physical bodies. I would submit to you that all of that is going to be changing soon.