In the fourth and final installment of our “The Mind’s Mirror” book review, let’s look at the appendix: a timeline of AI advancement as presented by the authors, Daniela Rus and Gregory Mone.

This timeline starts in 1943 with the work of Warren McCulloch and Walter Pitts, their ‘McCulloch-Pitts neuron’ and their paper “A logical calculus of the ideas immanent in nervous activity.” (Note that at the time, the word ‘neural’ hadn’t even been popularized – that’s how early this research was.)

The next entry is a reference to the work of Alan Turing, who introduced the Turing test in 1950. That test would turn out to be a very important part of how we talk about AI to this day.

But there’s also the Dartmouth Summer Research Project in 1956 – I’ve written about that before on this blog, because of its fundamental role in ushering in the AI age, and the participation of great names like Claude Shannon and Marvin Minsky.

The timeline also covers the advent, a few years later, of the evocatively named ADALINE and MADALINE models, which provided a solution to echoes on phone lines.

Later, the 1969 publication of Perceptrons by Minsky and Papert led to what the authors called the first AI winter, as it demonstrated the limitations of single-layer neural networks at the time.

According to this timeline, Kunihiko Fukushima revived the field in 1975.

By the 1980s, we had ‘expert systems’ – early forays into applying AI to the problems of particular sectors.

To most of us, the 1980s was the era of HyperCard and Print Shop, but behind the scenes, some were working on big improvements in AI. Rus cites John Hopfield’s work on neural networks in 1982, the development of the Boltzmann machine by Hinton in 1985, and the popularization of backpropagation in 1986.

With all of that in hand, Rus goes into more advancements through the rest of the decade, like Sutton’s work on reinforcement learning in 1988, and the introduction of support vector machines in 1990. Yann LeCun is mentioned again for advances in convolutional neural networks in 1998, and there’s also a reference to Hinton’s work on deep belief networks in 2006.

Next, there’s the development of AlexNet in 2012, followed by the advent of generative adversarial networks (GANs) in 2014, and the achievements of the AlphaGo program the year after.

Rus marks 2017 as the year of the rise of transformer architectures, and then cites three major advances in three consecutive years – AlphaFold in 2020, Stable Diffusion in 2021, and AlphaCode in 2022.

Then we’re in the era of ChatGPT, when all of this technology got popularized and released to the public, and we started to see the true power of large language models. All of a sudden, we were using AI to create pictures, reports, and all kinds of other creative items that were previously the products of human thought and creativity.

We’re still seeing how this is going to affect our world, but the timeline gives you a better view of how we got here. It’s an eye-opener for a lot of people who weren’t paying much attention before AI exploded into the limelight just a few years ago. But really, these foundations were laid over decades!
