If you’ve been watching artificial intelligence burst onto the scene in our societies, you probably agree that it’s happened at a pretty fast clip.

Let’s start with this chart of GPT progress, showing what happened in just two years.

As presented here, the first model of its kind, GPT-1, was released in June 2018. GPT-2 followed in February 2019, and GPT-3 entered beta in May 2020.

At the same time, we had the coronavirus pandemic, and all of the resources of the world turned toward how to handle this disease.

Quietly, though, OpenAI kept iterating on GPT, releasing GPT-3.5 in November 2022 and GPT-4 in March 2023.

Most recently we have the GPT-4o model, released this past May – and we have the o1 preview with its chain-of-thought reasoning capability, currently in limited release. There’s also the model code-named Orion that’s said to drop next year.

Putting GPT Into Our Telephone System

How are we using these technologies?

Recently, I read an example from a user who had figured out how to turn ChatGPT into a bot for cold calling other smartphone users.

You could think of this as one of the first of many ‘sneak attacks’ of the AI age – a shot across the bow from one enterprising soul to a broadcast audience of other people who may or may not be susceptible to his marketing.

This guy was talking about how to automate real estate calls to get people to sell their houses to his company.

And according to his testimony, ChatGPT worked better than human callers.

Specifically, the numbers this individual put out were around 12–15 actionable leads per 100 calls, as opposed to 2–3 before implementing ChatGPT.

What is This? (Not, Who is This?)

A major factor in AI’s appeal is novelty, which was a focus of the post I read (you can read it here). People like new things, the poster suggested, and they’re likely to stay on the line just to figure out what’s going on. By the time they get used to talking to an AI, they may already be in the sales funnel, or considering the overall offer more closely.

On the other hand, most of us hang up immediately when we get a call from a person that we don’t want to talk to.

Everyone’s LLM

This is just the tip of the iceberg when it comes to how people are going to use AI once it’s distributed to every smartphone in America. Basically, the people who learn how to use it first will target the others in a big game of round-robin selling. Money will change hands based on this proactive and creative approach, and then, slowly, other people will start to catch up. What you’re likely to have is an era of annoying unwanted calls from everywhere, all at once. In other words, all of us are going to blow up everyone else’s cell phones with AI.

Part of the response on Reddit to the new reality of LLMs had to do with how you handle this barrage of incoming calls. Many of us already don’t use our phones primarily as telephones – each is a small computer that helps us run our lives. Slowly, we’re all going to learn that you don’t pick up the phone when it rings – you let it go to voicemail, and rely on some kind of sorting technology to tell you whether there’s somebody you want to talk to.

But text spamming is off the hook too, and I hardly ever talk to anyone who hasn’t had these annoying intrusions in their life – sometimes dozens of them every day. The bottom line is that we have to figure out a way to adapt quickly to a technology that has, so to speak, developed overnight.

That’s not even the whole picture: we have Anthropic’s Claude showing us how to use a computer, and brand-new models from places like China reinventing math and science testing (more on that later).

So in all of that discussion about training data, and logical ability, we have to think about the real practical ramifications for our societies. We’re going to have to learn to talk to each other in a new way, given this kind of automation.
