Often, our technological threshold races too far ahead of our societal threshold. That’s when AI proponents need to step back and consider the human impact of their work.
That’s what’s happening today with artificial intelligence, according to Zack Kass, AI futurist and former head of go-to-market at OpenAI. Ultimately, AI, running in the background, will enable us to interact with machines and applications as easily as we interact with each other, he said, speaking at the recent Precisely conference in Philadelphia.
“My prediction is it gets weird before it gets great,” Kass said. “And we’re going to have to accept that. All progress has costs. But one of the most interesting things that we need to start preparing for in this transition is understanding the idea of technological thresholds and societal thresholds.”
A technological threshold “is simply asking the question, ‘what can a machine do?’” he explained. “The societal threshold asks the questions, ‘what do we want a machine to do? Or are we willing to let it do?’”
In the meantime, there are three obstacles that may slow or inhibit progress, he cautioned: humans’ fear of losing control, disproportionate views of AI’s risks, and low tolerance for machine failure.
These challenges are tied to the rise of autonomous vehicles, which Kass identified as the “bellwether” of AI adoption. Just as the Otis Elevator Company struggled to allay people’s fears of elevators in the late 1800s and early 1900s, there is similar fear and loathing of autonomous vehicles.
“In the autonomous vehicle I think we are about to unlock an incredible understanding of how we view technology specifically related to AI,” he said. He pointed to the three challenges autonomous vehicles — and by extension, AI — face.
- Loss of control. “Humans love control,” he explained. “We love getting in the car, putting our foot down on the pedal, turning the wheel, and controlling this massive machine.”
- Disproportionate fears. “Fifteen times more people in the United States are afraid of flying than driving,” said Kass. “One is empirically safe and the other one is empirically dangerous. Also, people don’t appreciate how good autonomous vehicles are. Most people don’t know that today, 50,000 people will drive in autonomous vehicles in Arizona without accident, and 10,000 in the Bay Area.”
- Low tolerance for machine failure. “Humans have exceptional tolerance for human failure, and we have no tolerance for mechanical failure,” he observed. “This is why 20,000 people can die by drunk drivers a year, and we’re very willing to say, that’s simply the cost of doing business. But if a Tesla on autopilot swerves into the wrong lane, everyone calls to shut the program down.” Holding machines or AI to a higher standard isn’t necessarily a bad thing, he added. “That’s why the building I’m in will never fall over, it’s why the planes we fly don’t fall out of the sky. And it has never been safer to fly than it is today. We’re building so much robustness into our mechanical systems. Our expectations for the deliveries of these technologies are very high.”
What’s happening, Kass said, is that technology has gotten ahead of humans’ ability to deal with it. Returning to the Otis Elevator analogy: people were afraid to ride elevators, but the company responded with human touches — music, mirrors, and human elevator operators. “It worked. People started using elevators. The technological threshold, having been met, was updated by really analog adjustments in the societal threshold.”
Likewise, fears or confusion about AI will diminish as more human touches are added to solutions. One example is agentive AI, or autonomous agents: “We will assign tasks or goals to AI and have those systems execute the tasks and goals across our apps and browsers. Imagine a world where we disintermediate ourselves from the 100 or 150 apps on our phones.”
This is being facilitated by natural language operating systems, in which “we’re going to basically move to a world where we interact with machines the way we interact with each other,” Kass explained. “The reason is the digital divide we live in today. The systems we design, like the personal computer, aren’t actually second nature. You have to spend a lot of time becoming familiar with the machine in order to harness its full potential.”
Even Google keyword search “isn’t very obvious to some people,” Kass related. “ChatGPT gives us this first glimpse into what the world will look like in the future, where you can interact with a machine the way you interact with another person. And the natural language operating system, which we think will arrive potentially within the next 10 years, or certainly the next 15, will shift away from this awkward communication with machines to a much more natural one.”