At the recent IEEE Global Tech Forum that Imagination in Action curated ideas for, we heard from some young innovators who have come up with a product that combines AI and augmented reality.
Richa Gupta and Alexander Htet Kyaw described this project as “real language understanding for contextual product recommendation.” It’s a prize-winning design that evolved during an AI build week and was presented at an MIT AI conference this year.
What It Looks Like
The presentation started off with a demo of a user trying to select furniture items for a room. She wasn’t having much success, partly because platforms kept recommending products that weren’t well targeted to her needs.
The ‘curator AI’ that Gupta and Kyaw worked on, on the other hand, aggregates multiple products and their specifications so that users can choose based on their own criteria.
“The problem that we’re actually trying to solve is that most people don’t even know where to even start when they’re furnishing the room, because it is actually hard to even type what you’re looking for in a search bar,” Kyaw explained.
Gupta cited the phenomenon of “decision fatigue,” where people faced with too many choices don’t follow through to make a purchase.
As the duo showed with the curation tool, smart suggestions can help.
Contextually aware AI tools can reduce browsing time, and effectively showcase some of the best fits for a room.
How does this work? The augmented reality system involves object recognition, so the AI agent can survey a room or space and report back its characteristics and attributes.
Then the product search starts.
The system is matching objects to a space in three dimensions!
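The talk didn’t share implementation details, but the core of that matching step is easy to picture: compare each product’s real-world dimensions against the free space the AR scan detected. Here is a minimal Python sketch; the BoundingBox class, the clearance margin, and all of the numbers are illustrative assumptions, not anything the team described.

    from dataclasses import dataclass

    @dataclass
    class BoundingBox:
        """Width, depth, and height in meters, as an AR scan might report them."""
        width: float
        depth: float
        height: float

        def fits_in(self, space: "BoundingBox", clearance: float = 0.05) -> bool:
            """True if this object fits inside `space` with a small safety margin."""
            return (self.width + clearance <= space.width
                    and self.depth + clearance <= space.depth
                    and self.height + clearance <= space.height)

    # Hypothetical catalog entries with real-world dimensions.
    catalog = {
        "compact sofa": BoundingBox(1.6, 0.85, 0.80),
        "sectional sofa": BoundingBox(2.8, 1.60, 0.85),
        "armchair": BoundingBox(0.9, 0.85, 0.95),
    }

    # Free space the scan detected along one wall (made-up numbers).
    free_space = BoundingBox(2.0, 1.0, 2.4)

    candidates = [name for name, box in catalog.items() if box.fits_in(free_space)]
    print(candidates)  # ['compact sofa', 'armchair']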
It also takes advantage of voice AI and speech recognition, so that users can be conversational, since they’re in AR mode and not typing with their hands.
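Under the hood, that step amounts to turning a transcribed utterance into structured search criteria. The presenters didn’t describe their parsing approach (a real system would likely hand the transcript to a language model), but a toy keyword-based sketch conveys the idea; the vocabularies and the price pattern below are invented placeholders.

    import re

    # Illustrative vocabularies; a production system would not hard-code these.
    COLORS = {"beige", "gray", "walnut", "white", "black"}
    CATEGORIES = {"sofa", "armchair", "coffee table", "bookshelf", "rug"}

    def parse_request(transcript: str) -> dict:
        """Turn a transcribed voice request into structured search criteria."""
        text = transcript.lower()
        criteria = {
            "category": next((c for c in CATEGORIES if c in text), None),
            "colors": sorted(c for c in COLORS if c in text),
            "max_price": None,
        }
        # Capture a budget phrased like "under $600" or "less than 600".
        match = re.search(r"(?:under|less than|below)\s*\$?(\d+)", text)
        if match:
            criteria["max_price"] = int(match.group(1))
        return criteria

    print(parse_request("I'm looking for a gray sofa under $600"))
    # {'category': 'sofa', 'colors': ['gray'], 'max_price': 600}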
Gupta explained that the product recommendation engine and generative voice come together to provide an intuitive and interactive experience.
A Dual System for Innovation
“(Together, the two implementations) form a very intuitive and interactive AI recommendation system,” she said. “For the first one, when we are using visual language AI, it understands the context … and when we’re using generative voice AI, it understands what the user wants. And together, if you combine them, they give you an AI recommendation. But also, to go back to our digital system, where it pulls out the real-world data … we also have an error template system, so that if a user fails to transcribe something, they can get prompts.”
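Reading between the lines of that description, the fusion step presumably scores each catalog item against both signals: the context the vision system pulls from the room and the preferences the voice system pulls from the user. A purely illustrative scoring pass might look like the following, with the weights and feature names invented for the example.

    def score_product(product: dict, room: dict, criteria: dict) -> float:
        """Blend room-context fit and spoken-request fit into a single score.

        The 0.5/0.5 weighting and the features are guesses, not anything
        described in the talk.
        """
        context_fit = 1.0 if product["style"] == room["detected_style"] else 0.3
        request_fit = (0.6 if criteria["color"] in product["colors"] else 0.0) \
            + (0.4 if product["price"] <= criteria["max_price"] else 0.0)
        return 0.5 * context_fit + 0.5 * request_fit

    room = {"detected_style": "mid-century"}        # from the AR scan
    criteria = {"color": "gray", "max_price": 700}  # from the voice request

    products = [
        {"name": "Lane sofa", "style": "mid-century", "colors": ["gray"], "price": 650},
        {"name": "Club sofa", "style": "traditional", "colors": ["brown"], "price": 500},
    ]
    ranked = sorted(products, key=lambda p: score_product(p, room, criteria), reverse=True)
    print([p["name"] for p in ranked])  # ['Lane sofa', 'Club sofa']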
This approach, she noted, can be applied to a multi-billion-dollar market.
“How do you grow in such innovative areas?” Gupta asked. “First, we can expand partnerships, which is endless, and then come to new markets. You don’t stop at furniture. You go to shoe design. Maybe go to gardening. We go to other ventures, like the fashion industry. And we can enable AR, which will add to the future of immersive shopping.”
Thinking About the Future
In some ways there’s a lot to unpack here as we look at the potential of AI to change the way we select consumer goods.
It’s actually been many years in the making: since about a decade ago, the more prescient among us have been thinking about what type of interface will come after the keyboard and screen. (Prior to that, we were still getting used to the videoconferencing interface that had been the province of sci-fi and Dick Tracy comics.)
What most of us didn’t take into account in these musings is that as the interface was changing, the intelligence in the computing system would be changing, too.
In the future, we’ll probably be interacting with smart AI systems not with our hands, but with our voices. This project helps to usher in that change. And that’s a big shift, bigger than a lot of us might think. Eliminating some of these interface barriers means our children will look back and wonder how we ever lived the way we do now.