Make way for unattended, autonomous systems, right? Actually, they have been with us for a long time, and we’ve grown quite used to them because they are invisible. Utilities have had self-healing and automatic re-routing capabilities in place for decades. Manufacturers have long built mechanical feedback loops into their equipment. Servers and PCs have long had automatic failover capabilities.
Now artificial intelligence takes autonomous systems to a whole new level, and in the process makes them far more visible to customers. We’re entering an era of hyper-autonomy, in which AI agents will interact with other agents to fulfill customer needs. The catch is ensuring that these human-free transactions and services are trustworthy.
The challenge for organizations is that “AI use is racing ahead of people’s confidence in how it is governed, controlled and accounted for,” according to a new study from EY.
Consumers are beginning to see AI’s potential in their shopping and purchasing habits, and are willing to cede some control for the convenience of letting AI take action for them.
At this point, 16% of people have used AI that acts on their behalf in the past six months, the EY survey of 18,000 people worldwide finds. At least 11% let AI automatically refill shopping carts and make purchases, and another 11% allow AI to manage their finances and carry out banking tasks without intervention. Nine percent have used a self-driving vehicle or taxi.
While these users are still a minority, their numbers are enough to make business leaders sit up and pay attention. The driving factor isn’t intelligence but convenience, the EY team states. “By taking care of small, everyday tasks, AI has slipped into daily routines with little resistance. Route planning, movie suggestions and customer support are activities that people expect technology to handle. These are tasks where outcomes are easy to review, correct or override, and that sense of control matters.”
Reliance on unattended AI may be moving up the ladder from these small tasks, and organizations need to be ready to provide autonomous services. “A growing number of people are no longer just asking for advice; they are starting to let AI act on their behalf,” the EY researchers state. “Decision-making authority is migrating from humans to systems, and what started as low-risk assistance is now evolving into something far more consequential.”
This puts design thinking at the front line of AI governance with safeguards, accountability and control, the EY team urges. “When designed well, AI provides reassurance and clarity. When designed poorly, it does the opposite.”
Six in 10 people in the survey worry about organizations failing to hold themselves accountable for AI use that leads to negative consequences. Almost as many (55%) feel that organizations may fail to comply with their own AI policies or relevant government regulations, the study shows.
Moving to AI autonomy gradually is the best approach to building such confidence. “The most trusted experiences expand autonomy gradually, giving users confidence first and then expanding what the system is allowed to do,” the EY researchers state. “The most successful organizations will not be those that move fastest to introduce AI everywhere, or those that take a wait-and-see approach. They will use design to set a deliberate pace – to accelerate where trust and value already exist, and slow down where clarity, safeguards or confidence are still needed.”
The EY co-authors offer the following advice for moving forward in the era of hyper-autonomy:
- Build trust through transparency: “Prioritize AI applications that deliver clear, everyday value and allow people to review, override or opt out as confidence grows. Provide clear disclosures, strong data protections, human‑in‑the‑loop options, and transparent accountability.”
- Understand where users are at: “People and markets are progressing at different speeds,” the EY team observes. “Design experiences, messaging and controls that meet users where they are, rather than assuming a single path to adoption.”
- Design for emotional context: “Design that takes into account empathy, simplicity and pacing is as important as performance.”
Then there is security. Two-thirds of respondents worry about “AI systems getting hacked or breached, and less than half trust governments or companies to protect personal data used by their AI systems,” the study shows. “As autonomy increases, this concern becomes more than just a background worry. The more AI can do, the more security comes into focus. Leaders must consider what could happen if systems are compromised and how quickly that damage could scale.”
Control is another concern. “They worry that decisions made by AI won’t reflect their personal values or priorities; seven in 10 agree that human oversight remains essential,” the researchers state. “This is not a rejection of AI but a request for agency and an ability to understand what the system is doing, intervene when it matters and opt out when stakes rise.”