Artificial intelligence has transformed nearly every industry over the past five years, from how businesses interpret consumer data to how products are discovered and sold. Yet as algorithms grow more capable, consumers are growing more cautious. AI has matured, and with it, so have expectations for transparency, privacy, and digital ethics.
According to a recent Prosper Insights & Analytics Survey, nearly 56% of U.S. adults are either “extremely” or “very concerned” about their privacy being violated by AI.
Another 28% of U.S. adults say they worry that AI systems don’t have their best interests in mind, underscoring that trust is fast becoming the new currency of competition.
“AI can’t thrive in a trust vacuum,” said Donna Dror, CEO of Usercentrics, a global leader in privacy technology that recently surpassed $117 million in annual recurring revenue (ARR). “Consumers are done trading privacy for convenience. They still want the benefits of personalization, but they’re demanding control, transparency, and respect for their data.”
The Privacy Awakening
Usercentrics’ new State of Digital Trust 2025 report captures this changing consumer mindset. Nearly 40% of Americans have deleted apps or stopped using websites over privacy fears. Even more telling, 46% now routinely read cookie banners before consenting to data sharing, a behavior once thought to be impossible at scale.
In a country that operates largely on an opt-out system, these numbers are striking. 35% of consumers now accept only essential cookies, while 16% go a step further by customizing their settings to minimize tracking. Additionally, 37% have adjusted their privacy preferences, and another 35% use ad blockers or privacy-centric browsers like Brave or Firefox.
And the result? A fundamental shift in digital power dynamics. 65% of U.S. consumers say they feel like they’ve “become the product,” yet they aren’t logging off. Instead, they’re rewriting the terms of engagement.
“Trust has moved from being a brand value to a brand strategy,” Dror added. “Privacy is no longer a compliance box to check. Today, it’s a growth driver. Companies that build user trust into their AI systems will win the decade.”
AI’s Double-Edged Sword: Experience vs. Exposure
AI’s growing role in content, commerce, and communication presents both an opportunity and a risk. Consumers want better recommendations and faster answers, but not at the cost of being watched.
According to Prosper Insights & Analytics, most consumers remain open to AI-enabled personalization, so long as transparency is built in. While 30% said they’re “slightly concerned” about privacy, nearly one-third of Millennials (32%) and Gen-Z respondents (33%) expressed curiosity about using AI for everyday tasks like shopping or managing finances.
Yet more than 6 in 10 U.S. adults (62%) say they wouldn’t use agentic AI for any task today. Taken together, the numbers point to cautious optimism: people are willing to engage with AI, but only on their terms.
But brands must tread carefully. “AI can be a bridge or a barrier between your products and your consumer,” said Adrien Menard, co-founder and CEO of Botify, a company helping enterprises improve how search engines, and now AI bots, interpret their online content.
Botify’s proprietary data shows AI bots hitting e-commerce sites nearly four times more often year-to-date. Much of this traffic comes from OpenAI’s ChatGPT-User bot, which crawls websites when consumers ask questions. “When you see a visit from ChatGPT-User in your logs, it means your content was considered for an AI-generated answer,” Menard said.
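Menard’s point about spotting ChatGPT-User visits can be verified directly in a site’s own access logs. The sketch below, a hypothetical illustration rather than Botify’s method, counts requests whose User-Agent field mentions one of OpenAI’s documented crawler tokens; the sample log lines and the assumption of the common combined log format (User-Agent as the last quoted field) are for demonstration only.

```python
import re
from collections import Counter

# Hypothetical sample lines in combined log format; a real site
# would read these from its Apache/Nginx access log file.
SAMPLE_LOG = [
    '203.0.113.7 - - [01/Oct/2025:12:00:01 +0000] "GET /products/widget HTTP/1.1" '
    '200 5120 "-" "Mozilla/5.0; compatible; ChatGPT-User/1.0; +https://openai.com/bot"',
    '198.51.100.4 - - [01/Oct/2025:12:00:05 +0000] "GET /about HTTP/1.1" '
    '200 2048 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"',
    '203.0.113.9 - - [01/Oct/2025:12:01:11 +0000] "GET /products/gadget HTTP/1.1" '
    '200 4096 "-" "Mozilla/5.0; compatible; GPTBot/1.1; +https://openai.com/gptbot"',
]

# User-Agent tokens OpenAI publishes for its crawlers.
AI_BOT_TOKENS = ("ChatGPT-User", "GPTBot", "OAI-SearchBot")

def count_ai_bot_hits(lines):
    """Count requests whose User-Agent mentions a known AI crawler token."""
    hits = Counter()
    for line in lines:
        # In combined log format, the User-Agent is the last quoted field.
        quoted = re.findall(r'"([^"]*)"', line)
        if not quoted:
            continue
        user_agent = quoted[-1]
        for token in AI_BOT_TOKENS:
            if token in user_agent:
                hits[token] += 1
    return hits

print(count_ai_bot_hits(SAMPLE_LOG))
```

A count like this, tracked over weeks, is enough to see the year-over-year growth in AI crawler traffic Menard describes.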
With the release of GPT-5 in August, which integrates deeper web search to reduce hallucinations, Menard predicts AI bot activity will only accelerate. “The question is whether your content is ready for that,” he said. “If you’re blocking bots, you’re invisible in AI search. If you’re not optimizing, you’re misunderstood.”
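A site that wants to stay visible to AI search can say so explicitly in its robots.txt. OpenAI documents GPTBot and ChatGPT-User as its user agents; the fragment below is a minimal illustrative example (an empty Disallow line permits all paths), not a recommendation for any particular site.

```
# Allow OpenAI's crawlers so content can be considered for AI-generated answers
User-agent: GPTBot
Disallow:

User-agent: ChatGPT-User
Disallow:
```

Blocking these user agents has the opposite effect Menard warns about: the site’s content simply never enters the pool of candidate answers.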
That same balance between innovation and accountability is shaping how enterprises deploy AI, especially when it comes to agents. “Our data tells a different story from MIT’s ‘95% of AI fails’ narrative,” said Tim Sanders, Chief Innovation Officer at G2. “Nearly 60% of enterprises have moved AI agents into production, and fewer than 2% fail to launch. Agentic AI is now operational, profitable, and proving that the greatest risk is waiting too long to get started.”
Sanders reframes AI from an existential threat to an execution challenge, one where trust, performance, and results must scale together.
Trust, Speed, and the AI Internet
Even as consumers push for more control, they expect digital experiences to be instant. But AI, and the massive computing power it requires, has introduced new friction points.
Mehdi Daoudi, CEO and co-founder of Catchpoint, notes that the race to deploy AI has also exposed dependencies and weaknesses in Internet resilience. “When ChatGPT goes down, the world feels it,” he said. Catchpoint’s 2024 GenAI Benchmark Report found that ChatGPT had the longest user authentication time across all countries measured, with especially high delays in Singapore and South Africa.
“These aren’t small hiccups,” Daoudi said. “You can’t build trust in an unreliable system.”
Catchpoint recently launched industry-first AI monitoring capabilities, helping companies measure and mitigate performance issues caused by AI workloads in real time. “In the new trust economy, uptime equals credibility,” Daoudi added. “Because today’s digital experience is customer experience.”
The Risk Behind the Curtain
AI trust doesn’t end with consumers; it extends through entire supply chains. With over 371 corporate bankruptcies in the first half of 2025, according to S&P Global, volatility is the new norm. When disruptions ripple through supply chains, whether it’s a factory shutdown, a raw material shortage, or a sanctions-related delay, the effects land squarely on consumers in the form of higher prices, longer wait times, and fewer product choices.
“We’re in an era of economic uncertainty and geopolitical tension, what I call the Bermuda Triangle of tariffs, regulations, and sanctions,” said Ted Krantz, CEO of interos.ai, a supply chain risk intelligence company. “Organizations can’t afford to fly blind. They need external insights layered into their internal data to make confident, real-time decisions.”
Krantz emphasized the need for AI-driven visibility into risks that may otherwise remain hidden. “Trust isn’t just about your users. It’s also about your partners, your suppliers, your entire ecosystem,” he said. “Turning complex global data into confident decisions is what separates the leaders from the laggards.”
The New Equation for Trust
The convergence of these forces (privacy reform, AI search, Internet resilience, and supply chain risk) reveals a new truth: innovation without trust is unsustainable. Prosper Insights & Analytics data also shows that 30% of U.S. adults think AI needs more disclosure and transparency about the data it uses, and 39% say it should always have human oversight. That growing insistence on openness reinforces that consumers reward brands that lead with clarity and accountability, values that directly influence their purchase decisions and brand loyalty.
“AI maturity isn’t about capability,” Dror of Usercentrics concluded. “It’s about accountability. Consumers are telling us loud and clear: earn our trust, and we’ll give you our data. Betray it, and we’ll delete your app.”
As AI reshapes the digital economy, consumer trust has become its most valuable and volatile asset. Brands that understand this shift aren’t just future-proofing their technology; they’re humanizing it. Because in the age of intelligent machines, the smartest move any company can make is to act more human.
Disclosure: The consumer sentiment study referenced above was conducted by my company, Prosper Insights & Analytics. This is the same dataset used by the National Retail Federation, and available from Amazon Web Services, Bloomberg, and the London Stock Exchange Group for economic benchmarking.