The Financial News 247
Building Trust In AI Starts With Protecting The Data Behind It

By News Room · November 14, 2025 · 6 Mins Read
Artificial intelligence has entered an era of autonomy. It’s no longer a tool that simply predicts or automates—it acts. This evolution, often called agentic AI, represents a profound shift: we’re building systems that reason, make choices and take action in the world.

That power comes with risk. Every decision an AI makes reflects the integrity of the data it’s trained on and the safeguards defining its boundaries. As Jason Clark, chief strategy officer at Cyera, told me, “AI is a superpower that consumes a lot of data and creates a lot of data.” The challenge, he said, is that those two forces—AI and data governance—have to go together.

That’s exactly the premise of Cyera’s DataSecAI 2025 Conference, a hybrid event uniting CISOs, researchers and policymakers to redefine how data and AI security intersect. In just its second year, the conference has already drawn over 1,000 attendees from around the world. “When we started this, we thought maybe a hundred people would come,” Clark said with a laugh. “We ended up with three hundred, 70 percent of them C-level.”

The reason, he added, is simple: everyone is trying to figure out how to secure AI before it scales beyond control.

From Defense to Discovery

For decades, cybersecurity has focused on chasing threats—patching vulnerabilities and reacting to breaches. Clark argues that AI changes that equation entirely. “We’ve been chasing threats and vulnerabilities for so long that we never solved the data problem,” he told me. “Because of AI, that problem just got a hundred times bigger.”

Cyera’s latest AI Readiness Report backs that up. The research found that 70 percent of organizations are deploying AI tools without fully understanding their data exposure. The result is a widening gap between innovation and protection—a gap that AI itself could help close if governed responsibly.

Industry data from Omdia further underscores the pace of adoption and the risks accompanying it. “Enterprises are enthusiastically investing in generative AI and AI agents in particular,” said Todd Thiemann, cybersecurity industry analyst at Omdia. “Omdia research shows that an overwhelming 80% of organizations say AI agents are the top or a high priority compared to other AI initiatives.”

According to Thiemann, the first wave of agent adoption is focused on relatively low-risk use cases—embedded assistants like Salesforce Agentforce or Workday agents that improve productivity without touching sensitive systems. “Where things are headed is for agents to touch core enterprise applications,” he explained. “That is where you can unlock big value, but also where you encounter big cybersecurity risk.”

As AI agents move closer to the heart of the enterprise, the stakes rise. Without robust access controls, auditability and behavioral monitoring, what begins as a time-saving tool can become an unmonitored point of compromise.

That’s part of what DataSecAI 2025 aims to explore: how AI can strengthen, not weaken, the security posture of enterprises. Clark sees it as an opportunity for security leaders to step into a new role—one that’s not just defensive but strategic. “Security teams are becoming the first part of the business that truly understands where data lives, how valuable it is and who’s using it,” he said. “That changes the conversation from ‘no, you can’t do that’ to ‘let me help you do it safely.’”

Keeping Agents on a Leash

Our conversation turned to the growing phenomenon of AI agents—digital workers that log in, perform tasks and interact with systems much like humans do. I mentioned that some companies are already assigning employee numbers to AI agents. It sounds far-fetched, but it’s happening. And as I told Clark, that means we can’t treat these agents as simple software processes anymore. They behave like employees, and they need to be governed like employees—with distinct identities, permissions and oversight.

Clark agreed. “You need to have your agents on a leash,” he said. “It’s all about how loose you let that leash be, but you still have to keep them on the leash.”

It’s a vivid metaphor, and it captures the challenge perfectly. Agentic AI gives us what Clark calls “Tony Stark Iron Man-suit capabilities”—superhuman memory, speed and reach—but without guardrails, that power can run amok.

We both agreed that the only sustainable approach is layered trust: giving AI agents autonomy in stages, just as you would with a new employee. At first, you check their work constantly; later, you loosen the leash as confidence builds. “There’s always a human in the loop,” Clark reminded me. “The question is how far along the autonomy scale you’re willing to go.”
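The staged-autonomy idea above can be made concrete with a small sketch. Everything here is illustrative and hypothetical (the tier names, permissions, and `AgentIdentity` class are not from Cyera or any product in this article); it simply shows agents getting distinct identities, scoped permissions, an audit trail, and a "leash" that loosens only as trust grows.

```python
from dataclasses import dataclass, field

# Hypothetical permission tiers: a new agent starts read-only with every
# action reviewed, and gains autonomy in stages as confidence builds.
TIERS = {
    0: {"read"},                      # new agent: read-only
    1: {"read", "draft"},             # may propose changes for human approval
    2: {"read", "draft", "execute"},  # may act, with audit logging
}

@dataclass
class AgentIdentity:
    agent_id: str                     # distinct identity, like an employee number
    trust_tier: int = 0               # how loose the leash currently is
    audit_log: list = field(default_factory=list)

    def allowed(self, action: str) -> bool:
        return action in TIERS[self.trust_tier]

    def act(self, action: str) -> str:
        self.audit_log.append(action)  # every attempt is recorded, allowed or not
        if not self.allowed(action):
            return f"{self.agent_id}: '{action}' blocked, escalate to a human"
        return f"{self.agent_id}: '{action}' permitted"

agent = AgentIdentity("agent-0042")
print(agent.act("read"))       # permitted at tier 0
print(agent.act("execute"))    # blocked: the leash is still short
agent.trust_tier = 2           # loosen the leash after a review period
print(agent.act("execute"))    # now permitted, but still audited
```

The design choice mirrors the article's point: autonomy is a dial, not a switch, and the audit log exists precisely so that "agents overseeing agents" have something to check.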

Education as Infrastructure

One of the most encouraging steps Cyera has taken is launching the AI Security School, a free program designed to train professionals to understand and mitigate AI-driven risks. The courses cover everything from model governance to data classification and behavioral monitoring—bridging the gap between theoretical security and practical defense.

This is more than corporate goodwill. As Clark pointed out, security talent is being asked to evolve faster than ever before. “AI breaks everything we do today,” he said. “So we have to re-educate the industry on how to think about access, behavior and data all together.”

Education becomes the connective tissue of resilience—the means by which organizations can adapt as fast as technology itself.

Trust as the New Perimeter

Trust has become a recurring theme in my conversations this year, especially around AI. It’s not just about trusting the data or the model—it’s about trusting that the systems interpreting our world are doing so faithfully.

Clark and I agreed that AI will inevitably make mistakes, and that perfection isn’t the right benchmark. “If you expect 100 percent accuracy, you’re missing the point,” I told him. “Even human employees aren’t infallible.” Clark agreed but emphasized the need for constant oversight at scale. “You might check one agent’s work manually,” he said, “but what happens when you have a thousand agents running across applications? You’ll need agents overseeing agents—and something watching them all.”

It’s a dizzying vision: an ecosystem of digital workers, supervisors and monitors—a hierarchy of intelligence that mirrors human organizations. But at its heart, it all comes back to one principle: the data must be right. Without that, trust collapses.

Building Intelligent Trust

As I see it, the DataSecAI 2025 Conference isn’t just about technical controls—it’s about redefining confidence in the age of autonomy. Cyera’s combination of research, education and community shows what that future could look like: transparent, governed and human-aligned.

We don’t need to fear AI; we need to understand it. And that understanding begins with the data beneath it.


© 2025 The Financial 247. All Rights Reserved.