Peter Guagenti is president at Tabnine. He is an accomplished entrepreneur who has been working in AI business tools for more than 10 years.

Trust can be more essential than love in relationships; without it, they often falter.

The same dynamic will play out in our new and exciting relationship with our latest obsession, artificial intelligence (AI). Infatuation is running high given the possibilities. But the adoption of AI across the board will depend heavily on whether we can trust it.

Here’s why: Generative AI accelerates what humans do in a way we have not seen before. Unlike past waves of automation, it is the first technology that can replicate (to a greater or lesser extent) how humans think, and thus can automate higher-order tasks like knowledge work. Artificial intelligence is called that for a reason: We are starting to delegate creative and decision-making tasks to this new technology.

Today, AI is really a helper or assistant as we work. In the future (say in five or so years), AI will handle the tasks we assign it with considerably more autonomy. That’s where the trust factor really comes in. With AI, we will hand over more control to automation than ever before.

For example, the current generation of software coding assistants will slowly and steadily evolve to become an AI “engineer” capable of taking in requirements and delivering an application, breaking down requests into tasks and working independently through them. The product likely won’t be perfect (as is often true with human coders, too), but the AI will iterate with you to get it right.

Similar scenarios will play out in other industries.

AI tools will help customer support staff identify the right response to a customer, rather than simply generating a reply that misses the mark. Similarly, AI tools can assist lawyers in identifying potential issues in a contract, instead of just redlining and making changes based on broad company policy. There are three distinct waves of evolution: AI as an assistant that helps humans; AI as an agent able to complete more tasks; and autonomous AI, capable of completing complex tasks on its own. That evolution is well underway.

Trust In Machines

Allowing AI to work autonomously is only viable if we create and maintain trust in these tools. The more autonomy we give to what is today a black box, the less appetite we have for its lack of transparency.

That trust needs to be earned, too, which is a tall order for technology. Anytime we move from a human-centric to an automated system, the bar for trust goes up. With the potential for true autonomy, the bar for AI will be incredibly high.

Consider self-driving cars. While research shows them to be safer than human drivers in many situations (machines don’t tire, text while driving or have too many drinks), most people simply don’t trust them. Each time another self-driving accident hits the media, that trust takes another hit.

Clearly, self-driving cars have not earned our trust. We are seemingly holding AI-enabled cars to a higher standard than the rideshare driver in their personal vehicle, despite the skills and capabilities of both being equally unproven to us. Autonomous systems face a higher standard, and they will need to meet it.

Building Trust In AI

What will it take to earn and maintain trust in autonomous AI systems, particularly in the enterprise, where they will automate knowledge work? Two things are key.

The first is a comprehensive understanding of a company’s standards and knowledge, and the ability to apply them. The behavior of the AI must be 100% compliant with how a company works. This means the AI is versed in company practices, standards, formulas, expectations, workflows, policies and so on. In effect, the AI behaves like one of the company’s best and brightest employees, not a newbie.

The second is for a company to be able to exert absolute control over the AI’s scope, behaviors and decisions. It won’t be enough to have an AI that can do the work; you need an AI that follows orders precisely and stays within the constraints you set for it.

If AI will become an autonomous worker, it is not enough for it to simply be competent. AI will be expected to always behave like your “employee of the month.”

Weighing AI Solutions

Most companies won’t build their own large language models or engineer the AI agents they deploy. Instead, as we saw with digital transformation, enterprises will rely on vendor solutions. But if you delegate complete tasks to AI, how can you trust these autonomous systems the way you would a vetted employee?

Executives should question their AI suppliers on:

• LLM Knowledge: Do vendors know what’s in the model, meaning what it knows and what it doesn’t know? Are they aware of how the model was trained and whether the data it was trained on is reliable? Transparency around LLM training is crucial.

• Customization And Use Of Context: Context-awareness ensures that AI works the way your company does and is informed by your data. How easy is it to customize the solution for your use case, or to train on your unique information? Generic AI systems know what they know, but they won’t know you or your ways. As AI gets more autonomous, personalized AI will make all the difference.

• Ability To Apply Constraints: Constraints ensure that AI behaves as we expect and stays compliant. As AI becomes more autonomous, it will have to follow each company’s individual standards and rules. As such, the ability to apply constraints to the AI needs to be easy to access and use.

Onus On The Software Industry

Ultimately, the onus is on the software industry to build trust in AI. Silicon Valley’s oft-touted adage of “move fast and break things” won’t work for capabilities this critical. Shortsighted moves that chase profit or competitive advantage while breaking trust with customers will do more harm than good to future profits and to AI’s ability to deliver positive impact. It’s time for the black box to be cracked open.
