Russell Sarder, CEO & Founder of AI CERTs – advancing global AI certification & education.
Most AI strategies still read like shopping lists: Pick a model, pick a platform, hire a few specialists and then wait for productivity.
That’s not where the constraint is showing up. In the U.S., most workers still aren’t using AI meaningfully in day-to-day work. The Pew Research Center found that, as of October 2025, 65% still “don’t use AI much or at all.”
If usage is low, model capability doesn’t matter. The bottleneck is people and how work is designed.
The productivity promise is real but slow to show up.
This is why the “ROI debate” keeps coming back every budget cycle. The Conference Board captured the gap: In 2024, 86% of CEOs expected AI to boost productivity, but a year later, only 44% said workforce productivity was the biggest improvement they’d actually seen from AI.
That gap isn’t solved by upgrading models. It’s solved by turning AI into a workforce capability with new roles, new standards and new operating habits.
Skills churn is now a CEO problem.
AI is reshaping work faster than most companies can reskill. LinkedIn’s 2025 Work Change Report says that “by 2030, 70% of the skills used in most jobs will change, with AI emerging as a catalyst.”
The labor market is already signaling it. Stanford’s AI Index 2025 reports that “in 2024, U.S. job postings citing generative AI skills increased by more than a factor of three” year over year.
So, the “talent strategy” can’t be a hiring plan alone. It has to be a redesign plan.
Define AI talent the right way.
Most companies still treat “AI talent” as engineers and data scientists. That’s necessary, but not sufficient. You also need the following:
• Builders: Engineers who integrate tools, manage data access, run evaluations and ship reliably.
• Translators: Workflow owners who turn a business job into a repeatable AI-enabled process.
• Governors: Owners of security, legal, risk and compliance, with clear decision rights.
If you underbuild translators, AI stays stuck as a side tool. If you underbuild governors, AI stays stuck as a risk conversation.
Start with work design, not training decks.
Reskilling fails when it’s abstract. It works when the workflow forces practice.
Here’s a practical method that scales:
1. Pick two to three workflows that matter (revenue, cost or risk).
2. Break them into tasks: Draft, decide, verify, approve and communicate.
3. Determine what AI drafts and what humans decide, and define what “good” looks like.
4. Put those standards into the tools people already use.
This is where you stop talking about “use cases” and start changing throughput.
Governance needs owners, not committees.
Most organizations react to AI risk by creating a committee. Committees don’t ship, and they don’t own outcomes.
What scales is ownership, tied to workflow decisions. NIST’s Generative AI Profile (AI RMF companion) is useful here because it frames risk management as concrete actions: how you measure, monitor and govern AI systems in real use.
Governance isn’t paperwork. It’s the set of operating rules that lets people move fast without creating avoidable incidents.
Leaders are the pacing item.
Even when employees are curious, adoption stalls if leaders don’t set clear direction. McKinsey’s January 2025 “Superagency” report found that while almost all companies are investing in AI, only 1% believe they’ve reached maturity, and it argues the biggest barrier to scaling isn’t employees but leaders who aren’t steering fast enough.
This is where manager capability matters. Managers define what “acceptable work” looks like, how outputs are checked and what gets escalated. If managers aren’t equipped, AI turns into inconsistent usage and hidden risk.
Measure people impact, not tool activity.
AI success metrics often become vanity: logins, prompts and number of licenses.
Use metrics that hold up in a boardroom, such as:
• Adoption in workflow (weekly active usage by role and inside core tools)
• Cycle time (time-to-decision and time-to-resolution)
• Quality (error rate, rework and escalations)
• Risk (policy breaches and sensitive data incidents)
If you can’t measure these, you don’t have scale.
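For teams that instrument their workflows, these four metrics can be computed directly from task-level event logs. Here is a minimal sketch in Python; the record fields and function name are illustrative assumptions, not from any specific tool:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical per-task record; field names are illustrative.
@dataclass
class TaskEvent:
    user: str
    role: str
    started: datetime
    resolved: datetime
    used_ai: bool        # AI used inside the core tool for this task
    reworked: bool       # output sent back for correction
    policy_breach: bool  # sensitive-data or policy incident flagged

def workforce_metrics(events: list[TaskEvent]) -> dict:
    n = len(events)
    ai_users = {e.user for e in events if e.used_ai}
    all_users = {e.user for e in events}
    return {
        # Adoption in workflow: share of active users touching AI in-tool
        "adoption_rate": len(ai_users) / len(all_users),
        # Cycle time: average time-to-resolution in hours
        "avg_cycle_hours": sum(
            (e.resolved - e.started).total_seconds() / 3600 for e in events
        ) / n,
        # Quality: rework rate across tasks
        "rework_rate": sum(e.reworked for e in events) / n,
        # Risk: policy-breach incidents per task
        "breach_rate": sum(e.policy_breach for e in events) / n,
    }
```

Segmenting these numbers by role, and comparing AI-assisted against non-assisted cohorts on the same workflow, is what turns tool activity into a board-ready story.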
In October 2025, Citigroup reported that AI tools were freeing up 100,000 developer hours per week, with nearly 180,000 employees across 83 countries having access to its internal AI tools.
That metric matters because it’s operational. It translates into capacity you can redeploy into reliability, security, modernization and customer-facing speed.
It also shows the real point: The value appears when adoption is broad enough that work actually changes.
The talent economics make reskilling non-optional.
Hiring your way out is expensive, and it gets more expensive as demand rises.
PwC’s 2025 AI Jobs Barometer found workers with AI skills command a 56% wage premium, up from 25% the prior year. So, the competitive move is to build internal supply: role pathways, manager enablement and workflow-based learning that turns AI into a standard way of working.
Follow a 90-day plan.
A credible AI strategy includes a workforce operating plan with deadlines and metrics, like the following 90-day guideline:
• Days 1 to 30: Pick three workflows, map tasks and publish a one-page safe-use standard.
• Days 31 to 60: Assign builder, translator and governor roles, ship workflow templates, and train managers on review standards.
• Days 61 to 90: Scale what works, kill low-value pilots, and report on four metrics (adoption, cycle time, quality and risk).
Models will keep improving. Your advantage comes when your people and operating model improve faster.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.


