AI systems are moving into frontline roles, backed by solid infrastructure and detailed rollout plans. The technology appears ready, yet adoption inside organizations is slipping.
Deloitte’s TrustID Index shows how quickly confidence has eroded. In mid-2025, trust in employer-provided generative AI dropped by 31%. Over the same period, confidence in autonomous systems (tools that take independent action) fell by 89%.
At the same time, usage of official tools declined, and nearly half of frontline employees moved to unapproved alternatives. Workers are making deliberate judgments based on how the tools actually perform in their day-to-day work. To them, AI is arriving without clear explanation, useful context, or visible benefit.
As a result, they’re gravitating toward systems they can adjust and understand, rather than ones simply handed to them.
Why It Matters: Many enterprise AI projects prioritize technical strength and overlook trust. Yet what workers believe about a tool often matters more than its capabilities. If the purpose isn’t clear or the benefits don’t show up in daily work, usage falls. Once that happens, the system stops adding value, no matter how well it was built.
- Trust Functions as a Leading Indicator: Trust can be measured in ways that connect directly to adoption. Deloitte’s TrustID framework scores how people judge a tool across four factors (capability, reliability, transparency, and humanity), drawing on recurring surveys that track behavior across roles and industries. Workers who trust their AI tools tend to use them more often, speak well of their workplace, and keep building new skills. These changes appear early, before other indicators move, which makes trust a reliable signal of whether a deployment is gaining ground or starting to slip.
- Training Must Fit the Work: Teaching employees how AI works doesn’t lead to adoption unless it’s linked to how their jobs are changing. IKEA treated this as a starting point. When it introduced a chatbot to handle basic customer questions, it kept headcount steady and shifted the work instead. More than 8,000 call center employees moved into interior design and sales roles that required judgment and deeper customer interaction. At the same time, the company rolled out a learning program tailored to each job and department. By keeping the training relevant and well-timed, IKEA improved retention and gave employees clearer paths forward.
- Tool Design Needs to Include the People Using It: Many AI rollouts focus on the technology and leave little room for user input before launch. Walmart took a different approach. Frontline workers used an internal development platform to test early versions of scheduling and language tools and shape the features that mattered most. Their feedback guided design decisions that would have been missed from the top down. Early changes, including flexible shift requests and translation features that reflected real brand vocabulary, made the tools easier to use and helped adoption grow from the start.
- Experimentation Has to Be Structured to Scale: Colgate-Palmolive built a system that encouraged experimentation while still capturing meaningful results. Employees without coding experience could create AI assistants through a no-code interface that offered templates for common tasks. One factory manager trained a tool to interpret technical manuals, and an HR employee built a digital coach to support goal-setting. Each use case was logged, rated, and reviewed. Over time, employees created thousands of assistants, and the most effective ones were scaled to larger teams. The approach opened space for new ideas while maintaining a clear view of what worked.
- Peer Influence Drives Adoption Faster Than Executive Direction: Workers often take their cues from the managers they interact with every day. Intuit learned this when its AI rollout, built on top-level communication, stalled until managers became involved. The company brought together 150 frontline employees who had already started experimenting with AI on their own. During a full-day session led by mid-level managers, they built tools grounded in their actual workflows. When they returned to their teams, they shared what they had learned through informal channels. Adoption grew as people followed trusted examples inside their own teams rather than formal directives.
Go Deeper -> Workers Don’t Trust AI. Here’s How Companies Can Change That. – Harvard Business Review