A new study digs into why so many professionals are still on the fence about using Agentic AI at work. While 58% of those surveyed already use AI agents to help with tasks, that doesn’t mean everyone is fully on board just yet.
What’s holding them back? A lot of it comes down to trust.
Around a third of respondents worry that AI can’t deliver accurate or high-quality results. Others say they’re uneasy that AI lacks the human touch: things like emotional intelligence and intuition.
And these concerns aren’t just coming out of nowhere. As AI becomes more common in the workplace, experts caution against diving in without a game plan. The study makes it clear that while Agentic AI can seriously boost how humans and machines work together, it only works well if organizations roll it out thoughtfully.
That means clear governance, ongoing training, and open communication: all key to building trust and making the transition smoother.
Why It Matters: Trust is foundational to successful technology adoption. If employees feel uneasy about using AI, especially in high-stakes or judgment-driven tasks, it can reduce productivity, introduce errors, and slow down innovation. Addressing these concerns head-on will be key to fully realizing AI’s potential.
- Lingering Doubts About AI Accuracy and Quality: Roughly one-third of workers surveyed expressed concern over the quality of work produced by AI systems. Many fear that automated outputs may be inconsistent, superficial, or simply not up to the standards expected in professional environments. This hesitance reflects a deeper issue: if users don’t believe AI tools can meet or exceed their own performance levels, they are far less likely to rely on them for meaningful tasks or decision-making support.
- Ethical and Security Risks Demand Human Oversight: Thought leaders across industries caution that while AI can enhance productivity, it should not replace human judgment or oversight. There is a growing consensus that organizations must remain vigilant about ethical considerations, especially when AI is applied to sensitive data or high-stakes decisions. Blind trust in automated outputs can lead to mistakes, compliance issues, or reputational harm if those outputs are not rigorously monitored and validated by human professionals.
- Emotional Intelligence Deficit Is a Dealbreaker for Many: Nearly half of respondents highlighted the absence of emotional intelligence and human intuition as a critical shortcoming of AI tools. These capabilities are often necessary for interpersonal communication, creative thinking, and complex problem-solving, areas where workers feel AI falls short. The inability to recognize context, tone, or subtle cues limits AI’s usefulness in roles that require empathy, negotiation, or nuanced decision-making.
- Strong AI Policies and Training Programs Are Essential: Experts emphasize that adopting AI successfully requires more than just installing new software. Companies should implement comprehensive policies that define clear boundaries for AI use, outline ethical standards, and assign accountability. Ongoing employee training is also critical to ensure users understand how to work alongside AI, recognize its limitations, and make informed decisions.
- Discomfort with Using and Submitting AI-Generated Work: A large share of respondents (40%) reported feeling uncomfortable presenting AI-created content as their own. Some even feared that reliance on AI might diminish their professional credibility or lead to errors they would ultimately be held responsible for. Additionally, 34% believed their personal output was of higher quality than what AI could produce. This points to a lack of confidence that could slow adoption or breed hidden resistance within organizations.