Artificial intelligence may be transforming how organizations operate, but according to former White House CIO Theresa Payton, the most important cybersecurity lesson hasn’t changed. Speaking at Zero Trust World, Payton told attendees that even as AI accelerates innovation and threat activity, security leaders must remain focused on one constant factor: human behavior.
“The most valuable and the most vulnerable asset we have in an organization right now is trust,” she said.
Drawing on experiences from government and the private sector, Payton argued that organizations often already know how to address emerging risks.
The harder part is designing security so that it works for people rather than against them.
A Lesson from the White House
To illustrate the point, Payton shared a story from her time as CIO of the Executive Office of the President.
Early in her tenure, she noticed a pattern in security reports. Government-issued smartphones were being reported lost or stolen far too late, sometimes nearly a full business day after they disappeared. At the same time, many staff members skipped required security briefings before traveling internationally.
At first glance, the issue looked like a compliance problem.
Instead of tightening enforcement, Payton decided to ask employees why the process was failing. Two answers emerged:
- Security briefings were long and tedious.
- The language users had to sign when receiving devices suggested serious consequences if a device went missing. Employees worried that reporting a lost phone could harm their careers.
Payton realized the system itself was discouraging the behavior security teams actually wanted.
“We have to design for the human user story.”
So, instead of an hour-long briefing, her team introduced a five-minute onboarding kit that included the phone, a card with a direct 24/7 security contact number, and even candy.
The team called it the “security happy meal,” and the results were immediate.
The average time it took employees to report missing devices dropped from nearly a full business day to about thirty minutes. Travel notifications improved as well, rising from about half of travelers to full participation.
When leadership removed friction and built trust with users, compliance improved naturally.
AI Is Accelerating Everything
Payton shared several statistics illustrating how quickly artificial intelligence is expanding across industries. Every minute, global spending on AI reaches about $1.21 million. During that same minute, dozens of new AI agents are created and deployed while tens of thousands of AI-generated images appear online.
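To put the per-minute figure in perspective, a quick back-of-envelope calculation annualizes it. The extrapolation assumes the $1.21 million rate holds around the clock, all year; it is an illustration of scale, not a cited forecast:

```python
# Back-of-envelope scale check: annualize the cited per-minute AI spend.
# Assumption: the $1.21M/minute rate from the talk holds continuously.
PER_MINUTE_USD = 1_210_000
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

annual_usd = PER_MINUTE_USD * MINUTES_PER_YEAR
print(f"Implied annual run rate: ${annual_usd / 1e9:.0f} billion")
# → Implied annual run rate: $636 billion
```

Even under that rough assumption, the per-minute statistic implies spending on the order of hundreds of billions of dollars per year.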
The technology is fueling innovation across business operations, but it is also empowering cybercriminals.
Since the release of modern generative AI tools in 2022, phishing attacks have increased dramatically. Payton cited figures showing phishing activity has surged by more than 4,000% as attackers use AI to generate convincing messages and impersonations at scale.
Attackers now have tools that allow them to automate deception, making traditional detection methods more difficult and increasing the importance of layered defenses and identity verification.
A Global Perspective on AI Governance
Conversations with governments and organizations around the world offered a window into how different regions are approaching AI governance.
Several themes stood out:
- Australia is investing heavily in workforce upskilling to ensure humans remain in the loop of AI decision-making.
- New Zealand has anchored its national AI ethics approach in indigenous values and collective responsibility for public safety.
- Jordan has launched a national initiative to train one million AI coders while implementing a national AI ethics framework.
- Brazil is focusing on privacy protections while using AI to assist teachers in grading and student support.
- Japan is integrating robotics and AI into elder care while maintaining a philosophy that technology should support, not replace, human relationships.
These varied strategies reflect the common concern of ensuring AI innovation advances responsibly without undermining societal trust.
The Question Many Organizations Cannot Answer
One of the most engaging moments of the talk came when Payton posed a question she frequently asks boards and executives.
“When AI scales and makes decisions without human intervention and one of them goes terribly wrong, who answers for this in the boardroom?”
In many organizations, there is no clear answer.
Executives often assume cybersecurity teams will handle the consequences, even when the issue stems from an automated business decision rather than a security breach.
But that assumption exposes a major gap in governance.
Few companies have updated their incident response and resilience plans to address mistakes made by autonomous systems, and without defined ownership, the risks of AI adoption increase significantly.
The Wrap
Payton closed her session with a reminder that organizations do not need to solve every challenge at once.
Security leaders should begin with small pilots, test new ideas, and create guardrails around AI deployments. Just as importantly, they should invest in developing their own workforce.
Studies increasingly show that companies that train employees and build internal expertise tend to innovate more successfully than those that rely entirely on outside vendors.
“Bet on your people.”
The organizations that succeed will be the ones that not only adopt new technology but also design systems around the humans who use them.


