Despite years of headlines about a cybersecurity talent shortage, hiring more people may no longer be the answer. The speed and scale of today’s threats have outpaced what human teams, even large ones, can handle alone. At the same time, companies are struggling to fill AI-related roles with skill sets that scarcely exist yet in the talent market.
It’s a Catch-22.
We need AI to handle threats, but we also need people who can manage and secure the AI.
So, the conversation is shifting.
Instead of asking how many people to hire, organizations are starting to ask better questions:
- Are we focused on the right risks?
- Can our teams adapt fast enough?
- Are we measuring security by outcomes, not headcount?
Why It Matters: Cybersecurity has slowly shifted into a prioritization and adaptation problem. AI has changed how threats emerge and how defense must operate. Yet many organizations still rely on outdated hiring strategies, looking to staff their way through challenges that require smarter systems, not just more people. The shift to AI-driven risk management demands new skills, new workflows, and a clearer link between security efforts and business outcomes.
- AI Is Taking Over Repetitive Work, But Not Replacing Humans: As threat volume explodes, AI is becoming essential for detecting patterns, correlating signals, and highlighting the risks that matter. This allows human teams to focus on higher-level strategy and decision-making, rather than drowning in alert fatigue. The human-plus-AI model is emerging as the most effective path forward.
- The Talent Shortage May Be a Hiring System Failure: There’s growing evidence that cybersecurity’s hiring gap is less about talent scarcity and more about mismatched expectations. Entry-level applicants are blocked by overqualified job postings, HR systems mislabel roles, and there’s too little focus on mentorship or hands-on learning, leaving available talent on the sidelines.
- The Rise of the Risk Operations Center (ROC): In response to the limits of traditional Security Operations Centers (SOCs), cybersecurity firm Qualys advocates for a Risk Operations Center (ROC) model. Unlike SOCs, which react to incidents, ROCs are designed to proactively prioritize and orchestrate remediation for the risks that matter most to the business. With AI at the core, ROCs enable continuous threat assessment aligned with an organization’s unique risk posture, helping connect cybersecurity efforts directly to business outcomes.
- AI Introduces New Vulnerabilities and New Roles: AI-generated code is fast but often insecure. Studies show nearly half of it contains vulnerabilities, meaning organizations must embed security reviews, continuous scanning, and human oversight into their pipelines. New roles around AI governance, model defense, and lifecycle security are emerging in response.
- Outcomes Over Headcount: Boards and executives are asking new questions: not “How big is the team?” but “What risk did we reduce?” Demonstrable security outcomes like fewer incidents, faster responses, and better continuity are now the benchmark. Upskilling, cross-functional deployment, and smarter tools will outperform sheer hiring volume.
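The pipeline-gating idea above, blocking AI-generated code that fails a security review, can be sketched as a minimal severity gate. This is an illustrative sketch only: the Finding shape, severity names, and threshold are assumptions for the example, not any particular scanner’s API.

```python
# Minimal sketch of a CI security gate for scanner findings.
# Finding, SEVERITY_ORDER, and the "fail_at" threshold are
# illustrative assumptions, not tied to a specific tool.
from dataclasses import dataclass

SEVERITY_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class Finding:
    rule: str
    severity: str  # one of "low", "medium", "high", "critical"

def gate(findings: list[Finding], fail_at: str = "high") -> bool:
    """Return True if the build may proceed, i.e. no finding
    at or above the fail_at severity threshold."""
    threshold = SEVERITY_ORDER[fail_at]
    return all(SEVERITY_ORDER[f.severity] < threshold for f in findings)

findings = [Finding("hardcoded-secret", "high"),
            Finding("weak-hash", "medium")]
print(gate(findings))              # False: a high-severity finding blocks the build
print(gate(findings, "critical"))  # True: passes under a looser threshold
```

In a real pipeline, a step like this would sit between the AI code-generation stage and merge, turning “human oversight” from a slogan into an enforced checkpoint.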
Go Deeper -> Why cybersecurity cannot hire its way through the AI era – CyberScoop
Proof of Concept: What’s Broken in Cybersecurity Hiring? – Bank Info Security