OpenAI announced an expansion of its Trusted Access for Cyber (TAC) program alongside the release of GPT-5.4-Cyber, introducing changes to how verified defenders use AI in security environments.
The update enables more detailed investigation of software behavior and supports the identification of vulnerabilities within compiled systems, providing greater visibility into potential threats.
The effort expands availability while maintaining safeguards tied to identity verification and how the tools are used. Access levels are connected to the strength of verification and the clarity of user intent, with more capable systems provided under tighter conditions.
This structure is designed to enable defensive use while maintaining oversight and accountability for sensitive capabilities.
Why It Matters: Cyber threats are advancing in capability and frequency, while critical systems remain vulnerable across software, infrastructure, and supply chains. Expanding access to advanced defensive tools narrows the window between detection and response, limiting how far issues spread and strengthening the reliability of the systems that organizations depend on.
- Expanding Access Through Verification Systems: The TAC program now supports a large group of individual defenders and organized teams, with additional tiers for those who complete deeper authentication. Individuals can verify directly, while enterprises apply through structured channels. This approach bases access decisions on clear criteria and automation, extending advanced capabilities to those responsible for protecting critical systems.
- Introduction of GPT-5.4-Cyber for Advanced Security Tasks: GPT-5.4-Cyber is designed to support defensive workflows with fewer restrictions when intent is validated. It enables detailed analysis of compiled software and is used for vulnerability discovery without requiring source code. The model is being released in a limited rollout to vetted users, with added constraints in environments where usage visibility is reduced, such as certain third-party integrations.
- Ongoing Deployment With Continuous Safety Updates: Development follows a staged release model where capabilities and safeguards are refined through real-world usage. Cyber-specific safety training has been introduced and expanded across recent model versions, alongside improvements in resistance to adversarial inputs and misuse attempts. This process allows safety measures to evolve alongside model capability without delaying deployment.
- Investment in Tools That Detect and Fix Vulnerabilities: Supporting systems such as Codex Security are used for continuous monitoring and remediation. These tools scan codebases and validate findings, contributing to thousands of resolved high-severity issues. Additional efforts, including grant programs and support for open-source projects, contribute to the security ecosystem and improve coverage across different types of infrastructure.
- Long-Term Approach to Balancing Access and Risk: Cybersecurity capabilities depend on user intent, the environment in which tools are used, and the level of access granted. This framework allows general use of standard models while applying tighter controls to more capable systems through verification and accountability. As AI systems advance, defensive measures expand alongside them, with future models expected to require stronger protections and closer oversight.
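The tiered model described above, where access depends on verification strength and validated intent, can be sketched roughly as a policy lookup. The tier names, capability flags, and rules below are hypothetical illustrations; OpenAI has not published the program's exact criteria.

```python
from dataclasses import dataclass

# Hypothetical tiers for illustration only; actual TAC tier names
# and capabilities are not publicly specified.
TIERS = {
    "unverified": {"model": "standard", "binary_analysis": False},
    "id_verified": {"model": "standard-plus", "binary_analysis": False},
    "deep_verified": {"model": "gpt-5.4-cyber", "binary_analysis": True},
}

@dataclass
class Requester:
    verification: str        # "unverified" | "id_verified" | "deep_verified"
    intent_validated: bool   # e.g. an approved enterprise application

def grant(requester: Requester) -> dict:
    """Map verification strength and validated intent to a capability tier."""
    tier = TIERS.get(requester.verification, TIERS["unverified"])
    # The most capable tier requires both deep verification and clear intent;
    # otherwise fall back to a less capable tier.
    if tier["binary_analysis"] and not requester.intent_validated:
        return TIERS["id_verified"]
    return tier

print(grant(Requester("deep_verified", True))["model"])   # gpt-5.4-cyber
print(grant(Requester("deep_verified", False))["model"])  # standard-plus
```

The point of the sketch is the shape of the policy, not its content: more capable systems sit behind conjunctive conditions (stronger verification AND validated intent), so a failure of either check degrades gracefully to a less capable tier rather than denying access outright.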
Go Deeper -> Trusted access for the next era of cyber defense – OpenAI
Trusted insights for technology leaders
Our readers are CIOs, CTOs, and senior IT executives who rely on The National CIO Review for smart, curated takes on the trends shaping the enterprise, from GenAI to cybersecurity and beyond.


