Recently announced AI systems from Google and Microsoft take distinct approaches to cybersecurity.
Google’s Big Sleep is built to examine software code and identify vulnerabilities before attackers can take advantage of them. Microsoft’s Project Ire begins its work after a threat has already been introduced, studying malware in detail to determine how it operates and whether it poses a risk.
Each system addresses a different stage of the cybersecurity process. Together, they illustrate how prevention and response can both be strengthened through automation.
These tools produce findings and act on them with a level of autonomy previously limited to human analysts. One hunts for undiscovered flaws in software that appears secure, while the other inspects unknown software that is already behaving suspiciously.
They do not overlap in function, but they are aligned in purpose.
Why It Matters: The window between a system flaw and its exploitation is shrinking. The longer it takes to find a vulnerability or understand an active threat, the greater the damage. By putting AI to work at both ends of that timeline, these tools reduce reliance on human reaction time and extend defensive coverage beyond what human teams alone can manage. This kind of automation could mean the difference between containment and compromise.
- Big Sleep Identifies Flaws Before Attacks: Developed by Google DeepMind and Project Zero, Big Sleep searches code for weaknesses that could be exploited. It discovered 20 vulnerabilities in open-source tools such as FFmpeg and ImageMagick, each found and reproduced by the AI without human guidance. Human reviewers confirmed the findings before reports were sent out.
- Project Ire Detects and Analyzes Active Malware: Microsoft’s Project Ire works without prior knowledge of a file’s behavior or origin, examining software and judging whether it is dangerous. The system recently produced an internal report that led to the automatic blocking of a threat, the first time the company had acted on a report no human wrote.
- Separate Tools, Complementary Roles: Big Sleep and Project Ire represent parallel efforts aimed at different stages of the cybersecurity process. One prevents attacks by identifying weaknesses in code before they can be exploited; the other responds to threats already in progress by analyzing unfamiliar software behavior, shortening the time needed to react when a breach occurs.
- AI as an Independent Security Analyst: Both of these tools conduct original analysis. They evaluate risks, explain their findings, and generate actionable results. This degree of autonomy points to a future where AI is a functional part of the security team in its own right. For organizations, this could mean faster decisions and greater consistency.
- Responding to Strategic Pressure: Google and Microsoft both point to a growing competitive environment in which attackers are adopting AI-based tools of their own. Meeting that challenge requires systems that can adapt and improve quickly. These projects show that major players are moving to close the gap.
Go Deeper -> Google says its AI-based bug hunter found 20 security vulnerabilities – TechCrunch