Anthropic is investigating reports that an unauthorized group gained access to its restricted AI cybersecurity tool, Claude Mythos Preview. The tool, released only to a select set of organizations under the “Project Glasswing” initiative, is designed to identify vulnerabilities in software systems.
According to reports, the access came not through a direct breach of Anthropic’s infrastructure but through a third-party vendor environment connected to the rollout.
The group involved is reportedly part of a private Discord community that tracks and experiments with unreleased AI models. Members are said to have identified potential access points based on patterns in how Anthropic deploys its systems, and to have begun using the tool shortly after its public announcement.
Anthropic has stated that it has found no evidence of impact on its internal systems, and the investigation into the scope and implications of the access is ongoing.
Why It Matters: Mythos was presented as one of Anthropic’s most tightly controlled AI releases, yet the incident raises questions about how access is managed once systems are extended to vendors and partners. While there is no indication of a direct breach of Anthropic’s core infrastructure, the situation adds pressure on the company to demonstrate that its safeguards around limited-release tools are working as intended. The incident also invites closer regulatory and industry scrutiny, with key details, including the scope and duration of the access, still not publicly confirmed.
- Vendor environment as entry point: Reports indicate the group accessed Mythos through a third-party contractor setup rather than Anthropic’s own systems, suggesting the exposure may be linked to shared environments, credentials, or access controls used in partner networks.
- Coordinated discovery methods: The individuals were connected to a Discord-based community that tracks unreleased AI tools, using techniques such as scanning public repositories, monitoring infrastructure patterns, and testing likely endpoints to locate systems.
- Timing aligned with launch: The access reportedly occurred on or very close to the day Mythos was announced, indicating that the system was identified and reached quickly after it became active in limited release.
- Evidence shared with media: Members of the group provided screenshots and live demonstrations to journalists as proof that they were able to interact with the model, though details of what was done with the system remain limited.
- Scope still under review: Anthropic has acknowledged the reports and is conducting an investigation, but has not confirmed how broadly the system was accessed, how long the access persisted, or whether any vulnerabilities were identified during that time.
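The discovery techniques described in the bullets above, particularly probing likely endpoints, typically begin with enumerating predictable hostname patterns for a newly announced service. A minimal, purely illustrative sketch of that enumeration step; every name, pattern, and domain here is invented and corresponds to no real Anthropic infrastructure:

```python
from itertools import product as cartesian


def candidate_hosts(name: str, domains: list[str]) -> list[str]:
    """Enumerate plausible hostnames for a limited-release service.

    Hypothetical illustration only: the prefixes, suffixes, and domains
    are invented for this sketch, not real endpoints.
    """
    prefixes = ["", "api.", "preview.", "staging."]
    suffixes = ["", "-preview", "-beta"]
    return [
        f"{prefix}{name}{suffix}.{domain}"
        for prefix, suffix, domain in cartesian(prefixes, suffixes, domains)
    ]


# Defenders can run the same enumeration against their own DNS records
# to spot guessable names before a limited release goes live.
hosts = candidate_hosts("mythos", ["example.com"])
```

The point for security teams is that naming conventions are themselves an attack surface: the same pattern-matching an outside group can do, an internal review can do first.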
Go Deeper -> Anthropic investigates unauthorized Mythos access by Discord group – Cybernews
Unauthorized Group Gains Access to Anthropic’s Exclusive Cyber Tool Mythos – Cyber Security News
Anthropic’s latest AI model is sparking fears from cybersecurity experts and the banking sector. Here’s why. – CBC