
Project Glasswing Explained: Anthropic’s Push for Defensive Cybersecurity in the AI Era

Fakhar Khan
6 min read

Introduction to Project Glasswing

Project Glasswing is a new initiative from Anthropic that brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks around a single goal: use frontier AI to secure critical software at a moment when automated exploitation is getting easier, not harder. The name references the glasswing butterfly (Greta oto): transparent wings as a metaphor for vulnerabilities that can hide in plain sight, and for the transparency Anthropic says it wants in how defensive work is shared.

At the center of the announcement is Claude Mythos Preview, described as an unreleased frontier model trained by Anthropic and positioned for defensive security work by launch partners and a broader set of organizations that build or maintain critical infrastructure. Anthropic states it has committed up to $100 million in usage credits for Mythos Preview across these efforts, plus $4 million in direct donations to open-source security organizations.

This article is a practical overview for technical leaders and security-minded engineers: what Glasswing is, why Anthropic frames the moment as urgent, what partners are doing with Mythos Preview, and how pricing and follow-on reporting may work after the initial credit period. Details will evolve, so treat Anthropic’s Project Glasswing page and the Frontier Red Team blog as primary references for timelines, disclosures, and technical specifics.

Cybersecurity in the age of capable coding models

The challenge is to reconcile two truths at once. First, the software that runs finance, healthcare, logistics, and public infrastructure has always contained bugs; some become high-impact vulnerabilities when discovered by the wrong party. Second, frontier models are increasingly strong at reading code, reasoning about edge cases, and chaining steps that resemble agentic workflows. That combination changes the economics of both defense and offense: finding and exploiting flaws can require less bespoke human time than it did even a few years ago.

Anthropic’s public framing for Glasswing is explicitly defender-first: put advanced capability in the hands of vendors and maintainers who are accountable for patching, testing, and hardening systems before attackers with similar tools move at scale. That posture sits alongside a broader industry conversation about agentic coding and parallel automation in engineering workflows. If you are already tracking how teams adopt tools like Cursor’s parallel cloud agents, Glasswing is the security flip side of the same trend: models that can operate over codebases and systems are not only productivity levers, they are capability multipliers for whoever holds them.

For context on how Anthropic positions agentic work outside the terminal, Claude Cowork is a useful contrast: same family of ideas (multi-step execution, outcomes over single replies), different surface area. Glasswing is not a product tutorial; it is an industry coordination play with a security mandate.

What Anthropic reports about Claude Mythos Preview

According to Anthropic’s announcement, Claude Mythos Preview has been used to identify thousands of zero-day vulnerabilities, including issues in major operating systems and web browsers, with examples called out across OpenBSD, FFmpeg, and the Linux kernel (with coordinated disclosure and patching for those described in detail). The company also points to benchmark gaps between Mythos Preview and Claude Opus 4.6 on security-relevant evaluations such as CyberGym, alongside strong scores on software engineering benchmarks (for example, several SWE-bench variants). Anthropic states that Mythos Preview is not planned for general availability, and that future safeguards shipped with a forthcoming Claude Opus model are part of the plan to make Mythos-class models deployable more safely over time.

The implications fall into three practical buckets:

  • For defenders: If the claims hold in your environment, a model with this profile becomes another signal in vulnerability discovery, code review, and penetration-style testing, not a replacement for policy, ownership, or patch processes.
  • For leaders: Budget and governance questions follow immediately: who may use the model, on what systems, with what logging, and under what responsible disclosure rules.
  • For the ecosystem: Credits and donations toward open-source maintainers matter because much of the shared attack surface lives in libraries and services maintained with volunteer or under-funded effort.
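To make the "another signal" framing concrete, one way to combine model-reported findings with existing tooling is to merge them by location and rank by independent agreement, then severity. This is a minimal sketch under stated assumptions: the finding fields, the 1–5 severity scale, and the source names are all invented for illustration, not part of any Anthropic interface.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One candidate vulnerability from any tool (fields are illustrative)."""
    file: str
    line: int
    severity: int  # 1 (low) .. 5 (critical) -- an assumed scale
    source: str    # e.g. "model" or "static-analysis"


def triage(findings: list[Finding]) -> list[tuple[tuple[str, int], int, set[str]]]:
    """Group findings by (file, line); rank by how many independent sources
    agree on a location, then by the highest severity reported there."""
    grouped: dict[tuple[str, int], list[Finding]] = {}
    for f in findings:
        grouped.setdefault((f.file, f.line), []).append(f)

    ranked = []
    for loc, fs in grouped.items():
        sources = {f.source for f in fs}
        ranked.append((loc, max(f.severity for f in fs), sources))

    # More corroborating sources first, then higher severity.
    ranked.sort(key=lambda r: (len(r[2]), r[1]), reverse=True)
    return ranked


findings = [
    Finding("net/http.c", 120, 4, "model"),
    Finding("net/http.c", 120, 3, "static-analysis"),
    Finding("util/str.c", 88, 5, "model"),
]
for loc, severity, sources in triage(findings):
    print(loc, severity, sorted(sources))
```

The design point is that model output feeds a queue a human owns: corroborated findings rise, but nothing is patched or disclosed without the processes the bullets above describe.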

Partners, credits, and what comes after the preview

Launch partners are expected to use Mythos Preview inside defensive security programs. Anthropic describes an expanded group of more than 40 additional organizations that build or maintain critical software infrastructure, with access to scan and secure first-party and open-source systems. After the committed $100 million in usage credits is consumed, Anthropic states Mythos Preview will be available to participants at $25 per million input tokens and $125 per million output tokens on the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
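At those reported rates, estimating per-job cost is simple arithmetic. A quick sketch, with the token counts invented purely for illustration:

```python
INPUT_PER_MTOK = 25.00    # $ per million input tokens (reported post-credit rate)
OUTPUT_PER_MTOK = 125.00  # $ per million output tokens (reported post-credit rate)


def job_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of one scan job at the stated rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_PER_MTOK


# Hypothetical job: a 2M-token codebase scan producing a 100k-token report.
print(f"${job_cost(2_000_000, 100_000):.2f}")  # -> $62.50
```

Budget owners will want to multiply by scan frequency and repository count before committing to post-credit usage.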

On the open-source side, Anthropic mentions $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation, with maintainers able to apply through programs such as Claude for Open Source.

Within 90 days, Anthropic commits to a public report on lessons learned, vulnerabilities fixed where disclosure allows, and best practices shared between partners. The announcement also lists possible focus areas for future industry recommendations, including disclosure processes, update pipelines, supply-chain security, secure-by-design practices, and patching automation.

Plans, limitations, and how to use this news responsibly

Project Glasswing is a starting point, not a checklist you can paste into your own program without adaptation. Keep these guardrails in mind as you read secondary coverage:

  1. Verify against your threat model. Public benchmarks and partner quotes do not replace measurement on your stacks, your languages, and your release cadence.
  2. Treat powerful models as regulated capabilities. Access control, audit trails, and separation of duties matter as much as model quality.
  3. Coordinate disclosure. If tooling surfaces a serious issue, align with vendor processes and legal guidance before any public discussion.
  4. Watch the policy layer. Anthropic notes ongoing discussion with U.S. government stakeholders; national security and critical infrastructure rules may intersect with how models like Mythos Preview are used in practice.
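Guardrail 2 can be prototyped as a thin wrapper that enforces an allow-list and writes an append-only audit record before any scan is dispatched. A minimal sketch, assuming invented role names and record fields; the scanner is a stand-in lambda, not a real model call:

```python
import json
import time

AUTHORIZED_ROLES = {"security-engineer", "incident-responder"}  # assumed roles
AUDIT_LOG: list[str] = []  # stand-in for an append-only audit store


def audited_call(user: str, role: str, target: str, run_scan) -> str:
    """Check authorization, record the attempt (allowed or not), then dispatch."""
    allowed = role in AUTHORIZED_ROLES
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "user": user, "role": role,
        "target": target, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not scan {target}")
    return run_scan(target)


# Hypothetical scanner stand-in; a real one would invoke the model tooling.
result = audited_call("alice", "security-engineer", "repo:payments",
                      lambda t: f"scanned {t}")
print(result)          # -> scanned repo:payments
print(len(AUDIT_LOG))  # -> 1
```

Note that the denied attempt is logged before the exception is raised: the audit trail should capture refusals, not just successes.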

Conclusion

Project Glasswing names a simple idea with hard execution: put state-of-the-art model capability on the defensive side of the line while critical software still has room to patch, test, and redesign under pressure. Whether it becomes a durable shift depends on partner follow-through, open-source resourcing, and transparent reporting in the months ahead.

Next steps: Read Project Glasswing and the Mythos Preview Red Team post for technical detail and disclosure status. If your organization maintains upstream dependencies, map how you would evaluate a preview model under your security and procurement rules before you promise outcomes to leadership. Finally, keep pairing automation with human judgment: frontier models change the speed of discovery, but ownership, patch discipline, and trust still decide whether defenses actually improve.


Enjoyed this article?

Let's connect. SoftPyramid helps enterprises scale through architecture, AI operations, and cloud delivery — outcomes first.