Blog | Linux Foundation

Introducing Project Glasswing: Giving Maintainers Advanced AI to Secure the World's Code

Written by Jim Zemlin | Apr 7, 2026 6:07:23 PM

In the late fall of 2025, artificial intelligence models made a big leap in coding ability. Since then, we have been hearing about a darker side of this breakthrough — how the new generation of AI models are also astoundingly good at identifying previously undiscovered software vulnerabilities. These discoveries are impacting some of the most security-hardened systems in the world. What’s more, the AI systems making these discoveries demonstrate incredible sophistication, often chaining together multiple vulnerabilities to generate more critical risks.

Software in the crosshairs

Because software powers everything in the world, attackers have long targeted code, both proprietary and open source, as a way to maximize impact. Open source is the dominant form of software consumed in enterprise today, making it the world's biggest target. This is especially true for the most widely used software projects that underpin a wide swath of our economy, our society, and other aspects of our lives. From hospitals to banks to telecommunications and transportation providers, open source is the essential ingredient in their technology stacks.

At the same time, open source software maintainers have never faced more stress. Higher velocity of pull requests and security bug reports (many of them AI-generated), a greater volume of cyberattacks, and increasingly sophisticated campaigns to compromise supply chains combine to make maintainers' lives harder. Add the looming threat of a tidal wave of AI-generated zero-day vulnerabilities, and we face a potentially catastrophic situation.

Addressing the maintainers' dilemma with Project Glasswing

This is why Project Glasswing matters. Project Glasswing brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks to put a new frontier AI model — Claude Mythos Preview — to work for defensive security purposes. Anthropic is committing up to $100M in usage credits across this effort, with $2.5M donated to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to help open source maintainers respond directly to this changing landscape.

Open source security has historically been a thankless task. Triaging and fixing bugs, writing and testing patches, crafting careful communications strategies – none of this is what maintainers had in mind when they made their first commit to a project. We believe AI can help address this. When it works, AI provides scale and leverage. The new generation of frontier models can analyze code at broad scale and at high velocity, and detect patterns based on previous bug fixes.

Equally important, early indications point to Claude Mythos Preview and other advanced AI models not only finding vulnerabilities but also providing viable patches. When I recently spoke with the Linux kernel's Greg Kroah-Hartman, he was initially skeptical, but more recently, he has told me that some of the patches generated by AI tools were “pretty good” – which is high praise, coming from him.

The combined ability to identify and patch vulnerabilities, at broad scale and a faster pace, can dramatically reduce the security burden on maintainers. Project Glasswing will allow maintainers, alongside critical enterprises, to improve their projects' security and to spend more time advancing their code instead of playing defense. For all of us, it's a win-win: safer code, faster software development, and more incentive to become a maintainer.

Making powerful AI accessible to maintainers of critical software

In the past, the most advanced security capabilities have been a luxury reserved for organizations with large budgets and dedicated teams. Open source maintainers, whose software underpins most of the world's critical infrastructure, have been left to figure it out on their own. Because the dark side of AI-augmented security is AI-augmented insecurity, we must ensure that access to the best AI cybersecurity tooling is evenly distributed and not concentrated in the hands of the few with the cash and the headcount.

None of this matters if the cost is prohibitive. Project Glasswing is designed to ensure that maintainers get access to these tools for free. Removing economic friction is the only way to foster wide adoption of top AI cybersecurity capabilities.

The time is now to make the world’s software safer

I am optimistic, but the urgency is real. We are in the most dangerous period: the transition during which attackers might gain a significant advantage as the technology ecosystem digests the impact of AI. We have already seen evidence of what skilled security teams can do when leveraging AI, and we have witnessed novel exploit kits written with AI assistance in the wild. Falling behind is not an option. Project Glasswing is a major step, and it is only the first. Together, we can keep the world's open source software safe.