The New Era of Tiered AI Deployment: Balancing Power and Safety
The current landscape of artificial intelligence is shifting away from a “one-size-fits-all” release strategy. We are seeing a transition toward tiered deployment, where the most capable models are kept behind closed doors while “generally available” versions are optimized for the masses.

A prime example is the distinction between Claude Opus 4.7 and Claude Mythos Preview. While Opus 4.7 is the most powerful model available to the general public, it does not actually advance the “capability frontier.” That title belongs to Mythos Preview, a model that has consistently outperformed Opus 4.7 on every relevant evaluation.
This strategy allows developers to deploy practical tools for software engineering and creative work while limiting the reach of models that possess high-risk capabilities. By releasing a slightly less capable model first, companies can test cybersecurity safeguards in the real world before attempting a broader rollout of “Mythos-class” models.
Project Glasswing and the Future of AI-Driven Cybersecurity
The introduction of Project Glasswing signals a pivot toward AI models specifically designed for the cybersecurity domain. Unlike general-purpose AI, models like Mythos Preview excel at identifying software weaknesses and security flaws.
Because of the inherent risks associated with these capabilities, access is strictly controlled. Currently, Mythos Preview is limited to a select group of high-profile partners, including Nvidia, JPMorgan Chase, Google, Apple, and Microsoft.
This trend suggests a future where “offensive” and “defensive” AI capabilities are siloed. We can expect to see more specialized initiatives where AI is used to proactively hunt for vulnerabilities before malicious actors can exploit them, provided the tools remain in the hands of verified entities.
The Evolution of Software Engineering Tasks
Beyond high-level security, the general-purpose evolution of these models is focused on reducing “hand-holding.” Opus 4.7 represents a step up from previous versions like Opus 4.6, particularly in complex coding and advanced software engineering tasks.
Industry leaders and early testers—including companies like Shopify, Notion, Vercel, Databricks, and Replit—are already leveraging these improvements to handle real-world engineering work, analyze images, and create professional documents and slides with greater creativity.
The Rise of the “Verification Program” Model
As AI models become more powerful, the “block all” approach to safety is proving insufficient for specialists. The emergence of the Cyber Verification Program suggests a future where access to AI is based on identity and intent verification.

Instead of a blanket ban on high-risk requests, AI providers are moving toward a system where verified security professionals can access more potent tools for legitimate research. This creates a controlled environment that supports innovation in cybersecurity without risking the general public’s safety.
This shift toward verified access is likely to expand into other sensitive fields, such as biotechnology or financial forecasting, where the potential for misuse is high but the potential for professional benefit is even higher. You can read more about these evolving safety frameworks on our site.
Frequently Asked Questions
What is Claude Opus 4.7?
It is the most powerful “generally available” model, but it is less capable than the privately released Claude Mythos Preview.
How does Mythos Preview differ from Opus 4.7?
Mythos Preview is a cybersecurity-focused model that excels at finding software flaws and outperformed Opus 4.7 on every relevant evaluation. Opus 4.7 is designed for practical, real-world tasks with stricter built-in safeguards.
How much does Opus 4.7 cost?
Pricing remains consistent with Opus 4.6, at $5 per million input tokens and $25 per million output tokens.
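As a rough illustration, those per-million-token rates translate into per-request costs like this (a minimal sketch; only the $5/$25 rates come from the article, while the helper function and example token counts are hypothetical):

```python
# Estimate the cost of a single request at the stated Opus rates.
# Rates ($ per million tokens) are from the article; the function
# name and example token counts below are illustrative.

INPUT_RATE_PER_M = 5.00    # $5 per million input tokens
OUTPUT_RATE_PER_M = 25.00  # $25 per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a long prompt with a moderate-length response.
print(estimate_cost(200_000, 50_000))  # $1.00 input + $1.25 output = $2.25
```

Because output tokens cost five times as much as input tokens at these rates, long responses dominate the bill even when prompts are large.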
Who can access Claude Mythos Preview?
Access is currently limited to select partners, such as Microsoft, Google, Apple, Nvidia, and JPMorgan Chase.
Should the most powerful models remain private, or should they be open to more users? Let us know your thoughts in the comments below or subscribe to our newsletter for more deep dives into the future of AI.
