Anthropic Limits AI Model Distribution Amid Security Fears

Anthropic, a leading AI research company, has stirred the tech industry by limiting distribution of its powerful new AI model, Mythos Preview. The decision stems from the model's unprecedented cybersecurity capabilities, which could pose national security risks if made widely accessible. Under an initiative called Project Glasswing, the company is confining the model's use to approximately 40 select organizations, including tech giants Amazon, Apple, Microsoft, and Google, along with several major cybersecurity firms.

Mythos Preview has reportedly identified thousands of zero-day vulnerabilities across all major operating systems and web browsers in the past few weeks, including bugs that had persisted for one to two decades. The model has shown an exceptional ability not only to detect security flaws but also to develop sophisticated exploits, raising concerns about potential misuse by malicious actors.

In response to these concerns, Anthropic is pledging up to $100 million in usage credits and $4 million in direct donations to open-source security organizations, aiming to help defenders secure critical infrastructure before similar capabilities become widely accessible. High-ranking federal officials, including Treasury Secretary Bessent and Federal Reserve Chair Powell, have discussed the national security implications with tech leaders. Notably, OpenAI is reportedly developing a similar model, named "Spud," with comparable cybersecurity capabilities.

Source: Anthropic