Anthropic is facing criticism after reports that unauthorized users accessed Mythos, an AI model designed to detect software vulnerabilities. Mythos was released under Project Glasswing and limited to selected partners testing cybersecurity use cases.
Anthropic confirmed that some users outside the approved group accessed the system, though not through its main API. Officials said the issue may be linked to a third-party vendor environment, raising concerns about supply-chain security risks.
Reports suggest attackers may have guessed the address where the model was hosted, gaining limited access without compromising core systems. Anthropic said there is no evidence that its internal systems were breached and that the investigation is ongoing.
The case underscores the risks posed by external partners: even well-defended systems can be exposed through third-party access points. The incident has also been linked to Mercor, a startup connected with AI labs that was previously affected by a supply-chain attack.
Anthropic had promoted Mythos as a major cybersecurity tool, claiming the model could identify thousands of vulnerabilities. Experts such as Bobby Holley questioned those claims, arguing the results were comparable to what human researchers can already find. Other researchers flagged missing details in the public claims, questioning the reported zero-day discoveries and accuracy figures.
Industry leaders note that attackers already rely on open-source tools for vulnerability research, so Mythos may not significantly change the threat landscape. Experts also pointed out that some exploits required human guidance, indicating the model is not fully autonomous.
Overall, the incident has sparked debate about AI security, showing that even advanced cybersecurity tools can face access risks of their own.