A group of unauthorised users reportedly gained access to Anthropic’s new product, which the artificial intelligence company says is too powerful to release to the public as it “poses unprecedented cybersecurity risks”.
Anthropic’s new AI technology, Mythos, is designed for enterprise security and is being tested by a few technology and cybersecurity firms.
Members of a “private online forum” managed to gain access to Mythos through a third-party vendor, according to Bloomberg.
An Anthropic spokesperson told TechCrunch that the company was investigating the Bloomberg report, adding that there was so far no evidence the reported activity had affected Anthropic’s systems.
Members of the unauthorised group are part of a Discord channel that seeks out information about unreleased AI models, Bloomberg reported.
Citing a person employed by a third-party contractor that works for Anthropic, Bloomberg added that the group tried several strategies to gain access to the model.
The outlet also reported that the unauthorised group had been regularly using Mythos once it gained access.
Euronews Next has reached out to Anthropic for comment but did not receive a reply at the time of publication.
Anthropic said it would limit the release of its new AI model to a few tech and cybersecurity firms as part of its so-called Project Glasswing. The list includes Amazon, Apple and JP Morgan Chase.
Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are reportedly testing the Anthropic model too, according to reports.
Treasury Secretary Scott Bessent convened a meeting of senior American bankers in Washington in April to discuss the Mythos model. At the meeting, the banking executives were encouraged to use Anthropic’s Mythos model to detect vulnerabilities, according to Bloomberg.

