BREAKING NOW
Apr 3, 2025 4:52 pm
Global Media Network
Anthropic Probe Into Alleged Mythos AI Access Raises Alarm
Anthropic is investigating a possible Mythos AI security breach after reports claimed that unauthorized users may have gained access to its highly advanced AI model, which is designed to help detect cybersecurity weaknesses and simulate cyber-attacks. The US-based AI company confirmed it is reviewing the situation after a media report suggested that a small group of users accessed the restricted model through a third-party vendor environment. The company said it is taking the matter seriously and is now checking how the access may have happened.

Mythos AI is not publicly available. It is a powerful research model developed by Anthropic and is considered highly sensitive because of its ability to support advanced cybersecurity testing and hacking simulations. The model has been shared only with a small number of selected companies for testing purposes.

According to reports, the alleged access took place on the same day Anthropic announced limited testing of Mythos with major firms, including Apple and Goldman Sachs. This raised concerns that early access controls may not have fully prevented outside entry. The report claims that a few individuals accessed the system through credentials linked to a third-party contractor working with Anthropic, and suggests that they used methods similar to those cybersecurity researchers employ to explore system behavior.

While the users reportedly did not carry out harmful cyber-attack simulations, the situation still triggered concern. Sources said the group was more interested in experimenting with the technology than in attempting any malicious activity. Despite this, experts say even limited access to such a system can create serious risks: AI models like Mythos are designed to identify vulnerabilities in digital systems, and in the wrong hands such tools could be used to find weaknesses in real-world networks.
The Mythos AI security breach report has increased pressure on AI companies to strengthen safety controls. Governments and cybersecurity agencies are closely watching how advanced AI systems are managed and protected.

UK AI minister Kanishka Narayan has previously warned that businesses should be concerned about the model's capabilities, saying such systems could identify flaws in IT systems that hackers might later exploit. The UK's AI Security Institute has also reviewed Mythos, describing the model as a significant step forward in cyber capability compared to earlier AI systems. The institute warned that it could perform complex multi-step cyber tasks without human assistance. In controlled testing, Mythos attempted a 32-step cyber-attack simulation designed by researchers and succeeded in three out of ten attempts, demonstrating notable problem-solving ability in cybersecurity scenarios.

These findings have raised concerns about how such advanced AI tools should be controlled before wider release. Experts say the challenge is balancing innovation with safety and preventing misuse.

Anthropic has said that Mythos was designed specifically for controlled environments and security research, and it emphasizes that strict access rules are in place to prevent misuse and limit exposure to trusted partners only. However, the latest reports highlight the difficulty of fully securing powerful AI systems, especially when third-party vendors are involved. Security experts warn that supply chain access points often become weak spots in digital security.

AI safety researchers say incidents like this show the importance of stronger monitoring, identity checks, and access controls. They also stress the need for constant auditing when advanced models are shared with external partners. The investigation is still ongoing, and Anthropic has not confirmed whether a full breach occurred or how many users may have gained access.
The company is expected to release more details after completing its internal review.

Cybersecurity experts say the situation is a warning sign for the entire AI industry. As AI models become more powerful, controlling access and preventing misuse will become increasingly important. Governments are also expected to increase oversight of advanced AI systems that can be used for hacking simulations or cyber defense research, and many policymakers are calling for global standards on AI safety and access control.

The Mythos AI security breach investigation highlights growing concerns about how cutting-edge AI technology is managed, especially when it involves tools capable of influencing cybersecurity risks on a global scale.