The next Anthropic model could be a ‘watershed moment’ for cybersecurity. Experts say that could also be an issue | CNN Business

The next wave of AI-driven cybersecurity attacks will be like nothing we’ve seen before.

That’s the message AI company Anthropic sent in a leaked blog post last week, where it warned that its upcoming AI model, called Mythos, and others like it could exploit vulnerabilities at an unprecedented rate.

And that’s not all: OpenAI warned in December that its upcoming models pose a “high” cybersecurity risk. Experts have said that AI can increase existing vulnerabilities and quickly develop new software hacks.

But the rise of AI agents, or AI assistants that can perform tasks on their own, takes that risk to another level, some experts warn. A single AI agent can detect vulnerabilities and exploit them faster and more persistently than hundreds of human hackers.

“Agent attackers are coming,” said Shlomo Kramer, founder and CEO of cybersecurity and networking company Cato Networks. “This is a remarkable event in the history of cybersecurity.”

Details about Mythos were leaked in an unpublished blog post previously reported by Fortune. Anthropic did not respond to CNN’s request for comment. But the company told Fortune that the leak was caused by a human error in its content management system.

“While Mythos is currently ahead of any other AI model in terms of cyber capabilities, it shows the coming wave of models that can exploit vulnerabilities in ways that far exceed defenders’ efforts,” Anthropic said in the draft.

The company is allowing other organizations to test the model early so they can strengthen their defenses “against the coming wave of AI-driven operations,” it said.

Anthropic also privately warned government officials about the possibility of large-scale cyberattacks enabled by Mythos, according to Axios.

But each subsequent model release from the major AI labs will present more and more serious cybersecurity threats, Kramer told CNN.

“After Mythos is the next OpenAI model, and Google Gemini is next, and a few months after them are the Chinese open models,” he said.

AI makes it possible to exploit vulnerabilities immediately after finding them, said Evan Peña, chief security officer of cybersecurity company Armadin.

But there are still limits to what models can do, according to Peña.

Advanced AI models are great for researching software vulnerabilities and creating code to exploit them. But they don’t have the context that a hacker might have about what an organization’s most valuable information is, Peña said.

There will always be a role for humans in AI-enabled cyberattacks, said Joe Lin, co-founder and CEO of Twenty, a firm that sells offensive cyber capabilities to the US government.

“We have to make sure that we build weapons systems where people are always in control of the decisions and outcomes, because when a machine is involved in killing, a person always has to bear the consequences,” he said.

An example of how AI has made unsophisticated hackers more dangerous came in January, when a Russian-speaking hacker used multiple AI tools to compromise more than 600 devices running popular firewall software across more than 55 countries, according to Amazon Web Services’ security research team. The attacker leaned on AI services to implement and scale known attack techniques at every stage of the operation despite limited technical skills, AWS said.

The hacker used Anthropic’s Claude model along with the Chinese-made DeepSeek in the attack, according to Eyal Sela, Gambit Security’s director of vulnerability analysis. At one point, the hacker asked Claude in Russian to build a website for managing hundreds of hacking targets, according to chat logs with the AI models that Sela shared with CNN.

AI gives criminals new “powers” by lowering the technical skill needed to exploit systems, according to Sela.

In February the hacker used Claude in a series of attacks against Mexican government agencies, stealing sensitive tax and election information, Bloomberg reported.

China and other US adversaries are “hunting for any opportunity to improve their own AI capabilities,” Lin said.

That means there may be more attempts to steal US AI models as adversaries try to “improve their cyber weapons capabilities,” he said.

The advancement of AI in cybersecurity is a double-edged sword: Attackers can use AI models and agents to enhance their capabilities, while the same capabilities give defenders real-time monitoring, rapid threat identification, and automated response at a scale no human team can match.

But attackers only need to find one way in, while defenders have to cover every area. Kramer described it as building “an army of good people” to “fight an army of bad people” to hold the line.

He said: “You have to run as fast as you can to stay in one place.”
