
A Democratic senator on Thursday sounded the alarm on the dangers of unregulated artificial intelligence after AI company Anthropic revealed it had thwarted what it described as “the first documented case of a large-scale cyberattack executed without substantial human intervention.”
According to Anthropic, it is highly likely that the attack was carried out by a Chinese state-sponsored group, and it targeted “large tech companies, financial institutions, chemical manufacturing companies, and government agencies.”
After a lengthy technical explanation of how the attack occurred and how it was ultimately thwarted, Anthropic discussed the security implications of AI that can execute mass cyberattacks with minimal direction from humans.
“The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so,” the firm said. “With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers.”
Anthropic went on to say that hackers could now use AI to carry out tasks such as “analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator,” which could open the door to “less experienced and resourced groups” carrying out some of the most sophisticated attack operations.
The company concluded by warning that “the techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.”
That industry-led strategy wasn’t sufficient for Sen. Chris Murphy (D-Conn.), who said government intervention would be needed to mitigate the potential harms caused by AI.
“Guys wake the f up,” he wrote in a social media post. “This is going to destroy us—sooner than we think—if we don’t make AI regulation a national priority tomorrow.”
Democratic California state Sen. Scott Wiener noted that many big tech firms have continuously fought against government oversight of AI despite threats that are growing stronger by the day.
“For two years, we advanced legislation to require large AI labs to evaluate their models for catastrophic risk or at least disclose their safety practices,” he explained. “We got it done, but industry (not Anthropic) continues to push for federal ban on state AI rules, with no federal substitute.”
Some researchers who spoke with Ars Technica, however, were skeptical that the AI-driven hack was as sophisticated as Anthropic claimed, arguing that current AI technology is not yet capable of executing an operation of that caliber.
Dan Tentler, executive founder of Phobos Group, told the publication that the efficiency with which the hackers purportedly got the AI to carry out their commands was wildly different from what he has experienced using the technology.
“I continue to refuse to believe that attackers are somehow able to get these models to jump through hoops that nobody else can,” he said. “Why do the models give these attackers what they want 90% of the time but the rest of us have to deal with ass-kissing, stonewalling, and acid trips?”
From Common Dreams.

