Dario Amodei, the CEO of leading AI company Anthropic, has written a 19,000-word warning that AI technology could spell disaster for humanity. While insisting that he and his company are developing AI responsibly, Amodei says we are facing unprecedented risks, in part because AI will soon have a much greater capacity to help people and governments commit crimes against humanity. AI models, Amodei says, are getting smarter all the time, and it may soon be possible for nefarious actors to wreak absolute mayhem with them, including releasing engineered pathogens, creating child sex abuse images on a massive scale, killing people with swarms of tiny drones, manipulating and blackmailing millions of people simultaneously, and more. We are, he says, at a crucial moment that will determine whether our species is capable of dealing with an exponential increase in our power to inflict cruelty and destruction, and because the technology is advancing faster than anyone expected, “we have no time to waste.”

For quick context, this comes after the following sequence of events:
January 3rd: Anthropic’s artificial-intelligence model Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolas Maduro, the Wall Street Journal reported on Friday, citing people familiar with the matter. Claude’s deployment came via Anthropic’s partnership with data firm Palantir Technologies (PLTR.O) -WSJ via Reuters
At which point:
January 29th: The Pentagon is at odds with artificial-intelligence developer Anthropic over safeguards that would prevent the government from deploying its technology to target weapons autonomously and conduct U.S. domestic surveillance. -Reuters
Followed by a resignation letter from an Anthropic safety researcher:
February 9th: “The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment,” Sharma wrote. He said he had “repeatedly seen how hard it is to truly let our values govern our actions” - including at Anthropic, which he said “constantly face pressures to set aside what matters most”. -BBC
And another response from the Pentagon:
February 16th: Defense Secretary Pete Hegseth is “close” to cutting business ties with Anthropic and designating the AI company a “supply chain risk” — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios. The senior official said: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.” -Axios