Anthropic and the Pentagon are reportedly at odds, potentially stalling a high-stakes $200 million partnership between the AI startup and the US defense department. The standstill is said to stem from a fundamental clash of ethics and authority over how artificial intelligence (AI) should be used for national defense.

Citing sources familiar with the matter, news agency Reuters has reported that the developer is resisting government demands that would allow its technology to be used for autonomous weapons targeting and domestic surveillance within America.
Anthropic’s reported conflict with the Pentagon over AI use in national security
Anthropic has reportedly insisted on safeguards that would prevent its models from being used to spy on American citizens or assist in lethal weapons targeting without direct human oversight. The report said that these “safety guardrails” are baked into the core training of Anthropic’s AI models, essentially making them resistant to taking actions that could lead to harm.

Pentagon officials, however, have reportedly ‘bristled’ at these restrictions. Citing a January 9 department memo on AI strategy, the government argues that it should have the right to deploy commercial AI however it sees fit, as long as its actions comply with US law, regardless of a private company’s internal usage policies.

“We are in productive discussions with the Department of War about ways to continue that work,” a spokesperson for Anthropic stated, while maintaining that the company’s AI is currently used for “national security missions” that fall outside the disputed lethal categories.
Anthropic CEO Dario Amodei’s AI warning in his 20,000-word essay
The standoff has been reported days after Amodei said in his 20,000-word essay that while AI should support national defense, it must draw a line when it comes to ‘AI abuse’. Here’s what he said:

We need to draw a hard line against AI abuses within democracies. There need to be limits to what we allow our governments to do with AI, so that they don’t seize power or repress their own people. The formulation I have come up with is that we should use AI for national defense in all ways except those which would make us more like our autocratic adversaries.

Where should the line be drawn? In the list at the beginning of this section, two items—using AI for domestic mass surveillance and mass propaganda—seem to me like bright red lines and entirely illegitimate. Some might argue that there’s no need to do anything (at least in the US), since domestic mass surveillance is already illegal under the Fourth Amendment. But the rapid progress of AI may create situations that our existing legal frameworks are not well designed to deal with. For example, it would likely not be unconstitutional for the US government to conduct massively scaled recordings of all public conversations (e.g., things people say to each other on a street corner), and previously it would have been difficult to sort through this volume of information, but with AI it could all be transcribed, interpreted, and triangulated to create a picture of the attitude and loyalties of many or most citizens. I would support civil liberties-focused legislation (or maybe even a constitutional amendment) that imposes stronger guardrails against AI-powered abuses.

The other two items—fully autonomous weapons and AI for strategic decision-making—are harder lines to draw since they have legitimate uses in defending democracy, while also being prone to abuse. Here I think what is warranted is extreme care and scrutiny combined with guardrails to prevent abuses. My main fear is having too small a number of “fingers on the button,” such that one or a handful of people could essentially operate a drone army without needing any other humans to cooperate to carry out their orders. As AI systems get more powerful, we may need to have more direct and immediate oversight mechanisms to ensure they are not misused, perhaps involving branches of government other than the executive. I think we should approach fully autonomous weapons in particular with great caution, and not rush into their use without proper safeguards.
