In late September, Shield AI co-founder Brandon Tseng stated that weapons in the United States would never be fully autonomous, meaning that an artificial intelligence algorithm would never make the final decision to kill. "Congress doesn't want that, nobody wants that," Tseng said.

A few days later, however, Palmer Luckey, co-founder of Anduril, took a more open stance toward autonomous weapons, expressing skepticism about the arguments against them. "America's opponents use nice-sounding phrases, like, 'Can't you agree that a robot shouldn't decide who lives or dies?'" Luckey said in a talk at Pepperdine University. "My answer is: Where is the moral high ground on a mine that doesn't distinguish between a school bus full of children and a Russian tank?"

Regulatory ambiguity and the role of AI

Shannon Prior, spokesperson for Anduril, clarified that Luckey was not advocating for robots to be programmed to kill autonomously, but rather that he was concerned about the use of "bad AI" by people with bad intentions. This stance is shared by Trae Stephens, co-founder of Anduril, who stated last year that the technologies the company is developing enable humans to make the right decisions, ensuring that there is always a responsible party in situations that could involve lethality.

However, the U.S. government's position on fully autonomous weapons remains ambiguous. Although the military does not currently purchase fully autonomous weapons, some technologies it already uses, such as missiles and mines, operate autonomously once deployed. The difference is that these systems lack the capability to make complex decisions, such as identifying and selecting a target to attack, without human intervention.

Fears of an arms race

The fear of many in Silicon Valley and Washington is that countries like China or Russia will be the first to deploy fully autonomous weapons, forcing the U.S. to follow suit. At a UN debate on autonomous weapons, a Russian diplomat hinted that for Russia, human control was not as high a priority as it was for other nations.

Joe Lonsdale, co-founder of Palantir and a shareholder in Anduril, emphasized at a Hudson Institute event that policymakers should take a flexible approach to autonomous weapons, arguing that strict rules could jeopardize the country's security on the battlefield. As conflicts such as the war in Ukraine provide new data and scenarios for testing military technologies, the debate over full weapons autonomy remains a central issue. Meanwhile, companies such as Anduril and Palantir are actively working to persuade policymakers to weigh both the possibilities and the risks of integrating AI into defense systems.