As US military goes full speed ahead on AI, how is it being used?

Some lawmakers have concerns about what they say is a lack of guardrails.

May 4, 2026, 5:49 PM

Top U.S. defense officials are making it clear: the U.S. military is moving full throttle in the race to implement artificial intelligence, despite concerns from some Democratic lawmakers that the military is sacrificing critical guardrails for faster technological advancement.

"We absolutely have to stay ahead," Defense Secretary Pete Hegseth said in a Senate Armed Services Committee hearing on Thursday. "The advantage that AI provides applies to any number of capabilities, whether it's domain awareness, targeting cycles, you name it -- AI and leveraging it -- that's why we've made it the forefront."

Lawmakers are concerned about the absence of clear guardrails on AI use, especially after the Pentagon's last AI contract ended in public battles over the use of autonomous weapons and domestic mass surveillance.

Secretary of Defense Pete Hegseth and Chairman of the Joint Chiefs of Staff General Dan Caine testify before the Senate Armed Services Committee, April 30, 2026 in Washington.
Anna Moneymaker/Getty Images

President Donald Trump told Time magazine in early April he had clear boundaries for AI in the chain of command: lethal decisions would always be made by a human.

"I wouldn't allow AI to do it. I respect AI. It's a decision that a president has to make -- assuming he's competent," Trump told Time. 

Here's how AI is currently used across the military: 

More data than eyeballs

Gregory Allen, former director of strategy and policy for the Department of Defense's Joint Artificial Intelligence Center, was an architect of the current AI policy for the U.S. military.

He likened the implementation of AI in the military to the growth and eventual near-total adoption of computers in both fighting and national defense.

Early adopters, Allen said, found uses for AI in computer vision -- the model's ability to recognize and identify patterns in images. Allen said across the department's many reconnaissance capabilities, there's "more data being collected than there are trained eyeballs."

"Where we were with military AI in the 2019 timeframe was that you would feed, say, drone imagery into an AI model, and it would say there are 20 people in this photo," Allen said.

Modern AI's reasoning capabilities, delivered through large language models, take things a step further.

"It can not just say, 'Hey, there's 20 people in this photo.' It can say, 'Hey, there's 20 people in this photo, and none of them were there yesterday and they're standing next to a vehicle that has a range of X and they have these weapons near them,'" Allen added. "The point is, it can almost write the first draft of the intelligence report for the human analyst to review."

The same tools can extend into strike planning, Allen said. "How fast they want to strike it, with what size payload -- it can say, 'Hey, these planes are within range' or 'These artillery assets are within range. I recommend you order them to be the one to strike,'" Allen added.

The Pentagon announced last Friday that it signed deals with seven major AI companies -- SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft and Amazon Web Services -- to "streamline data synthesis, elevate situational understanding, and augment warfighter decision-making in complex operational environments."

As part of the expansion, GenAI.mil, the department's own AI platform, has now been used by more than 1.3 million DOD personnel -- "civilians and contractors are putting these capabilities to practical use right now," the Pentagon said in its release.

U.S. adversaries are already using some AI-enabled weapons. In Ukraine, Russia has used lethal autonomous weapons on the battlefield, Allen said, while Chinese weapons manufacturers are developing similar technologies.

"Right now the United States has an edge in military AI performance, but Russia has demonstrated a willingness to introduce AI faster because it doesn't care nearly as much about the risks of civilian casualties and friendly fire," Allen said.

A workaround to costly weapons

In combat, AI has also found a foothold in cost-cutting. 

In the U.S. arsenal, precision-weapons technologies like GPS-guided missiles and radar position tracking are expensive to deploy. Tomahawk cruise missiles use these technologies to be incredibly precise, but cost millions of dollars per strike.

Computer vision through AI is the workaround, Allen said. An AI-enabled drone would use images of the ground instead of radar to guide itself. 

Arleigh Burke-class guided-missile destroyer USS Thomas Hudner fires a Tomahawk land attack missile during Operation Epic Fury, Mar. 21, 2026.
US Navy

"The point is not that the AI-enabled drone looking down is superior to the Tomahawk system. In fact, it's probably less reliable than the Tomahawk system, but it's a fraction of the cost instead of millions of dollars per shot," Allen said.

U.S. officials told ABC News that the Army has deployed nearly 10,000 AI-powered drones to the Middle East since the war with Iran started. Similar drones have already seen extensive use in Russia's war against Ukraine, where Ukraine has downed more than 1,000 Iranian-made Shahed drones used by Russia.

"AI is making things that used to be expensive and complicated, affordable and tractable and usable in a new kind of a way," Allen said. "That's a really exciting moment for national security planners and also a really concerning moment as they think about what the rest of the world is up to with these technologies."

Concerns in Congress

Members of Congress have expressed concern over the U.S. military's lack of regulations on AI. 

In mid-March, Michigan Democratic Sen. Elissa Slotkin announced a bill that would require that a human decide "when and how" autonomous weapons are launched, prohibit the DoD from using AI for mass surveillance, and mandate that a nuclear decision "rests solely with the Commander in Chief," Slotkin wrote in a press release.

Slotkin's proposed requirements align closely with demands from the AI company Anthropic -- the Pentagon's former partner in AI use across the military.

The company defied an ultimatum from Hegseth to remove its restrictions on domestic mass surveillance and autonomous weapons. The standoff led to public disagreements and ultimately resulted in Anthropic being labeled a supply chain risk, a penalty usually reserved for foreign adversaries that bars a company from doing business with the Pentagon and its partners.

"Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered," Hegseth posted on social media.

Secretary of Defense Pete Hegseth testifies before the Senate Armed Services Committee, on Capitol Hill, in Washington, April 30, 2026.
Cliff Owen/AP

Anthropic said after negotiations fell through that, "New language framed as compromise was paired with legalese that would allow those safeguards [restrictions on mass surveillance and autonomous weapons] to be disregarded at will" and that the safeguards "have been the crux of our negotiations for months."

Allen said U.S. military policy does not prohibit the development of new AI-enabled offensive autonomous weapons.

"It says that if the United States military chooses to go down that route, such systems are subject to additional technical scrutiny and additional procedural scrutiny," he said.

Allen, who worked in the DoD through the end of Trump's first administration and into Joe Biden's presidency, said he believes the system in place is effective in protecting against AI misuse.

Democratic lawmakers on Thursday were less certain. 

"I just want you to reconfirm what it is you plan to use this technology for," Nevada Democratic Sen. Jacky Rosen told Hegseth in the Armed Services hearing.

New York Democratic Sen. Kirsten Gillibrand agreed.

"[Americans] read in the paper that 22 schools have been hit. They read in the paper about a girls' school -- hundreds getting killed," Gillibrand said in Thursday's hearing. "We have a debate going on in this country -- a serious debate about AI -- and I haven't heard yet from you that you will not allow AI to make final targeting determinations."

"That's a huge issue that we need to discuss," Gillibrand said.
