Some U.S. Military Personnel, Including Generals, Begin Using Generative Chatbots for Decision-Making
One World Terrain software used for simulating battles on real terrain. Photo credits: U.S. Army
The commander of the U.S. 8th Army in South Korea said he is experimenting with generative chatbots to refine decision-making processes.

Business Insider reported that Maj. Gen. William Taylor, commander of the 8th Army, is using generative AI to improve how his headquarters makes operational and day-to-day decisions.

“Chat and I” have become “really close lately,” he told reporters during a roundtable at the annual meeting of the Association of the U.S. Army in Washington.

“I’m asking it to build — trying to build — models to help all of us,” Taylor said.

He explained that he uses the technology to analyze how he makes both military and personal decisions affecting not only himself but the thousands of soldiers under his command.

Army Maj. Gen. William “Hank” Taylor attends a training event in South Korea. Photo credits: Staff Sgt. Lisette Espinel/U.S. Army

“As a commander, I want to make better decisions. I want to make sure that I make decisions at the right time to give me the advantage,” Taylor shared.

Commanders like Taylor are focusing on faster decision-making and how AI can provide an edge through the “OODA loop” — the theory that those who can observe, orient, decide, and act faster than their opponents often gain an advantage.

U.S. Special Operations Forces are also integrating AI into their work to “reduce the cognitive load” on personnel. AI tools are being used for administrative tasks, report preparation, operational planning, logistics management, and other routine functions.

At the same time, the Pentagon has urged caution as commanders test these tools, warning that generative AI can “leak” sensitive information.

Illustrative image of artificial intelligence in use. Photo credits: Ministry of Defense of Ukraine

Officials also noted that AI can produce highly inaccurate outputs if not properly trained, posing potential risks if commanders rely on it for critical decisions.

For example, during a U.S. Air Force experiment, AI algorithms produced attack plans about 400 times faster than humans, but not all of those plans were viable.

Maj. Gen. Robert Claude said, without elaborating, that the mistakes were subtle rather than obvious: for example, choosing the wrong type of sensor for certain weather conditions, not something as glaring as sending tanks on air missions.
