The Pentagon is feeding tens of thousands of pages of data to AI for military applications - what could possibly go wrong?

[Image: Artificial intelligence (Canva)]

Washington, D.C. - A story from last July flew under our radar until recently, but it bears revisiting.

Bloomberg reported then that the United States Air Force was experimenting with using artificial intelligence to perform military tasks. In this case, Air Force Col. Matthew Strohmeyer had been running data-based exercises inside the US Defense Department. However, for the first time, he utilized “large language models,” or “LLMs,” to perform military tasks. 

“It was highly successful. It was very fast,” Strohmeyer told Bloomberg. “We are learning that this is possible for us to do.” 

For the unfamiliar, LLMs utilize “huge swaths of internet data to help artificial intelligence predict and generate human-like responses to user prompts,” Bloomberg reported. For example, ChatGPT utilizes LLMs. 
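To make the prediction idea concrete, here is a deliberately tiny sketch in Python. Real LLMs use neural networks trained on internet-scale data, but the core mechanism Bloomberg describes, predicting the next word from what came before, can be illustrated with a simple bigram counter over a toy corpus (the corpus and function names here are illustrative, not from any actual system):

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in a tiny corpus.
# This is a vastly simplified stand-in for how an LLM learns to predict
# and generate text from its training data.
corpus = (
    "the air force is testing ai . "
    "the air force runs exercises . "
    "the pentagon is testing ai ."
).split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # prints "air" -- the most frequent follower of "the"
```

An actual LLM replaces the raw counts with billions of learned parameters, which is what lets it generalize to prompts it has never seen, but the predict-the-next-token objective is the same.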

Bloomberg reported that five such LLMs were being utilized as part of a wider-ranging DoD program “focused on developing data integration and digital programs across the military.” 

Using artificial intelligence is a new step for the military, which still operates using somewhat archaic methods. Bloomberg said simple requests for information to specific parts of the military can take “hours or even days to complete,” whereas AI can cut that turnaround to 10 minutes.

“That doesn’t mean it’s ready for primetime right now. But we just did it live. We did it with secret-level data,” Strohmeyer said. 

One company developing AI-based platforms for the Pentagon is Palantir Technologies Inc., co-founded by Peter Thiel. If you’re not familiar with Thiel, he co-founded PayPal and was an early investor in Facebook. Thiel was also a prominent supporter of former President Donald Trump in the 2016 election.

Somewhat disturbing is Microsoft’s involvement with Pentagon AI initiatives. For example, Microsoft announced last year that Azure Government cloud service users could access AI models via OpenAI. The Department of Defense is one of Azure Government’s customers. 

While artificial intelligence could assist the military, some are concerned that generative AI could be manipulated. Bloomberg noted, "AI can compound bias and relay incorrect information with striking confidence. AI can also be hacked in multiple ways, including by poisoning the data that feeds it.” 

Some argue that having humans in charge of the military who hopefully possess ethics, empathy, and, yes, even emotion is not a bad thing. Lack of human oversight could lead to unintended consequences, not the least of which are excessive civilian casualties and escalation of conflicts. 

Many ethical and legal issues come into play. For example, how does one establish rules and a legal framework for AI's responsible, ethical use in warfare? There is a distinct possibility of violating international law, such as the Geneva Conventions.

Since AI is trained on historical data, it may absorb biases inherent in that data. Unless those biases are addressed, AI algorithms could inadvertently discriminate against particular groups.

There is also the possibility of cyber attacks. If adversarial hackers somehow exploit vulnerabilities in AI algorithms, they could compromise the integrity, confidentiality, and availability of critical military information.

For those old enough to remember the Cold War, we could see a repeat of that arms race, this time between the United States, Russia, and China. There is a distinct possibility of a cyber-arms race, which could lead to further conflicts.

There are also issues of job displacement, unintended consequences leading to catastrophic events or conflicts, and a lack of accountability. For example, if mistakes or ethical violations occurred, attributing responsibility could become a complicated endeavor, affecting accountability and justice.

Bloomberg asked a company called Scale AI to war-game whether the US could deter a Chinese attack on Taiwan. Within seconds, it received an answer.

“Direct US intervention with ground, air, and naval forces would probably be necessary,” the system stated while warning the US would have issues holding off China’s military. “There is little consensus in military circles regarding the outcome of a potential military conflict between the US and China over Taiwan.” 

The opinions reflected in this article are not necessarily the opinions of LET.

© 2024 Law Enforcement Today