Posted on May 12, 2020
1LT Chaplain Candidate
In response to the following article: https://mwi.usma.edu/artificial-intelligence-bomb-nuclear-command-control-age-algorithm/

Hell no! I read an article the other day, "On the Limits of Strong Artificial Intelligence: Where Don't We Want AI on Tomorrow's Battlefield?" by LTC Daniel Thetford in Army AL&T magazine (if anyone cares to find it).

His premise was that AI is "limited to its programming" and thus "can never act as a moral agent." Given the capacity for destruction in combat operations, let alone nuclear operations, AI cannot supply the fundamentally "human" element of strategy that such decisions require: morality. Moral agency is a requirement of a leader in war, and surrendering it to a machine would compromise our fundamental purpose: to protect U.S. interests and uphold constitutional values. Consider LTC Thetford's explanation of moral agency: "Moral agency requires the ability to see both truths in a given situation and truths beyond a given situation. It matters morally both that something is achieved, and how it is achieved. Only a moral actor is capable of such a task." That sums it up for me.

- So in what way could the military utilize AI?

Thinking as a logistician: casualty evacuation. Former mentors of mine wrote the following article: https://www.benning.army.mil/infantry/magazine/issues/2018/JUL-SEP/PDF/10)Frye-AI_txt.pdf

Imagine if we could build robots that could navigate through a battle to retrieve a casualty and transport them back to the casualty collection point (CCP). The robot would be agile and unmanned, capable of finding the most effective and efficient route on its own without getting tired. It could be programmed to perform the same life-saving functions as a combat medic in the care-under-fire phase, which is essentially applying a tourniquet and rarely anything else. Such a robot would free human personnel to stay in the fight and on mission (where moral agency might be required) and mitigate the risk of other personnel being injured in the casualty recovery process.
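
To make the route-finding piece concrete, here is a minimal sketch of the kind of planning such a robot might run: A* search over a grid map where each cell carries a threat penalty, so the platform trades distance against exposure to fire. This is purely illustrative; the grid, costs, and function names are all my own assumptions, not any fielded system.

```python
# Hypothetical sketch: threat-aware route planning for an unmanned
# casevac platform. Grid cells hold an extra "threat" cost (0 = open
# ground); None marks an impassable obstacle. All values are invented.
import heapq

def plan_route(grid, start, casualty):
    """A* from start to the casualty, minimizing steps + threat exposure."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan distance: admissible since each step costs >= 1
        return abs(cell[0] - casualty[0]) + abs(cell[1] - casualty[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, cell, path so far)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == casualty:
            return path  # lowest-cost route to the casualty
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] is not None:
                ng = g + 1 + grid[r][c]  # step cost plus threat penalty
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)])
                    )
    return None  # no passable route

# Toy map: the direct lane (column 2) is under fire, so the planner
# detours along the bottom row even though that route is longer.
grid = [[0, 0, 8, 0],
        [0, None, 8, 0],
        [0, 0, 0, 0]]
print(plan_route(grid, start=(0, 0), casualty=(0, 3)))
```

The same idea scales up by swapping the toy grid for real terrain and threat data; the point is only that "most effective and efficient route" is a well-studied search problem, not science fiction.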
Responses: 9
SSG Robert Mark Odom
That's a very scary proposition.
SSgt Owner/Operator
We are a long, long way from morality-enabled AI. Truth be told, we don't have AI yet, no matter how many apps are labeled as AI. They don't even meet the definition of RI (restricted intelligence). While I am a supporter of RI and AI, I am smart enough to understand we still have a long way to go in both technology and programming. We have plenty of apps that can mimic RI/AI in a very narrowly defined arena, but these are heuristic algorithms. Basically, each is just a single synapse in an RI/AI "brain".
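
To illustrate the point, here is a deliberately trivial sketch of the kind of narrow heuristic that often gets marketed as "AI." The rules and thresholds are invented for the example (loosely patterned on START-style triage, not a real or validated medical protocol):

```python
# A fixed rule-of-thumb that can look "intelligent" inside its one narrow
# arena but has no understanding at all. Thresholds are illustrative only.
def triage_priority(heart_rate, resp_rate, can_walk):
    """Toy triage heuristic; NOT a real medical algorithm."""
    if can_walk:
        return "MINIMAL"      # walking wounded
    if resp_rate == 0:
        return "EXPECTANT"    # not breathing
    if resp_rate > 30 or heart_rate > 120:
        return "IMMEDIATE"    # failing vitals
    return "DELAYED"          # non-ambulatory but stable

print(triage_priority(heart_rate=135, resp_rate=24, can_walk=False))  # IMMEDIATE
```

Rebrand a few hundred rules like this behind a slick interface and you have most of what gets sold as "AI" today: one synapse, no brain.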

In the future, using RI to take over 90% of tracking and social-pressure monitoring, and rolling the information up to humans, will become a thing. But putting a non-human entity in a position to push "the button" is human arrogance, pride, and just plain irresponsible.
1LT Chaplain Candidate
4 y
We have so much to learn. Reminds me of a Star Trek quote I heard the other day: "In our arrogance we feel we are so advanced." - Picard
Lt Col Jim Coe
Some older veterans will recall this quote: "Shall we play a game?" The game in question: Global Thermonuclear War. Google it. Cool movie.
1LT Chaplain Candidate
4 y
Saw it shortly after it came out!
