Of course it is possible to exploit any system. Our Army is based on systems, all of which are exploitable. There is no significant drawback to the enemy eventually identifying the type of AI in use and destroying it, in much the same way the enemy is able to identify our convoy patterns, actions on contact, and response times and exploit those as well. When that happens, we go back, reassess our actions, and change our tactics - at the cost of lives. If AI tactics and plans are compromised or identified, the loss is the AI platform, not lives. AI can then be changed, by people and programming, to do something different.
For me, the bottom line is that AI combat systems will decrease both friendly and civilian casualties and reduce collateral damage.
"AI doesn't care".
But.....Soldiers do.
I agree that when we hit the limit of progression there will be a hindrance: we will no longer be able to advance our tech, or we'll have to start stacking ICs the way we do with processors now, at least until we can break that barrier with new technology. I don't see how it will spell disaster, as you have now stated at least twice. All I can foresee is that our tech advancement will be stymied for a period. What exactly is this disaster you predict?
I wish I could remember where I read this, but a study done in the last few years demonstrated that Soldiers were quicker on the trigger when using automated weapons systems (CROWS, et al.) than when they had to physically pull the trigger on the weapon itself. Using automated systems increases the separation between the trigger puller and the consequences of their actions. It reduces the humanity of the person on the receiving end of the round, combatant or not.
Every safeguard you program into an automated system is a weakness that can be exploited by an enemy. If you program the system not to fire on an unarmed individual, you give the enemy the ability to disarm themselves in order to gain close-up access to the automated system.
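To put that in concrete terms, here is a bare-bones sketch of the kind of safeguard I mean (hypothetical logic only, not any fielded system; the names and the weapon_detected flag are my own inventions). Once the enemy knows the rule, satisfying its inputs IS the attack:

# Hypothetical, simplified rules-of-engagement gate for an automated weapon.
# Names and structure are illustrative assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float      # range to the tracked individual, in meters
    weapon_detected: bool  # output of some sensor or classifier

def authorized_to_engage(track: Track) -> bool:
    """Naive safeguard: never fire on an unarmed individual."""
    return track.weapon_detected

# The exploit described above: an enemy who knows this rule simply
# conceals or discards the weapon. The gate returns False all the way in.
approaching = Track(distance_m=15.0, weapon_detected=False)
print(authorized_to_engage(approaching))  # False -> system holds fire at 15 m

Every predicate like that one is legible to an adversary who captures or observes the system, and each one defines exactly how to approach it safely.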
AI will happen, but if we allow it to assume direct combat roles we will be sorry.
I read about the study you mentioned as well, and my own observations over the last 10 years - on the ground with the troops and working with armed unmanned aerial system assets - bear it out. The trend has become more pronounced and I think will only continue, especially since we have the Xbox generation at the helm now.
"I'm gonna go out a limb here and say that unmanned fighter aircraft will be programed with target identification just like human fighter pilots are."
Point to consider:
Software (and computers in general) is only as 'smart' as the information programmed into it. It can process hard data, but it cannot make 'decisions'.
I'll give an example:
Have you seen the commercials for the new vehicles (the brand escapes me) with 'accident sensing technology' that lets the vehicle 'see' an accident a couple of vehicles ahead and apply the brakes 'for' the driver?
Those sensors have NO idea what the vehicles following will do, whether or not those drivers are paying attention, et cetera. In other words, you may not slam headlong into the vehicle in front of you, but you may get crunched by the vehicles trailing you. That aspect is not factored in, even though the vehicle has outstanding reaction time for forward-facing issues.
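To make that blind spot concrete, here is a toy version of forward-collision braking logic (my own simplification; no manufacturer publishes theirs, and the threshold is a made-up number). Notice what never appears as an input:

# Toy forward-collision braking logic. Values and structure are
# illustrative assumptions only, not any manufacturer's algorithm.

TTC_BRAKE_THRESHOLD_S = 1.5  # brake if time-to-collision drops below this

def should_auto_brake(gap_m: float, closing_speed_mps: float) -> bool:
    """Brake when the forward time-to-collision gets too short."""
    if closing_speed_mps <= 0:          # not closing on the lead vehicle
        return False
    time_to_collision = gap_m / closing_speed_mps
    return time_to_collision < TTC_BRAKE_THRESHOLD_S

# Note what is NOT an input: anything about the vehicle behind you.
# The system reacts faster than a human to the car ahead while creating
# a sudden stop the trailing driver never anticipated.
print(should_auto_brake(gap_m=12.0, closing_speed_mps=10.0))  # True: TTC = 1.2 s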
And yes, I am well aware of tactics concerning airstrips, refueling, defensive perimeters, et cetera.
What you describe is essentially a self-piloting UAV with a larger margin for error than the current base-controlled UAVs we have. Either that, or they will simply be larger, faster, more dangerous versions of what we already know is fallible and responsible for more than enough civilian casualties (and probably more incidents than manned aircraft). I doubt a computer-controlled aircraft has the ability to make a split-second 'abort' if the enemy decides to bring in 'civilian cover' at the last minute. The computer simply has a 'target' that was 'identified' by a program, not by a decision.
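For what a split-second 'abort' would even require, consider this toy engagement loop (names, timing, and the revalidation stub are all assumptions for illustration): an abort is only possible if the scene is re-validated continuously all the way to impact - which is exactly the judgment I don't trust a program to make:

# Toy terminal-engagement loop. Everything here is an assumption for
# illustration, not a real system.

import random

def revalidate_target_area() -> bool:
    """Stand-in for a last-second scene check; returns False if, say,
    noncombatants have entered the target area since launch."""
    return random.random() > 0.1   # placeholder sensor result

def terminal_phase(checks_before_impact: int) -> str:
    for _ in range(checks_before_impact):
        if not revalidate_target_area():
            return "ABORT"          # only possible if the loop exists at all
    return "IMPACT"

# A system that identifies the target once at launch is the degenerate case:
print(terminal_phase(checks_before_impact=0))  # always "IMPACT"
print(terminal_phase(checks_before_impact=5))  # may abort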
Me no likey. ;)
I do see the benefits that you pointed out, but, in my opinion, they aren't worth the overall risk.
And I was agreeing with you on a certain point. We've been discussing many points, and aligning on one of them doesn't mean I automatically align on all the others. It's usually difficult to get someone to concede their views on a subject, especially when it is their career field and area of expertise. If you could be more precise about which point you'd like me to concede, we could discuss it.
I have two questions:
1. What happens if and when the enemy steals the keys?
2. What happens if and when the things we built to keep us safe are turned against us?
Anyone care to answer those questions?
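To put question 1 in concrete terms, here is a minimal sketch of symmetric command authentication for a remote platform (the protocol shape and key are my assumptions for illustration; real links differ). Possession of the key IS the identity, so the platform cannot tell operator from enemy - which touches both questions at once:

# Minimal sketch of command authentication via HMAC. The protocol shape
# is an assumption for illustration, not any fielded datalink.

import hmac, hashlib

SHARED_KEY = b"example-key-material"   # what "the keys" means here

def sign(command: bytes, key: bytes) -> bytes:
    return hmac.new(key, command, hashlib.sha256).digest()

def accept(command: bytes, tag: bytes, key: bytes) -> bool:
    """The platform runs this; it trusts anything the key signed."""
    return hmac.compare_digest(sign(command, key), tag)

# Legitimate operator:
cmd = b"RETURN_TO_BASE"
print(accept(cmd, sign(cmd, SHARED_KEY), SHARED_KEY))          # True

# An enemy holding a stolen copy of the key is cryptographically
# indistinguishable from the operator:
stolen = SHARED_KEY
hostile = b"ENGAGE_FRIENDLY_POSITION"
print(accept(hostile, sign(hostile, stolen), SHARED_KEY))      # True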
Thank goodness the NSA is the only entity in the world that is leaps and bounds ahead of everybody else when it comes to cracking encryption. I lay my head down on my pillow at night knowing I don't have to worry about my digital privacy being infringed upon by anybody in the world except my own government.
"...combat robots would be safer to civilians on the battlefield..."
How is this conclusion reached? Will they have weaponry that the soldier would not, or will they be firing the same caliber and grade of weaponry? Will the robot be able to differentiate between a combatant and a "civilian"?
I would offer that robotic units will tilt the battlefield balance in favor of whichever side deploys them, but I do not see much more than robot-on-robot violence in the end, as AI will be deployed to defeat AI, and men will still kill men.
AI will run algorithms to execute pre-programmed tactics, but strategy will still need to be introduced by a conscious mind before those tactics can be employed (see: UAVs that need input to know what the target is, whether or not to drop or fire, et cetera).
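A toy sketch of that division of labor (every name here is assumed for illustration): the machine owns a menu of canned tactics, but nothing happens until a conscious mind supplies the target and the go-ahead:

# Toy division of labor between pre-programmed tactics and human strategy.
# Names and structure are illustrative assumptions only.

PREPROGRAMMED_TACTICS = {
    "armor":     "top-attack profile",
    "bunker":    "delayed-fuze profile",
    "personnel": "airburst profile",
}

def execute_mission(target_type: str, human_authorized: bool) -> str:
    # The machine can select among canned tactics...
    tactic = PREPROGRAMMED_TACTICS.get(target_type)
    if tactic is None:
        return "no pre-programmed tactic; hold"
    # ...but the strategic decision - what to strike, and whether - is an input.
    if not human_authorized:
        return "awaiting human decision; hold"
    return f"executing {tactic}"

print(execute_mission("bunker", human_authorized=False))  # holds without a human
print(execute_mission("bunker", human_authorized=True))   # runs the canned tactic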
As to the comment concerning auto-piloted aircraft: they are already in use (see: UAVs). I wouldn't trust or rely on them too much in a dogfight, however.
I doubt a programmed aircraft could successfully take on a living, breathing pilot in a matching aircraft. An automated aircraft cannot estimate its opponent, while a human can over- or under-estimate. Maybe in generations to come, but think about this: combat flight simulators see professional and amateur pilots 'shot down' regularly. Why hasn't anyone stuck a game program in a plane yet?
I doubt AI will do so much thinking to determine warfighting.
Look at automated assembly lines: they still have quality control inspectors. Those high-tech automated machines still err in operation. No, they don't make 'mistakes', because that would imply they are making conscious decisions. They simply perform a pre-programmed function - no more, no less.
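A trivial sketch of what "no more, no less" means in practice (tolerances are made-up numbers, not any real spec): the gauge checks exactly what it was programmed to check, which is why the human inspector is still on the line:

# Toy quality-control gauge for an automated line. Tolerances are
# illustrative assumptions.

DIAMETER_SPEC_MM = (9.95, 10.05)

def gauge_pass(diameter_mm: float) -> bool:
    """Performs exactly one pre-programmed check - no more, no less."""
    low, high = DIAMETER_SPEC_MM
    return low <= diameter_mm <= high

print(gauge_pass(10.01))  # True
# A part that is cracked, the wrong alloy, or the wrong length still
# "passes": the machine never checks anything it wasn't programmed to check.
print(gauge_pass(10.00))  # True, even if the part is otherwise defective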
I doubt AI will be so intrusive on the battlefield or in the skies any time soon.
And I certainly do not underestimate people. I have been deployed for 51 of the last 98 months all over the world using unmanned or robotic systems, almost always against rather low-tech opponents. But I have a healthy respect for how smart and cunning they can be, especially in using the tools they have to defeat our methods and tactics. They may be crude or simplistic peoples, but it never ceases to amaze me what tricks or tactics they come up with to deal with us and our technologically heavy forces. I have seen "low tech" means handily beat "high tech" implements too many times; anybody who is a chronological snob is a fool and doomed to failure. I am also a student of military history and have had plenty to learn from centuries of warfare waged in all arenas. I firmly believe that the one who forgets the past is doomed to lose his present and future. That is my firm stance on the issue, no matter what may have been inferred from my past statements.
