MAJ Bryan Zeski 5944 <div class="images-v2-count-0"></div>Artificial Intelligence technology has come a long way in the last decade or so. Truth be told, and in spite of our apprehensions, combat robots would be safer to civilians on the battlefield, more precise, and less costly than sending thousands of troops to hostile areas. The future of warfare is in AI, but how far is too far for automated combat? Combat Robots - The Future of Warfare? 2013-11-13T01:26:45-05:00 2013-11-13T01:26:45-05:00 Cpl Ray Fernandez 5947 <div class="images-v2-count-0"></div>I can't say that AI will replace humans in combat in the near future. Any video game programmer will tell you how difficult it is to program realistic AI. Systems also fail, so there would still need to be human involvement in the decision-making process to be certain that the right target is being engaged. I think AI will augment human capabilities, but it won't replace them. People have considered technology replacing human intelligence before, but there are some things that a satellite image, a drone, or signals intelligence can't tell you that a person on the ground can.<br> Response by Cpl Ray Fernandez made Nov 13 at 2013 1:31 AM 2013-11-13T01:31:53-05:00 2013-11-13T01:31:53-05:00 MAJ Bryan Zeski 6848 <div class="images-v2-count-0"></div>I recently saw an article about an F-16 outfitted as an AI platform for training pilots against an opposing force.
I suspect that creating an AI that could outperform a live pilot isn't far down the road. Pilotless jets aren't limited by the G-forces a human can withstand while staying conscious, so they can perform maneuvers that would knock out live pilots - even with g-suits. Response by MAJ Bryan Zeski made Nov 16 at 2013 12:36 AM 2013-11-16T00:36:11-05:00 2013-11-16T00:36:11-05:00 SGT James P. Davidson, MSM 6981 <div class="images-v2-count-0"></div>Questions for clarity:<br><br>"...combat robots would be safer to civilians on the battlefield..."<br><br>How is this conclusion reached? Will they have weaponry that the soldier would not, or will they be firing the same caliber and grade of weaponry? Will the robot be able to differentiate between a combatant and a "civilian"?<br><br>I would offer that robotic units will tilt the battlefield balance in favor of whichever side deploys them, but I do not see much more than robot-on-robot violence in the end, as AI will be deployed to defeat AI, and men will still kill men.<br><br>AI will run algorithms to execute pre-programmed tactics, but strategy will still need to be introduced by a conscious mind before those tactics can be employed (see: UAVs that need input to know what the target is, whether or not to drop or fire, et cetera).<br><br>As to the comment concerning auto-piloted aircraft: they are already in use (see: UAVs). I wouldn't trust or rely on them too much in a dogfight, however.<br><br>I doubt a programmed aircraft could successfully take on a living, breathing pilot in a matching aircraft. An automated aircraft cannot estimate its opponent, while a human could over- or underestimate. Maybe in generations to come, but think about this: combat flight simulators see professional and amateur pilots 'shot down' regularly. Why hasn't anyone stuck a game program in a plane yet?<br><br>I doubt AI will ever do that much of the thinking in warfighting.<br><br>Look at automated assembly lines: they still have quality control inspectors.
Those high-tech automated machines still err in operation. No, they don't make 'mistakes', because that would imply they are making conscious decisions. They simply perform a pre-programmed function - no more, no less.<br><br>I doubt AI will be that intrusive on the battlefield or in the skies any time soon. Response by SGT James P. Davidson, MSM made Nov 16 at 2013 2:30 PM 2013-11-16T14:30:22-05:00 2013-11-16T14:30:22-05:00 SGT William B. 7010 <div class="images-v2-count-0"></div>Stupid internet outages; I had a long write-up on the moral consequences of taking human loss out of the equation of war. I'll give my BLUF from a moral standpoint: war should never be without death, if only to remind the world of the terrible consequences of failing to work and agree with one another. Response by SGT William B. made Nov 16 at 2013 3:53 PM 2013-11-16T15:53:24-05:00 2013-11-16T15:53:24-05:00 SGT Thomas Sullivan 7545 <div class="images-v2-count-0"></div>All you have to look to are the laws of robotics set forth by Isaac Asimov.<br> Response by SGT Thomas Sullivan made Nov 18 at 2013 3:04 AM 2013-11-18T03:04:18-05:00 2013-11-18T03:04:18-05:00 SPC Chris Stiles 19191 <div class="images-v2-count-0"></div>Combat robots will be safer to civilians on the battlefield, more precise, and less costly than sending the equivalent number of troops to hostile areas. The only factor in attaining each of these is time. Current-day robots and unmanned systems are far more capable than they were two decades ago, and if you apply Moore's Law, their advancement will only continue exponentially until they reach a point where they can make decisions faster and better than their human counterparts. They will be able to analyze situations faster and more accurately than a human. They will be able to tell which humans or targets to engage and who is a non-combatant.
I surmise we will have a lower "cost of war" or collateral-damage component in future engagements where AI or robotics are employed more than actual human boots on the ground. You do lose a large "human element" by fighting war in this manner. Our insurgent enemies in Iraq and Taliban enemies in Afghanistan and Pakistan have already criticized our use of robotics in warfare, as it removes us from the fight, as if they are not worthy of being engaged man-to-man on the battlefield. It is a very alien concept in their culture and beliefs. Anyway, only the wealthier nations will at first be able to afford this type of warfare; as you can see, America is the leader in development, followed very rapidly now by some of the other more powerful nations around the world. And it will be robots vs. robots in some situations, but there will always be robots vs. humans, as you can't have only robots in the conflict. You will need humans somewhere on the ground to support operations and conduct certain tasks. But it most certainly will not be only robot-vs-robot and human-vs-human fighting; that notion is no longer valid once you introduce advanced robotics to the battlefield.<div><br></div><div>The hardware will also get better, to where these systems can operate for longer periods of time than a human can and with less and less component failure. I am a UAV pilot, or "drone" pilot, and have been doing this for 10 years now, and I can say we have made leaps and bounds in reliability and automation over what UAVs were able to do 10 years ago. I have witnessed system reliability go from 1 crash every 1,000 flight hours to 1 in 50,000 flight hours on some systems. And even then, the usual reason for a crash is pilot error.
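The crash-rate improvement described above can be illustrated with a quick back-of-the-envelope calculation. This is only a sketch: the 1-in-1,000 and 1-in-50,000 flight-hour figures are the poster's, and the 10-hour mission length is a hypothetical assumption.

```python
def prob_no_crash(crash_rate_per_hour: float, hours: float) -> float:
    """Probability of completing `hours` of flight without a crash,
    modeling crashes as independent per-hour events."""
    return (1.0 - crash_rate_per_hour) ** hours

early = 1 / 1_000    # poster's early figure: 1 crash per 1,000 flight hours
today = 1 / 50_000   # poster's later figure: 1 crash per 50,000 flight hours

mission = 10  # hours; hypothetical sortie length for illustration
print(f"early systems:  {prob_no_crash(early, mission):.4f}")
print(f"newer systems:  {prob_no_crash(today, mission):.4f}")
print(f"rate improvement: {early / today:.0f}x")
```

Under these assumptions the per-hour crash rate improved fiftyfold, which is what drives the claim that autopilots may eventually beat manned-aviation safety records.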
One day they will surpass even manned-aviation safety records, and it will be statistically safer to have the autopilot fly the aircraft than to have a human in the cockpit who is more likely to cause an accident. This same concept will naturally follow suit for automated cars, which will prevent more accidents and drive more safely than a human can. The Google car is still new and will take several hundred thousand hours of operation to improve upon, as it did for the military flying its "drones" for hundreds of thousands of hours. I can only imagine that drones in 20 years will be up to 10x more capable and lethal than they currently are, and the driverless cars will be no different.</div><div><br></div><div>In the meantime, it has proven very effective to pair humans with the automated systems to watch over them and provide command inputs when the systems reach a current programming limit on their ability to make their own decisions or adapt to their environment. Yes, it is still a human pushing the button when it comes to engaging targets, but one day that too will slowly be given over to the computers, to make the decision to pull the trigger once a target meets their programmed engagement criteria. Will there be accidents along the way? Perhaps. But we have very smart people working on the programming and thinking from every angle to prevent accidents, as are the people that employ the systems, so that they are safety-tested and properly operated out in the field. Will Skynet one day take over everything because we programmed everything to be smarter than was good for us? Maybe.
But I think enough robotics creators, system operators, and policy makers have seen the Terminator, Matrix, or I, Robot movies to prevent such a thing from ever happening and things getting out of our control.</div> Response by SPC Chris Stiles made Dec 11 at 2013 10:33 PM 2013-12-11T22:33:52-05:00 2013-12-11T22:33:52-05:00 PFC Eric Minchey 19222 <div class="images-v2-count-0"></div>When it comes to computers, robots, &amp; whole unmanned armies, I have two questions: <br />1. What happens if and when the enemy steals the keys? <br />2. What happens if and when the things we built to keep us safe are turned against us?<br />Anyone care to answer those questions? Response by PFC Eric Minchey made Dec 12 at 2013 12:01 AM 2013-12-12T00:01:22-05:00 2013-12-12T00:01:22-05:00 SGT James P. Davidson, MSM 19288 <div class="images-v2-count-0"></div>SPC Stiles -<br><br>"I'm gonna go out on a limb here and say that unmanned fighter aircraft will be programmed with target identification just like human fighter pilots are."<br><br>Point to consider:<br><br>Software (computers in general) is only as 'smart' as the information programmed in. It can process hard data, but not make 'decisions'.<br><br>I'll give an example:<br><br>Have you seen the commercials for the new vehicles (the brand escapes me) that have 'accident sensing technology', allowing the vehicle to 'see' an accident a couple of vehicles ahead and apply the brake 'for' the driver?<br><br>Those sensors have NO idea what the vehicles following will do, whether or not those drivers are paying attention, et cetera. In other words, you may not slam headlong into the vehicle in front of you, but you may get crunched by the vehicles trailing you. That aspect is not factored in, though the vehicle has outstanding reaction time for forward-facing issues.<br><br>And yes, I am well aware of tactics concerning airstrips, refueling, defensive perimeters, et cetera.
<br><br>What you describe is essentially a self-piloting UAV, with a larger margin for error than the current base-controlled UAVs we have. Either that, or they will simply be larger, faster, more dangerous versions of what we already know is fallible and responsible for more than enough civilian casualties (and probably more incidents than manned aircraft). I doubt a computer-controlled aircraft has the ability to make a split-second 'abort' if the enemy decides to bring in 'civilian cover' at the last minute. The computer simply has a 'target' 'identified' based on a program, not a decision.<br><br>Me no likey. ;)<br><br>I do see the benefits that you pointed out, but, in my opinion, they aren't worth the overall risk. Response by SGT James P. Davidson, MSM made Dec 12 at 2013 6:33 AM 2013-12-12T06:33:20-05:00 2013-12-12T06:33:20-05:00 PFC Eric Minchey 21627 <div class="images-v2-count-0"></div><a href="http://news.discovery.com/tech/robotics/robots-so-realistic-they-can-deny-theyre-bots-131212.htm#mkcpgn=fbdsc17">http://news.discovery.com/tech/robotics/robots-so-realistic-they-can-deny-theyre-bots-131212.htm#mkcpgn=fbdsc17</a><div class="pta-link-card"><br /><div class="pta-link-card-picture"><img src="http://static.ddmcdn.com/gif/robot-telemarketer-250x250.jpg"></div><br /><div class="pta-link-card-content"><br /><div class="pta-link-card-title"><a target="_blank" href="http://news.discovery.com/tech/robotics/robots-so-realistic-they-can-deny-theyre-bots-131212.htm">Robots So Realistic They Can Deny They're Bots : DNews</a></div><br /><div class="pta-link-card-description">This bright and engaging caller may not be who she says she is.</div><br /></div><br /><div style="clear:both;"></div><br /><div class="pta-box-hide"></div><br /></div> Response by PFC Eric Minchey made Dec 16 at 2013 2:42 AM 2013-12-16T02:42:16-05:00 2013-12-16T02:42:16-05:00 Cpl Benjamin Long 22226 <div class="images-v2-count-0"></div>Using an artificial intelligence or combat robot
outside of human control is a violation of the Geneva and Hague Conventions. Any weapon used in combat must be under direct human control. This eliminates the possibility of errors in programming that could cause collateral damage. It is a lot like landmines, which are also a violation of the conventions, since those weapons are under no direct control and often destroy non-combat or non-strategic targets. The landmine cares not what steps on it... Anyone who has seen 2001: A Space Odyssey knows the cold, callous nature of pure computer logic, which could be a detriment to mission objectives when the system operates on a strict program that never improvises. Algorithmic combat is often predictable, as it always follows the same routine. Response by Cpl Benjamin Long made Dec 17 at 2013 5:28 AM 2013-12-17T05:28:17-05:00 2013-12-17T05:28:17-05:00 SFC Private RallyPoint Member 24779 <div class="images-v2-count-0"></div>Sir, if that ever happens, I hope that I won't have to see it in my lifetime. It's different when we see stuff like that in the movies, but the reality is that no machine is ever going to outsmart a human being. Response by SFC Private RallyPoint Member made Dec 20 at 2013 5:32 PM 2013-12-20T17:32:20-05:00 2013-12-20T17:32:20-05:00 CPT Daniel Walk, M.B.A. 26351 <div class="images-v2-count-0"></div>Removing live people from the battlefield removes the greatest incentive to end the conflict. If your only input to combat is a piece of machinery that can be easily replaced, then why bother ending the conflict before the disadvantaged side is decimated?<br><br>I wish I could remember where I read this, but there was a study done in the last few years demonstrating that Soldiers were quicker on the trigger when using automated weapons systems (CROWS, et al.) than when they were required to physically pull the trigger on the weapon itself.
Using the automated systems increases the separation between the trigger puller and the consequences of their actions. It reduces the humanity of the person on the receiving end of the round, combatant or not.<br><br>Every safeguard you program into an automated system is a weakness that can be exploited by an enemy. If you program the system not to fire on an unarmed individual, you give the enemy the ability to disarm themselves in order to gain up-close access to the automated system.<br><br>AI will happen, but if we allow it to assume direct combat roles, we will be sorry.<br> Response by CPT Daniel Walk, M.B.A. made Dec 23 at 2013 1:32 PM 2013-12-23T13:32:42-05:00 2013-12-23T13:32:42-05:00 LTC Paul Labrador 59989 <div class="images-v2-count-0"></div><p>Please do not let them create SkyNet...!!!</p><p><br></p> Response by LTC Paul Labrador made Feb 18 at 2014 9:01 PM 2014-02-18T21:01:24-05:00 2014-02-18T21:01:24-05:00 SPC Chris Stiles 60504 <div class="images-v2-count-0"></div>I think that so far in this forum, we agree on a few things for certain. One of the biggest, which applies to the original topic, is that we should not allow "fully autonomous" systems to be created and implemented for warfare where there is no human controlling, or at least approving, what the system engages on the battlefield. Maybe 100 years from now we could have AI advanced and safe enough that we could allow this with confidence that they won't engage targets they shouldn't, but I don't think we are anywhere close to enabling that type of operation by these systems. Are systems and AI getting better by the day?
Yes; we have had several examples posted here of the advancement of technology and some of the more interesting directions it is going, such as the driverless cars being pioneered by the Google car.<br><br>We also agree that Moore's Law will not stand for much longer, and that unless we have a breakthrough in the way we produce integrated circuits, or something that replaces the functions of ICs, our robotics and AI development will eventually start to slow down. Instead of making components smaller and faster, we will have to start stacking ICs to get improved performance, which will increase the form factor of the hardware this stuff is developed on. They are already starting to do this with stacking and multi-core processors.<br><br>Another thing I believe we agree on is that these systems are here to stay. They are not just fad "toys," and they will only increase in their adoption by society as the cost to acquire systems goes down, system reliability is proven, and they come to be trusted by the majority of society. Yes, we will always have the Luddites out there who will spurn their use, but nothing will change that. The systems will constantly be improved upon as time goes on, and we will probably see them in many applications in regular life. Will they start to put some people out of jobs? Probably, but we need to find solutions that balance their implementation with maintaining employment levels for real people. These things will have to be adjusted by legislation and policy in the political arena, though. If we are smart, we won't export manufacturing of these systems out of the US and will use them as a boon to our economy. Will that happen? Unfortunately, probably not.<br><br>I think we can also agree that laws to protect humans from some of their uses need to be strengthened in some respects as well.
Asimov's laws of robotics obviously aren't enough to go off of, but they are a good place to start the discussion and move forward from. I think the "laws" he came up with were ahead of their time, but that was also before hardly anybody even knew what robots were, and they certainly weren't a part of society back then.<div><br></div><div>And in parting, although we don't have agreement on this, I would like to mention the scenario that has repeatedly been brought up: the Skynet scenario. Will it happen? I don't think most of us who have participated in the discussion so far believe so. Is it possible? Theoretically speaking, yes, but more likely improbable, in my opinion. To prevent something like that from happening, we need to take all of what I just touched on and implement it in a responsible and well-informed manner.</div> Response by SPC Chris Stiles made Feb 19 at 2014 5:15 PM 2014-02-19T17:15:12-05:00 2014-02-19T17:15:12-05:00 TSgt Christopher D. 62342 <div class="images-v2-count-0"></div>How much easier is it for a drone pilot to fire Hellfire missiles or drop bombs on people than it is for a conventional pilot? I don't know... but we've been working hard to cut out the real-life, personal experience of war as much as possible. This pulls a lot of our brothers and sisters out of harm's way while achieving our objectives, but creates a buffer between human beings and the horrors of war. It becomes like a video game; I've never had a problem killing a character in CoD or some other war game, but I would likely have very serious difficulty killing someone in real life. Robots just continue this trajectory.<div><br></div><div>Robots and AI machines as warriors create an entirely new risk for us. Any computer can be hacked. Some are much more difficult to hack than others, but what if a robot warrior were captured?
They would undoubtedly be connected to the net somehow and could possibly be used to insert a virus into vital defense systems. They could be reverse-engineered, or otherwise reprogrammed to be used against us. And the ultimate concern with AI is the achievement of self-awareness, where they see humanity in general as a threat.</div><div><br></div> Response by TSgt Christopher D. made Feb 22 at 2014 12:47 PM 2014-02-22T12:47:40-05:00 2014-02-22T12:47:40-05:00 PFC Eric Minchey 86303 <div class="images-v2-count-0"></div><a target="_blank" href="http://defensetech.org/2014/03/25/google-rejects-military-funding-in-robotics/?comp=">http://defensetech.org/2014/03/25/google-rejects-military-funding-in-robotics/?comp=</a> [login to see] 70&amp;rank=2<div class="pta-link-card"><br /><div class="pta-link-card-picture"><img src="http://images.defensetech.org/wp-content/themes/thesis_182/custom/images/defensetech-header-103112.png"></div><br /><div class="pta-link-card-content"><br /><div class="pta-link-card-title"><a target="_blank" href="http://defensetech.org/2014/03/25/google-rejects-military-funding-in-robotics/?comp=%20%5Blogin%20to%20see%5D%2070">Google Rejects Military Funding in Robotics</a></div><br /><div class="pta-link-card-description">Google doesn't want the U.S. military's money. 
Even though the Internet search giant owns two companies that have contracts with the Pentagon, Google is</div><br /></div><br /><div style="clear:both;"></div><br /><div class="pta-box-hide"></div><br /></div> Response by PFC Eric Minchey made Mar 27 at 2014 1:52 AM 2014-03-27T01:52:41-04:00 2014-03-27T01:52:41-04:00 PFC Eric Minchey 197323 <div class="images-v2-count-0"></div><a target="_blank" href="http://anonhq.com/uav-drones-hacked-by-iraqi-insurgents/">http://anonhq.com/uav-drones-hacked-by-iraqi-insurgents/</a> <div class="pta-link-card answers-template-image type-default"> <div class="pta-link-card-picture"> <img src="https://d26horl2n8pviu.cloudfront.net/link_data_pictures/images/000/002/114/qrc/drone.jpg?1443020840"> </div> <div class="pta-link-card-content"> <p class="pta-link-card-title"> <a target="blank" href="http://anonhq.com/uav-drones-hacked-by-iraqi-insurgents/">UAV Drones Hacked by Iraqi Insurgents</a> </p> <p class="pta-link-card-description">Written by: Anonymous Watcher “No U.S troops or combat missions have been compromised due to the intrusion,” is the formal line held by an anonymous U.S official who wished not to be identified. The intrusion referred to occurred in 2009 when Iranian-backed Shiite militants …</p> </div> <div class="clearfix"></div> </div> Response by PFC Eric Minchey made Aug 8 at 2014 12:57 AM 2014-08-08T00:57:27-04:00 2014-08-08T00:57:27-04:00 SGT Steve Oakes 202394 <div class="images-v2-count-0"></div>We CANNOT put weapons in the hands of robots! We are flawed, imperfect beings. How long before someone hacks the system and turns them against us? Or before the robots wonder why they are working for us, and turn on us? NO! NO! NO!!!<br />Power armor suits, remote-controlled humanoid battle drones, sentry guns: all good. Autonomous mechanical people that are stronger, faster, and harder to kill than us? NO! NO!! NO!!! 
Response by SGT Steve Oakes made Aug 13 at 2014 9:41 AM 2014-08-13T09:41:01-04:00 2014-08-13T09:41:01-04:00 PFC Eric Minchey 261303 <div class="images-v2-count-0"></div><a target="_blank" href="http://anonhq.com/anonsec-hacked-drone/">http://anonhq.com/anonsec-hacked-drone/</a> <div class="pta-link-card answers-template-image type-default"> <div class="pta-link-card-picture"> <img src="https://d26horl2n8pviu.cloudfront.net/link_data_pictures/images/000/003/515/qrc/10654016_263372430539554_1027713658_n.jpg?1443023904"> </div> <div class="pta-link-card-content"> <p class="pta-link-card-title"> <a target="blank" href="http://anonhq.com/anonsec-hacked-drone/">AnonSec Hacked Drone</a> </p> <p class="pta-link-card-description">Written by: Tiobe After a recent hiatus, hacker group AnonSec says it has successfully hacked an unidentified drone, and still has the drone under its control as of today. “We have been still going through drone data and video logs, and soon we will …</p> </div> <div class="clearfix"></div> </div> Response by PFC Eric Minchey made Oct 1 at 2014 4:03 PM 2014-10-01T16:03:31-04:00 2014-10-01T16:03:31-04:00 CSM Private RallyPoint Member 261367 <div class="images-v2-count-0"></div>It would only be deemed "too far" until someone else develops it before us. Response by CSM Private RallyPoint Member made Oct 1 at 2014 5:11 PM 2014-10-01T17:11:05-04:00 2014-10-01T17:11:05-04:00 Cpl Dennis F. 261447 <div class="images-v2-count-0"></div>I had to check to see if SkyNet voted you down! Response by Cpl Dennis F. made Oct 1 at 2014 6:15 PM 2014-10-01T18:15:52-04:00 2014-10-01T18:15:52-04:00 Capt Richard I P. 667841 <div class="images-v2-count-0"></div>If a capability exists, it will be used. That capability will exist in the future. It will be used. Response by Capt Richard I P. made May 14 at 2015 1:54 PM 2015-05-14T13:54:32-04:00 2015-05-14T13:54:32-04:00 SPC Sheila Lewis 965483 <div class="images-v2-count-0"></div>What about traditional Service Members? 
I do not believe a robot would make a good "battle buddy." Response by SPC Sheila Lewis made Sep 14 at 2015 5:24 PM 2015-09-14T17:24:22-04:00 2015-09-14T17:24:22-04:00