Posted on Nov 13, 2013
MAJ Bryan Zeski
Artificial Intelligence technology has come a long way in the last decade or so. Truth be told, and in spite of our apprehensions, combat robots would be safer for civilians on the battlefield, more precise, and less costly than sending thousands of troops to hostile areas. The future of warfare is in AI, but how far is too far for automated combat?
Posted in these groups: Warfare, Future, Combat, Technology
Responses: 23
SPC Chris Stiles
I think so far in this forum, we agree on a few things for certain. One of the biggest, and the one that applies most directly to the original topic, is that we should not allow "fully autonomous" systems to be created and implemented for warfare where there is no human controlling, or at least approving, what the system engages on the battlefield. Maybe 100 years from now we could have AI advanced and safe enough to allow this with near-100% confidence that it won't engage targets it shouldn't, but I don't think we are anywhere close to enabling that type of operation by these systems. Are systems and AI getting better by the day? Yes; we have had several examples posted here of the advancement of technology and some of the more interesting directions it is going, such as the driverless cars being pioneered by Google.
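
To make that "human approving" requirement concrete, here is a minimal sketch of what an in-the-loop engagement gate could look like in software. This is purely illustrative; every name in it is made up, and a real system would involve authentication, auditing, and far more than a console prompt.

```python
# Hypothetical sketch of a human-in-the-loop engagement gate.
# The system may *propose* a target, but nothing is engaged unless
# a human operator explicitly approves each individual proposal.

class Console:
    def ask(self, prompt: str) -> str:
        # A real system would authenticate and log the operator here.
        return input(prompt)

def request_engagement(target_id: str, console: Console) -> bool:
    """Propose a target; return True only on explicit human approval."""
    answer = console.ask(f"Engage target {target_id}? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    if request_engagement("T-042", Console()):
        print("Engagement authorized by human operator.")
    else:
        print("Engagement denied; system holds fire.")
```

The point of the pattern is that the default answer is "no": if the human says nothing, or anything other than an explicit yes, the system holds fire.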

We also agree that Moore's Law will not hold for much longer, and that unless we have a breakthrough in the way we produce integrated circuits, or something that replaces the functions of ICs, our robotics and AI development will eventually start to slow down. Instead of making components smaller and faster, we will have to start stacking ICs to get improved performance, which will increase the size and form factor of the hardware this stuff is built on. They are already starting to do this with die stacking and multi-core processors.
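
As a toy illustration of that shift (just a sketch, not a claim about how any particular system is built): instead of one ever-faster core chewing through work serially, the same work gets split across several cores working in parallel.

```python
# Toy illustration of the multi-core trade-off described above:
# rather than one faster core, the work is divided across several
# slower cores running at the same time.
from multiprocessing import Pool

def heavy(n: int) -> int:
    # Stand-in for any compute-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    with Pool(processes=4) as pool:   # 4 cores share the 8 jobs
        results = pool.map(heavy, jobs)
    print(f"{len(results)} jobs finished in parallel")
```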

Another thing I believe we agree on is that these systems are here to stay. They are not just fad "toys," and their use in society will only increase as the cost to acquire systems goes down, system reliability is proven, and they come to be trusted by the majority of society. Yes, we will always have Luddites out there who spurn their use, but nothing will change that. The systems will be constantly improved as time goes on, and we will probably see them in many applications in regular life. Will they put some people out of jobs? Probably, but we need to find solutions that balance their implementation with maintaining real employment levels. Those adjustments will have to come through legislation and policy in the political arena, though. If we are smart, we won't export manufacturing of these systems out of the US, and we will use them as a boon to our economy. Will that happen? Unfortunately, probably not.

I think we can also agree that laws protecting humans from some of their uses need to be strengthened in some respects as well. Asimov's laws of robotics obviously aren't enough to go on, but they are a good place to start the discussion and move forward from. I think the "laws" he came up with were ahead of their time, especially since that was before hardly anybody even knew what robots were, and they certainly weren't a part of society back then.

And in parting, although we don't have agreement on this, I would like to mention something that has repeatedly been brought up: the Skynet scenario. Will it happen? I don't think most of us who have participated in the discussion so far believe so. Is it possible? Theoretically speaking, yes, but it is highly improbable in my opinion. Still, to prevent something like that from happening, we need to take all of what I just touched on and implement it in a responsible and well-informed manner.
Cpl Benjamin Long
12 y
Oh, I meant Core Duo. That was Intel's first multi-core, I think, unless there is some freak prototype that I never read about... c'est la vie.
SPC Chris Stiles
12 y
You're right, figuring out or guessing the future direction and development of technology is hard. Another reason they stopped upping the speed of single processors is that a single core is much less energy efficient in operation than multiple cores breaking up the work; the cores together use less energy than the single fast one. Another factor is that a single core operating at high output is harder to cool than multiple cores operating at lower speeds, which end up creating less heat. Who knows, when they run out of the ability to stack cores on a single chipset, they may start chaining multiple chipsets together on the same board. Take a computer, for instance: you usually have multiple RAM slots on one motherboard, and I don't see why they couldn't eventually design mobos with multiple sockets for multi-core processor chipsets. You'd have some really fast stuff then...
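
As a back-of-the-envelope check on that efficiency point: dynamic CPU power scales roughly as capacitance x voltage^2 x frequency, and since voltage tends to rise with frequency, power grows roughly with the cube of clock speed. A quick sketch (the numbers are made up; only the ratio matters):

```python
# Rough scaling argument: dynamic power ~ C * V**2 * f, and V itself
# rises roughly with f, so power scales roughly as f**3.
def relative_power(freq_ghz: float) -> float:
    return freq_ghz ** 3   # arbitrary units; only ratios matter

one_fast_core = relative_power(3.0)        # one core at 3 GHz
two_slow_cores = 2 * relative_power(1.5)   # two cores at 1.5 GHz

# Same total clock throughput, ~4x less power (and heat) for the pair.
print(one_fast_core / two_slow_cores)      # -> 4.0
```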
Cpl Benjamin Long
12 y
Another benefit is that with a single core, if it burns up, your computer stops working... yet in a multi-core system, if you lose a core you just lose speed instead of a dead computer.
PFC Thomas Graves
12 y
It's also the case that technology isn't a straight line. When a successful method is achieved in warfare, the human reaction is "good enough." The Akkadians replaced bronze swords with iron ones; as they conquered an empire, one doubts the leaders were thinking about an even better sword. The Greek phalanx was unstoppable to the barbarians, but when the Romans applied their tightly packed gladius-and-shield tactics, the phalanx failed. Napoleon's tactics were still being applied through WWI, when it should have been obvious after Cemetery Ridge that a new approach was needed. High tech beats lower tech, but it is difficult to predict what direction it will take. It's interesting to note that in Blade Runner, a world with flying cars, Harrison Ford stops to use a pay phone.
SFC Motor Transport Operator
Sir, if that ever happens, I hope that I won't have to see it in my lifetime. It's different when we see stuff like that in movies and all, but the reality is that no machine is ever going to outsmart a human being.
MAJ Bryan Zeski
12 y
I disagree. Machines outsmart humans every day. Google cars drive more safely and have fewer accidents than people. More people lose to chess programs on a daily basis than win. A computer beat two fantastic human opponents on Jeopardy using only the same inputs provided to the humans. Machines absolutely have their disadvantages and drawbacks, but they also have advantages that we, as the country with (by far) the most defense spending, could develop and take advantage of. I'd rather we be at the forefront of that than on the receiving end of it.
Cpl Benjamin Long
12 y
I disagree, sir. Those machines run on scripted algorithms; if I observe enough failures of others against that script, I can devise a plan to exploit that system's weakness, since the machine will never do anything outside its programming.
Cpl Benjamin Long
12 y
In fact, in Chessmaster 11 Grandmaster Edition you can beat the Chessmaster every time with an exact sequence of moves, because the game AI does not learn or compensate for its own failures.
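
That exploit works because the opponent is deterministic. Here is a toy demonstration of the same idea in rock-paper-scissors (a hypothetical scripted bot, not the Chessmaster engine): once you've observed the fixed sequence, you win every single round.

```python
# A scripted agent that never adapts can be beaten every time
# once its fixed sequence has been observed.
import itertools

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def scripted_bot():
    # Fixed, repeating "program" -- it never learns from its losses.
    yield from itertools.cycle(["rock", "rock", "paper", "scissors"])

# Scout the pattern first (the cycle length is 4, so 8 moves is plenty).
observed = [move for _, move in zip(range(8), scripted_bot())]

# Replay against the same script and counter every move.
bot = scripted_bot()
wins = sum(BEATS[observed[i % 4]] == BEATS[next(bot)] for i in range(100))
print(f"adaptive player wins {wins}/100")   # -> 100/100
```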
Cpl Ray Fernandez
12 y
MAJ Bryan, a computer can only outsmart a person at what it is programmed to do. It cannot easily perform tasks, or perform them at all, that it is not programmed for. A chess program is programmed to play chess, but put a person against that program in anything that requires improvising beyond chess, and the program will fail. It isn't learning, and it can't adapt to anything new. A person, on the other hand, can learn from experience and adapt to new conditions and situations. With current computing capabilities, a computer will be faster at doing what it was taught to do, but it will not be any smarter than a person at every other task. Suppose you taught a computer to attack enemy forces, and the enemy was an insurgent force: would the computer or a person have an easier time figuring out that the insurgents were not using firearms but actually improvising weapons out of available materials?
Cpl Benjamin Long
Using an artificial intelligence or combat robot outside of human control is a violation of the Geneva and Hague Conventions. Any weapon used in combat must have direct human control; this limits the possibility that errors in programming cause collateral damage. It is a lot like landmines, which are also a violation of the conventions, since those weapons have no direct control and often destroy non-combat or non-strategic targets. The landmine cares not what steps on it... Anyone who has seen 2001: A Space Odyssey knows the cold, callous nature of pure computer logic, which could be a detriment to mission objectives when the system operates on a strict program that never improvises for situations. Algorithmic combat is often predictable, as it always follows the same routine.
MAJ Bryan Zeski
12 y

After reading both the 1983 Amendment and the original documentation, I can see where one could kind of, sort of, include "autonomous" devices in there, but it is really a stretch to say that "autonomous landmines" would necessarily include AI robots - which makes it clear why a specific convention is needed to address the issue.

If I were a first-world nation with "killer robots," I would absolutely be OK with using them at this point.

Cpl Benjamin Long
12 y
I agree, sir; there should be a conference on this weapon in general. Let me posit this to you, sir: assume you have these "killer" robots, and their programming becomes compromised through whatever mechanism, due to the fallibility of human design, and they reverse their combat roles. It is just too much of a liability to take war out of human control.
Cpl Benjamin Long
12 y
If there is an initiative to create these automatons, then there should also be a weapon created that can destroy them...
SPC Chris Stiles
12 y
The term "killer" robots is a little misleading in this pretext.  All unmanned systems and robots have the potential to be used to kill something.  How about we stick with the mandate of "fully autonomous" systems should not be allowed to engage humans in combat.  Fully autonomous means they are acting on their own programing without any human intervention or input.  And I agree that we should not allow the development of these types of systems.  Now systems that keep a human "in-the-loop" will not be banned and it would be a very tough sell to get the world powers to stop making those kinds of systems anyhow.  They are here to stay, and yes, they will still be used in killer applications.  It would probably be at least 2016 or 2017 before something were signed and ratified by a good amount of countries to prevented the development and use of "fully autonomous" unmanned systems and robots.  And most nations of the world would probably not be ratified members of the accord until closer to 2020.  I doubt some would ever sign it such as Iran or North Korea.  But good thing something already exists that can disable them if they did go ahead with employing them.  An EMP device.
