Posted on Nov 13, 2013
MAJ Bryan Zeski
Artificial Intelligence technology has come a long way in the last decade or so.  Truth be told, and in spite of our apprehensions, combat robots would be safer for civilians on the battlefield, more precise, and less costly than sending thousands of troops to hostile areas.  The future of warfare is in AI, but how far is too far for automated combat?
Posted in these groups: Warfare, Future, Combat, Technology
Responses: 23
Cpl Ray Fernandez
I can't say that AI will replace humans in combat in the near future. Any video game programmer will tell you how difficult it is to program realistic AI. Systems also fail, and there would still need to be human involvement in the decision-making process to be certain that the right target is being engaged. I think AI would augment human capabilities, but it won't replace them. There have been times when people considered technology a replacement for human intelligence, but there are some things that a sat image, a drone, or signals intelligence can't tell you that a person on the ground can.
Cpl Benjamin Long
12 y
As for the systems that pinpoint fire: they are good at what they do, but human ingenuity can overcome such devices, for example by decoy shooting.  Set up a remote rifle via radio control to stage a hitsuke-style distraction, concentrating the enemy's fire on the straw dummy while they ambush you from the flank or rear...
MAJ Bryan Zeski
12 y

Of course it is possible to exploit any system.  Our Army is based on systems - all of which are exploitable.  There is no significant drawback in the enemy eventually being able to identify the type of AI in use and destroy it - in much the same way the enemy is able to identify our convoy patterns, actions on contact, and response times - and exploit that as well.  When that happens, we go back and reassess our actions and change our tactics - at the cost of lives.  If AI tactics and plans are compromised or identified, the loss is the AI platform - not lives.  AI can then be changed, by people and programming, to do something different.


For me, the bottom line is that AI combat systems will decrease both friendly and civilian casualties and reduce collateral damage.

Cpl Benjamin Long
12 y
Sir, I don't believe a competent enemy would destroy such hardware...  If these systems are as reliable as they are made out to be, they will reprogram or reverse engineer them to be deployed against us.  The SEA already stole the designs for one of the next-generation fighter jets, and the Chinese cloned the technology...  It is not as simple as sending out the bulls.  In a game of chess, if I give up a queen with no fight, do you take it?  War is about more than just the implements, the toys, and the fancy automatons that we use...
SGT Richard H.
11 y
MAJ Bryan Zeski, my biggest reservation about the use of AI on the battlefield is one that you mentioned four times in your first response to Cpl Ray Fernandez (though I gather that you mean it in quite a different context than I do):

"AI doesn't care".

But.....Soldiers do.
TSgt Christopher D.
How much easier is it for a drone pilot to fire Hellfire missiles or drop bombs on people than it is for a pilot in the cockpit? I don't know... but we've been working hard to cut out the real-life, personal experience of war as much as possible. This pulls a lot of our brothers and sisters out of harm's way while achieving our objectives, but it creates a buffer between human beings and the horrors of war. It becomes like a video game. I've never had a problem killing a character in CoD or some other war game, but I would likely have very serious difficulty killing someone in real life. Robots just continue this trajectory.

Robots and AI machines as warriors create an entirely new risk for us. Any computer can be hacked. Some are much more difficult to hack than others, but what if a robot warrior were captured? It would undoubtedly be connected to the net somehow, and could possibly be used to insert a virus into vital defense systems. It could be reverse engineered, or otherwise reprogrammed to be used against us. And the ultimate concern with AI is the achievement of self-awareness, where it comes to see humanity in general as a threat.

Cpl Ray Fernandez
11 y
TSgt, you bring up some great points. The problem I see above all with automating a military force and removing the human from the equation is that we lose the biggest safeguard against tyranny and abuse of power: the ability of a human to question whether what they are being told to do is illegal or immoral. Suppose we went to an AI-based force, à la the SkyNet scenario LTC Paul Labrador mentioned. If someone ordered it to fire on civilians, would it have in its core programming the ability to refuse the order? Would it improvise, with self-preservation in mind, to adapt to the tactics of an enemy that may not be armed with conventional weapons? The ability to adapt and improvise, as opposed to sticking to pure doctrine and tactics, is what has made us one of the most formidable fighting forces in history.
SPC Chris Stiles
Combat robots will be safer for civilians on the battlefield, more precise, and less costly than sending the equivalent number of troops to hostile areas.  The only factor in attaining each of these is time.  Current robots and unmanned systems are far more capable than they were two decades ago, and if you apply Moore's Law, their advancement will only continue exponentially until they reach a point where they can make decisions faster and better than their human counterparts.  They will be able to analyze situations faster and more accurately than a human.  They will be able to tell which targets to engage and who is a non-combatant.  I surmise we will have a lower "cost of war," or collateral damage component, in future engagements where AI or robotics are employed more than actual human boots on the ground.

You do lose a large "human element" in fighting war in this manner.  Our insurgent enemies in Iraq and Taliban enemies in Afghanistan and Pakistan have already criticized our use of robotics in warfare, as it removes us from the fight, as if they are not worthy of being engaged in a human manner on the battlefield.  It is a very alien concept in their culture and beliefs, but hey, most of them still squat just anywhere in the open and wipe with their left hand.

Anyway, only the wealthier nations will at first be able to afford this type of warfare; America is the leader in development, followed rapidly now by some of the other more powerful nations around the world.  It will be robots vs. robots in some situations, but there will always be robots vs. humans, since you can't have only robots in the conflict.  You will need humans somewhere on the ground to support operations and conduct certain tasks.  But it most certainly will not be only robot-vs.-robot and human-vs.-human fighting, as that notion is no longer a valid way to fight once you introduce advanced robotics to the battlefield.

The hardware will also get better, to where these systems can operate for longer periods than a human can, with less and less component failure.  I am a UAV pilot, or "drone" pilot, and have been doing this for 10 years now, and I can say we have made leaps and bounds in reliability and automation over what UAVs could do 10 years ago.  I have witnessed system reliability go from 1 crash every 1,000 flight hours to 1 in 50,000 flight hours on some systems.  And even then, the usual reason for a crash is pilot error.  One day they will surpass even manned-aviation safety records, and it will be statistically safer to have the autopilot fly the aircraft than to have a human in the cockpit who is more likely to cause an accident.  This same concept will naturally follow for automated cars, which will prevent more accidents and drive more safely than a human can.  The Google car is still new and will take several hundreds of thousands of hours of operation to improve, just as it took the military hundreds of thousands of hours of flying its "drones."  I can only imagine that drones in 20 years will be up to 10x more capable and lethal than they currently are, so driverless cars will be no different.
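That reliability jump is easier to appreciate as expected airframe losses over a fleet's flying time. A back-of-envelope sketch using the 1-in-1,000 and 1-in-50,000 rates from this comment (the 100,000 fleet-hours figure is made up for illustration):

```python
# Expected airframe losses over a given amount of fleet flying time,
# under the two mishap rates quoted above (1 crash per N flight hours).

def expected_crashes(fleet_hours: float, hours_per_crash: float) -> float:
    """Average number of crashes expected over `fleet_hours` of flying."""
    return fleet_hours / hours_per_crash

fleet_hours = 100_000  # hypothetical annual fleet total, for illustration
for hours_per_crash in (1_000, 50_000):
    losses = expected_crashes(fleet_hours, hours_per_crash)
    print(f"1 crash per {hours_per_crash:>6,} flight hours -> "
          f"{losses:5.1f} expected losses")
```

Run as-is, it shows the same amount of flying that once cost about 100 airframes now costing about 2, which is the 50x improvement described.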

In the meantime, it has proven very effective to pair humans with the automated systems to watch over them and provide command inputs when the systems reach a programming limit on their ability to make their own decisions or adapt to their environment.  Yes, it is still a human pushing the button when it comes to engaging targets, but that too will one day slowly be given over to the computers: the decision to pull the trigger once a target meets the programmed engagement criteria.  Will there be accidents along the way?  Perhaps.  But we have very smart people working on the programming and thinking through every angle to prevent accidents, as do the people who employ the systems, so that they are safety-tested and properly operated in the field.  Will Skynet one day take over everything because we programmed everything to be smarter than was good for us?  Maybe.  But I think enough robotics creators, system operators, and policy makers have seen the Terminator, Matrix, or I, Robot movies to prevent such a thing from ever happening and things getting out of our control.
SPC Chris Stiles
12 y
I think it has already been well stated that Moore's Law cannot progress to infinity; I don't think anybody is arguing against that.  But so I can get some clarity: your two previous comments seem to contradict one another.  In one you say that when "Moore's exception kicks in, all the tech is useless."  Then in the next comment you say "Moore's exception has nothing to do with being useless," so which is it?

I agree that when we hit the limit of progression there will be a hindrance, in that we will no longer be able to advance our tech, or we'll have to start stacking ICs like we do with processors now, at least until we can break that barrier with new tech.  I don't see how it will spell disaster, as you have stated at least twice now.  All I can foresee is that our tech advancement will be stymied for a period.  What do you predict for this disaster you speak of?
Cpl Benjamin Long
12 y
Spelling out the disaster?  I can say that it would collapse, but the manner in which it does so is unknown, as the variables are infinite to calculate.  All I can say is that once you reach the limit, the platform must be changed to one that has a higher limit.  It is like the wheel vs. the jet engine.  You can only do so many things with a wheel until you have done everything with it.  So once the wheel is tapped out, so to speak, we must go to a different platform that has better versatility.  With our current design of the computer, it can never achieve intelligence; so we have to design something other than a computer to reach an AI.  I believe this would have to involve integrating biological systems with electromechanical systems, as the only intelligent systems known in the universe are biological.
Cpl Benjamin Long
12 y
As for the contradiction: that is intentional.  The tech on the platform to which the scope applies would be at the limit for that scope.  If you apply the limit to a different platform, it would have a different constant; therefore, when certain tech becomes antiquated, changing the platform redefines the limit function for the new platform being used.
Cpl Ray Fernandez
12 y
Even Moore's Law has its limitations. Intel and a lot of other chipmakers are looking for new materials to build chips with, because they are nearing the maximum number of transistors they can fit on a chip. Until a new material is developed, we may see the doubling period slip to about every 3 years, and within one to five years we may reach the limits of current tech. Is it any wonder why CPUs have been adding cores instead of increasing their clock speeds?
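The slowdown described here is easy to put numbers on. A quick sketch (the starting transistor count and 12-year horizon are illustrative figures, not real chip data):

```python
# Projected transistor counts under different Moore's-law doubling periods.

def transistors(start: float, years: float, doubling_period: float) -> float:
    """Count after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

start = 1.0e9  # hypothetical 1-billion-transistor chip today
for period in (2, 3):
    print(f"doubling every {period} yr -> "
          f"{transistors(start, 12, period):.1e} transistors after 12 years")
```

Stretching the doubling period from 2 to 3 years cuts the 12-year projection from 64x to 16x: a 4x gap from a seemingly small change.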
Combat Robots - The Future of Warfare?
SGT William B.
Stupid internet outages; I had a long write-up on the moral consequences of taking human loss out of the equation of war.  I'll give my little BLUF from a moral standpoint: war should never be without death, if only to remind the world of the terrible consequences of not being able to work and agree with one another.
MAJ Bryan Zeski
12 y
That's a nice thought, but the truth is that there will be war, whether the killing is done by people or by something else.  I have to assume, for the sake of argument, that the hypothetical war is a just war and, therefore, that the taking of life is a regrettable necessity.  The fact that one nation doesn't lose as many lives, due to technological advances, doesn't change that.
Cpl Benjamin Long
12 y
There is no better weapon than the one you know how to use best.  All the advancement in technology will never defeat a superior strategy.  If you commandeer your enemy's tech, it no longer serves his purpose and works against him.
LTC Paul Labrador

Please do not let them create SkyNet....!!!


CPT Daniel Walk, M.B.A.
Removing live people from the battlefield removes the greatest incentive to end a conflict.  If your only input to combat is a piece of machinery that can be easily replaced, then why bother ending the conflict before the disadvantaged side is decimated?

I wish I could remember where I read this, but there was a study done in the last few years demonstrating that Soldiers were quicker on the trigger when using automated weapons systems (CROWS, et al.) than when they were required to pull the trigger on the weapon itself. Using the automated systems increases the separation between the trigger-puller and the consequences of their actions. It reduces the humanity of the person on the receiving end of the round, combatant or not.

Every safeguard you program into an automated system is a weakness that can be exploited by an enemy. If you program the system not to fire on an unarmed individual, you give the enemy the ability to disarm themselves in order to gain up-close access to the automated system.

AI will happen, but if we allow it to assume direct combat roles we will be sorry.
SPC Chris Stiles
12 y
Captain, I agree with most everything you have said.  I would like to say, though, that a high loss of life is not the only incentive to end a conflict.  There are also ideological, political, logistical, and economic factors that can break off a conflict even with low to no casualty rates.  Removing the loss of our own lives from a conflict through the use of advanced systems just means the enemy loses more lives; it doesn't necessarily mean the people would allow us to continue indefinitely, to the total decimation of our enemy with these systems.

I had read somewhere about the study you mentioned as well, and I would have to agree with it based on my own observations from being on the ground with the troops and working with armed unmanned aerial system assets over the last 10 years.  It is a trend that has become more pronounced and I think will only continue, especially since we have the Xbox generation at the helm now.
SGT James P. Davidson, MSM
SPC Stiles -

"I'm gonna go out a limb here and say that unmanned fighter aircraft will be programed with target identification just like human fighter pilots are."

Point to consider:

The software (computers in general) is only as 'smart' as the information programmed in. It can process hard data, but not make 'decisions'.

I'll give an example:

Have you seen the commercials for the new vehicles (the brand escapes me) that have 'accident-sensing technology' allowing the vehicle to 'see' an accident a couple of vehicles ahead and apply the brakes 'for' the driver?

Those sensors have NO idea what the vehicles following will do, whether or not those drivers are paying attention, et cetera. In other words, you may not slam headlong into the vehicle in front of you, but you may get crunched by the vehicles trailing you. That aspect is not factored in, though the vehicle has outstanding reaction time for forward-facing issues.
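That forward-only blind spot can be shown with a toy decision rule. This is a hypothetical sketch, with made-up names and thresholds, not any vendor's actual logic:

```python
# Toy forward-collision braking: the decision depends ONLY on the forward
# gap and closing speed. Traffic behind the vehicle never enters the
# calculation, which is exactly the blind spot described above.

def should_brake(forward_gap_m: float, closing_speed_mps: float,
                 min_time_to_collision_s: float = 2.0) -> bool:
    """Brake when projected time-to-collision drops below the threshold."""
    if closing_speed_mps <= 0:  # gap is steady or opening: nothing to do
        return False
    return forward_gap_m / closing_speed_mps < min_time_to_collision_s

print(should_brake(30.0, 20.0))  # 1.5 s to impact -> True, brake
print(should_brake(30.0, 10.0))  # 3.0 s to impact -> False, keep driving
```

Nothing in the function's inputs describes the car behind you, so no amount of tuning the threshold fixes the rear-end scenario.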

And yes, I am well aware of tactics concerning airstrips, refueling, defensive perimeters, et cetera.

What you describe is essentially a self-piloting UAV, with a larger margin for error than the current base-controlled UAVs we have. Either that, or they will simply be larger, faster, more dangerous versions of what we already know is fallible and responsible for more than enough civilian casualties (and probably more incidents than manned aircraft). I doubt a computer-controlled aircraft has the ability to make a split-second 'abort' if the enemy decides to bring in 'civilian cover' at the last minute. The computer simply has a 'target' 'identified' based on a program, not a decision.

Me no likey. ;)

I do see the benefits that you pointed out, but, in my opinion, they aren't worth the overall risk. 
SPC Chris Stiles
12 y
Benjamin,
   I do not fail to realize that there is no computer in existence that has a self consciousness.  You are 100% right about that one, and I doubt we will see one soon that gains that capability.  What I was touching on is that computers can be programed to take input from sensors and based on their pre-programming determine what to do in the instance something it detects with it's sensors occurs and react based on that how it has been programmed to react in that situation.  Not that they are somehow clairvoyant, but their programmers would have to have some foresight to program those abilities into them, and I think we are now on the same page in that respect.  Also, I wasn't trying to take you back to the 3rd grade in respect to sending you off on a research project, but I didn't know what your knowledge on the subjects were.  And with your previous limited output performance in this forum for which was the only thing I had to judge from, I thought it would actually help you to go for an information plus up, or anybody else that reads this topic in the future.  Also, I guess wikipedia was useful as you directly quoted stuff from it.  Wikipedia may not have been a good academic research tool 10 years ago, but it is a very accurate medium these days as the subject matter experts of the topics regularly manage the content that is on the pages to keep them accurate.  Scholars in the university and the secondary school systems regularly send their students to the site for research these days.  Just go ask any current or recently graduated student.  As stated before, it's a good place to start, but by no means the final resting place as there is a wealth of knowledge on these topics in books, academic journals and so forth just waiting to be read by the likes of you and I.  I have actually enjoyed the engagement in the conversation so far and feel that people can learn from it rather than just letting the topic die off with a few simple comments.  
It is a subject that will begin to affect more and more people in time.
Cpl Benjamin Long
12 y
Hmm, I wonder how you could possibly know what I directly quoted...  Is it an assumption?  Whether you are right or wrong is irrelevant; however, I never saw the Wikipedia entry, thus I could not have directly quoted it... any similarity would be strictly coincidental.  And at that, you agree with me yet argue against me... Is this debate bipolar, or can you not concede?
SPC Chris Stiles
12 y
It was an observation, not an assumption, since I read the Wikipedia article after I suggested it, and some of the things you said were almost verbatim from it.  But who knows, you could have gotten the same knowledge from the same source the Wikipedia article did.  You know what they say about coincidences...

And I was agreeing with you on a certain point.  We've been discussing many points, but aligning views on one point doesn't mean I automatically align on all the others.  It's usually difficult to get someone to concede their views on a subject, especially if it is their career field and area of expertise.  If you could be more precise about which point you'd like me to concede, it could be discussed.
Cpl Benjamin Long
12 y
I know you are agreeing with me on that point, Stiles, but I am a jerk most of the time, and argumentative...
PFC Eric Minchey
Edited >1 y ago
When it comes to computers, robots, and whole unmanned armies, I have two questions:
1. What happens if and when the enemy steals the keys?
2. What happens if and when the things we built to keep us safe are turned against us?
Anyone care to answer those questions?
SPC Chris Stiles
12 y
In most cases, when contemplating using a system on a particular mission, its risk of loss (i.e., damage or destruction) is weighed more heavily than whether it gets captured or reverse engineered.  There are only a very select few systems where capture is even a consideration.  98% of the systems used out there don't employ technologies that are sensitive to capture, or it doesn't matter if they can be reverse engineered.  And even the ones that do have sensitive components usually have devices inside them that either zero out any stored information or self-destruct when tampered with.

Thank goodness the NSA is the only entity in the world that is leaps and bounds ahead of everybody else when it comes to cracking encryption.  I lay my head down on my pillow at night knowing I don't have to worry about my digital privacy being infringed upon by anybody in the world except my own government.
Cpl Benjamin Long
12 y
Do you really think the NSA is the only organization that can crack advanced encryption algorithms?  Offensive software comes in public-domain packages, such as the Kali Linux and BackTrack suites, that can crack any known encryption algorithm... how do you think Anonymous gets into everyone's stuff?
SPC Chris Stiles
12 y
No, I know the NSA is not the only organization that can crack advanced encryption algorithms, but I did say they were ahead of everybody else in the arena, and that is irrefutable.  And they mostly use exploits in system coding, or find backdoors, to get into much of the stuff they snoop.  Brute-force hacking a major system, say Facebook, would throw up too many red flags on the operator's end, so the NSA probably doesn't go with that approach often.  I have a co-worker who uses BackTrack, and no, you can't just hack and crack everything with it as a stand-alone, although I'm told it is useful for accomplishing certain things in this arena, up to certain encryption standards.  Anonymous gets into everyone's stuff primarily through custom coding, scripts, worms, malware, brute force, or chaining up thousands of computers to overwhelm and crash certain systems so they can do what they are out to do.

A lot of large entities are already working on ways to counter the NSA's snooping and better protect their customers' private information, which is good because it prevents bad people from getting it too.  And I'm sure it's only a matter of time before they implement higher-bit encryption standards.  768-bit encryption has been known to be broken, but I don't know of any reported cases of 1024-bit encryption being cracked (so I'm sure the NSA can do it, along with a VERY few others), while 4096-bit keys aren't forecast to be breakable in the foreseeable future by any supercomputer in existence.  But as the game goes, once they do that, the encryption standards just have to increase.  I suppose it would be a long time before 8192-bit encryption gets cracked by anything.
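For a rough sense of scale behind those numbers, here is a back-of-envelope sketch of brute-force cost, assuming a symmetric cipher where the attacker must try every key (RSA-style keys such as the 768-, 1024-, and 4096-bit moduli above are attacked by factoring, not exhaustive search, so their effective strength is much lower than 2^bits):

```python
# Back-of-envelope brute-force cost: worst case, an attacker must try all
# 2**bits keys. The 1e12 keys/second rate is an assumed, generous figure.

def years_to_exhaust(bits: int, keys_per_second: float = 1e12) -> float:
    """Worst-case years to try every one of 2**bits keys at the given rate."""
    seconds = 2 ** bits / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

for bits in (56, 128, 256):
    print(f"{bits:3d}-bit key: {years_to_exhaust(bits):9.3e} years")
```

Each added bit doubles the search space, which is why jumping from 56-bit DES (exhaustible in under a day at this assumed rate) to 128-bit keys puts brute force out of reach regardless of hardware.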
Cpl Benjamin Long
12 y
Correct, extremely complex algorithms require network clustering.  Many unfunded hackers are limited by only having access to small networks, but larger groups that have clusters of thousands of computers can crack any algorithm in a matter of minutes.  Utilizing BT and Kali to their maximum potential requires access to large resources.  Also, you don't brute-force the logon; you hit the shadow file offline so that you don't red-flag yourself.  This is accomplished by trapping the password handshake when it crosses the network, via vampire tap or Wi-Fi sniffer.
SGT James P. Davidson, MSM
Questions for clarity:

"...combat robots would be safer to civilians on the battlefield..."

How is this conclusion reached? Will they have weaponry that the soldier would not, or will they be firing the same caliber and grade of weaponry? Will the robot be able to differentiate between a combatant and a "civilian"?

I would offer that robotic units will tilt the battlefield balance in favor of whichever side deploys them, but I do not see much more than robot-on-robot violence in the end, as AI will be deployed to defeat AI, and men will still kill men.

AI will run algorithms to determine pre-programmed tactics, but strategy will still need to be introduced by a conscious mind before said tactics could be employed (see: UAVs that need input to know what the target is, whether or not to drop or fire, et cetera).

As to the comment concerning auto-piloted aircraft: They are already in use: (see: UAV). I wouldn't trust or rely on them too much in a dogfight, however.

I doubt a programmed aircraft could successfully take on a living, breathing pilot in a matching aircraft. An automated aircraft cannot estimate its opponent, while a human could over- or under-estimate. Maybe in generations to come, but think about this: combat flight simulators see professional and amateur pilots 'shot down' regularly. Why hasn't anyone stuck a game program in a plane yet?

I doubt AI will do so much thinking to determine warfighting.

Look at automated assembly lines: they still have quality-control inspectors. Those high-tech automated machines still err in operation. No, they don't make 'mistakes', because that implies they are making conscious decisions. They simply perform a pre-programmed function, no more, no less.

I doubt AI will be so intrusive on the battlefield or in the skies any time soon.
Cpl Ray Fernandez
12 y
SPC Stiles, the only problem I see with having an AI attempt to distinguish between a threat and an unarmed civilian is what it will cost to keep upgrading the AI to recognize improvised weapons, given the rise of unconventional warfare. So far I have yet to see an AI that is capable of improvising.
Cpl Benjamin Long
12 y
Ray, will these upgrades be as buggy as the GTA 5 patches that cause the infinite-loop syndrome?  Thanks, Rockstar... you made my game unplayable...
Cpl Ray Fernandez
12 y
Cpl Long, it would be terrible if the designers of these devices adopted the same business model that video game companies have followed since the rise of the internet: releasing bug-filled games, then patching them in the hope that the fixes solve the problems without causing more issues to come to the forefront.

LTC Paul Labrador
11 y
And what if there are bugs in the programming? Or if they get hacked? I think the Robocop reboot shows a good example of what happens when Murphy exerts his influence on combat AI....
MAJ Bryan Zeski
I recently saw an article about an F-16 outfitted as an AI platform for training pilots against an opposing force.  I suspect that creating an AI that could outperform a live pilot isn't far down the road.  Pilotless jets aren't limited by the G-forces a conscious human can endure, so they can perform maneuvers that would knock out live pilots, even with G-suits.
Cpl Benjamin Long
12 y
The greatest illusion is security, Stiles...  and those stupid ancient systems have been replaced by stupid modern systems...  Ask yourself why a fancy new computer can be broken so easily by a stupid caveman with a club.  You seem to relish treating the enemy like they are stupid...  Oh, and the Chinese were successful in copying the RQ-170.  You over-glorify too much and underestimate people.  Your argument is predicated on chronological snobbery, with a preference for the modern.
SPC Chris Stiles
12 y
Well, anybody can break an electronic device by smashing it with an object that is harder than the device in question.  But the discussion was about taking the things we use and turning them against us without managing to break them along the way.  I'm not saying our enemy is stupid.  Our current and potential enemies are just not as technologically able in this arena as we are... yet.  That's not to say that a lot of countries aren't playing catch-up, some rather quickly.  And I suppose you'd know all about the "stupid modern systems" that have replaced the old ones, so I'll leave that to your expertise.  And could you give a source for the claim that the Chinese were successful in copying the RQ-170?  I'm not sure to what degree you mean that.

And I certainly do not underestimate people.  I have been deployed for 51 of the last 98 months all over the world using unmanned or robotic systems, and I've almost always been engaging rather low-tech opponents.  But I have a healthy respect for how smart and cunning they can be, especially in using the tools they have to defeat our methods and tactics.  They may be crude or simplistic peoples, but it never ceases to amaze me what tricks or tactics they come up with to deal with us and our technologically heavy forces.  I have seen "low tech" means handily win over "high tech" implements too many times not to know that anybody who is a chronological snob is an idiot and doomed to failure.  I am also a student of military history and have had plenty to learn from centuries of warfare waged in all arenas.  I firmly believe that the one who forgets the past is doomed to lose his present and future.  This is my firm stance on the issue, no matter what may have been interpolated from my past statements.
Cpl Benjamin Long
12 y
RQ-170: The Aviationist displays a photo of the claimed cloned aircraft in an article (unverified)

A Pakistan Defence article makes the same claim using the same photo (unverified)

RT News displays a different photo of the cloned vehicle in its article

The Digital Journal displays a third photo of the cloned aircraft

A military PR site dismisses these claims as a mock-up, but again, who do you trust?
SPC Chris Stiles
>1 y
Well, if the claims about the copies are true, none have been reported to be flying yet, and Iran just loves to show off its new toys as early as possible.  Besides, rumor has it that the RQ-170's replacement is already well under development and may even be fielding already, though I'm not close enough to that circle to "know."  The RQ-170 had a limited implementation in the force; from what I can gather, only about 30 were purchased from the manufacturer, and my guesstimate puts it at only about 20 remaining after crashes and other losses.