• Killer robots: a safe future or a deserted planet?

    Military conflict is far from rare on our planet today. Many territories are caught up in war campaigns, a great number of which are ongoing with no prospect of ending in the near future. War has always been a devastating force. Both parties inevitably incur heavy losses, losing people and important economic assets and then spending vast sums to rebuild their ruined economies, which slows a country's development. The loss of human life and the suffering it causes are perhaps the saddest part of the whole story. If wars are so hard to avoid, is it possible to diminish the human factor, or even to replace it by other means, say, robots? Using them, we could avoid unnecessary loss of life, or at least minimize it. The idea seems very appealing and promising. But on closer consideration, the outlook appears far less bright.


    Robots on the battlefield are no longer science fiction. Developed countries pay more and more attention to autonomous weapons: technical systems that would be able to conduct fully autonomous armed combat anywhere on the planet. While programs for war robots are being approved, concerned people around the world are launching protest campaigns aimed at banning so-called “killer robots” before it is too late.


    Some experts believe that autonomous “killer robots” could be developed and put to practical use within 20-30 years. Autonomous robotic weapons imply that a machine with artificial intelligence will search for a target, decide whether it is the right target, and then destroy it. That is the most worrying point, since there is no guarantee that the machine would get it right and would not mistake a civilian for a combatant.


    Fully autonomous weapons have not been created yet, but research is so intensive that some countries already have systems that can be regarded as precursors of “killer robots”. Many countries field weapons able to destroy aerial threats automatically. These systems are valued mainly for their ability to reduce risks to soldiers and to shorten response time. The USA is currently the leader in developing autonomous weapons and seems close to full success. The US Reaper and Predator drones used in Afghanistan proved their efficiency on the battlefield and can be considered the first steps on a long road toward truly autonomous weapons. Modern robotic weapons still depend on a human operator to make the final decision whether to pull the trigger. Nevertheless, the autonomy of such systems keeps growing, leaving ever less place for a human in the decision-making process. Military experts think the trend toward unmanned air forces will continue, and they regard the Predator and similar models as a first generation. The US Department of Defence's expenditures on unmanned war systems only confirm this assumption: total spending of this kind has reached about $6 billion annually and is growing rapidly. The Defence Department expects to put autonomous weapons into service in 2025.


    Our society has developed principles of humane warfare, codified in international humanitarian law. Some types of weapons (chemical weapons in particular) are banned under international law. Against this background, the status of fully autonomous weapons is unclear. The main question is whether “killer robots” could comply with the requirements of international humanitarian law, i.e. minimize civilian losses in a military conflict area. Even the proponents of autonomous weapons, let alone their critics, admit this is a serious problem and are looking for ways to make “killer robots” more humane.


    The dilemma of distinguishing between a civilian and a combatant is also very serious. Fully autonomous weapons cannot sense or interpret the difference between soldiers and civilians, especially in a modern military environment. It is very easy to deceive such weapons, for instance by wearing no military uniform or by hiding one's guns.


    Another aspect concerns the rule of proportionality, one of the most complicated principles of international humanitarian law, which implies that a human being evaluates the military situation. The proportionality test forbids the use of a weapon if the expected harm to civilians exceeds the expected military advantage of its use. It is highly likely that a robot would be unable to analyze the vast number of possible scenarios needed to interpret such a situation correctly in real time.
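To see where the difficulty actually lies, the proportionality rule can be caricatured as a single comparison. The sketch below is purely an illustrative assumption, not anything from the law or from any real weapon system; it shows that the comparison itself is trivial, while producing the two input estimates is the judgment a machine cannot reliably make.

```python
# Toy sketch of the proportionality test reduced to a decision rule.
# The numeric "scores" are illustrative assumptions: international
# humanitarian law requires human judgment to estimate them, and that
# estimation -- not this comparison -- is the hard part for a machine.

def proportionality_permits(expected_civilian_harm: float,
                            expected_military_advantage: float) -> bool:
    """Forbid a strike when the expected harm to civilians exceeds
    the expected military advantage of the attack."""
    return expected_civilian_harm <= expected_military_advantage
```

Even in this caricature, everything hinges on the two estimates: a robot that misjudges either input will "correctly" apply the rule to the wrong numbers.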


    The problem of military necessity, like the proportionality rule, requires a subjective analysis of the situation. It allows military forces to plan operations according to the practical requirements of the military situation within a certain time frame.


    An even more serious problem is that fully autonomous weapons are completely deprived of the human emotions that are critical for evaluating an individual's intentions and for distinguishing the right target. Proponents of autonomous weapons argue that this very absence of emotion is their main advantage, but they seem to overlook its side effects. Human emotions provide one of the best possible protections against killing civilians, and their absence can make killing much easier. Robots cannot identify with humans, i.e. they feel no compassion. A robot, for example, could easily kill a child in a conflict zone.


    Moreover, fully autonomous weapons could become an ideal instrument of terror for dictators wishing to preserve their power. There is no possibility of mutiny among such armed forces, since it makes little difference to a robot what kind of order it carries out.


    Given the problems fully autonomous weapons create for international law, it is inevitable that some day autonomous robots will injure or even kill many civilians. When there are many victims, people want to know who is responsible: the military commander, the developer, or perhaps the robot itself? None of these answers is satisfactory, and so accountability, another instrument for protecting civilians during military conflicts, is considerably undermined.


    Proponents of autonomous weapons admit that robots should comply with international humanitarian law, and several mechanisms for making them do so have been proposed. The best known are Ronald Arkin's “ethical governor” and the concept of a powerful artificial intelligence. The “ethical governor” is a complex system enabling a robot to assess a situation correctly and make the right decision in it. The second concept presumes that robots will abide by the law thanks to a mind comparable to a human's, if not surpassing it; such robots are expected to have computing power equal to the cognitive abilities of the human brain. Most scientists, however, consider this more desirable than achievable.
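The "ethical governor" idea can be pictured as a veto layer sitting between target selection and the trigger. The following sketch is a loose illustration under assumed names, fields, and checks, not Arkin's actual architecture; the one faithful property it tries to capture is that the governor can only suppress a proposed action, never initiate one.

```python
# Loose illustration of an "ethical governor" as a veto layer.
# All names, fields, and checks here are assumptions made for this
# sketch, not Arkin's real system.

from dataclasses import dataclass

@dataclass
class ProposedStrike:
    target_is_combatant: bool        # distinction: is this a lawful target?
    expected_civilian_harm: float    # proportionality inputs (assumed given)
    expected_military_advantage: float

def ethical_governor_permits(p: ProposedStrike) -> bool:
    """Permit the strike only if every constraint passes; any failed
    check vetoes the action. The governor restricts, never initiates."""
    if not p.target_is_combatant:                       # distinction principle
        return False
    if p.expected_civilian_harm > p.expected_military_advantage:
        return False                                    # proportionality principle
    return True
```

As with the proportionality sketch above the design stands or falls on its inputs: if the perception system mislabels a civilian as a combatant, every downstream check is applied to false premises.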


    Taking into account all the problems described above, one thing is absolutely clear: the leading powers, together with scientists, should immediately act to regulate the development of military technologies connected with robot autonomy. The possible risks far outweigh the possible benefits. A situation in which fully autonomous weapons decide by themselves when and where to start military operations is completely unacceptable.


    At the very least, all experiments with autonomous robots should be conducted in strict accordance with international humanitarian law and placed under rigorous public oversight, as experiments that threaten the safety of all mankind.
    It is not so difficult to raise a monster. But what do we do when the monster stops obeying its master's orders?..

