Original translation: Longteng (http://www.vemp.tw). Translator: chinawungbo2. Please credit the source when reposting.
Forum thread: http://www.vemp.tw/bbs/thread-484865-1-1.html


Restraining the robots


Autonomous weapons and the new laws of war


A technology that may prove hard to restrain



The Harop, a kamikaze drone, bolts from its launcher like a horse out of the gates. But it is not built for speed, nor for a jockey. Instead it just loiters, unsupervised, too high for those on the battlefield below to hear the thin old-fashioned whine of its propeller, waiting for its chance.


If the Harop is left alone, it will eventually fly back to a pre-assigned airbase, land itself and wait for its next job. Should an air-defence radar lock on to it with malicious intent, though, the drone will follow the radar signal to its source and the warhead nestled in its bulbous nose will blow the drone, the radar and any radar operators in the vicinity to kingdom come.




Acknowledging the long, unpleasant history of devices which kill indiscriminately, or without direct human command, is crucial to any discussion of the risks, and morality, of autonomous weapons. It should not mask the fact that their capabilities are increasing quickly—and that although agreements to limit their use might be desirable, they will be very difficult to enforce. It is not that hard to decide if a landmine fits the criteria that ban such weapons under the Ottawa treaty. But whether a Harop is an autonomous robot or a remote-controlled weapon depends on the software it is running at the time.
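
By way of illustration only, the toy Python sketch below shows how a single software setting could make the same airframe either a remote-controlled weapon or an autonomous one; every name in it (EngagementMode, operator_confirms and so on) is invented for the example and is not drawn from any real system.

```python
from enum import Enum

class EngagementMode(Enum):
    HUMAN_CONFIRM = "human_confirm"   # remote-controlled: an operator approves every strike
    AUTONOMOUS = "autonomous"         # the loitering munition decides for itself

def engage_if_allowed(radar_contact, mode, operator_confirms):
    """Return True if the (hypothetical) drone may dive on the emitter.

    The airframe is identical in both branches; only the software mode
    changes who takes the kill decision.
    """
    if radar_contact is None:
        return False                                # nothing locked on: keep loitering
    if mode is EngagementMode.HUMAN_CONFIRM:
        return operator_confirms(radar_contact)     # human in the loop
    return True                                     # fully autonomous: attack the signal source

# The same contact, two legally and ethically different categories of weapon.
contact = {"bearing_deg": 42.0, "band": "S", "hostile_lock": True}
print(engage_if_allowed(contact, EngagementMode.HUMAN_CONFIRM, lambda c: False))  # False
print(engage_if_allowed(contact, EngagementMode.AUTONOMOUS, lambda c: False))     # True
```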


Weapons have been able to track their prey unsupervised since the first acoustic-homing torpedoes were used in the second world war. Most modern weapons used against fast-moving machines home in on their sound, their radar reflections or their heat signatures. But, for the most part, the choice about what to home in on—which aircraft’s hot jets, which ship’s screws—is made by a person.


An exception is in defensive systems, such as the Phalanx guns used by the navies of America and its allies. Once switched on, the Phalanx will fire on anything it sees heading towards the ship it is mounted on. And in the case of a ship at sea that knows itself to be under attack by missiles too fast for any human trigger finger, that seems fair enough. Similar arguments can be made for the robot sentry guns in the demilitarised zone (DMZ) between North and South Korea.
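
That engagement logic can be caricatured in a few lines of code. The sketch below is purely illustrative: the threshold, geometry and function names are invented rather than taken from the real Phalanx system; it merely shows the "fire at anything closing on the ship" rule in miniature.

```python
import math

def should_engage(contact_pos, contact_vel, own_pos=(0.0, 0.0), closing_threshold=150.0):
    """Toy rule: engage anything whose velocity carries it towards the ship
    faster than a threshold closing speed (all values are invented)."""
    rel = (own_pos[0] - contact_pos[0], own_pos[1] - contact_pos[1])
    dist = math.hypot(*rel)
    if dist == 0.0:
        return True
    # Closing speed = component of the contact's velocity along the line to the ship.
    closing_speed = (contact_vel[0] * rel[0] + contact_vel[1] * rel[1]) / dist
    return closing_speed > closing_threshold

# A sea-skimming missile inbound at 300 m/s trips the rule; a crossing aircraft does not.
print(should_engage((-9000.0, 0.0), (300.0, 0.0)))   # True
print(should_engage((-9000.0, 0.0), (0.0, 250.0)))   # False
```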

Rise of the machines


The challenge that modern armed forces, and armsmakers like IAI, are working on is the ability to pick the target out from a field of non-targets. There are two technological developments that make the challenge a timely one. One is that computers are far more powerful than they used to be and, thanks to “machine learning”, getting much more sophisticated in their ability to distinguish between objects. If an iPhone can welcome your face but reject your sibling’s, why shouldn’t a missile be able to distinguish a tank from a school bus?
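
As a rough illustration of what such discrimination involves, the sketch below trains a toy classifier on invented "signature" features for tanks and school buses using scikit-learn; the features, numbers and labels are made up for the example and bear no relation to real targeting data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented two-feature "signatures": (thermal contrast, radar reflectivity).
# In this toy data, tanks are simply assumed to run hotter and reflect more.
tanks = rng.normal(loc=[0.8, 0.9], scale=0.05, size=(200, 2))
buses = rng.normal(loc=[0.4, 0.3], scale=0.05, size=(200, 2))

X = np.vstack([tanks, buses])
y = np.array([1] * 200 + [0] * 200)        # 1 = tank, 0 = school bus

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.78, 0.88], [0.42, 0.31]]))   # expected: [1 0]
```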




Cost is also a factor in armies where trained personnel are pricey. “The thing about robots is that they don’t have pensions,” says General Sir Richard Barrons, one of Britain’s most senior commanders until 2016. Nor do they have dependents. The loss of a robot is measured in money and capability, not human potential.


If keeping a human in the loop was merely a matter of spending more, it might be deemed worthwhile regardless. But human control creates vulnerabilities. It means that you must pump a lot of encrypted data back and forth. What if the necessary data links are attacked physically—for example with anti-satellite weapons—jammed electronically or subverted through cyberwarfare? Future wars are likely to be fought in what America’s armed forces call “contested electromagnetic environments”. The Royal Air Force is confident that encrypted data links would survive such environments. But air forces have an interest in making sure there are still jobs for pilots; this may leave them prey to unconscious bias.


The vulnerability of communication links to interference is an argument for greater autonomy. But autonomous systems can be interfered with, too. The sensors for weapons like Brimstone need to be a lot more fly than those required by, say, self-driving cars, not just because battlefields are chaotic, but also because the other side will be trying to disorient them. Just as some activists use asymmetric make-up to try to confuse face-recognition systems, so military targets will try to distort the signatures which autonomous weapons seek to discern. Paul Scharre, author of “Army of None: Autonomous Weapons and the Future of War”, warns that the neural networks used in machine learning are intrinsically vulnerable to spoofing.
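
The sort of spoofing Mr Scharre describes can be shown on a toy model. The sketch below applies a small FGSM-style perturbation to the input of an invented logistic "targeting" classifier, nudging each feature in the direction that most lowers the target score; the weights and numbers are made up, and a real neural network would be attacked in the same spirit but not with this exact code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "targeting" model: a logistic classifier with fixed, made-up weights.
w = np.array([2.0, -1.5, 0.5])
b = -0.2

def target_score(x):
    return sigmoid(w @ x + b)              # probability that the signature is a valid target

x = np.array([1.0, 0.2, 0.8])              # a signature the model is confident about
print(f"before spoofing: {target_score(x):.3f}")    # ~0.870

# FGSM-style spoofing: for this model the gradient of the score with respect to
# the input is proportional to w, so stepping against sign(w) lowers the score most.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"after spoofing:  {target_score(x_adv):.3f}") # ~0.378
```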


Judgment day


New capabilities, reduced costs, resistance to countermeasures and the possibility of new export markets are all encouraging R&D in autonomous weapons. To nip this in the bud, the Campaign to Stop Killer Robots is calling for a pre-emptive ban on “fully autonomous” weapons. The trouble is that there is little agreement on where the line is crossed. Switzerland, for instance, says that autonomous weapons are those able to act “in partial or full replacement of a human in the use of force, notably in the targeting cycle”, thus encompassing Harop and Brimstone, among many others. Britain, by contrast, says autonomous weapons are only those “capable of understanding higher level intent and direction”. That excludes everything in today’s arsenals, or for that matter on today’s drawing boards.


Partly in order to sort these things out, in 2017 the UN’s Convention on Certain Conventional Weapons formalised its earlier discussions of the issues by creating a group of governmental experts (GGE) to study the finer points of autonomy. As well as trying to develop a common understanding of what weapons should be considered fully autonomous, it is considering both a blanket ban and other options for dealing with the humanitarian and security challenges that they create.

Most states involved in the convention’s discussions agree on the importance of human control. But they differ on what this actually means. In a paper for Article 36, an advocacy group named after a provision of the Geneva conventions that calls for legal reviews on new methods of warfare, Heather Roff and Richard Moyes argue that “a human simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness” is not really in control. “Meaningful control”, they say, requires an understanding of the context in which the weapon is being used as well as capacity for timely and reasoned intervention. It also requires accountability.




The two dozen states that want a legally binding ban on fully autonomous weapons are mostly military minnows like Djibouti and Peru, but some members, such as Austria, have diplomatic sway. None of them has the sort of arms industry that stands to profit from autonomous weapons. They ground their argument in part on International Humanitarian Law (IHL), a corpus built around the rules of war laid down in the Hague and Geneva conventions. This demands that armies distinguish between combatants and civilians, refrain from attacks where the risk to civilians outweighs the military advantage, use no more force than is proportional to the objective and avoid unnecessary suffering.


When it comes to making distinctions, Vincent Boulanin and Maaike Verbruggen, experts at SIPRI, note that existing target-recognition systems, for all their recent improvement, remain “rudimentary”, often vulnerable to bad weather or cluttered backgrounds. Those that detect humans are “very crude”. And this is before wily enemies try to dupe the robots into attacking the wrong things.


Necessity and proportionality, which require weighing human lives against military aims, are even more difficult. “However sophisticated new machines may be, that is beyond their scope,” says Major Kathleen McKendrick of the British army. An army that uses autonomous weapons needs to be set up so that proportionality decisions can be made before anything is fired.


Salvation?


More broadly, IHL is shaped by the “Martens clause”, originally adopted in the Hague convention of 1899. This says that new weapons must comply with “the principles of humanity” and “dictates of public conscience”. Bonnie Docherty of Human Rights Watch, the NGO which co-ordinates the anti-robot campaign, argues that, “As autonomous machines, fully autonomous weapons could not appreciate the value of human life and the significance of its loss...They would thus fail to respect human dignity.” A strong argument, but hardly legally watertight; other philosophies are available. As for the dictates of public conscience, research and history show that they are more flexible than a humanitarian would wish.


Leaving aside law and ethics, autonomous weapons could pose new destabilising risks. Automatic systems can interact in seemingly unpredictable ways, as when trading algorithms cause “flash crashes” on stockmarkets. Mr Scharre raises the possibility of a flash war caused by “a cascade of escalating engagements”. “If we are open to the idea that humans make bad decisions”, says Peter Roberts, director of military sciences at the Royal United Services Institute, a think-tank, “we should also be open to the idea that AI systems will make bad decisions—just faster.”
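
A toy simulation makes the “flash war” worry concrete. In the sketch below, two invented automated retaliation policies each respond slightly harder than they were hit and exchange blows at machine speed; long before a hypothetical human review window closes, the notional intensity has exploded. All names and numbers are illustrative.

```python
# Two invented automated retaliation policies interacting at machine speed.
HUMAN_REVIEW_AFTER_STEPS = 50      # invented: machine-speed exchanges per human decision window

def automated_response(incoming_intensity):
    # Each system is tuned to hit back "a little harder than it was hit".
    return 1.2 * incoming_intensity + 0.1

a_strikes_b, b_strikes_a = 0.0, 0.1        # a minor initial provocation
for _ in range(HUMAN_REVIEW_AFTER_STEPS):
    a_strikes_b = automated_response(b_strikes_a)
    b_strikes_a = automated_response(a_strikes_b)

print(f"notional intensity after {HUMAN_REVIEW_AFTER_STEPS} machine-speed exchanges: {b_strikes_a:,.0f}")
```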


Beyond the core group advocating a ban there is a range of opinions. China has indicated that it supports a ban in principle; but on use, not development. France and Germany oppose a ban, for now; but they want states to agree a code of conduct with wriggle room “for national interpretations”. India, which chaired the GGE, is reserving its position. It is eager to avoid a repeat of nuclear history, in which technological have-nots were locked out of game-changing weaponry by a discriminatory treaty.




The urge to restrict the technology before it is widely fielded, and used, is understandable. If granting weapons ever more autonomy turns out, in practice, to yield a military advantage, and if developed countries see themselves in wars of national survival, rather than the wars of choice they have waged recently, past practice suggests that today’s legal and ethical restraints may fall away. States are likely to sacrifice human control for self-preservation, says General Barrons. “You can send your children to fight this war and do terrible things, or you can send machines and hang on to your children.” Other people’s children are other people’s concern.
