Today I have been both murderous and merciful.
I have deliberately mown down pensioners and a pack of dogs.
I have ploughed into the homeless, slain a couple of athletes and run over the obese.

But I have always tried to save the children.
As I finish my session on the Moral Machine — a public experiment being run by the Massachusetts Institute of Technology — I learn that my moral outlook is not universally shared.
Some argue that aggregating public opinions on ethical dilemmas is an effective way to endow intelligent machines, such as driverless cars, with limited moral reasoning capacity.
Yet after my experience, I am not convinced that crowdsourcing is the best way to develop what is essentially the ethics of killing people.
The question is not purely academic: Tesla is being sued in China over the death of a driver of a car equipped with its semi-autonomous autopilot.
Tesla denies the technology was at fault.
Anyone with a computer and a coffee break can contribute to MIT’s mass experiment, which imagines the brakes failing on a fully autonomous vehicle.
The vehicle is packed with passengers, and heading towards pedestrians.
The experiment depicts 13 variations of the trolley problem — a classic dilemma in ethics that involves deciding who will die under the wheels of a runaway tram.
In MIT’s reformulation, the runaway is a self-driving car that can keep to its path or swerve; both mean death and destruction.
The choice can be between passengers and pedestrians, or two sets of pedestrians.
Calculating who should perish involves pitting more lives against less, young against old, professionals against the homeless, pregnant women against athletes, humans against pets.
At heart, the trolley problem is about deciding who lives, who dies — the kind of judgment that truly autonomous vehicles may eventually make.
My preferences are revealed afterwards: I mostly save children and sacrifice pets.
Pedestrians who are not jaywalking are spared and passengers expended.
It seems obvious: by choosing to climb into a driverless car, passengers should shoulder the burden of risk.
As for my aversion to swerving, should caution not dictate that driverless cars are generally programmed to follow the road?
It is illuminating — until you see how your preferences stack up against everyone else's.
In the business of life-saving, I fall short — especially when it comes to protecting car occupants.
Upholding the law and not swerving seem more important to me than to others; the social status of my intended victims much less so.
We could argue over the technical aspects of dishing out death judiciously.
For example, if we are to condemn car occupants, would we go ahead regardless of whether the passengers are children or criminals?
But to fret over such details would be pointless.
If anything, this experiment demonstrates the extreme difficulty of reaching a consensus on the ethics of driverless cars.
Similar surveys show that the utilitarian ideal of saving the greatest number of lives works pretty well for most people as long as they are not the roadkill.
I am pessimistic that we can simply pool our morality and subscribe to a norm — because, at least for me, the norm is not normal.
This is the hurdle faced by makers of self-driving cars, which promise safer roads overall by reducing human error: who will buy a vehicle run on murderous algorithms they do not agree with, let alone a car programmed to sacrifice its occupants?
It is the idea of premeditated killing that is most troubling.
That sensibility renders the death penalty widely unpalatable, and ensures abortion and euthanasia remain contentious areas of regulation.
Most of us, though, grudgingly accept that accidents happen.
Even with autonomous cars, there may be room for leaving some things to chance.