Don’t Fear Artificial Intelligence
By Ray Kurzweil
Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.
If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.
The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.
We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredfold compared with six centuries ago. Since that time, murders have declined tenfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world, another development aided by AI.
There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice, and thus far none of the anticipated problems.
Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way we do have time to devise ethical standards.
There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses, along the lines of the sketch below.
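To make that idea concrete, here is a minimal sketch of what such a safeguard could look like in code. Everything in it is illustrative rather than drawn from any real deployed system: the `MISSION` whitelist, the `OVERSEER_KEY`, and the function names are assumptions, and HMAC signing is just one plausible choice of cryptographic check. The point is only that an action runs when it both falls within the program’s declared mission and carries a valid authorization.

```python
import hmac
import hashlib

# Hypothetical declared mission scope and overseer-held secret key.
# Both are invented for illustration, not part of any real standard.
MISSION = {"diagnose_disease", "suggest_treatment"}
OVERSEER_KEY = b"replace-with-a-securely-stored-key"

def sign(action: str) -> str:
    """Overseer side: authorize an action by signing it with the shared key."""
    return hmac.new(OVERSEER_KEY, action.encode(), hashlib.sha256).hexdigest()

def authorized(action: str, signature: str) -> bool:
    """Program side: permit an action only if it is in-mission AND validly signed."""
    in_mission = action in MISSION
    valid_sig = hmac.compare_digest(sign(action), signature)
    return in_mission and valid_sig

# A signed, in-mission request is allowed...
print(authorized("diagnose_disease", sign("diagnose_disease")))  # True
# ...an out-of-mission request is refused even with a valid signature...
print(authorized("launch_missiles", sign("launch_missiles")))    # False
# ...and a forged signature is refused even for an in-mission action.
print(authorized("diagnose_disease", "not-a-real-signature"))    # False
```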
Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.
AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.