Belfer Center, Artificial Intelligence and National Security (report in English, July 2017, 132 pages, PDF)

STUDY
JULY 2017

Artificial Intelligence and National Security
Greg Allen
Taniel Chan

A study on behalf of Dr. Jason Matheny, Director of the U.S. Intelligence Advanced Research Projects Activity (IARPA)

Belfer Center for Science and International Affairs
Harvard Kennedy School
79 JFK Street
Cambridge, MA 02138
www.belfercenter.org

Statements and views expressed in this report are solely those of the authors and do not imply endorsement by Harvard University, Harvard Kennedy School, the Belfer Center for Science and International Affairs, or IARPA.

Technology profile comparison (fragment; ratings scale: Low / Moderate / High)

Aerospace
- Destructive potential: [...] attacks can be defended.
- Cost profile: In 1945, fighter aircraft were roughly 50 times as expensive as a new civilian car.
- Complexity profile: By WW2, only sophisticated organizations could match the state of the art in aerospace technology.
- Military/civil dual-use potential: One of the first passenger airlines used reconfigured WW1 bombers.
- Difficulty of espionage and monitoring: Factories appear similar to other industry and can be concealed.

Cyber
- Destructive potential: Cyber can damage physical infrastructure and steal key information, but less assuredly.
- Cost profile: Even terrorists and criminals can afford quite useful capabilities.
- Complexity profile: Low-end attacks require minimal expertise; high-end attacks are reserved for states.
- Military/civil dual-use potential: Commercial IT systems can be used for attacks; similar skills are in demand for civil and military uses.
- Difficulty of espionage and monitoring: Even sensitive national security systems are routinely infiltrated without detection.

Biotech
- Destructive potential: Natural pandemics have killed tens of millions; bioweapons could also.
- Cost profile: Equipment is cheap, though expertise can be expensive.
- Complexity profile: Though different now, at first relatively few people had the needed expertise.
- Military/civil dual-use potential: The biopharma and medical industries need similar equipment and expertise as bioweapons.
- Difficulty of espionage and monitoring: Weaponization facilities are difficult to distinguish from commercial ones.

Government Technology Management Approach

In what is admittedly (and necessarily) a partial oversimplification, we have classified the U.S. government's management paradigm for each of the four technologies. Our goal here is to clarify how government viewed the nature of the challenge, especially in its early decades, and to characterize what approach it ultimately took to meet it. A more detailed justification of our analysis is provided in the Appendix. The four approaches are summarized in Table 2:

Table 2: Government Technology Management Approach

Nuclear: All-out effort, government-led development and utilization
- Extraordinary levels of spending and dedication of national resources to nuclear technology continued for many decades after development.
- From 1940 to 1996, 11% of total federal government spending was related to nuclear weapons, even with arms control and voluntary restrictions.
- Initially, nuclear technology was treated as classified regardless of origin; it was illegal to hold patents on nuclear technology.

Aerospace: Government-led public-private partnership
- Heavy government involvement in the aerospace sector through research and development support, acting as an anchor customer, and major regulation.
- Technological superiority was seen as key to national power; the government restricted access to aerospace technology using classification and export restrictions.
- Despite the predominant government role, the U.S. aircraft industry remained within the American economic model of capitalism and free enterprise.

Cyber: Government seeding and harvesting
- Government heavily involved in supporting R&D [...]

[...] U.S. repeatedly ignores need for safety upgrades/investment.

Aerospace
- Success: Aside from brief periods during WW1 and WW2, the U.S. was and is the undisputed leader in developing and using military aerospace technology.
- Success: After WW2, the U.S. emerged as the clear winner in building commercial aircraft for the rapidly growing market in air transportation.
- Success: The main risks are accidental crashes and attacks from superior air forces, both of which the U.S. has responded to effectively.

Cyber
- Success: Though the cyber domain is not as amenable to dominance as aerospace, the U.S. clearly has leading technology and capabilities in both cyber offense and defense.
- Partial Success: U.S. commercial industry leads the world in the computing and internet sectors, but the U.S. government left commercial systems too vulnerable to criminal and nation-state cyber attacks.
- Partial Failure: While the U.S. developed offensive cyber superiority, the government failed for decades to address the asymmetric vulnerability it faced in espionage and attack.

Biotech
- N/A: The U.S. voluntarily disbanded its bioweapons program, saying the deterrent from nuclear weapons was sufficient. The USSR bioweapons program continued, however.
- Success: The U.S. has the largest biotech industry worldwide and the R&D [...]
- Favorable government support of R&D [...] the most risky research was delayed until risks were better understood; the BWC was helpful but had key failures (USSR).

AI Technology Profile: A Worst-case Scenario?

Comparing the technology profile of AI with the prior technology cases, we find that it has the potential to be a worst-case scenario. Proper precautions might alter this profile in the future, but current trends suggest a uniquely difficult challenge.

Destructive Potential: High
At a minimum, AI will dramatically augment autonomous weapons and espionage capabilities and will represent a key aspect of future military power. Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity.87 O

O: Nick Bostrom, Elon Musk, Bill Gates, Stephen Hawking, and many others have expressed concern regarding this scenario.

Cost Profile: Diverse, but potentially low
Developing cutting-edge capabilities in machine learning and AI can be expensive: many firms are spending billions or hundreds of millions of dollars on R&D [...] leaked copies of AI software might be virtually free.

Complexity Profile: Diverse, but potentially low
Advancing the state of the art in AI basic research requires world-class talent, of which there is a very limited pool. However, applying existing AI research to specific problems can sometimes be relatively straightforward and accomplished with less elite talent. The technical expertise required for converting commercially available AI capabilities into military systems is currently high, but this may decline in the future as AI improves.

Military/Civil Dual-Use Potential: High
Militaries and commercial businesses are competing for essentially the exact same talent pool and using highly similar hardware infrastructure. Some military applications (e.g. autonomous weapons) require additional access to non-AI-related expertise to deliver capability.

Difficulty of Espionage and Monitoring: High
The overlap between commercial and military technology makes it difficult to distinguish which AI activities are potentially hostile. Few if any physical markers of AI development exist. The total number of actors developing and fielding advanced AI systems will be significantly higher than for nuclear or even aerospace. Monitors will find it difficult to assess the AI aspects of any autonomous weapon system without direct access.

Lessons Learned

Having provided our observations of previous cases, we will now attempt to summarize lessons learned. We recognize that there are vast differences of time, technology, and context between these cases and AI. This is our effort to characterize some lessons which endure nevertheless.

Lesson #1: Radical technology change begets radical government policy ideas

The transformative implications of nuclear weapons technology, combined with the Cold War context, led the U.S. government to consider some extraordinary policy measures, including but not limited to the following:

- Enacted: Giving one individual sole authority to start nuclear war. The United States President, as head of government and commander in chief of the military, was invested with supreme authority regarding nuclear weapons.88
- Considered: Internationalizing control of nuclear weapons under the exclusive authority of the United Nations in a collective security arrangement.P 89
- Enacted: Voluntarily sharing atomic weapons technology with allies (which occurred) and adversaries including the Soviet Union (which did not).90
- Considered: Atomic annihilation. Pre-emptive and/or retaliatory atomic annihilation of adversaries, which could have resulted in millions or even billions of deaths.Q
- Enacted: Voluntarily restricting development in arms control frameworks to ban certain classes of nuclear weapons and certain classes of nuclear tests.

P: This was the so-called Baruch Plan, which the U.S. proposed at the United Nations but abandoned shortly thereafter. To this day there is significant debate over whether the United States offered the Baruch Plan in sincerity.

Q: Senior U.S. military officials, including Lieutenant General Leslie Groves, the director of the Manhattan Project, and General Orvil Anderson, commander of the Air University, publicly argued that the United States should strike the Soviet Union with nuclear weapons to prevent it from acquiring nuclear technology. Respected foreigners including Winston Churchill, John von Neumann, and Bertrand Russell all advised the United States to do the same. How seriously the United States' senior leadership considered this first-strike advice is difficult to say with certainty. Retaliatory nuclear strikes and mutually assured destruction remain the official policy of the United States.

The world has lived with some of these policies for seven decades, so the true extent of their radicalism (at the time they were first considered) is hard to convey. The first example is perhaps the easiest, because it required passage of the Presidential Succession Act of 1947, which laid the foundation for the 25th Amendment to the United States Constitution. Though there were other proximate causes for the 25th Amendment, such as the assassination of President Kennedy, it is only a mild stretch to say that the invention of nuclear weapons was so significant that it led to a change in the United States Constitution.

Though nuclear weapons clearly resulted in the most radical policy proposals, the other cases also led to significant changes. For instance, the Department of Defense ultimately created a full armed service to make use of aerospace technology, the organization now called the U.S. Air Force. Cyber challenges led to the creation of U.S. Cyber Command. These were significant changes, though time has made them familiar.

It remains unclear what the full impact of AI technology on national security will be, and how fast it will arrive. So far, we have argued that it is highly likely to be a transformative military technology. Some, such as Nick Bostrom, believe that the recursive improvement property of AI has the potential to create a superintelligence that might lead to the extinction of the entire human species.91 If continued rapid progress in AI leads some governments to share Bostrom's view, they may consider policies as truly radical as those considered in the early decades of nuclear weapons. The bigger and more visible the impacts of AI become (and we argue the impacts are likely to be increasingly large and obvious over time), the more policymakers will feel justified in making extreme departures from existing policy.

Lesson #2: Arms races are sometimes unavoidable, but they can be managed

Fears of aerial bombing led to an international treaty banning the use of weaponized aircraft, but voluntary restraint was quickly abandoned and did not stop air war in WWI. In 1899, diplomats from the world's leading military powers convened in The Hague for a peace conference. One of the more interesting outcomes of the conference was a five-year moratorium on all offensive military uses of aircraft.R Though the intention was to later make the ban permanent, it was abandoned at the second Hague conference of 1907 once countries saw the irresistible potential of aerial warfare. Accordingly, all the great powers began constructing and planning for the use of aircraft bombers.92 In 1910, the combined military air fleets of the European great powers contained 50 airplanes. By 1914, the number reached 700.93 When World War I broke out, the only real limitation on the use of military air power was technology: the primitive airplanes had limited range and bomb-carrying capacity. Still, every European belligerent's capital, save Rome, was bombed from the air.94

R: At the time, diplomats were primarily concerned with aerial bombardment from motor-driven balloons, but the treaty language was sufficiently broad that it applied to fixed-wing aircraft upon their invention.

The applications of AI to warfare and espionage are likely to be as irresistible as aircraft. Preventing expanded military use of AI is likely impossible. Aerospace technology ultimately became nearly synonymous with military power, and it seems likely that applications of AI will ultimately go the same route. Just as businesses are choosing machine learning because competitively they have no choice, so too will militaries and intelligence agencies feel pressure to expand the use of military AI applications. Michael Rogers, head of the United States National Security Agency and Cyber Command, agrees: "It is not the if. It's only the when to me. This is coming."95 That sense of inevitability derives not only from how useful AI is already proving to be, but also from the belief that current applications have only scratched the surface of what capabilities are likely to come. Though outright bans of AI applications in the national security sector are unrealistic, the more modest goal of safe and effective technology management must be pursued.

The ban of aircraft fell apart, but the United States, its allies, and even its adversaries did develop a framework that sought to limit the risks of aerospace technology. Though many details will remain unclear until the technology is more mature, eventually the United States and other actors will have to develop a regime that limits the risk of military AI technology proliferation.

Lesson #3: Government must both promote and restrain commercial activity

Failure to recognize the inherent dual-use nature of technology can cost lives, as the example of the Rolls-Royce Nene jet engine shows. After World War II, the United States recognized that facilitating economic growth of the commercial aerospace industry and maintaining military secrecy were often at odds. For instance, the United Kingdom had superior jet engine technology at the end of World War II but faced significant financial challenges. The British engine manufacturers, seeking export revenues, sold 25 of their "commercial" Rolls-Royce Nene jet engines to the Soviet Union.
