Timeline of AI safety

|-
| 1947 || {{dts|July}} || Fictional portrayal || ''{{w|With Folded Hands}}'', a novelette by {{w|Jack Williamson}}, is published. The novelette describes how advanced robots (humanoids) take over large parts of the world to fulfil their Prime Directive, which is to make humans happy.
|-
| 1950 || || Publication || {{w|Alan Turing}} publishes ''Computing Machinery and Intelligence'' in the philosophy journal ''Mind''. The paper introduces the concept of the {{w|Turing test}}: the idea that a machine can be said to have achieved human-level intelligence if it can convince a human that it is human. The Turing test would become a key part of discussions around benchmarking artificial intelligence, and would also enter popular culture over the next few decades.<ref>{{cite web|url = http://www.mind.ilstu.edu/curriculum/turing_machines/turing_test_and_machine_intelligence.php|title = Turing Test and Machine Intelligence|last = Bradley|first = Peter|publisher = Consortium on Computer Science Instruction}}</ref><ref>{{cite web|url = https://intelligence.org/2013/08/11/what-is-agi/|title = What is AGI?|last = Muehlhauser|first = Luke|date = August 11, 2013|accessdate = September 8, 2019|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 1960 || {{dts|May 6}} || Publication || {{w|Norbert Wiener}}'s article ''Some Moral and Technical Consequences of Automation'' is published.<ref>{{cite web|url = https://www.jstor.org/stable/1705998|title = Some Moral and Technical Consequences of Automation|last = Wiener|first = Norbert|date = May 6, 1960|accessdate = August 18, 2019}}</ref> In 2013, Jonah Sinick would note the similarities between the points raised in this article and the thinking of AI safety leader Eliezer Yudkowsky.<ref>{{cite web|url = https://www.lesswrong.com/posts/2rWfmahhqASnFcYLr/norbert-wiener-s-paper-some-moral-and-technical-consequences|title = Norbert Wiener's paper "Some Moral and Technical Consequences of Automation"|date = July 20, 2013|accessdate = August 18, 2019|publisher = LessWrong|last = Sinick|first = Jonah}}</ref>
|-
| 1984 || {{dts|October 26}} || Fictional portrayal || The American science fiction film ''{{w|The Terminator}}'' is released. The film contains the first appearance of [[w:Skynet (Terminator)|Skynet]], a "neural net-based conscious group mind and artificial general intelligence" that "seeks to exterminate the human race in order to fulfill the mandates of its original coding".
|-
| 1987 || || Publication || In an article ''A Question of Responsibility'' for AI Magazine, Mitchell Waldrop introduces the term {{w|machine ethics}}. He writes: ''However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics.''<ref name="Waldrop1987">{{cite journal |last1=Waldrop |first1=Mitchell |title=A Question of Responsibility |journal=AI Magazine |date=Spring 1987 |volume=8 |issue=1 |pages=28–39 |doi=10.1609/aimag.v8i1.572}}</ref>
|-
| 1993 || || Publication || {{w|Vernor Vinge}}'s article "The Coming Technological Singularity: How to Survive in the Post-Human Era" is published. The article popularizes the idea of an intelligence explosion.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/History_of_AI_risk_thought |title=History of AI risk thought |website=Lesswrongwiki |accessdate=July 28, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2005 || || Organization || The {{w|Future of Humanity Institute}} (FHI) is founded.<ref>{{cite web |url=http://www.oxfordmartin.ox.ac.uk/research/programmes/future-humanity/ |publisher=Oxford Martin School |title=About |accessdate=July 25, 2017 |quote=The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School).}}</ref>
|-
| 2005 || || Conference || The field of {{w|machine ethics}} is delineated at the Fall 2005 Symposium on Machine Ethics held by the {{w|Association for Advancement of Artificial Intelligence}}.<ref>{{cite web |url=http://www.aaai.org/Library/Symposia/Fall/fs05-06 |title=Papers from the 2005 AAAI Fall Symposium |deadurl=yes |archiveurl=https://web.archive.org/web/20141129044821/http://www.aaai.org/Library/Symposia/Fall/fs05-06 |archivedate=2014-11-29 |df= }}</ref>
|-
| 2005 || {{dts|August 21}} || AI box || The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://sl4.org/archive/0508/index.html#12007 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2008 || || Publication || ''[[w:Global Catastrophic Risks (book)|Global Catastrophic Risks]]'' is published. The book includes Eliezer Yudkowsky's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk".<ref>{{cite web |url=https://intelligence.org/files/AIPosNegFactor.pdf |title=AIPosNegFactor.pdf |accessdate=July 27, 2017}}</ref>
|-
| 2008 || {{dts|October 13}} || Publication || ''Moral Machines: Teaching Robots Right from Wrong'' by Wendell Wallach and Colin Allen is published by {{w|Oxford University Press}}. The book advertises itself as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics."
|-
| 2008 || {{dts|November}}–December || Outside review || The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate |title=The Hanson-Yudkowsky AI-Foom Debate |website=Lesswrongwiki |accessdate=July 1, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6k1b |title=Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong |accessdate=July 15, 2017 |quote=Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2013 || {{dts|July}} || Organization || The Foundational Research Institute (FRI) is founded. Some of FRI's work discusses risks from artificial intelligence.<ref>{{cite web |url=https://foundational-research.org/transparency |title=Transparency |publisher=Foundational Research Institute |accessdate=July 27, 2017}}</ref>
|-
| 2013 || {{dts|July 8}} || Publication || Luke Muehlhauser's blog post ''Four Focus Areas of Effective Altruism'' is published. The four focus areas listed are poverty reduction, meta effective altruism, the long term future, and animal suffering. AI safety concerns fall under the "long term future" focus area.<ref>{{cite web|url = https://www.lesswrong.com/posts/JmmA2Mf5GrY9D6nQD/four-focus-areas-of-effective-altruism|title = Four Focus Areas of Effective Altruism|last = Muehlhauser|first = Luke|date = July 8, 2013|accessdate = September 8, 2019|publisher = LessWrong}}</ref> This identification of focus areas would persist for several years, and would also be incorporated into the design of the Effective Altruism Funds in 2017. The blog post also illustrates the central position of AI safety in the then-nascent effective altruist movement.
|-
| 2013 || {{dts|October 1}} || Publication || ''[[w:Our Final Invention|Our Final Invention: Artificial Intelligence and the End of the Human Era]]'' by {{w|James Barrat}} is published. The book discusses risks from human-level and superhuman artificial intelligence.
|-
| 2014 || {{dts|October 22}}–24 || Opinion || During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[w:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
|-
| 2014 || {{dts|November 5}} || Publication || The book ''Ethical Artificial Intelligence'' by {{w|Bill Hibbard}} is released on {{w|arXiv}}. The book brings together ideas from Hibbard's past publications related to technical AI risk.<ref>{{cite web|url = https://arxiv.org/abs/1411.1373|title = Ethical Artificial Intelligence|last = Hibbard|first = Bill|date = 2014}}</ref>
|-
| 2014 || {{dts|December 2}} || Opinion || In an interview with the BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.<ref>{{cite web |url=http://www.bbc.com/news/technology-30290540 |title=Stephen Hawking warns artificial intelligence could end mankind |publisher=BBC News |date=December 2, 2014 |accessdate=July 25, 2017}}</ref>
|-
| 2014 || {{dts|December 16}} || Fictional portrayal || The movie ''[[w:Ex Machina (film)|Ex Machina]]'' is released. The movie highlights the paperclip maximizer idea: it shows how a robot single-mindedly programmed to secure its own escape can callously damage human lives in the process. It also covers the ideas of the Turing test (a robot convincing a human that it has human-like qualities, even though the human knows that the robot is not human) and the AI box experiment (a robot convincing a human to release it from its "box", similar to the experiment proposed and carried out by Eliezer Yudkowsky). The movie leads to more public discussion of AI safety.<ref>{{cite web|url = https://blog.ycombinator.com/ex-machinas-scientific-advisor-murray-shanahan/|title = Ex Machina’s Scientific Advisor – Murray Shanahan|publisher = Y Combinator|date = June 28, 2017|accessdate = August 18, 2019}}</ref><ref>{{cite web|url = https://eandt.theiet.org/content/articles/2015/01/ex-machina-movie-asks-is-ai-research-in-safe-hands/|title = Ex Machina movie asks: is AI research in safe hands?|date = January 21, 2015|accessdate = August 18, 2019}}</ref><ref>{{cite web|url = https://www.lesswrong.com/posts/rvFzgeracFc7PRxf4/go-see-ex-machina|title = Go see Ex Machina|date = February 26, 2016|accessdate = August 18, 2019|publisher = LessWrong}}</ref><ref>{{cite web|url = https://www.engadget.com/2015/04/01/ex-machina-alex-garland-interview/|title = 'Ex Machina' director embraces the rise of superintelligent AI|last = Hardawar|first = Devindra|date = April 1, 2015|accessdate = August 18, 2019|publisher = Engadget}}</ref>
|-
| 2015 || || || Daniel Dewey joins the Open Philanthropy Project.<ref>{{cite web |url=http://www.openphilanthropy.org/about/team/daniel-dewey |title=Daniel Dewey |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref> He would serve as Open Phil's program officer for potential risks from advanced artificial intelligence.
|-
| 2017 || {{dts|December 20}} || Publication || The "2017 AI Safety Literature Review and Charity Comparison" is published. The lengthy blog post covers the published work of prominent organizations focused on AI safety, and is an update of a similar post published a year earlier.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison|title = 2017 AI Safety Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 20, 2017|accessdate = August 18, 2019}}</ref>
|-
| 2018 || {{dts|February 28}} || Publication || 80,000 Hours publishes a blog post ''A new recommended career path for effective altruists: China specialist'' suggesting specialization in China as a career path for people in the effective altruist movement. China's likely leading role in the development of artificial intelligence is highlighted as particularly relevant to AI safety efforts.<ref>{{cite web|url = https://80000hours.org/articles/china-careers/|title = A new recommended career path for effective altruists: China specialist|last = Todd|first = Benjamin|last2 = Tse|first2 = Brian|date = February 28, 2018|accessdate = September 8, 2019|publisher = 80,000 Hours}}</ref>
|-
| 2018 || {{dts|April 5}} || Documentary || The documentary ''{{w|Do You Trust This Computer?}}'', directed by {{w|Chris Paine}}, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as {{w|Ray Kurzweil}}, {{w|Elon Musk}} and {{w|Jonathan Nolan}}.
|-
| 2018 || {{dts|April 9}} || Newsletter || The first issue of the weekly AI Alignment Newsletter, managed by Rohin Shah of the Center for Human-Compatible AI, is sent. Over the next two years, the team responsible for producing the newsletter would grow to four people, and the newsletter would also be produced in Chinese and in podcast form.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/Prxqvhr9JFj7JyJRX/alignment-newsletter-one-year-retrospective|title = Alignment Newsletter One Year Retrospective|date = April 10, 2019|accessdate = September 8, 2019|publisher = Effective Altruism Forum}}</ref><ref>{{cite web|url = https://mailchi.mp/ff6340049bd0/alignment-newsletter-1|title = Alignment Newsletter 1|date = April 9, 2018|accessdate = September 8, 2019}}</ref><ref>{{cite web|url = https://rohinshah.com/alignment-newsletter/|title = Alignment Newsletter|accessdate = September 8, 2019}}</ref>
|-
| 2018 || {{dts|April 12}}–22 || Conference || The first AI Safety Camp is held in Gran Canaria.<ref>{{cite web|url = https://aisafetycamp.com/previous-camps/|title = Previous Camps|accessdate = September 7, 2019}}</ref> The AI Safety Camp team would go on to run about two camps a year.