Timeline of AI safety

{| class="wikitable"
! Time period !! Development summary !! More details
|-
| Until 1950 || Fictional portrayals only || Most discussion of AI safety is in the form of fictional portrayals, which warn of the risks of robots that, through stupidity or lack of goal alignment, no longer remain under the control of humans.
|-
| 1950 to 2000 || Scientific speculation + fictional portrayals || During this period, discussion of AI safety moves from merely being a topic of fiction to one that scientists who study technological trends start talking about. The era sees commentary by I. J. Good, Vernor Vinge, and Bill Joy.
|-
| 2000 to 2012 || Birth of AI safety organizations || This period sees the creation of the Singularity Institute for Artificial Intelligence (SIAI) (which would later become the Machine Intelligence Research Institute (MIRI)) and the evolution of its mission from creating friendly AI to reducing the risk of unfriendly AI. The Future of Humanity Institute (FHI) is also founded.
|-
| 2013 to present || Mainstreaming of AI safety || SIAI changes its name to MIRI, grows considerably in size, and receives substantial funding. ''Superintelligence'', the book by Nick Bostrom, is released. The Future of Life Institute (FLI) and OpenAI are founded, and the latter grows considerably. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates speak about the importance of AI safety and the risks of unfriendly AI.
|}
{| class="wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 1942 || {{dts|March}} || Fictional portrayal || The {{w|Three Laws of Robotics}} are introduced by {{w|Isaac Asimov}} in his short story "[[w:Runaround (story)|Runaround]]".
|-
| 1947 || {{dts|July}} || Fictional portrayal || ''{{w|With Folded Hands}}'', a novelette by {{w|Jack Williamson}}, is published. The novelette describes how advanced robots (humanoids) take over large parts of the world to fulfil their Prime Directive, which is to make humans happy.
|-
| 1960 || {{dts|May 6}} || Publication || {{w|Norbert Wiener}}'s article ''Some Moral and Technical Consequences of Automation'' is published.<ref>{{cite web|url = https://www.jstor.org/stable/1705998|title = Some Moral and Technical Consequences of Automation|last = Wiener|first = Norbert|date = May 6, 1960|accessdate = August 18, 2019}}</ref> In 2013, Jonah Sinick would note the similarities between the points raised in this article and the thinking of AI safety leader Eliezer Yudkowsky.<ref>{{cite web|url = https://www.lesswrong.com/posts/2rWfmahhqASnFcYLr/norbert-wiener-s-paper-some-moral-and-technical-consequences|title = Norbert Wiener's paper "Some Moral and Technical Consequences of Automation"|date = July 20, 2013|accessdate = August 18, 2019|publisher = LessWrong|last = Sinick|first = Jonah}}</ref>
|-
| 1965 || || Publication || {{w|I. J. Good}} [[w:Existential risk from artificial general intelligence#History|originates]] the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
|-
| 2014 || {{dts|December 2}} || Opinion || In an interview with BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.<ref>{{cite web |url=http://www.bbc.com/news/technology-30290540 |title=Stephen Hawking warns artificial intelligence could end mankind |publisher=BBC News |date=December 2, 2014 |accessdate=July 25, 2017}}</ref>
|-
| 2014 || {{dts|December 16}} || Fictional portrayal || The movie ''[[w:Ex Machina (film)|Ex Machina]]'' is released. The movie highlights the paperclip maximizer idea: it shows how a robot programmed to optimize for its own escape can callously damage human lives in the process. It leads to more public discussion of AI safety.<ref>{{cite web|url = https://blog.ycombinator.com/ex-machinas-scientific-advisor-murray-shanahan/|title = Ex Machina’s Scientific Advisor – Murray Shanahan|publisher = Y Combinator|date = June 28, 2017|accessdate = August 18, 2019}}</ref><ref>{{cite web|url = https://eandt.theiet.org/content/articles/2015/01/ex-machina-movie-asks-is-ai-research-in-safe-hands/|title = Ex Machina movie asks: is AI research in safe hands?|date = January 21, 2015|accessdate = August 18, 2019}}</ref><ref>{{cite web|url = https://www.lesswrong.com/posts/rvFzgeracFc7PRxf4/go-see-ex-machina|title = Go see Ex Machina|date = February 26, 2016|accessdate = August 18, 2019|publisher = LessWrong}}</ref><ref>{{cite web|url = https://www.engadget.com/2015/04/01/ex-machina-alex-garland-interview/|title = 'Ex Machina' director embraces the rise of superintelligent AI|last = Hardawar|first = Devindra|date = April 1, 2015|accessdate = August 18, 2019|publisher = Engadget}}</ref>
|-
| 2015 || || || Daniel Dewey joins the Open Philanthropy Project.<ref>{{cite web |url=http://www.openphilanthropy.org/about/team/daniel-dewey |title=Daniel Dewey |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref> He either begins in, or would later take on, the role of Open Phil's program officer for potential risks from advanced artificial intelligence.
|-
| 2017 || {{dts|October}} || Grant || The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A major grant from the Open Philanthropy Project |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy Project |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
|-
| 2018 || {{dts|April 5}} || Documentary || The documentary ''{{w|Do You Trust This Computer?}}'', directed by {{w|Chris Paine}}, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as {{w|Ray Kurzweil}}, {{w|Elon Musk}} and {{w|Jonathan Nolan}}.
|-
| 2019 || {{dts|June 7}} || Fictional portrayal || The movie ''{{w|I Am Mother}}'' is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".<ref>{{cite web|url = https://slatestarcodex.com/2019/06/05/open-thread-129-25/|title = OPEN THREAD 129.25|date = June 8, 2019|accessdate = August 18, 2019}}</ref>
|}
===What the timeline is still missing===
* TODO add Norbert Wiener [http://lesswrong.com/lw/i2g/norbert_wieners_paper_some_moral_and_technical/] (via anonymous)
* The Matrix
* maybe more at [https://wiki.lesswrong.com/wiki/History_of_AI_risk_thought]
* https://en.wikipedia.org/wiki/With_Folded_Hands
* more AI box results at [http://lesswrong.com/lw/6dr/discussion_yudkowskys_actual_accomplishments/4eva] but unfortunately no dates
* stuff in [http://lesswrong.com/lw/bd6/ai_risk_opportunity_a_timeline_of_early_ideas_and/] and [http://lesswrong.com/lw/b0v/ai_risk_and_opportunity_humanitys_efforts_so_far/]
* AI alignment prize
* Translations of ''Superintelligence''?
* universal prior/distant superintelligences stuff
* https://en.wikipedia.org/wiki/Do_You_Trust_This_Computer%3F
* Steven Pinker?
* when did the different [https://causeprioritization.org/Template_for_views_on_AI_safety#approaches-to-alignment approaches to alignment] come along?