Timeline of AI safety
|-
| 2005 || || Publication || ''{{w|The Singularity Is Near}}'' by inventor and futurist {{w|Ray Kurzweil}} is published. The book builds upon Kurzweil's previous books ''{{w|The Age of Intelligent Machines}}'' (1990) and ''{{w|The Age of Spiritual Machines}}'' (1999) but, unlike its predecessors, uses the term {{w|technological singularity}}, introduced by Vinge in 1993. Unlike Bill Joy, Kurzweil takes a very positive view of the impact of smarter-than-human AI and of the (in his view) upcoming technological singularity.
|-
| 2006 || {{dts|November}} || Project || {{w|Robin Hanson}} starts the blog ''{{w|Overcoming Bias}}''.<ref>{{cite web |url=http://www.overcomingbias.com/bio |title=Overcoming Bias : Bio |accessdate=June 1, 2017}}</ref> Eliezer Yudkowsky's posts on ''Overcoming Bias'' would form the seed material for ''LessWrong'', which would grow into an important community for discussion related to AI safety.
|-
| 2008 || || Publication || {{w|Steve Omohundro}}'s paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by {{w|Nick Bostrom}} as part of his instrumental convergence thesis.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/Basic_AI_drives |title=Basic AI drives |website=Lesswrongwiki |accessdate=July 26, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2008 || {{dts|November}}–December || Outside review || The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place, with Hanson arguing for gradual AI progress and Yudkowsky for the possibility of a rapid, localized intelligence explosion. The blog posts from the debate would later be turned into an ebook by MIRI.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate |title=The Hanson-Yudkowsky AI-Foom Debate |website=Lesswrongwiki |accessdate=July 1, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6k1b |title=Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong |accessdate=July 15, 2017 |quote=Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2009 || {{dts|February}} || Project || Eliezer Yudkowsky starts ''LessWrong'', using his posts on ''Overcoming Bias'' as seed material.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/FAQ#Where_did_Less_Wrong_come_from.3F |title=FAQ - Lesswrongwiki |accessdate=June 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> On its 2009 accomplishments page, MIRI describes ''LessWrong'' as being "important to the Singularity Institute's work towards a beneficial Singularity in providing an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence". And: "Besides providing a home for an intellectual community dialoguing on rationality and decision theory, ''Less Wrong'' is also a key venue for SIAI recruitment. Many of the participants in SIAI's Visiting Fellows Program first discovered the organization through ''Less Wrong''."<ref name="siai_accomplishments_20110621">{{cite web |url=https://web.archive.org/web/20110621192259/http://singinst.org/achievements |title=Recent Singularity Institute Accomplishments |publisher=Singularity Institute for Artificial Intelligence |accessdate=July 6, 2017}}</ref>
|-
| 2009 || {{dts|December 11}} || Publication || The third edition of ''{{w|Artificial Intelligence: A Modern Approach}}'' by {{w|Stuart J. Russell}} and {{w|Peter Norvig}} is published. This edition is the first to mention Friendly AI and to cite Eliezer Yudkowsky.