
Timeline of AI safety


This is a timeline of AI safety.


Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
1965 | | | I. J. Good originates the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
2000 | April | | Bill Joy's article "Why The Future Doesn't Need Us" is published in Wired.
2000 | July 27 | | Machine Intelligence Research Institute is founded as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would remain in use during 2000–2006 and change in 2007.[1]:3[2]
2002 | March 8 | AI box | The first AI box experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[3]
2002 | July 4–5 | AI box | The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.[4]
2003 | | | Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.[5]
2005 | | | The Future of Humanity Institute is founded.[6]
2005 | August 21 | AI box | The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.[7]
2009 | December 11 | | The third edition of Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
2013 | October 1 | | Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is published. The book discusses risks from human-level or superhuman artificial intelligence.
2014 | March–May | Influence | Future of Life Institute (FLI) is founded.[8]
2014 | July–September | | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.
2014 | October 22–24 | | During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".[9][10]
2014 | December 2 | | In an interview with the BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.[11]
2015 | | | Daniel Dewey joins the Open Philanthropy Project.[12] He begins as, or would later become, Open Phil's program officer for potential risks from advanced artificial intelligence.
2015 | January | | The Open Letter on Artificial Intelligence, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
2015 | January 2–5 | Conference | The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.[15] Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" of when top academics begin to focus on AI risk.[16]
2015 | January 22–27 | | Tim Urban publishes a two-part series of blog posts about superhuman AI on Wait But Why.[17][18]
2015 | January 28 | | During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.[13][14]
2015 | October | | The Open Philanthropy Project first publishes its page on AI timelines.[19]
2015 | December | | The Leverhulme Centre for the Future of Intelligence launches around this time.[20]
2016 | April 7 | | 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".[21][22]
2016 | June 21 | | "Concrete Problems in AI Safety" is submitted to the arXiv.[29]
2016 | August | | The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".[23]
2017 | April | | The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.[24][25]
2017 | April 6 | | 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".[26][27]
2017 | June 14 | | 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.[28]
2017 | July 23 | | During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".[30]


Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history.

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  2. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  3. "SL4: By Thread". Retrieved July 1, 2017. 
  4. "SL4: By Thread". Retrieved July 1, 2017. 
  5. "Ethical Issues In Advanced Artificial Intelligence". Retrieved July 25, 2017. 
  6. "About". Oxford Martin School. Retrieved July 25, 2017. The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School). 
  7. "SL4: By Thread". Retrieved July 1, 2017. 
  8. Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017. As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. 
  9. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  10. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  11. "Stephen Hawking warns artificial intelligence could end mankind". BBC News. December 2, 2014. Retrieved July 25, 2017. 
  12. "Daniel Dewey". Open Philanthropy Project. Retrieved July 25, 2017. 
  13. "Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. • r/IAmA". reddit. Retrieved July 25, 2017. 
  14. Stuart Dredge (January 29, 2015). "Artificial intelligence will become strong enough to be a concern, says Bill Gates". The Guardian. Retrieved July 25, 2017. 
  15. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  16. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  17. "The Artificial Intelligence Revolution: Part 1". Wait But Why. January 22, 2017. Retrieved July 25, 2017. 
  18. "The Artificial Intelligence Revolution: Part 2". Wait But Why. January 27, 2015. Retrieved July 25, 2017. 
  19. "What Do We Know about AI Timelines?". Open Philanthropy Project. Retrieved July 25, 2017. 
  20. "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. December 3, 2015. Retrieved July 26, 2017. 
  21. "How and why to use your career to make artificial intelligence safer". 80,000 Hours. April 7, 2016. Retrieved July 25, 2017. 
  22. "Risks posed by artificial intelligence". 80,000 Hours. 
  23. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017. 
  24. "May 2017 Newsletter". Machine Intelligence Research Institute. May 10, 2017. Retrieved July 25, 2017. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere. 
  25. "Update on Effective Altruism Funds". Effective Altruism Forum. April 20, 2017. Retrieved July 25, 2017. 
  26. "Positively shaping the development of artificial intelligence". 80,000 Hours. Retrieved July 25, 2017. 
  27. "Completely new article on the pros/cons of working on AI safety, and how to actually go about it". April 6, 2017. 
  28. "New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford's Future of Humanity Institute". 80,000 Hours. June 14.  Check date values in: |date= (help)
  29. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  30. Catherine Clifford (July 24, 2017). "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'". CNBC. Retrieved July 25, 2017.