Difference between revisions of "Timeline of AI safety"
From Timelines
Line 16: | Line 16:
| 2000 || {{dts|April}} || || {{w|Bill Joy}}'s article "{{w|Why The Future Doesn't Need Us}}" is published in ''[[w:Wired (magazine)|Wired]]''.
|-
| 2000 || {{dts|July 27}} || || [[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] (MIRI) is founded as the Singularity Institute for Artificial Intelligence (SIAI) by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.<ref>{{cite web |url=https://intelligence.org/files/2000-SIAI990.pdf |title=Form 990-EZ 2000 |accessdate=June 1, 2017 |quote=Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999.}}</ref>{{rp|3}}<ref>{{cite web |url=https://web.archive.org/web/20060704101132/http://www.singinst.org:80/about.html |title=About the Singularity Institute for Artificial Intelligence |accessdate=July 1, 2017 |quote=The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors.}}</ref>
|-
| 2002 || {{dts|March 8}} || AI box || The first [[wikipedia:AI box|AI box]] experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0203/index.html#3128 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2002 || {{dts|July 4}}–5 || AI box || The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0207/index.html#4689 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2002 || {{dts|October 31}} || || {{w|Bill Hibbard}}'s ''Super-Intelligent Machines'' is published.<ref>{{cite web |url=https://www.amazon.com/Super-Intelligent-Machines-International-Systems-Engineering/dp/0306473887/ |title=Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books |accessdate=July 26, 2017 |quote=Publisher: Springer; 2002 edition (October 31, 2002)}}</ref>
|-
| 2003 || || || Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.<ref>{{cite web |url=http://www.nickbostrom.com/ethics/ai.html |title=Ethical Issues In Advanced Artificial Intelligence |accessdate=July 25, 2017}}</ref>
|-
| 2005 || || || The {{w|Future of Humanity Institute}} (FHI) is founded.<ref>{{cite web |url=http://www.oxfordmartin.ox.ac.uk/research/programmes/future-humanity/ |publisher=Oxford Martin School |title=About |accessdate=July 25, 2017 |quote=The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School).}}</ref>
|-
| 2005 || {{dts|August 21}} || AI box || The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://sl4.org/archive/0508/index.html#12007 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2008 || || || {{w|Steve Omohundro}}'s paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by {{w|Nick Bostrom}} as part of his instrumental convergence thesis.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/Basic_AI_drives |title=Basic AI drives |website=Lesswrongwiki |accessdate=July 26, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2009 || {{dts|December 11}} || || The third edition of ''[[wikipedia:Artificial Intelligence: A Modern Approach|Artificial Intelligence: A Modern Approach]]'' by [[wikipedia:Stuart J. Russell|Stuart J. Russell]] and [[wikipedia:Peter Norvig|Peter Norvig]] is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
|-
| 2010 || || || {{w|DeepMind}} is founded by {{w|Demis Hassabis}}, {{w|Shane Legg}}, and {{w|Mustafa Suleyman}}.
|-
| 2011 || || || The Global Catastrophic Risk Institute (GCRI) is founded by {{w|Seth Baum}} and Tony Barrett.<ref>{{cite web |url=http://gcrinstitute.org/about/ |title=About |publisher=Global Catastrophic Risk Institute |accessdate=July 26, 2017 |quote=The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett.}}</ref>
|-
| 2011 || || || {{w|Google Brain}} is started by [[w:Jeff Dean (computer scientist)|Jeff Dean]], Greg Corrado, and {{w|Andrew Ng}}.
|-
| 2011 || {{dts|September}} || || The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.<ref>{{cite web |url=http://www.futuretech.ox.ac.uk/www.futuretech.ox.ac.uk/index.html |title=Welcome |publisher=Oxford Martin Programme on the Impacts of Future Technology |accessdate=July 26, 2017 |quote=The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.}}</ref>
|-
| 2013 || {{dts|July}} || || The Center for the Study of Existential Risk (CSER) launches.<ref>{{cite web |url=https://phys.org/news/2012-11-cambridge-technology-humans.html |title=Cambridge to study technology's risk to humans |author=Sylvia Hui |date=November 25, 2012 |accessdate=July 26, 2017 |quote=The university said Sunday the center's launch is planned next year.}}</ref><ref>{{cite web |url=https://web.archive.org/web/20131201030705/http://cser.org/ |title=Centre for the Study of Existential Risk}}</ref>
|-
| 2013 || {{dts|October 1}} || || ''[[w:Our Final Invention|Our Final Invention: Artificial Intelligence and the End of the Human Era]]'' by {{w|James Barrat}} is published. The book discusses risks from human-level or superhuman artificial intelligence.
|-
| 2014 || {{dts|January 26}} || || Google announces that it has acquired {{w|DeepMind}}.
|-
| 2014 || {{dts|March}}–May || Influence || [[wikipedia:Future of Life Institute|Future of Life Institute]] (FLI) is founded.<ref>{{cite web |url=http://lesswrong.com/lw/kcm/new_organization_future_of_life_institute_fli/ |title=New organization - Future of Life Institute (FLI) |author=Victoria Krakovna |accessdate=July 6, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]] |quote=As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.}}</ref>
|-
| 2014 || {{dts|July}}–September || || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published.
|-
| 2014 || {{dts|August}} || Project || The AI Impacts website launches.<ref>{{cite web |url=https://intelligence.org/2014/09/01/september-newsletter-2/ |title=MIRI's September Newsletter |publisher=Machine Intelligence Research Institute |date=September 1, 2014 |accessdate=July 15, 2017 |quote=Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts.}}</ref>
|-
| 2014 || {{dts|October 22}}–24 || || During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[wikipedia:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
Line 41: | Line 59:
|-
| 2015 || || || Daniel Dewey joins the Open Philanthropy Project.<ref>{{cite web |url=http://www.openphilanthropy.org/about/team/daniel-dewey |title=Daniel Dewey |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref> He begins as, or would later become, Open Phil's program officer for potential risks from advanced artificial intelligence.
|-
| 2015 || || || The Strategic Artificial Intelligence Research Centre launches around this time.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/ |author=Future of Humanity Institute - FHI |title=Strategic Artificial Intelligence Research Centre - Future of Humanity Institute |publisher=Future of Humanity Institute |accessdate=July 27, 2017}}</ref><ref>{{cite web |url=https://docs.google.com/document/d/16Te6HnZN2OEviYFA-42Tf9Pal_Idovtgr5Y1RGEPW_g/edit |title=Landscape of current work on potential risks from advanced AI |publisher=Google Docs |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|January}} || || The {{w|Open Letter on Artificial Intelligence}}, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
Line 49: | Line 69:
|-
| 2015 || {{dts|January 22}}–27 || || Tim Urban publishes on {{w|Wait But Why}} a two-part series of blog posts about superhuman AI.<ref>{{cite web |url=https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html |title=The Artificial Intelligence Revolution: Part 1 |publisher=Wait But Why |date=January 22, 2015 |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html |title=The Artificial Intelligence Revolution: Part 2 |publisher=Wait But Why |date=January 27, 2015 |accessdate=July 25, 2017}}</ref>
|-
| 2015 || {{dts|July 1}} || || The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/2015selection/ |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-safety-research/ |title=AI Safety Research - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref>
|-
| 2015 || {{dts|October}} || || The Open Philanthropy Project first publishes its page on AI timelines.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines |title=What Do We Know about AI Timelines? |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref>
|-
| 2015 || {{dts|December}} || || The {{w|Leverhulme Centre for the Future of Intelligence}} launches around this time.<ref>{{cite web |url=http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of |publisher=University of Cambridge |title=The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity |date=December 3, 2015 |accessdate=July 26, 2017}}</ref>
|-
| 2015 || {{dts|December 11}} || || {{w|OpenAI}} is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)<ref>{{cite web |url=https://www.nytimes.com/2015/12/12/science/artificial-intelligence-research-center-is-founded-by-silicon-valley-investors.html |date=December 11, 2015 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors |author=John Markoff |accessdate=July 26, 2017 |quote=The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.}}</ref><ref>{{cite web |url=https://blog.openai.com/introducing-openai/ |publisher=OpenAI Blog |title=Introducing OpenAI |date=December 11, 2015 |accessdate=July 26, 2017}}</ref>
|-
| 2016 || {{dts|April 7}} || || 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".<ref>{{cite web |url=https://80000hours.org/2016/04/why-and-how-to-use-your-career-to-make-artificial-intelligence-safe/ |title=How and why to use your career to make artificial intelligence safer |publisher=80,000 Hours |date=April 7, 2016 |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://web.archive.org/web/20160627024909/https://80000hours.org/problem-profiles/artificial-intelligence-risk/ |title=Risks posed by artificial intelligence |publisher=80,000 Hours}}</ref>
|-
| 2016 || {{dts|May 6}} || || Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.<ref>{{cite web |url=http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity |title=Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
|-
| 2016 || {{dts|June 21}} || || "Concrete Problems in AI Safety" is submitted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref>
|-
| 2016 || {{dts|August}} || || The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".<ref>{{cite web |url=http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |date=August 29, 2016 |publisher=Berkeley News |accessdate=July 26, 2017}}</ref>
|-
| 2016 || {{dts|September 28}} || || The {{w|Partnership on AI}} is publicly announced.
|-
| 2016 || {{dts|December 3}}, 12 || || A couple of posts are published on LessWrong by Center for Applied Rationality (CFAR) president Anna Salamon. The posts discuss CFAR's new focus on AI safety.<ref>{{cite web |url=http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/ |title=CFAR's new focus, and AI Safety - Less Wrong |accessdate=July 13, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ |title=Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong |accessdate=July 13, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2017 || {{dts|April}} || || The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.<ref>{{cite web |url=https://intelligence.org/2017/05/10/may-2017-newsletter/ |title=May 2017 Newsletter |publisher=Machine Intelligence Research Institute |date=May 10, 2017 |accessdate=July 25, 2017 |quote=Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ |title=Update on Effective Altruism Funds |publisher=Effective Altruism Forum |date=April 20, 2017 |accessdate=July 25, 2017}}</ref>
Line 63: | Line 95:
|-
| 2017 || {{dts|June 14}} || || 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.<ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1416435978438136 |title=New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford’s Future of Humanity Institute |date=June 14, 2017 |publisher=80,000 Hours}}</ref>
|-
| 2017 || {{dts|July 23}} || || During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".<ref>{{cite web |url=http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html |publisher=CNBC |title=Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible' |date=July 24, 2017 |author=Catherine Clifford |accessdate=July 25, 2017}}</ref>
Revision as of 22:54, 26 July 2017
This is a timeline of friendly artificial intelligence.
Big picture
| Time period | Development summary | More details |
|---|---|---|
Full timeline
| Year | Month and date | Event type | Details |
|---|---|---|---|
| 1965 | | | I. J. Good originates the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine". |
2014 | December 2 | In an interview with BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.[18] | |
2015 | Daniel Dewey joins the Open Philanthropy Project.[19] He begins as or would become Open Phil's program officer for potential risks from advanced artificial intelligence. | ||
2015 | The Strategic Artificial Intelligence Research Centre launches around this time.[20][21] | ||
2015 | January | The Open Letter on Artificial Intelligence, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published. | |
2015 | January 28 | During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.[22][23] | |
2015 | January 2–5 | Conference | The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.[24] Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" of when top academics begin to focus on AI risk.[25] |
2015 | January 22–27 | Tim Urban publishes on Wait But Why a two-part series of blog posts about superhuman AI.[26][27] | |
2015 | July 1 | The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.[28][29][30] | |
2015 | October | The Open Philanthropy Project first publishes its page on AI timelines.[31] | |
2015 | December | The Leverhulme Centre for the Future of Intelligence launches around this time.[32] | |
2015 | December 11 | OpenAI is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)[33][34] | |
2016 | April 7 | 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".[35][36] | |
2016 | May 6 | Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.[37] | |
2016 | June 21 | "Concrete Problems in AI Safety" is submitted to the arXiv.[38] | |
2016 | August | The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".[39] | |
2016 | September 28 | The Partnership on AI is publicly announced. | |
2016 | December 3, 12 | A couple of posts are published on LessWrong by Center for Applied Rationality (CFAR) president Anna Salamon. The posts discuss CFAR's new focus on AI safety.[40][41] | |
2017 | April | The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers working at institutions working to mitigate existential risk, including AI risk.[42][43] | |
2017 | April 6 | 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".[44][45] | |
2017 | June 14 | 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.[46] | |
2017 | July 23 | During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".[47] |
Meta information on the timeline
How the timeline was built
The initial version of the timeline was written by Issa Rice.
Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history of the timeline's Git repository.
Funding information for this timeline is available.
What the timeline is still missing
Timeline update strategy
See also
External links
- Funding information for AI risk on Vipul Naik's donations portal
References
- ↑ "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017.
Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999.
- ↑ "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017.
The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors.
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ "Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books". Retrieved July 26, 2017.
Publisher: Springer; 2002 edition (October 31, 2002)
- ↑ "Ethical Issues In Advanced Artificial Intelligence". Retrieved July 25, 2017.
- ↑ "About". Oxford Martin School. Retrieved July 25, 2017.
The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School).
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ "Basic AI drives". Lesswrongwiki. LessWrong. Retrieved July 26, 2017.
- ↑ "About". Global Catastrophic Risk Institute. Retrieved July 26, 2017.
The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett.
- ↑ "Welcome". Oxford Martin Programme on the Impacts of Future Technology. Retrieved July 26, 2017.
The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.
- ↑ Sylvia Hui (November 25, 2012). "Cambridge to study technology's risk to humans". Retrieved July 26, 2017.
The university said Sunday the center's launch is planned next year.
- ↑ "Centre for the Study of Existential Risk".
- ↑ Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017.
As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.
- ↑ "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017.
Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts.
- ↑ Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017.
- ↑ "AeroAstro Centennial Webcast". Retrieved July 25, 2017.
The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium
- ↑ "Stephen Hawking warns artificial intelligence could end mankind". BBC News. December 2, 2014. Retrieved July 25, 2017.
- ↑ "Daniel Dewey". Open Philanthropy Project. Retrieved July 25, 2017.
- ↑ Future of Humanity Institute - FHI. "Strategic Artificial Intelligence Research Centre - Future of Humanity Institute". Future of Humanity Institute. Retrieved July 27, 2017.
- ↑ "Landscape of current work on potential risks from advanced AI". Google Docs. Retrieved July 27, 2017.
- ↑ "Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. • r/IAmA". reddit. Retrieved July 25, 2017.
- ↑ Stuart Dredge (January 29, 2015). "Artificial intelligence will become strong enough to be a concern, says Bill Gates". The Guardian. Retrieved July 25, 2017.
- ↑ "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017.
- ↑ Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017.
- ↑ "The Artificial Intelligence Revolution: Part 1". Wait But Why. January 22, 2017. Retrieved July 25, 2017.
- ↑ "The Artificial Intelligence Revolution: Part 2". Wait But Why. January 27, 2015. Retrieved July 25, 2017.
- ↑ "Grants Timeline - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017.
- ↑ "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017.
- ↑ "AI Safety Research - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017.
- ↑ "What Do We Know about AI Timelines?". Open Philanthropy Project. Retrieved July 25, 2017.
- ↑ "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. December 3, 2015. Retrieved July 26, 2017.
- ↑ John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017.
The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.
- ↑ "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017.
- ↑ "How and why to use your career to make artificial intelligence safer". 80,000 Hours. April 7, 2016. Retrieved July 25, 2017.
- ↑ "Risks posed by artificial intelligence". 80,000 Hours.
- ↑ "Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity". Open Philanthropy Project. Retrieved July 27, 2017.
- ↑ "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017.
- ↑ "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017.
- ↑ "CFAR's new focus, and AI Safety - Less Wrong". LessWrong. Retrieved July 13, 2017.
- ↑ "Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong". LessWrong. Retrieved July 13, 2017.
- ↑ "May 2017 Newsletter". Machine Intelligence Research Institute. May 10, 2017. Retrieved July 25, 2017.
Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.
- ↑ "Update on Effective Altruism Funds". Effective Altruism Forum. April 20, 2017. Retrieved July 25, 2017.
- ↑ "Positively shaping the development of artificial intelligence". 80,000 Hours. Retrieved July 25, 2017.
- ↑ "Completely new article on the pros/cons of working on AI safety, and how to actually go about it". April 6, 2017.
- ↑ "New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford's Future of Humanity Institute". 80,000 Hours. June 14, 2017.
- ↑ Catherine Clifford (July 24, 2017). "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'". CNBC. Retrieved July 25, 2017.