Timeline of AI safety
 
| 1818 || || Fictional portrayal || The novel ''{{w|Frankenstein}}'' is published. ''Frankenstein'' pioneers the archetype of the artificial intelligence that turns against its creator, and is sometimes discussed in the context of an AI takeoff.<ref>{{cite web |url=http://www.retirementsingularity.com/seven-ways-frankenstein-relates-to-singularity/ |title=Seven Ways Frankenstein Relates to Singularity |author=Michael Nuschke |publisher=RetirementSingularity.com |date=October 10, 2011 |accessdate=July 27, 2017}}</ref><ref>{{cite web |url=http://www.mitchellhowe.com/QAIntellectualHistory.html |title=What is the intellectual history of the Singularity concept? |author=Mitchell Howe |year=2002 |accessdate=July 27, 2017 |quote=Bearing little resemblance to the campy motion pictures he would inspire, Dr. Frankenstein's monster was a highly intelligent being of great emotional depth, but who could not be loved because of his hideous appearance; for this, he vowed to take revenge on his creator.  The monster actually comes across as the most intelligent character in the novel, making Frankenstein perhaps the first work to touch on the core idea of the Singularity.}}</ref><ref>{{cite web |url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |date=August 9, 2014 |author=Alan Winfield |title=Artificial Intelligence will not turn into a Frankenstein monster |publisher=[[wikipedia:The Guardian|The Guardian]] |accessdate=July 27, 2017 |quote=From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong.}}</ref>
 
|-

| 1920 || || Fictional portrayal || The science fiction play ''{{w|R.U.R.}}'' is published. The play introduces the word "robot" to the English language and the plot contains a robot rebellion that leads to human extinction.
 
|-
 
| 1942 || {{dts|March}} || Fictional portrayal || The {{w|Three Laws of Robotics}} are introduced by {{w|Isaac Asimov}} in his short story "[[wikipedia:Runaround (story)|Runaround]]".
 
|-
 
| 1965 || || Publication || {{w|I. J. Good}} [[w:Existential risk from artificial general intelligence#History|originates]] the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
 
|-
 
| 1984 || {{dts|October 26}} || Fictional portrayal || The American science fiction film ''{{w|The Terminator}}'' is released. The film contains the first appearance of [[wikipedia:Skynet (Terminator)|Skynet]], a "neural net-based conscious group mind and artificial general intelligence" that "seeks to exterminate the human race in order to fulfill the mandates of its original coding".
 
|-
 
| 1993 || || Publication || {{w|Vernor Vinge}}'s article "The Coming Technological Singularity: How to Survive in the Post-Human Era" is published. The article popularizes the idea of an intelligence explosion.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/History_of_AI_risk_thought |title=History of AI risk thought |website=Lesswrongwiki |accessdate=July 28, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2000 || {{dts|April}} || Publication || {{w|Bill Joy}}'s article "{{w|Why The Future Doesn't Need Us}}" is published in ''[[w:Wired (magazine)|Wired]]''.

|-

| 2000 || {{dts|July 27}} || Organization || [[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] (MIRI) is founded as the Singularity Institute for Artificial Intelligence (SIAI) by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.<ref>{{cite web |url=https://intelligence.org/files/2000-SIAI990.pdf |title=Form 990-EZ 2000 |accessdate=June 1, 2017 |quote=Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999.}}</ref>{{rp|3}}<ref>{{cite web |url=https://web.archive.org/web/20060704101132/http://www.singinst.org:80/about.html |title=About the Singularity Institute for Artificial Intelligence |accessdate=July 1, 2017 |quote=The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors.}}</ref>
 
|-
 
| 2002 || {{dts|March 8}} || AI box || The first [[wikipedia:AI box|AI box]] experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0203/index.html#3128 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
 
|-
 
| 2002 || {{dts|July 4}}–5 || AI box || The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0207/index.html#4689 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
 
|-

| 2002 || {{dts|October 31}} || Publication || {{w|Bill Hibbard}}'s ''Super-Intelligent Machines'' is published.<ref>{{cite web |url=https://www.amazon.com/Super-Intelligent-Machines-International-Systems-Engineering/dp/0306473887/ |title=Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books |accessdate=July 26, 2017 |quote=Publisher: Springer; 2002 edition (October 31, 2002)}}</ref>
 
|-
 
| 2003 || || Publication || Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.<ref>{{cite web |url=http://www.nickbostrom.com/ethics/ai.html |title=Ethical Issues In Advanced Artificial Intelligence |accessdate=July 25, 2017}}</ref>
 
|-
 
| 2005 || || Organization || The {{w|Future of Humanity Institute}} (FHI) is founded.<ref>{{cite web |url=http://www.oxfordmartin.ox.ac.uk/research/programmes/future-humanity/ |publisher=Oxford Martin School |title=About |accessdate=July 25, 2017 |quote=The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School).}}</ref>
 
|-
 
| 2005 || {{dts|August 21}} || AI box || The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://sl4.org/archive/0508/index.html#12007 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
 
|-
 
| 2008 || || Publication || {{w|Steve Omohundro}}'s paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by {{w|Nick Bostrom}} as part of his instrumental convergence thesis.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/Basic_AI_drives |title=Basic AI drives |website=Lesswrongwiki |accessdate=July 26, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2008 || || Publication || ''[[wikipedia:Global Catastrophic Risks (book)|Global Catastrophic Risks]]'' is published. The book includes Eliezer Yudkowsky's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk".<ref>{{cite web |url=https://intelligence.org/files/AIPosNegFactor.pdf |title=AIPosNegFactor.pdf |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2009 || {{dts|December 11}} || Publication || The third edition of ''[[wikipedia:Artificial Intelligence: A Modern Approach|Artificial Intelligence: A Modern Approach]]'' by [[wikipedia:Stuart J. Russell|Stuart J. Russell]] and [[wikipedia:Peter Norvig|Peter Norvig]] is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
 
|-
 
| 2010 || || Organization || {{w|DeepMind}} is founded by {{w|Demis Hassabis}}, {{w|Shane Legg}}, and {{w|Mustafa Suleyman}}.
 
|-
 
| 2010 || || Organization || [[wikipedia:Vicarious (company)|Vicarious]] is founded by Scott Phoenix and Dileep George. The company "has publicly expressed some concern about potential risks from future AI development" and the founders are signatories on the FLI open letter.<ref name="landscape-current-work-potential-risks" />
 
|-
 
| 2011 || || Publication || Baum, Goertzel, and Goertzel's "How Long Until Human-Level AI? Results from an Expert Assessment" is published.<ref>{{cite web |url=http://sethbaum.com/ac/2011_AI-Experts.pdf |title=How Long Untill Human-Level AI - 2011_AI-Experts.pdf |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2011 || || Organization || The Global Catastrophic Risk Institute (GCRI) is founded by {{w|Seth Baum}} and Tony Barrett.<ref>{{cite web |url=http://gcrinstitute.org/about/ |title=About |publisher=Global Catastrophic Risk Institute |accessdate=July 26, 2017 |quote=The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett.}}</ref>
 
|-
 
| 2011 || || Organization || {{w|Google Brain}} is started by [[w:Jeff Dean (computer scientist)|Jeff Dean]], Greg Corrado, and {{w|Andrew Ng}}.
 
|-
 
| 2011–2013 || || || Sometime during this period, the Back of the Envelope Guide to Philanthropy, a website created by Gordon Irlam, includes prevention of "hostile artificial intelligence" as a top 10 philanthropic opportunity by impact.<ref>{{cite web |url=https://web.archive.org/web/20130518170309/http://beguide.org/ |title=Back of the Envelope Guide to Philanthropy |accessdate=July 28, 2017}}</ref><ref>{{cite web |url=https://meteuphoric.wordpress.com/2014/10/16/gordon-irlam-on-the-beguide/ |title=Gordon Irlam on the BEGuide |website=Meteuphoric |date=October 16, 2014 |publisher=WordPress.com |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2011 || {{dts|September}} || Organization || The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.<ref>{{cite web |url=http://www.futuretech.ox.ac.uk/www.futuretech.ox.ac.uk/index.html |title=Welcome |publisher=Oxford Martin Programme on the Impacts of Future Technology |accessdate=July 26, 2017 |quote=The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.}}</ref>

|-

| 2013 || || Publication || Luke Muehlhauser's book ''Facing the Intelligence Explosion'' is published.<ref>{{cite web |url=http://intelligenceexplosion.com/en/about/ |title=About |website=Facing the Intelligence Explosion |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2013 || {{dts|April 13}} || || MIRI publishes a strategy update on its blog. In the blog post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and shift its research to Friendly AI math research.<ref>{{cite web |url=https://intelligence.org/2013/04/13/miris-strategy-for-2013/ |title=MIRI's Strategy for 2013 |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=December 11, 2013 |accessdate=July 6, 2017}}</ref>
 
|-
 
| 2013 || {{dts|July}} || Organization || The Center for the Study of Existential Risk (CSER) launches.<ref>{{cite web |url=https://phys.org/news/2012-11-cambridge-technology-humans.html |title=Cambridge to study technology's risk to humans |author=Sylvia Hui |date=November 25, 2012 |accessdate=July 26, 2017 |quote=The university said Sunday the center's launch is planned next year.}}</ref><ref>{{cite web |url=https://web.archive.org/web/20131201030705/http://cser.org/ |title=Centre for the Study of Existential Risk}}</ref>
 
|-
 
| 2013 || {{dts|July}} || Organization || The Foundational Research Institute (FRI) is founded. Some of FRI's work discusses risks from artificial intelligence.<ref>{{cite web |url=https://foundational-research.org/transparency |title=Transparency |publisher=Foundational Research Institute |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2013 || {{dts|October 1}} || Publication || ''[[w:Our Final Invention|Our Final Invention: Artificial Intelligence and the End of the Human Era]]'' by {{w|James Barrat}} is published. The book discusses risks from human-level or superhuman artificial intelligence.
 
|-
 
| 2014 || || Publication || Müller and Bostrom's "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" is published.<ref>{{cite web |url=http://www.nickbostrom.com/papers/survey.pdf |title=Future Progress in Artificial Intelligence: A Survey of Expert Opinion - survey.pdf |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2014 || {{dts|January 26}} || || Google announces that it has acquired {{w|DeepMind}}.
 
|-
 
| 2014 || {{dts|March}}–May || Organization || [[wikipedia:Future of Life Institute|Future of Life Institute]] (FLI) is founded.<ref>{{cite web |url=http://lesswrong.com/lw/kcm/new_organization_future_of_life_institute_fli/ |title=New organization - Future of Life Institute (FLI) |author=Victoria Krakovna |accessdate=July 6, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]] |quote=As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.}}</ref>
 
|-
 
| 2014 || {{dts|July}}–September || Publication || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published.
 
|-
 
| 2014 || {{dts|August}} || Project || The AI Impacts website launches.<ref>{{cite web |url=https://intelligence.org/2014/09/01/september-newsletter-2/ |title=MIRI's September Newsletter |publisher=Machine Intelligence Research Institute |date=September 1, 2014 |accessdate=July 15, 2017 |quote=Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts.}}</ref>
 
|-
 
| 2014 || Fall || Project || The One Hundred Year Study on Artificial Intelligence (AI100) launches.<ref>{{cite web |url=https://ai100.stanford.edu/sites/default/files/ai100report10032016fnl_singles.pdf |title=One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel |date=September 2016 |author=Peter Stone |collaboration=AI100 Standing Committee and Study Panel |accessdate=July 27, 2017 |quote=The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a longterm investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.}}</ref>
 
|-
 
| 2014 || {{dts|October 22}}–24 || || During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[wikipedia:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
 
|-
 
| 2015 || || || Daniel Dewey joins the Open Philanthropy Project.<ref>{{cite web |url=http://www.openphilanthropy.org/about/team/daniel-dewey |title=Daniel Dewey |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref> He begins as, or would later become, Open Phil's program officer for potential risks from advanced artificial intelligence.
 
|-
 
| 2015 || || Organization || The Strategic Artificial Intelligence Research Centre launches around this time.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/ |author=Future of Humanity Institute - FHI |title=Strategic Artificial Intelligence Research Centre - Future of Humanity Institute |publisher=Future of Humanity Institute |accessdate=July 27, 2017}}</ref><ref name="landscape-current-work-potential-risks">{{cite web |url=https://docs.google.com/document/d/16Te6HnZN2OEviYFA-42Tf9Pal_Idovtgr5Y1RGEPW_g/edit |title=Landscape of current work on potential risks from advanced AI |publisher=Google Docs |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2015 || {{dts|January}} || Publication || The {{w|Open Letter on Artificial Intelligence}}, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
 
|-
 
| 2015 || {{dts|January 28}} || || During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.<ref>{{cite web |url=https://www.reddit.com/r/IAmA/comments/2tzjp7/hi_reddit_im_bill_gates_and_im_back_for_my_third/co3r3g8/ |publisher=reddit |title=Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. &bull; r/IAmA |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://www.theguardian.com/technology/2015/jan/29/artificial-intelligence-strong-concern-bill-gates |author=Stuart Dredge |date=January 29, 2015 |title=Artificial intelligence will become strong enough to be a concern, says Bill Gates |publisher=[[wikipedia:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref>
 
|-
 
| 2015 || {{dts|January 2}}–5 || Conference || ''The Future of AI: Opportunities and Challenges'', an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref> Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" when top academics begin to focus on AI risk.<ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref>
 
|-
 
| 2015 || {{dts|January 22}}–27 || Publication || Tim Urban publishes on {{w|Wait But Why}} a two-part series of blog posts about superhuman AI.<ref>{{cite web |url=https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html |title=The Artificial Intelligence Revolution: Part 1 |publisher=Wait But Why |date=January 22, 2015 |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html |title=The Artificial Intelligence Revolution: Part 2 |publisher=Wait But Why |date=January 27, 2015 |accessdate=July 25, 2017}}</ref>
 
|-
 
| 2015 || {{dts|February 25}} || || {{w|Sam Altman}}, president of [[wikipedia:Y Combinator (company)|Y Combinator]], publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".<ref>{{cite web |url=http://blog.samaltman.com/machine-intelligence-part-1 |title=Machine intelligence, part 1 |publisher=Sam Altman |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2015 || {{dts|June 4}} || || At {{w|Airbnb}}'s Open Air 2015 conference, {{w|Sam Altman}}, president of [[wikipedia:Y Combinator (company)|Y Combinator]], states his concern about advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.<ref>{{cite web |url=http://www.businessinsider.com/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6 |author=Matt Weinberger |date=June 4, 2015 |title=Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last |publisher=Business Insider |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2015 || {{dts|July 1}} || Grant || The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/2015selection/ |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-safety-research/ |title=AI Safety Research - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref>
 
|-
 
| 2015 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $1.2 million to the {{w|Future of Life Institute}}.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
| 2015 || {{dts|August}} || || The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.<ref>{{cite web |url=http://www.openphilanthropy.org/research/cause-reports/ai-risk |title=Potential Risks from Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2015 || {{dts|October}} || || The Open Philanthropy Project first publishes its page on AI timelines.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-timelines |title=What Do We Know about AI Timelines? |publisher=Open Philanthropy Project |accessdate=July 25, 2017}}</ref>
 
|-

| 2015 || {{dts|December}} || Organization || The {{w|Leverhulme Centre for the Future of Intelligence}} launches around this time.<ref>{{cite web |url=http://www.cam.ac.uk/research/news/the-future-of-intelligence-cambridge-university-launches-new-centre-to-study-ai-and-the-future-of |publisher=University of Cambridge |title=The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity |date=December 3, 2015 |accessdate=July 26, 2017}}</ref>

|-

| 2015 || {{dts|December 11}} || Organization || {{w|OpenAI}} is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)<ref>{{cite web |url=https://www.nytimes.com/2015/12/12/science/artificial-intelligence-research-center-is-founded-by-silicon-valley-investors.html |date=December 11, 2015 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors |author=John Markoff |accessdate=July 26, 2017 |quote=The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.}}</ref><ref>{{cite web |url=https://blog.openai.com/introducing-openai/ |publisher=OpenAI Blog |title=Introducing OpenAI |date=December 11, 2015 |accessdate=July 26, 2017}}</ref>

|-

| 2016 || {{dts|April 28}} || Publication || The Global Catastrophic Risks 2016 report is published. The report is a collaboration between the Global Priorities Project and the {{w|Global Challenges Foundation}}.<ref>{{cite web |url=http://globalprioritiesproject.org/2016/04/global-catastrophic-risks-2016/ |publisher=The Global Priorities Project |title=Global Catastrophic Risks 2016 |date=April 28, 2016 |accessdate=July 28, 2017}}</ref> The report includes discussion of risks from artificial general intelligence under "emerging risks".<ref>{{cite web |url=http://globalprioritiesproject.org/wp-content/uploads/2016/04/Global-Catastrophic-Risk-Annual-Report-2016-FINAL.pdf |title=Global-Catastrophic-Risk-Annual-Report-2016-FINAL.pdf |accessdate=July 28, 2017}}</ref><ref>{{cite web |url=https://gizmodo.com/these-are-the-most-serious-catastrophic-threats-faced-b-1773661869 |title=These Are the Most Serious Catastrophic Threats Faced by Humanity |author=George Dvorsky |publisher=Gizmodo |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2016 || {{dts|April 7}} || Publication || 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".<ref>{{cite web |url=https://80000hours.org/2016/04/why-and-how-to-use-your-career-to-make-artificial-intelligence-safe/ |title=How and why to use your career to make artificial intelligence safer |publisher=80,000 Hours |date=April 7, 2016 |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://web.archive.org/web/20160627024909/https://80000hours.org/problem-profiles/artificial-intelligence-risk/ |title=Risks posed by artificial intelligence |publisher=80,000 Hours}}</ref>
 
|-
 
| 2016 || {{dts|May}} || Publication || Luke Muehlhauser of the Open Philanthropy Project publishes "What should we learn from past AI forecasts?".<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts |title=What should we learn from past AI forecasts? |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2016 || {{dts|May 6}} || Publication || Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.<ref>{{cite web |url=http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity |title=Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2016 || {{dts|May 6}} || Publication || Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.<ref>{{cite web |url=http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence |title=Some Background on Our Views Regarding Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
 
|-
 
| 2016 || {{dts|June}} || Grant || The Open Philanthropy Project awards a grant of $264,525 to {{w|George Mason University}} for work by {{w|Robin Hanson}}.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
| 2016 || {{dts|June 21}} || Publication || "Concrete Problems in AI Safety" is submitted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref>
 
|-
 
| 2016 || {{dts|August}} || Organization || The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".<ref>{{cite web |url=http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |date=August 29, 2016 |publisher=Berkeley News |accessdate=July 26, 2017}}</ref>
 
|-
 
| 2016 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $5.6 million to the {{w|Center for Human-Compatible AI}}.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
| 2016 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $500,000 to the {{w|Machine Intelligence Research Institute}}.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
| 2016 || {{dts|August 24}} || || US president Barack Obama speaks to entrepreneur and MIT Media Lab director {{w|Joi Ito}} about AI risk.<ref>{{cite web |url=https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/ |title=Barack Obama Talks AI, Robo Cars, and the Future of the World |publisher=[[wikipedia:WIRED|WIRED]] |date=October 12, 2016 |author=Scott Dadich |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2016 || {{dts|September 28}} || Organization || The {{w|Partnership on AI}} is publicly announced.
 
|-
 
| 2016 || {{dts|October 12}} || Publication || Under the Obama Administration, the United States White House releases two reports, ''Preparing for the Future of Artificial Intelligence'' and ''National Artificial Intelligence Research and Development Strategic Plan''. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".<ref>{{cite web |url=https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |publisher=whitehouse.gov |title=The Administration's Report on the Future of Artificial Intelligence |date=October 12, 2016 |accessdate=July 28, 2017}}</ref><ref>{{cite web |url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy |date=December 21, 2016 |publisher=Harvard Business Review |title=The Obama Administration's Roadmap for AI Policy |accessdate=July 28, 2017}}</ref>
 
|-
 
| 2016 || {{dts|November}} || Grant || The Open Philanthropy Project awards a grant of $199,000 to the {{w|Electronic Frontier Foundation}} for work by {{w|Peter Eckersley}}.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
| 2016 || {{dts|December}} || Grant || The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
|-
| 2017 || {{dts|February 9}} || || The Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".<ref>{{cite web |url=https://app.effectivealtruism.org/funds/far-future |title=EA Funds |accessdate=July 27, 2017 |quote=In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/174/introducing_the_ea_funds/ |author=William MacAskill |title=Introducing the EA Funds |publisher=Effective Altruism Forum |date=February 9, 2017 |accessdate=July 27, 2017}}</ref>
+
| 2016 || {{dts|December 3}}, 12 || Publication || A couple of posts are published on LessWrong by Center for Applied Rationality (CFAR) president Anna Salamon. The posts discuss CFAR's new focus on AI safety.<ref>{{cite web |url=http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/ |title=CFAR's new focus, and AI Safety - Less Wrong |accessdate=July 13, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ |title=Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong |accessdate=July 13, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
|-
| 2017 || {{dts|March}} || || The Open Philanthropy Project awards a grant of $2.0 million to the {{w|Future of Humanity Institute}} for general support.<ref name="donations-portal-open-phil-ai-risk" />
+
| 2017 || || Publication || The Global Catastrophic Risks 2017 report is published.<ref>{{cite web |url=https://www.globalchallenges.org/en/our-work/annual-report |publisher=Global Challenges Foundation |title=Annual Report on Global Risks |accessdate=July 28, 2017}}</ref> The report discusses risks from artificial intelligence in a dedicated chapter.<ref>{{cite web |url=https://api.globalchallenges.org/static/files/Global%20Catastrophic%20Risks%202017.pdf |title=Global Catastrophic Risks 2017.pdf |accessdate=July 28, 2017}}</ref>
 
|-
 
|-
| 2017 || {{dts|March}} || || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk" />
+
| 2017 || || Publication || ''The Global Risks Report 2017'' is published by the {{w|World Economic Forum}}. The report contains a section titled "Assessing the Risk of Artificial Intelligence" under "Emerging Technologies".<ref>{{cite web |url=http://reports.weforum.org/global-risks-2017/acknowledgements/ |title=Acknowledgements |publisher=Global Risks Report 2017 |accessdate=July 28, 2017}}</ref>
 
|-
 
|-
| 2017 || {{dts|April}} || || The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.<ref>{{cite web |url=https://intelligence.org/2017/05/10/may-2017-newsletter/ |title=May 2017 Newsletter |publisher=Machine Intelligence Research Institute |date=May 10, 2017 |accessdate=July 25, 2017 |quote=Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ |title=Update on Effective Altruism Funds |publisher=Effective Altruism Forum |date=April 20, 2017 |accessdate=July 25, 2017}}</ref>
+
| 2017 || {{dts|February 9}} || Project || The Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".<ref>{{cite web |url=https://app.effectivealtruism.org/funds/far-future |title=EA Funds |accessdate=July 27, 2017 |quote=In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/174/introducing_the_ea_funds/ |author=William MacAskill |title=Introducing the EA Funds |publisher=Effective Altruism Forum |date=February 9, 2017 |accessdate=July 27, 2017}}</ref>
 
|-
 
|-
| 2017 || {{dts|April 6}} || || 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".<ref>{{cite web |url=https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/ |title=Positively shaping the development of artificial intelligence |publisher=80,000 Hours |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1341451772603224 |title=Completely new article on the pros/cons of working on AI safety, and how to actually go about it |date=April 6, 2017}}</ref>
+
| 2017 || {{dts|March}} || Grant || The Open Philanthropy Project awards a grant of $2.0 million to the {{w|Future of Humanity Institute}} for general support.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
|-
| 2017 || {{dts|May}} || || The Open Philanthropy Project awards a grant of $1.5 million to the {{w|UCLA School of Law}} for work on governance related to AI risk.<ref name="donations-portal-open-phil-ai-risk" />
+
| 2017 || {{dts|March}} || Grant || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk" />
 +
|-
 +
| 2017 || {{dts|April}} || Organization || The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.<ref>{{cite web |url=https://intelligence.org/2017/05/10/may-2017-newsletter/ |title=May 2017 Newsletter |publisher=Machine Intelligence Research Institute |date=May 10, 2017 |accessdate=July 25, 2017 |quote=Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ |title=Update on Effective Altruism Funds |publisher=Effective Altruism Forum |date=April 20, 2017 |accessdate=July 25, 2017}}</ref>
 +
|-
 +
| 2017 || {{dts|April 6}} || Publication || 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".<ref>{{cite web |url=https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/ |title=Positively shaping the development of artificial intelligence |publisher=80,000 Hours |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1341451772603224 |title=Completely new article on the pros/cons of working on AI safety, and how to actually go about it |date=April 6, 2017}}</ref>
 +
|-
 +
| 2017 || {{dts|May}} || Grant || The Open Philanthropy Project awards a grant of $1.5 million to the {{w|UCLA School of Law}} for work on governance related to AI risk.<ref name="donations-portal-open-phil-ai-risk" />
 
|-
 
|-
 
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the [[wikipedia:arXiv|arXiv]].<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref> Two researchers from AI Impacts are authors on the paper.<ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
 
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the [[wikipedia:arXiv|arXiv]].<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref> Two researchers from AI Impacts are authors on the paper.<ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
 
|-
 
|-
| 2017 || {{dts|June 14}} || || 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.<ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1416435978438136 |title=New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford’s Future of Humanity Institute |date=June 14, 2017 |publisher=80,000 Hours}}</ref>
+
| 2017 || {{dts|June 14}} || Publication || 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.<ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1416435978438136 |title=New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford’s Future of Humanity Institute |date=June 14, 2017 |publisher=80,000 Hours}}</ref>
 +
|-
 +
| 2017 || {{dts|July}} || Grant || The Open Philanthropy Project awards a grant of $2.4 million to the {{w|Montreal Institute for Learning Algorithms}}.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+risk |title=Open Philanthropy Project donations made (filtered to cause areas matching AI risk) |accessdate=July 27, 2017}}</ref>
 
|-
 
|-
| 2017 || {{dts|July}} || || The Open Philanthropy Project awards a grant of $2.4 million to the {{w|Montreal Institute for Learning Algorithms}}.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+risk |title=Open Philanthropy Project donations made (filtered to cause areas matching AI risk) |accessdate=July 27, 2017}}</ref>
+
| 2017 || {{dts|July 15}}–16 || || At the National Governors Association meeting in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.<ref>{{cite web |url=http://www.npr.org/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk |date=July 17, 2017 |title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk' |publisher=NPR.org |accessdate=July 28, 2017}}</ref>
 
|-
 
|-
 
| 2017 || {{dts|July 23}} || || During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways [&hellip;] pretty irresponsible".<ref>{{cite web |url=http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html |publisher=CNBC |title=Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible' |date=July 24, 2017 |author=Catherine Clifford |accessdate=July 25, 2017}}</ref>
 
| 2017 || {{dts|July 23}} || || During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways [&hellip;] pretty irresponsible".<ref>{{cite web |url=http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html |publisher=CNBC |title=Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible' |date=July 24, 2017 |author=Catherine Clifford |accessdate=July 25, 2017}}</ref>

Revision as of 19:04, 27 July 2017

This is a timeline of friendly artificial intelligence.

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
1630–1650 | | Fictional portrayal | The publication of the story of the Golem of Chełm dates to around this period. Wikipedia: "Golems are not intelligent, and if commanded to perform a task, they will perform the instructions literally. In many depictions Golems are inherently perfectly obedient. In its earliest known modern form, the Golem of Chełm became enormous and uncooperative. In one version of this story, the rabbi had to resort to trickery to deactivate it, whereupon it crumbled upon its creator and crushed him."
1818 | | Fictional portrayal | The novel Frankenstein is published. Frankenstein pioneers the archetype of the artificial intelligence that turns against its creator, and is sometimes discussed in the context of an AI takeoff.[1][2][3]
1920 | | Fictional portrayal | The science fiction play R.U.R. is published. The play introduces the word "robot" to the English language and the plot contains a robot rebellion that leads to human extinction.
1942 | March | Fictional portrayal | The Three Laws of Robotics are introduced by Isaac Asimov in his short story "Runaround".
1965 | | Publication | I. J. Good originates the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
1984 | October 26 | Fictional portrayal | The American science fiction film The Terminator is released. The film contains the first appearance of Skynet, a "neural net-based conscious group mind and artificial general intelligence" that "seeks to exterminate the human race in order to fulfill the mandates of its original coding".
1993 | | | Vernor Vinge's article "The Coming Technological Singularity: How to Survive in the Post-Human Era" is published. The article popularizes the idea of an intelligence explosion.[4]
2000 | April | Publication | Bill Joy's article "Why The Future Doesn't Need Us" is published in Wired.
2000 | July 27 | Organization | Machine Intelligence Research Institute (MIRI) is founded as the Singularity Institute for Artificial Intelligence (SIAI) by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[5]:3[6]
2002 | March 8 | AI box | The first AI box experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[7]
2002 | July 4–5 | AI box | The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.[8]
2002 | October 31 | Publication | Bill Hibbard's Super-Intelligent Machines is published.[9]
2003 | | Publication | Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.[10]
2005 | | Organization | The Future of Humanity Institute (FHI) is founded.[11]
2005 | August 21 | AI box | The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.[12]
2008 | | Publication | Steve Omohundro's paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by Nick Bostrom as part of his instrumental convergence thesis.[13]
2008 | | Publication | Global Catastrophic Risks is published. The book includes Eliezer Yudkowsky's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk".[14]
2009 | December 11 | Publication | The third edition of Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
2010 | | Organization | DeepMind is founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman.
2010 | | Organization | Vicarious is founded by Scott Phoenix and Dileep George. The company "has publicly expressed some concern about potential risks from future AI development" and the founders are signatories on the FLI open letter.[15]
2011 | | Publication | Baum, Goertzel, and Goertzel's "How Long Until Human-Level AI? Results from an Expert Assessment" is published.[16]
2011 | | Organization | The Global Catastrophic Risk Institute (GCRI) is founded by Seth Baum and Tony Barrett.[17]
2011 | | Organization | Google Brain is started by Jeff Dean, Greg Corrado, and Andrew Ng.
2011–2013 | | | Sometime during this period, the Back of the Envelope Guide to Philanthropy, a website created by Gordon Irlam, includes prevention of "hostile artificial intelligence" as a top 10 philanthropic opportunity by impact.[18][19]
2011 | September | Organization | The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.[20]
2013 | | Publication | Luke Muehlhauser's book Facing the Intelligence Explosion is published.[21]
2013 | April 13 | | MIRI publishes a strategy update on its blog. In the post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and to shift its research toward Friendly AI math.[22]
2013 | July | Organization | The Center for the Study of Existential Risk (CSER) launches.[23][24]
2013 | July | Organization | The Foundational Research Institute (FRI) is founded. Some of FRI's work discusses risks from artificial intelligence.[25]
2013 | October 1 | Publication | Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is published. The book discusses risks from human-level or superhuman artificial intelligence.
2014 | | Publication | Müller and Bostrom's "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" is published.[26]
2014 | January 26 | | Google announces that it has acquired DeepMind.
2014 | March–May | Organization | Future of Life Institute (FLI) is founded.[27]
2014 | July–September | Publication | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.
2014 | August | Project | The AI Impacts website launches.[28]
2014 | Fall | Project | The One Hundred Year Study on Artificial Intelligence (AI100) launches.[29]
2014 | October 22–24 | | During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".[30][31]
2014 | December 2 | | In an interview with the BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.[32]
2015 | | | Daniel Dewey joins the Open Philanthropy Project.[33] He begins as, or would later become, Open Phil's program officer for potential risks from advanced artificial intelligence.
2015 | | Organization | The Strategic Artificial Intelligence Research Centre launches around this time.[34][15]
2015 | January | Publication | The Open Letter on Artificial Intelligence, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
2015 | January 2–5 | Conference | The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.[37] Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" at which top academics began to focus on AI risk.[38]
2015 | January 22–27 | Publication | Tim Urban publishes a two-part series of blog posts about superhuman AI on Wait But Why.[39][40]
2015 | January 28 | | During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.[35][36]
2015 | February 25 | | Sam Altman, president of Y Combinator, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[41]
2015 | June 4 | | At Airbnb's Open Air 2015 conference, Sam Altman, president of Y Combinator, states his concern about advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[42]
2015 | July 1 | Grant | The Future of Life Institute's grant recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.[43][44][45]
2015 | August | Grant | The Open Philanthropy Project awards a grant of $1.2 million to the Future of Life Institute.[46]
2015 | August | | The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.[47]
2015 | October | | The Open Philanthropy Project first publishes its page on AI timelines.[48]
2015 | December | Organization | The Leverhulme Centre for the Future of Intelligence launches around this time.[49]
2015 | December 11 | Organization | OpenAI is announced to the public. (News articles from this period suggest that OpenAI began operating sometime after this date.)[50][51]
2016 | April 7 | Publication | 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".[55][56]
2016 | April 28 | Publication | The Global Catastrophic Risks 2016 report is published. The report is a collaboration between the Global Priorities Project and the Global Challenges Foundation.[52] The report includes discussion of risks from artificial general intelligence under "emerging risks".[53][54]
2016 | May | Publication | Luke Muehlhauser of the Open Philanthropy Project publishes "What should we learn from past AI forecasts?".[57]
2016 | May 6 | Publication | Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.[58]
2016 | May 6 | Publication | Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.[59]
2016 | June | Grant | The Open Philanthropy Project awards a grant of $264,525 to George Mason University for work by Robin Hanson.[46]
2016 | June 21 | Publication | "Concrete Problems in AI Safety" is submitted to the arXiv.[60]
2016 | August | Organization | The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".[61]
2016 | August | Grant | The Open Philanthropy Project awards a grant of $5.6 million to the Center for Human-Compatible AI.[46]
2016 | August | Grant | The Open Philanthropy Project awards a grant of $500,000 to the Machine Intelligence Research Institute.[46]
2016 | August 24 | | US president Barack Obama speaks to entrepreneur and MIT Media Lab director Joi Ito about AI risk.[62]
2016 | September 28 | Organization | The Partnership on AI is publicly announced.
2016 | October 12 | Publication | Under the Obama Administration, the United States White House releases two reports, Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".[63][64]
2016 | November | Grant | The Open Philanthropy Project awards a grant of $199,000 to the Electronic Frontier Foundation for work by Peter Eckersley.[46]
2016 | December | Grant | The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.[46]
2016 | December 3, 12 | Publication | A couple of posts are published on LessWrong by Center for Applied Rationality (CFAR) president Anna Salamon. The posts discuss CFAR's new focus on AI safety.[65][66]
2017 | | Publication | The Global Catastrophic Risks 2017 report is published.[67] The report discusses risks from artificial intelligence in a dedicated chapter.[68]
2017 | | Publication | The Global Risks Report 2017 is published by the World Economic Forum. The report contains a section titled "Assessing the Risk of Artificial Intelligence" under "Emerging Technologies".[69]
2017 | February 9 | Project | The Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".[70][71]
2017 | March | Grant | The Open Philanthropy Project awards a grant of $2.0 million to the Future of Humanity Institute for general support.[46]
2017 | March | Grant | The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[46]
2017 | April | Organization | The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.[72][73]
2017 | April 6 | Publication | 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".[74][75]
2017 | May | Grant | The Open Philanthropy Project awards a grant of $1.5 million to the UCLA School of Law for work on governance related to AI risk.[46]
2017 | May 24 | Publication | "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv.[76] Two researchers from AI Impacts are authors on the paper.[77]
2017 | June 14 | Publication | 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.[78]
2017 | July | Grant | The Open Philanthropy Project awards a grant of $2.4 million to the Montreal Institute for Learning Algorithms.[46]
2017 | July 15–16 | | At the National Governors Association meeting in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.[79]
2017 | July 23 | | During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".[80]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history.
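For readers who want to see those incremental changes, the commit history can be inspected with ordinary Git tooling. The snippet below is a minimal sketch only: it assumes a local clone of the timeline's Git repository and a source file named timeline-of-ai-safety.mediawiki, both of which are hypothetical names not specified on this page.

```python
# Minimal sketch: list the incremental commits that touched the timeline source.
# Assumptions (hypothetical, not specified on this page): the repository is
# cloned at REPO_PATH, and the page source lives in PAGE_FILE.
import subprocess

REPO_PATH = "timelines-wiki"                   # hypothetical clone location
PAGE_FILE = "timeline-of-ai-safety.mediawiki"  # hypothetical source file name

# "git log --oneline -- <file>" shows one line per commit affecting the file,
# which is the per-edit detail the bulk wiki revisions do not capture.
result = subprocess.run(
    ["git", "-C", REPO_PATH, "log", "--oneline", "--", PAGE_FILE],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```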

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. Michael Nuschke (October 10, 2011). "Seven Ways Frankenstein Relates to Singularity". RetirementSingularity.com. Retrieved July 27, 2017. 
  2. Mitchell Howe (2002). "What is the intellectual history of the Singularity concept?". Retrieved July 27, 2017. Bearing little resemblance to the campy motion pictures he would inspire, Dr. Frankenstein's monster was a highly intelligent being of great emotional depth, but who could not be loved because of his hideous appearance; for this, he vowed to take revenge on his creator. The monster actually comes across as the most intelligent character in the novel, making Frankenstein perhaps the first work to touch on the core idea of the Singularity. 
  3. Alan Winfield (August 9, 2014). "Artificial Intelligence will not turn into a Frankenstein monster". The Guardian. Retrieved July 27, 2017. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong. 
  4. "History of AI risk thought". Lesswrongwiki. LessWrong. Retrieved July 28, 2017. 
  5. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  6. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  7. "SL4: By Thread". Retrieved July 1, 2017. 
  8. "SL4: By Thread". Retrieved July 1, 2017. 
  9. "Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books". Retrieved July 26, 2017. Publisher: Springer; 2002 edition (October 31, 2002) 
  10. "Ethical Issues In Advanced Artificial Intelligence". Retrieved July 25, 2017. 
  11. "About". Oxford Martin School. Retrieved July 25, 2017. The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School). 
  12. "SL4: By Thread". Retrieved July 1, 2017. 
  13. "Basic AI drives". Lesswrongwiki. LessWrong. Retrieved July 26, 2017. 
  14. "AIPosNegFactor.pdf" (PDF). Retrieved July 27, 2017. 
  15. "Landscape of current work on potential risks from advanced AI". Google Docs. Retrieved July 27, 2017. 
  16. "How Long Untill Human-Level AI - 2011_AI-Experts.pdf" (PDF). Retrieved July 28, 2017. 
  17. "About". Global Catastrophic Risk Institute. Retrieved July 26, 2017. The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett. 
  18. "Back of the Envelope Guide to Philanthropy". Retrieved July 28, 2017. 
  19. "Gordon Irlam on the BEGuide". Meteuphoric. WordPress.com. October 16, 2014. Retrieved July 28, 2017. 
  20. "Welcome". Oxford Martin Programme on the Impacts of Future Technology. Retrieved July 26, 2017. The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University. 
  21. "About". Facing the Intelligence Explosion. Retrieved July 27, 2017. 
  22. Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  23. Sylvia Hui (November 25, 2012). "Cambridge to study technology's risk to humans". Retrieved July 26, 2017. The university said Sunday the center's launch is planned next year. 
  24. "Centre for the Study of Existential Risk". 
  25. "Transparency". Foundational Research Institute. Retrieved July 27, 2017. 
  26. "Future Progress in Artificial Intelligence: A Survey of Expert Opinion - survey.pdf" (PDF). Retrieved July 28, 2017. 
  27. Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017. As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. 
  28. "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017. Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts. 
  29. Peter Stone; et al. (AI100 Standing Committee and Study Panel) (September 2016). "One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel" (PDF). Retrieved July 27, 2017. The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a longterm investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. 
  30. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  31. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  32. "Stephen Hawking warns artificial intelligence could end mankind". BBC News. December 2, 2014. Retrieved July 25, 2017. 
  33. "Daniel Dewey". Open Philanthropy Project. Retrieved July 25, 2017. 
  34. Future of Humanity Institute - FHI. "Strategic Artificial Intelligence Research Centre - Future of Humanity Institute". Future of Humanity Institute. Retrieved July 27, 2017. 
  35. "Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. • r/IAmA". reddit. Retrieved July 25, 2017. 
  36. Stuart Dredge (January 29, 2015). "Artificial intelligence will become strong enough to be a concern, says Bill Gates". The Guardian. Retrieved July 25, 2017. 
  37. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  38. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  39. "The Artificial Intelligence Revolution: Part 1". Wait But Why. January 22, 2017. Retrieved July 25, 2017. 
  40. "The Artificial Intelligence Revolution: Part 2". Wait But Why. January 27, 2015. Retrieved July 25, 2017. 
  41. "Machine intelligence, part 1". Sam Altman. Retrieved July 27, 2017. 
  42. Matt Weinberger (June 4, 2015). "Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last". Business Insider. Retrieved July 27, 2017. 
  43. "Grants Timeline - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  44. "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  45. "AI Safety Research - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  46. "Open Philanthropy Project donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017. 
  47. "Potential Risks from Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  48. "What Do We Know about AI Timelines?". Open Philanthropy Project. Retrieved July 25, 2017. 
  49. "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. December 3, 2015. Retrieved July 26, 2017. 
  50. John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017. The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco. 
  51. "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017. 
  52. "Global Catastrophic Risks 2016". The Global Priorities Project. April 28, 2016. Retrieved July 28, 2017. 
  53. "Global-Catastrophic-Risk-Annual-Report-2016-FINAL.pdf" (PDF). Retrieved July 28, 2017. 
  54. George Dvorsky. "These Are the Most Serious Catastrophic Threats Faced by Humanity". Gizmodo. Retrieved July 28, 2017. 
  55. "How and why to use your career to make artificial intelligence safer". 80,000 Hours. April 7, 2016. Retrieved July 25, 2017. 
  56. "Risks posed by artificial intelligence". 80,000 Hours. 
  57. "What should we learn from past AI forecasts?". Open Philanthropy Project. Retrieved July 27, 2017. 
  58. "Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity". Open Philanthropy Project. Retrieved July 27, 2017. 
  59. "Some Background on Our Views Regarding Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  60. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  61. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017. 
  62. Scott Dadich (October 12, 2016). "Barack Obama Talks AI, Robo Cars, and the Future of the World". WIRED. Retrieved July 28, 2017. 
  63. "The Administration's Report on the Future of Artificial Intelligence". whitehouse.gov. October 12, 2016. Retrieved July 28, 2017. 
  64. "The Obama Administration's Roadmap for AI Policy". Harvard Business Review. December 21, 2016. Retrieved July 28, 2017. 
  65. "CFAR's new focus, and AI Safety - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  66. "Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  67. "Annual Report on Global Risks". Global Challenges Foundation. Retrieved July 28, 2017. 
  68. "Global Catastrophic Risks 2017.pdf" (PDF). Retrieved July 28, 2017. 
  69. "Acknowledgements". Global Risks Report 2017. Retrieved July 28, 2017. 
  70. "EA Funds". Retrieved July 27, 2017. In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence. 
  71. William MacAskill (February 9, 2017). "Introducing the EA Funds". Effective Altruism Forum. Retrieved July 27, 2017. 
  72. "May 2017 Newsletter". Machine Intelligence Research Institute. May 10, 2017. Retrieved July 25, 2017. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere. 
  73. "Update on Effective Altruism Funds". Effective Altruism Forum. April 20, 2017. Retrieved July 25, 2017. 
  74. "Positively shaping the development of artificial intelligence". 80,000 Hours. Retrieved July 25, 2017. 
  75. "Completely new article on the pros/cons of working on AI safety, and how to actually go about it". April 6, 2017. 
  76. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  77. "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017. 
  78. "New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford's Future of Humanity Institute". 80,000 Hours. June 14, 2017. 
  79. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. July 17, 2017. Retrieved July 28, 2017. 
  80. Catherine Clifford (July 24, 2017). "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'". CNBC. Retrieved July 25, 2017.