Timeline of AI safety

This is a timeline of AI safety. AI safety is the field focused on reducing risks from artificial intelligence (AI).[1][2]

Big picture

Overall summary

Time period | Development summary | More details
Until 1950 | Fictional portrayals only | Most discussion of AI safety is in the form of fictional portrayals. It warns of the risks of robots who, through either stupidity or lack of goal alignment, no longer remain under the control of humans.
1950 to 2000 | Scientific speculation + fictional portrayals | During this period, discussion of AI safety moves from merely being a topic of fiction to one that scientists who study technological trends start talking about. The era sees commentary by I. J. Good, Vernor Vinge, and Bill Joy.
2000 to 2012 | Birth of AI safety organizations, close connections with transhumanism | This period sees the creation of the Singularity Institute for Artificial Intelligence (SIAI) (which would later become the Machine Intelligence Research Institute (MIRI)) and the evolution of its mission from creating friendly AI to reducing the risk of unfriendly AI. The Future of Humanity Institute (FHI) and Global Catastrophic Risk Institute (GCRI) are also founded. AI safety work during this time is closely tied to transhumanism and has close connections with techno-utopianism. Peter Thiel and Jaan Tallinn are key funders of the early ecosystem.
2013 to present | Mainstreaming of AI safety, separation from transhumanism | SIAI changes name to MIRI, sells off the "Singularity" brand to Singularity University, grows considerably in size, and receives substantial funding. Superintelligence, the book by Nick Bostrom, is released. The Future of Life Institute (FLI) and OpenAI are started, and the latter grows considerably. Other new organizations founded include the Center for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the Center for Human-Compatible AI (CHAI), the Berkeley Existential Risk Initiative (BERI), Ought, and the Center for Security and Emerging Technology (CSET). OpenAI in particular becomes quite famous and influential. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates talk about the importance of AI safety and the risks of unfriendly AI. Key funders of this ecosystem include the Open Philanthropy Project and Elon Musk.

Highlights by year (2013 onward)

Year Highlights
2013 Research and outreach focused on forecasting and timelines continues. Connections with the nascent effective altruism movement strengthen. The Center for the Study of Existential Risk and the Foundational Research Institute launch.
2014 Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is published. The Future of Life Institute is founded and AI Impacts launches. AI safety gets more mainstream attention, including from Elon Musk, Stephen Hawking, and the fictional portrayal Ex Machina. While forecasting and timelines remain a focus of AI safety efforts, the effort shifts toward the technical AI safety agenda, with the launch of the Intelligent Agent Foundations Forum.
2015 AI safety continues to get more mainstream, with the founding of OpenAI (supported by Elon Musk and Sam Altman) and the Leverhulme Centre for the Future of Intelligence, the Open Letter on Artificial Intelligence, the Puerto Rico conference, and coverage on Wait But Why. This also appears to be the last year that Peter Thiel donates in the area.
2016 The Open Philanthropy Project (Open Phil) makes AI safety a focus area; it would ramp up giving in the area considerably starting around this time. The landmark paper "Concrete Problems in AI Safety" is published, and OpenAI's safety work picks up pace. The Center for Human-Compatible AI launches. The annual tradition of LessWrong posts providing an AI alignment literature review and charity comparison for the year begins. AI safety continues to get more mainstream, with the Partnership on AI and the Obama administration's efforts to understand the subject.
2017 Cryptocurrency prices rise dramatically, leading to a number of donations to MIRI from people who became wealthy through cryptocurrency. The AI safety funding and support landscape changes somewhat with the launch of the Berkeley Existential Risk Initiative (BERI) (and funding of its grants program by Jaan Tallinn) and the Effective Altruism Funds, specifically the Long-Term Future Fund. Open Phil makes several grants in AI safety, including a $30 million grant to OpenAI and a $3.75 million grant to MIRI. AI safety attracts dismissive commentary from Mark Zuckerberg, while Elon Musk continues to highlight its importance. Initiatives such as AI Watch and the AI Alignment Prize begin.
2018 Activity in the field of AI safety becomes more steady, in terms of both ongoing discussion (with the launch of the AI Alignment Newsletter) and funding (with structural changes to the Long-Term Future Fund to make it grant more regularly, the introduction of the annual Open Phil AI Fellowship grants, and more grantmaking by BERI). Near the end of the year, MIRI announces its nondisclosure-by-default policy.
2019 The Center for Security and Emerging Technology (CSET), which is focused on AI safety and other security risks, launches with a 5-year $55 million grant from Open Phil. Grantmaking from the Long-Term Future Fund picks up pace; BERI hands off the grantmaking it does with Jaan Tallinn's money to the Survival and Flourishing Fund (SFF). Open Phil begins using the Committee for Effective Altruism Support to decide grant amounts for some of its AI safety grants, including grants to MIRI.

Full timeline

Year Month and date Event type Details
1630–1650 Fictional portrayal The publication of the story of the Golem of Chełm dates to around this period. Wikipedia: "Golems are not intelligent, and if commanded to perform a task, they will perform the instructions literally. In many depictions Golems are inherently perfectly obedient. In its earliest known modern form, the Golem of Chełm became enormous and uncooperative. In one version of this story, the rabbi had to resort to trickery to deactivate it, whereupon it crumbled upon its creator and crushed him."
1818 Fictional portrayal The novel Frankenstein is published. Frankenstein pioneers the archetype of the artificial intelligence that turns against its creator, and is sometimes discussed in the context of an AI takeoff.[3][4][5]
1863 June Publication In Darwin among the Machines, Samuel Butler raises the possibility that intelligent machines will eventually supplant humans as the dominant form of life.[6]
1920 Fictional portrayal The science fiction play R.U.R. is published. The play introduces the word "robot" to the English language and the plot contains a robot rebellion that leads to human extinction.
1942 March Fictional portrayal The Three Laws of Robotics are introduced by Isaac Asimov in his short story "Runaround".
1947 July Fictional portrayal With Folded Hands, a novelette by Jack Williamson, is published. The novelette describes how advanced robots (humanoids) take over large parts of the world to fulfil their Prime Directive, which is to make humans happy.
1948 Publication In The general and logical theory of automata, John von Neumann articulates the idea of self-improving AI. Notable quote: "There is, however, a certain minimum level where this degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible."[6] He would expand on this idea further in 1949, coming close to articulating what is now called an "intelligence explosion."[6]
1950 Publication Alan Turing publishes Computing Machinery and Intelligence in the philosophy journal Mind. The paper introduces the concept of the Turing test, the simple idea of which is that a machine would be said to have achieved human-level intelligence if it can convince a human that it is human. The Turing test would become a key part of discussions around benchmarking artificial intelligence, and also enter popular culture over the next few decades.[7][8]
1960 May 6 Publication Norbert Wiener's article Some Moral and Technical Consequences of Automation is published.[9] In 2013, Jonah Sinick would note the similarities between the points raised in this article and the thinking of AI safety leader Eliezer Yudkowsky.[10]
1965 Publication I. J. Good originates the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
1966 Fictional portrayal The science fiction novel Colossus by British author Dennis Feltham Jones is published. In the novel, both the United States and the USSR develop supercomputers, called Colossus and Guardian respectively, that they connect with each other to share knowledge. The supercomputers coordinate with each other to become increasingly powerful, and humans belatedly try to regain control.[6]
1966 Publication In Other Worlds than Ours, Cecil Maxwell argues that nations would invest in building powerful intelligent machines and surrender decisionmaking to them, and any nation that succeeded would grow into a major world power. He writes: "It seems that, in the foreseeable future, the major nations of the world will have to face the alternative of surrendering national control to mechanical ministers, or being dominated by other nations which have already done this. Such a process will eventually lead to the domination of the whole Earth by a dictatorship of an unparalleled type — a single supreme central authority."[6]
1974 Publication In The Ignorance Explosion, Julius Lukasiewicz argues that it is very hard to predict the future after we have machine superintelligence.[6]
1977 Fictional portrayal The science fiction novel The Adolescence of P-1 by Thomas Joseph Ryan is published. It "tells the story of an intelligent worm that at first is merely able to learn to hack novel computer systems and use them to propagate itself, but later (1) has novel insights on how to improve its own intelligence, (2) develops convergent instrumental subgoals for self-preservation and resource acquisition, and (3) learns the ability to fake its own death so that it can grow its powers in secret and later engage in a "treacherous turn" against humans."[6]
1984 October 26 Fictional portrayal The American science fiction film The Terminator is released. The film contains the first appearance of Skynet, a "neural net-based conscious group mind and artificial general intelligence" that "seeks to exterminate the human race in order to fulfill the mandates of its original coding".
1985 Fictional portrayal In Robots and Empire, Isaac Asimov introduces the Zeroth Law of Robotics, which states: "A robot may not injure humanity, or through inaction, allow humanity to come to harm." This is analogous to, and must take precedence over, the First Law: "A robot may not injure a human being, or through inaction, allow a human being to come to harm." The law was self-programmed by the robot R. Daneel Olivaw based on the ideas of R. Giskard Reventlov. In particular, the Zeroth Law allows a robot to harm or allow harm to individual humans for the greater good of humanity.
1987 Publication In the article "A Question of Responsibility" in AI Magazine, Mitchell Waldrop introduces the term "machine ethics". He writes: "However, one thing that is apparent from the above discussion is that intelligent machines will embody values, assumptions, and purposes, whether their programmers consciously intend them to or not. Thus, as computers and robots become more and more intelligent, it becomes imperative that we think carefully and explicitly about what those built-in values are. Perhaps what we need is, in fact, a theory and practice of machine ethics, in the spirit of Asimov’s three laws of robotics."[11]
1993 Publication Vernor Vinge's article "The Coming Technological Singularity: How to Survive in the Post-Human Era" is published. The article popularizes the idea of an intelligence explosion.[12]
2000 April Publication Bill Joy's article "Why The Future Doesn't Need Us" is published in Wired.
2000 July 27 Organization Machine Intelligence Research Institute (MIRI) is founded as the Singularity Institute for Artificial Intelligence (SIAI) by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[13]:3[14]
2002 March 8 AI box The first AI box experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[15]
2002 July 4–5 AI box The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.[16]
2002 October 31 Publication Bill Hibbard's Super-Intelligent Machines is published.[17]
2003 Publication Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.[18]
2005 Organization The Future of Humanity Institute (FHI) is founded.[19]
2005 Conference The field of machine ethics is delineated at the Fall 2005 Symposium on Machine Ethics held by the Association for Advancement of Artificial Intelligence.[20]
2005 August 21 AI box The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.[21]
2005 Publication The Singularity is Near by inventor and futurist Ray Kurzweil is published. The book builds upon Kurzweil's previous books The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999), but unlike its predecessors, uses the term technological singularity introduced by Vinge in 1993. Unlike Bill Joy, Kurzweil takes a very positive view of the impact of smarter-than-human AI and the upcoming (in his view) technological singularity.
2006 November Robin Hanson starts Overcoming Bias.[22] Eliezer Yudkowsky's posts on Overcoming Bias would form seed material for LessWrong, which would grow to be an important community for discussion related to AI safety.
2008 Publication Steve Omohundro's paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by Nick Bostrom as part of his instrumental convergence thesis.[23]
2008 Publication Global Catastrophic Risks is published. The book includes Eliezer Yudkowsky's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk".[24]
2008 October 13 Publication Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen is published by Oxford University Press. The book advertises itself as "the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics."
2008 November–December Outside review The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.[25][26]
2009 February Project Eliezer Yudkowsky starts LessWrong using as seed material his posts on Overcoming Bias.[27] On the 2009 accomplishments page, MIRI describes LessWrong as being "important to the Singularity Institute's work towards a beneficial Singularity in providing an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence". And: "Besides providing a home for an intellectual community dialoguing on rationality and decision theory, Less Wrong is also a key venue for SIAI recruitment. Many of the participants in SIAI's Visiting Fellows Program first discovered the organization through Less Wrong."[28]
2009 December 11 Publication The third edition of Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
2010 Organization DeepMind is founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman. Legg had previously received the $10,000 Canadian Singularity Institute for Artificial Intelligence Prize.[29]
2010 Organization Vicarious is founded by Scott Phoenix and Dileep George. The company "has publicly expressed some concern about potential risks from future AI development" and the founders are signatories on the FLI open letter.[30]
2011 Publication Baum, Goertzel, and Goertzel's "How Long Until Human-Level AI? Results from an Expert Assessment" is published.[31]
2011 Organization The Global Catastrophic Risk Institute (GCRI) is founded by Seth Baum and Tony Barrett.[32]
2011 Organization Google Brain is started by Jeff Dean, Greg Corrado, and Andrew Ng.
2011–2013 Project Sometime during this period, the Back of the Envelope Guide to Philanthropy, a website created by Gordon Irlam, includes prevention of "hostile artificial intelligence" as a top 10 philanthropic opportunity by impact.[33][34]
2011 September Organization The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.[35]
2013 Publication Luke Muehlhauser's book Facing the Intelligence Explosion is published.[36]
2013 April 13 MIRI publishes an update on its strategy on its blog. In the blog post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and shift its research to Friendly AI math research.[37]
2013 July Organization The Center for the Study of Existential Risk (CSER) launches.[38][39]
2013 July Organization The Foundational Research Institute (FRI) is founded. Some of FRI's work discusses risks from artificial intelligence.[40]
2013 July 8 Publication Luke Muehlhauser's blog post Four Focus Areas of Effective Altruism is published. The four focus areas listed are poverty reduction, meta effective altruism, the long term future, and animal suffering. AI safety concerns fall under the "long term future" focus area.[41] This identification of focus areas would persist for several years, and would also be incorporated into the design of the Effective Altruism Funds in 2017. In particular, the blog post encapsulates the central position of AI safety in the then-nascent effective altruist movement.
2013 October 1 Publication Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is published. The book discusses risks from human-level or superhuman artificial intelligence.
2014 Publication Müller and Bostrom's "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" is published.[42]
2014 January 26 Google announces that it has acquired DeepMind. At the same time, it sets up an AI ethics board. DeepMind co-founders Shane Legg and Demis Hassabis, as well as AI safety funders Peter Thiel and Jaan Tallinn, are believed to have been influential in the process.[43][44]
2014 March–May Organization Future of Life Institute (FLI) is founded.[45]
2014 July–September Publication Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.
2014 August Project The AI Impacts website launches.[46]
2014 Fall Project The One Hundred Year Study on Artificial Intelligence (AI100) launches.[47]
2014 October 22–24 Opinion During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".[48][49]
2014 November 4 Project The Intelligent Agent Foundations Forum, run by MIRI, is launched.[50]
2014 November 5 Publication The book Ethical Artificial Intelligence by Bill Hibbard is released on arXiv. The book brings together ideas from Hibbard's past publications related to technical AI risk.[51]
2014 December 2 Opinion In an interview with BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.[52]
2014 December 16 Fictional portrayal The movie Ex Machina is released. The movie highlights the paperclip maximizer idea: it shows how a robot programmed to optimize for its own escape can callously damage human lives in the process. It also covers the ideas of the Turing test (a robot convincing a human that it has human-like qualities, even though the human knows that the robot is not human) and the AI box experiment (a robot convincing a human to release it from its "box", similar to the experiment proposed and carried out by Eliezer Yudkowsky). It leads to more public discussion of AI safety.[53][54][55][56]
2015 Daniel Dewey joins the Open Philanthropy Project.[57] He begins as or would become Open Phil's program officer for potential risks from advanced artificial intelligence.
2015 Organization The Strategic Artificial Intelligence Research Centre launches around this time.[58][30]
2015 January Publication The Open Letter on Artificial Intelligence, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
2015 January 2–5 Conference The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.[61] Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" of when top academics begin to focus on AI risk.[62]
2015 January 22–27 Publication Tim Urban publishes on Wait But Why a two-part series of blog posts about superhuman AI.[63][64]
2015 January 28 Opinion During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.[59][60]
2015 February 25 Opinion Sam Altman, president of Y Combinator, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[65]
2015 May 1 Publication The Wikipedia article on existential risk from artificial general intelligence is published.[66]
2015 June 4 Opinion At Airbnb's Open Air 2015 conference, Sam Altman, president of Y Combinator, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[67]
2015 June 17 and 21 Publication The Kindle edition of Artificial Superintelligence: A Futuristic Approach by Roman Yampolskiy is published. The paperback would be published on June 21.[68] Yampolskiy takes an AI safety engineering perspective, rather than a machine ethics perspective, to the problem of AI safety.[69]
2015 July 1 Grant The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.[70][71][72]
2015 August Grant The Open Philanthropy Project awards a grant of $1.2 million to the Future of Life Institute.[73]
2015 August Publication The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.[74]
2015 October Publication The Open Philanthropy Project first publishes its page on AI timelines.[75]
2015 December Organization The Leverhulme Centre for the Future of Intelligence launches around this time.[76]
2015 December 11 Organization OpenAI is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)[77][78]
2016 April 7 Publication 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".[82][83]
2016 April 28 Publication The Global Catastrophic Risks 2016 report is published. The report is a collaboration between the Global Priorities Project and the Global Challenges Foundation.[79] The report includes discussion of risks from artificial general intelligence under "emerging risks".[80][81]
2016 May Publication Luke Muehlhauser of the Open Philanthropy Project publishes "What should we learn from past AI forecasts?".[84]
2016 May 6 Publication Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.[85]
2016 May 6 Publication Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.[86]
2016 June Grant The Open Philanthropy Project awards a grant of $264,525 to George Mason University for work by Robin Hanson.[73]
2016 June 21 Publication "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the arXiv.[87] The paper would receive a shoutout from the Open Philanthropy Project.[88] It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
2016 August Organization The UC Berkeley Center for Human-Compatible Artificial Intelligence launches under the leadership of AI expert Stuart J. Russell (co-author with Peter Norvig of Artificial Intelligence: A Modern Approach). The focus of the center is "to ensure that AI systems are beneficial to humans".[89]
2016 August Grant The Open Philanthropy Project awards a grant of $5.6 million over two years to the newly formed Center for Human-Compatible AI at the University of California, Berkeley.[73]
2016 August Grant The Open Philanthropy Project awards a grant of $500,000 to the Machine Intelligence Research Institute.[73]
2016 August 24 US president Barack Obama speaks to entrepreneur and MIT Media Lab director Joi Ito about AI risk.[90]
2016 September 28 Organization The Partnership on AI is publicly announced.
2016 October 12 Publication Under the Obama Administration, the United States White House releases two reports, Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".[91][92]
2016 November Grant The Open Philanthropy Project awards a grant of $199,000 to the Electronic Frontier Foundation for work by Peter Eckersley.[73]
2016 December Grant The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.[73]
2016 December 3, 12 Publication Two posts are published on LessWrong by Center for Applied Rationality (CFAR) president Anna Salamon. The posts discuss CFAR's new focus on AI safety.[93][94]
2016 December 13 Publication The "2016 AI Risk Literature Review and Charity Comparison" is published on the Effective Altruism Forum. The lengthy blog post covers all the published work of prominent organizations focused on AI safety.[95]
2017 Publication The Global Catastrophic Risks 2017 report is published.[96] The report discusses risks from artificial intelligence in a dedicated chapter.[97]
2017 Publication The Global Risks Report 2017 is published by the World Economic Forum. The report contains a section titled "Assessing the Risk of Artificial Intelligence" under "Emerging Technologies".[98]
2017 February 9 Project The Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".[99][100]
2017 March Grant The Open Philanthropy Project awards a grant of $2.0 million to the Future of Humanity Institute for general support.[73]
2017 March Grant The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[73]
2017 April Organization The Berkeley Existential Risk Initiative (BERI) launches around this time (under the leadership of Andrew Critch, who previously helped found the Center for Applied Rationality) to assist researchers at institutions working to mitigate existential risk, including AI risk.[101][102]
2017 April 6 Publication 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".[103][104]
2017 May Grant The Open Philanthropy Project awards a grant of $1.5 million to the UCLA School of Law for work on governance related to AI risk.[73]
2017 May 24 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv.[105] Two researchers from AI Impacts are authors on the paper.[106]
2017 June 14 Publication 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.[107]
2017 July Grant The Open Philanthropy Project awards a grant of $2.4 million to the Montreal Institute for Learning Algorithms.[73]
2017 July Grant The Open Philanthropy Project awards a grant of about $300,000 to Yale University to support research into the global politics of artificial intelligence led by Allan Dafoe.[73]
2017 July Grant The Open Philanthropy Project awards a grant of about $400,000 to the Berkeley Existential Risk Initiative to support core functions of grantee, and to help them provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley.[73]
2017 July 15–16 Opinion At the National Governors Association in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.[108]
2017 July 23 Opinion During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".[109]
2017 October Grant The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".[110][111]
2017 October Project The first commit for AI Watch, a repository of organizations, people, and products in AI safety, is made on October 23.[112] Work on the web portal at aiwatch.issarice.com would begin the next day.[113]
2017 November 3 Project Zvi Mowshowitz and Vladimir Slepnev announce the AI Alignment Prize, a $5,000 prize funded by Paul Christiano for publicly posted work advancing AI alignment.[114] The prize would be discontinued after the fourth round (ending December 31, 2018) due to reduced participation.[115]
2017 December Grant Jaan Tallinn makes a donation of about $5 million to the Berkeley Existential Risk Initiative (BERI) Grants Program.[116]
2017 December 20 Publication The "2017 AI Safety Literature Review and Charity Comparison" is published. The lengthy blog post covers all the published work of prominent organizations focused on AI safety, and is a refresh of a similar post published a year ago.[117]
2018 February 28 Publication 80,000 Hours publishes a blog post, "A new recommended career path for effective altruists: China specialist", suggesting specialization in China as a career path for people in the effective altruist movement. China's likely leading role in the development of artificial intelligence is highlighted as particularly relevant to AI safety efforts.[118]
2018 April 5 Documentary The documentary Do You Trust This Computer?, directed by Chris Paine, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk and Jonathan Nolan.
2018 April 9 Newsletter The first issue of the weekly AI Alignment Newsletter, managed by Rohin Shah of the Center for Human-Compatible AI, is sent. Over the next two years, the team responsible for producing the newsletter would grow to four people, and the newsletter would also be produced in Chinese and in podcast form.[119][120][121]
2018 April 12 to 22 Conference The first AI Safety Camp is held in Gran Canaria.[122] The AI Safety Camp team runs about two camps a year.
2018 May Grant The Open Philanthropy Project announces the first set of grants for the Open Phil AI Fellowship, to 7 AI Fellows pursuing research relevant to AI risk. It also makes a grant of $525,000 to Ought and $100,000 to AI Impacts.[73]
2018 July Grant The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.[73]
2018 July 10 (beta), October 29 (out of beta) Project The team behind LessWrong 2.0 launches a beta for the AI Alignment Forum at AlignmentForum.org on July 10, as a successor to the Intelligent Agent Foundations Forum (IAFF) at agentfoundations.org.[123] On October 29, the Alignment Forum exits beta and becomes generally available.[124][125]
2018 August 14 Grant Nick Beckstead grants the Machine Intelligence Research Institute (MIRI) $488,994 from the Long-Term Future Fund. This is part of his last set of grants as fund manager; he would subsequently step down and the fund management would move to a different team.[126][127]
2018 September to October Grant During this period, the Berkeley Existential Risk Initiative (BERI) makes a number of grants to individuals working on projects related to AI safety.[128]
2018 November 22 Disclosure norms Nate Soares, executive director of MIRI, publishes MIRI's 2018 update post that announces MIRI's "nondisclosed-by-default" policy for most of its research.[129] The 2018 AI alignment literature review and charity comparison post would discuss the complications created by this policy for evaluating MIRI's research,[130] and so would the 2019 post.[131] In its 2019 fundraiser review, MIRI would mention the nondisclosure-by-default policy as one possible reason for it raising less money in its 2019 fundraiser.[132]
2018 November 29 Grant The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of grants: $40,000 to Machine Intelligence Research Institute, $10,000 to Ought, $21,000 to AI Summer School, and $4,500 to the AI Safety Unconference.[127]
2018 December 17 Publication The "2018 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2018. It continues an annual tradition of similar blog posts in 2016 and 2017.[130]
2019 January Grant The Open Philanthropy Project grants $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI’s collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).[73]
2019 January Grant The Open Philanthropy Project provides a founding grant for the Center for Security and Emerging Technology (CSET) at Georgetown University of $55 million over 5 years.[73]
2019 February Grant The Open Philanthropy Project grants $2,112,500 to the Machine Intelligence Research Institute (MIRI) over two years. This is part of the first batch of grants decided by the Committee for Effective Altruism Support, which will set "grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes."[73] Around the same time, BERI grants $600,000 to MIRI.[128]
2019 April 7 Grant The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 23 grants totaling $923,150. About half the grant money is to organizations or projects directly working in AI safety. Recipients include the Machine Intelligence Research Institute (MIRI), AI Safety Camp, Ought, and a number of individuals working on AI safety projects, including three in deconfusion research.[127]
2019 May Grant The Open Philanthropy Project announces the second class of the Open Phil AI Fellowship, with 8 machine learning researchers in the class, receiving a total of $2,325,000 in grants.[73]
2019 June 7 Fictional portrayal The movie I Am Mother is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".[133]
2019 August 25 Grantmaking by the Berkeley Existential Risk Initiative (BERI) using funds from Jaan Tallinn moves to the newly created Survival and Flourishing Fund (SFF).[134] BERI's grantmaking in this space had previously included AI safety organizations. However, as of April 2020, SFF's grants have not included any grants to organizations exclusively focused on AI safety, but rather have gone to organizations working on broader global catastrophic risks and other adjacent topics.
2019 August 30 Grant The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $415,697 to organizations and individuals. About half the grant money is to organizations or projects working in AI safety and related AI strategy, governance, and policy issues. With the exception of a grant to AI Safety Camp, all the other grants related to AI safety are to individuals.[135]
2019 October 8 Publication The book Human Compatible by Stuart J. Russell (co-author with Peter Norvig of Artificial Intelligence: A Modern Approach and head of the Center for Human-Compatible AI at UC Berkeley) is published by Viking Press. The book is reviewed by The Guardian[136] and interviews with the author are published by Vox[137] and TechCrunch.[138]
2019 November Grant The Open Philanthropy Project makes a $1 million grant to Ought, double the previous grant of $525,000.[73]
2019 November 6 Publication An "AI Alignment Research Overview" by Jacob Steinhardt (one of the co-authors of Concrete Problems in AI Safety) is published to LessWrong and the AI Alignment Forum.[139]
2019 November 21 Grant The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $466,000 to organizations and individuals. About a quarter of the grant money is to organizations and individuals working on AI safety. With the exception of a grant to AI Safety Camp, all the other grants related to AI safety are to individuals.[140]
2019 November 24 Toon Alfrink publishes on LessWrong a postmortem for RAISE, an attempt to build an online course for AI safety. The blog post explains the challenges with running RAISE and the reasons for eventually shutting it down.[141]
2019 December 18 Publication The "2019 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2019. It continues an annual tradition of similar blog posts in 2016, 2017, and 2018. One feature new to the document this year is the author's effort to make it easier for readers to jump to and focus on parts of the document most relevant to them, rather than read it beginning to end. To make this easier, the author ends each paragraph with a hashtag, and lists the hashtags at the beginning of the document.[131]
2020 January Publication Rohin Shah (the person who started the weekly AI Alignment Newsletter) publishes a blog post on LessWrong titled "AI Alignment 2018-19 Review" that he describes as "a review post of public work in AI alignment over 2019, with some inclusions from 2018."[142]
2020 February Grant The Open Philanthropy Project makes grants to AI safety organizations Ought ($1.59 million) and Machine Intelligence Research Institute ($7.7 million) with the money amount determined by the Committee for Effective Altruism Support (CEAS). Other organizations receiving money based on CEAS recommendations at around the same time are the Centre for Effective Altruism and 80,000 Hours, neither of which is primarily focused on AI safety.[73]
2020 March Publication The Precipice: Existential Risk and the Future of Humanity by Toby Ord (affiliated with the Future of Humanity Institute and with Oxford University) is published by Hachette Books. The book covers risks including artificial intelligence, biological risks, and climate change. The author appears on podcasts to talk about the book, for Future of Life Institute[143] and 80,000 Hours.[144]
2020 April 2–3 Publication Asya Bergal of AI Impacts publishes a series of interviews (done along with Robert Long) of people optimistic about AI safety, along with a summary post to LessWrong. Their one-sentence summary: "Relative optimism in AI often comes from the belief that AGI will be developed gradually, and problems will be fixed as they are found rather than neglected."[145][146]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit histories at the old location and the new location.

Funding information for this timeline is available.

What the timeline is still missing

  • The Matrix
  • maybe more at [1]
  • more AI box results at [2] but unfortunately no dates
  • stuff in [3] and [4]
  • siren/marketing worlds
  • TDT/UDT
  • Paul Christiano AI alignment blog. Also, more on Christiano's trajectory in AI safety
  • The launch of Ought
  • Translations of Superintelligence?
  • universal prior/distant superintelligences stuff
  • Steven Pinker?
  • AI summer school
  • when did the different approaches to alignment come along?
  • the name change from "friendly AI" to "AI safety" and "AI alignment" is probably worth adding, though this was gradual so kind of hard to pin down as an event. See also this comment.
  • CS 294-149: Safety and Control for Artificial General Intelligence (Fall 2018), taught by Andrew Critch and Stuart Russell
  • Median Group (insofar as some of their research is about AI)
  • Norms set by OpenAI's decision not to release the full GPT2 model, and some of the discussion this sparked

More detail would help for these rows:

  • Launch of AI Impacts (who launched it, what sort of stuff it covers)
  • Center for Security and Emerging Technology -- talk more about who's founding it, and the relation with AI safety and their approach to it.

Timeline update strategy

See also

Timelines of organizations working in AI safety

Other timelines related to AI

Other timelines about cause areas prioritized in effective altruism

External links

References

  1. Paul Christiano (November 19, 2016). "AI "safety" vs "control" vs "alignment"". AI Alignment. AI Alignment. Retrieved November 18, 2017. 
  2. Eliezer Yudkowsky (November 16, 2017). "Hero Licensing". LessWrong. Retrieved November 18, 2017. I'll mention as an aside that talk of "Friendly" AI has been going out of style where I'm from. We've started talking instead in terms of "aligning smarter-than-human AI with operators' goals," mostly because "AI alignment" smacks less of anthropomorphism than "friendliness." 
  3. Michael Nuschke (October 10, 2011). "Seven Ways Frankenstein Relates to Singularity". RetirementSingularity.com. Retrieved July 27, 2017. 
  4. Mitchell Howe (2002). "What is the intellectual history of the Singularity concept?". Retrieved July 27, 2017. Bearing little resemblance to the campy motion pictures he would inspire, Dr. Frankenstein's monster was a highly intelligent being of great emotional depth, but who could not be loved because of his hideous appearance; for this, he vowed to take revenge on his creator. The monster actually comes across as the most intelligent character in the novel, making Frankenstein perhaps the first work to touch on the core idea of the Singularity. 
  5. Alan Winfield (August 9, 2014). "Artificial Intelligence will not turn into a Frankenstein monster". The Guardian. Retrieved July 27, 2017. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong. 
  6. Muehlhauser, Luke (March 31, 2012). "AI Risk & Opportunity: A Timeline of Early Ideas and Arguments". LessWrong. Retrieved September 14, 2019. 
  7. Bradley, Peter. "Turing Test and Machine Intelligence". Consortium on Computer Science Instruction. 
  8. Muehlhauser, Luke (August 11, 2013). "What is AGI?". Machine Intelligence Research Institute. Retrieved September 8, 2019. 
  9. Wiener, Norbert (May 6, 1960). "Some Moral and Technical Consequences of Automation". Retrieved August 18, 2019. 
  10. Sinick, Jonah (July 20, 2013). "Norbert Wiener's paper "Some Moral and Technical Consequences of Automation"". LessWrong. Retrieved August 18, 2019. 
  11. Waldrop, Mitchell (Spring 1987). "A Question of Responsibility". AI Magazine. 8 (1): 28–39. doi:10.1609/aimag.v8i1.572. 
  12. "History of AI risk thought". Lesswrongwiki. LessWrong. Retrieved July 28, 2017. 
  13. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  14. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  15. "SL4: By Thread". Retrieved July 1, 2017. 
  16. "SL4: By Thread". Retrieved July 1, 2017. 
  17. "Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books". Retrieved July 26, 2017. Publisher: Springer; 2002 edition (October 31, 2002) 
  18. "Ethical Issues In Advanced Artificial Intelligence". Retrieved July 25, 2017. 
  19. "About". Oxford Martin School. Retrieved July 25, 2017. The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School). 
  20. "Papers from the 2005 AAAI Fall Symposium". Archived from the original on 2014-11-29. 
  21. "SL4: By Thread". Retrieved July 1, 2017. 
  22. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  23. "Basic AI drives". Lesswrongwiki. LessWrong. Retrieved July 26, 2017. 
  24. "AIPosNegFactor.pdf" (PDF). Retrieved July 27, 2017. 
  25. "The Hanson-Yudkowsky AI-Foom Debate". Lesswrongwiki. LessWrong. Retrieved July 1, 2017. 
  26. "Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved July 15, 2017. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. 
  27. "FAQ - Lesswrongwiki". LessWrong. Retrieved June 1, 2017. 
  28. "Recent Singularity Institute Accomplishments". Singularity Institute for Artificial Intelligence. Retrieved July 6, 2017. 
  29. Legg, Shane. "About". Retrieved September 15, 2019. 
  30. "Landscape of current work on potential risks from advanced AI". Google Docs. Retrieved July 27, 2017. 
  31. "How Long Untill Human-Level AI - 2011_AI-Experts.pdf" (PDF). Retrieved July 28, 2017. 
  32. "About". Global Catastrophic Risk Institute. Retrieved July 26, 2017. The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett. 
  33. "Back of the Envelope Guide to Philanthropy". Retrieved July 28, 2017. 
  34. "Gordon Irlam on the BEGuide". Meteuphoric. WordPress.com. October 16, 2014. Retrieved July 28, 2017. 
  35. "Welcome". Oxford Martin Programme on the Impacts of Future Technology. Retrieved July 26, 2017. The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University. 
  36. "About". Facing the Intelligence Explosion. Retrieved July 27, 2017. 
  37. Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  38. Sylvia Hui (November 25, 2012). "Cambridge to study technology's risk to humans". Retrieved July 26, 2017. The university said Sunday the center's launch is planned next year. 
  39. "Centre for the Study of Existential Risk". 
  40. "Transparency". Foundational Research Institute. Retrieved July 27, 2017. 
  41. Muehlhauser, Luke (July 8, 2013). "Four Focus Areas of Effective Altruism". LessWrong. Retrieved September 8, 2019. 
  42. "Future Progress in Artificial Intelligence: A Survey of Expert Opinion - survey.pdf" (PDF). Retrieved July 28, 2017. 
  43. Efrati, Amir (January 26, 2014). "Google Beat Facebook for DeepMind, Creates Ethics Board". Huffington Post. Retrieved September 15, 2019. 
  44. Bosker, Bianca (January 29, 2014). "Google's New A.I. Ethics Board Might Save Humanity From Extinction". Retrieved September 15, 2019. 
  45. Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017. As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. 
  46. "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017. Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts. 
  47. Peter Stone; et al. (AI100 Standing Committee and Study Panel) (September 2016). "One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel" (PDF). Retrieved July 27, 2017. The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a longterm investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. 
  48. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  49. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  50. Benja Fallenstein. "Welcome!". Intelligent Agent Foundations Forum. Retrieved June 30, 2017. 
  51. Hibbard, Bill (2014). "Ethical Artificial Intelligence". https://arxiv.org/abs/1411.1373
  52. "Stephen Hawking warns artificial intelligence could end mankind". BBC News. December 2, 2014. Retrieved July 25, 2017. 
  53. "Ex Machina's Scientific Advisor – Murray Shanahan". Y Combinator. June 28, 2017. Retrieved August 18, 2019. 
  54. "Ex Machina movie asks: is AI research in safe hands?". January 21, 2015. Retrieved August 18, 2019. 
  55. "Go see Ex Machina". LessWrong. February 26, 2016. Retrieved August 18, 2019. 
  56. Hardawar, Devindra (April 1, 2015). "'Ex Machina' director embraces the rise of superintelligent AI". Engadget. Retrieved August 18, 2019. 
  57. "Daniel Dewey". Open Philanthropy Project. Retrieved July 25, 2017. 
  58. Future of Humanity Institute - FHI. "Strategic Artificial Intelligence Research Centre - Future of Humanity Institute". Future of Humanity Institute. Retrieved July 27, 2017. 
  59. "Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. • r/IAmA". reddit. Retrieved July 25, 2017. 
  60. Stuart Dredge (January 29, 2015). "Artificial intelligence will become strong enough to be a concern, says Bill Gates". The Guardian. Retrieved July 25, 2017. 
  61. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  62. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  63. "The Artificial Intelligence Revolution: Part 1". Wait But Why. January 22, 2017. Retrieved July 25, 2017. 
  64. "The Artificial Intelligence Revolution: Part 2". Wait But Why. January 27, 2015. Retrieved July 25, 2017. 
  65. "Machine intelligence, part 1". Sam Altman. Retrieved July 27, 2017. 
  66. "Existential risk from artificial general intelligence". May 1, 2015. Retrieved August 18, 2019. 
  67. Matt Weinberger (June 4, 2015). "Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last". Business Insider. Retrieved July 27, 2017. 
  68. Yampolskiy, Roman (June 17, 2015). "Artificial Superintelligence: A Futuristic Approach". Retrieved August 20, 2017. 
  69. Muehlhauser, Luke (July 15, 2013). "Roman Yampolskiy on AI Safety Engineering". Machine Intelligence Research Institute. Retrieved August 20, 2017. 
  70. "Grants Timeline - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  71. "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  72. "AI Safety Research - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  73. "Open Philanthropy Project donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017. 
  74. "Potential Risks from Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  75. "What Do We Know about AI Timelines?". Open Philanthropy Project. Retrieved July 25, 2017. 
  76. "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. December 3, 2015. Retrieved July 26, 2017. 
  77. John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017. The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco. 
  78. "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017. 
  79. "Global Catastrophic Risks 2016". The Global Priorities Project. April 28, 2016. Retrieved July 28, 2017. 
  80. "Global-Catastrophic-Risk-Annual-Report-2016-FINAL.pdf" (PDF). Retrieved July 28, 2017. 
  81. George Dvorsky. "These Are the Most Serious Catastrophic Threats Faced by Humanity". Gizmodo. Retrieved July 28, 2017. 
  82. "How and why to use your career to make artificial intelligence safer". 80,000 Hours. April 7, 2016. Retrieved July 25, 2017. 
  83. "Risks posed by artificial intelligence". 80,000 Hours. 
  84. "What should we learn from past AI forecasts?". Open Philanthropy Project. Retrieved July 27, 2017. 
  85. "Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity". Open Philanthropy Project. Retrieved July 27, 2017. 
  86. "Some Background on Our Views Regarding Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  87. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  88. Karnofsky, Holden (June 23, 2016). "Concrete Problems in AI Safety". Retrieved April 18, 2020. 
  89. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017. 
  90. Scott Dadich (October 12, 2016). "Barack Obama Talks AI, Robo Cars, and the Future of the World". WIRED. Retrieved July 28, 2017. 
  91. "The Administration's Report on the Future of Artificial Intelligence". whitehouse.gov. October 12, 2016. Retrieved July 28, 2017. 
  92. "The Obama Administration's Roadmap for AI Policy". Harvard Business Review. December 21, 2016. Retrieved July 28, 2017. 
  93. "CFAR's new focus, and AI Safety - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  94. "Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  95. Larks (December 13, 2016). "2016 AI Risk Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  96. "Annual Report on Global Risks". Global Challenges Foundation. Retrieved July 28, 2017. 
  97. "Global Catastrophic Risks 2017.pdf" (PDF). Retrieved July 28, 2017. 
  98. "Acknowledgements". Global Risks Report 2017. Retrieved July 28, 2017. 
  99. "EA Funds". Retrieved July 27, 2017. In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence. 
  100. William MacAskill (February 9, 2017). "Introducing the EA Funds". Effective Altruism Forum. Retrieved July 27, 2017. 
  101. "May 2017 Newsletter". Machine Intelligence Research Institute. May 10, 2017. Retrieved July 25, 2017. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere. 
  102. "Update on Effective Altruism Funds". Effective Altruism Forum. April 20, 2017. Retrieved July 25, 2017. 
  103. "Positively shaping the development of artificial intelligence". 80,000 Hours. Retrieved July 25, 2017. 
  104. "Completely new article on the pros/cons of working on AI safety, and how to actually go about it". April 6, 2017. 
  105. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  106. "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017. 
  107. "New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford's Future of Humanity Institute". 80,000 Hours. June 14, 2017. 
  108. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. July 17, 2017. Retrieved July 28, 2017. 
  109. Catherine Clifford (July 24, 2017). "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'". CNBC. Retrieved July 25, 2017. 
  110. Malo Bourgon (November 8, 2017). "A major grant from the Open Philanthropy Project". Machine Intelligence Research Institute. Retrieved November 11, 2017. 
  111. "Machine Intelligence Research Institute — General Support (2017)". Open Philanthropy Project. November 8, 2017. Retrieved November 11, 2017. 
  112. Rice, Issa (October 23, 2017). "Initial commit: AI Watch". Retrieved April 19, 2020. 
  113. Rice, Issa (October 24, 2017). "start on portal: AI Watch". Retrieved April 19, 2020. 
  114. Slepnev, Vladimir (November 3, 2017). "Announcing the AI Alignment Prize". LessWrong. Retrieved April 19, 2020. 
  115. Slepnev, Vladimir (January 20, 2019). "Announcement: AI alignment prize round 4 winners". Alignment Forum. Retrieved April 19, 2020. 
  116. "Berkeley Existential Risk Initiative | Activity Update - December 2017". Retrieved February 8, 2018. 
  117. Larks (December 20, 2017). "2017 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  118. Todd, Benjamin; Tse, Brian (February 28, 2018). "'A new recommended career path for effective altruists: China specialist". 80,000 Hours. Retrieved September 8, 2019. 
  119. "Alignment Newsletter One Year Retrospective". Effective Altruism Forum. Aprril 10, 2019. Retrieved September 8, 2019.  Check date values in: |date= (help)
  120. "Alignment Newsletter 1". April 9, 2018. Retrieved September 8, 2019. 
  121. "Alignment Newsletter". Retrieved September 8, 2019. 
  122. "Previous Camps". Retrieved September 7, 2019. 
  123. Arnold, Raymond (July 10, 2018). "Announcing AlignmentForum.org Beta". LessWrong. Retrieved April 18, 2020. 
  124. Habryka, Oliver; Pace, Ben; Arnold, Raymond; Babcock, Jim (October 29, 2018). "Introducing the AI Alignment Forum (FAQ)". LessWrong. Retrieved April 18, 2020. 
  125. "Announcing the new AI Alignment Forum". Machine Intelligence Research Institute. October 29, 2018. Retrieved April 18, 2020. 
  126. "July 2018 - Long-Term Future Fund Grants". Effective Altruism Funds. August 14, 2018. Retrieved August 18, 2019. 
  127. "Effective Altruism Funds donations made (filtered to cause areas matching AI safety)". Retrieved August 18, 2019. 
  128. "Berkeley Existential Risk Initiative donations made (filtered to cause areas matching AI safety)". Retrieved August 18, 2019. 
  129. "2018 Update: Our New Research Directions - Machine Intelligence Research Institute". Machine Intelligence Research Institute. November 22, 2018. Retrieved February 14, 2019. 
  130. Larks (December 17, 2018). "2018 AI Alignment Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  131. Larks (December 18, 2019). "2019 AI Alignment Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved April 18, 2020. 
  132. "Our 2019 Fundraiser Review". Machine Intelligence Research Institute. February 13, 2020. Retrieved April 19, 2020. 
  133. "OPEN THREAD 129.25". June 8, 2019. Retrieved August 18, 2019. 
  134. "The Future of Grant-making Funded by Jaan Tallinn at BERI". Berkeley Existential Risk Initiative. August 25, 2019. Retrieved April 18, 2020. 
  135. "August 2019: Long-Term Future Fund Grants and Recommendations". Effective Altruism Funds. August 30, 2019. Retrieved April 18, 2020. 
  136. Sample, Ian (October 24, 2019). "Human Compatible by Stuart Russell review -- AI and our future. Creating machines smarter than us could be the biggest event in human history -- and the last". The Guardian. Retrieved April 18, 2020. 
  137. Piper, Kelsey (October 26, 2019). "AI could be a disaster for humanity. A top computer scientist thinks he has the solution. Stuart Russell wrote the book on AI and is leading the fight to change how we build it.". Vox. Retrieved April 18, 2020. 
  138. Coldewey, Devin (March 20, 2020). "Stuart Russell on how to make AI 'human-compatible': 'We've actually thought about AI the wrong way from the beginning'". TechCrunch. Retrieved April 18, 2020. 
  139. Pace, Ben (November 6, 2019). "AI Alignment Research Overview (by Jacob Steinhardt)". LessWrong. Retrieved April 18, 2020. 
  140. "November 2019: Long-Term Future Fund Grants". Effective Altruism Funds. November 21, 2019. Retrieved April 18, 2020. 
  141. Alfrink, Toon (November 24, 2019). "RAISE post-mortem". LessWrong. Retrieved April 18, 2020. 
  142. Shah, Rohin (January 27, 2020). "AI Alignment 2018-19 Review". LessWrong. Retrieved April 18, 2020. 
  143. Perry, Lucas (March 31, 2020). "FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord". Future of Life Institute. Retrieved April 18, 2020. 
  144. Wiblin, Robert; Koehler, Arden; Harris, Kieran (March 7, 2020). "Toby Ord on the precipice and humanity's potential futures". 80,000 Hours. Retrieved April 18, 2020. 
  145. Bergal, Asya (April 3, 2020). "Takeaways from safety by default interviews". LessWrong. Retrieved April 18, 2020. 
  146. "Interviews on plausibility of AI safety by default". AI Impacts. April 2, 2020. Retrieved April 18, 2020.