Timeline of AI safety


This is a timeline of AI safety. AI safety is the field focused on reducing risks from artificial intelligence (AI).[1][2]

Big picture

Time period Development summary More details
Until 1950 Fictional portrayals only Most discussion of AI safety takes the form of fictional portrayals, which warn of the risks of robots that, through stupidity or lack of goal alignment, no longer remain under human control.
1950 to 2000 Scientific speculation + fictional portrayals During this period, discussion of AI safety moves beyond fiction and becomes a topic among scientists who study technological trends. The era sees commentary by I. J. Good, Vernor Vinge, and Bill Joy.
2000 to 2012 Birth of AI safety organizations This period sees the creation of the Singularity Institute for Artificial Intelligence (SIAI) (which would later become the Machine Intelligence Research Institute (MIRI)) and the evolution of its mission from creating friendly AI to reducing the risk of unfriendly AI. The Future of Humanity Institute (FHI) and Global Catastrophic Risk Institute (GCRI) are also founded.
2013 to present Mainstreaming of AI safety SIAI changes its name to MIRI, grows considerably in size, and attracts substantial funding. Superintelligence, the book by Nick Bostrom, is released. The Future of Life Institute (FLI) and OpenAI are started, and the latter grows considerably. Other new organizations include the Center for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the Center for Human-Compatible AI (CHAI), the Berkeley Existential Risk Initiative (BERI), Ought, and the Center for Security and Emerging Technology (CSET). OpenAI in particular becomes quite famous and influential. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates talk about the importance of AI safety and the risks of unfriendly AI.

Full timeline

Year Month and date Event type Details
1630–1650 Fictional portrayal The publication of the story of the Golem of Chełm dates to around this period. Wikipedia: "Golems are not intelligent, and if commanded to perform a task, they will perform the instructions literally. In many depictions Golems are inherently perfectly obedient. In its earliest known modern form, the Golem of Chełm became enormous and uncooperative. In one version of this story, the rabbi had to resort to trickery to deactivate it, whereupon it crumbled upon its creator and crushed him."
1818 Fictional portrayal The novel Frankenstein is published. Frankenstein pioneers the archetype of the artificial intelligence that turns against its creator, and is sometimes discussed in the context of an AI takeoff.[3][4][5]
1920 Fictional portrayal The science fiction play R.U.R. is published. The play introduces the word "robot" to the English language and the plot contains a robot rebellion that leads to human extinction.
1942 March Fictional portrayal The Three Laws of Robotics are introduced by Isaac Asimov in his short story "Runaround".
1947 July Fictional portrayal With Folded Hands, a novelette by Jack Williamson, is published. The novelette describes how advanced robots (humanoids) take over large parts of the world to fulfil their Prime Directive, which is to make humans happy.
1960 May 6 Publication Norbert Wiener's article Some Moral and Technical Consequences of Automation is published.[6] In 2013, Jonah Sinick would note the similarities between the points raised in this article and the thinking of AI safety leader Eliezer Yudkowsky.[7]
1965 Publication I. J. Good originates the concept of intelligence explosion in "Speculations Concerning the First Ultraintelligent Machine".
1984 October 26 Fictional portrayal The American science fiction film The Terminator is released. The film contains the first appearance of Skynet, a "neural net-based conscious group mind and artificial general intelligence" that "seeks to exterminate the human race in order to fulfill the mandates of its original coding".
1993 Publication Vernor Vinge's article "The Coming Technological Singularity: How to Survive in the Post-Human Era" is published. The article popularizes the idea of an intelligence explosion.[8]
2000 April Publication Bill Joy's article "Why The Future Doesn't Need Us" is published in Wired.
2000 July 27 Organization Machine Intelligence Research Institute (MIRI) is founded as the Singularity Institute for Artificial Intelligence (SIAI) by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[9]:3[10]
2002 March 8 AI box The first AI box experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[11]
2002 July 4–5 AI box The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.[12]
2002 October 31 Publication Bill Hibbard's Super-Intelligent Machines is published.[13]
2003 Publication Nick Bostrom's paper "Ethical Issues in Advanced Artificial Intelligence" is published. The paper introduces the paperclip maximizer thought experiment.[14]
2005 Organization The Future of Humanity Institute (FHI) is founded.[15]
2005 August 21 AI box The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.[16]
2005 Publication The Singularity is Near by inventor and futurist Ray Kurzweil is published. The book builds upon Kurzweil's previous books The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999), but unlike its predecessors, uses the term technological singularity introduced by Vinge in 1993. Unlike Bill Joy, Kurzweil takes a very positive view of the impact of smarter-than-human AI and the upcoming (in his view) technological singularity.
2008 Publication Steve Omohundro's paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by Nick Bostrom as part of his instrumental convergence thesis.[17]
2008 Publication Global Catastrophic Risks is published. The book includes Eliezer Yudkowsky's chapter "Artificial Intelligence as a Positive and Negative Factor in Global Risk".[18]
2008 November–December Outside review The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.[19][20]
2009 December 11 Publication The third edition of Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
2010 Organization DeepMind is founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman.
2010 Organization Vicarious is founded by Scott Phoenix and Dileep George. The company "has publicly expressed some concern about potential risks from future AI development" and the founders are signatories on the FLI open letter.[21]
2011 Publication Baum, Goertzel, and Goertzel's "How Long Until Human-Level AI? Results from an Expert Assessment" is published.[22]
2011 Organization The Global Catastrophic Risk Institute (GCRI) is founded by Seth Baum and Tony Barrett.[23]
2011 Organization Google Brain is started by Jeff Dean, Greg Corrado, and Andrew Ng.
2011–2013 Project Sometime during this period, the Back of the Envelope Guide to Philanthropy, a website created by Gordon Irlam, includes prevention of "hostile artificial intelligence" as a top 10 philanthropic opportunity by impact.[24][25]
2011 September Organization The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.[26]
2013 Publication Luke Muehlhauser's book Facing the Intelligence Explosion is published.[27]
2013 April 13 MIRI publishes an update on its strategy on its blog. In the blog post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and shift its research to Friendly AI math research.[28]
2013 July Organization The Center for the Study of Existential Risk (CSER) launches.[29][30]
2013 July Organization The Foundational Research Institute (FRI) is founded. Some of FRI's work discusses risks from artificial intelligence.[31]
2013 October 1 Publication Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is published. The book discusses risks from human-level or superhuman artificial intelligence.
2014 Publication Müller and Bostrom's "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" is published.[32]
2014 January 26 Google announces that it has acquired DeepMind.
2014 March–May Organization Future of Life Institute (FLI) is founded.[33]
2014 July–September Publication Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.
2014 August Project The AI Impacts website launches.[34]
2014 Fall Project The One Hundred Year Study on Artificial Intelligence (AI100) launches.[35]
2014 October 22–24 Opinion During an interview at the AeroAstro Centennial Symposium, Elon Musk calls artificial intelligence humanity's "biggest existential threat".[36][37]
2014 December 2 Opinion In an interview with BBC, Stephen Hawking states that advanced artificial intelligence could end the human race.[38]
2014 December 16 Fictional portrayal The movie Ex Machina is released. The movie highlights the paperclip maximizer idea: it shows how a robot programmed to optimize for its own escape can callously damage human lives in the process. It leads to more public discussion of AI safety.[39][40][41][42]
2015 Daniel Dewey joins the Open Philanthropy Project.[43] He joins as, or would later become, Open Phil's program officer for potential risks from advanced artificial intelligence.
2015 Organization The Strategic Artificial Intelligence Research Centre launches around this time.[44][21]
2015 January Publication The Open Letter on Artificial Intelligence, titled "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter", is published.
2015 January 28 Opinion During an "ask me anything" (AMA) session on reddit, Bill Gates states his concern about artificial superintelligence.[45][46]
2015 January 2–5 Conference The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute.[47] Nate Soares of the Machine Intelligence Research Institute would later call this the "turning point" of when top academics begin to focus on AI risk.[48]
2015 January 22–27 Publication Tim Urban publishes on Wait But Why a two-part series of blog posts about superhuman AI.[49][50]
2015 February 25 Opinion Sam Altman, president of Y Combinator, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[51]
2015 May 1 Publication The Wikipedia article on existential risk from artificial general intelligence is published.[52]
2015 June 4 Opinion At Airbnb's Open Air 2015 conference, Sam Altman, president of Y Combinator, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[53]
2015 June 17 and 21 Publication The Kindle edition of Artificial Superintelligence: A Futuristic Approach by Roman Yampolskiy is published on June 17. The paperback would be published on June 21.[54] Yampolskiy takes an AI safety engineering perspective, rather than a machine ethics perspective, to the problem of AI safety.[55]
2015 July 1 Grant The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.[56][57][58]
2015 August Grant The Open Philanthropy Project awards a grant of $1.2 million to the Future of Life Institute.[59]
2015 August Publication The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.[60]
2015 October Publication The Open Philanthropy Project first publishes its page on AI timelines.[61]
2015 December Organization The Leverhulme Centre for the Future of Intelligence launches around this time.[62]
2015 December 11 Organization OpenAI is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)[63][64]
2016 April 28 Publication The Global Catastrophic Risks 2016 report is published. The report is a collaboration between the Global Priorities Project and the Global Challenges Foundation.[65] The report includes discussion of risks from artificial general intelligence under "emerging risks".[66][67]
2016 April 7 Publication 80,000 Hours releases a new "problem profile" for risks from artificial intelligence, titled "Risks posed by artificial intelligence".[68][69]
2016 May Publication Luke Muehlhauser of the Open Philanthropy Project publishes "What should we learn from past AI forecasts?".[70]
2016 May 6 Publication Holden Karnofsky of the Open Philanthropy Project publishes a blog post on why Open Phil is making potential risks from artificial intelligence a major priority for the year.[71]
2016 May 6 Publication Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.[72]
2016 June Grant The Open Philanthropy Project awards a grant of $264,525 to George Mason University for work by Robin Hanson.[59]
2016 June 21 Publication "Concrete Problems in AI Safety" is submitted to the arXiv.[73]
2016 August Organization The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".[74]
2016 August Grant The Open Philanthropy Project awards a grant of $5.6 million over two years to the newly formed Center for Human-Compatible AI at the University of California, Berkeley.[59]
2016 August Grant The Open Philanthropy Project awards a grant of $500,000 to the Machine Intelligence Research Institute.[59]
2016 August 24 US president Barack Obama speaks to entrepreneur and MIT Media Lab director Joi Ito about AI risk.[75]
2016 September 28 Organization The Partnership on AI is publicly announced.
2016 October 12 Publication Under the Obama Administration, the United States White House releases two reports, Preparing for the Future of Artificial Intelligence and National Artificial Intelligence Research and Development Strategic Plan. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".[76][77]
2016 November Grant The Open Philanthropy Project awards a grant of $199,000 to the Electronic Frontier Foundation for work by Peter Eckersley.[59]
2016 December Grant The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.[59]
2016 December 3 and 12 Publication Two posts by Center for Applied Rationality (CFAR) president Anna Salamon are published on LessWrong. The posts discuss CFAR's new focus on AI safety.[78][79]
2016 December 13 Publication The "2016 AI Risk Literature Review and Charity Comparison" is published on the Effective Altruism Forum. The lengthy blog post covers all the published work of prominent organizations focused on AI safety.[80]
2017 Publication The Global Catastrophic Risks 2017 report is published.[81] The report discusses risks from artificial intelligence in a dedicated chapter.[82]
2017 Publication The Global Risks Report 2017 is published by the World Economic Forum. The report contains a section titled "Assessing the Risk of Artificial Intelligence" under "Emerging Technologies".[83]
2017 February 9 Project The Effective Altruism Funds (EA Funds) platform is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".[84][85]
2017 March Grant The Open Philanthropy Project awards a grant of $2.0 million to the Future of Humanity Institute for general support.[59]
2017 March Grant The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[59]
2017 April Organization The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.[86][87]
2017 April 6 Publication 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".[88][89]
2017 May Grant The Open Philanthropy Project awards a grant of $1.5 million to the UCLA School of Law for work on governance related to AI risk.[59]
2017 May 24 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv.[90] Two researchers from AI Impacts are authors on the paper.[91]
2017 June 14 Publication 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.[92]
2017 July Grant The Open Philanthropy Project awards a grant of $2.4 million to the Montreal Institute for Learning Algorithms.[59]
2017 July Grant The Open Philanthropy Project awards a grant of about $300,000 to Yale University to support research into the global politics of artificial intelligence led by Allan Dafoe.[59]
2017 July Grant The Open Philanthropy Project awards a grant of about $400,000 to the Berkeley Existential Risk Initiative to support BERI's core functions and to help it provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley.[59]
2017 July 15–16 Opinion At the National Governors Association in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.[93]
2017 July 23 Opinion During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways […] pretty irresponsible".[94]
2017 October Grant The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".[95][96]
2017 December Grant Jaan Tallinn makes a donation of about $5 million to the Berkeley Existential Risk Initiative (BERI) Grants Program.[97]
2017 December 20 Publication The "2017 AI Safety Literature Review and Charity Comparison" is published. The lengthy blog post covers all the published work of prominent organizations focused on AI safety, and is a refresh of the similar post published the previous year.[98]
2018 April 5 Documentary The documentary Do You Trust This Computer?, directed by Chris Paine, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as Ray Kurzweil, Elon Musk and Jonathan Nolan.
2018 May Grant The Open Philanthropy Project announces the first set of grants for its AI Fellows Program, to 7 AI Fellows pursuing research relevant to AI risk. It also makes a grant of $525,000 to Ought and $100,000 to AI Impacts.[59]
2018 July Grant The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work would be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.[59]
2018 August 14 Grant Nick Beckstead grants the Machine Intelligence Research Institute (MIRI) $488,994 from the Long Term Future Fund. This is part of his last set of grants as fund manager; he would subsequently step down and the fund management would move to a different team.[99][100]
2018 September to October Grant During this period, the Berkeley Existential Risk Initiative (BERI) makes a number of grants to individuals working on projects related to AI safety.[101]
2018 November 29 Grant The Long Term Future Fund, one of the Effective Altruism Funds, announces a set of grants: $40,000 to Machine Intelligence Research Institute, $10,000 to Ought, $21,000 to AI Summer School, and $4,500 to the AI Safety Unconference.[100]
2018 December 17 Publication The "2018 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2018. It continues an annual tradition of similar blog posts in 2016 and 2017.[102]
2019 January Grant The Open Philanthropy Project grants $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI's collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).[59]
2019 January Grant The Open Philanthropy Project provides a founding grant of $55 million over 5 years for the Center for Security and Emerging Technology (CSET) at Georgetown University.[59]
2019 February Grant The Open Philanthropy Project grants $2,112,500 to the Machine Intelligence Research Institute (MIRI) over two years. This is part of the first batch of grants decided by the Committee for Effective Altruism Support, which will set "grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes."[59] Around the same time, BERI grants $600,000 to MIRI.[101]
2019 April 7 Grant The Long Term Future Fund, one of the Effective Altruism Funds, announces a set of 23 grants totaling $923,150. About half the grant money is to organizations or projects directly working in AI safety. Recipients include the Machine Intelligence Research Institute (MIRI), AI Safety Camp, Ought, and a number of individuals working on AI safety projects, including three in deconfusion research.[100]
2019 June 7 Fictional portrayal The movie I Am Mother is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".[103]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit histories at the old location and the new location.

Funding information for this timeline is available.

What the timeline is still missing

  • The Matrix
  • maybe more at [1]
  • more AI box results at [2] but unfortunately no dates
  • stuff in [3] and [4]
  • siren/marketing worlds
  • TDT/UDT
  • Paul Christiano AI alignment blog
  • Intelligent Agent Foundations Forum
  • AI alignment prize
  • Translations of Superintelligence?
  • universal prior/distant superintelligences stuff
  • Steven Pinker?
  • when did the different approaches to alignment come along?
  • the name change from "friendly AI" to "AI safety" and "AI alignment" is probably worth adding, though this was gradual so kind of hard to pin down as an event. See also this comment.
  • CS 294-149: Safety and Control for Artificial General Intelligence (Fall 2018), taught by Andrew Critch and Stuart Russell

Timeline update strategy

See also

Timelines of organizations working in AI safety

Other timelines related to AI

External links

References

  1. Paul Christiano (November 19, 2016). "AI "safety" vs "control" vs "alignment"". AI Alignment. AI Alignment. Retrieved November 18, 2017. 
  2. Eliezer Yudkowsky (November 16, 2017). "Hero Licensing". LessWrong. Retrieved November 18, 2017. I'll mention as an aside that talk of "Friendly" AI has been going out of style where I'm from. We've started talking instead in terms of "aligning smarter-than-human AI with operators' goals," mostly because "AI alignment" smacks less of anthropomorphism than "friendliness." 
  3. Michael Nuschke (October 10, 2011). "Seven Ways Frankenstein Relates to Singularity". RetirementSingularity.com. Retrieved July 27, 2017. 
  4. Mitchell Howe (2002). "What is the intellectual history of the Singularity concept?". Retrieved July 27, 2017. Bearing little resemblance to the campy motion pictures he would inspire, Dr. Frankenstein's monster was a highly intelligent being of great emotional depth, but who could not be loved because of his hideous appearance; for this, he vowed to take revenge on his creator. The monster actually comes across as the most intelligent character in the novel, making Frankenstein perhaps the first work to touch on the core idea of the Singularity. 
  5. Alan Winfield (August 9, 2014). "Artificial Intelligence will not turn into a Frankenstein monster". The Guardian. Retrieved July 27, 2017. From the Golem to Frankenstein's monster, Skynet and the Matrix, we are fascinated by the old story: man plays god and then things go horribly wrong. 
  6. Wiener, Norbert (May 6, 1960). "Some Moral and Technical Consequences of Automation". Retrieved August 18, 2019. 
  7. Sinick, Jonah (July 20, 2013). "Norbert Wiener's paper "Some Moral and Technical Consequences of Automation"". LessWrong. Retrieved August 18, 2019. 
  8. "History of AI risk thought". Lesswrongwiki. LessWrong. Retrieved July 28, 2017. 
  9. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  10. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  11. "SL4: By Thread". Retrieved July 1, 2017. 
  12. "SL4: By Thread". Retrieved July 1, 2017. 
  13. "Amazon.com: Super-Intelligent Machines (Ifsr International Series on Systems Science and Engineering) (9780306473883): Bill Hibbard: Books". Retrieved July 26, 2017. Publisher: Springer; 2002 edition (October 31, 2002) 
  14. "Ethical Issues In Advanced Artificial Intelligence". Retrieved July 25, 2017. 
  15. "About". Oxford Martin School. Retrieved July 25, 2017. The Future of Humanity Institute was established in 2005 with funding from the Oxford Martin School (then known as the James Martin 21st Century School). 
  16. "SL4: By Thread". Retrieved July 1, 2017. 
  17. "Basic AI drives". Lesswrongwiki. LessWrong. Retrieved July 26, 2017. 
  18. "AIPosNegFactor.pdf" (PDF). Retrieved July 27, 2017. 
  19. "The Hanson-Yudkowsky AI-Foom Debate". Lesswrongwiki. LessWrong. Retrieved July 1, 2017. 
  20. "Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved July 15, 2017. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. 
  21. "Landscape of current work on potential risks from advanced AI". Google Docs. Retrieved July 27, 2017. 
  22. "How Long Untill Human-Level AI - 2011_AI-Experts.pdf" (PDF). Retrieved July 28, 2017. 
  23. "About". Global Catastrophic Risk Institute. Retrieved July 26, 2017. The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI was founded in 2011 by Seth Baum and Tony Barrett. 
  24. "Back of the Envelope Guide to Philanthropy". Retrieved July 28, 2017. 
  25. "Gordon Irlam on the BEGuide". Meteuphoric. WordPress.com. October 16, 2014. Retrieved July 28, 2017. 
  26. "Welcome". Oxford Martin Programme on the Impacts of Future Technology. Retrieved July 26, 2017. The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University. 
  27. "About". Facing the Intelligence Explosion. Retrieved July 27, 2017. 
  28. Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  29. Sylvia Hui (November 25, 2012). "Cambridge to study technology's risk to humans". Retrieved July 26, 2017. The university said Sunday the center's launch is planned next year. 
  30. "Centre for the Study of Existential Risk". 
  31. "Transparency". Foundational Research Institute. Retrieved July 27, 2017. 
  32. "Future Progress in Artificial Intelligence: A Survey of Expert Opinion - survey.pdf" (PDF). Retrieved July 28, 2017. 
  33. Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017. As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. 
  34. "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017. Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts. 
  35. Peter Stone; et al. (AI100 Standing Committee and Study Panel) (September 2016). "One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel" (PDF). Retrieved July 27, 2017. The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a longterm investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. 
  36. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  37. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  38. "Stephen Hawking warns artificial intelligence could end mankind". BBC News. December 2, 2014. Retrieved July 25, 2017. 
  39. "Ex Machina's Scientific Advisor – Murray Shanahan". Y Combinator. June 28, 2017. Retrieved August 18, 2019. 
  40. "Ex Machina movie asks: is AI research in safe hands?". January 21, 2015. Retrieved August 18, 2019. 
  41. "Go see Ex Machina". LessWrong. February 26, 2016. Retrieved August 18, 2019. 
  42. Hardawar, Devindra (April 1, 2015). "'Ex Machina' director embraces the rise of superintelligent AI". Engadget. Retrieved August 18, 2019. 
  43. "Daniel Dewey". Open Philanthropy Project. Retrieved July 25, 2017. 
  44. Future of Humanity Institute - FHI. "Strategic Artificial Intelligence Research Centre - Future of Humanity Institute". Future of Humanity Institute. Retrieved July 27, 2017. 
  45. "Hi Reddit, I'm Bill Gates and I'm back for my third AMA. Ask me anything. • r/IAmA". reddit. Retrieved July 25, 2017. 
  46. Stuart Dredge (January 29, 2015). "Artificial intelligence will become strong enough to be a concern, says Bill Gates". The Guardian. Retrieved July 25, 2017. 
  47. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  48. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  49. "The Artificial Intelligence Revolution: Part 1". Wait But Why. January 22, 2017. Retrieved July 25, 2017. 
  50. "The Artificial Intelligence Revolution: Part 2". Wait But Why. January 27, 2015. Retrieved July 25, 2017. 
  51. "Machine intelligence, part 1". Sam Altman. Retrieved July 27, 2017. 
  52. "Existential risk from artificial general intelligence". May 1, 2015. Retrieved August 18, 2019. 
  53. Matt Weinberger (June 4, 2015). "Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last". Business Insider. Retrieved July 27, 2017. 
  54. Yampolskiy, Roman (June 17, 2015). "Artificial Superintelligence: A Futuristic Approach". Retrieved August 20, 2017. 
  55. Muehlhauser, Luke (July 15, 2013). "Roman Yampolskiy on AI Safety Engineering". Machine Intelligence Research Institute. Retrieved August 20, 2017. 
  56. "Grants Timeline - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  57. "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  58. "AI Safety Research - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  59. "Open Philanthropy Project donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017. 
  60. "Potential Risks from Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  61. "What Do We Know about AI Timelines?". Open Philanthropy Project. Retrieved July 25, 2017. 
  62. "The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity". University of Cambridge. December 3, 2015. Retrieved July 26, 2017. 
  63. John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017. The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco. 
  64. "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017. 
  65. "Global Catastrophic Risks 2016". The Global Priorities Project. April 28, 2016. Retrieved July 28, 2017. 
  66. "Global-Catastrophic-Risk-Annual-Report-2016-FINAL.pdf" (PDF). Retrieved July 28, 2017. 
  67. George Dvorsky. "These Are the Most Serious Catastrophic Threats Faced by Humanity". Gizmodo. Retrieved July 28, 2017. 
  68. "How and why to use your career to make artificial intelligence safer". 80,000 Hours. April 7, 2016. Retrieved July 25, 2017. 
  69. "Risks posed by artificial intelligence". 80,000 Hours. 
  70. "What should we learn from past AI forecasts?". Open Philanthropy Project. Retrieved July 27, 2017. 
  71. "Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity". Open Philanthropy Project. Retrieved July 27, 2017. 
  72. "Some Background on Our Views Regarding Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 27, 2017. 
  73. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  74. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". Berkeley News. August 29, 2016. Retrieved July 26, 2017. 
  75. Scott Dadich (October 12, 2016). "Barack Obama Talks AI, Robo Cars, and the Future of the World". WIRED. Retrieved July 28, 2017. 
  76. "The Administration's Report on the Future of Artificial Intelligence". whitehouse.gov. October 12, 2016. Retrieved July 28, 2017. 
  77. "The Obama Administration's Roadmap for AI Policy". Harvard Business Review. December 21, 2016. Retrieved July 28, 2017. 
  78. "CFAR's new focus, and AI Safety - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  79. "Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong". LessWrong. Retrieved July 13, 2017. 
  80. Larks (December 13, 2016). "2016 AI Risk Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  81. "Annual Report on Global Risks". Global Challenges Foundation. Retrieved July 28, 2017. 
  82. "Global Catastrophic Risks 2017.pdf" (PDF). Retrieved July 28, 2017. 
  83. "Acknowledgements". Global Risks Report 2017. Retrieved July 28, 2017. 
  84. "EA Funds". Retrieved July 27, 2017. In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence. 
  85. William MacAskill (February 9, 2017). "Introducing the EA Funds". Effective Altruism Forum. Retrieved July 27, 2017. 
  86. "May 2017 Newsletter". Machine Intelligence Research Institute. May 10, 2017. Retrieved July 25, 2017. Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere. 
  87. "Update on Effective Altruism Funds". Effective Altruism Forum. April 20, 2017. Retrieved July 25, 2017. 
  88. "Positively shaping the development of artificial intelligence". 80,000 Hours. Retrieved July 25, 2017. 
  89. "Completely new article on the pros/cons of working on AI safety, and how to actually go about it". April 6, 2017. 
  90. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  91. "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017. 
  92. "New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford's Future of Humanity Institute". 80,000 Hours. June 14, 2017. 
  93. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR.org. July 17, 2017. Retrieved July 28, 2017. 
  94. Catherine Clifford (July 24, 2017). "Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible'". CNBC. Retrieved July 25, 2017. 
  95. Malo Bourgon (November 8, 2017). "A major grant from the Open Philanthropy Project". Machine Intelligence Research Institute. Retrieved November 11, 2017. 
  96. "Machine Intelligence Research Institute — General Support (2017)". Open Philanthropy Project. November 8, 2017. Retrieved November 11, 2017. 
  97. "Berkeley Existential Risk Initiative | Activity Update - December 2017". Retrieved February 8, 2018. 
  98. Larks (December 20, 2017). "2017 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  99. "July 2018 - Long-Term Future Fund Grants". Effective Altruism Funds. August 14, 2018. Retrieved August 18, 2019. 
  100. "Effective Altruism Funds donations made (filtered to cause areas matching AI safety)". Retrieved August 18, 2019. 
  101. "Berkeley Existential Risk Initiative donations made (filtered to cause areas matching AI safety)". Retrieved August 18, 2019. 
  102. Larks (December 17, 2018). "2018 AI Alignment Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved August 18, 2019. 
  103. "OPEN THREAD 129.25". June 8, 2019. Retrieved August 18, 2019.