Timeline of AI safety

3,933 bytes added, 11:24, 18 August 2019
no edit summary
{| class="wikitable"
! Time period !! Development summary !! More details
|-
| 1950 to 2000 || Scientific speculation + fictional portrayals || During this period, discussion of AI safety moves from being merely a topic of fiction to a subject discussed by scientists who study technological trends. The era sees commentary by I. J. Good, Vernor Vinge, and Bill Joy.
|-
| 2000 to 2012 || Birth of AI safety organizations || This period sees the creation of the Singularity Institute for Artificial Intelligence (SIAI) (which would later become the Machine Intelligence Research Institute (MIRI)) and the evolution of its mission from creating friendly AI to reducing the risk of unfriendly AI. The Future of Humanity Institute (FHI) and the Global Catastrophic Risk Institute (GCRI) are also founded.
|-
| 2013 to present || Mainstreaming of AI safety || SIAI changes its name to MIRI, grows considerably in size, and receives substantial funding. ''Superintelligence'', the book by Nick Bostrom, is released. New organizations founded during this period include the Future of Life Institute (FLI), OpenAI, the Center for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the Center for Human-Compatible AI (CHAI), the Berkeley Existential Risk Initiative (BERI), Ought, and the Center for Security and Emerging Technology (CSET). OpenAI in particular becomes quite famous and influential. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates talk about the importance of AI safety and the risks of unfriendly AI.
|}
{| class="wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 2005 || {{dts|August 21}} || AI box || The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://sl4.org/archive/0508/index.html#12007 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2005 || || Publication || ''{{w|The Singularity is Near}}'' by inventor and futurist {{w|Ray Kurzweil}} is published. The book builds upon Kurzweil's previous books ''{{w|The Age of Intelligent Machines}}'' (1990) and ''{{w|The Age of Spiritual Machines}}'' (1999), but unlike its predecessors, uses the term {{w|technological singularity}} introduced by Vinge in 1993. Unlike Bill Joy, Kurzweil takes a very positive view of the impact of smarter-than-human AI and the upcoming (in his view) technological singularity.
|-
| 2008 || || Publication || {{w|Steve Omohundro}}'s paper "The Basic AI Drives" is published. The paper argues that certain drives, such as self-preservation and resource acquisition, will emerge in any sufficiently advanced AI. The idea would subsequently be defended by {{w|Nick Bostrom}} as part of his instrumental convergence thesis.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/Basic_AI_drives |title=Basic AI drives |website=Lesswrongwiki |accessdate=July 26, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2015 || {{dts|July 1}} || Grant || The Future of Life Institute's grant recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/2015selection/ |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-safety-research/ |title=AI Safety Research - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref>
|-
| 2015 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $1.2 million to the {{w|Future of Life Institute}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2015 || {{dts|August}} || Publication || The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.<ref>{{cite web |url=http://www.openphilanthropy.org/research/cause-reports/ai-risk |title=Potential Risks from Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
|-
| 2016 || {{dts|May 6}} || Publication || Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.<ref>{{cite web |url=http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence |title=Some Background on Our Views Regarding Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
|-
| 2016 || {{dts|June}} || Grant || The Open Philanthropy Project awards a grant of $264,525 to {{w|George Mason University}} for work by {{w|Robin Hanson}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|June 21}} || Publication || "Concrete Problems in AI Safety" is submitted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref>
|-
| 2016 || {{dts|August}} || Organization || The UC Berkeley Center for Human-Compatible Artificial Intelligence launches. The focus of the center is "to ensure that AI systems are beneficial to humans".<ref>{{cite web |url=http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |date=August 29, 2016 |publisher=Berkeley News |accessdate=July 26, 2017}}</ref>
|-
| 2016 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $5.6 million over two years to the newly formed {{w|Center for Human-Compatible AI}} at the University of California, Berkeley.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $500,000 to the {{w|Machine Intelligence Research Institute}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|August 24}} || || US president Barack Obama speaks to entrepreneur and MIT Media Lab director {{w|Joi Ito}} about AI risk.<ref>{{cite web |url=https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/ |title=Barack Obama Talks AI, Robo Cars, and the Future of the World |publisher=[[w:WIRED|WIRED]] |date=October 12, 2016 |author=Scott Dadich |accessdate=July 28, 2017}}</ref>
|-
| 2016 || {{dts|October 12}} || Publication || Under the Obama Administration, the United States White House releases two reports, ''Preparing for the Future of Artificial Intelligence'' and ''National Artificial Intelligence Research and Development Strategic Plan''. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".<ref>{{cite web |url=https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |publisher=whitehouse.gov |title=The Administration's Report on the Future of Artificial Intelligence |date=October 12, 2016 |accessdate=July 28, 2017}}</ref><ref>{{cite web |url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy |date=December 21, 2016 |publisher=Harvard Business Review |title=The Obama Administration's Roadmap for AI Policy |accessdate=July 28, 2017}}</ref>
|-
| 2016 || {{dts|November}} || Grant || The Open Philanthropy Project awards a grant of $199,000 to the {{w|Electronic Frontier Foundation}} for work by {{w|Peter Eckersley}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|December}} || Grant || The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|December 3}} and 12 || Publication || Two posts by Center for Applied Rationality (CFAR) president Anna Salamon are published on LessWrong. The posts discuss CFAR's new focus on AI safety.<ref>{{cite web |url=http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/ |title=CFAR's new focus, and AI Safety - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ |title=Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2017 || {{dts|February 9}} || Project || Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".<ref>{{cite web |url=https://app.effectivealtruism.org/funds/far-future |title=EA Funds |accessdate=July 27, 2017 |quote=In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/174/introducing_the_ea_funds/ |author=William MacAskill |title=Introducing the EA Funds |publisher=Effective Altruism Forum |date=February 9, 2017 |accessdate=July 27, 2017}}</ref>
|-
| 2017 || {{dts|March}} || Grant || The Open Philanthropy Project awards a grant of $2.0 million to the {{w|Future of Humanity Institute}} for general support.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|March}} || Grant || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|April}} || Organization || The Berkeley Existential Risk Initiative (BERI) launches around this time to assist researchers at institutions working to mitigate existential risk, including AI risk.<ref>{{cite web |url=https://intelligence.org/2017/05/10/may-2017-newsletter/ |title=May 2017 Newsletter |publisher=Machine Intelligence Research Institute |date=May 10, 2017 |accessdate=July 25, 2017 |quote=Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ |title=Update on Effective Altruism Funds |publisher=Effective Altruism Forum |date=April 20, 2017 |accessdate=July 25, 2017}}</ref>
|-
| 2017 || {{dts|April 6}} || Publication || 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".<ref>{{cite web |url=https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/ |title=Positively shaping the development of artificial intelligence |publisher=80,000 Hours |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1341451772603224 |title=Completely new article on the pros/cons of working on AI safety, and how to actually go about it |date=April 6, 2017}}</ref>
|-
| 2017 || {{dts|May}} || Grant || The Open Philanthropy Project awards a grant of $1.5 million to the {{w|UCLA School of Law}} for work on governance related to AI risk.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref> Two researchers from AI Impacts are authors on the paper.<ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
|-
| 2017 || {{dts|June 14}} || Publication || 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.<ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1416435978438136 |title=New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford’s Future of Humanity Institute |date=June 14, 2017 |publisher=80,000 Hours}}</ref>
|-
| 2017 || {{dts|July}} || Grant || The Open Philanthropy Project awards a grant of $2.4 million to the {{w|Montreal Institute for Learning Algorithms}}.<ref name="donations-portal-open-phil-ai-safety">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI safety) |accessdate=July 27, 2017}}</ref>
|-
| 2017 || {{dts|July}} || Grant || The Open Philanthropy Project awards a grant of about $300,000 to Yale University to support research into the global politics of artificial intelligence, led by Allan Dafoe.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|July}} || Grant || The Open Philanthropy Project awards a grant of about $400,000 to the Berkeley Existential Risk Initiative to support its core functions and to help it provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|July 15}}–16 || Opinion || At the National Governors Association in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.<ref>{{cite web |url=http://www.npr.org/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk |date=July 17, 2017 |title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk' |publisher=NPR.org |accessdate=July 28, 2017}}</ref>
|-
| 2017 || {{dts|October}} || Grant || The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A major grant from the Open Philanthropy Project |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy Project |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
|-
| 2017 || {{Dts|December}} || Financial || {{w|Jaan Tallinn}} makes a donation of about $5 million to the Berkeley Existential Risk Initiative (BERI) Grants Program.<ref name="december-2017-activity-update">{{cite web |url=http://existence.org/2018/01/11/activity-update-december-2017.html |title=Berkeley Existential Risk Initiative {{!}} Activity Update - December 2017 |accessdate=February 8, 2018}}</ref>
|-
| 2018 || {{dts|April 5}} || Documentary || The documentary ''{{w|Do You Trust This Computer?}}'', directed by {{w|Chris Paine}}, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as {{w|Ray Kurzweil}}, {{w|Elon Musk}} and {{w|Jonathan Nolan}}.
|-
| 2018 || {{dts|May}} || Grant || The Open Philanthropy Project announces the first set of grants for its AI Fellows Program, to 7 AI Fellows pursuing research relevant to AI risk. It also makes a grant of $525,000 to Ought and $100,000 to AI Impacts.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2018 || {{dts|July}} || Grant || The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|January}} || Grant || The Open Philanthropy Project grants $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI's collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).
|-
| 2019 || {{dts|January}} || Grant || The Open Philanthropy Project provides a founding grant of $55 million over 5 years for the Center for Security and Emerging Technology (CSET) at Georgetown University.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|February}} || Grant || The Open Philanthropy Project grants $2,112,500 to the Machine Intelligence Research Institute (MIRI) over two years. This is part of the first batch of grants decided by the Committee for Effective Altruism Support, which will set "grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes."<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|June 7}} || Fictional portrayal || The movie ''{{w|I Am Mother}}'' is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".<ref>{{cite web|url = https://slatestarcodex.com/2019/06/05/open-thread-129-25/|title = OPEN THREAD 129.25|date = June 8, 2019|accessdate = August 18, 2019}}</ref>
|}

==See also==
 
===Timelines of organizations working in AI safety===
* [[Timeline of Machine Intelligence Research Institute]]
* [[Timeline of Future of Humanity Institute]]
* [[Timeline of OpenAI]]
* [[Timeline of Berkeley Existential Risk Initiative]]
 
===Other timelines related to AI===
 
* [[Timeline of Google Brain]]
* [[Timeline of DeepMind]]
* [[Timeline of OpenAI]]
* [[Timeline of machine learning]]