Timeline of AI safety

{| class="wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 2016 || {{dts|December 3}} and 12 || Publication || Two posts by Center for Applied Rationality (CFAR) president Anna Salamon are published on LessWrong, discussing CFAR's new focus on AI safety.<ref>{{cite web |url=http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/ |title=CFAR's new focus, and AI Safety - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ |title=Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
|-
| 2016 || {{dts|December 13}} || Publication || The "2016 AI Risk Literature Review and Charity Comparison" is published on the Effective Altruism Forum. The lengthy blog post covers all the published work of prominent organizations focused on AI safety.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/nSot23sAjoZRgaEwa/2016-ai-risk-literature-review-and-charity-comparison|title = 2016 AI Risk Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 13, 2016|accessdate = August 18, 2019}}</ref>
|-
| 2017 || || Publication || The Global Catastrophic Risks 2017 report is published.<ref>{{cite web |url=https://www.globalchallenges.org/en/our-work/annual-report |publisher=Global Challenges Foundation |title=Annual Report on Global Risks |accessdate=July 28, 2017}}</ref> The report discusses risks from artificial intelligence in a dedicated chapter.<ref>{{cite web |url=https://api.globalchallenges.org/static/files/Global%20Catastrophic%20Risks%202017.pdf |title=Global Catastrophic Risks 2017.pdf |accessdate=July 28, 2017}}</ref>
|-
| 2017 || {{dts|October}} || Grant || The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A major grant from the Open Philanthropy Project |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy Project |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
|-
| 2017 || {{dts|December}} || Donation || {{w|Jaan Tallinn}} makes a donation of about $5 million to the Berkeley Existential Risk Initiative (BERI) Grants Program.<ref name="december-2017-activity-update">{{cite web |url=http://existence.org/2018/01/11/activity-update-december-2017.html |title=Berkeley Existential Risk Initiative {{!}} Activity Update - December 2017 |accessdate=February 8, 2018}}</ref>
|-
| 2017 || {{dts|December 20}} || Publication || The "2017 AI Safety Literature Review and Charity Comparison" is published on the Effective Altruism Forum. The lengthy blog post covers all the published work of prominent organizations focused on AI safety, and updates the similar post published the previous year.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison|title = 2017 AI Safety Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 20, 2017|accessdate = August 18, 2019}}</ref>
|-
| 2018 || {{dts|April 5}} || Documentary || The documentary ''{{w|Do You Trust This Computer?}}'', directed by {{w|Chris Paine}}, is released. It covers issues related to AI safety and includes interviews with prominent individuals relevant to AI, such as {{w|Ray Kurzweil}}, {{w|Elon Musk}} and {{w|Jonathan Nolan}}.
|-
| 2018 || {{dts|July}} || Grant || The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2018 || {{dts|August 14}} || Grant || Nick Beckstead grants the Machine Intelligence Research Institute (MIRI) $488,994 from the Long Term Future Fund. This is part of his last set of grants as fund manager; he would subsequently step down, with fund management moving to a different team.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0|title = July 2018 - Long-Term Future Fund Grants|date = August 14, 2018|accessdate = August 18, 2019|publisher = Effective Altruism Funds}}</ref><ref name="donations-portal-ea-funds-ai-safety">{{cite web|url = https://donations.vipulnaik.com/donor.php?donor=Effective+Altruism+Funds&cause_area_filter=AI+safety|title = Effective Altruism Funds donations made (filtered to cause areas matching AI safety)|accessdate = August 18, 2019}}</ref>
|-
| 2018 || {{dts|September}} to October || Grant || During this period, the Berkeley Existential Risk Initiative (BERI) makes a number of grants to individuals working on projects related to AI safety.<ref name="donations-portal-beri-ai-safety" />
|-
| 2018 || {{dts|November 29}} || Grant || The Long Term Future Fund, one of the Effective Altruism Funds, announces a set of grants: $40,000 to the Machine Intelligence Research Institute, $10,000 to Ought, $21,000 to AI Summer School, and $4,500 to the AI Safety Unconference.<ref name="donations-portal-ea-funds-ai-safety" />
|-
| 2018 || {{dts|December 17}} || Publication || The "2018 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2018 and continues the annual tradition of similar blog posts in 2016 and 2017.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison|title = 2018 AI Alignment Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 17, 2018|accessdate = August 18, 2019}}</ref>
|-
| 2019 || {{dts|January}} || Grant || The Open Philanthropy Project grants $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI's collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|January}} || Grant || The Open Philanthropy Project provides a founding grant of $55 million over five years for the Center for Security and Emerging Technology (CSET) at Georgetown University.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|February}} || Grant || The Open Philanthropy Project grants $2,112,500 to the Machine Intelligence Research Institute (MIRI) over two years. This is part of the first batch of grants decided by the Committee for Effective Altruism Support, which will set "grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes."<ref name="donations-portal-open-phil-ai-safety" /> Around the same time, BERI grants $600,000 to MIRI.<ref name="donations-portal-beri-ai-safety">{{cite web|url = https://donations.vipulnaik.com/donor.php?donor=Berkeley+Existential+Risk+Initiative&cause_area_filter=AI+safety|title = Berkeley Existential Risk Initiative donations made (filtered to cause areas matching AI safety)|accessdate = August 18, 2019}}</ref>
|-
| 2019 || {{dts|April 7}} || Grant || The Long Term Future Fund, one of the Effective Altruism Funds, announces a set of 23 grants totaling $923,150. About half the grant money is to organizations or projects directly working in AI safety. Recipients include the Machine Intelligence Research Institute (MIRI), AI Safety Camp, Ought, and a number of individuals working on AI safety projects, including three in deconfusion research.<ref name="donations-portal-ea-funds-ai-safety" />
|-
| 2019 || {{dts|June 7}} || Fictional portrayal || The movie ''{{w|I Am Mother}}'' is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".<ref>{{cite web|url = https://slatestarcodex.com/2019/06/05/open-thread-129-25/|title = OPEN THREAD 129.25|date = June 8, 2019|accessdate = August 18, 2019}}</ref>
|}