Timeline of AI safety
|-
| 2018 || {{dts|July}} || Grant || The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work will be led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2018 || {{dts|July 10}} (beta), {{dts|October 29}} (out of beta) || || The team behind LessWrong 2.0 launches a beta for the AI Alignment Forum at AlignmentForum.org on July 10, as a successor to the Intelligent Agent Foundations Forum (IAFF) at agentfoundations.org.<ref>{{cite web|url = https://www.alignmentforum.org/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta|title = Announcing AlignmentForum.org Beta|last = Arnold|first = Raymond|date = July 10, 2018|accessdate = April 18, 2020|publisher = LessWrong}}</ref> On October 29, the Alignment Forum exits beta and becomes generally available.<ref>{{cite web|url = https://www.lesswrong.com/posts/FoiiRDC3EhjHx7ayY/introducing-the-ai-alignment-forum-faq|title = Introducing the AI Alignment Forum (FAQ)|last = Habryka|first = Oliver|last2 = Pace|first2 = Ben|last3 = Arnold|first3 = Raymond|last4 = Babcock|first4 = Jim|publisher = LessWrong|date = October 29, 2018|accessdate = April 18, 2020}}</ref><ref>{{cite web|url = https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/|title = Announcing the new AI Alignment Forum|date = October 29, 2018|accessdate = April 18, 2020|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2018 || {{dts|August 14}} || Grant || Nick Beckstead grants the Machine Intelligence Research Institute (MIRI) $488,994 from the Long-Term Future Fund. This is part of his last set of grants as fund manager; he would subsequently step down and the fund management would move to a different team.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0|title = July 2018 - Long-Term Future Fund Grants|date = August 14, 2018|accessdate = August 18, 2019|publisher = Effective Altruism Funds}}</ref><ref name="donations-portal-ea-funds-ai-safety">{{cite web|url = https://donations.vipulnaik.com/donor.php?donor=Effective+Altruism+Funds&cause_area_filter=AI+safety|title = Effective Altruism Funds donations made (filtered to cause areas matching AI safety)|accessdate = August 18, 2019}}</ref>
|-
| 2019 || {{dts|November 21}} || Grant || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $466,000 to organizations and individuals. About a quarter of the grant money goes to organizations and individuals working on AI safety. With the exception of a grant to AI Safety Camp, all the other AI safety grants go to individuals.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/60MJaGYoLb0zGlIZxuCMPg|title = November 2019: Long-Term Future Fund Grants|date = November 21, 2019|accessdate = April 18, 2020|publisher = Effective Altruism Funds}}</ref>
|-
| 2019 || {{dts|November 24}} || || Toon Alfrink publishes on LessWrong a postmortem for RAISE, an attempt to build an online course for AI safety. The blog post explains the challenges with running RAISE and the reasons for eventually shutting it down.<ref>{{cite web|url = https://www.lesswrong.com/posts/oW6mbA3XHzcfJTwNq/raise-post-mortem|title = RAISE post-mortem|last = Alfrink|first = Toon|date = November 24, 2019|accessdate = April 18, 2020|publisher = LessWrong}}</ref>
|-
| 2019 || {{dts|December 18}} || Publication || The "2019 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2019, continuing an annual tradition of similar blog posts in 2016, 2017, and 2018. New this year, the author makes it easier for readers to jump to the parts of the document most relevant to them, rather than read it from beginning to end: each paragraph ends with a hashtag, and the hashtags are listed at the beginning of the document.<ref name=larks-2019>{{cite web|url = https://forum.effectivealtruism.org/posts/dpBB24QsnsRnkq5JT/2019-ai-alignment-literature-review-and-charity-comparison|title = 2019 AI Alignment Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 18, 2019|accessdate = April 18, 2020}}</ref>
* Paul Christiano's AI alignment blog; also, more on Christiano's trajectory in AI safety
* Intelligent Agent Foundations Forum
* Alignment Forum (using the LessWrong software), arguably a successor to the Intelligent Agent Foundations Forum
* AI Watch
* The launch of Ought
* The name change from "friendly AI" to "AI safety" and "AI alignment" is probably worth adding, though the shift was gradual and hence hard to pin down as a single event. See also [https://forum.effectivealtruism.org/posts/cuB3GApHqLFXG36C6/i-am-nate-soares-ama#AGZGEwsQ6QWda8Xnt this comment].
* [http://inst.eecs.berkeley.edu/~cs294-149/fa18/ CS 294-149: Safety and Control for Artificial General Intelligence (Fall 2018)], taught by Andrew Critch and Stuart Russell
* Survival and Flourishing Fund as the successor to BERI
* Median Group (insofar as some of their research is about AI)
* MIRI's nondisclosure-by-default policy