Timeline of AI safety

| 2017 || Cryptocurrency prices rise sharply this year, leading to a number of donations to MIRI from people who got rich through cryptocurrency. The AI safety funding and support landscape changes somewhat with the launch of the Berkeley Existential Risk Initiative (BERI) (and the funding of its grants program by Jaan Tallinn) and the Effective Altruism Funds, specifically the Long-Term Future Fund. Open Phil makes several grants in AI safety, including a $30 million grant to OpenAI and a $3.75 million grant to MIRI. AI safety attracts dismissive commentary from Mark Zuckerberg, while Elon Musk continues to highlight its importance. Initiatives such as AI Watch and the AI Alignment Prize begin.
|-
| 2018 || Activity in the field of AI safety becomes more steady, in terms of both ongoing discussion (with the launch of the AI Alignment Newsletter and the Alignment Forum) and funding (with structural changes to the Long-Term Future Fund so that it makes grants more regularly, the introduction of the annual Open Phil AI Fellowship grants, and more grantmaking by BERI). Near the end of the year, MIRI announces its nondisclosure-by-default policy.
|-
| 2019 || The Center for Security and Emerging Technology (CSET), which focuses on AI safety and other security risks, launches with a five-year $55 million grant from Open Phil. Grantmaking from the Long-Term Future Fund picks up pace; BERI hands off its grantmaking of Jaan Tallinn's money to the Survival and Flourishing Fund (SFF). Open Phil begins using the Committee for Effective Altruism Support to decide grant amounts for some of its AI safety grants, including grants to MIRI.
| 2015 || {{dts|June 17}}, 21 || Publication || The Kindle edition of ''Artificial Superintelligence: A Futuristic Approach'' by {{w|Roman Yampolskiy}} is published. The paperback would be published on June 21.<ref>{{cite web|url = https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy-ebook/dp/B010ACWEG0/|title = Artificial Superintelligence: A Futuristic Approach|last = Yampolskiy|first = Roman|authorlink = w:Roman Yampolskiy|date = June 17, 2015|accessdate = August 20, 2017}}</ref> Yampolskiy takes an AI safety engineering perspective, rather than a machine ethics perspective, to the problem of AI safety.<ref>{{cite web|url = https://intelligence.org/2013/07/15/roman-interview/|title = Roman Yampolskiy on AI Safety Engineering|last = Muehlhauser|first = Luke|date = July 15, 2013|accessdate = August 20, 2017|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2015 || {{dts|July 1}} || Grant Funding || The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/2015selection/ |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-safety-research/ |title=AI Safety Research - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref>
|-
| 2015 || {{dts|August}} || Grant Funding || The Open Philanthropy Project awards a grant of $1.2 million to the {{w|Future of Life Institute}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2015 || {{dts|August}} || Publication || The Open Philanthropy Project publishes its cause report on potential risks from advanced artificial intelligence.<ref>{{cite web |url=http://www.openphilanthropy.org/research/cause-reports/ai-risk |title=Potential Risks from Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
| 2016 || {{dts|May 6}} || Publication || Holden Karnofsky of the Open Philanthropy Project publishes "Some Background on Our Views Regarding Advanced Artificial Intelligence" on the Open Phil blog.<ref>{{cite web |url=http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence |title=Some Background on Our Views Regarding Advanced Artificial Intelligence |publisher=Open Philanthropy Project |accessdate=July 27, 2017}}</ref>
|-
| 2016 || {{dts|June}} || Grant Funding || The Open Philanthropy Project awards a grant of $264,525 to {{w|George Mason University}} for work by {{w|Robin Hanson}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|June 21}} || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would receive a shoutout from the Open Philanthropy Project.<ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref> It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
| 2016 || {{dts|August}} || Organization || The UC Berkeley Center for Human-Compatible Artificial Intelligence launches under the leadership of AI expert {{w|Stuart J. Russell}} (co-author with Peter Norvig of ''Artificial Intelligence: A Modern Approach''). The focus of the center is "to ensure that AI systems are beneficial to humans".<ref>{{cite web |url=http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |date=August 29, 2016 |publisher=Berkeley News |accessdate=July 26, 2017}}</ref>
|-
| 2016 || {{dts|August}} || Grant Funding || The Open Philanthropy Project awards a grant of $5.6 million over two years to the newly formed {{w|Center for Human-Compatible AI}} at the University of California, Berkeley.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|August}} || Grant Funding || The Open Philanthropy Project awards a grant of $500,000 to the {{w|Machine Intelligence Research Institute}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|August 24}} || || US president Barack Obama speaks to entrepreneur and MIT Media Lab director {{w|Joi Ito}} about AI risk.<ref>{{cite web |url=https://www.wired.com/2016/10/president-obama-mit-joi-ito-interview/ |title=Barack Obama Talks AI, Robo Cars, and the Future of the World |publisher=[[w:WIRED|WIRED]] |date=October 12, 2016 |author=Scott Dadich |accessdate=July 28, 2017}}</ref>
| 2016 || {{dts|October 12}} || Publication || Under the Obama Administration, the United States White House releases two reports, ''Preparing for the Future of Artificial Intelligence'' and ''National Artificial Intelligence Research and Development Strategic Plan''. The former "surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy".<ref>{{cite web |url=https://obamawhitehouse.archives.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |publisher=whitehouse.gov |title=The Administration's Report on the Future of Artificial Intelligence |date=October 12, 2016 |accessdate=July 28, 2017}}</ref><ref>{{cite web |url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy |date=December 21, 2016 |publisher=Harvard Business Review |title=The Obama Administration's Roadmap for AI Policy |accessdate=July 28, 2017}}</ref>
|-
| 2016 || {{dts|November}} || Grant Funding || The Open Philanthropy Project awards a grant of $199,000 to the {{w|Electronic Frontier Foundation}} for work by {{w|Peter Eckersley}}.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|December}} || Grant Funding || The Open Philanthropy Project awards a grant of $32,000 to AI Impacts for work on strategic questions related to potential risks from advanced artificial intelligence.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2016 || {{dts|December 3}}, 12 || Publication || Two posts by Center for Applied Rationality (CFAR) president Anna Salamon are published on LessWrong, discussing CFAR's new focus on AI safety.<ref>{{cite web |url=http://lesswrong.com/lw/o7o/cfars_new_focus_and_ai_safety/ |title=CFAR's new focus, and AI Safety - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/o9h/further_discussion_of_cfars_focus_on_ai_safety/ |title=Further discussion of CFAR's focus on AI safety, and the good things folks wanted from "cause neutrality" - Less Wrong |accessdate=July 13, 2017 |publisher=[[w:LessWrong|LessWrong]]}}</ref>
| 2017 || {{dts|February 9}} || Project || The Effective Altruism Funds (EA Funds) is announced on the Effective Altruism Forum. EA Funds includes a Long-Term Future Fund that is partly intended to support "priorities for robust and beneficial artificial intelligence".<ref>{{cite web |url=https://app.effectivealtruism.org/funds/far-future |title=EA Funds |accessdate=July 27, 2017 |quote=In the biography on the right you can see a list of organizations the Fund Manager has previously supported, including a wide variety of organizations such as the Centre for the Study of Existential Risk, Future of Life Institute and the Center for Applied Rationality. These organizations vary in their strategies for improving the long-term future but are likely to include activities such as research into possible existential risks and their mitigation, and priorities for robust and beneficial artificial intelligence.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/174/introducing_the_ea_funds/ |author=William MacAskill |title=Introducing the EA Funds |publisher=Effective Altruism Forum |date=February 9, 2017 |accessdate=July 27, 2017}}</ref>
|-
| 2017 || {{dts|March}} || Grant Funding || The Open Philanthropy Project awards a grant of $2.0 million to the {{w|Future of Humanity Institute}} for general support.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|March}} || Grant Funding || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|April}} || Organization || The Berkeley Existential Risk Initiative (BERI) launches around this time (under the leadership of Andrew Critch, who previously helped found the Center for Applied Rationality) to assist researchers at institutions working to mitigate existential risk, including AI risk.<ref>{{cite web |url=https://intelligence.org/2017/05/10/may-2017-newsletter/ |title=May 2017 Newsletter |publisher=Machine Intelligence Research Institute |date=May 10, 2017 |accessdate=July 25, 2017 |quote=Interested parties may also wish to apply for the event coordinator position at the new Berkeley Existential Risk Initiative, which will help support work at CHAI and elsewhere.}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ |title=Update on Effective Altruism Funds |publisher=Effective Altruism Forum |date=April 20, 2017 |accessdate=July 25, 2017}}</ref>
| 2017 || {{dts|April 6}} || Publication || 80,000 Hours publishes an article about the pros and cons of working on AI safety, titled "Positively shaping the development of artificial intelligence".<ref>{{cite web |url=https://80000hours.org/problem-profiles/positively-shaping-artificial-intelligence/ |title=Positively shaping the development of artificial intelligence |publisher=80,000 Hours |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1341451772603224 |title=Completely new article on the pros/cons of working on AI safety, and how to actually go about it |date=April 6, 2017}}</ref>
|-
| 2017 || {{dts|May}} || Grant Funding || The Open Philanthropy Project awards a grant of $1.5 million to the {{w|UCLA School of Law}} for work on governance related to AI risk.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref> Two researchers from AI Impacts are authors on the paper.<ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
| 2017 || {{dts|June 14}} || Publication || 80,000 Hours publishes a guide to working in AI policy and strategy, written by Miles Brundage.<ref>{{cite web |url=https://www.facebook.com/80000Hours/posts/1416435978438136 |title=New in-depth guide to AI policy and strategy careers, written with Miles Brundage, a researcher at the University of Oxford’s Future of Humanity Institute |date=June 14, 2017 |publisher=80,000 Hours}}</ref>
|-
| 2017 || {{dts|July}} || Grant Funding || The Open Philanthropy Project awards a grant of $2.4 million to the {{w|Montreal Institute for Learning Algorithms}}.<ref name="donations-portal-open-phil-ai-safety">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI risk) |accessdate=July 27, 2017}}</ref>
|-
| 2017 || {{dts|July}} || Grant Funding || The Open Philanthropy Project awards a grant of about $300,000 to Yale University to support research into the global politics of artificial intelligence led by Allan Dafoe.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|July}} || Grant Funding || The Open Philanthropy Project awards a grant of about $400,000 to the Berkeley Existential Risk Initiative to support BERI's core functions and to help it provide contract workers for the Center for Human-Compatible AI (CHAI) housed at the University of California, Berkeley.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2017 || {{dts|July 15}}–16 || Opinion || At the National Governors Association in Rhode Island, Elon Musk tells US governors that artificial intelligence is an "existential threat" to humanity.<ref>{{cite web |url=http://www.npr.org/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk |date=July 17, 2017 |title=Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk' |publisher=NPR.org |accessdate=July 28, 2017}}</ref>
| 2017 || {{dts|July 23}} || Opinion || During a Facebook Live broadcast from his backyard, Mark Zuckerberg reveals that he is "optimistic" about advanced artificial intelligence and that spreading concern about "doomsday scenarios" is "really negative and in some ways [&hellip;] pretty irresponsible".<ref>{{cite web |url=http://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html |publisher=CNBC |title=Facebook CEO Mark Zuckerberg: Elon Musk's doomsday AI predictions are 'pretty irresponsible' |date=July 24, 2017 |author=Catherine Clifford |accessdate=July 25, 2017}}</ref>
|-
| 2017 || {{dts|October}} || Grant Funding || The Open Philanthropy Project awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as the Open Philanthropy Project having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A major grant from the Open Philanthropy Project |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy Project |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
|-
| 2017 || {{dts|October}} || Project || The first commit for AI Watch, a repository of organizations, people, and products in AI safety, is made on October 23.<ref>{{cite web|url = https://github.com/riceissa/aiwatch/commit/563c48d9dcfd126b475f9a982e8d7af6c743411c|title = Initial commit: AI Watch|last = Rice|first = Issa|date = October 23, 2017|accessdate = April 19, 2020}}</ref> Work on the web portal at aiwatch.issarice.com would begin the next day.<ref>{{cite web|url = https://github.com/riceissa/aiwatch/commit/5329e8c34e599ca11f349cbe29427c1f4b73a20f#diff-b9e00bb3999fd5777328f867e74bbc9e|title = start on portal: AI Watch|date = October 24, 2017|accessdate = April 19, 2020|last = Rice|first = Issa}}</ref>
| 2017 || {{dts|November 3}} || Project || Zvi Mowshowitz and Vladimir Slepnev announce the AI Alignment Prize, a $5,000 prize funded by Paul Christiano for publicly posted work advancing AI alignment.<ref>{{cite web|url = https://www.lesswrong.com/posts/YDLGLnzJTKMEtti7Z/announcing-the-ai-alignment-prize|title = Announcing the AI Alignment Prize|date = November 3, 2017|accessdate = April 19, 2020|last = Slepnev|first = Vladimir|publisher = LessWrong}}</ref> The prize would be discontinued after the fourth round (ending December 31, 2018) due to reduced participation.<ref>{{cite web|url = https://www.alignmentforum.org/posts/nDHbgjdddG5EN6ocg/announcement-ai-alignment-prize-round-4-winners|title = Announcement: AI alignment prize round 4 winners|last = Slepnev|first = Vladimir|date = January 20, 2019|accessdate = April 19, 2020|publisher = Alignment Forum}}</ref>
|-
| 2017 || {{dts|December}} || Grant Funding || {{w|Jaan Tallinn}} makes a donation of about $5 million to the Berkeley Existential Risk Initiative (BERI) Grants Program.<ref name="december-2017-activity-update">{{cite web |url=http://existence.org/2018/01/11/activity-update-december-2017.html |title=Berkeley Existential Risk Initiative {{!}} Activity Update - December 2017 |accessdate=February 8, 2018}}</ref>
|-
| 2017 || {{dts|December 20}} || Publication || The "2017 AI Safety Literature Review and Charity Comparison" is published. The lengthy blog post covers all the published work of prominent organizations focused on AI safety, and is a refresh of a similar post published the previous year.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/XKwiEpWRdfWo7jy7f/2017-ai-safety-literature-review-and-charity-comparison|title = 2017 AI Safety Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 20, 2017|accessdate = August 18, 2019}}</ref>
|-
| 2017 || Year-round || Funding || The huge increase in cryptocurrency prices in this year would drive a lot of donations to AI safety organizations from people who had held cryptocurrency and gotten rich from it.<ref>{{cite web|url = https://www.facebook.com/danielfilan/posts/10210393063045457|title = Claim: if you work in an AI alignment org funded by donations, you shouldn't own much cryptocurrency, since much of your salary comes from people who do, so you'd rather diversify from the risk of a BTC or ETH price crash.|last = Filan|first = Daniel|date = November 18, 2017|accessdate = April 19, 2020}}</ref> MIRI would not be able to match the performance of its 2017 fundraiser in later years, and would cite the unusually high cryptocurrency prices in 2017 as one possible reason.<ref name=miri-2019-fundraiser-review/>
|-
| 2018 || {{dts|February 28}} || Publication || 80,000 Hours publishes a blog post, ''A new recommended career path for effective altruists: China specialist'', suggesting specialization in China as a career path for people in the effective altruist movement. China's likely leading role in the development of artificial intelligence is highlighted as particularly relevant to AI safety efforts.<ref>{{cite web|url = https://80000hours.org/articles/china-careers/|title = A new recommended career path for effective altruists: China specialist|last = Todd|first = Benjamin|last2 = Tse|first2 = Brian|date = February 28, 2018|accessdate = September 8, 2019|publisher = 80,000 Hours}}</ref>
| 2018 || {{dts|April 12}} to 22 || Conference || The first AI Safety Camp is held in Gran Canaria.<ref>{{cite web|url = https://aisafetycamp.com/previous-camps/|title = Previous Camps|accessdate = September 7, 2019}}</ref> The AI Safety Camp team runs about two camps a year.
|-
| 2018 || {{dts|May}} || Grant Funding || The Open Philanthropy Project announces the first set of grants for the Open Phil AI Fellowship, to 7 AI Fellows pursuing research relevant to AI risk. It also makes a grant of $525,000 to Ought and $100,000 to AI Impacts.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2018 || {{dts|July}} || Grant Funding || The Open Philanthropy Project grants $429,770 to the University of Oxford to support research on the global politics of advanced artificial intelligence. The work is led by Professor Allan Dafoe at the Future of Humanity Institute in Oxford, United Kingdom.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2018 || {{dts|July 10}} (beta), {{dts|October 29}} (out of beta) || Project || The team behind LessWrong 2.0 launches a beta for the AI Alignment Forum at AlignmentForum.org on July 10, as a successor to the Intelligent Agent Foundations Forum (IAFF) at agentfoundations.org.<ref>{{cite web|url = https://www.alignmentforum.org/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta|title = Announcing AlignmentForum.org Beta|last = Arnold|first = Raymond|date = July 10, 2018|accessdate = April 18, 2020|publisher = LessWrong}}</ref> On October 29, the Alignment Forum exits beta and becomes generally available.<ref>{{cite web|url = https://www.lesswrong.com/posts/FoiiRDC3EhjHx7ayY/introducing-the-ai-alignment-forum-faq|title = Introducing the AI Alignment Forum (FAQ)|last = Habryka|first = Oliver|last2 = Pace|first2 = Ben|last3 = Arnold|first3 = Raymond|last4 = Babcock|first4 = Jim|publisher = LessWrong|date = October 29, 2018|accessdate = April 18, 2020}}</ref><ref>{{cite web|url = https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/|title = Announcing the new AI Alignment Forum|date = October 29, 2018|accessdate = April 18, 2020|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2018 || {{dts|August 14}} || Grant Funding || Nick Beckstead grants the Machine Intelligence Research Institute (MIRI) $488,994 from the Long-Term Future Fund. This is part of his last set of grants as fund manager; he would subsequently step down and the fund management would move to a different team.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/6g4f7iae5Ok6K6YOaAiyK0|title = July 2018 - Long-Term Future Fund Grants|date = August 14, 2018|accessdate = August 18, 2019|publisher = Effective Altruism Funds}}</ref><ref name="donations-portal-ea-funds-ai-safety">{{cite web|url = https://donations.vipulnaik.com/donor.php?donor=Effective+Altruism+Funds&cause_area_filter=AI+safety|title = Effective Altruism Funds donations made (filtered to cause areas matching AI safety)|accessdate = August 18, 2019}}</ref>
|-
| 2018 || {{dts|September}} to October || Grant Funding || During this period, the Berkeley Existential Risk Initiative (BERI) makes a number of grants to individuals working on projects related to AI safety.<ref name="donations-portal-beri-ai-safety" />
|-
| 2018 || {{dts|November 22}} || Disclosure norms || Nate Soares, executive director of MIRI, publishes MIRI's 2018 update post, which announces MIRI's "nondisclosed-by-default" policy for most of its research.<ref>{{cite web |url=https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ |title=2018 Update: Our New Research Directions - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=November 22, 2018 |accessdate=February 14, 2019}}</ref> The 2018 AI alignment literature review and charity comparison post would discuss the complications created by this policy for evaluating MIRI's research,<ref name=larks-2018/> and so would the 2019 post.<ref name=larks-2019/> In its 2019 fundraiser review, MIRI would mention the nondisclosure-by-default policy as one possible reason for raising less money in its 2019 fundraiser.<ref name=miri-2019-fundraiser-review>{{cite web|url = https://intelligence.org/2020/02/13/our-2019-fundraiser-review/|title = Our 2019 Fundraiser Review|date = February 13, 2020|accessdate = April 19, 2020|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2018 || {{dts|November 29}} || Grant Funding || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of grants: $40,000 to Machine Intelligence Research Institute, $10,000 to Ought, $21,000 to AI Summer School, and $4,500 to the AI Safety Unconference.<ref name="donations-portal-ea-funds-ai-safety" />
|-
| 2018 || {{dts|December 17}} || Publication || The "2018 AI Alignment Literature Review and Charity Comparison" is published on the Effective Altruism Forum. It surveys AI safety work in 2018. It continues an annual tradition of similar blog posts in 2016 and 2017.<ref name=larks-2018>{{cite web|url = https://forum.effectivealtruism.org/posts/BznrRBgiDdcTwWWsB/2018-ai-alignment-literature-review-and-charity-comparison|title = 2018 AI Alignment Literature Review and Charity Comparison|author = Larks|publisher = Effective Altruism Forum|date = December 17, 2018|accessdate = August 18, 2019}}</ref>
|-
| 2019 || {{dts|January}} || Grant Funding || The Open Philanthropy Project grants $250,000 to the Berkeley Existential Risk Initiative (BERI) to temporarily or permanently hire machine learning research engineers dedicated to BERI's collaboration with the Center for Human-Compatible Artificial Intelligence (CHAI).<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|January}} || Grant Funding || The Open Philanthropy Project provides a founding grant for the Center for Security and Emerging Technology (CSET) at Georgetown University of $55 million over 5 years.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|February}} || Grant Funding || The Open Philanthropy Project grants $2,112,500 to the Machine Intelligence Research Institute (MIRI) over two years. This is part of the first batch of grants decided by the Committee for Effective Altruism Support, which will set "grant sizes for a number of our largest grantees in the effective altruism community, including those who work on long-termist causes."<ref name="donations-portal-open-phil-ai-safety" /> Around the same time, BERI grants $600,000 to MIRI.<ref name="donations-portal-beri-ai-safety">{{cite web|url = https://donations.vipulnaik.com/donor.php?donor=Berkeley+Existential+Risk+Initiative&cause_area_filter=AI+safety|title = Berkeley Existential Risk Initiative donations made (filtered to cause areas matching AI safety)|accessdate = August 18, 2019}}</ref>
|-
| 2019 || {{dts|April 7}} || Grant Funding || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 23 grants totaling $923,150. About half the grant money is to organizations or projects directly working in AI safety. Recipients include the Machine Intelligence Research Institute (MIRI), AI Safety Camp, Ought, and a number of individuals working on AI safety projects, including three in deconfusion research.<ref name="donations-portal-ea-funds-ai-safety" />
|-
| 2019 || {{dts|May}} || Grant Funding || The Open Philanthropy Project announces the second class of the Open Phil AI Fellowship, with 8 machine learning researchers in the class, receiving a total of $2,325,000 in grants.<ref name="donations-portal-open-phil-ai-safety"/>
|-
| 2019 || {{dts|June 7}} || Fictional portrayal || The movie ''{{w|I Am Mother}}'' is released on Netflix. According to a comment on Slate Star Codex: "you can use it to illustrate everything from paperclip maximization to deontological kill switches".<ref>{{cite web|url = https://slatestarcodex.com/2019/06/05/open-thread-129-25/|title = OPEN THREAD 129.25|date = June 8, 2019|accessdate = August 18, 2019}}</ref>
| 2019 || {{dts|August 25}} || || Grantmaking funded by {{w|Jaan Tallinn}} and previously done by the Berkeley Existential Risk Initiative (BERI) moves to the newly created Survival and Flourishing Fund (SFF).<ref>{{cite web|url = http://existence.org/tallinn-grants-future/|title = The Future of Grant-making Funded by Jaan Tallinn at BERI|date = August 25, 2019|accessdate = April 18, 2020|publisher = Berkeley Existential Risk Initiative}}</ref> BERI's grantmaking in this space had previously included AI safety organizations. However, as of April 2020, SFF's grants have not gone to any organizations exclusively focused on AI safety, but rather to organizations working on broader global catastrophic risks and other adjacent topics.
|-
| 2019 || {{dts|August 30}} || Grant Funding || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $415,697 to organizations and individuals. About half the grant money is to organizations or projects working in AI safety and related AI strategy, governance, and policy issues. With the exception of a grant to AI Safety Camp, all the other grants related to AI safety are to individuals.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/4UBI3Q0TBGbWcIZWCh4EQV|title = August 2019: Long-Term Future Fund Grants and Recommendations|date = August 30, 2019|accessdate = April 18, 2020|publisher = Effective Altruism Funds}}</ref>
|-
| 2019 || {{dts|October 8}} || Publication || The book ''Human Compatible'' by {{w|Stuart J. Russell}} (co-author with Peter Norvig of ''Artificial Intelligence: A Modern Approach'' and head of the Center for Human-Compatible AI at UC Berkeley) is published by Viking Press. The book is reviewed by ''The Guardian''<ref>{{cite web|url = https://www.theguardian.com/books/2019/oct/24/human-compatible-ai-problem-control-stuart-russell-review|title = Human Compatible by Stuart Russell review -- AI and our future. Creating machines smarter than us could be the biggest event in human history -- and the last|last = Sample|first = Ian|date = October 24, 2019|accessdate = April 18, 2020|publisher = The Guardian}}</ref> and interviews with the author are published by Vox<ref>{{cite web|url = https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible|title = AI could be a disaster for humanity. A top computer scientist thinks he has the solution. Stuart Russell wrote the book on AI and is leading the fight to change how we build it.|last = Piper|first = Kelsey|date = October 26, 2019|accessdate = April 18, 2020|publisher = Vox}}</ref> and TechCrunch.<ref>{{cite web|url = https://techcrunch.com/2020/03/20/stuart-russell-on-how-to-make-ai-human-compatible/|title = Stuart Russell on how to make AI ‘human-compatible’: 'We've actually thought about AI the wrong way from the beginning'|last = Coldewey|first = Devin|date = March 20, 2020|accessdate = April 18, 2020|publisher = TechCrunch}}</ref>
|-
| 2019 || {{dts|November}} || Grant Funding || The Open Philanthropy Project makes a $1 million grant to Ought, double the previous grant of $525,000.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|November 6}} || Publication || An "AI Alignment Research Overview" by Jacob Steinhardt (one of the co-authors of ''Concrete Problems in AI Safety'') is published to LessWrong and the AI Alignment Forum.<ref>{{cite web|url = https://www.lesswrong.com/posts/7GEviErBXcjJsbSeD/ai-alignment-research-overview-by-jacob-steinhardt|title = AI Alignment Research Overview (by Jacob Steinhardt)|last = Pace|first = Ben|date = November 6, 2019|accessdate = April 18, 2020|publisher = LessWrong}}</ref>
|-
| 2019 || {{dts|November 21}} || Grant Funding || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $466,000 to organizations and individuals. About a quarter of the grant money is to organizations and individuals working on AI safety. With the exception of a grant to AI Safety Camp, all the other grants related to AI safety are to individuals.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/60MJaGYoLb0zGlIZxuCMPg|title = November 2019: Long-Term Future Fund Grants|date = November 21, 2019|accessdate = April 18, 2020|publisher = Effective Altruism Funds}}</ref>
|-
| 2019 || {{dts|November 24}} || || Toon Alfrink publishes on LessWrong a postmortem for RAISE, an attempt to build an online course for AI safety. The blog post explains the challenges with running RAISE and the reasons for eventually shutting it down.<ref>{{cite web|url = https://www.lesswrong.com/posts/oW6mbA3XHzcfJTwNq/raise-post-mortem|title = RAISE post-mortem|last = Alfrink|first = Toon|date = November 24, 2019|accessdate = April 18, 2020|publisher = LessWrong}}</ref>
| 2020 || {{dts|January}} || Publication || Rohin Shah (the person who started the weekly AI Alignment Newsletter) publishes a blog post on LessWrong titled "AI Alignment 2018-19 Review" that he describes as "a review post of public work in AI alignment over 2019, with some inclusions from 2018."<ref>{{cite web|url = https://www.lesswrong.com/posts/dKxX76SCfCvceJXHv/ai-alignment-2018-19-review|title = AI Alignment 2018-19 Review|last = Shah|first = Rohin|date = January 27, 2020|accessdate = April 18, 2020|publisher = LessWrong}}</ref>
|-
| 2020 || {{dts|February}} || Grant Funding || The Open Philanthropy Project makes grants to AI safety organizations Ought ($1.59 million) and Machine Intelligence Research Institute ($7.7 million) with the money amount determined by the Committee for Effective Altruism Support (CEAS). Other organizations receiving money based on CEAS recommendations at around the same time are the Centre for Effective Altruism and 80,000 Hours, neither of which is primarily focused on AI safety.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2020 || {{dts|March}} || Publication || ''The Precipice: Existential Risk and the Future of Humanity'' by {{w|Toby Ord}} (affiliated with the Future of Humanity Institute and with Oxford University) is published by Hachette Books. The book covers risks including artificial intelligence, biological risks, and climate change. The author appears on podcasts to talk about the book, including those of the Future of Life Institute<ref>{{cite web|url = https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/?cn-reloaded=1|title = FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord|last = Perry|first = Lucas|date = March 31, 2020|accessdate = April 18, 2020|publisher = Future of Life Institute}}</ref> and 80,000 Hours.<ref>{{cite web|url = https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/|title = Toby Ord on the precipice and humanity's potential futures|last = Wiblin|first = Robert|last2 = Koehler|first2 = Arden|last3 = Harris|first3 = Kieran|publisher = 80,000 Hours|date = March 7, 2020|accessdate = April 18, 2020}}</ref>