Timeline of AI safety

==Big picture==
 
===Overall summary===
{| class="wikitable"
! Time period !! Development summary !! Representative developments
|-
| 2013 to present || Mainstreaming of AI safety; separation from transhumanism || SIAI changes its name to MIRI, sells off the "Singularity" brand to Singularity University, grows considerably in size, and receives a lot of funding. ''Superintelligence'', the book by Nick Bostrom, is released. New organizations founded include the Future of Life Institute (FLI), OpenAI, the Center for the Study of Existential Risk (CSER), the Leverhulme Centre for the Future of Intelligence (CFI), the Center for Human-Compatible AI (CHAI), the Berkeley Existential Risk Initiative (BERI), Ought, and the Center for Security and Emerging Technology (CSET); OpenAI in particular grows quickly and becomes quite famous and influential. Prominent individuals such as Elon Musk, Sam Altman, and Bill Gates talk about the importance of AI safety and the risks of unfriendly AI. Key funders of this ecosystem include the Open Philanthropy Project and Elon Musk.
|}
 
===Highlights by year (2013 onward)===
 
{| class="wikitable"
! Year !! Highlights
|-
| 2013 || Research and outreach focused on forecasting and timelines continue. Connections with the nascent effective altruism movement strengthen. The Center for the Study of Existential Risk and the Foundational Research Institute launch.
|-
| 2014 || ''{{w|Superintelligence: Paths, Dangers, Strategies}}'' by Nick Bostrom is published. The {{w|Future of Life Institute}} is founded and AI Impacts launches. AI safety gets more mainstream attention, including from {{w|Elon Musk}} and {{w|Stephen Hawking}}, as well as a fictional portrayal in ''{{w|Ex Machina}}''. While forecasting and timelines remain a focus of AI safety efforts, attention shifts toward the technical AI safety agenda, with the launch of the Intelligent Agent Foundations Forum.
|-
| 2015 || AI safety continues to become more mainstream: {{w|OpenAI}} (supported by Elon Musk and {{w|Sam Altman}}) and the {{w|Leverhulme Centre for the Future of Intelligence}} are founded, the {{w|Open Letter on Artificial Intelligence}} is published, the Puerto Rico conference takes place, and the topic receives coverage on {{w|Wait But Why}}. This also appears to be the last year that Peter Thiel donates in the area.
|-
| 2016 || The Open Philanthropy Project (Open Phil) makes AI safety a focus area; it would ramp up giving in the area considerably starting around this time. The landmark paper "Concrete Problems in AI Safety" is published, and OpenAI's safety work picks up pace. The Center for Human-Compatible AI launches. The annual tradition of LessWrong posts providing an AI alignment literature review and charity comparison for the year begins. AI safety continues to become more mainstream, with the {{w|Partnership on AI}} and the Obama administration's efforts to understand the subject.
|-
| 2017 || Cryptocurrency prices rise dramatically this year, leading to a number of donations to MIRI from people who became wealthy through cryptocurrency. The AI safety funding and support landscape changes somewhat with the launch of the Berkeley Existential Risk Initiative (BERI) (whose grants program is funded by Jaan Tallinn) and the Effective Altruism Funds, specifically the Long-Term Future Fund. Open Phil makes several grants in AI safety, including a $30 million grant to OpenAI and a $3.75 million grant to MIRI. AI safety attracts dismissive commentary from Mark Zuckerberg, while Elon Musk continues to highlight its importance. Initiatives such as AI Watch and the AI Alignment Prize begin.
|-
| 2018 || Activity in the field of AI safety becomes more steady, in terms of both ongoing discussion (with the launch of the AI Alignment Newsletter) and funding (with structural changes to the Long-Term Future Fund to make it grant more regularly, the introduction of the annual Open Phil AI Fellowship grants, and more grantmaking by BERI). Near the end of the year, MIRI announces its nondisclosure-by-default policy.
|-
| 2019 || The Center for Security and Emerging Technology (CSET), which focuses on AI safety and other security risks, launches with a five-year, $55 million grant from Open Phil. Grantmaking from the Long-Term Future Fund picks up pace; BERI hands off the grantmaking of Jaan Tallinn's money to the Survival and Flourishing Fund (SFF). Open Phil begins using the Committee for Effective Altruism Support to decide grant amounts for some of its AI safety grants, including grants to MIRI.
|}