Timeline of AI safety

| 2017 || Cryptocurrency prices rise sharply during the year, leading to a number of donations to MIRI from people who became wealthy through cryptocurrency. The AI safety funding and support landscape changes somewhat with the launch of the Berkeley Existential Risk Initiative (BERI) (whose grants program is funded by Jaan Tallinn) and the Effective Altruism Funds, specifically the Long-Term Future Fund. Open Phil makes several grants in AI safety, including a $30 million grant to OpenAI and a $3.75 million grant to MIRI. AI safety attracts dismissive commentary from Mark Zuckerberg, while Elon Musk continues to highlight its importance. The year begins with the Asilomar Conference and the Asilomar AI Principles, and initiatives such as AI Watch and the AI Alignment Prize begin toward the end of the year.
|-
| 2018 || Activity in the field of AI safety becomes steadier, in terms of both ongoing discussion (with the launch of the AI Alignment Newsletter, the AI Alignment Podcast, and the Alignment Forum) and funding (with structural changes to the Long-Term Future Fund so that it makes grants more regularly, the introduction of the annual Open Phil AI Fellowship grants, and increased grantmaking by BERI). Near the end of the year, MIRI announces its nondisclosure-by-default policy. The Stanford Center for AI Safety launches.
|-
| 2019 || The Center for Security and Emerging Technology (CSET), which focuses on AI safety and other security risks, launches with a five-year, $55 million grant from Open Phil. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) launches. Grantmaking from the Long-Term Future Fund picks up pace; BERI hands off the grantmaking of Jaan Tallinn's funds to the Survival and Flourishing Fund (SFF). Open Phil begins using the Committee for Effective Altruism Support to decide grant amounts for some of its AI safety grants, including grants to MIRI. OpenAI unveils its GPT-2 model but initially withholds the full model, sparking discussion of disclosure norms.
|}