Timeline of AI safety
| 2016 || {{dts|June 21}} || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would be highlighted in a blog post by the Open Philanthropy Project.<ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref> It would become a landmark in the AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
|-
| 2016 || {{dts|August}} || Organization || The UC Berkeley Center for Human-Compatible Artificial Intelligence launches under the leadership of AI expert {{w|Stuart J. Russell}} (co-author with Peter Norvig of ''Artificial Intelligence: A Modern Approach''). The focus of the center is "to ensure that AI systems are beneficial to humans".<ref>{{cite web |url=http://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |date=August 29, 2016 |publisher=Berkeley News |accessdate=July 26, 2017}}</ref>
|-
| 2016 || {{dts|August}} || Grant || The Open Philanthropy Project awards a grant of $5.6 million over two years to the newly formed {{w|Center for Human-Compatible AI}} at the University of California, Berkeley.<ref name="donations-portal-open-phil-ai-safety" />
|-
| 2019 || {{dts|August 30}} || Grant || The Long-Term Future Fund, one of the Effective Altruism Funds, announces a set of 13 grants totaling $415,697 USD to organizations and individuals. About half the grant money goes to organizations or projects working on AI safety and related AI strategy, governance, and policy issues. With the exception of a grant to AI Safety Camp, all the other grants related to AI safety are to individuals.<ref>{{cite web|url = https://app.effectivealtruism.org/funds/far-future/payouts/4UBI3Q0TBGbWcIZWCh4EQV|title = August 2019: Long-Term Future Fund Grants and Recommendations|date = August 30, 2019|accessdate = April 18, 2020|publisher = Effective Altruism Funds}}</ref>
|-
| 2019 || {{dts|October 8}} || Publication || The book ''Human Compatible'' by {{w|Stuart J. Russell}} (co-author with Peter Norvig of ''Artificial Intelligence: A Modern Approach'' and head of the Center for Human-Compatible AI at UC Berkeley) is published by Viking Press. The book is reviewed by ''The Guardian'',<ref>{{cite web|url = https://www.theguardian.com/books/2019/oct/24/human-compatible-ai-problem-control-stuart-russell-review|title = Human Compatible by Stuart Russell review -- AI and our future. Creating machines smarter than us could be the biggest event in human history -- and the last|last = Sample|first = Ian|date = October 24, 2019|accessdate = April 18, 2020|publisher = The Guardian}}</ref> and interviews with the author are published by Vox<ref>{{cite web|url = https://www.vox.com/future-perfect/2019/10/26/20932289/ai-stuart-russell-human-compatible|title = AI could be a disaster for humanity. A top computer scientist thinks he has the solution. Stuart Russell wrote the book on AI and is leading the fight to change how we build it.|last = Piper|first = Kelsey|date = October 26, 2019|accessdate = April 18, 2020|publisher = Vox}}</ref> and TechCrunch.<ref>{{cite web|url = https://techcrunch.com/2020/03/20/stuart-russell-on-how-to-make-ai-human-compatible/|title = Stuart Russell on how to make AI ‘human-compatible’: 'We've actually thought about AI the wrong way from the beginning'|last = Coldewey|first = Devin|date = March 20, 2020|accessdate = April 18, 2020|publisher = TechCrunch}}</ref>
|-
| 2019 || {{dts|November}} || Grant || The Open Philanthropy Project makes a $1 million grant to Ought, double the previous grant of $525,000.<ref name="donations-portal-open-phil-ai-safety" />