Timeline of Machine Intelligence Research Institute

|-
| 2016 || {{dts|April 1}}–3 || Workshop || The Self-Reference, Type Theory, and Formal Verification workshop takes place.<ref name="workshops" />
|-
| 2016 || {{dts|May 6}} (talk), {{dts|December 28}} (transcript release) || Publication || On May 6, 2016, Eliezer Yudkowsky gives a talk titled "AI Alignment: Why It’s Hard, and Where to Start." On December 28, 2016, an edited version of the transcript is released on the MIRI blog.<ref>{{cite web|url = https://intelligence.org/stanford-talk/|title = The AI Alignment Problem: Why It’s Hard, and Where to Start|date = May 6, 2016|accessdate = May 7, 2020}}</ref><ref>{{cite web|url = https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/|title = AI Alignment: Why It’s Hard, and Where to Start|last = Yudkowsky|first = Eliezer|date = December 28, 2016|accessdate = May 7, 2020}}</ref>
|-
| 2016 || {{dts|May 28}}–29 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Transparency takes place.<ref name="workshops" />
|-
| 2017 || {{dts|November 16}} || Publication || {{W|Eliezer Yudkowsky}}'s sequence/book ''Inadequate Equilibria'' is fully published, having been released chapter-by-chapter on LessWrong 2.0 and the Effective Altruism Forum starting October 28.<ref>{{cite web |url=https://www.lesserwrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty |title=Inadequacy and Modesty |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/1g4/inadequacy_and_modesty/ |title=Inadequacy and Modesty |publisher=Effective Altruism Forum |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=https://equilibriabook.com/discussion/ |title=Discussion - Inadequate Equilibria |publisher=Inadequate Equilibria |accessdate=December 12, 2017}}</ref> The book outlines Yudkowsky's approach to epistemology, covering topics such as when to trust expert consensus and when one can expect to do better than the established consensus. It is reviewed on multiple blogs, including Slate Star Codex (Scott Alexander),<ref>{{cite web |url=http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/ |title=Book Review: Inadequate Equilibria |date=November 30, 2017 |publisher=Slate Star Codex |accessdate=December 12, 2017}}</ref> Shtetl-Optimized ({{W|Scott Aaronson}}),<ref>{{cite web |url=https://www.scottaaronson.com/blog/?p=3535 |title=Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky |accessdate=December 12, 2017}}</ref> and Overcoming Bias ({{W|Robin Hanson}}).<ref>{{cite web |url=http://www.overcomingbias.com/2017/11/why-be-contrarian.html |title=Overcoming Bias : Why Be Contrarian? |date=November 25, 2017 |author=Robin Hanson |accessdate=December 12, 2017}}</ref>
|-
| 2017 || {{dts|November 25}}, November 26 || Publication || Eliezer Yudkowsky publishes a two-part series, "Security Mindset and Ordinary Paranoia" and "Security Mindset and the Logistic Success Curve." The series uses the computer security notion of a "security mindset" as an analogy to highlight the importance and counterintuitiveness of AI safety, and builds on Yudkowsky's 2016 talk "AI Alignment: Why It’s Hard, and Where to Start."<ref>{{cite web|url = https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/|title = Security Mindset and Ordinary Paranoia|date = November 25, 2017|accessdate = May 7, 2020|publisher = Machine Intelligence Research Institute|last = Yudkowsky|first = Eliezer}}</ref><ref>{{cite web|url = https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/|title = Security Mindset and the Logistic Success Curve|date = November 26, 2017|accessdate = May 7, 2020|publisher = Machine Intelligence Research Institute|last = Yudkowsky|first = Eliezer}}</ref>
|-
| 2017 || {{dts|December 1}} || Financial || MIRI's 2017 fundraiser begins. The announcement post describes MIRI's fundraising targets, recent work at MIRI (including recent hires), and MIRI's strategic background (which gives a high-level overview of how MIRI's work relates to long-term outcomes).<ref>{{cite web |url=https://intelligence.org/2017/12/01/miris-2017-fundraiser/ |title=MIRI's 2017 Fundraiser |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |author=Malo Bourgon |date=December 1, 2017 |accessdate=December 12, 2017}}</ref> The fundraiser would conclude with $2.5 million raised from over 300 distinct donors. The largest donation would be from {{w|Vitalik Buterin}} ($763,970 worth of {{W|Ethereum}}).<ref>{{cite web |url=https://intelligence.org/2018/01/10/fundraising-success/ |title=Fundraising success! |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=January 10, 2018 |accessdate=January 30, 2018}}</ref>