Timeline of Machine Intelligence Research Institute

This is a timeline of the Machine Intelligence Research Institute (MIRI), a nonprofit organization that does work related to AI safety.

Sample questions

This is an experimental section that provides some sample questions for readers, similar to the reading questions that might come with a book. Some readers of this timeline might arrive at the page without a clear idea of what they want to get out of it. Having some "interesting" questions can help in reading the page with more purpose and in getting a sense of why the timeline is an important tool to have.

The following are some interesting questions that can be answered by reading this timeline:

  • Which Singularity Summits did MIRI host, and when did they happen? (Sort by the "Event type" column and look at the rows labeled "Conference".)
  • What was MIRI up to for the first ten years of its existence (before Luke Muehlhauser joined, before Holden Karnofsky wrote his critique of the organization)? (Scan the years 2000–2009.)
  • How has MIRI's explicit mission changed over the years? (Sort by the "Event type" column and look at the rows labeled "Mission".)

The following are some interesting questions that are difficult or impossible to answer just by reading the current version of this timeline, but might be possible to answer using a future version of this timeline:

  • When did some big donations to MIRI take place (for instance, the one by Peter Thiel)?
  • Has MIRI "done more things" during 2010–2013 or during 2014–2017?

Big picture

Time period Development summary More details
1998–2002 Various publications related to creating a superhuman AI Eliezer Yudkowsky writes various documents about designing a superhuman AI during this period, including "Coding a Transhuman AI", "The Plan to Singularity", and "Creating Friendly AI". The Flare Programming Language project launches to aid the creation of a superhuman AI.
2004–2009 Tyler Emerson's tenure as executive director
2006–2009 Modern rationalist community forms Overcoming Bias is created, LessWrong is created, Eliezer Yudkowsky writes the Sequences, and so on.
2006–2012 The Singularity Summits take place annually After the summit in 2012, the organization renames itself from "Singularity Institute for Artificial Intelligence" to the current "Machine Intelligence Research Institute".
2009–2012 Michael Vassar's tenure as president
2011–2015 Luke Muehlhauser's tenure as executive director
2013–present Change of focus MIRI changes focus to put less effort into public outreach and shift its research to Friendly AI math research.
2015–present Nate Soares's tenure as executive director

Visual data

Wikipedia pageviews across the different names

The image below shows desktop pageviews of the page Machine Intelligence Research Institute and its predecessor pages, "Singularity Institute for Artificial Intelligence" and "Singularity Institute".[1] The name changes occurred on the following dates:[2][3]

  • February 1, 2013: Page moved from "Singularity Institute" to "Machine Intelligence Research Institute" with both old names redirecting to the new name
  • April 16, 2012: Page moved from "Singularity Institute for Artificial Intelligence" to "Singularity Institute" with the old name redirecting to the new name
  • December 23, 2011: Two pages "Singularity Institute" and "Singularity Institute for Artificial Intelligence" merged into the single page "Singularity Institute for Artificial Intelligence"

[Image: MIRI Wikipedia desktop pageviews]
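For readers who want to update or sanity-check a chart like this, the sketch below shows one possible way to pull monthly desktop pageview counts for the three page titles from the public Wikimedia Pageviews REST API. This is a minimal illustration under stated assumptions, not the method used to produce the image above (the image is based on the Wikipedia Views tool cited in the references): this API only provides data from July 2015 onward, so figures from before the renames would have to come from other sources, and the redirect titles may return little or no data.

# Minimal sketch, assuming the public Wikimedia Pageviews REST API
# (per-article, monthly granularity; data available from July 2015 onward).
import requests

TITLES = [
    "Machine_Intelligence_Research_Institute",
    "Singularity_Institute_for_Artificial_Intelligence",
    "Singularity_Institute",
]
API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/desktop/user/{title}/monthly/{start}/{end}")

def monthly_desktop_views(title, start="20150701", end="20170715"):
    """Return a dict mapping YYYYMM to desktop pageviews for one article title."""
    url = API.format(title=title, start=start, end=end)
    resp = requests.get(url, headers={"User-Agent": "miri-timeline-example"})
    resp.raise_for_status()  # note: the API returns 404 if a title has no data
    # Each item carries a 10-digit timestamp (YYYYMMDDHH); keep the YYYYMM part.
    return {item["timestamp"][:6]: item["views"] for item in resp.json()["items"]}

for title in TITLES:
    views = monthly_desktop_views(title)
    print(title, sum(views.values()), "desktop views, July 2015 onward")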

Full timeline

Year Month and date Event type Details
1979 September 11 Eliezer Yudkowsky is born.[4]
1996 November 18 Eliezer Yudkowsky writes the first version of "Staring into the Singularity".[5]
1998 Publication The initial version of "Coding a Transhuman AI" (CaTAI) is published.[6]
1999 March 11 The Singularitarian mailing list is launched. The mailing list page notes that although hosted on MIRI's website, the mailing list "should be considered as being controlled by the individual Eliezer Yudkowsky".[7]
1999 September 17 The Singularitarian mailing list is first informed (by Yudkowsky?) of "The Plan to Singularity" (called "Creating the Singularity" at the time).[8]
2000 January 1 Publication "The Plan to Singularity" version 1.0 is written and published by Eliezer Yudkowsky, and posted to the Singularitarian, Extropians, and transhuman mailing lists.[8]
2000 January 1 Publication "The Singularitarian Principles" version 1.0 by Eliezer Yudkowsky is published.[9]
2000 February 6 The first email is sent on SL4 ("Shock Level Four"), a mailing list about transhumanism, superintelligent AI, existential risks, and so on.[10][11]
2000 May 18 Publication "Coding a Transhuman AI" (CaTAI) version 2.0a is "rushed out in time for the Foresight Gathering".[12]
2000 July 27 Mission Machine Intelligence Research Institute is founded as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[13]:3[14]
2000 September 1 Publication Large parts of "The Plan to Singularity" are marked obsolete "due to formation of Singularity Institute, and due to fundamental shifts in AI strategy caused by publication of CaTAI [Coding a Transhuman AI] 2".[8]
2000 September 7 Publication "Coding a Transhuman AI" (CaTAI) version 2.2.0 is published.[12]
2000 September 14 The first Wayback Machine snapshot of MIRI's website is from this day, using the singinst.org domain name.[15]
2001 April 8 MIRI begins accepting donations after receiving tax-exempt status.[16]
2001 April 18 Publication Version 0.9 of "Creating Friendly AI" is released.[17]
2001 June 14 Publication The "SIAI Guidelines on Friendly AI" are published.[18]
2001 June 15 Publication Version 1.0 of "Creating Friendly AI" is published.[19][17]
2001 July 23 Project MIRI announces that it has formally launched the development of the Flare programming language under Dmitriy Myshkin.[20]
2001 December 21 Domain MIRI obtains the flare.org domain name for its Flare language project.[20]
2002 March 8 AI box The first AI box experiment by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[21]
2002 April 7 Publication A draft of "Levels of Organization in General Intelligence" is announced on SL4.[22][23]
2002 July 4–5 AI box The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.[24]
2002 September 6 Staff Christian Rovner is appointed as MIRI's volunteer coordinator.[20]
2002 October 1 MIRI "releases a major new site upgrade" with various new pages.[20]
2002 October 7 Project MIRI announces the creation of its volunteers mailing list.[20]
2003 Project The Flare Programming language project is officially canceled.[25]
2003 Publication Eliezer Yudkowsky's "An Intuitive Explanation of Bayesian Reasoning" is published.[26]
2004 March 4–11 Staff MIRI announces Tyler Emerson as executive director.[27][28]
2004 April 7 Staff Michael Anissimov is announced as MIRI's advocacy director.[29]
2004 April 14 Outside review The first version of the Wikipedia page for MIRI is created.[30]
2004 May Publication Eliezer Yudkowsky's paper "Coherent Extrapolated Volition" is published around this time.[31] It is originally called "Collective Volition", and is announced on the MIRI website on August 16.[32][27]
2004 August 5–8 Conference TransVision 2004 takes place. TransVision is the World Transhumanist Association's annual event. MIRI is a sponsor for the event.[27]
2005 January 4 Publication "A Technical Explanation of Technical Explanation" is published.[33] It is announced on the MIRI news page on this day.[27]
2005 Conference MIRI does "AI and existential risk presentations at Stanford, Immortality Institute's Life Extension Conference, and the Terasem Foundation".[34]
2005 Publication Eliezer Yudkowsky writes chapters for Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković.[34] The book would be published in 2008.
2005 February 2 MIRI relocates from the Atlanta metropolitan area of Georgia to the Bay Area of California.[27]
2005 July 22–24 Conference TransVision 2005 takes place in Caracas, Venezuela. MIRI is a sponsor for the event.[27]
2005 August 21 AI box The third AI box experiment by Eliezer Yudkowsky, against Carl Shulman as gatekeeper, takes place. The AI is released.[35]
2005–2006 December 20, 2005 – February 19, 2006 Financial The 2006 $100,000 Singularity Challenge, a fundraiser in which Peter Thiel matches donations up to $100,000, takes place. The fundraiser successfully matches the $100,000 amount.[27][36]
2006 Publication "Twelve Virtues of Rationality" is published.[37]
2006 February 13 Peter Thiel joins MIRI's Board of Advisors.[27]
2006 May 13 Conference The first Singularity Summit takes place at Stanford University.[38][39][40]
2006 November Robin Hanson starts Overcoming Bias.[41]
2007 Mission MIRI's organization mission ("Organization's Primary Exempt Purpose" on Form 990) changes to: "To develop safe, stable and self-modifying Artificial General Intelligence. And to support novel research and to foster the creation of a research community focused on Artificial General Intelligence and Safe and Friendly Artificial Intelligence."[42] This mission would be used in 2008 and 2009 as well.
2007 Project MIRI's outreach blog is started.[34]
2007 Project MIRI's Interview Series is started.[34]
2007 May 16 Project MIRI's introductory video is published on YouTube.[43][34]
2007 September 8–9 Conference The Singularity Summit 2007 takes place in the San Francisco Bay Area.[38][44][45]
2008 Publication "The Simple Truth" is published.[46]
2008 Project MIRI expands its Interview Series.[34]
2008 Project MIRI begins its summer intern program.[34]
2008 Project OpenCog is founded "via a grant from the [MIRI], and the donation from Novamente LLC of a large body of software code and software designs developed during the period 2001–2007".[47] (See also OpenCog § Relation to Singularity Institute.)
2008 October 25 Conference The Singularity Summit 2008 takes place in San Jose.[48][49]
2008 November–December Outside review The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.[50][51]
2009 Project MIRI establishes the Visiting Fellows Program.[34]
2009 (early) Staff Executive director Tyler Emerson departs MIRI.[52]
2009 (early) Staff Michael Anissimov is hired as a media director.[52] (Since he was advocacy director as far back as 2004, it's not clear if he left the organization and came back, or if he just changed positions.)
2009 February Project Eliezer Yudkowsky starts LessWrong using as seed material his posts on Overcoming Bias.[53] On the 2009 accomplishments page, MIRI describes LessWrong as being "important to the Singularity Institute's work towards a beneficial Singularity in providing an introduction to issues of cognitive biases and rationality relevant for careful thinking about optimal philanthropy and many of the problems that must be solved in advance of the creation of provably human-friendly powerful artificial intelligence". And: "Besides providing a home for an intellectual community dialoguing on rationality and decision theory, Less Wrong is also a key venue for SIAI recruitment. Many of the participants in SIAI's Visiting Fellows Program first discovered the organization through Less Wrong."[52]
2009 February 16 Staff Michael Vassar announces himself as president of MIRI.[54]
2009 April Publication Eliezer Yudkowsky completes the Sequences.[52]
2009 August 13 Social media The Singularity Institute Twitter account, singinst, is created.[55]
2009 October Project A website maintained by MIRI, The Uncertain Future, first appears around this time.[56][57] The goal of the website is to "allow those interested in future technology to form their own rigorous, mathematically consistent model of how the development of advanced technologies will affect the evolution of civilization over the next hundred years".[58]
2009 October 3–4 Conference The Singularity Summit 2009 takes place in New York.[59][60]
2009 November Financial "Misappropriation of assets, by a contractor, was discovered in November 2009."[61]
2009 December Staff Amy Willey joins MIRI as Chief Compliance Officer.[52]
2009 December 11 Influence The third edition of Artificial Intelligence: A Modern Approach by Stuart J. Russell and Peter Norvig is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
2009 December 12 Project The Uncertain Future reaches beta and is announced on the MIRI blog.[62]
2009 Financial MIRI reports $118,803.00 in theft during this year.[34][63][64][65] The theft was by two former employees.[66]
2010 Mission The organization mission changes to: "To develop the theory and particulars of safe self-improving Artificial Intelligence; to support novel research and foster the creation of a research community focused on safe Artificial General Intelligence; and to otherwise improve the probability of humanity surviving future technological advances."[67] This mission would be used in 2011 and 2012 as well.
2010 February 28 Publication The first chapter of Eliezer Yudkowsky's fan fiction Harry Potter and the Methods of Rationality is published. The book would be published as a serial concluding on March 14, 2015.[68][69] The fan fiction would become the initial point of contact with MIRI for several of its larger donors.[70]
2010 August 14–15 Conference The Singularity Summit 2010 takes place in San Francisco.[71]
2010 December 21 Social media The first post on the MIRI Facebook page is from this day.[72][73]
2010–2011 December 21, 2010 – January 20, 2011 Financial The Tallinn–Evans $125,000 Singularity Challenge takes place. The Challenge is a fundraiser in which Edwin Evans and Jaan Tallinn match each dollar donated to MIRI up to $125,000.[74][75]
2011 February Outside review Holden Karnofsky of GiveWell has a conversation with MIRI staff. The conversation reveals the existence of a "Persistent Problems Group" at MIRI, which will supposedly "assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition".[76] On April 30, Karnofsky would post the conversation to the GiveWell mailing list.[77]
2011 April Staff Luke Muehlhauser begins as an intern at MIRI.[78]
2011 May 10 – June 24 Outside review Holden Karnofsky of GiveWell and Jaan Tallinn (with Dario Amodei being present in the initial phone conversation) correspond regarding MIRI's work. The correspondence is posted to the GiveWell mailing list on July 18.[79]
2011 June 24 Domain A Wayback Machine snapshot on this day shows that singularity.org has turned into a GoDaddy.com placeholder.[80] Before this, the domain hosts a blog that is most likely unrelated to MIRI.[81]
2011 July 18 – October 20 Domain At least during this period, the singularity.org domain name redirects to singinst.org/singularityfaq.[81]
2011 September 6 Domain The first Wayback Machine capture of singularityvolunteers.org is from this day.[82] For a time the site is used to coordinate volunteer efforts.
2011 October 15–16 Conference The Singularity Summit 2011 takes place in New York.[83]
2011 October 17 Social media The Singularity Summit YouTube account, SingularitySummits, is created.[84]
2011 November Staff Luke Muehlhauser is appointed executive director of MIRI.[85]
2011 December 12 Project Luke Muehlhauser announces the creation of Friendly-AI.com, a website introducing the idea of Friendly AI.[86]
2012 February 4 – May 4 Domain At least during this period, singularity.org redirects to singinst.org.[87]
2012 May 8 MIRI's April 2012 progress report is published, in which the Center for Applied Rationality's name is announced. Until this point, CFAR was known as the "Rationality Group" or "Rationality Org".[88]
2012 May 11 Outside review Holden Karnofsky publishes "Thoughts on the Singularity Institute (SI)" on LessWrong. The post explains why GiveWell does not plan to recommend the Singularity Institute.[89]
2012 June 16–28 Domain Sometime during this period, singinst.org begins redirecting to singularity.org, both being controlled by MIRI.[90]
2012 August 15 Luke Muehlhauser does an "ask me anything" (AMA) on reddit's r/Futurology.[91]
2012 September (approximate) Project MIRI begins to partner with Youtopia as its volunteer management platform.[92]
2012 October 13–14 Conference The Singularity Summit 2012 takes place.[93][94]
2012 November 11–18 Workshop The 1st Workshop on Logic, Probability, and Reflection takes place.[95]
2012 December 6 Singularity University announces that it has acquired the Singularity Summit from MIRI.[96]
2013 Mission The organization mission changes to: "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals."[97] This mission would stay the same for 2014 and 2015.
2013–2014 Project MIRI conducts a large number of conversations during this period. Of the 80 conversations listed as of July 14, 2017, 75 are from this period (19 in 2013 and 56 in 2014).[98] In the "2014 in review" post on MIRI's blog, Luke Muehlhauser writes: "Nearly all of the interviews were begun in 2013 or early 2014, even if they were not finished and published until much later. Mid-way through 2014, we decided to de-prioritize expert interviews, due to apparent diminishing returns."[99]
2013 January Staff Michael Anissimov leaves MIRI.[100]
2013 January 30 MIRI announces that it has renamed itself from "Singularity Institute for Artificial Intelligence" to "Machine Intelligence Research Institute".[101]
2013 February 1 Publication Facing the Intelligence Explosion by Luke Muehlhauser is published by MIRI.[102]
2013 February 11 – March 2 Domain Sometime during this period, MIRI's new website at intelligence.org begins to function.[103][104]
2013 March 2 – July 4 Domain At least during this period, singularity.org redirects to intelligence.org, MIRI's new domain.[105]
2013 April 3 Publication Singularity Hypotheses: A Scientific and Philosophical Assessment is published by Springer. The book contains chapters written by MIRI researchers and research associates.[106][107]
2013 April 3–24 Workshop The 2nd Workshop on Logic, Probability, and Reflection takes place.[95]
2013 April 13 Strategy MIRI publishes an update on its strategy on its blog. In the blog post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and shift its research to Friendly AI math research.[108]
2013 July 4 Social media MIRI's Twitter account, MIRIBerkeley, is created.[109]
2013 July 4 Social media The earliest post on MIRI's Google Plus account, IntelligenceOrg, is from this day.[110][111]
2013 July 8–14 Workshop The 3rd Workshop on Logic, Probability, and Reflection takes place.[95]
2013 August 4 Domain By this point, singularity.org is operated by Singularity University.[112]
2013 September 1 Publication The Hanson-Yudkowsky AI-Foom Debate is published as an ebook by MIRI.[113]
2013 September 7–13 Workshop The 4th Workshop on Logic, Probability, and Reflection takes place.[95]
2013 October 25 Social media The MIRI YouTube account, MIRIBerkeley, is created.[114]
2013 October 27 Outside review MIRI meets with Holden Karnofsky, Jacob Steinhardt, and Dario Amodei for a discussion about MIRI's organizational strategy.[115][116]
2013 November 23–29 Workshop The 5th Workshop on Logic, Probability, and Reflection takes place.[95]
2013 December 10 Domain The first working Wayback Machine snapshot of the MIRI Volunteers website, available at mirivolunteers.org, is from this day.[117]
2013 December 14–20 Workshop The 6th Workshop on Logic, Probability, and Reflection takes place.[95] This is the first workshop attended by Nate Soares (at Google at the time), who would later become executive director of MIRI.[118][119]
2014 January (approximate) Financial Jed McCaleb, the creator of Ripple and original founder of Mt. Gox, makes a donation worth $500,000 in XRP.[120]
2014 January 16 Outside review MIRI meets with Holden Karnofsky of GiveWell for a discussion on existential risk strategy.[121][116]
2014 February 1 Publication Smarter Than Us: The Rise of Machine Intelligence by Stuart Armstrong is published by MIRI.[122]
2014 March–May Influence Future of Life Institute (FLI) is founded.[123] MIRI is a partner organization to FLI.[124] The Singularity Summit, MIRI's annual conference during 2006–2012, also played "a key causal role in getting Max Tegmark interested and the FLI created".[125] "Tallinn, a co-founder of FLI and of the Cambridge Centre for the Study of Existential Risk (CSER), cites MIRI as a key source for his views on AI risk".[126]
2014 March 13 Staff Some recent hires at MIRI are announced. Among the new team members is Nate Soares, who would become MIRI's executive director in 2015.[119]
2014 May 3–11 Workshop The 7th Workshop on Logic, Probability, and Reflection takes place.[95]
2014 July–September Influence Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published. While Bostrom has never worked for MIRI, he is a research advisor to MIRI. MIRI also contributed substantially to the publication of the book.[125]
2014 July 4 Project The earliest evidence of AI Impacts' existence is from this day.[127]
2014 August Project The AI Impacts website launches.[128]
2014 November 4 Project The Intelligent Agent Foundations Forum, run by MIRI, is launched.[129]
2015 January Project AI Impacts rolls out a new website.[130]
2015 January 2–5 Conference The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute, but several MIRI staff (including Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares) attend.[131] Nate Soares would later call this the "turning point" of when top academics begin to focus on AI risk.[132]
2015 March 11 Influence Rationality: From AI to Zombies is published. It is an ebook of Eliezer Yudkowsky's series of blog posts, called "the Sequences".[133][134][135]
2015 May 4–6 Workshop The 1st Introductory Workshop on Logical Decision Theory takes place.[95]
2015 May 6 Staff Executive director Luke Muehlhauser announces his departure from MIRI. The announcement also states that Nate Soares will be the new executive director.[136]
2015 May 29–31 Workshop The 1st Introductory Workshop on Logical Uncertainty takes place.[95]
2015 June 3–4 Staff Nate Soares begins as executive director of MIRI.[118]
2015 June 11 Nate Soares, executive director of MIRI, does an "ask me anything" (AMA) on the Effective Altruism Forum.[137]
2015 June 12–14 Workshop The 2nd Introductory Workshop on Logical Decision Theory takes place.[95]
2015 June 26–28 Workshop The 1st Introductory Workshop on Vingean Reflection takes place.[95]
2015 July 7–26 Project The MIRI Summer Fellows program 2015, run by the Center for Applied Rationality, takes place.[138] This program is apparently "relatively successful at recruiting staff for MIRI".[139]
2015 August 7–9 Workshop The 2nd Introductory Workshop on Logical Uncertainty takes place.[95]
2015 August 28–30 Workshop The 3rd Introductory Workshop on Logical Decision Theory takes place.[95]
2016 Publication MIRI pays Eliezer Yudkowsky to produce AI risk content for Arbital.[140] (Not sure if there are any more details of this available.)
2016 April 1–3 Workshop The Self-Reference, Type Theory, and Formal Verification workshop takes place.[95]
2016 May 28–29 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Transparency takes place.[95]
2016 June 4–5 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Robustness and Error-Tolerance takes place.[95]
2016 June 11–12 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Preference Specification takes place.[95]
2016 June 17 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Agent Models and Multi-Agent Dilemmas takes place.[95]
2016 July 27 MIRI announces its machine learning technical agenda, called "Alignment for Advanced Machine Learning Systems".[141]
2016 August Financial The Open Philanthropy Project awards a grant worth $500,000 to Machine Intelligence Research Institute. The grant writeup notes, "Despite our strong reservations about the technical research we reviewed, we felt that awarding $500,000 was appropriate for multiple reasons".[142]
2016 August 12–14 Workshop The 8th Workshop on Logic, Probability, and Reflection takes place.[95]
2016 August 26–28 Workshop The 1st Workshop on Machine Learning and AI Safety takes place.[95]
2016 September 12 Publication MIRI announces the release of its new paper, "Logical Induction" by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor.[143][144]
2016 October 12 MIRI does an "ask me anything" (AMA) on the Effective Altruism Forum.[145]
2016 October 21–23 Workshop The 2nd Workshop on Machine Learning and AI Safety takes place.[95]
2016 November 11–13 Workshop The 9th Workshop on Logic, Probability, and Reflection takes place.[95]
2016 December Financial The Open Philanthropy Project awards a grant worth $32,000 to AI Impacts.[146]
2016 December 1–3 Workshop The 3rd Workshop on Machine Learning and AI Safety takes place.[95]
2017 March 25–26 Workshop The Workshop on Agent Foundations and AI Safety takes place.[95]
2017 April 1–2 Workshop The 4th Workshop on Machine Learning and AI Safety takes place.[95]
2017 May 24 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv.[147] Two researchers from AI Impacts are authors on the paper. The paper would be mentioned in more than twenty news articles.[148]
2017 July 4 Strategy MIRI announces that it will be putting relatively little work into the "Alignment for Advanced Machine Learning Systems" agenda over the next year due to the departure of Patrick LaVictoire and Jessica Taylor, and leave taken by Andrew Critch.[149]
2017 July 7 Outside review Daniel Dewey, program officer for potential risks from advanced artificial intelligence at the Open Philanthropy Project, publishes a post giving his thoughts on MIRI's work on highly reliable agent design. The post is intended to provide "an unambiguous snapshot" of Dewey's beliefs, and gives the case for highly reliable agent design work (as he understands it) and why he finds other approaches (such as learning to reason from humans) more promising.[150]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history.

Funding information for this timeline is available.

What the timeline is still missing

  • TODO Figure out how to cover publications
  • TODO mention kurzweil
  • TODO maybe include some of the largest donations (e.g. the XRP/ETH ones, tallinn, thiel)
  • TODO maybe fundraisers
  • TODO look more closely through some AMAs: [1], [2]
  • TODO maybe more info in this SSC post [3]
  • TODO more links at EA Wikia page [4]
  • TODO lots of things from strategy updates, annual reviews, etc. [5]
  • TODO Ben Goertzel talks about his involvement with MIRI [6], also more on opencog
  • TODO giant thread on Ozy's blog [7]
  • NOTE From 2017-07-06: "years that have few events so far: 2003 (one event), 2007 (one event), 2008 (three events), 2010 (three events), 2017 (three events)"
  • TODO possibly include more from the old MIRI volunteers site. Some of the volunteering opportunities like proofreading and promoting MIRI by giving it good web of trust ratings seem to give a good flavor of what MIRI was like, the specific challenges in terms of switching domains, and so on.
  • TODO not sure how exactly to include this in the timeline, but something about MIRI's changing approach to funding certain types of contract work. e.g. Vipul says "I believe the work I did with Luke would no longer be sponsored by MIRI as their research agenda is now much more narrowly focused on the mathematical parts."
  • TODO who is Tyler Emerson?
  • TODO add some stuff about Zendegi: [8], [9], [10]

Timeline update strategy

Some places to look on the MIRI blog:

Also general stuff like big news coverage.

See also

External links

References

  1. "Wikipedia desktop pageviews for the three names of MIRI". Wikipedia Views. Retrieved July 15, 2017. 
  2. "Singularity Institute for Artificial Intelligence: Revision history". Retrieved July 15, 2017. 
  3. "All public logs: search Singularity Institute". Retrieved July 15, 2017. 
  4. Eliezer S. Yudkowsky (August 31, 2000). "Eliezer, the person". Archived from the original on February 5, 2001. 
  5. "Yudkowsky - Staring into the Singularity 1.2.5". Retrieved June 1, 2017. 
  6. Eliezer S. Yudkowsky. "Coding a Transhuman AI". Retrieved July 5, 2017. 
  7. Eliezer S. Yudkowsky. "Singularitarian mailing list". Retrieved July 5, 2017. The "Singularitarian" mailing list was first launched on Sunday, March 11th, 1999, to assist in the common goal of reaching the Singularity. It will do so by pooling the resources of time, brains, influence, and money available to Singularitarians; by enabling us to draw on the advice and experience of the whole; by bringing together individuals with compatible ideas and complementary resources; and by binding the Singularitarians into a community. 
  8. 8.0 8.1 8.2 Eliezer S. Yudkowsky. "PtS: Version History". Retrieved July 4, 2017. 
  9. Eliezer S. Yudkowsky. "Singularitarian Principles 1.0". Retrieved July 5, 2017. 
  10. "SL4: By Date". Retrieved June 1, 2017. 
  11. Eliezer S. Yudkowsky. "SL4 Mailing List". Retrieved June 1, 2017. 
  12. 12.0 12.1 Eliezer S. Yudkowsky. "Coding a Transhuman AI § Version History". Retrieved July 5, 2017. 
  13. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  14. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  15. Eliezer S. Yudkowsky. "Singularity Institute for Artificial Intelligence, Inc.". Retrieved July 4, 2017. 
  16. Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017. April 08, 2001: The Singularity Institute for Artificial Intelligence, Inc. announces that it has received tax-exempt status and is now accepting donations. 
  17. 17.0 17.1 "Singularity Institute for Artificial Intelligence // News // Archive". Retrieved July 13, 2017. 
  18. Singularity Institute for Artificial Intelligence. "SIAI Guidelines on Friendly AI". Retrieved July 13, 2017. 
  19. Eliezer Yudkowsky (2001). "Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures" (PDF). The Singularity Institute. Retrieved July 5, 2017. 
  20. 20.0 20.1 20.2 20.3 20.4 Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017. 
  21. "SL4: By Thread". Retrieved July 1, 2017. 
  22. Eliezer S. Yudkowsky (April 7, 2002). "SL4: PAPER: Levels of Organization in General Intelligence". Retrieved July 5, 2017. 
  23. Singularity Institute for Artificial Intelligence. "Levels of Organization in General Intelligence". Retrieved July 5, 2017. 
  24. "SL4: By Thread". Retrieved July 1, 2017. 
  25. "FlareProgrammingLanguage". SL4 Wiki. September 14, 2007. Retrieved July 13, 2017. 
  26. "Yudkowsky - Bayes' Theorem". Retrieved July 5, 2017. Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. If you've found Yudkowsky's pages on rationality useful, please consider donating to the Machine Intelligence Research Institute. 
  27. 27.0 27.1 27.2 27.3 27.4 27.5 27.6 27.7 "News of the Singularity Institute for Artificial Intelligence". Retrieved July 4, 2017. 
  28. "Singularity Institute for Artificial Intelligence // The SIAI Voice". Retrieved July 4, 2017. On March 4, 2004, the Singularity Institute announced Tyler Emerson as our Executive Director. Emerson will be responsible for guiding the Institute. His focus is in nonprofit management, marketing, relationship fundraising, leadership and planning. He will seek to cultivate a larger and more cohesive community that has the necessary resources to develop Friendly AI. 
  29. Tyler Emerson (April 7, 2004). "SL4: Michael Anissimov - SIAI Advocacy Director". Retrieved July 1, 2017. The Singularity Institute announces Michael Anissimov as our Advocacy Director. Michael has been an active volunteer for two years, and one of the more prominent voices in the singularity community. He is committed and thoughtful, and we feel very fortunate to have him help lead our advocacy. 
  30. "Machine Intelligence Research Institute: This is an old revision of this page, as edited by 63.201.36.156 (talk) at 19:28, 14 April 2004.". Retrieved July 15, 2017. 
  31. Eliezer Yudkowsky. "Coherent Extrapolated Volition" (PDF). Retrieved July 1, 2017. The information is current as of May 2004, and should not become dreadfully obsolete until late June, when I plan to have an unexpected insight.
  32. "Collective Volition". Retrieved July 4, 2017. 
  33. "Yudkowsky - Technical Explanation". Retrieved July 5, 2017. Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. 
  34. 34.0 34.1 34.2 34.3 34.4 34.5 34.6 34.7 34.8 Brandon Reinhart. "SIAI - An Examination - Less Wrong". LessWrong. Retrieved June 30, 2017. 
  35. "SL4: By Thread". Retrieved July 1, 2017. 
  36. "The Singularity Institute for Artificial Intelligence - 2006 $100,000 Singularity Challenge". Retrieved July 5, 2017. 
  37. "Twelve Virtues of Rationality". Retrieved July 5, 2017. Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. 
  38. 38.0 38.1 "Singularity Summit". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  39. Dan Farber. "The great Singularity debate". ZDNet. Retrieved June 30, 2017. 
  40. Jerry Pournelle (May 20, 2006). "Chaos Manor Special Reports: The Stanford Singularity Summit". Retrieved June 30, 2017. 
  41. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  42. "Form 990 2007" (PDF). Retrieved July 8, 2017. 
  43. "Singularity Institute for Artificial Intelligence". YouTube. Retrieved July 8, 2017. 
  44. "The Singularity Summit 2007". Retrieved June 30, 2017. 
  45. "Scientists Fear Day Computers Become Smarter Than Humans". Fox News. September 12, 2007. Retrieved July 5, 2017. futurists gathered Saturday for a weekend conference 
  46. "Yudkowsky - The Simple Truth". Retrieved July 5, 2017. Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. 
  47. "About". OpenCog Foundation. Retrieved July 6, 2017. 
  48. http://helldesign.net. "The Singularity Summit 2008: Opportunity, Risk, Leadership > Program". Retrieved June 30, 2017. 
  49. Elise Ackerman (October 23, 2008). "Annual A.I. conference to be held this Saturday in San Jose". The Mercury News. Retrieved July 5, 2017. 
  50. "The Hanson-Yudkowsky AI-Foom Debate". Lesswrongwiki. LessWrong. Retrieved July 1, 2017. 
  51. "Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved July 15, 2017. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. 
  52. 52.0 52.1 52.2 52.3 52.4 "Recent Singularity Institute Accomplishments". Singularity Institute for Artificial Intelligence. Retrieved July 6, 2017. 
  53. "FAQ - Lesswrongwiki". LessWrong. Retrieved June 1, 2017. 
  54. Michael Vassar (February 16, 2009). "Introducing Myself". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  55. "SingularityInstitute (@singinst)". Twitter. Retrieved July 4, 2017. 
  56. "Wayback Machine". Retrieved July 2, 2017.  The first snapshot is from October 5, 2009.
  57. "theuncertainfuture.com - Google Search". Retrieved July 2, 2017.  The earliest cache seems to be from October 25, 2009. Checking the Jan 1, 2008 – Jan 1, 2009 range produces no result.
  58. "The Uncertain Future". Machine Intelligence Research Institute. Retrieved July 2, 2017. 
  59. http://helldesign.net. "The Singularity Summit 2009 > Program". Retrieved June 30, 2017. 
  60. Stuart Fox (October 2, 2009). "Singularity Summit 2009: The Singularity Is Near". Popular Science. Retrieved June 30, 2017. 
  61. "Form 990 2009" (PDF). Retrieved July 8, 2017. 
  62. Michael Anissimov (December 12, 2009). "The Uncertain Future". The Singularity Institute Blog. Retrieved July 5, 2017.
  63. "lukeprog comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved June 30, 2017. So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.) 
  64. "cjb comments on SIAI Fundraising". LessWrong. Retrieved July 8, 2017. 
  65. "Almanac Almanac: Police Calls (December 23, 2009)". Retrieved July 8, 2017. Embezzlement report: Alicia Issac, 37, of Sunnyvale arrested on embezzlement, larceny and conspiracy charges in connection with $51,000 loss, Singularity Institute for Artificial Intelligence in 1400 block of Adams Drive, Dec. 10. 
  66. "Reply to Holden on The Singularity Institute". LessWrong. July 10, 2012. Retrieved June 30, 2017. Two former employees stole $118,000 from SI. Earlier this year we finally won stipulated judgments against both individuals, forcing them to pay back the full amounts they stole. We have already recovered several thousand dollars of this. 
  67. "Form 990 2010" (PDF). Retrieved July 8, 2017. 
  68. "Harry Potter and the Methods of Rationality Chapter 1: A Day of Very Low Probability, a harry potter fanfic". FanFiction. Retrieved July 1, 2017. Updated: 3/14/2015 - Published: 2/28/2010 
  69. David Whelan (March 2, 2015). "The Harry Potter Fan Fiction Author Who Wants to Make Everyone a Little More Rational". Vice. Retrieved July 1, 2017. 
  70. "2013 in Review: Fundraising - Machine Intelligence Research Institute". Machine Intelligence Research Institute. August 13, 2014. Retrieved July 1, 2017. Recently, we asked (nearly) every donor who gave more than $3,000 in 2013 about the source of their initial contact with MIRI, their reasons for donating in 2013, and their preferred methods for staying in contact with MIRI. [&hellip] Four came into contact with MIRI via HPMoR. 
  71. "Singularity Summit | Program". Retrieved June 30, 2017. 
  72. "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017. 
  73. "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017. 
  74. Louie Helm (December 21, 2010). "Announcing the Tallinn-Evans $125,000 Singularity Challenge". Machine Intelligence Research Institute. Retrieved July 7, 2017. 
  75. Kaj Sotala (December 26, 2010). "Tallinn-Evans $125,000 Singularity Challenge". LessWrong. Retrieved July 7, 2017. 
  76. "GiveWell conversation with SIAI". GiveWell. February 2011. Retrieved July 4, 2017. 
  77. Holden Karnofsky. "Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017. 
  78. "lukeprog comments on Thoughts on the Singularity Institute (SI)". LessWrong. Retrieved June 30, 2017. When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. 
  79. Holden Karnofsky. "Re: [givewell] Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017. 
  80. "singularity.org". Retrieved July 4, 2017. 
  81. 81.0 81.1 "Wayback Machine". Retrieved July 4, 2017. 
  82. "Singularity Institute Volunteering". Retrieved July 14, 2017. 
  83. "Singularity Summit | Program". Retrieved June 30, 2017. 
  84. "SingularitySummits". YouTube. Retrieved July 4, 2017. Joined Oct 17, 2011 
  85. Luke Muehlhauser (January 16, 2012). "Machine Intelligence Research Institute Progress Report, December 2011". Machine Intelligence Research Institute. Retrieved July 14, 2017. 
  86. lukeprog (December 12, 2011). "New 'landing page' website: Friendly-AI.com". LessWrong. Retrieved July 2, 2017. 
  87. "Wayback Machine". Retrieved July 4, 2017. 
  88. Louie Helm (May 8, 2012). "Machine Intelligence Research Institute Progress Report, April 2012". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  89. Holden Karnofsky. "Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved June 30, 2017. 
  90. "Wayback Machine". Retrieved July 4, 2017. 
  91. "I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! • r/Futurology". reddit. Retrieved June 30, 2017. 
  92. "November 2012 Newsletter". Machine Intelligence Research Institute. November 7, 2012. Retrieved July 14, 2017. Over the past couple of months we thought hard about how to improve our volunteer program, with the goal of finding a system that makes it easier to engage volunteers, create a sense of community, and quantify volunteer contributions. After evaluating several different volunteer management platforms, we decided to partner with Youtopia — a young company with a lot of promise — and make heavy use of Google Docs. 
  93. David J. Hill (August 29, 2012). "Singularity Summit 2012 Is Coming To San Francisco October 13-14". Singularity Hub. Retrieved July 6, 2017. 
  94. "Singularity Summit 2012: the lion doesn't sleep tonight". Gene Expression. Discover. October 15, 2012. Retrieved July 6, 2017. 
  95. 95.00 95.01 95.02 95.03 95.04 95.05 95.06 95.07 95.08 95.09 95.10 95.11 95.12 95.13 95.14 95.15 95.16 95.17 95.18 95.19 95.20 95.21 95.22 95.23 95.24 "Research Workshops - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  96. "Singularity University Acquires the Singularity Summit". Singularity University. December 9, 2012. Retrieved June 30, 2017. 
  97. "Form 990 2013" (PDF). Retrieved July 8, 2017. 
  98. "Conversations Archives". Machine Intelligence Research Institute. Retrieved July 15, 2017. 
  99. Luke Muehlhauser (March 22, 2015). "2014 in review". Machine Intelligence Research Institute. Retrieved July 15, 2017. 
  100. "March Newsletter". Machine Intelligence Research Institute. March 7, 2013. Retrieved July 1, 2017. Due to Singularity University's acquisition of the Singularity Summit and some major changes to MIRI's public communications strategy, Michael Anissimov left MIRI in January 2013. Michael continues to support our mission and continues to volunteer for us. 
  101. "We are now the "Machine Intelligence Research Institute" (MIRI)". Machine Intelligence Research Institute. January 30, 2013. Retrieved June 30, 2017. 
  102. "Facing the Intelligence Explosion, Luke Muehlhauser". Amazon.com. Retrieved July 1, 2017. Publisher: Machine Intelligence Research Institute (February 1, 2013) 
  103. "Machine Intelligence Research Institute - Coming soon...". Retrieved July 4, 2017. 
  104. "Machine Intelligence Research Institute". Retrieved July 4, 2017. 
  105. "Wayback Machine". Retrieved July 4, 2017. 
  106. Luke Muehlhauser (April 25, 2013). ""Singularity Hypotheses" Published". Machine Intelligence Research Institute. Retrieved July 14, 2017. 
  107. "Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books". Amazon.com. Retrieved July 14, 2017. Publisher: Springer; 2012 edition (April 3, 2013) 
  108. Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  109. "MIRI (@MIRIBerkeley)". Twitter. Retrieved July 1, 2017. 
  110. "MIRI's +Luke Muehlhauser appears on "Big Picture Science" at 13:30-23:30.". Retrieved July 4, 2017. 
  111. "Machine Intelligence Research Institute - Google+". Retrieved July 4, 2017. 
  112. "Singularity Summit". Retrieved July 4, 2017. 
  113. "Amazon.com: The Hanson-Yudkowsky AI-Foom Debate eBook: Robin Hanson, Eliezer Yudkowsky: Kindle Store". Retrieved July 1, 2017. Publisher: Machine Intelligence Research Institute (September 1, 2013) 
  114. "Machine Intelligence Research Institute". YouTube. Retrieved July 4, 2017. Joined Oct 25, 2013 
  115. Luke Muehlhauser (January 13, 2014). "MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei". Machine Intelligence Research Institute. Retrieved July 7, 2017. 
  116. 116.0 116.1 "Potential Risks from Advanced Artificial Intelligence". Open Philanthropy Project. Retrieved July 7, 2017. 
  117. "Home - MIRI Volunteers". Retrieved July 14, 2017. 
  118. 118.0 118.1 "Taking the reins at MIRI". LessWrong. June 3, 2015. Retrieved July 5, 2017. 
  119. 119.0 119.1 "Recent Hires at MIRI". Machine Intelligence Research Institute. March 13, 2014. Retrieved July 13, 2017. 
  120. Jon Southurst (January 19, 2014). "Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity". CoinDesk. Retrieved July 6, 2017. 
  121. Luke Muehlhauser (January 27, 2014). "Existential Risk Strategy Conversation with Holden Karnofsky". Machine Intelligence Research Institute. Retrieved July 7, 2017. 
  122. "Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong". Amazon.com. Retrieved July 1, 2017. Publisher: Machine Intelligence Research Institute (February 1, 2014) 
  123. Victoria Krakovna. "New organization - Future of Life Institute (FLI)". LessWrong. Retrieved July 6, 2017. As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself. 
  124. "News from our Partner Organizations". Future of Life Institute. Retrieved July 6, 2017. 
  125. 125.0 125.1 "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017. 
  126. Rob Bensinger (August 10, 2015). "Assessing our past and potential impact". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  127. "Recent site activity - AI Impacts". Retrieved June 30, 2017. Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines 
  128. "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017. Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts. 
  129. Benja Fallenstein. "Welcome!". Intelligent Agent Foundations Forum. Retrieved June 30, 2017. post by Benja Fallenstein 969 days ago 
  130. Luke Muehlhauser (January 11, 2015). "An improved "AI Impacts" website". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  131. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  132. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  133. RobbBB (March 13, 2015). "Rationality: From AI to Zombies". LessWrong. Retrieved July 1, 2017. 
  134. Ryan Carey. "Rationality: From AI to Zombies was released today!". Effective Altruism Forum. Retrieved July 1, 2017. 
  135. "Rationality: From AI to Zombies - Kindle edition by Eliezer Yudkowsky. Health, Fitness & Dieting Kindle eBooks @ Amazon.com.". Retrieved July 1, 2017. Publisher: Machine Intelligence Research Institute (March 11, 2015) 
  136. Luke Muehlhauser (May 6, 2015). "A fond farewell and a new Executive Director". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  137. "I am Nate Soares, AMA!". Effective Altruism Forum. Retrieved July 5, 2017. 
  138. "MIRI Summer Fellows 2015". CFAR. June 21, 2015. Retrieved July 8, 2017. 
  139. "Center for Applied Rationality — General Support". Open Philanthropy Project. Retrieved July 8, 2017. We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI. 
  140. Larks (December 13, 2016). "2017 AI Risk Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved July 8, 2017. 
  141. Rob Bensinger (July 27, 2016). "New paper: "Alignment for advanced machine learning systems"". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  142. "Machine Intelligence Research Institute — General Support". Open Philanthropy Project. Retrieved June 30, 2017. 
  143. "New paper: "Logical induction"". Machine Intelligence Research Institute. March 23, 2017. Retrieved July 1, 2017. 
  144. Scott Aaronson (October 9, 2016). "Shtetl-Optimized » Blog Archive » Stuff That's Happened". Retrieved July 1, 2017. Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction". 
  145. Rob Bensinger (October 11, 2016). "Ask MIRI Anything (AMA)". Effective Altruism Forum. Retrieved July 5, 2017. 
  146. "AI Impacts — General Support". Open Philanthropy Project. Retrieved June 30, 2017. 
  147. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  148. "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017. 
  149. "Updates to the research team, and a major donation - Machine Intelligence Research Institute". Machine Intelligence Research Institute. July 4, 2017. Retrieved July 4, 2017. 
  150. Daniel Dewey (July 7, 2017). "My current thoughts on MIRI's "highly reliable agent design" work". Effective Altruism Forum. Retrieved July 7, 2017.