Timeline of Machine Intelligence Research Institute

{{focused coverage period|end-date = June 2024}}
  
 
This is a '''timeline of Machine Intelligence Research Institute'''. {{w|Machine Intelligence Research Institute}} (MIRI) is a nonprofit organization that does work related to AI safety.
 
 
! Time period !! Development summary !! More details
 
|-
| 1998–2002 || Various publications related to creating a superhuman AI || During this period, Eliezer Yudkowsky publishes a series of foundational documents about designing superhuman AI. Key works include "Coding a Transhuman AI," "The Plan to Singularity," and "Creating Friendly AI." These writings lay the groundwork for the AI alignment problem. Additionally, the Flare Programming Language project is launched to assist in the creation of a superhuman AI, marking the early technical ambitions of the movement.
 
|-
| 2004–2009 || Tyler Emerson's tenure as executive director || Under Emerson’s leadership, MIRI (then known as the Singularity Institute) experiences growth and increased visibility. Emerson launches the Singularity Summit, a major event that brings together AI researchers, futurists, and thought leaders. MIRI relocates to the San Francisco Bay Area, gaining a strong foothold in the tech industry. During this period, Peter Thiel becomes a key donor and public advocate, lending credibility and significant financial support to the institute.
|-
| 2006–2009 || Modern rationalist community forms || This period sees the formation of the modern rationalist community. Robin Hanson launches ''Overcoming Bias'', where Eliezer Yudkowsky becomes a prolific contributor, and Yudkowsky later founds LessWrong. These platforms become central hubs for discussions on rationality, AI safety, and existential risks. Yudkowsky's Sequences, a comprehensive collection of essays on rationality, are written during this period and gain a wide following, helping shape the philosophy of many within the AI safety and rationalist movements.
 
|-
| 2006–2012 || The Singularity Summits take place annually || The Singularity Summit takes place annually during this period, attracting both prominent thinkers and the general public interested in AI, technology, and futurism. In 2012, the organization changes its name from "Singularity Institute for Artificial Intelligence" to the "Machine Intelligence Research Institute" (MIRI) to better reflect its focus on AI research rather than broader technological futurism. MIRI also sells the Singularity Summit to Singularity University, signaling a shift toward a more focused research agenda.
 
|-
| 2009–2012 || Michael Vassar's tenure as president || Michael Vassar serves as president during this period, continuing to build on the progress made by previous leadership. Vassar focuses on strategic development and positions MIRI within the broader intellectual landscape, further cementing its role as a leader in AI safety research.
 
|-
| 2011–2015 || Luke Muehlhauser's tenure as executive director || Luke Muehlhauser takes over as executive director and is credited with professionalizing the organization and improving donor relations. Under his leadership, MIRI undergoes significant changes, including a name change, a shift in focus from outreach to research, and stronger connections with the Effective Altruism community. Muehlhauser builds relationships with the AI research community, laying the groundwork for future collaborations and funding opportunities.<ref name="soares_taking_the_reins_at_miri">{{cite web |url=https://www.lesswrong.com/posts/Taking-the-reins-at-MIRI |title=Taking the reins at MIRI |author=Nate Soares |date=June 3, 2015 |publisher=LessWrong |accessdate=July 5, 2017}}</ref><ref>{{cite web|url = http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6jzn|title = lukeprog comments on "Thoughts on the Singularity Institute"|date = May 10, 2012|accessdate = July 15, 2012|publisher = LessWrong}}</ref><ref name=halfwitz-xixidu>{{cite web|url = http://lesswrong.com/lw/lb3/breaking_the_vicious_cycle/bnjk|title = Halfwitz comments on "Breaking the vicious cycle"|publisher = LessWrong|date = November 23, 2014|accessdate = August 3, 2017}}</ref>
 
|-
| 2013–2015 || Change of focus || MIRI shifts its research focus to AI safety and technical math-based research into Friendly AI. During this period, MIRI reduces its public outreach efforts to concentrate on solving fundamental problems in AI safety. It stops hosting major public events like the Singularity Summit and begins focusing almost exclusively on research efforts to address the alignment problem and existential risks from advanced AI systems.
 
|-
| 2015–2023 || Nate Soares's tenure as executive director || Nate Soares, who takes over as executive director in 2015, continues to steer MIRI toward more technical and research-based work on AI safety. Soares expands MIRI’s collaboration with other AI safety organizations and risk researchers. During this time, MIRI receives major funding boosts from cryptocurrency donations and the Open Philanthropy Project in 2017. In 2018, MIRI adopts a "nondisclosed-by-default" policy for much of its research to prevent potential misuse or risks from the dissemination of sensitive AI safety work.
 
|-
| 2023–present || Leadership transitions at MIRI || MIRI undergoes significant leadership changes in 2023. Nate Soares steps down as executive director and transitions to President, focusing on strategic oversight. Malo Bourgon becomes the new CEO, handling day-to-day operations and growth management. Alex Vermeer takes on the role of COO, providing internal support and leadership. The organization continues to prioritize AI safety research and collaborates with other AI safety organizations to address emerging challenges in the field.
 
|}
 
 
| 1979 || {{dts|September 11}} || || Eliezer Yudkowsky is born.<ref>{{cite web |url=http://sysopmind.com/eliezer.html |archiveurl=https://web.archive.org/web/20010205221413/http://sysopmind.com/eliezer.html |archivedate=February 5, 2001 |author=Eliezer S. Yudkowsky |title=Eliezer, the person |date=August 31, 2000}}</ref>
 
|-
| 1996 || {{dts|November 18}} || || Eliezer Yudkowsky writes the first version of "Staring into the Singularity".<ref>{{cite web |url=http://yudkowsky.net/obsolete/singularity.html |title=Yudkowsky - Staring into the Singularity 1.2.5 |accessdate=June 1, 2017}}</ref>
 
|-
| 1998 || || Publication || The initial version of "Coding a Transhuman AI" (CaTAI) is published.<ref>{{cite web |url=https://web.archive.org/web/20010202165700/http://sysopmind.com:80/AI_design.temp.html |author=Eliezer S. Yudkowsky |title=Coding a Transhuman AI |accessdate=July 5, 2017}}</ref>
 
|-
| 1999 || {{dts|March 11}} || || The Singularitarian mailing list is launched. The mailing list page notes that although hosted on MIRI's website, the mailing list "should be considered as being controlled by the individual Eliezer Yudkowsky".<ref>{{cite web |url=https://web.archive.org/web/20010127202200/http://singinst.org:80/lists/sing.html |author=Eliezer S. Yudkowsky |title=Singularitarian mailing list |accessdate=July 5, 2017 |quote=The "Singularitarian" mailing list was first launched on Sunday, March 11th, 1999, to assist in the common goal of reaching the Singularity. It will do so by pooling the resources of time, brains, influence, and money available to Singularitarians; by enabling us to draw on the advice and experience of the whole; by bringing together individuals with compatible ideas and complementary resources; and by binding the Singularitarians into a community.}}</ref>
 
|-
| 1999 || {{dts|September 17}} || || The Singularitarian mailing list is first informed (by Yudkowsky?) of "The Plan to Singularity" (called "Creating the Singularity" at the time).<ref name="plan_to_singularity_20011121" />
 
|-
| 2000–2003 || || || Eliezer Yudkowsky's "coming of age" (including his "naturalistic awakening," in which he realizes that a superintelligence would not necessarily follow human morality) takes place during this period.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/Yudkowsky's_coming_of_age |title=Yudkowsky's Coming of Age |accessdate=January 30, 2018 |publisher=LessWrong}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/u9/my_naturalistic_awakening/ |title=My Naturalistic Awakening |accessdate=January 30, 2018 |publisher=LessWrong}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/mf2/link_flis_recommended_project_grants_for_ai/cixd |title=jacob_cannell comments on FLI's recommended project grants for AI safety research announced |accessdate=January 30, 2018 |publisher=LessWrong}}</ref>
 
|-
| 2000 || {{dts|January 1}} || Publication || "The Plan to Singularity" version 1.0 is written and published by Eliezer Yudkowsky, and posted to the Singularitarian, Extropians, and transhuman mailing lists.<ref name="plan_to_singularity_20011121">{{cite web |url=https://web.archive.org/web/20011121184414/http://sysopmind.com:80/sing/PtS/meta/versions.html |author=Eliezer S. Yudkowsky |title=PtS: Version History |accessdate=July 4, 2017}}</ref>
 
|-
| 2000 || {{dts|January 1}} || Publication || "The Singularitarian Principles" version 1.0 by Eliezer Yudkowsky is published.<ref>{{cite web |url=https://web.archive.org/web/20010124225400/http://sysopmind.com:80/sing/principles.html |author=Eliezer S. Yudkowsky |title=Singularitarian Principles 1.0 |accessdate=July 5, 2017}}</ref>
 
|-
| 2000 || {{dts|February 6}} || || The first email is sent on SL4 ("Shock Level Four"), a mailing list about transhumanism, superintelligent AI, existential risks, and so on.<ref>{{cite web |url=http://sl4.org/archive/date.html |title=SL4: By Date |accessdate=June 1, 2017}}</ref><ref>{{cite web |url=http://sl4.org/ |author=Eliezer S. Yudkowsky |title=SL4 Mailing List |accessdate=June 1, 2017}}</ref>
 
|-
| 2000 || {{dts|May 18}} || Publication || "Coding a Transhuman AI" (CaTAI) version 2.0a is "rushed out in time for the Foresight Gathering".<ref name="CaTAI_20010202">{{cite web |url=https://web.archive.org/web/20010202042100/http://singinst.org:80/CaTAI.html#version |author=Eliezer S. Yudkowsky |title=Coding a Transhuman AI § Version History |accessdate=July 5, 2017}}</ref>
 
|-
| 2000 || {{dts|July 27}} || Mission || {{w|Machine Intelligence Research Institute}} is founded as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.<ref>{{cite web |url=https://intelligence.org/files/2000-SIAI990.pdf |title=Form 990-EZ 2000 |accessdate=June 1, 2017 |quote=Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999.}}</ref>{{rp|3}}<ref>{{cite web |url=https://web.archive.org/web/20060704101132/http://www.singinst.org:80/about.html |title=About the Singularity Institute for Artificial Intelligence |accessdate=July 1, 2017 |quote=The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors.}}</ref>
 
|-
| 2000 || {{dts|September 1}} || Publication || Large portions of "The Plan to Singularity" are declared obsolete following the formation of the Singularity Institute and a fundamental shift in AI strategy after the publication of "Coding a Transhuman AI" (CaTAI) version 2.<ref name="plan_to_singularity_20011121" /> This marks a pivotal moment in the focus of the organization (then known as the Singularity Institute): earlier discussions about the Singularity give way to a more precise, strategic approach to developing safe, self-improving AI, and the obsolete sections reflect how rapidly new insights are reshaping the institute's path.
 
|-
| 2000 || {{dts|September 7}} || Publication || Version 2.2.0 of "Coding a Transhuman AI" (CaTAI) is published.<ref name="CaTAI_20010202" /> CaTAI is a detailed technical document outlining the architecture for creating a transhuman-level artificial intelligence. It covers key ideas on how an AI can be designed to improve itself safely without deviating from its original, human-aligned goals. This text serves as a core theoretical foundation for MIRI's mission, advocating for AI development grounded in ethical and rational decision-making frameworks.
|-
| 2000 || {{dts|September 14}} || Project || The first Wayback Machine snapshot of MIRI's website is captured, using the <code>singinst.org</code> domain name.<ref>{{cite web |url=https://web.archive.org/web/20000914073559/http://www.singinst.org:80/ |author=Eliezer S. Yudkowsky |title=Singularity Institute for Artificial Intelligence, Inc. |accessdate=July 4, 2017}}</ref> The launch of the website signals MIRI’s formal entry into the public discourse on AI safety and existential risks. It becomes a hub for sharing research, ideas, and resources aimed at academics, technologists, and the broader community interested in the ethical implications of advanced AI.
|-
| 2001 || {{dts|April 8}} || Project || MIRI begins accepting donations after receiving tax-exempt status.<ref>{{cite web |url=https://web.archive.org/web/20010422041823/http://singinst.org:80/news.html |author=Eliezer S. Yudkowsky |title=Singularity Institute: News |accessdate=July 1, 2017 |quote=April 08, 2001: The Singularity Institute for Artificial Intelligence, Inc. announces that it has received tax-exempt status and is now accepting donations.}}</ref> Receiving tax-exempt status is a critical milestone for MIRI, allowing it to officially solicit and receive donations from the public. This status helps secure the financial support necessary to expand its research efforts and build a formal research team.
|-
| 2001 || {{dts|April 18}} || Publication || Version 0.9 of "Creating Friendly AI" is released.<ref name="siai_news_archive_jun_2004">{{cite web |url=https://web.archive.org/web/20040607135521/http://singinst.org:80/news/archive.html |title=Singularity Institute for Artificial Intelligence // News // Archive |accessdate=July 13, 2017}}</ref> This early draft outlines the first comprehensive framework for developing "Friendly AI" — an AI system designed to operate under constraints that ensure its goals remain aligned with human interests. It is an important early step in formalizing the institute’s approach to safe AI development.
|-
| 2001 || {{dts|June 14}} || Publication || The "SIAI Guidelines on Friendly AI" are published.<ref>{{cite web |url=https://web.archive.org/web/20010805005837/http://singinst.org:80/friendly/guidelines.html |author=Singularity Institute for Artificial Intelligence |title=SIAI Guidelines on Friendly AI |accessdate=July 13, 2017}}</ref> These guidelines serve as a set of ethical and technical principles meant to guide AI researchers in designing systems that prioritize human well-being. The guidelines represent MIRI’s effort to communicate the necessity of carefully managing AI's development and potential risks.
|-
| 2001 || {{dts|June 15}} || Publication || Version 1.0 of "Creating Friendly AI" is published.<ref>{{cite web |url=https://intelligence.org/files/CFAI.pdf |title=Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures |year=2001 |author=Eliezer Yudkowsky |publisher=The Singularity Institute |accessdate=July 5, 2017}}</ref> This version is the first full publication of MIRI’s flagship research document. It provides a detailed analysis of how to design AI systems that remain aligned with human values, even as they gain the ability to self-improve. It is considered one of the key early texts in the AI safety field.
|-
| 2001 || {{dts|July 23}} || Project || MIRI formally launches the development of the Flare programming language under Dmitriy Myshkin.<ref name="singinst_jun_2003_news" /> The Flare project is conceived as a way to build a programming language optimized for AI development and safety. Though it is eventually canceled, it shows MIRI’s early commitment to exploring technical approaches to developing safer AI systems.
|-
| 2001 || {{dts|December 21}} || Domain || MIRI secures the <code>flare.org</code> domain name for its Flare programming language project.<ref name="singinst_jun_2003_news">{{cite web |url=https://web.archive.org/web/20030622011438/http://singinst.org:80/news/ |author=Eliezer S. Yudkowsky |title=Singularity Institute: News |accessdate=July 1, 2017}}</ref> This acquisition signifies MIRI's focus on developing tools that assist in the creation of AI, though Flare itself is eventually shelved due to technical challenges and shifting priorities.
|-
| 2002 || {{dts|March 8}} || AI Box Experiment || The first AI Box experiment conducted by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0203/index.html#3128 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref> This experiment involves testing whether a hypothetical AI can convince a human "gatekeeper" to let it out of a confined environment — highlighting the persuasive abilities that a sufficiently advanced AI might possess, even when theoretically controlled.
|-
| 2002 || {{dts|April 7}} || Publication || A draft of "Levels of Organization in General Intelligence" is announced on SL4.<ref>{{cite web |url=http://www.sl4.org/archive/0204/3296.html |author=Eliezer S. Yudkowsky |date=April 7, 2002 |title=SL4: PAPER: Levels of Organization in General Intelligence |accessdate=July 5, 2017}}</ref><ref>{{cite web |url=https://web.archive.org/web/20120722082136/singularity.org/upload/LOGI/index.html |author=Singularity Institute for Artificial Intelligence |title=Levels of Organization in General Intelligence |accessdate=July 5, 2017}}</ref> This paper explores theoretical foundations for creating AI that mimics general human intelligence, contributing to the field’s understanding of how to structure and organize machine learning systems.
|-
| 2002 || {{dts|July 4}}–5 || AI box || The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released.<ref>{{cite web |url=http://www.sl4.org/archive/0207/index.html#4689 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref>
|-
| 2002 || {{dts|September 6}} || Staff || Christian Rovner is appointed as MIRI's volunteer coordinator.<ref name="singinst_jun_2003_news" />
|-
| 2002 || {{dts|October 1}} || || MIRI "releases a major new site upgrade" with various new pages.<ref name="singinst_jun_2003_news" />
|-
| 2002 || {{dts|October 7}} || Project || MIRI announces the creation of its volunteers mailing list.<ref name="singinst_jun_2003_news" />
|-
| 2003 || || Project || The Flare Programming language project is officially canceled.<ref>{{cite web |url=http://www.sl4.org/wiki/FlareProgrammingLanguage |website=SL4 Wiki |title=FlareProgrammingLanguage |date=September 14, 2007 |accessdate=July 13, 2017}}</ref>
|-
| 2003 || || Publication || Eliezer Yudkowsky's "An Intuitive Explanation of Bayesian Reasoning" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/bayes |title=Yudkowsky - Bayes' Theorem |accessdate=July 5, 2017 |quote=Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. If you've found Yudkowsky's pages on rationality useful, please consider donating to the Machine Intelligence Research Institute.}}</ref>
|-
| 2003 || {{dts|April 30}} || || Eliezer Yudkowsky posts an update about MIRI to the SL4 mailing list. The update discusses the need for an executive director and "extremely bright programmers", and discusses Yudkowsky's hopes to write "a book on the underlying theory and specific human practice of rationality" (which presumably became the Sequences) in order to attract the attention of these programmers.<ref>{{cite web |url=http://sl4.org/archive/0304/6525.html |date=April 30, 2003 |first=Eliezer |last=Yudkowsky |title=Singularity Institute - update |publisher=SL4}}</ref>
|-
| 2004 || {{dts|March 4}}–11 || Staff || MIRI announces Tyler Emerson as executive director.<ref name="singinst_feb_2006_news" /><ref>{{cite web |url=https://web.archive.org/web/20061004202026/http://www.singinst.org/news/newsletter.html#3 |title=Singularity Institute for Artificial Intelligence // The SIAI Voice |accessdate=July 4, 2017 |quote=On March 4, 2004, the Singularity Institute announced Tyler Emerson as our Executive Director. Emerson will be responsible for guiding the Institute. His focus is in nonprofit management, marketing, relationship fundraising, leadership and planning. He will seek to cultivate a larger and more cohesive community that has the necessary resources to develop Friendly AI.}}</ref>
|-
| 2004 || {{dts|April 7}} || Staff || Michael Anissimov is announced as MIRI's advocacy director.<ref>{{cite web |url=http://sl4.org/archive/0404/8447.html |author=Tyler Emerson |date=April 7, 2004 |title=SL4: Michael Anissimov - SIAI Advocacy Director |accessdate=July 1, 2017 |quote=The Singularity Institute announces Michael Anissimov as our Advocacy Director. Michael has been an active volunteer for two years, and one of the more prominent voices in the singularity community. He is committed and thoughtful, and we feel very fortunate to have him help lead our advocacy.}}</ref>
|-
| 2004 || {{dts|April 14}} || Outside review || The first version of the [[w:Machine Intelligence Research Institute|Wikipedia page for MIRI]] is created.<ref>{{cite web|url = https://en.wikipedia.org/w/index.php?title=Machine_Intelligence_Research_Institute&oldid=3188149|title = Machine Intelligence Research Institute: This is an old revision of this page, as edited by 63.201.36.156 (talk) at 19:28, 14 April 2004.|accessdate = July 15, 2017}}</ref>
|-
| 2004 || {{dts|May}} || Publication || Eliezer Yudkowsky's paper "Coherent Extrapolated Volition" is published around this time.<ref>{{cite web |url=https://intelligence.org/files/CEV.pdf |title=Coherent Extrapolated Volition |author=Eliezer Yudkowsky |accessdate=July 1, 2017 |quote=The information is current as of May 2004, and should not become dreadfully obsolete until late June, when I plant to have an unexpected insight.}}</ref> It is originally called "Collective Volition", and is announced on the MIRI website on August 16.<ref>{{cite web |url=https://web.archive.org/web/20040623153032/http://www.singinst.org/friendly/collective-volition.html |title=Collective Volition |accessdate=July 4, 2017}}</ref><ref name="singinst_feb_2006_news" />
|-
| 2004 || {{dts|August 5}}–8 || Conference || TransVision 2004 takes place. TransVision is the World Transhumanist Association's annual event. MIRI is a sponsor for the event.<ref name="singinst_feb_2006_news" />
|-
| 2005 || {{dts|January 4}} || Publication || "A Technical Explanation of Technical Explanation" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/technical |title=Yudkowsky - Technical Explanation |accessdate=July 5, 2017 |quote=Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute.}}</ref> Eliezer Yudkowsky explores the nature of technical explanations, emphasizing how we can communicate complex ideas with clarity and rigor. This work becomes foundational for those studying rationality and AI, offering insights into how we convey and understand deep technical topics. It plays an important role in grounding the theoretical framework behind AI safety research. MIRI announces its release, underlining its importance to their broader research agenda.<ref name="singinst_feb_2006_news">{{cite web |url=https://web.archive.org/web/20060220211402/http://www.singinst.org:80/news/ |title=News of the Singularity Institute for Artificial Intelligence |author=Singularity Institute |accessdate=July 4, 2017}}</ref>
|-
| 2005 || || Conference || MIRI gives presentations on AI and existential risks at Stanford University, the Immortality Institute’s Life Extension Conference, and the Terasem Foundation.<ref name="siai_an_examination">{{cite web |url=http://lesswrong.com/lw/5il/siai_an_examination/ |title=SIAI - An Examination |accessdate=June 30, 2017 |author=Brandon Reinhart |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> These presentations help MIRI broaden the conversation about the risks associated with AI development. By engaging academic audiences at Stanford and futurist communities at the Life Extension Conference, MIRI establishes itself as a critical voice in discussions about how AI can impact humanity’s future. These events also allow MIRI to connect its mission with broader existential concerns, including life extension and the future of human intelligence.
|-
| 2005 || || Publication || Eliezer Yudkowsky contributes chapters to ''Global Catastrophic Risks'', edited by Nick Bostrom and Milan M. Ćirković.<ref name="siai_an_examination" /> Although the book is officially published in 2008, Yudkowsky’s early contributions focus on the potential dangers of advanced AI and global catastrophic risks. His chapters play a vital role in shaping the emerging field of AI safety, providing critical perspectives on how advanced AI could shape or threaten humanity’s future. This collaboration with prominent scholars like Nick Bostrom helps solidify MIRI's reputation within the existential risk community.
|-
| 2005 || {{dts|February 2}} || Relocation || MIRI relocates from the Atlanta metropolitan area in Georgia to the Bay Area of California.<ref name="singinst_feb_2006_news" /> This move is strategic, placing MIRI at the heart of Silicon Valley, where technological advancements are rapidly accelerating. By moving to the Bay Area, MIRI positions itself closer to influential tech companies and research institutions, allowing it to form stronger partnerships and participate more actively in the conversations around AI development and safety. The relocation also signals MIRI’s commitment to influencing the future of AI in a global technology hub.
|-
| 2005 || {{dts|July 22}}–24 || Conference || MIRI sponsors TransVision 2005 in Caracas, Venezuela.<ref name="singinst_feb_2006_news" /> TransVision is one of the world’s leading transhumanist conferences, focusing on how emerging technologies, including AI, can impact humanity’s evolution. MIRI’s sponsorship of this event highlights its dedication to transhumanist goals, such as safe AI and human enhancement. The sponsorship also enables MIRI to reach new international audiences, solidifying its role as a global leader in the field of AI safety and existential risk.
|-
| 2005 || {{dts|August 21}} || AI Box Experiment || Eliezer Yudkowsky conducts the third AI Box experiment, with Carl Shulman as the gatekeeper.<ref>{{cite web |url=http://sl4.org/archive/0508/index.html#12007 |title=SL4: By Thread |accessdate=July 1, 2017}}</ref> This experiment explores the theoretical dangers of an advanced AI persuading a human to release it from confinement. Yudkowsky’s successful manipulation as the AI in this experiment further demonstrates the potential risks posed by highly intelligent systems. The AI Box experiment serves as a thought-provoking exercise in AI safety, illustrating the need for stringent safeguards in future AI development.
|-
| 2005–2006 || {{dts|December 20, 2005}}{{snd}}{{dts|February 19, 2006}} || Financial || The 2006 $100,000 Singularity Challenge, led by Peter Thiel, successfully matches donations up to $100,000.<ref name="singinst_feb_2006_news" /><ref>{{cite web |url=https://web.archive.org/web/20060207074758/http://www.singinst.org:80/challenge/ |title=The Singularity Institute for Artificial Intelligence - 2006 $100,000 Singularity Challenge |accessdate=July 5, 2017}}</ref> Peter Thiel’s donation marks the beginning of his significant financial support for MIRI, which continues for many years. The Singularity Challenge helps MIRI raise critical funds for its research, enabling the organization to expand its efforts in AI safety and existential risk mitigation.
|-
| 2006 || {{dts|January}} || Publication || "Twelve Virtues of Rationality" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/virtues |title=Twelve Virtues of Rationality |accessdate=July 5, 2017}}</ref> This essay, written by Eliezer Yudkowsky, lays out twelve core principles or virtues meant to guide rational thinkers. It highlights values like curiosity, empiricism, and precision in thinking, which Yudkowsky frames as essential for clear, logical analysis. The publication is relatively short and structured as a set of concise principles, making it an easily digestible guide for those interested in improving their rational thinking skills.
|-
| 2006 || {{dts|February 13}} || Staff || Peter Thiel joins MIRI’s Board of Advisors.<ref name="singinst_feb_2006_news" /> Peter Thiel, the tech entrepreneur and venture capitalist, becomes a part of MIRI’s leadership by joining its Board of Advisors. Thiel’s addition to the board follows his growing interest in existential risks and advanced AI, which aligns with MIRI’s mission. His role primarily involves advising MIRI on its strategic direction and helping the organization secure long-term financial support for its AI safety research.
|-
| 2006 || {{dts|May 13}} || Conference || The first Singularity Summit takes place at Stanford University.<ref name="miri_singularity_summit">{{cite web |url=https://intelligence.org/singularitysummit/ |title=Singularity Summit |publisher=Machine Intelligence Research Institute |accessdate=June 30, 2017}}</ref><ref>{{cite web |url=http://www.zdnet.com/article/the-great-singularity-debate/ |publisher=ZDNet |title=The great Singularity debate |author=Dan Farber |accessdate=June 30, 2017}}</ref><ref>{{cite web |url=http://www.jerrypournelle.com/reports/jerryp/singularity.html |author=Jerry Pournelle |title=Chaos Manor Special Reports: The Stanford Singularity Summit |date=May 20, 2006 |accessdate=June 30, 2017}}</ref> The Singularity Summit is held as a one-day event at Stanford University and gathers leading scientists, technologists, and thinkers to discuss the rapid pace of technological development and the potential for artificial intelligence to surpass human intelligence. The agenda includes a series of talks and panel discussions, with topics ranging from AI safety to the philosophical implications of superintelligent machines. Attendees include a mix of academics, entrepreneurs, and futurists, marking it as a landmark event for those interested in the technological singularity.
|-
| 2006 || {{dts|November}} || Project || Robin Hanson launches the blog ''Overcoming Bias''.<ref>{{cite web |url=http://www.overcomingbias.com/bio |title=Overcoming Bias : Bio |accessdate=June 1, 2017}}</ref> The blog focuses on cognitive biases and rationality, serving as a platform for Hanson and guest contributors to write about topics such as human decision-making, bias in everyday life, and how individuals can improve their thinking. ''Overcoming Bias'' quickly gains a readership among academics, technologists, and rationality enthusiasts.
|-
| 2007 || {{dts|May}} || Mission || MIRI updates its mission statement to focus on "developing safe, stable, and self-modifying Artificial General Intelligence." This reflects the organization’s shift in focus to ensuring that future AI systems remain aligned with human values.<ref>{{cite web |url=https://intelligence.org/files/2007-SIAI990.pdf |title=Form 990 2007 |accessdate=July 8, 2017}}</ref>
|-
| 2007 || {{dts|May 16}} || Project || MIRI publishes its first introductory video on YouTube.<ref>{{cite web |url=https://www.youtube.com/watch?v=0A9pGhwQbS0 |title=Singularity Institute for Artificial Intelligence |publisher=YouTube |accessdate=July 8, 2017}}</ref> The video is created as an introduction to MIRI’s mission and the field of AI safety. It explains the basic concepts of AI risk and outlines MIRI’s role in researching the challenges posed by advanced AI systems. The video is designed to be accessible to a general audience, helping MIRI reach people who might not be familiar with the nuances of AI development.
|-
| 2007 || {{dts|July}} || Project || MIRI launches its outreach blog. The blog serves to engage the public in discussions around AI safety and rationality. It provides a platform for MIRI staff and guest writers to share research updates, existential risk concerns, and general AI news.<ref name="siai_an_examination" />
|-
| 2007 || {{dts|July 10}} || Publication || The oldest post on MIRI’s blog, "The Power of Intelligence", is published by Eliezer Yudkowsky.<ref>{{cite web|url = https://intelligence.org/2007/07/10/the-power-of-intelligence/|title = The Power of Intelligence|date = July 10, 2007|accessdate = May 5, 2020|publisher = Machine Intelligence Research Institute}}</ref> This blog post explores the fundamental concept of intelligence and how it shapes the world. It discusses the role of intelligence in achieving goals and solving problems, emphasizing its potential impact on the future. The post serves as an introduction to Yudkowsky’s broader work on AI safety and rationality, marking the start of MIRI’s ongoing blog efforts.
|-
| 2007 || {{dts|August}} || Project || MIRI begins its Interview Series, publishing interviews with leading figures in AI, cognitive science, and existential risk. These interviews offer insights into AGI safety and foster connections within the academic community.<ref name="siai_an_examination" />
|-
| 2007 || {{dts|September}} || Staff || Ben Goertzel becomes Director of Research at MIRI, bringing formal leadership to MIRI’s research agenda. He focuses on advancing research in AGI safety, leveraging his expertise in cognitive architectures.<ref name=history-till-2010>{{cite web|url = https://web.archive.org/web/20100103192144/http://www.singinst.org:80/aboutus/ourhistory|title = Our History|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2007 || {{dts|September 8}}–9 || Conference || The Singularity Summit 2007 is held in the San Francisco Bay Area.<ref name="miri_singularity_summit" /><ref>{{cite web |url=https://web.archive.org/web/20080331205725/http://www.singinst.org:80/summit2007/ |title=The Singularity Summit 2007 |accessdate=June 30, 2017}}</ref> The second Singularity Summit takes place over two days and features presentations from leading thinkers in AI and technology. Topics include the future of artificial intelligence, the ethics of AI development, and the technological singularity. The event builds on the success of the previous year’s summit, expanding in both size and scope, and attracting a broader audience from academia and the tech industry.
|-
| 2008 || {{dts|January}} || Publication || "The Simple Truth" is published. This short, fictional story by Eliezer Yudkowsky explains the basic concepts of truth and rationality, illustrating how humans can understand objective reality through evidence and reasoning. It serves as an introduction to epistemology, making complex ideas about knowledge more accessible to a general audience.<ref>{{cite web |url=http://yudkowsky.net/rational/the-simple-truth |title=Yudkowsky - The Simple Truth |accessdate=July 5, 2017}}</ref>
|-
| 2008 || {{dts|March}} || Project || MIRI expands its Interview Series, broadening its scope to include a wider range of experts in AI safety, cognitive science, and philosophy of technology. This expansion provides a more comprehensive view of the diverse research efforts and opinions shaping AGI and existential risk discussions.<ref name="siai_an_examination" />
|-
| 2008 || {{dts|June}} || Project || MIRI launches its summer intern program, engaging young researchers and students in AI safety research. The program allows participants to work with MIRI’s research staff, contributing to ongoing projects and gaining hands-on experience in AGI research. It becomes a key method for developing talent and integrating fresh perspectives.<ref name="siai_an_examination" />
|-
| 2008 || {{dts|July}} || Project || OpenCog is founded with support from MIRI and Novamente LLC, directed by Ben Goertzel. OpenCog receives additional funding from Google Summer of Code, allowing 11 interns to work on the project in the summer of 2008. The initiative focuses on cognitive architectures and remains central to Goertzel's research efforts at MIRI until 2010.<ref>{{cite web|url=http://opencog.org/about/|publisher=OpenCog Foundation|title=About|accessdate=July 6, 2017}}</ref><ref>{{cite web|url=https://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html|title=The Singularity Institute's Scary Idea (and Why I Don't Buy It)|last=Goertzel|first=Ben|date=October 29, 2010|accessdate=September 15, 2019}}</ref>
|-
+
 
| 2006 || {{dts|November}} || || [[wikipedia:Robin Hanson|Robin Hanson]] starts ''[[wikipedia:Overcoming Bias|Overcoming Bias]]''.<ref>{{cite web |url=http://www.overcomingbias.com/bio |title=Overcoming Bias : Bio |accessdate=June 1, 2017}}</ref>
 
|-
 
| 2007 || || Mission || MIRI's organization mission ("Organization's Primary Exempt Purpose" on Form 990) changes to: "To develop safe, stable and self-modifying Artificial General Intelligence. And to support novel research and to foster the creation of a research community focused on Artificial General Intelligence and Safe and Friendly Artificial Intelligence."<ref>{{cite web |url=https://intelligence.org/files/2007-SIAI990.pdf |title=Form 990 2007 |accessdate=July 8, 2017}}</ref> This mission would be used in 2008 and 2009 as well.
 
|-
 
| 2007 || || Project || MIRI's outreach blog is started.<ref name="siai_an_examination" />
 
|-
 
| 2007 || || Project || MIRI's Interview Series is started.<ref name="siai_an_examination" />
 
|-
 
| 2007 || || Staff || {{w|Ben Goertzel}} becomes director of research at MIRI.<ref name=history-till-2010>{{cite web|url = https://web.archive.org/web/20100103192144/http://www.singinst.org:80/aboutus/ourhistory|title = Our History|publisher = Machine Intelligence Research Institute}}</ref> He would go on to lead work on {{w|OpenCog}} which would officially start in 2008.
 
|-
 
| 2007 || {{dts|May 16}} || Project || MIRI's introductory video is published on YouTube.<ref>{{cite web |url=https://www.youtube.com/watch?v=0A9pGhwQbS0 |title=Singularity Institute for Artificial Intelligence |publisher=YouTube |accessdate=July 8, 2017}}</ref><ref name="siai_an_examination" />
 
|-
 
| 2007 || {{dts|July 10}} || Publication || The oldest (surviving) post on the MIRI blog is from this day. The post is "The Power of Intelligence" by Eliezer Yudkowsky.<ref>{{cite web|url = https://intelligence.org/2007/07/10/the-power-of-intelligence/|title = The Power of Intelligence|date = July 10, 2007|accessdate = May 5, 2020|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
| 2007 || {{dts|September 8}}–9  || Conference || The Singularity Summit 2007 takes place in the San Francisco Bay Area.<ref name="miri_singularity_summit" /><ref>{{cite web |url=https://web.archive.org/web/20080331205725/http://www.singinst.org:80/summit2007/ |title=The Singularity Summit 2007 |accessdate=June 30, 2017}}</ref><ref>{{cite web |url=http://www.foxnews.com/story/2007/09/12/scientists-fear-day-computers-become-smarter-than-humans.html |title=Scientists Fear Day Computers Become Smarter Than Humans |publisher=Fox News |accessdate=July 5, 2017 |date=September 12, 2007 |quote=futurists gathered Saturday for a weekend conference}}</ref>
 
|-
 
| 2008 || || Publication || "The Simple Truth" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/the-simple-truth |title=Yudkowsky - The Simple Truth |accessdate=July 5, 2017 |quote=Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute.}}</ref>
 
|-
 
| 2008 || || Project || MIRI expands its Interview Series.<ref name="siai_an_examination" />
 
|-
 
| 2008 || || Project || MIRI begins its summer intern program.<ref name="siai_an_examination" />
 
|-
 
| 2008 || {{dts|July}} || Project || {{w|OpenCog}} is founded "via a grant from the [MIRI], and the donation from Novamente LLC of a large body of software code and software designs developed during the period 2001–2007".<ref>{{cite web |url=http://opencog.org/about/ |publisher=OpenCog Foundation |title=About |accessdate=July 6, 2017}}</ref> {{w|Ben Goertzel}} directs MIRI's work on OpenCog; his work at MIRI is limited to OpenCog-related projects. Through funding from the {{w|Google Summer of Code}}, 11 interns get to work on the project in the summer of 2008.<ref name=history-till-2010/> After the departure of Tyler Emerson and MIRI's deemphasis of OpenCog, Goertzel would step down as research director in 2010.<ref>{{cite web|url = https://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html|title = The Singularity Institute's Scary Idea (and Why I Don't Buy It)|last = Goertzel|first = Ben|date = October 29, 2010|accessdate = September 15, 2019}}</ref> See also [[wikipedia:OpenCog#Relation to Singularity Institute|OpenCog § Relation to Singularity Institute]].
 
 
|-
 
| 2008 || {{dts|October 25}} || Conference || The Singularity Summit 2008 takes place in San Jose.<ref>{{cite web |url=https://web.archive.org/web/20090608082235/http://www.singularitysummit.com/program |title=The Singularity Summit 2008: Opportunity, Risk, Leadership &gt; Program |accessdate=June 30, 2017}}</ref><ref>{{cite web |url=http://www.mercurynews.com/2008/10/23/annual-a-i-conference-to-be-held-this-saturday-in-san-jose/ |title=Annual A.I. conference to be held this Saturday in San Jose |date=October 23, 2008 |author=Elise Ackerman |publisher=The Mercury News |accessdate=July 5, 2017}}</ref>
 
|-
 
| 2008 || {{dts|November}}–December || Outside review || The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate |title=The Hanson-Yudkowsky AI-Foom Debate |website=Lesswrongwiki |accessdate=July 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6k1b |title=Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong |accessdate=July 15, 2017 |quote=Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2009 || || Project || MIRI establishes the Visiting Fellows Program, which allows visitors to spend several weeks at MIRI engaging in collaborative research on projects related to Friendly AI and rationality.<ref name="siai_an_examination" />
 
|-
 
| 2009 (early) || || Staff || Executive director Tyler Emerson departs MIRI, a leadership transition after which Michael Vassar takes on a more prominent role in the organization.<ref name="siai_accomplishments_20110621">{{cite web |url=https://web.archive.org/web/20110621192259/http://singinst.org/achievements |title=Recent Singularity Institute Accomplishments |publisher=Singularity Institute for Artificial Intelligence |accessdate=July 6, 2017}}</ref>
 
|-
 
| 2009 (early) || || Staff || Michael Anissimov is hired as media director. Since he had been advocacy director as far back as 2004, it is unclear whether he briefly left the organization and came back or simply changed positions.<ref name="siai_accomplishments_20110621" />
 
|-
 
| 2009 || {{dts|February}} || Project || Eliezer Yudkowsky starts LessWrong, a community blog about rationality and decision theory, using his posts on ''Overcoming Bias'' as seed material.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/FAQ#Where_did_Less_Wrong_come_from.3F |title=FAQ - Lesswrongwiki |accessdate=June 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> On its 2009 accomplishments page, MIRI describes LessWrong as "important to the Singularity Institute's work towards a beneficial Singularity" and as "a key venue for SIAI recruitment", noting that many participants in the Visiting Fellows Program first discovered the organization through ''Less Wrong''.<ref name="siai_accomplishments_20110621" />
 
|-
 
| 2009 || {{dts|February 16}} || Staff || Michael Vassar announces himself as president of MIRI in a blog post titled "Introducing Myself". He would remain president until his departure from the organization in 2012.<ref>{{cite web |url=https://intelligence.org/2009/02/16/introducing-myself/ |title=Introducing Myself |author=Michael Vassar |publisher=Machine Intelligence Research Institute |date=February 16, 2009 |accessdate=July 1, 2017}}</ref>
 
|-
 
| 2009 || {{dts|April}} || Publication || Eliezer Yudkowsky completes the Sequences, a series of blog posts on ''Overcoming Bias'' and LessWrong covering epistemology, rationality, and AI safety.<ref name="siai_accomplishments_20110621" /> The posts would later be compiled into the book ''Rationality: From AI to Zombies''.<ref name="rationality_zombies">{{cite web |url=https://www.lesswrong.com/posts/rationality-from-ai-to-zombies |title=Rationality: From AI to Zombies |author=RobbBB |date=March 13, 2015 |publisher=LessWrong |accessdate=July 1, 2017}}</ref>
 
|-
 
| 2009 || {{dts|August 13}} || Social media || The Singularity Institute Twitter account, singinst, is created.<ref>{{cite web |url=https://twitter.com/singinst |title=SingularityInstitute (@singinst) |publisher=Twitter |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2009 || {{dts|September}} || Staff || Amy Willey Labenz begins an internship at MIRI. During the internship in November, she would uncover the embezzlement.<ref name="amy-email-2022-05-27">Amy Willey Labenz. Personal communication. May 27, 2022.</ref>
 
|-
 
| 2009 || {{dts|October}} || Project || A website maintained by MIRI, ''The Uncertain Future'', first appears around this time.<ref>{{cite web |url=https://web.archive.org/web/20090101000000*/http://theuncertainfuture.com/ |title=Wayback Machine |accessdate=July 2, 2017}} The first snapshot is from October 5, 2009.</ref><ref>{{cite web |url=https://www.google.com/search?q=http%3A%2F%2Ftheuncertainfuture.com%2F&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2009%2Ccd_max%3A1%2F1%2F2010&tbm= |title=theuncertainfuture.com - Google Search |accessdate=July 2, 2017}} The earliest cache seems to be from October 25, 2009. Checking the Jan 1, 2008 – Jan 1, 2009 range produces no result.</ref> The goal of the website is to "allow those interested in future technology to form their own rigorous, mathematically consistent model of how the development of advanced technologies will affect the evolution of civilization over the next hundred years".<ref>{{cite web |url=http://theuncertainfuture.com/ |title=The Uncertain Future |accessdate=July 2, 2017 |publisher=Machine Intelligence Research Institute}}</ref> Work on the project started in 2008.<ref name=hplus-tuf>{{cite web|url = http://hplusmagazine.com/2011/02/04/the-uncertain-future-forecasting-project-goes-open-source/|title = The Uncertain Future Forecasting Project Goes Open-Source|archiveurl = http://web.archive.org/web/20120413174829/http://hplusmagazine.com/2011/02/04/the-uncertain-future-forecasting-project-goes-open-source/|date = February 4, 2011|archivedate = April 13, 2012|accessdate = July 15, 2017|publisher = H Plus Magazine|last = McCabe|first = Thomas}}</ref>
 
|-
 
| 2009 || {{dts|October 3}}–4 || Conference || The Singularity Summit 2009 takes place in New York.<ref>{{cite web |url=https://web.archive.org/web/20091217213848/http://www.singularitysummit.com/program |title=The Singularity Summit 2009 &gt; Program |accessdate=June 30, 2017}}</ref><ref>{{cite web |url=http://www.popsci.com/scitech/article/2009-10/singularity-summit-2009-singularity-near |publisher=Popular Science |title=Singularity Summit 2009: The Singularity Is Near |accessdate=June 30, 2017 |date=October 2, 2009 |author=Stuart Fox}}</ref>
 
|-
 
| 2009 || {{dts|November}} || Financial || Embezzlement: "Misappropriation of assets, by a contractor, was discovered in November 2009."<ref>{{cite web |url=https://intelligence.org/files/2009-SIAI990.pdf |title=Form 990 2009 |accessdate=July 8, 2017}}</ref>
 
|-
 
| 2009 || {{dts|December}} || Staff || Amy Willey Labenz, previously an intern, joins MIRI as Chief Compliance Officer, partly due to her uncovering of the embezzlement in November.<ref name="siai_accomplishments_20110621" /><ref name="amy-email-2022-05-27"/>
 
|-
 
| 2009 || {{dts|December 11}} || Influence || The third edition of ''[[wikipedia:Artificial Intelligence: A Modern Approach|Artificial Intelligence: A Modern Approach]]'' by [[wikipedia:Stuart J. Russell|Stuart J. Russell]] and [[wikipedia:Peter Norvig|Peter Norvig]] is published. In this edition, for the first time, Friendly AI is mentioned and Eliezer Yudkowsky is cited.
 
|-
 
| 2009 || {{dts|December 12}} || Project || ''The Uncertain Future'' reaches [[wikipedia:Software release life cycle#Beta|beta]] and is announced on the MIRI blog.<ref>{{cite web |url=https://web.archive.org/web/20100507142148/http://www.singinst.org:80/blog/2009/12/12/the-uncertain-future/ |author=Michael Anissimov |website=The Singularity Institute Blog |title=The Uncertain Future |date=December 12, 2009 |accessdate=July 5, 2017}}</ref>
 
|-
 
| 2009 || || Financial || MIRI reports $118,803.00 in theft during this year.<ref name="siai_an_examination">{{cite web |url=http://lesswrong.com/lw/5il/siai_an_examination/ |title=SIAI - An Examination - Less Wrong |accessdate=June 30, 2017 |author=Brandon Reinhart |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6jzn |title=lukeprog comments on Thoughts on the Singularity Institute (SI) - Less Wrong |accessdate=June 30, 2017 |quote=So little monitoring of funds that $118k was stolen in 2010 before SI noticed. (Note that we have won stipulated judgments to get much of this back, and have upcoming court dates to argue for stipulated judgments to get the rest back.) |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://lesswrong.com/r/discussion/lw/5fo/siai_fundraising/4156 |title=cjb comments on SIAI Fundraising  |accessdate=July 8, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://web.archive.org/web/20100116020241/http://www.almanacnews.com/story.php?story_id=9028 |title=Almanac Almanac: Police Calls (December 23, 2009) |accessdate=July 8, 2017 |quote=Embezzlement report: Alicia Issac, 37, of Sunnyvale arrested on embezzlement, larceny and conspiracy charges in connection with $51,000 loss, Singularity Institute for Artificial Intelligence in 1400 block of Adams Drive, Dec. 10.}}</ref> The theft was by two former employees.<ref>{{cite web |url=http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/ |title=Reply to Holden on The Singularity Institute |date=July 10, 2012 |accessdate=June 30, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]] |quote=Two former employees stole $118,000 from SI. Earlier this year we finally won stipulated judgments against both individuals, forcing them to pay back the full amounts they stole. We have already recovered several thousand dollars of this.}}</ref>
 
|-
 
| 2010 || || Mission || The organization mission changes to: "To develop the theory and particulars of safe self-improving Artificial Intelligence; to support novel research and foster the creation of a research community focused on safe Artificial General Intelligence; and to otherwise improve the probability of humanity surviving future technological advances."<ref>{{cite web |url=https://intelligence.org/files/2010-SIAI990.pdf |title=Form 990 2010 |accessdate=July 8, 2017}}</ref> This mission would be used in 2011 and 2012 as well.
 
|-
 
| 2010 || {{dts|February 28}} || Publication || The first chapter of Eliezer Yudkowsky's fan fiction ''{{w|Harry Potter and the Methods of Rationality}}'' is published. The book would be published as a serial concluding on March 14, 2015.<ref>{{cite web |url=https://www.fanfiction.net/s/5782108/1/Harry-Potter-and-the-Methods-of-Rationality |title=Harry Potter and the Methods of Rationality Chapter 1: A Day of Very Low Probability, a harry potter fanfic |publisher=FanFiction |accessdate=July 1, 2017 |quote=Updated: 3/14/2015 - Published: 2/28/2010}}</ref><ref>{{cite web |url=https://www.vice.com/en_us/article/gq84xy/theres-something-weird-happening-in-the-world-of-harry-potter-168 |publisher=Vice |title=The Harry Potter Fan Fiction Author Who Wants to Make Everyone a Little More Rational |date=March 2, 2015 |author=David Whelan |accessdate=July 1, 2017}}</ref> The fan fiction would become the initial contact with MIRI of several larger donors to MIRI.<ref>{{cite web |url=https://intelligence.org/2014/04/02/2013-in-review-fundraising/#identifier_2_10812 |title=2013 in Review: Fundraising - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=August 13, 2014 |accessdate=July 1, 2017 |quote=Recently, we asked (nearly) every donor who gave more than $3,000 in 2013 about the source of their initial contact with MIRI, their reasons for donating in 2013, and their preferred methods for staying in contact with MIRI. [&hellip;] Four came into contact with MIRI via HPMoR.}}</ref>
 
|-
 
| 2010 || {{dts|April}} || Staff || Amy Willey Labenz is promoted to Chief Operating Officer; she was previously the Chief Compliance Officer. From 2010 to 2012 she would also serve as the Executive Producer of the Singularity Summits.<ref name="amy-email-2022-05-27"/>
 
 
|-
 
| 2010 || {{dts|June 17}} || Popular culture || ''{{w|Zendegi}}'', a science fiction book by {{w|Greg Egan}}, is published. The book includes a character called Nate Caplan (partly inspired by Eliezer Yudkowsky and Robin Hanson), a website called Overpowering Falsehood dot com (partly inspired by Overcoming Bias and LessWrong), and a Benign Superintelligence Bootstrap Project, inspired by the Singularity Institute's friendly AI project.<ref>{{cite web|url = http://gareth-rees.livejournal.com/31182.html|title = Zendegi - Gareth Rees|date = August 17, 2010|accessdate = July 15, 2017|last = Rees|first = Gareth}}</ref><ref>{{Cite web|url = http://lesswrong.com/lw/2ti/greg_egan_disses_standins_for_overcoming_bias/|title = Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book|last = Sotala|first = Kaj|date = October 7, 2010|accessdate = July 15, 2017}}</ref><ref>{{cite web|url = http://www.overcomingbias.com/2012/03/egans-zendegi.html|title = Egan’s Zendegi|date = March 25, 2012|accessdate = July 15, 2017|last = Hanson|first = Robin}}</ref>
 
|-
 
| 2010 || {{dts|August 14}}–15 || Conference || The Singularity Summit 2010 takes place in San Francisco.<ref>{{cite web |url=https://web.archive.org/web/20110107222220/http://www.singularitysummit.com/program |title=Singularity Summit {{!}} Program |accessdate=June 30, 2017}}</ref>
 
|-
 
| 2010 || {{dts|December 21}} || Social media || The first post on the MIRI Facebook page is from this day.<ref>{{cite web |url=https://www.facebook.com/MachineIntelligenceResearchInstitute/posts/176049615748742 |title=Machine Intelligence Research Institute - Posts |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/pg/MachineIntelligenceResearchInstitute/posts/?ref=page_internal |title=Machine Intelligence Research Institute - Posts |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2010–2011 || {{dts|December 21, 2010}}{{snd}}{{dts|January 20, 2011}} || Financial || The Tallinn–Evans $125,000 Singularity Challenge takes place. The Challenge is a fundraiser in which Edwin Evans and Jaan Tallinn match each dollar donated to MIRI up to $125,000.<ref>{{cite web |url=https://intelligence.org/2010/12/21/announcing-the-tallinn-evans-125000-singularity-holiday-challenge/ |title=Announcing the Tallinn-Evans $125,000 Singularity Challenge |author=Louie Helm |publisher=Machine Intelligence Research Institute |date=December 21, 2010 |accessdate=July 7, 2017}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/ |title=Tallinn-Evans $125,000 Singularity Challenge |date=December 26, 2010 |author=Kaj Sotala |accessdate=July 7, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2011 || {{dts|February 4}} || Project || ''The Uncertain Future'' is open-sourced.<ref name=hplus-tuf/>
 
|-
 
| 2011 || {{dts|February}} || Outside review || Holden Karnofsky of GiveWell has a conversation with MIRI staff. The conversation reveals the existence of a "Persistent Problems Group" at MIRI, which will supposedly "assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition".<ref>{{cite web |url=http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc |title=GiveWell conversation with SIAI |date=February 2011 |publisher=GiveWell |accessdate=July 4, 2017}}</ref> On April 30, Karnofsky would post the conversation to the GiveWell mailing list.<ref>{{cite web |url=https://groups.yahoo.com/neo/groups/givewell/conversations/topics/270 |publisher=Yahoo! Groups |title=Singularity Institute for Artificial Intelligence |author=Holden Karnofsky |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2011 || {{dts|April}} || Staff || Luke Muehlhauser begins as an intern at MIRI.<ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6l4h |title=lukeprog comments on Thoughts on the Singularity Institute (SI) |accessdate=June 30, 2017 |quote=When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2011 || {{dts|May 10}}{{snd}}{{dts|June 24}} || Outside review || Holden Karnofsky of GiveWell and [[wikipedia:Jaan Tallinn|Jaan Tallinn]] (with Dario Amodei being present in the initial phone conversation) correspond regarding MIRI's work. The correspondence is posted to the GiveWell mailing list on July 18.<ref>{{cite web |url=https://groups.yahoo.com/neo/groups/givewell/conversations/messages/287 |title=Re: [givewell] Singularity Institute for Artificial Intelligence |author=Holden Karnofsky |publisher=Yahoo! Groups |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2011 || {{dts|June 24}} || Domain || A Wayback Machine snapshot on this day shows that <code>singularity.org</code> has turned into a GoDaddy.com placeholder.<ref>{{cite web |url=https://web.archive.org/web/20110624011222/http://singularity.org:80/ |title=singularity.org |accessdate=July 4, 2017}}</ref> Before this, the domain is some blog, most likely unrelated to MIRI.<ref name="singularity_org_2011">{{cite web |url=https://web.archive.org/web/20111001000000*/singularity.org |title=Wayback Machine |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2011 || {{dts|July 18}}{{snd}}{{dts|October 20}} || Domain || At least during this period, the <code>singularity.org</code> domain name redirects to <code>singinst.org/singularityfaq</code>.<ref name="singularity_org_2011" />
 
|-
 
| 2011 || {{dts|September 6}} || Domain || The first Wayback Machine capture of <code>singularityvolunteers.org</code> is from this day.<ref>{{cite web |url=https://web.archive.org/web/20110906193713/http://www.singularityvolunteers.org/ |title=Singularity Institute Volunteering |accessdate=July 14, 2017}}</ref> For a time, the site is used to coordinate volunteer efforts.
 
|-
 
| 2011 || {{dts|October 15}}–16 || Conference || The Singularity Summit 2011 takes place in New York.<ref>{{cite web |url=https://web.archive.org/web/20111031090701/http://www.singularitysummit.com:80/program |title=Singularity Summit {{!}} Program |accessdate=June 30, 2017}}</ref>
 
|-
 
| 2011 || {{dts|October 17}} || Social media || The Singularity Summit YouTube account, SingularitySummits, is created.<ref>{{cite web |url=https://www.youtube.com/user/SingularitySummits/about |title=SingularitySummits |publisher=YouTube |accessdate=July 4, 2017 |quote=Joined Oct 17, 2011}}</ref>
 
|-
 
| 2011 || {{dts|November}} || Staff || Luke Muehlhauser is appointed executive director of MIRI.<ref>{{cite web |url=https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/ |title=Machine Intelligence Research Institute Progress Report, December 2011 |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 16, 2012 |accessdate=July 14, 2017}}</ref>
 
|-
 
| 2011 || {{dts|December 12}} || Project || Luke Muehlhauser announces the creation of Friendly-AI.com, a website introducing the idea of Friendly AI.<ref>{{cite web |url=http://lesswrong.com/lw/8t6/new_landing_page_website_friendlyaicom/ |title=New 'landing page' website: Friendly-AI.com |author=lukeprog |date=December 12, 2011 |accessdate=July 2, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2012 || || Staff || Michael Vassar leaves MIRI to found {{w|MetaMed}}, a personalized medical advising company.<ref>{{cite web|url = https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/|title = Come With Us If You Want to Live. Among the apocalyptic libertarians of Silicon Valley|last = Frank|first = Sam|date = January 1, 2015|accessdate = July 15, 2017|publisher = ''Harper's Magazine''|quote=Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine — "rationality" here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. "We can save lots and lots and lots of lives," Vassar said (if mostly moneyed ones at first). "But it’s the signal — it's the 'Hey! Reason works!' — that matters. It's not really about medicine." Our whole society was sick — root, branch, and memeplex — and rationality was the only cure.}}</ref>
 
|-
 
| 2011 {{snd}} 2012 || {{dts|December 10}} and {{dts|January 12}} || Opinion || A two-part Q&A with MIRI's newly appointed Executive Director Luke Muehlhauser is published.<ref>{{cite web|url = https://www.lesswrong.com/posts/yGZHQYqWkLMbXy3z7/video-q-and-a-with-singularity-institute-executive-director|title = Video Q&A with Singularity Institute Executive Director|date = December 10, 2011|accessdate = May 31, 2021|publisher = LessWrong}}</ref><ref>{{cite web|url = https://intelligence.org/2012/01/12/qa-2-with-luke-muehlhauser-singularity-institute-executive-director/|title = Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director|date = January 12, 2012|accessdate = May 31, 2021|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
| 2012 || {{dts|February 4}}{{snd}}{{dts|May 4}} || Domain || At least during this period, <code>singularity.org</code> redirects to <code>singinst.org</code>.<ref>{{cite web |url=https://web.archive.org/web/20120501000000*/singularity.org |title=Wayback Machine |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2012 || {{dts|May 8}} || || MIRI's April 2012 progress report is published, in which the Center for Applied Rationality's name is announced. Until this point, CFAR was known as the "Rationality Group" or "Rationality Org".<ref>{{cite web |url=https://intelligence.org/2012/05/08/singularity-institute-progress-report-april-2012/ |title=Machine Intelligence Research Institute Progress Report, April 2012 |publisher=Machine Intelligence Research Institute |date=May 8, 2012 |author=Louie Helm |accessdate=June 30, 2017}}</ref>
 
|-
 
| 2012 || {{dts|May 11}} || Outside review || Holden Karnofsky publishes "Thoughts on the Singularity Institute (SI)" on [[wikipedia:LessWrong|LessWrong]]. The post explains why GiveWell does not plan to recommend the Singularity Institute.<ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ |title=Thoughts on the Singularity Institute (SI) - Less Wrong |accessdate=June 30, 2017 |author=Holden Karnofsky |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
 
|-
 
| 2012 || {{dts|June 16}}–28 || Domain || Sometime during this period, <code>singinst.org</code> begins redirecting to <code>singularity.org</code>, both being controlled by MIRI.<ref>{{cite web |url=https://web.archive.org/web/20120601000000*/singinst.org |title=Wayback Machine |accessdate=July 4, 2017}}</ref> The new website at singularity.org would be announced in the July 2012 newsletter.<ref name=july-2012-newsletter/>
 
|-
 
| 2012 || {{dts|July}}, August 6 || || Starting with the July 2012 issue, MIRI begins publishing monthly newsletters as blog posts; the July 2012 newsletter is posted on August 6.<ref name=july-2012-newsletter>{{cite web|url = https://intelligence.org/2012/08/06/july-2012-newsletter/|title = July 2012 Newsletter|last = Helm|first = Louie|date = August 6, 2012|accessdate = May 5, 2020|publisher = Machine Intelligence Research Institute}}</ref><ref name=newsletters>{{cite web|url = https://intelligence.org/category/newsletters/|title = All “Newsletters” Posts|publisher = Machine Intelligence Research Institute|accessdate = May 5, 2020}}</ref>
 
|-
 
| 2012 || {{dts|August 15}} || || Luke Muehlhauser does an "ask me anything" (AMA) on reddit's r/Futurology.<ref>{{cite web |url=https://www.reddit.com/r/Futurology/comments/y9lm0/i_am_luke_muehlhauser_ceo_of_the_singularity/ |publisher=reddit |title=I am Luke Muehlhauser, CEO of the Singularity Institute for Artificial Intelligence. Ask me anything about the Singularity, AI progress, technological forecasting, and researching Friendly AI! • r/Futurology |accessdate=June 30, 2017}}</ref>
 
|-
 
| 2012 || {{dts|September}} (approximate) || Project || MIRI begins to partner with Youtopia as its volunteer management platform.<ref>{{cite web |url=https://intelligence.org/2012/11/07/november-2012-newsletter/ |title=November 2012 Newsletter |publisher=Machine Intelligence Research Institute |date=November 7, 2012 |accessdate=July 14, 2017 |quote=Over the past couple of months we thought hard about how to improve our volunteer program, with the goal of finding a system that makes it easier to engage volunteers, create a sense of community, and quantify volunteer contributions. After evaluating several different volunteer management platforms, we decided to partner with Youtopia — a young company with a lot of promise — and make heavy use of Google Docs.}}</ref>
 
|-
 
| 2012 || {{dts|October 13}}–14 || Conference || The Singularity Summit 2012 takes place.<ref>{{cite web |url=https://singularityhub.com/2012/08/29/singularity-summit-2012-is-coming-to-san-francisco-october-13-14/ |author=David J. Hill |title=Singularity Summit 2012 Is Coming To San Francisco October 13-14 |publisher=Singularity Hub |date=August 29, 2012 |accessdate=July 6, 2017}}</ref><ref>{{cite web |url=http://blogs.discovermagazine.com/gnxp/2012/10/singularity-summit-2012-the-lion-doesnt-sleep-tonight/ |title=Singularity Summit 2012: the lion doesn't sleep tonight |website=Gene Expression |publisher=Discover |date=October 15, 2012 |accessdate=July 6, 2017}}</ref>
 
|-
 
| 2012 || {{dts|November 11}}–18 || Workshop || The 1st Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops">{{cite web |url=https://intelligence.org/workshops/ |title=Research Workshops - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |accessdate=July 1, 2017}}</ref>
+
| 2013–2014 || Project || Conversations Initiative || During this period, MIRI engages in a large number of expert interviews. Out of 80 conversations listed as of July 2017, 75 occurred in this time frame (19 in 2013 and 56 in 2014). These conversations involve in-depth discussions on AI safety, strategy, and existential risk with leading thinkers in the field. By mid-2014, MIRI deprioritizes these interviews due to diminishing returns, as noted by executive director Luke Muehlhauser in MIRI’s 2014 review. However, the conversations contribute substantially to shaping AI safety dialogue during these years.<ref>{{cite web |url=https://intelligence.org/category/conversations/ |title=Conversations Archives |publisher=Machine Intelligence Research Institute |accessdate=July 15, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2015/03/22/2014-review/ |title=2014 in review |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=March 22, 2015 |accessdate=July 15, 2017}}</ref>
 +
 
 
|-
 
|-
| 2012 || {{dts|December 6}} || || Singularity University announces that it has acquired the Singularity Summit from MIRI.<ref>{{cite web |url=http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/ |title=Singularity University Acquires the Singularity Summit |publisher=Singularity University |date=December 9, 2012 |accessdate=June 30, 2017}}</ref> Joshua Fox praises the move, noting: "The Singularity Summit was always off-topic for SI: more SU-like than SI-like."<ref name=singularity-wars>{{cite web|url = http://lesswrong.com/lw/gn4/the_singularity_wars/|title = The Singularity Wars|last = Fox|first = Joshua|date = February 14, 2013|accessdate = July 15, 2017|publisher = LessWrong}}</ref> However, Singularity University would not continue the original tradition of the Summit,<ref>{{cite web|url = https://www.facebook.com/groups/200945030405983/permalink/228705307629955/|title = The Singularity Summit was an annual event from 2006 through 2012|last = Vance|first = Alyssa|date = May 27, 2017|accessdate = July 15, 2017}}</ref> and the later EA Global conference (organized in some years by Amy Willey Labenz who used to work at MIRI) would inherit some of the characteristics of the Singularity Summit.<ref>{{cite web|url = https://groups.google.com/forum/#!topic/long-term-world-improvement/oYSW9XfA-FY|title = EA Global Boston|last = Vance|first = Alyssa|last2 = Sotala|first2 = Kaj|last3 = Luczkow|first3 = Vincent|date = June 6, 2017|accessdate = July 15, 2017}}</ref> Around this time, Amy Willey Labenz also leaves MIRI.<ref name="amy-email-2022-05-27"/>
+
| 2013 || {{dts|January}} || Staff || Michael Anissimov leaves MIRI following the acquisition of the Singularity Summit by Singularity University and a major shift in MIRI's public communication strategy. Although no longer employed at MIRI, Anissimov continues to support its mission and contributes as a volunteer. This departure reflects MIRI's pivot away from broader public outreach and its increased focus on research, particularly in AI alignment and decision theory.<ref>{{cite web |url=https://intelligence.org/2013/03/07/march-newsletter/ |title=March Newsletter |publisher=Machine Intelligence Research Institute |date=March 7, 2013 |accessdate=July 1, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || || Mission || The organization's mission changes to: "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders, and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals."<ref>{{cite web |url=https://intelligence.org/wp-content/uploads/2012/06/2013-990.pdf |title=Form 990 2013 |accessdate=July 8, 2017}}</ref> This mission would stay the same for 2014 and 2015.
+
| 2013 || {{dts|January 30}} || Rebranding || MIRI announces its renaming from the Singularity Institute for Artificial Intelligence (SIAI) to the Machine Intelligence Research Institute (MIRI). The name change reflects MIRI's growing focus on machine intelligence and the technical challenges of AI safety, rather than the broader singularity topics associated with its former title. This rebranding helps clarify MIRI's mission to external stakeholders and aligns with its shift toward more technical and research-focused projects.<ref>{{cite web |url=https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/ |title=We are now the "Machine Intelligence Research Institute" (MIRI) |publisher=Machine Intelligence Research Institute |date=January 30, 2013 |accessdate=June 30, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013–2014 || || Project || MIRI conducts a lot of conversations during this period. Out of 80 conversations listed as of July 14, 2017, 75 are from this period (19 in 2013 and 56 in 2014).<ref>{{cite web |url=https://intelligence.org/category/conversations/ |title=Conversations Archives |publisher=Machine Intelligence Research Institute |accessdate=July 15, 2017}}</ref> In the "2014 in review" post on MIRI's blog Luke Muehlhauser writes: "Nearly all of the interviews were begun in 2013 or early 2014, even if they were not finished and published until much later. Mid-way through 2014, we decided to de-prioritize expert interviews, due to apparent diminishing returns."<ref>{{cite web |url=https://intelligence.org/2015/03/22/2014-review/ |title=2014 in review |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=March 22, 2015 |accessdate=July 15, 2017}}</ref>
+
| 2013 || {{dts|February 1}} || Publication || MIRI publishes "Facing the Intelligence Explosion" by executive director Luke Muehlhauser. This book provides an accessible introduction to the risks posed by artificial intelligence and highlights the urgent need for AI safety research. It underscores MIRI's mission to address the potentially existential risks that could arise from advanced AI systems, framing the conversation around the control and alignment of AI.<ref>{{cite web |url=https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q |title=Facing the Intelligence Explosion, Luke Muehlhauser |publisher=Amazon.com |accessdate=July 1, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|January}} || Staff || Michael Anissimov leaves MIRI.<ref>{{cite web |url=https://intelligence.org/2013/03/07/march-newsletter/ |title=March Newsletter |publisher=Machine Intelligence Research Institute |date=March 7, 2013 |accessdate=July 1, 2017 |quote=Due to Singularity University's acquisition of the Singularity Summit and some major changes to MIRI's public communications strategy, Michael Anissimov left MIRI in January 2013. Michael continues to support our mission and continues to volunteer for us.}}</ref>
+
| 2013 || {{dts|February 11}}{{snd}}{{dts|February 28}} || Domain || MIRI's new website, intelligence.org, begins operating during this period. The website’s launch marks a new digital presence for the organization, with a cleaner, more professional focus on machine intelligence research and AI safety. Executive director Luke Muehlhauser announces the new site in a blog post, emphasizing the transition away from the Singularity Institute’s prior domain and approach.<ref>{{cite web |url=https://web.archive.org/web/20130211105954/http://intelligence.org:80/ |title=Machine Intelligence Research Institute - Coming soon... |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2013/02/28/welcome-to-intelligence-org/ |title=Welcome to Intelligence.org |author=Luke Muehlhauser |date=February 28, 2013 |accessdate=May 5, 2020 |publisher=Machine Intelligence Research Institute}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|January 30}} || || MIRI announces that it has renamed itself from "Singularity Institute for Artificial Intelligence" to "Machine Intelligence Research Institute".<ref>{{cite web |url=https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/ |title=We are now the "Machine Intelligence Research Institute" (MIRI) |publisher=Machine Intelligence Research Institute |date=January 30, 2013 |accessdate=June 30, 2017}}</ref>
+
| 2013 || {{dts|April 3}} || Publication || "Singularity Hypotheses: A Scientific and Philosophical Assessment" is published by Springer. This collection, which includes contributions from MIRI researchers and research associates, examines the scientific and philosophical issues surrounding the concept of the singularity and the rise of advanced artificial intelligence. The book provides a detailed exploration of the potential trajectories of AI development and its impact on humanity.<ref>{{cite web |url=https://intelligence.org/2013/04/25/singularity-hypotheses-published/ |title="Singularity Hypotheses" Published |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=April 25, 2013 |accessdate=July 14, 2017}}</ref><ref>{{cite web |url=https://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/ |title=Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books |publisher=Amazon.com |accessdate=July 14, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|February 1}} || Publication || ''Facing the Intelligence Explosion'' by Luke Muehlhauser is published by MIRI.<ref>{{cite web |url=https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q |title=Facing the Intelligence Explosion, Luke Muehlhauser |publisher=Amazon.com |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (February 1, 2013)}}</ref>
+
| 2013 || {{dts|April 3}}–24 || Workshop || MIRI hosts the 2nd Workshop on Logic, Probability, and Reflection, bringing together researchers to advance the development of decision theory, AI alignment, and formal methods for AI reasoning. These workshops form a critical part of MIRI’s strategy for improving foundational theoretical work on AI, which is key for creating safe, reliable AI systems.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2013 || {{dts|February 11}}{{snd}}{{dts|February 28}} || Domain || Sometime during this period, MIRI's new website at <code>intelligence.org</code> begins to function.<ref>{{cite web |url=https://web.archive.org/web/20130211105954/http://intelligence.org:80/ |title=Machine Intelligence Research Institute - Coming soon... |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://web.archive.org/web/20130302063022/http://intelligence.org/ |title=Machine Intelligence Research Institute |accessdate=July 4, 2017}}</ref> The new website is announced by Executive Director Luke Muehlhauser in a blog post on February 28.<ref>{{cite web|url = https://intelligence.org/2013/02/28/welcome-to-intelligence-org/|title = Welcome to Intelligence.org|last = Muehlhauser|first = Luke|date = February 28, 2013|accessdate = May 5, 2020|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2013 || {{dts|April 13}} || Strategy || MIRI publishes a strategic update, outlining plans to shift its research focus toward Friendly AI mathematics and to reduce its emphasis on public outreach. This transition is framed as a necessary step to concentrate resources on the technical challenges that will have the most direct impact on AI safety. The organization sees this as a way to prioritize high-value research areas that can contribute to controlling advanced AI.<ref>{{cite web |url=https://intelligence.org/2013/04/13/miris-strategy-for-2013/ |title=MIRI's Strategy for 2013 |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=April 13, 2013 |accessdate=July 6, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|March 2}}{{snd}}{{dts|July 4}} || Domain || At least during this period, <code>singularity.org</code> redirects to <code>intelligence.org</code>, MIRI's new domain.<ref>{{cite web |url=https://web.archive.org/web/20130401000000*/singularity.org |title=Wayback Machine |accessdate=July 4, 2017}}</ref>
+
 
 +
| 2014 || {{dts|January}} (approximate) || Financial || Jed McCaleb, the creator of Ripple and original founder of Mt. Gox, donates $500,000 worth of XRP to MIRI. This substantial contribution supports MIRI's AI safety research and is an early example of interest in AI risk among prominent figures in the cryptocurrency space.<ref>{{cite web |url=http://www.coindesk.com/ripple-creator-500000-xrp-donation-ai-research-charity/ |date=January 19, 2014 |title=Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity |author=Jon Southurst |publisher=CoinDesk |accessdate=July 6, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|April 3}} || Publication || ''[[wikipedia:Singularity Hypotheses: A Scientific and Philosophical Assessment|Singularity Hypotheses: A Scientific and Philosophical Assessment]]'' is published by [[wikipedia:Springer Publishing|Springer]]. The book contains chapters written by MIRI researchers and research associates.<ref>{{cite web |url=https://intelligence.org/2013/04/25/singularity-hypotheses-published/ |title="Singularity Hypotheses" Published |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=April 25, 2013 |accessdate=July 14, 2017}}</ref><ref>{{cite web |url=https://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/ |title=Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books |publisher=Amazon.com |accessdate=July 14, 2017 |quote=Publisher: Springer; 2012 edition (April 3, 2013)}}</ref>
+
 
 +
| 2014 || {{dts|January 16}} || Outside review || MIRI staff meet with Holden Karnofsky, co-founder of GiveWell, for a strategic conversation about existential risks and AI safety. The discussion focuses on MIRI’s approach to managing existential risk, exploring potential avenues for collaboration between MIRI and other organizations involved in AI safety. This meeting is part of MIRI's ongoing effort to engage with influential figures in the effective altruism and philanthropic communities to advance AI safety research.<ref>{{cite web |url=https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/ |title=Existential Risk Strategy Conversation with Holden Karnofsky |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 27, 2014 |accessdate=July 7, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|April 3}}–24 || Workshop || The 2nd Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
 
 +
| 2014 || {{dts|February 1}} || Publication || MIRI publishes Stuart Armstrong's influential book "Smarter Than Us: The Rise of Machine Intelligence". The book explores the challenges humanity may face with the rise of intelligent machines and serves as an introduction to AI alignment issues for a broader audience. Armstrong, a research associate at MIRI, examines the potential risks of advanced AI systems, making this book a key piece of literature in the AI safety discourse.<ref>{{cite web |url=https://www.amazon.com/Smarter-Than-Us-Machine-Intelligence-ebook/dp/B00IB4N4KU |title=Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong |publisher=Amazon.com |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (February 1, 2014)}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|April 13}} || Strategy || MIRI publishes an update on its strategy on its blog. In the blog post, MIRI executive director Luke Muehlhauser states that MIRI plans to put less effort into public outreach and shift its focus to Friendly AI math research.<ref>{{cite web |url=https://intelligence.org/2013/04/13/miris-strategy-for-2013/ |title=MIRI's Strategy for 2013 |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=April 13, 2013 |accessdate=July 6, 2017}}</ref>
+
 
 +
| 2014 || {{dts|March}}–May || Influence || The Future of Life Institute (FLI) is co-founded by Max Tegmark, Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre, and Victoria Krakovna. FLI is an existential risk research and outreach organization focused on ensuring that the benefits of AI are shared by humanity, and MIRI is one of its partner organizations. MIRI's influence on the new organization is notable: Tallinn, a co-founder of FLI and of the Cambridge Centre for the Study of Existential Risk (CSER), cites MIRI as a key source for his views on AI risk. FLI's founding marks a major expansion in global efforts to address the long-term societal impacts of AI.<ref>{{cite web |url=https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/ |title=Assessing Our Past and Potential Impact |publisher=Machine Intelligence Research Institute |author=Rob Bensinger |date=August 10, 2015 |accessdate=July 6, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|April 18}} || Staff || MIRI announces that executive assistant Ioven Fables is leaving MIRI due to changes in MIRI's operational needs (from its transition to a research-oriented organization).<ref>{{cite web |url=https://intelligence.org/2013/04/18/miri-april-newsletter-relaunch-celebration-and-a-new-math-result/ |title=MIRI's April newsletter: Relaunch Celebration and a New Math Result |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=April 18, 2013 |author=Jake |accessdate=May 27, 2018}}</ref>
+
 
 +
| 2014 || {{dts|March 12}}–13 || Staff || MIRI announces the hiring of several new researchers, including Nate Soares, who would later become MIRI's executive director in 2015. This marks a key moment of growth for the institute as it expands its research team. MIRI also hosts an Expansion Party to introduce the new hires to local supporters, underscoring the organization’s increased visibility and capacity to take on more ambitious AI safety projects.<ref name="recent_hires_at_miri_mar_2014">{{cite web |url=https://intelligence.org/2014/03/13/hires/ |title=Recent Hires at MIRI |publisher=Machine Intelligence Research Institute |date=March 13, 2014 |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2014/03/18/miris-march-2014-newsletter/ |title=MIRI's March 2014 Newsletter |publisher=Machine Intelligence Research Institute |date=March 18, 2014 |accessdate=May 27, 2018}}</ref><ref>{{cite web |url=https://www.facebook.com/pg/MachineIntelligenceResearchInstitute/photos/?tab=album&album_id=655204764516911 |title=Machine Intelligence Research Institute - Photos |publisher=Facebook |accessdate=May 27, 2018}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|July 4}} || Social media || MIRI's Twitter account, MIRIBerkeley, is created.<ref>{{cite web |url=https://twitter.com/MIRIBerkeley |title=MIRI (@MIRIBerkeley) |publisher=Twitter |accessdate=July 1, 2017}}</ref>
+
 
 +
| 2014 || {{dts|May 3}}–11 || Workshop || MIRI hosts the 7th Workshop on Logic, Probability, and Reflection. This workshop focuses on advancing decision theory and addressing problems related to AI's reasoning under uncertainty. Attendees include top researchers in AI safety and decision theory, working on foundational questions crucial for creating safe AI systems.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2013 || {{dts|July 4}} || Social media || The earliest post on MIRI's Google Plus account, IntelligenceOrg, is from this day.<ref>{{cite web |url=https://plus.google.com/+IntelligenceOrg/posts/Ge3p8fPTkQn |title=MIRI's +Luke Muehlhauser appears on "Big Picture Science" at 13:30-23:30. |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://plus.google.com/+IntelligenceOrg |title=Machine Intelligence Research Institute - Google+ |accessdate=July 4, 2017}}</ref>
+
 
 +
| 2014 || {{dts|July}}–September || Influence || Nick Bostrom's book ''Superintelligence: Paths, Dangers, Strategies'' is published. Bostrom, a research advisor to MIRI, draws on AI safety concerns long emphasized by MIRI researchers, and MIRI contributes substantially to the publication of the book. ''Superintelligence'' becomes a widely recognized work on AI alignment, shaping the global discourse on managing the risks of powerful AI systems.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |date=September 17, 2015 |accessdate=July 6, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|July 8}}–14 || Workshop || The 3rd Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
 
 +
| 2014 || {{dts|July 4}} || Project || The earliest evidence of the AI Impacts project, an initiative focused on analyzing the future societal impacts of AI, dates from this day. Katja Grace plays a key role in launching the project, which seeks to provide rigorous research on AI timelines and impact assessments.<ref>{{cite web |url=https://web.archive.org/web/20141129001325/http://www.aiimpacts.org:80/system/app/pages/recentChanges |title=Recent Site Activity - AI Impacts |accessdate=June 30, 2017 |quote=Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|August 4}} || Domain || By this point, <code>singularity.org</code> is operated by Singularity University.<ref>{{cite web |url=https://web.archive.org/web/20130804174727/http://www.singularity.org/ |title=Singularity Summit |accessdate=July 4, 2017}}</ref>
+
 
 +
| 2014 || {{dts|August}} || Project || The AI Impacts website officially launches. This project, led by Paul Christiano and Katja Grace, provides detailed analyses and forecasts regarding the development of AI. The website becomes a hub for discussing the potential long-term future of AI and its impacts on society, solidifying AI Impacts as a key contributor to the existential risk community.<ref>{{cite web |url=https://intelligence.org/2014/09/01/september-newsletter-2/ |title=MIRI's September Newsletter |publisher=Machine Intelligence Research Institute |date=September 1, 2014 |accessdate=July 15, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|September 1}} || Publication || ''The Hanson-Yudkowsky AI-Foom Debate'' is published as an ebook by MIRI.<ref>{{cite web |url=https://www.amazon.com/dp/B00EZCFOG4/ |title=Amazon.com: The Hanson-Yudkowsky AI-Foom Debate eBook: Robin Hanson, Eliezer Yudkowsky: Kindle Store |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (September 1, 2013)}}</ref>
+
 
 +
| 2014 || {{dts|November 4}} || Project || The Intelligent Agent Foundations Forum, run by MIRI, is launched. This forum serves as a space for discussing cutting-edge research on agent foundations and decision theory, crucial components in the development of safe AI systems. The forum attracts researchers from a variety of fields to contribute to the growing body of work on AI safety and alignment.<ref>{{cite web |url=https://agentfoundations.org/item?id=1 |website=Intelligent Agent Foundations Forum |title=Welcome! |author=Benja Fallenstein |accessdate=June 30, 2017 |quote=Post by Benja Fallenstein 969 days ago}}</ref>
 
|-
 
|-
| 2013 || {{dts|September 7}}–13 || Workshop || The 4th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
| 2015 || {{dts|January}} || Project || AI Impacts, a project focused on assessing the potential long-term impacts of artificial intelligence, rolls out a redesigned website. The project aims to provide accessible, well-researched information on AI risks, timelines, and governance issues, and the overhaul is part of a broader effort by MIRI to improve public engagement and the dissemination of knowledge about AI’s potential dangers.<ref>{{cite web |url=https://intelligence.org/2015/01/11/improved-ai-impacts-website/ |title=An improved "AI Impacts" website |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=January 11, 2015 |accessdate=June 30, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|October 25}} || Social media || The MIRI YouTube account, MIRIBerkeley, is created.<ref>{{cite web |url=https://www.youtube.com/user/MIRIBerkeley/about |title=Machine Intelligence Research Institute |publisher=YouTube |accessdate=July 4, 2017 |quote=Joined Oct 25, 2013}}</ref>
+
 
 +
| 2015 || {{dts|January 2}}–5 || Conference || ''The Future of AI: Opportunities and Challenges,'' an AI safety conference, takes place in Puerto Rico. Organized by the Future of Life Institute, the conference attracts several MIRI staff (including Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares) as well as top AI academics. The event proves pivotal in rallying attention to AI risk: Soares would later call it the "turning point" at which top academics begin to focus seriously on the existential risks that advanced AI could pose to humanity.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|October 27}} || Outside review || MIRI meets with Holden Karnofsky, Jacob Steinhardt, and Dario Amodei for a discussion about MIRI's organizational strategy.<ref>{{cite web |url=https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/ |title=MIRI strategy conversation with Steinhardt, Karnofsky, and Amodei |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 13, 2014 |accessdate=July 7, 2017}}</ref><ref name="open_phil_ai_risk_shallow">{{cite web |url=http://www.openphilanthropy.org/research/cause-reports/ai-risk |title=Potential Risks from Advanced Artificial Intelligence |publisher=Open Philanthropy |accessdate=July 7, 2017}}</ref>
+
 
 +
| 2015 || {{dts|March 11}} || Influence || ''Rationality: From AI to Zombies'' is published. The book, a compilation of Eliezer Yudkowsky's influential LessWrong blog series "the Sequences", explores rational thinking and decision-making, ranging across topics from AI development to human psychology. It becomes a key philosophical text within the effective altruism and rationality movements, widely regarded as a comprehensive introduction to human cognitive biases and to the style of thinking behind AI alignment work.<ref name="rationality_zombies">{{cite web |url=http://lesswrong.com/lw/lvb/rationality_from_ai_to_zombies/ |title=Rationality: From AI to Zombies |author=RobbBB |date=March 13, 2015 |publisher=LessWrong |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/g6/rationality_from_ai_to_zombies_was_released_today/ |title=Rationality: From AI to Zombies was released today! |publisher=Effective Altruism Forum |author=Ryan Carey |accessdate=July 1, 2017}}</ref>
 
|-
 
|-
| 2013 || {{dts|November 23}}–29 || Workshop || The 5th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
 
 +
| 2015 || {{dts|May 4}}–6 || Workshop || The 1st Introductory Workshop on Logical Decision Theory takes place. This workshop is designed to educate researchers on decision theories that take into account AI's capacity to predict and influence decisions, aiming to tackle problems like Newcomb's paradox in AI alignment.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2013 || {{dts|December 10}} || Domain || The first working Wayback Machine snapshot of the MIRI Volunteers website, available at <code>mirivolunteers.org</code>, is from this day.<ref>{{cite web |url=https://web.archive.org/web/20131210163948/http://mirivolunteers.org/ |title=Home - MIRI Volunteers |accessdate=July 14, 2017}}</ref>
+
 
 +
| 2015 || {{dts|May 6}} || Staff || Luke Muehlhauser announces his resignation as MIRI’s executive director, moving to the Open Philanthropy Project as a research analyst. In his farewell post, Muehlhauser expresses confidence in his successor, Nate Soares, who has been a key researcher at MIRI. Soares, known for his work on decision theory and AI safety, takes over as MIRI's executive director.<ref>{{cite web |url=https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/ |title=A fond farewell and a new Executive Director |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=May 6, 2015 |accessdate=June 30, 2017}}</ref>
 +
 
 
|-
 
|-
| 2013 || {{dts|December 14}}–20 || Workshop || The 6th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" /> This is the first workshop attended by Nate Soares (at Google at the time), who would later become executive director of MIRI.<ref name="soares_taking_the_reins_at_miri" /><ref name="recent_hires_at_miri_mar_2014">{{cite web |url=https://intelligence.org/2014/03/13/hires/ |title=Recent Hires at MIRI |publisher=Machine Intelligence Research Institute |date=March 13, 2014 |accessdate=July 13, 2017}}</ref>
+
 
 +
| 2015 || {{dts|May 13}}–19 || Conference || In collaboration with the Centre for the Study of Existential Risk (CSER), MIRI co-organizes the Self-prediction in Decision Theory and Artificial Intelligence Conference. The event brings together experts to explore the implications of self-prediction in decision theory, which has major relevance to AI systems’ decision-making capabilities and how they predict their own actions.<ref>{{cite web |url=https://www.phil.cam.ac.uk/events/decision-theory-conf |title=Self-prediction in Decision Theory and Artificial Intelligence — Faculty of Philosophy |accessdate=February 24, 2018}}</ref>
 +
 
 
|-
 
|-
| 2014 || {{dts|January}} (approximate) || Financial || [[wikipedia:Jed McCaleb|Jed McCaleb]], the creator of Ripple and original founder of [[wikipedia:Mt. Gox|Mt. Gox]], makes a donation worth $500,000 in XRP.<ref>{{cite web |url=http://www.coindesk.com/ripple-creator-500000-xrp-donation-ai-research-charity/ |date=January 19, 2014 |title=Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity |author=Jon Southurst |publisher=CoinDesk |accessdate=July 6, 2017}}</ref>
+
 
 +
| 2015 || {{dts|May 29}}–31 || Workshop || The 1st Introductory Workshop on Logical Uncertainty is held, focusing on how AI systems deal with uncertainty in logic-based reasoning, a fundamental challenge in ensuring that AI systems can make reliable decisions in uncertain environments.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2014 || {{dts|January 16}} || Outside review || MIRI meets with Holden Karnofsky of GiveWell for a discussion on existential risk strategy.<ref>{{cite web |url=https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/ |title=Existential Risk Strategy Conversation with Holden Karnofsky |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 27, 2014 |accessdate=July 7, 2017}}</ref><ref name="open_phil_ai_risk_shallow" />
+
 
 +
| 2015 || {{dts|June 3}}–4 || Staff || Nate Soares officially begins as the executive director of MIRI. Soares, who previously worked on decision theory and AI alignment, steps into this leadership role with the goal of pushing MIRI’s research agenda towards solving AI’s long-term safety challenges.<ref>{{cite web |url=http://lesswrong.com/lw/ma4/taking_the_reins_at_miri/ |title=Taking the reins at MIRI |author=Nate Soares |date=June 3, 2015 |publisher=LessWrong |accessdate=July 5, 2017}}</ref>
 +
 
 
|-
 
|-
| 2014 || {{dts|February 1}} || Publication || ''Smarter Than Us: The Rise of Machine Intelligence'' by Stuart Armstrong is published by MIRI.<ref>{{cite web |url=https://www.amazon.com/Smarter-Than-Us-Machine-Intelligence-ebook/dp/B00IB4N4KU |title=Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong |publisher=Amazon.com |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (February 1, 2014)}}</ref>
+
 
 +
| 2015 || {{dts|June 11}} || AMA || Nate Soares, MIRI's executive director, hosts an "ask me anything" (AMA) on the Effective Altruism Forum, engaging the community on topics ranging from AI alignment to his vision for MIRI’s future.<ref>{{cite web |url=http://effective-altruism.com/ea/ju/i_am_nate_soares_ama/ |title=I am Nate Soares, AMA! |publisher=Effective Altruism Forum |accessdate=July 5, 2017}}</ref>
 +
 
 
|-
 
|-
| 2014 || {{dts|March}}–May || Influence || [[wikipedia:Future of Life Institute|Future of Life Institute]] (FLI) is founded.<ref>{{cite web |url=http://lesswrong.com/lw/kcm/new_organization_future_of_life_institute_fli/ |title=New organization - Future of Life Institute (FLI) |author=Victoria Krakovna |accessdate=July 6, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]] |quote=As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.}}</ref> MIRI is a partner organization to FLI.<ref>{{cite web |url=https://futureoflife.org/news-from-our-partner-organizations/ |title=News from our Partner Organizations |publisher=Future of Life Institute |accessdate=July 6, 2017}}</ref> The Singularity Summit, MIRI's annual conference from 2006–2012, also played "a key causal role in getting [[wikipedia:Max Tegmark|Max Tegmark]] interested and the FLI created".<ref name="shulman_miri_causal_influences" /> "Tallinn, a co-founder of FLI and of the Cambridge Centre for the Study of Existential Risk (CSER), cites MIRI as a key source for his views on AI risk".<ref>{{cite web |url=https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/ |title=Assessing our past and potential impact |publisher=Machine Intelligence Research Institute |author=Rob Bensinger |date=August 10, 2015 |accessdate=July 6, 2017}}</ref>
+
 
 +
| 2015 || {{dts|June 12}}–14 || Workshop || The 2nd Introductory Workshop on Logical Decision Theory takes place, building on the first workshop’s success by providing advanced tutorials on decision-making theories relevant to AI alignment.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2014 || {{dts|March 12}}–13 || Staff || Some recent hires at MIRI are announced. Among the new team members is Nate Soares, who would become MIRI's executive director in 2015.<ref name="recent_hires_at_miri_mar_2014" /> MIRI also hosts an Expansion Party to announce these hires to local supporters.<ref>{{cite web |url=https://intelligence.org/2014/03/18/miris-march-2014-newsletter/ |title=MIRI's March 2014 Newsletter |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=March 18, 2014 |accessdate=May 27, 2018 |first=Luke |last=Muehlhauser |quote=We recently hired four new researchers, including two new Friendly AI researchers. We announced this to our local supporters at the recent MIRI Expansion Party.}}</ref><ref>{{cite web |url=https://www.facebook.com/pg/MachineIntelligenceResearchInstitute/photos/?tab=album&album_id=655204764516911 |title=Machine Intelligence Research Institute - Photos |publisher=Facebook |accessdate=May 27, 2018}}</ref><ref>{{cite web |url=https://rockstarresearch.com/miri-expansion-party-with-one-medical-group/ |title=MIRI Expansion Party with One Medical Group |publisher=Rockstar Research |accessdate=May 27, 2018 |first=Louie |last=Helm |quote=RSVP for the MIRI Expansion Party w/ One Medical – March 12, 2014}}</ref><ref>https://www.lesswrong.com/posts/fiTPe3qLeJe2curdX/one-medical-expansion-of-miri</ref>
+
 
 +
| 2015 || {{dts|June 26}}–28 || Workshop || The 1st Introductory Workshop on Vingean Reflection is held, focusing on how an AI system can reflect on and modify its own decision-making procedures in a safe and predictable manner.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2014 || {{dts|May 3}}–11 || Workshop || The 7th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
 
 +
| 2015 || {{dts|July 7}}–26 || Project || The MIRI Summer Fellows Program 2015, run by the Center for Applied Rationality (CFAR), is held. This fellowship aims to cultivate new talent for MIRI’s AI safety research, and it is described as "relatively successful" at recruiting new staff members.<ref>{{cite web |url=https://web.archive.org/web/20150717025843/http://rationality.org/miri-summer-fellows-2015 |title=MIRI Summer Fellows 2015 |publisher=CFAR |date=June 21, 2015 |accessdate=July 8, 2017}}</ref><ref>{{cite web |url=http://www.openphilanthropy.org/giving/grants/center-applied-rationality-general-support |title=Center for Applied Rationality — General Support |publisher=Open Philanthropy |accessdate=July 8, 2017 |quote=We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI.}}</ref>
 +
 
 
|-
 
|-
| 2014 || {{dts|July}}–September || Influence || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published. While Bostrom has never worked for MIRI, he is a research advisor to MIRI. MIRI also contributed substantially to the publication of the book.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |accessdate=July 6, 2017 |date=September 17, 2015}}</ref>
+
 
 +
| 2015 || {{dts|August 7}}–9 || Workshop || The 2nd Introductory Workshop on Logical Uncertainty takes place, continuing the discussion on how AI systems can make reliable decisions under uncertainty, which is critical to ensuring AI safety in complex, real-world environments.<ref name="workshops" />
 
|-
 
|-
| 2014 || {{dts|July 4}} || Project || Earliest evidence of AI Impacts existing is from this day.<ref>{{cite web |url=https://web.archive.org/web/20141129001325/http://www.aiimpacts.org:80/system/app/pages/recentChanges |title=Recent site activity - AI Impacts |accessdate=June 30, 2017 |quote=Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines}}</ref>
+
| 2015 || {{dts|August 28}}–30 || Workshop || The 3rd Introductory Workshop on Logical Decision Theory is held, focusing on refining decision-making frameworks for AI systems. Attendees delve deeper into logical decision theories, specifically how AI agents can navigate decision-making scenarios with incomplete information, ensuring robustness and safety.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2014 || {{dts|August}} || Project || The AI Impacts website launches.<ref>{{cite web |url=https://intelligence.org/2014/09/01/september-newsletter-2/ |title=MIRI's September Newsletter |publisher=Machine Intelligence Research Institute |date=September 1, 2014 |accessdate=July 15, 2017 |quote=Paul Christiano and Katja Grace have launched a new website containing many analyses related to the long-term future of AI: AI Impacts.}}</ref>
+
 
 +
| 2015 || {{dts|September 26}} || Outside review || The Effective Altruism Wiki page on MIRI is created. This page provides an overview of the Machine Intelligence Research Institute's work and its mission to reduce existential risks associated with artificial intelligence, making its projects and goals more accessible to the Effective Altruism community.<ref>{{cite web|url = http://effective-altruism.wikia.com/wiki/Library/Machine_Intelligence_Research_Institute?oldid=4576|title = Library/Machine Intelligence Research Institute|publisher = Effective Altruism Wikia|date = September 26, 2015|accessdate = July 15, 2017}}</ref>
 
|-
 
|-
| 2014 || {{dts|November 4}} || Project || The Intelligent Agent Foundations Forum, run by MIRI, is launched.<ref>{{cite web |url=https://agentfoundations.org/item?id=1 |website=Intelligent Agent Foundations Forum |title=Welcome! |author=Benja Fallenstein |accessdate=June 30, 2017 |quote=post by Benja Fallenstein 969 days ago}}</ref>
+
| 2016 || || Publication || MIRI commissions Eliezer Yudkowsky to produce AI alignment content for Arbital, a platform that aims to explain complex technical concepts in a way accessible to a broader audience. The goal of the project is to provide more detailed educational material on AI safety and alignment, breaking down difficult AI risk topics for readers at all levels.<ref>{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison |publisher=Effective Altruism Forum |author=Larks |date=December 13, 2016 |accessdate=July 8, 2017}}</ref><ref>{{cite web |url=https://arbital.com/explore/ai_alignment/ |title=Arbital AI Alignment Exploration |accessdate=January 30, 2018}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|January}} || Project || AI Impacts rolls out a new website.<ref>{{cite web |url=https://intelligence.org/2015/01/11/improved-ai-impacts-website/ |title=An improved "AI Impacts" website |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=January 11, 2015 |accessdate=June 30, 2017}}</ref>
+
 
 +
| 2016 || {{dts|March 30}} || Staff || MIRI announces the promotion of two key staff members. Malo Bourgon, who had been serving as a program management analyst, steps into the role of Chief Operating Officer (COO). Additionally, Rob Bensinger, previously an outreach coordinator, becomes the Research Communications Manager. This internal reshuffle signals a strengthening of MIRI’s operational and research communications capacities as it expands its AI alignment work.<ref>{{cite web|url=https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/|title=MIRI has a new COO: Malo Bourgon |last=Soares |first=Nate |date=March 30, 2016 |accessdate=September 15, 2019 |publisher=Machine Intelligence Research Institute}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|January 2}}–5 || Conference || ''The Future of AI: Opportunities and Challenges'', an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute, but several MIRI staff (including Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares) attend.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref> Nate Soares would later call this the "turning point" of when top academics begin to focus on AI risk.<ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref>
+
 
 +
| 2016 || {{dts|April 1}}–3 || Workshop || The Self-Reference, Type Theory, and Formal Verification Workshop takes place. This workshop focuses on advancing formal methods in AI, particularly on how self-referential AI systems can be verified to ensure they behave in alignment with human values. Type theory and formal verification are essential tools in AI safety, helping ensure that AI systems can reason about their own decisions safely.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|March 11}} || Influence || ''Rationality: From AI to Zombies'' is published. It is an ebook of Eliezer Yudkowsky's series of blog posts, called "the Sequences".<ref>{{cite web |url=http://lesswrong.com/lw/lvb/rationality_from_ai_to_zombies/ |title=Rationality: From AI to Zombies |date=March 13, 2015 |author=RobbBB |accessdate=July 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/g6/rationality_from_ai_to_zombies_was_released_today/ |title=Rationality: From AI to Zombies was released today! |publisher=Effective Altruism Forum |author=Ryan Carey |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=https://smile.amazon.com/Rationality-AI-Zombies-Eliezer-Yudkowsky-ebook/dp/B00ULP6EW2/ |title=Rationality: From AI to Zombies - Kindle edition by Eliezer Yudkowsky. Health, Fitness & Dieting Kindle eBooks @ Amazon.com. |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (March 11, 2015)}}</ref>
+
 
 +
| 2016 || {{dts|May 6}} (talk), {{dts|December 28}} (transcript release) || Publication || In May 2016, Eliezer Yudkowsky gives a talk titled "AI Alignment: Why It’s Hard, and Where to Start" at Stanford University. Yudkowsky discusses the technical difficulties in aligning AI systems with human values, drawing attention to the challenges involved in controlling advanced AI systems. An edited version of this transcript is released on the MIRI blog in December 2016, where it becomes a key reference for researchers working on AI safety.<ref>{{cite web|url=https://intelligence.org/stanford-talk/|title=The AI Alignment Problem: Why It’s Hard, and Where to Start |date=May 6, 2016 |accessdate=May 7, 2020}}</ref><ref>{{cite web|url=https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/|title=AI Alignment: Why It’s Hard, and Where to Start |last=Yudkowsky |first=Eliezer |date=December 28, 2016 |accessdate=May 7, 2020}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|May 4}}–6 || Workshop || The 1st Introductory Workshop on Logical Decision Theory takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|May 28}}–29 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Transparency takes place. This event focuses on the importance of transparency in AI systems, particularly how to ensure that advanced AI systems are interpretable and understandable by humans, which is critical to ensuring safe AI alignment.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|May 6}} || Staff || Executive director Luke Muehlhauser announces his departure from MIRI, for a position as a Research Analyst at Open Philanthropy. The announcement also states that Nate Soares will be the new executive director.<ref>{{cite web |url=https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/ |title=A fond farewell and a new Executive Director |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=May 6, 2015 |accessdate=June 30, 2017}}</ref>
+
 
 +
| 2016 || {{dts|June 4}}–5 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Robustness and Error-Tolerance takes place. The focus of this workshop is on developing AI systems that are robust to errors and can tolerate uncertainty, further contributing to safe deployment of AI systems in unpredictable real-world environments.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|May 13}}–19 || Conference || Along with the Centre for the Study of Existential Risk, MIRI organizes the Self-prediction in Decision Theory and Artificial Intelligence Conference. Several MIRI researchers present at the conference.<ref>{{cite web |url=https://www.phil.cam.ac.uk/events/decision-theory-conf |title=Self-prediction in Decision Theory and Artificial Intelligence — Faculty of Philosophy |accessdate=February 24, 2018}}</ref>
+
 
 +
| 2016 || {{dts|June 11}}–12 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Preference Specification is held. The workshop deals with the critical task of correctly specifying human preferences in AI systems, an essential aspect of AI alignment to ensure that the systems act in ways that reflect human values.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|May 29}}–31 || Workshop || The 1st Introductory Workshop on Logical Uncertainty takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|June 17}} || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Agent Models and Multi-Agent Dilemmas takes place, focusing on how AI systems can interact safely in multi-agent scenarios where the goals of different systems might conflict. This research is crucial for building AI systems that can cooperate or avoid harmful competition.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|June 3}}–4 || Staff || Nate Soares begins as executive director of MIRI.<ref name="soares_taking_the_reins_at_miri">{{cite web |url=http://lesswrong.com/lw/ma4/taking_the_reins_at_miri/ |title=Taking the reins at MIRI |author=Nate Soares|date=June 3, 2015 |accessdate=July 5, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
+
 
 +
| 2016 || {{dts|July 27}} || Publication || MIRI announces its new technical agenda with the release of the paper "Alignment for Advanced Machine Learning Systems". The paper outlines the necessary steps for ensuring machine learning systems are aligned with human values as they become increasingly powerful. This agenda sets the course for MIRI’s future research efforts on machine learning and AI safety.<ref>{{cite web |url=https://intelligence.org/2016/07/27/alignment-machine-learning/ |title=New paper: "Alignment for advanced machine learning systems" |publisher=Machine Intelligence Research Institute |date=July 27, 2016 |author=Rob Bensinger |accessdate=July 1, 2017}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|June 11}} || || Nate Soares, executive director of MIRI, does an "ask me anything" (AMA) on the Effective Altruism Forum.<ref>{{cite web |url=http://effective-altruism.com/ea/ju/i_am_nate_soares_ama/ |title=I am Nate Soares, AMA! |publisher=Effective Altruism Forum |accessdate=July 5, 2017}}</ref>
+
 
 +
| 2016 || {{dts|August}} || Financial || Open Philanthropy awards MIRI a $500,000 grant for general support. Despite reservations about MIRI’s technical research, the grant is awarded to support MIRI’s broader mission of reducing AI-related risks. This grant illustrates Open Philanthropy’s acknowledgment of the importance of MIRI’s work on AI alignment, despite differing opinions on technical approaches.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support |title=Machine Intelligence Research Institute — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|June 12}}–14 || Workshop || The 2nd Introductory Workshop on Logical Decision Theory takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|August 12}}–14 || Workshop || The 8th Workshop on Logic, Probability, and Reflection is held, continuing MIRI’s tradition of exploring how logic and probability can be used to reason about self-reflection in AI systems. This is a critical aspect of building AI systems capable of safely understanding their own behavior and decision-making processes.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|June 26}}–28 || Workshop || The 1st Introductory Workshop on Vingean Reflection takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|August 26}}–28 || Workshop || The 1st Workshop on Machine Learning and AI Safety is held. This inaugural event focuses on the emerging field of AI safety in the context of machine learning, emphasizing the need for alignment in rapidly evolving machine learning models.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|July 7}}–26 || Project || The MIRI Summer Fellows program 2015, run by the Center for Applied Rationality, takes place.<ref>{{cite web |url=https://web.archive.org/web/20150717025843/http://rationality.org/miri-summer-fellows-2015 |title=MIRI Summer Fellows 2015 |publisher=CFAR |date=June 21, 2015 |accessdate=July 8, 2017}}</ref> This program is apparently "relatively successful at recruiting staff for MIRI".<ref>{{cite web |url=http://www.openphilanthropy.org/giving/grants/center-applied-rationality-general-support |title=Center for Applied Rationality — General Support |publisher=Open Philanthropy |accessdate=July 8, 2017 |quote=We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI.}}</ref>
+
 
 +
| 2016 || {{dts|September 12}} || Publication || MIRI releases the paper "Logical Induction" by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor. The 130-page paper presents a novel approach to reasoning under logical uncertainty, addressing a long-standing open problem and opening new possibilities for analyzing how AI systems can reason safely. The work attracts favorable attention outside MIRI; Scott Aaronson, for example, highlights it on his blog as a major new paper.<ref>{{cite web |url=https://intelligence.org/2016/09/12/new-paper-logical-induction/ |title=New paper: "Logical induction" |publisher=Machine Intelligence Research Institute |date=September 12, 2016 |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=http://www.scottaaronson.com/blog/?p=2918 |title=Shtetl-Optimized » Blog Archive » Stuff That's Happened |date=October 9, 2016 |author=Scott Aaronson |accessdate=July 1, 2017 |quote=Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction".}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|August 7}}–9 || Workshop || The 2nd Introductory Workshop on Logical Uncertainty takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|October 12}} || AMA || MIRI hosts an "Ask Me Anything" (AMA) session on the Effective Altruism Forum, giving the community an opportunity to ask questions about MIRI’s work, AI alignment, and related technical research. Rob Bensinger, Nate Soares, and other MIRI staff participate, discussing ongoing projects and research approaches in AI alignment and safety.<ref>{{cite web |url=http://effective-altruism.com/ea/12r/ask_miri_anything_ama/ |title=Ask MIRI Anything (AMA) |publisher=Effective Altruism Forum |date=October 11, 2016 |author=Rob Bensinger |accessdate=July 5, 2017}}</ref>
 +
 
 
|-
 
|-
| 2015 || {{dts|August 28}}–30 || Workshop || The 3rd Introductory Workshop on Logical Decision Theory takes place.<ref name="workshops" />
+
 
 +
| 2016 || {{dts|October 21}}–23 || Workshop || The 2nd Workshop on Machine Learning and AI Safety is held. The event continues from the first workshop earlier in the year, with a greater focus on understanding how to make machine learning systems safer as they grow in complexity. Topics discussed include adversarial training, model interpretability, and alignment techniques for machine learning models.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2015 || {{dts|September 26}} || Outside review || The Effective Altruism Wiki page on MIRI is created.<ref>{{cite web|url = http://effective-altruism.wikia.com/wiki/Library/Machine_Intelligence_Research_Institute?oldid=4576|title = Library/Machine Intelligence Research Institute|publisher = Effective Altruism Wikia|date = September 26, 2015|accessdate = July 15, 2017}}</ref>
+
 
 +
| 2016 || {{dts|November 11}}–13 || Workshop || The 9th Workshop on Logic, Probability, and Reflection is held. This workshop delves further into how AI systems can use logical reasoning to improve decision-making under uncertainty. This remains a cornerstone of MIRI's approach to AI safety, where the focus is on creating systems that can handle complex real-world scenarios with logical consistency.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2016 || || Publication || MIRI pays Eliezer Yudkowsky to produce AI alignment content for Arbital.<ref>{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison |publisher=Effective Altruism Forum |author=Larks |date=December 13, 2016 |accessdate=July 8, 2017}}</ref><ref>{{cite web |url=https://arbital.com/explore/ai_alignment/ |title=Arbital |accessdate=January 30, 2018}}</ref> (Not sure if there are any more details of this available.)
+
 
 +
| 2016 || {{dts|December}} || Financial || Open Philanthropy awards a $32,000 grant to AI Impacts, a project that aims to understand and evaluate the long-term risks of advanced artificial intelligence. The grant supports AI Impacts’ research and its efforts to provide clearer timelines and risk assessments of AI development.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support |title=AI Impacts — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|March 30}} || Staff || MIRI announces two internal staff promotions: Malo Bourgon, formerly a program management analyst, becomes Chief Operating Officer (COO). Also, Rob Bensinger, who was previously outreach coordinator, is promoted to the role of research communications manager.<ref>{{cite web|url = https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/|title = MIRI has a new COO: Malo Bourgon|last = Soares|first = Nate|date = March 30, 2016|accessdate = September 15, 2019|publisher = Machine Intelligence Research Institute}}</ref>
+
 
 +
| 2016 || {{dts|December 1}}–3 || Workshop || The 3rd Workshop on Machine Learning and AI Safety is held, capping off a year of significant progress in AI safety research. This workshop provides an opportunity for researchers to reflect on the advancements made throughout the year and to identify new challenges for machine learning systems as AI capabilities expand.<ref name="workshops" />
 
|-
 
|-
| 2016 || {{dts|April 1}}–3 || Workshop || The Workshop on Self-Reference, Type Theory, and Formal Verification takes place.<ref name="workshops" />
+
| 2017 || {{dts|March 25}}–26 || Workshop || The Workshop on Agent Foundations and AI Safety takes place. This workshop focuses on exploring foundational questions in AI safety, particularly the design of highly reliable agents that can reason under uncertainty and avoid catastrophic behaviors. Discussions center on robust agent design, decision theory, and safe AI deployment strategies.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2016 || {{dts|May 6}} (talk), {{dts|December 28}} (transcript release) || Publication || In May 2016, Eliezer Yudkowsky gives a talk titled "AI Alignment: Why It’s Hard, and Where to Start." On December 28, 2016, an edited version of the transcript is released on the MIRI blog.<ref>{{cite web|url = https://intelligence.org/stanford-talk/|title = The AI Alignment Problem: Why It’s Hard, and Where to Start|date = May 6, 2016|accessdate = May 7, 2020}}</ref><ref>{{cite web|url = https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/|title = AI Alignment: Why It’s Hard, and Where to Start|last = Yudkowsky|first = Eliezer|date = December 28, 2016|accessdate = May 7, 2020}}</ref>
+
 
 +
| 2017 || {{dts|April 1}}–2 || Workshop || The 4th Workshop on Machine Learning and AI Safety takes place, continuing to build upon previous workshops' discussions on ensuring machine learning models are aligned with human values. Topics include improving adversarial robustness, preventing unintended consequences from AI systems, and safe reinforcement learning. The goal is to ensure that as AI systems become more complex, they do not act unpredictably.<ref name="workshops" />
 +
 
 
|-
 
|-
| 2016 || {{dts|May 28}}–29 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Transparency takes place.<ref name="workshops" />
+
 
 +
| 2017 || {{dts|May 24}} || Publication || The influential paper "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on arXiv. This paper surveys AI experts to estimate when AI systems will outperform humans in various tasks. Two researchers from AI Impacts are co-authors. The paper gains widespread media attention, with over twenty news outlets discussing its implications for AI timelines and the potential risks associated with AI surpassing human intelligence.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|June 4}}–5 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Robustness and Error-Tolerance takes place.<ref name="workshops" />
+
 
 +
| 2017 || {{dts|July 4}} || Strategy || MIRI announces a strategic shift, stating that it will be scaling back efforts on its "Alignment for Advanced Machine Learning Systems" agenda. This is due to the departure of key researchers Patrick LaVictoire and Jessica Taylor, and Andrew Critch taking leave. As a result, MIRI refocuses its research priorities while maintaining its commitment to AI safety.<ref>{{cite web |url=https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ |title=Updates to the research team, and a major donation - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=July 4, 2017 |accessdate=July 4, 2017}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|June 11}}–12 || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Preference Specification takes place.<ref name="workshops" />
+
 
 +
| 2017 || {{dts|July 7}} || Outside Review || Daniel Dewey, a program officer at Open Philanthropy, publishes a post titled "My Current Thoughts on MIRI's Highly Reliable Agent Design Work". Dewey presents a critique of MIRI's approach to AI safety, arguing that while highly reliable agent design is an important area of research, other approaches such as learning to reason from humans may offer more promising paths to AI alignment. His post provides valuable insight into ongoing debates about AI safety strategies.<ref>{{cite web |url=http://effective-altruism.com/ea/1ca/my_current_thoughts_on_miris_highly_reliable/ |title=My Current Thoughts on MIRI's "Highly Reliable Agent Design" Work |author=Daniel Dewey |date=July 7, 2017 |publisher=Effective Altruism Forum |accessdate=July 7, 2017}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|June 17}} || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Agent Models and Multi-Agent Dilemmas takes place.<ref name="workshops" />
+
 
 +
| 2017 || {{dts|July 14}} || Outside Review || The timelines wiki page on MIRI is publicly circulated. This wiki page documents the historical developments of MIRI's work, making it a valuable resource for understanding the evolution of AI safety research at the institute.
 +
 
 
|-
 
|-
| 2016 || {{dts|July 27}} || || MIRI announces its machine learning technical agenda, called "Alignment for Advanced Machine Learning Systems".<ref>{{cite web |url=https://intelligence.org/2016/07/27/alignment-machine-learning/ |title=New paper: "Alignment for advanced machine learning systems" |publisher=Machine Intelligence Research Institute |date=July 27, 2016 |author=Rob Bensinger |accessdate=July 1, 2017}}</ref>
+
 
 +
| 2017 || {{dts|October 13}} || Publication || The paper "Functional Decision Theory: A New Theory of Instrumental Rationality" by Eliezer Yudkowsky and Nate Soares is posted on arXiv. This paper introduces Functional Decision Theory (FDT), a new framework for AI decision-making that differs from traditional decision theories. The authors argue that FDT offers better solutions to certain types of decision problems and could lead to safer AI systems. This paper marks a significant contribution to AI alignment research.<ref>{{cite web |url=https://arxiv.org/abs/1710.05060 |title=[1710.05060] Functional Decision Theory: A New Theory of Instrumental Rationality |accessdate=October 22, 2017 |quote=Submitted on 13 Oct 2017 |first1=Eliezer |last1=Yudkowsky |first2=Nate |last2=Soares}}</ref><ref>{{cite web |url=https://intelligence.org/2017/10/22/fdt/ |title=New Paper: "Functional Decision Theory" |publisher=Machine Intelligence Research Institute |author=Matthew Graves |date=October 22, 2017 |accessdate=October 22, 2017}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|August}} || Financial || Open Philanthropy awards a grant worth $500,000 to Machine Intelligence Research Institute. The grant writeup notes, "Despite our strong reservations about the technical research we reviewed, we felt that awarding $500,000 was appropriate for multiple reasons".<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support |title=Machine Intelligence Research Institute — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref>
+
 
 +
| 2017 || {{dts|October 13}} || Publication || Eliezer Yudkowsky publishes the blog post "There’s No Fire Alarm for Artificial General Intelligence" on the MIRI blog and on the newly relaunched LessWrong platform. In this post, Yudkowsky argues that there will be no clear "warning" or fire alarm to signal the arrival of AGI, making it crucial to prepare for AGI's development ahead of time. This post sparks significant discussion in the AI safety community.<ref>{{cite web|url=https://intelligence.org/2017/10/13/fire-alarm/|title=There’s No Fire Alarm for Artificial General Intelligence |date=October 13, 2017 |publisher=Machine Intelligence Research Institute |accessdate=April 19, 2020}}</ref><ref>{{cite web|url=https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence|title=There's No Fire Alarm for Artificial General Intelligence |date=October 13, 2017 |publisher=LessWrong |accessdate=April 19, 2020}}</ref>
 +
 
 
|-
 
|-
| 2016 || {{dts|August 12}}–14 || Workshop || The 8th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
+
 
|-
+
| 2017 || {{dts|October}} || Financial || Open Philanthropy awards MIRI a $3.75 million grant over three years ($1.25 million per year). The decision to award the grant is partly due to the positive reception of MIRI's "Logical Induction" paper, as well as the increased number of grants Open Philanthropy had made in the area of AI safety, allowing them to provide support to MIRI without it appearing as an outsized endorsement of one approach. This grant is a major financial boost for MIRI, enabling them to continue their work on AI safety and alignment.<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A Major Grant from Open Philanthropy |author=Malo Bourgon |publisher=Machine Intelligence Research Institute |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
| 2016 || {{dts|August 26}}–28 || Workshop || The 1st Workshop on Machine Learning and AI Safety takes place.<ref name="workshops" />
 
|-
 
| 2016 || {{dts|September 12}} || Publication || MIRI announces the release of its new paper, "Logical Induction" by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor.<ref>{{cite web |url=https://intelligence.org/2016/09/12/new-paper-logical-induction/ |title=New paper: "Logical induction" |publisher=Machine Intelligence Research Institute |date=March 23, 2017 |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=http://www.scottaaronson.com/blog/?p=2918 |title=Shtetl-Optimized » Blog Archive » Stuff That's Happened |date=October 9, 2016 |author=Scott Aaronson |accessdate=July 1, 2017 |quote=Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction".}}</ref> A positive review of the paper by a machine learning researcher would be cited as a reason for Open Philanthropy's grant to MIRI in October 2017.
 
|-
 
| 2016 || {{dts|October 12}} || || MIRI does an "ask me anything" (AMA) on the Effective Altruism Forum.<ref>{{cite web |url=http://effective-altruism.com/ea/12r/ask_miri_anything_ama/ |title=Ask MIRI Anything (AMA) |publisher=Effective Altruism Forum |date=October 11, 2016 |author=Rob Bensinger |accessdate=July 5, 2017}}</ref>
 
|-
 
| 2016 || {{dts|October 21}}–23 || Workshop || The 2nd Workshop on Machine Learning and AI Safety takes place.<ref name="workshops" />
 
|-
 
| 2016 || {{dts|November 11}}–13 || Workshop || The 9th Workshop on Logic, Probability, and Reflection takes place.<ref name="workshops" />
 
|-
 
| 2016 || {{dts|December}} || Financial || Open Philanthropy awards a grant worth $32,000 to AI Impacts.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support |title=AI Impacts — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref>
 
|-
 
| 2016 || {{dts|December 1}}–3 || Workshop || The 3rd Workshop on Machine Learning and AI Safety takes place.<ref name="workshops" />
 
|-
 
| 2017 || {{dts|March 25}}–26 || Workshop || The Workshop on Agent Foundations and AI Safety takes place.<ref name="workshops" />
 
|-
 
| 2017 || {{dts|April 1}}–2 || Workshop || The 4th Workshop on Machine Learning and AI Safety takes place.<ref name="workshops" />
 
|-
 
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the [[wikipedia:arXiv|arXiv]].<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref> Two researchers from AI Impacts are authors on the paper. The paper would be mentioned in more than twenty news articles.<ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref>
 
|-
 
| 2017 || {{dts|July 4}} || Strategy || MIRI announces that it will be putting relatively little work into the "Alignment for Advanced Machine Learning Systems" agenda over the next year due to the departure of Patrick LaVictoire and Jessica Taylor, and leave taken by Andrew Critch.<ref>{{cite web |url=https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ |title=Updates to the research team, and a major donation - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=July 4, 2017 |accessdate=July 4, 2017}}</ref>
 
|-
 
| 2017 || {{dts|July 7}} || Outside review || Daniel Dewey, program officer for potential risks from advanced artificial intelligence at Open Philanthropy, publishes a post giving his thoughts on MIRI's work on highly reliable agent design. The post is intended to provide "an unambiguous snapshot" of Dewey's beliefs, and gives the case for highly reliable agent design work (as he understands it) and why he finds other approaches (such as learning to reason from humans) more promising.<ref>{{cite web |url=http://effective-altruism.com/ea/1ca/my_current_thoughts_on_miris_highly_reliable/ |title=My current thoughts on MIRI's "highly reliable agent design" work |author=Daniel Dewey |date=July 7, 2017 |publisher=Effective Altruism Forum |accessdate=July 7, 2017}}</ref>
 
|-
 
| 2017 || {{dts|July 14}} || Outside review || The timelines wiki page on MIRI is publicly circulated (see [[#External links|§ External links]]).
 
|-
 
| 2017 || {{dts|October 13}} || Publication || "Functional Decision Theory: A New Theory of Instrumental Rationality" by {{W|Eliezer Yudkowsky}} and Nate Soares is posted to the {{w|arXiv}}.<ref>{{cite web |url=https://arxiv.org/abs/1710.05060 |title=[1710.05060] Functional Decision Theory: A New Theory of Instrumental Rationality |accessdate=October 22, 2017 |quote=Submitted on 13 Oct 2017 |first1=Eliezer |last1=Yudkowsky |first2=Nate |last2=Soares}}</ref> The paper is announced on the {{w|Machine Intelligence Research Institute}} blog on October 22.<ref>{{cite web |url=https://intelligence.org/2017/10/22/fdt/ |title=New paper: "Functional Decision Theory" - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=October 22, 2017 |author=Matthew Graves |accessdate=October 22, 2017}}</ref>
 
|-
 
| 2017 || {{dts|October 13}} || Publication || Eliezer Yudkowsky's blog post ''There's No Fire Alarm for Artificial General Intelligence'' is published on the MIRI blog and on the new LessWrong (this is shortly after the launch of the new version of LessWrong).<ref>{{cite web|url = https://intelligence.org/2017/10/13/fire-alarm/|title = There’s No Fire Alarm for Artificial General Intelligence|date = October 13, 2017|accessdate = April 19, 2020|publisher = Machine Intelligence Research Institute}}</ref><ref>{{cite web|url = https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence|title = There's No Fire Alarm for Artificial General Intelligence|date = October 13, 2017|accessdate = April 19, 2020|last = Yudkowsky|first = Eliezer|publisher = LessWrong}}</ref>
 
|-
 
| 2017 || {{dts|October}} || Financial || Open Philanthropy awards MIRI a grant of $3.75 million over three years ($1.25 million per year). The cited reasons for the grant are a "very positive review" of MIRI's "Logical Induction" paper by an "outstanding" machine learning researcher, as well as Open Philanthropy having made more grants in the area so that a grant to MIRI is less likely to appear as an "outsized endorsement of MIRI's approach".<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A major grant from Open Philanthropy |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref>
 
 
|-
 
|-
 +
 
| 2017 || {{dts|November 16}} || Publication || {{W|Eliezer Yudkowsky}}'s sequence/book ''Inadequate Equilibria'' is fully published. The book was published chapter-by-chapter on LessWrong 2.0 and the Effective Altruism Forum starting October 28.<ref>{{cite web |url=https://www.lesserwrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty |title=Inadequacy and Modesty |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/1g4/inadequacy_and_modesty/ |title=Inadequacy and Modesty |publisher=Effective Altruism Forum |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=https://equilibriabook.com/discussion/ |title=Discussion - Inadequate Equilibria |publisher=Inadequate Equilibria |accessdate=December 12, 2017}}</ref> The book is reviewed on multiple blogs including Slate Star Codex (Scott Alexander),<ref>{{cite web |url=http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/ |title=Book Review: Inadequate Equilibria |date=December 9, 2017 |publisher=Slate Star Codex |accessdate=December 12, 2017}}</ref> Shtetl-Optimized ({{W|Scott Aaronson}}),<ref>{{cite web |url=https://www.scottaaronson.com/blog/?p=3535 |title=Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky |accessdate=December 12, 2017}}</ref> and Overcoming Bias ({{W|Robin Hanson}}).<ref>{{cite web |url=http://www.overcomingbias.com/2017/11/why-be-contrarian.html |title=Overcoming Bias : Why Be Contrarian? |date=November 25, 2017 |author=Robin Hanson |accessdate=December 12, 2017}}</ref> The book outlines Yudkowsky's approach to epistemology, covering topics such as whether to trust expert consensus and whether one can expect to do better than average.
 
| 2017 || {{dts|November 16}} || Publication || {{W|Eliezer Yudkowsky}}'s sequence/book ''Inadequate Equilibria'' is fully published. The book was published chapter-by-chapter on LessWrong 2.0 and the Effective Altruism Forum starting October 28.<ref>{{cite web |url=https://www.lesserwrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty |title=Inadequacy and Modesty |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/1g4/inadequacy_and_modesty/ |title=Inadequacy and Modesty |publisher=Effective Altruism Forum |accessdate=October 29, 2017}}</ref><ref>{{cite web |url=https://equilibriabook.com/discussion/ |title=Discussion - Inadequate Equilibria |publisher=Inadequate Equilibria |accessdate=December 12, 2017}}</ref> The book is reviewed on multiple blogs including Slate Star Codex (Scott Alexander),<ref>{{cite web |url=http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/ |title=Book Review: Inadequate Equilibria |date=December 9, 2017 |publisher=Slate Star Codex |accessdate=December 12, 2017}}</ref> Shtetl-Optimized ({{W|Scott Aaronson}}),<ref>{{cite web |url=https://www.scottaaronson.com/blog/?p=3535 |title=Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky |accessdate=December 12, 2017}}</ref> and Overcoming Bias ({{W|Robin Hanson}}).<ref>{{cite web |url=http://www.overcomingbias.com/2017/11/why-be-contrarian.html |title=Overcoming Bias : Why Be Contrarian? |date=November 25, 2017 |author=Robin Hanson |accessdate=December 12, 2017}}</ref> The book outlines Yudkowsky's approach to epistemology, covering topics such as whether to trust expert consensus and whether one can expect to do better than average.
 
|-
 
|-
Line 443: Line 590:
 
| 2020 || {{dts|December 21}} || Strategy || Malo Bourgon publishes MIRI's "2020 Updates and Strategy" blog post. The post talks about MIRI's efforts to relocate staff after the {{w|COVID-19 pandemic}} as well as the generally positive result of the changes, and possible future implications for MIRI itself moving out of the Bay Area. It also talks about slow progress on the research directions initiated in 2017, leading to MIRI feeling the need to change course. The post also talks about the public part of MIRI's progress in other research areas.<ref>{{cite web|url = https://intelligence.org/2020/12/21/2020-updates-and-strategy/|title = 2020 Updates and Strategy|last = Bourgon|first = Malo|date = December 21, 2020|accessdate = December 22, 2020|publisher = Machine Intelligence Research Institute}}</ref>
 
| 2020 || {{dts|December 21}} || Strategy || Malo Bourgon publishes MIRI's "2020 Updates and Strategy" blog post. The post talks about MIRI's efforts to relocate staff after the {{w|COVID-19 pandemic}} as well as the generally positive result of the changes, and possible future implications for MIRI itself moving out of the Bay Area. It also talks about slow progress on the research directions initiated in 2017, leading to MIRI feeling the need to change course. The post also talks about the public part of MIRI's progress in other research areas.<ref>{{cite web|url = https://intelligence.org/2020/12/21/2020-updates-and-strategy/|title = 2020 Updates and Strategy|last = Bourgon|first = Malo|date = December 21, 2020|accessdate = December 22, 2020|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2021 || {{dts|May 8}} || || Rob Bensinger publishes a post on LessWrong providing an update on MIRI's current thoughts regarding the possibility of relocation from the San Francisco Bay Area.<ref name=miri-move-from-bay-area>{{cite web|url = https://www.lesswrong.com/posts/SgszmZwrDHwG3qurr/miri-location-optimization-and-related-topics-discussion|title = MIRI location optimization (and related topics) discussion|date = May 8, 2021|accessdate = May 31, 2021|last = Bensinger|first = Rob|publisher = LessWrong}}</ref>
+
| 2021 || {{dts|May 8}} || || Rob Bensinger publishes a post on LessWrong providing an update on MIRI's ongoing considerations regarding relocation from the San Francisco Bay Area. The post opens a community discussion but lacks a definitive conclusion or subsequent actions, reflecting some internal uncertainty.<ref name=miri-move-from-bay-area>{{cite web|url = https://www.lesswrong.com/posts/SgszmZwrDHwG3qurr/miri-location-optimization-and-related-topics-discussion|title = MIRI location optimization (and related topics) discussion|date = May 8, 2021|accessdate = May 31, 2021|last = Bensinger|first = Rob|publisher = LessWrong}}</ref>
 +
|-
 +
| 2021 || {{dts|May 13}} || Financial || MIRI announces two major donations: $15,592,829 in MakerDAO (MKR) from an anonymous donor, with a restriction to spend a maximum of $2.5 million per year until 2024, and 1050 ETH from Vitalik Buterin, worth $4,378,159. While the MKR gift is MIRI's largest single donation to date, the spending restriction limits MIRI's ability to make immediate strategic investments with the funds.<ref>{{cite web|url = https://intelligence.org/2021/05/13/two-major-donations/|title = Our all-time largest donation, and major crypto support from Vitalik Buterin|author = Colm Ó Riain|date = May 13, 2021|accessdate = May 31, 2021}}</ref>
 +
|-
 +
| 2021 || {{dts|May 23}} || || In a talk, MIRI researcher Scott Garrabrant introduces "finite factored sets" as an alternative to the Pearlian paradigm of causal inference. The concept generates some interest in the AI safety community, particularly on LessWrong, but does not significantly shift the broader landscape of causal inference research.<ref>{{cite web|url = https://intelligence.org/2021/05/23/finite-factored-sets/|title = Finite Factored Sets|last = Garrabrant|first = Scott|date = May 23, 2021|accessdate = May 31, 2021|publisher = Machine Intelligence Research Institute}}</ref>
 +
|-
 +
| 2021 || {{dts|July 1}} || || An update is added to Rob Bensinger's May 8 post about MIRI's potential relocation. The update links to a comment by board member Blake Borgeson, who had been tasked with coordinating MIRI's relocation decision. For now, MIRI decides against relocating, citing uncertainty about its long-term strategy, while giving staff more flexibility to work remotely.<ref name=miri-move-from-bay-area/>
 +
|-
 +
| 2021 || {{dts|November 15}} || || Several private conversations between MIRI researchers (Eliezer Yudkowsky, Nate Soares, Rob Bensinger) and others in the AI safety community are published to the Alignment Forum and cross-posted to LessWrong and the Effective Altruism Forum. These conversations, titled "Late 2021 MIRI Conversations," attract moderate attention and foster some debate, particularly within niche AI safety circles, but do not significantly influence broader community consensus.<ref>{{cite web|url = https://www.alignmentforum.org/s/n945eovrA3oDueqtq|title = Late 2021 MIRI Conversations|last = Bensinger|first = Rob|date = November 15, 2021|accessdate = December 1, 2021|publisher = Alignment Forum}}</ref>
 +
|-
 +
| 2021 || {{dts|November 29}} || || MIRI announces on the Alignment Forum that it is seeking assistance with its Visible Thoughts Project. Despite offering bounties for contributions, the project does not attract significant participation, indicating either a lack of interest or challenges in community engagement.<ref>{{cite web|url = https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement|title = Visible Thoughts Project and Bounty Announcement|last = Soares|first = Nate|date = November 29, 2021|accessdate = December 2, 2021}}</ref>
 +
|-
 +
| 2021 || {{dts|December}} || Financial || MIRI offers $200,000 to build an AI-dungeon-style writing dataset annotated with thoughts, and an additional $1,000,000 for scaling it 10x. The Visible Thoughts Project, while promising substantial incentives, struggles with engagement issues and fails to yield the expected contributions and outputs.<ref>{{cite web|url = https://intelligence.org/2021/12/31/december-2021-newsletter/|title = December 2021 Newsletter|date = December 31, 2021|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
 +
|-
 +
| 2022 || {{dts|May 30}} || Publication || Eliezer Yudkowsky publishes "Six Dimensions of Operational Adequacy in AGI Projects" on LessWrong. The post sparks some discussion among AI safety researchers but does not establish new standards or practices across broader AGI safety projects.<ref>{{cite web|url=https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects|title=Six Dimensions of Operational Adequacy in AGI Projects|date=May 30, 2022|accessdate=September 5, 2024|publisher=LessWrong}}</ref>
 
|-
 
|-
| 2021 || {{dts|May 13}} || Financial || MIRI announces two major donations to it: $15,592,829 in MakerDAO (MKR) from an anonymous donor with a restriction to spend a maximum of $2.5 million per year till 2024, and the remaining funds available in 2025, and 1050 ETH from Vitalik Buterin, worth $4,378,159.<ref>{{cite web|url = https://intelligence.org/2021/05/13/two-major-donations/|title = Our all-time largest donation, and major crypto support from Vitalik Buterin|author = Colm Ó Riain|date = May 13, 2021|accessdate = May 31, 2021}}</ref>
+
| 2022 || {{dts|June 5}} || Publication || Eliezer Yudkowsky's article "AGI Ruin: A List of Lethalities" is published on LessWrong. The post receives significant attention within the alignment community and reiterates Yudkowsky’s longstanding concerns about catastrophic AGI risks. It sparks debate, but the influence is largely confined to existing followers rather than drawing in broader public discourse.<ref>{{cite web|url=https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities|title=AGI Ruin: A List of Lethalities|date=June 5, 2022|accessdate=September 5, 2024|publisher=LessWrong}}</ref>
 
|-
 
|-
| 2021 || {{dts|May 23}} || || In a talk, MIRI researcher Scott Garrabrant describes "finite factored sets" and uses it to introduce an "alternative to the Pearlian paradigm."<ref>{{cite web|url = https://intelligence.org/2021/05/23/finite-factored-sets/|title = Finite Factored Sets|last = Garrabrant|first = Scott|date = May 23, 2021|accessdate = May 31, 2021|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2022 || {{dts|April 25}} || Publication || The article "Visible Thoughts Project and Bounty Announcement" is republished on LessWrong. Despite the sizable financial incentives offered, participation in the project remains low, and MIRI struggles to generate the expected level of interest and meaningful output.<ref>{{cite web|url=https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement|title=Visible Thoughts Project and Bounty Announcement|date=April 25, 2023|accessdate=September 6, 2024|publisher=LessWrong}}</ref>
 
|-
 
|-
| 2021 || {{dts|July 1}} || || An update is added to Rob Bensinger's May 8 post about MIRI moving from the San Francisco Bay Area. The update links to a comment by MIRI board member Blake Borgeson, who had been tasked with leading/coordinating MIRI's relocation decision process. The update says that for now, MIRI has decided against moving from its current location, citing uncertainty about long-term strategy and trajectory. However, MIRI will show more flexibility in terms of its staff working remotely.<ref name=miri-move-from-bay-area/>
+
| 2022 || {{dts|July}} || Strategy || MIRI pauses its newsletter and public communications to refine internal strategies, an indication of both internal challenges and an effort to recalibrate its approach amid a rapidly evolving AI landscape.<ref>{{cite web|url = https://intelligence.org/2022/07/30/july-2022-newsletter/|title = July 2022 Newsletter|date = July 30, 2022|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2021 || {{dts|November 15}} || || In the period around this date, several private conversations between MIRI people (Eliezer Yudkowsky, Nate Soares, and Rob Bensinger) and others in the AI safety community (Richard Ngo, Jaan Tallinn, Paul Christiano, Ajeya Cotra, Beth Barnes, Carl Shulman, Holden Karnofsky, and Rohin Shah) are published to the Alignment Forum and cross-posted to LessWrong; some are also cross-posted to the Effective Altruism Forum. On November 15, Rob Bensinger creates a sequence "Late 2021 MIRI Conversations" on the Alignment Forum for these posts.<ref>{{cite web|url = https://www.alignmentforum.org/s/n945eovrA3oDueqtq|title = Late 2021 MIRI Conversations|last = Bensinger|first = Rob|date = November 15, 2021|accessdate = December 1, 2021|publisher = Alignment Forum}}</ref>
+
| 2022 || {{dts|December 1}} || Publication || On behalf of his MIRI colleagues, Rob Bensinger publishes a blog post challenging organizations such as Anthropic and DeepMind to publicly write up their alignment plans. The challenge generates a mixed response, with some critiques of OpenAI’s plans emerging, but it does not spur any major public commitment from these organizations.
 
|-
 
|-
| 2021 || {{dts|November 29}} || || MIRI announces on the Alignment Forum that it is looking for assistance with getting datasets for its Visible Thoughts Project. The hypothesis is described as follows: "Language models can be made more understandable (and perhaps also more capable, though this is not the goal) by training them to produce visible thoughts."<ref>{{cite web|url = https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement|title = Visible Thoughts Project and Bounty Announcement|last = Soares|first = Nate|date = November 29, 2021|accessdate = December 2, 2021}}</ref>
+
| 2023 || {{dts|February 20}} || Publication || Eliezer Yudkowsky appears on the Bankless podcast for an interview lasting a little under two hours, in which he shares his pessimistic views on the likelihood of catastrophe from AGI with hosts who are not deeply involved in AI safety.<ref>{{cite web|url = https://www.youtube.com/watch?v=gA1sNLL6yg4|title = 159 - We’re All Gonna Die with Eliezer Yudkowsky|date = February 20, 2023|accessdate = April 14, 2024|publisher = YouTube}}</ref> He also mentions that he is taking a sabbatical, citing burnout and his view that doom is all but inevitable. He discusses the possibility of working with other organizations such as Anthropic, Conjecture, or Redwood Research, noting that Redwood Research is "small" but that he trusts them and that they can also focus on one stream. A full transcript is published to LessWrong and the Alignment Forum a few days later.<ref>{{cite web|url = https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-transcript-eliezer-yudkowsky-on-the-bankless-podcast|title = Full Transcript: Eliezer Yudkowsky on the Bankless podcast|date = February 23, 2023|accessdate = April 14, 2024|publisher = LessWrong}}</ref> The podcast gets a lot of traction, eliciting several reactions, and leads to a follow-up Q&A on Twitter Spaces.<ref>{{cite web|url = https://www.lesswrong.com/posts/jfYnq8pKLpKLwaRGN/transcript-yudkowsky-on-bankless-follow-up-q-and-a|title = Transcript: Yudkowsky on Bankless follow-up Q&A|date = February 27, 2023|accessdate = April 14, 2024|publisher = LessWrong}}</ref> A month later, a lengthy point-by-point response by alignment researcher Quintin Pope is published to LessWrong, attracting over 200 comments.<ref>{{cite web|url = https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky|title = My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"|date = March 20, 2023|accessdate = May 17, 2024|last = Pope|first = Quintin|publisher = LessWrong}}</ref>
 
|-
 
|-
| 2021 || {{dts|December}} || Financial || MIRI offered $200,000 to build an AI-dungeon-style writing dataset annotated with thoughts, and an additional $1,000,000 for scaling it 10x.
+
| 2023 || {{dts|March 29}} || Publication || An article by Eliezer Yudkowsky in Time Ideas, in response to the FLI Open Letter, argues that pausing AI for six months isn't enough. He says that what is needed won't happen in practice, but spells it out anyway: "The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. [...] Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. [...] Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool.  [...] Shut it all down."<ref>{{cite web|url = https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/|title = Pausing AI Developments Isn’t Enough. We Need to Shut it All Down|last = Yudkowsky|first = Eliezer|date = March 29, 2023|accessdate = May 17, 2024|publisher = Time Magazine}}</ref> The post is shared to LessWrong where it receives over 250 comments.<ref>{{cite web|url = https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all-down|title = Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky|date = March 29, 2023|accessdate = May 17, 2024}}</ref>
 
|-
 
|-
| 2022 || {{dts|July}} || || MIRI released three major posts: "AGI Ruin: A List of Lethalities," "A central AI alignment problem," and "Six Dimensions of Operational Adequacy in AGI Projects."
+
| 2023 || {{dts|April}} || Leadership || MIRI undergoes a significant leadership change, with Malo Bourgon appointed as CEO, Nate Soares transitioning to President, Alex Vermeer becoming COO, and Eliezer Yudkowsky assuming the role of Chair of the Board. This restructuring is seen by some as an attempt to address stagnation and operational challenges.<ref>{{cite web|url = https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/|title = Announcing MIRI's New CEO and Leadership Team|date = October 10, 2023|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2022 || {{dts|July}} || Strategy || MIRI temporarily paused its newsletter and public communications to focus on refining internal strategies.
+
| 2022 || {{dts|June 19}} || Publication || Paul Christiano publishes an article titled "Where I Agree and Disagree with Eliezer" on the AI Alignment Forum, outlining where he agrees and disagrees with Eliezer Yudkowsky's views on AI risk. The article is well received within AI alignment circles and generates a productive debate, but does not directly influence the wider public narrative around AI safety.<ref>{{cite web|url=https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer|title=Where I Agree and Disagree with Eliezer|date=June 19, 2022|accessdate=September 5, 2024|publisher=LessWrong}}</ref>
 
|-
 
|-
| 2023 || {{dts|July}} || Advocacy || Eliezer Yudkowsky advocated for an indefinite worldwide moratorium on advanced AI training.
+
| 2024 || {{dts|January 14}} || Strategy || MIRI publishes a comprehensive update on its mission and strategy for 2024. The update describes a major shift in focus toward broad communications and policy work, alongside MIRI's continuing AI alignment research, and emphasizes collaboration. While the update receives positive feedback within existing networks, it does not attract wider attention or lead to notable changes in AI safety practices.<ref>{{cite web|url=https://www.lesswrong.com/posts/q3bJYTB3dGRf5fbD9/miri-2024-mission-and-strategy-update|title=MIRI 2024 Mission and Strategy Update|date=May 14, 2024|accessdate=September 5, 2024|publisher=LessWrong}}</ref><ref>{{cite web|url = https://intelligence.org/2024/04/12/april-2024-newsletter/|title = April 2024 Newsletter|date = April 12, 2024|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2023 || {{dts|April}} || Leadership || Leadership change: Malo Bourgon became CEO, Nate Soares shifted to President, Alex Vermeer stepped up as COO and Eliezer Yudkowsky became the Chair of the Board.<ref>{{cite web|url = https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/|title = Announcing MIRI's New CEO and Leadership Team|date = October 10, 2023|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2024 || {{dts|March 9}} || Publication || An article in Semafor titled "The Risks of Expanding the Definition of AI Safety" discusses concerns raised by Eliezer Yudkowsky about the broadening scope of AI safety. While the article garners attention within specialized AI safety and alignment circles, it does not significantly alter the public narrative around AI governance, reflecting its niche impact.<ref>{{cite web|url=https://www.semafor.com/article/03/08/2024/the-risks-of-expanding-the-definition-of-ai-safety|title=The Risks of Expanding the Definition of AI Safety|date=March 9, 2024|accessdate=September 5, 2024|publisher=Semafor}}</ref>
 
|-
 
|-
| 2024 || {{dts|April}} || Strategy || MIRI released its 2024 Mission and Strategy update, announcing a major focus shift towards broad communication and policy change. <ref>{{cite web|url = https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/|title = Announcing MIRI's New CEO and Leadership Team|date = October 10, 2023|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2024 || {{dts|April}} || Project || MIRI launches a new research team dedicated to technical AI governance. The team, initially consisting of Lisa Thiergart and Peter Barnett, aims to expand by the end of the year. Early traction is limited, highlighting recruitment challenges and the evolving demands of governance work in a rapidly changing AI landscape.<ref>{{cite web|url=https://intelligence.org/2024/04/12/april-2024-newsletter/|title=April 2024 Newsletter|date=April 12, 2024|accessdate=September 2, 2024|publisher=Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2024 || {{dts|April}} || || MIRI launched a new research team focused on technical AI governance.<ref>{{cite web|url = https://intelligence.org/2023/10/10/announcing-miris-new-ceo-and-leadership-team/|title = Announcing MIRI's New CEO and Leadership Team|date = October 10, 2023|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2024 || {{dts|May}} || Project || The Technical Governance Team at MIRI takes an active role in contributing to AI policy development by submitting responses to multiple key policy bodies. These submissions include the NTIA's request for comment on open-weight AI models, focusing on the implications of making AI model weights publicly available and the potential risks and benefits associated with open-access AI technology.<ref>{{Cite web |url=https://www.regulations.gov/comment/NTIA-2023-0009-0259 |title=NTIA Request for Comment on Open-Weight AI Models |publisher=Regulations.gov |access-date=September 10, 2024}}</ref> They also respond to the United Nations’ request for feedback on the "Governing AI for Humanity" interim report, offering insights on global AI governance frameworks and how they can be structured to prioritize safety, transparency, and ethical considerations.<ref>{{Cite web |url=https://www.un.org/sites/un2.un.org/files/ai_advisory_body_interim_report.pdf |title=Governing AI for Humanity Interim Report |publisher=United Nations |access-date=September 10, 2024}}</ref> Additionally, the team addresses the Office of Management and Budget’s request for information on AI procurement in government, providing recommendations on how AI technologies can be integrated responsibly within government infrastructures.<ref>{{Cite web |url=https://www.federalregister.gov/documents/2024/03/29/2024-06547/request-for-information-responsible-procurement-of-artificial-intelligence-in-government |title=OMB Request for Information on AI Procurement in Government |publisher=Federal Register |access-date=September 10, 2024}}</ref> This proactive engagement highlights MIRI’s strategic involvement in shaping international AI governance and ensuring that safety and ethical standards are maintained in the development and use of AI technologies.<ref>{{cite web|url=https://intelligence.org/2024/05/14/may-2024-newsletter/|title=May 2024 Newsletter|date=May 14, 2024|accessdate=September 5, 2024|publisher=Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2024 || {{dts|May 14}} || Communication || MIRI significantly expanded its communications team, emphasizing public outreach and policy influence.
+
| 2024 || {{dts|May 14}} || Project || MIRI announces the shutdown of the Visible Thoughts Project, which was initiated in November 2021. The project faced several challenges, including evolving ML needs and limited community engagement, which ultimately led to its termination.<ref>{{cite web|url=https://intelligence.org/2024/05/14/may-2024-newsletter/|title=May 2024 Newsletter|date=May 14, 2024|accessdate=September 5, 2024|publisher=Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2024 || {{dts|May 29}} || Project || MIRI shut down the Visible Thoughts Project. <ref>{{cite web|url = https://intelligence.org/2024/05/14/may-2024-newsletter/|title = May 2024 Newsletter|date = May 14, 2024|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
+
| 2024 || {{dts|May 29}} || Publication || MIRI publishes its 2024 Communications Strategy, focusing on halting the development of frontier AI systems worldwide. The strategy aims for direct, unvarnished communication with policymakers and the public. However, the approach avoids grassroots advocacy and receives mixed reactions, with limited evidence of a shift in AI policy or public sentiment.<ref>{{cite web|url=https://intelligence.org/2024/05/29/miri-2024-communications-strategy/|title=MIRI 2024 Communications Strategy|date=May 29, 2024|accessdate=September 5, 2024|publisher=Machine Intelligence Research Institute}}</ref>
 
|-
 
|-
| 2024 || {{dts|June}} || Communication || MIRI Communications Manager Gretta Duleba emphasized the need to shut down frontier AI development and install an "off-switch."
+
| 2024 || {{dts|June 7}} || Publication || Rob Bensinger publishes a response to Daniel Kokotajlo's discussion of Aschenbrenner's views on situational awareness in AI. Bensinger critiques Kokotajlo’s interpretation, adding nuance to the debate on AI safety. While the discussion is valuable within the alignment community, it remains niche and does not lead to broader shifts in consensus.<ref>{{cite web|url=https://forum.effectivealtruism.org/posts/RTHFCRLv34cewwMr6/response-to-aschenbrenner-s-situational-awareness|title=Response to Aschenbrenner's Situational Awareness|date=June 7, 2024|accessdate=September 5, 2024|publisher=Effective Altruism Forum}}</ref>
 
|-
 
|-
| 2024 || {{dts|June}} || Research || The Agent Foundations team, including Scott Garrabrant, parted ways with MIRI to continue as independent researchers.
+
| 2024 || {{dts|June}} || Research || The Agent Foundations team, including Scott Garrabrant, departs MIRI to pursue independent work. This signals a shift in focus for MIRI, as they prioritize other areas in response to rapid AI advancements. The departure is seen as an outcome of MIRI reassessing its research priorities amid changing circumstances in the AI field.<ref>{{cite web|url = https://intelligence.org/2024/06/14/june-2024-newsletter/|title = June 2024 Newsletter|date = June 14, 2024|accessdate = September 2, 2024|publisher = Machine Intelligence Research Institute}}</ref>
 
|}
 
|}
  

Latest revision as of 12:20, 12 November 2024

The timeline currently offers focused coverage of the period until June 2024. It is likely to miss important developments outside this period, particularly after it, though it may include a few later events.

This is a timeline of Machine Intelligence Research Institute. Machine Intelligence Research Institute (MIRI) is a nonprofit organization that does work related to AI safety.

Sample questions

This is an experimental section that provides some sample questions for readers, similar to reading questions that might come with a book. Some readers of this timeline might come to the page aimlessly and might not have a good idea of what they want to get out of the page. Having some "interesting" questions can help in reading the page with more purpose and in getting a sense of why the timeline is an important tool to have.

The following are some interesting questions that can be answered by reading this timeline:

  • Which Singularity Summits did MIRI host, and when did they happen? (Sort by the "Event type" column and look at the rows labeled "Conference".)
  • What was MIRI up to for the first ten years of its existence (before Luke Muehlhauser joined, before Holden Karnofsky wrote his critique of the organization)? (Scan the years 2000–2009.)
  • How has MIRI's explicit mission changed over the years? (Sort by the "Event type" column and look at the rows labeled "Mission".)

The following are some interesting questions that are difficult or impossible to answer just by reading the current version of this timeline, but might be possible to answer using a future version of this timeline:

  • When did some big donations to MIRI take place (for instance, the one by Peter Thiel)?
  • Has MIRI "done more things" between 2010–2013 or between 2014–2017? (More information)

Big picture

Time period Development summary More details
1998–2002 Various publications related to creating a superhuman AI During this period, Eliezer Yudkowsky publishes a series of foundational documents about designing superhuman AI. Key works include "Coding a Transhuman AI," "The Plan to Singularity," and "Creating Friendly AI." These writings lay the groundwork for the AI alignment problem. Additionally, the Flare Programming Language project is launched to assist in the creation of a superhuman AI, marking the early technical ambitions of the movement.
2004–2009 Tyler Emerson's tenure as executive director Under Emerson’s leadership, MIRI (then known as the Singularity Institute) experiences growth and increased visibility. Emerson launches the Singularity Summit, a major event that brings together AI researchers, futurists, and thought leaders. MIRI relocates to the San Francisco Bay Area, gaining a strong foothold in the tech industry. During this period, Peter Thiel becomes a key donor and public advocate, lending credibility and significant financial support to the institute.
2006–2009 Modern rationalist community forms This period sees the formation of the modern rationalist community. Eliezer Yudkowsky contributes by founding the websites Overcoming Bias and LessWrong. These platforms become central hubs for discussions on rationality, AI safety, and existential risks. Yudkowsky's Sequences, a comprehensive collection of essays on rationality, are written and gain a wide following, helping shape the philosophy of many within the AI safety and rationalist movements.
2006–2012 The Singularity Summits take place annually The Singularity Summit takes place annually during this period, attracting both prominent thinkers and the general public interested in AI, technology, and futurism. In 2012, the organization changes its name from "Singularity Institute for Artificial Intelligence" to the "Machine Intelligence Research Institute" (MIRI) to better reflect its focus on AI research rather than broader technological futurism. MIRI also sells the Singularity Summit to Singularity University, signaling a shift toward a more focused research agenda.
2009–2012 Michael Vassar's tenure as president Michael Vassar serves as president during this period, continuing to build on the progress made by previous leadership. Vassar focuses on strategic development and positions MIRI within the broader intellectual landscape, further cementing its role as a leader in AI safety research.
2011–2015 Luke Muehlhauser's tenure as executive director Luke Muehlhauser takes over as executive director and is credited with professionalizing the organization and improving donor relations. Under his leadership, MIRI undergoes significant changes, including a name change, a shift in focus from outreach to research, and stronger connections with the Effective Altruism community. Muehlhauser builds relationships with the AI research community, laying the groundwork for future collaborations and funding opportunities.[1][2][3]
2013–2015 Change of focus MIRI shifts its research focus to AI safety and technical math-based research into Friendly AI. During this period, MIRI reduces its public outreach efforts to concentrate on solving fundamental problems in AI safety. It stops hosting major public events like the Singularity Summit and begins focusing almost exclusively on research efforts to address the alignment problem and existential risks from advanced AI systems.
2015–2023 Nate Soares's tenure as executive director Nate Soares, who takes over as executive director in 2015, continues to steer MIRI toward more technical and research-based work on AI safety. Soares expands MIRI’s collaboration with other AI safety organizations and risk researchers. During this time, MIRI receives major funding boosts from cryptocurrency donations and the Open Philanthropy Project in 2017. In 2018, MIRI adopts a "nondisclosed-by-default" policy for much of its research to prevent potential misuse or risks from the dissemination of sensitive AI safety work.
2023–present Leadership transitions at MIRI MIRI undergoes significant leadership changes in 2023. Nate Soares steps down as executive director and transitions to President, focusing on strategic oversight. Malo Bourgon becomes the new CEO, handling day-to-day operations and growth management. Alex Vermeer takes on the role of COO, providing internal support and leadership. The organization continues to prioritize AI safety research and collaborates with other AI safety organizations to address emerging challenges in the field.

Full timeline

Year Month and date Event type Details
1979 September 11 Eliezer Yudkowsky is born.[4]
1996 November 18 Eliezer Yudkowsky writes the first version of "Staring into the Singularity".[5]
1998 Publication The initial version of "Coding a Transhuman AI" (CaTAI) is published.[6]
1999 March 11 The Singularitarian mailing list is launched. The mailing list page notes that although hosted on MIRI's website, the mailing list "should be considered as being controlled by the individual Eliezer Yudkowsky".[7]
1999 September 17 The Singularitarian mailing list is first informed (by Yudkowsky?) of "The Plan to Singularity" (called "Creating the Singularity" at the time).[8]
2000–2003 Eliezer Yudkowsky's "coming of age" (including his "naturalistic awakening," in which he realizes that a superintelligence would not necessarily follow human morality) takes place during this period.[9][10][11]
2000 January 1 Publication "The Plan to Singularity" version 1.0 is written and published by Eliezer Yudkowsky, and posted to the Singularitarian, Extropians, and transhuman mailing lists.[8]
2000 January 1 Publication "The Singularitarian Principles" version 1.0 by Eliezer Yudkowsky is published.[12]
2000 February 6 The first email is sent on SL4 ("Shock Level Four"), a mailing list about transhumanism, superintelligent AI, existential risks, and so on.[13][14]
2000 May 18 Publication "Coding a Transhuman AI" (CaTAI) version 2.0a is "rushed out in time for the Foresight Gathering".[15]
2000 July 27 Mission Machine Intelligence Research Institute is founded as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[16]:3[17]
2000 September 1 Publication Large portions of "The Plan to Singularity" are declared obsolete following the formation of the Singularity Institute and a fundamental shift in AI strategy after the publication of "Coding a Transhuman AI" (CaTAI) version 2.[8] This marks a pivotal moment in the focus of MIRI (then known as the Singularity Institute): earlier discussions about the Singularity give way to a more precise, strategic approach to developing safe, self-improving AI, and the obsolete elements reflect how quickly new insights are reshaping the institute's path.
2000 September 7 Publication Version 2.2.0 of "Coding a Transhuman AI" (CaTAI) is published.[15] CaTAI is a detailed technical document outlining the architecture for creating a transhuman-level artificial intelligence. It covers key ideas on how an AI can be designed to improve itself safely without deviating from its original, human-aligned goals. This text serves as a core theoretical foundation for MIRI's mission, advocating for AI development grounded in ethical and rational decision-making frameworks.
2000 September 14 Project The first Wayback Machine snapshot of MIRI's website is captured, using the singinst.org domain name.[18] The launch of the website signals MIRI’s formal entry into the public discourse on AI safety and existential risks. It becomes a hub for sharing research, ideas, and resources aimed at academics, technologists, and the broader community interested in the ethical implications of advanced AI.
2001 April 8 Project MIRI begins accepting donations after receiving tax-exempt status.[19] Receiving tax-exempt status is a critical milestone for MIRI, allowing it to officially solicit and receive donations from the public. This status helps secure the financial support necessary to expand its research efforts and build a formal research team.
2001 April 18 Publication Version 0.9 of "Creating Friendly AI" is released.[20] This early draft outlines the first comprehensive framework for developing "Friendly AI" — an AI system designed to operate under constraints that ensure its goals remain aligned with human interests. It is an important early step in formalizing the institute’s approach to safe AI development.
2001 June 14 Publication The "SIAI Guidelines on Friendly AI" are published.[21] These guidelines serve as a set of ethical and technical principles meant to guide AI researchers in designing systems that prioritize human well-being. The guidelines represent MIRI’s effort to communicate the necessity of carefully managing AI's development and potential risks.
2001 June 15 Publication Version 1.0 of "Creating Friendly AI" is published.[22] This version is the first full publication of MIRI’s flagship research document. It provides a detailed analysis of how to design AI systems that remain aligned with human values, even as they gain the ability to self-improve. It is considered one of the key early texts in the AI safety field.
2001 July 23 Project MIRI formally launches the development of the Flare programming language under Dmitriy Myshkin.[23] The Flare project is conceived as a way to build a programming language optimized for AI development and safety. Though it is eventually canceled, it shows MIRI’s early commitment to exploring technical approaches to developing safer AI systems.
2001 December 21 Domain MIRI secures the flare.org domain name for its Flare programming language project.[23] This acquisition signifies MIRI's focus on developing tools that assist in the creation of AI, though Flare itself is eventually shelved due to technical challenges and shifting priorities.
2002 March 8 AI Box Experiment The first AI Box experiment conducted by Eliezer Yudkowsky, against Nathan Russell as gatekeeper, takes place. The AI is released.[24] This experiment involves testing whether a hypothetical AI can convince a human "gatekeeper" to let it out of a confined environment — highlighting the persuasive abilities that a sufficiently advanced AI might possess, even when theoretically controlled.
2002 April 7 Publication A draft of "Levels of Organization in General Intelligence" is announced on SL4.[25][26] This paper explores theoretical foundations for creating AI that mimics general human intelligence, contributing to the field’s understanding of how to structure and organize machine learning systems.
2005 January 4 Publication "A Technical Explanation of Technical Explanation" is published.[27] Eliezer Yudkowsky explores the nature of technical explanations, emphasizing how we can communicate complex ideas with clarity and rigor. This work becomes foundational for those studying rationality and AI, offering insights into how we convey and understand deep technical topics. It plays an important role in grounding the theoretical framework behind AI safety research. MIRI announces its release, underlining its importance to their broader research agenda.[28]
2005 Conference MIRI gives presentations on AI and existential risks at Stanford University, the Immortality Institute’s Life Extension Conference, and the Terasem Foundation.[29] These presentations help MIRI broaden the conversation about the risks associated with AI development. By engaging academic audiences at Stanford and futurist communities at the Life Extension Conference, MIRI establishes itself as a critical voice in discussions about how AI can impact humanity’s future. These events also allow MIRI to connect its mission with broader existential concerns, including life extension and the future of human intelligence.
2005 Publication Eliezer Yudkowsky contributes chapters to Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković.[29] Although the book is officially published in 2008, Yudkowsky’s early contributions focus on the potential dangers of advanced AI and global catastrophic risks. His chapters play a vital role in shaping the emerging field of AI safety, providing critical perspectives on how advanced AI could shape or threaten humanity’s future. This collaboration with prominent scholars like Nick Bostrom helps solidify MIRI's reputation within the existential risk community.
2005 February 2 Relocation MIRI relocates from the Atlanta metropolitan area in Georgia to the Bay Area of California.[28] This move is strategic, placing MIRI at the heart of Silicon Valley, where technological advancements are rapidly accelerating. By moving to the Bay Area, MIRI positions itself closer to influential tech companies and research institutions, allowing it to form stronger partnerships and participate more actively in the conversations around AI development and safety. The relocation also signals MIRI’s commitment to influencing the future of AI in a global technology hub.
2005 July 22–24 Conference MIRI sponsors TransVision 2005 in Caracas, Venezuela.[28] TransVision is one of the world’s leading transhumanist conferences, focusing on how emerging technologies, including AI, can impact humanity’s evolution. MIRI’s sponsorship of this event highlights its dedication to transhumanist goals, such as safe AI and human enhancement. The sponsorship also enables MIRI to reach new international audiences, solidifying its role as a global leader in the field of AI safety and existential risk.
2005 August 21 AI Box Experiment Eliezer Yudkowsky conducts the third AI Box experiment, with Carl Shulman as the gatekeeper.[30] This experiment explores the theoretical dangers of an advanced AI persuading a human to release it from confinement. Yudkowsky’s successful manipulation as the AI in this experiment further demonstrates the potential risks posed by highly intelligent systems. The AI Box experiment serves as a thought-provoking exercise in AI safety, illustrating the need for stringent safeguards in future AI development.
2005–2006 December 20, 2005 – February 19, 2006 Financial The 2006 $100,000 Singularity Challenge, led by Peter Thiel, successfully matches donations up to $100,000.[28][31] Peter Thiel’s donation marks the beginning of his significant financial support for MIRI, which continues for many years. The Singularity Challenge helps MIRI raise critical funds for its research, enabling the organization to expand its efforts in AI safety and existential risk mitigation.
2006 January Publication "Twelve Virtues of Rationality" is published.[32] This essay, written by Eliezer Yudkowsky, lays out twelve core principles or virtues meant to guide rational thinkers. It highlights values like curiosity, empiricism, and precision in thinking, which Yudkowsky frames as essential for clear, logical analysis. The publication is relatively short and structured as a set of concise principles, making it an easily digestible guide for those interested in improving their rational thinking skills.
2006 February 13 Staff Peter Thiel joins MIRI’s Board of Advisors.[28] Peter Thiel, the tech entrepreneur and venture capitalist, becomes a part of MIRI’s leadership by joining its Board of Advisors. Thiel’s addition to the board follows his growing interest in existential risks and advanced AI, which aligns with MIRI’s mission. His role primarily involves advising MIRI on its strategic direction and helping the organization secure long-term financial support for its AI safety research.
2006 May 13 Conference The first Singularity Summit takes place at Stanford University.[33][34][35] The Singularity Summit is held as a one-day event at Stanford University and gathers leading scientists, technologists, and thinkers to discuss the rapid pace of technological development and the potential for artificial intelligence to surpass human intelligence. The agenda includes a series of talks and panel discussions, with topics ranging from AI safety to the philosophical implications of superintelligent machines. Attendees include a mix of academics, entrepreneurs, and futurists, marking it as a landmark event for those interested in the technological singularity.
2006 November Project Robin Hanson launches the blog Overcoming Bias.[36] This project is a personal blog started by Robin Hanson, focusing on cognitive biases and rationality. It is a platform for Hanson and guest contributors to write about topics such as human decision-making, bias in everyday life, and how individuals can improve their thinking. Overcoming Bias quickly gains a readership among academics, technologists, and rationality enthusiasts.
2007 May Mission MIRI updates its mission statement to focus on "developing safe, stable, and self-modifying Artificial General Intelligence." This reflects the organization’s shift in focus to ensuring that future AI systems remain aligned with human values.[37]
2007 July Project MIRI launches its outreach blog. The blog serves to engage the public in discussions around AI safety and rationality. It provides a platform for MIRI staff and guest writers to share research updates, existential risk concerns, and general AI news.[29]
2007 August Project MIRI begins its Interview Series, publishing interviews with leading figures in AI, cognitive science, and existential risk. These interviews offer insights into AGI safety and foster connections within the academic community.[29]
2007 September Staff Ben Goertzel becomes Director of Research at MIRI, bringing formal leadership to MIRI’s research agenda. He focuses on advancing research in AGI safety, leveraging his expertise in cognitive architectures.[38]
2007 May 16 Project MIRI publishes its first introductory video on YouTube.[39] The video is created as an introduction to MIRI’s mission and the field of AI safety. It explains the basic concepts of AI risk and outlines MIRI’s role in researching the challenges posed by advanced AI systems. The video is designed to be accessible to a general audience, helping MIRI reach people who might not be familiar with the nuances of AI development.
2007 July 10 Publication The oldest post on MIRI’s blog, "The Power of Intelligence", is published by Eliezer Yudkowsky.[40] This blog post explores the fundamental concept of intelligence and how it shapes the world. It discusses the role of intelligence in achieving goals and solving problems, emphasizing its potential impact on the future. The post serves as an introduction to Yudkowsky’s broader work on AI safety and rationality, marking the start of MIRI’s ongoing blog efforts.
2007 September 8–9 Conference The Singularity Summit 2007 is held in the San Francisco Bay Area.[33][41] The second Singularity Summit takes place over two days and features presentations from leading thinkers in AI and technology. Topics include the future of artificial intelligence, the ethics of AI development, and the technological singularity. The event builds on the success of the previous year’s summit, expanding in both size and scope, and attracting a broader audience from academia and the tech industry.
2008 January Publication "The Simple Truth" is published. This short, fictional story by Eliezer Yudkowsky explains the basic concepts of truth and rationality, illustrating how humans can understand objective reality through evidence and reasoning. It serves as an introduction to epistemology, making complex ideas about knowledge more accessible to a general audience.[42]
2008 March Project MIRI expands its Interview Series, broadening its scope to include a wider range of experts in AI safety, cognitive science, and philosophy of technology. This expansion provides a more comprehensive view of the diverse research efforts and opinions shaping AGI and existential risk discussions.[29]
2008 June Project MIRI launches its summer intern program, engaging young researchers and students in AI safety research. The program allows participants to work with MIRI’s research staff, contributing to ongoing projects and gaining hands-on experience in AGI research. It becomes a key method for developing talent and integrating fresh perspectives.[29]
2008 July Project OpenCog is founded with support from MIRI and Novamente LLC, directed by Ben Goertzel. OpenCog receives additional funding from Google Summer of Code, allowing 11 interns to work on the project in the summer of 2008. The initiative focuses on cognitive architectures and remains central to Goertzel's research efforts at MIRI until 2010.[43][44]
2008 October 25 Conference The Singularity Summit 2008 takes place in San Jose.[45][46]
2008 November–December Outside review The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.[47][48]
2009 Project MIRI launches the Visiting Fellows Program in 2009. This initiative allows individuals from various backgrounds to spend several weeks at MIRI, engaging in collaborative research and contributing to projects related to Friendly AI and rationality. The program becomes a key method of recruitment for future MIRI researchers.[29]
2009 (early) Staff Tyler Emerson, MIRI's executive director since 2004, steps down early in the year. His departure marks a leadership transition that eventually sees Michael Vassar take on a more prominent role within the organization.[49]
2009 (early) Staff Michael Anissimov is hired as Media Director. He served as MIRI's Advocacy Director in previous years; it is unclear whether he briefly left the organization or simply transitioned into the new role.[49]
2009 February Project Eliezer Yudkowsky establishes LessWrong, a community blog dedicated to discussing topics related to rationality, decision theory, and the development of Friendly AI. The site serves as a spiritual successor to his posts on Overcoming Bias and quickly becomes a central hub for Singularity and Effective Altruism communities. It is described as instrumental in MIRI's recruitment efforts, with many participants of MIRI's Visiting Fellows Program having first encountered the organization through LessWrong.[50][49]
2009 February 16 Staff Michael Vassar announces his role as President of MIRI in a blog post titled "Introducing Myself." Vassar, a key figure in the organization's outreach efforts, remains president until 2012, focusing on strategic vision and external partnerships.[51]
2009 April Publication Eliezer Yudkowsky completes the Sequences, a series of blog posts on LessWrong that cover topics ranging from epistemology and rationality to AI safety. These posts are later compiled into the book Rationality: From AI to Zombies.[52]
2009 August 13 Social media MIRI establishes its official Twitter account under the handle @singinst. This move marks the beginning of MIRI's broader efforts to engage with the public through social media channels.[53]
2009 September Staff Amy Willey Labenz begins an internship at MIRI, focusing on administrative and operational tasks. Her attention to detail, particularly in financial oversight, proves critical: in November she identifies discrepancies that reveal a significant case of embezzlement, prompting an internal investigation and helping protect MIRI's financial stability (see the November and December entries below).[54]
2009 October Project MIRI launches The Uncertain Future, a website that allows users to build mathematically rigorous models to predict the impact of future technologies. The project began development in 2008 and is seen as an innovative tool for those interested in exploring the potential trajectories of technological progress.[55][56]
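As a rough illustration of the kind of model The Uncertain Future lets users assemble, the Python sketch below combines two hypothetical scenarios for the arrival of human-level AI and estimates a cumulative probability by Monte Carlo sampling. It is not the tool's actual implementation; the scenario split, probabilities, and date ranges are placeholders chosen purely for illustration.

import random

def sample_agi_year(rng):
    # Hypothetical two-scenario mixture; all numbers are placeholders,
    # not values taken from The Uncertain Future.
    if rng.random() < 0.3:                  # scenario A: fast progress
        return rng.uniform(2030, 2060)
    return rng.uniform(2060, 2150)          # scenario B: slow progress

def probability_before(year, trials=100_000, seed=0):
    # Estimate P(arrival year < year) by Monte Carlo sampling.
    rng = random.Random(seed)
    hits = sum(sample_agi_year(rng) < year for _ in range(trials))
    return hits / trials

print("P(human-level AI before 2070) ~=", round(probability_before(2070), 2))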
2009 October 3–4 Conference The Singularity Summit 2009 takes place in New York, bringing together leading thinkers in technology, AI, and futurism. This annual event, hosted by MIRI, serves as a major platform for discussions about the Singularity and the implications of rapidly advancing technologies.[57][58]
2009 November Financial An embezzlement scandal involving a former contractor is uncovered, resulting in a reported theft of $118,803. The discovery leads to significant internal changes within MIRI and the eventual recovery of some funds through legal action.[59][60]
2009 December Staff Following the embezzlement case, Amy Willey Labenz, who uncovered the theft during her internship, is promoted to Chief Compliance Officer. Her role focuses on strengthening MIRI’s financial and operational compliance.[49][54]
2009 December 11 Influence The third edition of Artificial Intelligence: A Modern Approach, a seminal textbook by Stuart J. Russell and Peter Norvig, is published. In this edition, Friendly AI and Eliezer Yudkowsky are mentioned for the first time, marking an important moment for MIRI's ideas within mainstream AI literature.
2009 December 12 Project MIRI announces that The Uncertain Future has reached beta status. The tool, which allows users to explore scenarios of technological progress, is unveiled on the MIRI blog.[61]
2010 Mission The organization's mission changes to: "To develop the theory and particulars of safe self-improving Artificial Intelligence; to support novel research and foster the creation of a research community focused on safe Artificial General Intelligence; and to otherwise improve the probability of humanity surviving future technological advances."[62] This mission is used in 2011 and 2012 as well.
2010 February 28 Publication The first chapter of Eliezer Yudkowsky's fan fiction Harry Potter and the Methods of Rationality is published. The work is released serially, concluding on March 14, 2015.[63][64] For several of MIRI's larger donors, the fan fiction is their first point of contact with the organization.[65]
2010 April Staff Amy Willey Labenz is promoted to Chief Operating Officer; she was previously the Chief Compliance Officer. From 2010 to 2012 she also serves as the Executive Producer of the Singularity Summits.[54]
2010 June 17 Popular culture Zendegi, a science fiction book by Greg Egan, is published. The book includes a character called Nate Caplan (partly inspired by Eliezer Yudkowsky and Robin Hanson), a website called Overpowering Falsehood dot com (partly inspired by Overcoming Bias and LessWrong), and a Benign Superintelligence Bootstrap Project, inspired by the Singularity Institute's friendly AI project.[66][67][68]
2010 August 14–15 Conference The Singularity Summit 2010 takes place in San Francisco.[69]
2010 December 21 Social media The first post on the MIRI Facebook page is made on this day.[70][71]
2010–2011 December 21, 2010 – January 20, 2011 Financial The Tallinn–Evans $125,000 Singularity Challenge takes place. The Challenge is a fundraiser in which Edwin Evans and Jaan Tallinn match each dollar donated to MIRI up to $125,000.[72][73]
2011 February 4 Project The Uncertain Future is open-sourced.[56]
2011 February Outside review Holden Karnofsky of GiveWell has a conversation with MIRI staff. The conversation reveals the existence of a "Persistent Problems Group" at MIRI, which will supposedly "assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition".[74] On April 30, Karnofsky posts the conversation to the GiveWell mailing list.[75]
2011 April Staff Luke Muehlhauser begins as an intern at MIRI.[76]
2011 May 10 – June 24 Outside review Holden Karnofsky of GiveWell and Jaan Tallinn (with Dario Amodei being present in the initial phone conversation) correspond regarding MIRI's work. The correspondence is posted to the GiveWell mailing list on July 18.[77]
2011 June 24 Domain A Wayback Machine snapshot on this day shows that singularity.org has turned into a GoDaddy.com placeholder.[78] Before this, the domain hosts a blog that is most likely unrelated to MIRI.[79]
2011 July 18 – October 20 Domain At least during this period, the singularity.org domain name redirects to singinst.org/singularityfaq.[79]
2011 September 6 Domain The first Wayback Machine capture of singularityvolunteers.org is from this day.[80] For a time, the site is used to coordinate volunteer efforts.
2011 October 15–16 Conference The Singularity Summit 2011 takes place in New York.[81]
2011 October 17 Social media The Singularity Summit YouTube account, SingularitySummits, is created.[82]
2011 November Staff Luke Muehlhauser is appointed executive director of MIRI.[83]
2011 December 12 Project Luke Muehlhauser announces the creation of Friendly-AI.com, a website introducing the idea of Friendly AI.[84]
2012 Staff Michael Vassar leaves MIRI to found MetaMed, a personalized medical advisory company. Vassar co-founds the company with Skype co-creator Jaan Tallinn and $500,000 in funding from Peter Thiel. MetaMed aims to revolutionize the medical system by applying rational decision-making and advanced data analysis to personalized health solutions. Despite its ambitious goals, MetaMed's services are initially aimed at wealthy clients, offering personalized medical literature reviews and customized health studies. The underlying mission is to signal the power of rationality in complex systems like medicine, even though it may initially serve the privileged.[85]
2011–2012 Opinion In a two-part Q&A series, Luke Muehlhauser, the newly appointed executive director of MIRI, shares his vision and priorities for the organization. He outlines MIRI's evolving strategy to focus more intensively on AI alignment research and less on broad advocacy for the singularity, discusses the challenges of making progress in AI safety research, and stresses the importance of recruiting highly talented researchers to tackle the problem. He emphasizes that MIRI's goal is not to build smarter AI, but to ensure that advanced AI is safe and aligned with human values.[86][87]
2012 February 4 – May 4 Domain During this period, the singularity.org domain redirects to singinst.org, consolidating the organization's online presence under its primary domain as its focus shifts toward AI safety and technical research.[88]
2012 May 8 Progress Report MIRI publishes its April 2012 progress report, which announces the new name of the Center for Applied Rationality (CFAR). Previously known as the "Rationality Group," CFAR adopts its new name as part of a more structured approach to developing rationality training programs and fostering research in rational decision-making. CFAR's establishment is an important step in formalizing the application of rationality tools, which later become a key part of the broader Effective Altruism community.[89]
2012 May 11 Outside Review Holden Karnofsky, co-founder of GiveWell and later Open Philanthropy, publishes "Thoughts on the Singularity Institute (SI)" on LessWrong. In this post, Karnofsky outlines his reasons for not recommending the Singularity Institute (now MIRI) for GiveWell funding, centering on the speculative nature of the institute's research on AI safety, which he believes lacks the empirical grounding necessary for confident recommendations. The widely read critique shapes how the broader Effective Altruism and existential risk communities view MIRI and prompts the organization to clarify and refine its research approach.[90]
2012 August 6 Newsletter MIRI begins publishing monthly newsletters as blog posts, starting with the July 2012 Newsletter. These newsletters provide regular updates on MIRI’s research, events, and organizational developments, and serve as a valuable resource for supporters and stakeholders interested in AI safety. The monthly cadence also marks a more structured communication approach for MIRI, enhancing its transparency and engagement with the community.[91]
2012 October 13–14 Conference The Singularity Summit 2012 takes place in San Francisco, attracting a wide array of speakers and attendees, including leaders in AI, neuroscience, and futurism. Speakers such as Eliezer Yudkowsky and Ray Kurzweil share their visions of the future, discussing topics from AI safety to human enhancement. The Summit is a key event for disseminating ideas about the singularity and fostering discussions about the long-term impact of artificial intelligence on humanity.[92]
2012 November 11–18 Workshop The 1st Workshop on Logic, Probability, and Reflection takes place, bringing together researchers to explore the intersections of these fields with AI alignment and decision theory. These workshops are critical for advancing MIRI’s foundational research on how to develop AI systems that can reason reliably under uncertainty, a key component of ensuring the safety and predictability of future AI systems.[93]
2012 December 6 Singularity Summit Acquisition Singularity University announces that it has acquired the Singularity Summit from MIRI. This acquisition marks the end of MIRI's direct involvement with the summit, a move praised by some, including Joshua Fox, for allowing MIRI to focus more directly on AI safety research. However, Singularity University does not continue the Summit tradition in its original form. The conference's ethos is eventually inherited by other events like EA Global, which carry forward similar themes of long-term thinking and technological foresight.[94][95]
2013 Mission MIRI's mission statement is revised to reflect its evolving focus on AI safety: "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders, and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals." This shift represents a more direct approach to developing safe AI systems, incorporating a broader outreach strategy, and addressing global challenges posed by advanced AI.[96]
2013–2014 Project Conversations Initiative During this period, MIRI engages in a large number of expert interviews. Out of 80 conversations listed as of July 2017, 75 occurred in this time frame (19 in 2013 and 56 in 2014). These conversations involve in-depth discussions on AI safety, strategy, and existential risk with leading thinkers in the field. By mid-2014, MIRI deprioritizes these interviews due to diminishing returns, as noted by executive director Luke Muehlhauser in MIRI’s 2014 review. However, the conversations contribute substantially to shaping AI safety dialogue during these years.[97][98]
2013 January Staff Michael Anissimov leaves MIRI following the acquisition of the Singularity Summit by Singularity University and a major shift in MIRI's public communication strategy. Although no longer employed at MIRI, Anissimov continues to support its mission and contributes as a volunteer. This departure reflects MIRI's pivot away from broader public outreach and its increased focus on research, particularly in AI alignment and decision theory.[99]
2013 January 30 Rebranding MIRI announces its renaming from the Singularity Institute for Artificial Intelligence (SIAI) to the Machine Intelligence Research Institute (MIRI). The name change reflects MIRI's growing focus on machine intelligence and the technical challenges of AI safety, rather than the broader singularity topics associated with its former title. This rebranding helps clarify MIRI's mission to external stakeholders and aligns with its shift toward more technical and research-focused projects.[100]
2013 February 1 Publication MIRI publishes "Facing the Intelligence Explosion" by executive director Luke Muehlhauser. This book provides an accessible introduction to the risks posed by artificial intelligence and highlights the urgent need for AI safety research. It underscores MIRI's mission to address the potentially existential risks that could arise from advanced AI systems, framing the conversation around the control and alignment of AI.[101]
2013 February 11 – February 28 Domain MIRI's new website, intelligence.org, begins operating during this period. The website’s launch marks a new digital presence for the organization, with a cleaner, more professional focus on machine intelligence research and AI safety. Executive director Luke Muehlhauser announces the new site in a blog post, emphasizing the transition away from the Singularity Institute’s prior domain and approach.[102][103]
2013 April 3 Publication "Singularity Hypotheses: A Scientific and Philosophical Assessment" is published by Springer. This collection, which includes contributions from MIRI researchers and research associates, examines the scientific and philosophical issues surrounding the concept of the singularity and the rise of advanced artificial intelligence. The book provides a detailed exploration of the potential trajectories of AI development and its impact on humanity.[104][105]
2013 April 3–24 Workshop MIRI hosts the 2nd Workshop on Logic, Probability, and Reflection, bringing together researchers to advance the development of decision theory, AI alignment, and formal methods for AI reasoning. These workshops form a critical part of MIRI’s strategy for improving foundational theoretical work on AI, which is key for creating safe, reliable AI systems.[93]
2013 April 13 Strategy MIRI publishes a strategic update, outlining plans to shift its focus more heavily toward Friendly AI mathematics and reducing its emphasis on public outreach. This transition is framed as a necessary step to concentrate resources on the technical challenges that will have the most direct impact on AI safety. The organization sees this as a way to prioritize high-value research areas that can contribute to controlling advanced AI.[106]
2014 January (approximate) Financial Jed McCaleb, the creator of Ripple and original founder of Mt. Gox, donates $500,000 worth of XRP to the Machine Intelligence Research Institute (MIRI). This marks a substantial financial contribution to support AI safety research, further emphasizing the growing interest in AI development from individuals in the cryptocurrency space. McCaleb's involvement highlights the intersection of cryptocurrency and AI safety, as both fields focus on technological innovation with significant societal impacts.[107]
2014 January 16 Outside Review MIRI staff meet with Holden Karnofsky, co-founder of GiveWell, for a strategic conversation about existential risks and AI safety. The discussion focuses on MIRI’s approach to managing existential risk, exploring potential avenues for collaboration between MIRI and other organizations involved in AI safety. This meeting is part of MIRI's ongoing effort to engage with influential figures in the effective altruism and philanthropic communities to advance AI safety research.[108]
2014 February 1 Publication MIRI publishes Stuart Armstrong's influential book "Smarter Than Us: The Rise of Machine Intelligence". The book explores the challenges humanity may face with the rise of intelligent machines and serves as an introduction to AI alignment issues for a broader audience. Armstrong, a research associate at MIRI, examines the potential risks of advanced AI systems, making this book a key piece of literature in the AI safety discourse.[109]
2014 March–May Influence The Future of Life Institute (FLI) is co-founded by Max Tegmark, Jaan Tallinn, Meia Chita-Tegmark, and Anthony Aguirre, with support from MIRI. FLI is an existential risk research and outreach organization focused on ensuring the benefits of AI are shared by humanity. MIRI's influence is notable, as Tallinn, a co-founder of FLI and the Cambridge Centre for the Study of Existential Risk (CSER), cites MIRI as a key source for his views on AI risk. This marks a major expansion in global efforts to address the long-term societal impacts of AI, with MIRI playing a pivotal role in the formation of FLI.[110]
2014 March 12–13 Staff MIRI announces the hiring of several new researchers, including Nate Soares, who would later become MIRI's executive director in 2015. This marks a key moment of growth for the institute as it expands its research team. MIRI also hosts an Expansion Party to introduce the new hires to local supporters, underscoring the organization's increased visibility and capacity to take on more ambitious AI safety projects.[111][112][113]
2014 May 3–11 Workshop MIRI hosts the 7th Workshop on Logic, Probability, and Reflection. This workshop focuses on advancing decision theory and addressing problems related to AI's reasoning under uncertainty. Attendees include top researchers in AI safety and decision theory, working on foundational questions crucial for creating safe AI systems.[93]
2014 July–September Influence Nick Bostrom's seminal work "Superintelligence: Paths, Dangers, Strategies" is published. Bostrom, a research advisor to MIRI, draws heavily on AI safety concerns shared by MIRI researchers, and MIRI plays a significant role in shaping the discussions that lead to the book's publication. "Superintelligence" becomes a widely recognized book in AI alignment, contributing to global discourse on managing the risks of powerful AI systems.[114]
2014 July 4 Project Earliest evidence of the existence of AI Impacts, an initiative focused on analyzing the future societal impacts of AI, appears. Katja Grace plays a key role in launching the project, which seeks to provide rigorous research on AI timelines and impact assessments.[115]
2014 August Project The AI Impacts website officially launches. This project, led by Paul Christiano and Katja Grace, provides detailed analyses and forecasts regarding the development of AI. The website becomes a hub for discussing the potential long-term future of AI and its impacts on society, solidifying AI Impacts as a key contributor to the existential risk community.[116]
2014 November 4 Project The Intelligent Agent Foundations Forum, run by MIRI, is launched. This forum serves as a space for discussing cutting-edge research on agent foundations and decision theory, crucial components in the development of safe AI systems. The forum attracts researchers from a variety of fields to contribute to the growing body of work on AI safety and alignment.[117]
2015 January Project AI Impacts, a project focused on assessing the potential long-term impacts of artificial intelligence, rolls out a redesigned website. The project aims to provide accessible, well-researched information on AI risks, timelines, and governance issues. The site's overhaul is part of a broader MIRI effort to improve public engagement and the dissemination of knowledge about AI's potential dangers.[118]
2015 January 2–5 Conference The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. Organized by the Future of Life Institute, the conference attracts influential researchers such as Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares from MIRI, as well as top AI academics. The event becomes pivotal in rallying attention to AI risks; Soares later describes it as a "turning point" where academia began seriously addressing AI existential risk. Leading thinkers at the conference discuss how AI, if left unchecked, could pose existential threats to humanity.[119][120]
2015 March 11 Influence Rationality: From AI to Zombies is published. This book, a compilation of Eliezer Yudkowsky's influential blog series "The Sequences" from the LessWrong community, explores rational thinking and decision-making, blending topics from AI development to human psychology. It becomes a key philosophical text within the Effective Altruism and rationality movements, widely regarded as a comprehensive introduction to AI alignment challenges and human cognitive biases.[52][121]
2015 May 4–6 Workshop The 1st Introductory Workshop on Logical Decision Theory takes place. This workshop is designed to educate researchers on decision theories that take into account AI's capacity to predict and influence decisions, aiming to tackle problems like Newcomb's paradox in AI alignment.[93]
2015 May 6 Staff Luke Muehlhauser announces his resignation as MIRI’s executive director, moving to the Open Philanthropy Project as a research analyst. In his farewell post, Muehlhauser expresses confidence in his successor, Nate Soares, who has been a key researcher at MIRI. Soares, known for his work on decision theory and AI safety, takes over as MIRI's executive director.[122]
2015 May 13–19 Conference In collaboration with the Centre for the Study of Existential Risk (CSER), MIRI co-organizes the Self-prediction in Decision Theory and Artificial Intelligence Conference. The event brings together experts to explore the implications of self-prediction in decision theory, which has major relevance to AI systems’ decision-making capabilities and how they predict their own actions.[123]
2015 May 29–31 Workshop The 1st Introductory Workshop on Logical Uncertainty is held, focusing on how AI systems deal with uncertainty in logic-based reasoning, a fundamental challenge in ensuring that AI systems can make reliable decisions in uncertain environments.[93]
2015 June 3–4 Staff Nate Soares officially begins as the executive director of MIRI. Soares, who previously worked on decision theory and AI alignment, steps into this leadership role with the goal of pushing MIRI’s research agenda towards solving AI’s long-term safety challenges.[124]
2015 June 11 AMA Nate Soares, MIRI's executive director, hosts an "ask me anything" (AMA) on the Effective Altruism Forum, engaging the community on topics ranging from AI alignment to his vision for MIRI’s future.[125]
2015 June 12–14 Workshop The 2nd Introductory Workshop on Logical Decision Theory takes place, building on the first workshop’s success by providing advanced tutorials on decision-making theories relevant to AI alignment.[93]
2015 June 26–28 Workshop The 1st Introductory Workshop on Vingean Reflection is held, focusing on how an AI system can reflect on and modify its own decision-making procedures in a safe and predictable manner.[93]
2015 July 7–26 Project The MIRI Summer Fellows Program 2015, run by the Center for Applied Rationality (CFAR), is held. This fellowship aims to cultivate new talent for MIRI’s AI safety research, and it is described as "relatively successful" at recruiting new staff members.[126][127]
2015 August 7–9 Workshop The 2nd Introductory Workshop on Logical Uncertainty takes place, continuing the discussion on how AI systems can make reliable decisions under uncertainty, which is critical to ensuring AI safety in complex, real-world environments.[93]
2015 August 28–30 Workshop The 3rd Introductory Workshop on Logical Decision Theory is held, focusing on refining decision-making frameworks for AI systems. Attendees delve deeper into logical decision theories, specifically how AI agents can navigate decision-making scenarios with incomplete information, ensuring robustness and safety.[93]
2015 September 26 External Review The Effective Altruism Wiki page on MIRI is created. This page provides an overview of the Machine Intelligence Research Institute's work and its mission to reduce existential risks associated with artificial intelligence, making its projects and goals more accessible to the Effective Altruism community.[128]
2016 Publication MIRI commissions Eliezer Yudkowsky to produce AI alignment content for Arbital, a platform that seeks to explain complex technical concepts in a way accessible to a broader audience. The goal of the project is to provide more detailed educational materials on AI safety and alignment, addressing a range of AI risk topics. Arbital is envisioned as a way to break down difficult technical topics related to AI risk for readers of all levels.[129][130]
2016 March 30 Staff MIRI announces the promotion of two key staff members. Malo Bourgon, who had been serving as a program management analyst, steps into the role of Chief Operating Officer (COO). Additionally, Rob Bensinger, previously an outreach coordinator, becomes the Research Communications Manager. This internal reshuffle signals a strengthening of MIRI’s operational and research communications capacities as it expands its AI alignment work.[131]
2016 April 1–3 Workshop The Self-Reference, Type Theory, and Formal Verification Workshop takes place. The workshop focuses on advancing formal methods in AI, particularly on how self-referential AI systems can be verified to behave in alignment with human values. Type theory and formal verification are essential areas in AI safety, helping ensure that AI systems can reason about their own decisions safely.[93]
2016 May 6 (talk), December 28 (transcript release) Publication In May 2016, Eliezer Yudkowsky gives a talk titled "AI Alignment: Why It’s Hard, and Where to Start" at Stanford University. Yudkowsky discusses the technical difficulties in aligning AI systems with human values, drawing attention to the challenges involved in controlling advanced AI systems. An edited version of this transcript is released on the MIRI blog in December 2016, where it becomes a key reference for researchers working on AI safety.[132][133]
2016 May 28–29 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Transparency takes place. This event focuses on the importance of transparency in AI systems, particularly how to ensure that advanced AI systems are interpretable and understandable by humans, which is critical to ensuring safe AI alignment.[93]
2016 June 4–5 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Robustness and Error-Tolerance takes place. The focus of this workshop is on developing AI systems that are robust to errors and can tolerate uncertainty, further contributing to safe deployment of AI systems in unpredictable real-world environments.[93]
2016 June 11–12 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Preference Specification is held. The workshop deals with the critical task of correctly specifying human preferences in AI systems, an essential aspect of AI alignment to ensure that the systems act in ways that reflect human values.[93]
2016 June 17 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI) Workshop on Agent Models and Multi-Agent Dilemmas takes place, focusing on how AI systems can interact safely in multi-agent scenarios where the goals of different systems might conflict. This research is crucial for building AI systems that can cooperate or avoid harmful competition.[93]
2016 July 27 Publication MIRI announces its new technical agenda with the release of the paper "Alignment for Advanced Machine Learning Systems". The paper outlines the necessary steps for ensuring machine learning systems are aligned with human values as they become increasingly powerful. This agenda sets the course for MIRI’s future research efforts on machine learning and AI safety.[134]
2016 August Financial Open Philanthropy awards MIRI a $500,000 grant for general support. Despite reservations about MIRI’s technical research, the grant is awarded to support MIRI’s broader mission of reducing AI-related risks. This grant illustrates Open Philanthropy’s acknowledgment of the importance of MIRI’s work on AI alignment, despite differing opinions on technical approaches.[135]
2016 August 12–14 Workshop The 8th Workshop on Logic, Probability, and Reflection is held, continuing MIRI’s tradition of exploring how logic and probability can be used to reason about self-reflection in AI systems. This is a critical aspect of building AI systems capable of safely understanding their own behavior and decision-making processes.[93]
2016 August 26–28 Workshop The 1st Workshop on Machine Learning and AI Safety is held. This inaugural event focuses on the emerging field of AI safety in the context of machine learning, emphasizing the need for alignment in rapidly evolving machine learning models.[93]
2016 September 12 Publication MIRI releases a landmark paper titled "Logical Induction" by Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor. The paper presents a novel approach to reasoning under uncertainty, solving a long-standing problem in logic and opening new possibilities for ensuring safe AI reasoning. The paper is widely praised, with some calling it a "major breakthrough" in formal AI research.[136][137]
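In rough, informal terms (a paraphrase of the paper's abstract rather than its exact formalism), the central criterion says that a sequence of belief states \(\overline{\mathbb{P}} = (\mathbb{P}_1, \mathbb{P}_2, \ldots)\) over logical sentences is a logical inductor if no efficiently computable trading strategy can exploit it, where exploiting means accumulating unboundedly large gains while risking only a bounded loss:

\[ \text{there is no polynomial-time trader } T \text{ such that } \inf_n \operatorname{Worth}_n(T, \overline{\mathbb{P}}) > -\infty \ \text{ and } \ \sup_n \operatorname{Worth}_n(T, \overline{\mathbb{P}}) = +\infty. \]

Here \(\operatorname{Worth}_n\) stands informally for the trader's net holdings after the first \(n\) market days; the paper's actual definitions are more careful, valuing holdings relative to a deductive process.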
2016 October 12 AMA MIRI hosts an "Ask Me Anything" (AMA) session on the Effective Altruism Forum, giving the community an opportunity to ask questions about MIRI's work, AI alignment, and related technical research. Rob Bensinger, Nate Soares, and other MIRI staff participate to discuss ongoing projects and research approaches in AI alignment and safety.[138]
2016 October 21–23 Workshop The 2nd Workshop on Machine Learning and AI Safety is held. The event continues from the first workshop earlier in the year, with a greater focus on understanding how to make machine learning systems safer as they grow in complexity. Topics discussed include adversarial training, model interpretability, and alignment techniques for machine learning models.[93]
2016 November 11–13 Workshop The 9th Workshop on Logic, Probability, and Reflection is held. This workshop delves further into how AI systems can use logical reasoning to improve decision-making under uncertainty. This remains a cornerstone of MIRI's approach to AI safety, where the focus is on creating systems that can handle complex real-world scenarios with logical consistency.[93]
2016 December Financial Open Philanthropy awards a $32,000 grant to AI Impacts, a project that aims to understand and evaluate the long-term risks of advanced artificial intelligence. The grant supports AI Impacts’ research and its efforts to provide clearer timelines and risk assessments of AI development.[139]
2016 December 1–3 Workshop The 3rd Workshop on Machine Learning and AI Safety is held, capping off a year of significant progress in AI safety research. This workshop provides an opportunity for researchers to reflect on the advancements made throughout the year and to identify new challenges for machine learning systems as AI capabilities expand.[93]
2017 March 25–26 Workshop The Workshop on Agent Foundations and AI Safety takes place. This workshop focuses on exploring foundational questions in AI safety, particularly the design of highly reliable agents that can reason under uncertainty and avoid catastrophic behaviors. Discussions center on robust agent design, decision theory, and safe AI deployment strategies.[93]
2017 April 1–2 Workshop The 4th Workshop on Machine Learning and AI Safety takes place, continuing to build upon previous workshops' discussions on ensuring machine learning models are aligned with human values. Topics include improving adversarial robustness, preventing unintended consequences from AI systems, and safe reinforcement learning. The goal is to ensure that as AI systems become more complex, they do not act unpredictably.[93]
2017 May 24 Publication The influential paper "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on arXiv. This paper surveys AI experts to estimate when AI systems will outperform humans in various tasks. Two researchers from AI Impacts are co-authors. The paper gains widespread media attention, with over twenty news outlets discussing its implications for AI timelines and the potential risks associated with AI surpassing human intelligence.[140][141]
2017 July 4 Strategy MIRI announces a strategic shift, stating that it will be scaling back efforts on its "Alignment for Advanced Machine Learning Systems" agenda. This is due to the departure of key researchers Patrick LaVictoire and Jessica Taylor, and Andrew Critch taking leave. As a result, MIRI refocuses its research priorities while maintaining its commitment to AI safety.[142]
2017 July 7 Outside Review Daniel Dewey, a program officer at Open Philanthropy, publishes a post titled "My Current Thoughts on MIRI's Highly Reliable Agent Design Work". Dewey presents a critique of MIRI's approach to AI safety, arguing that while highly reliable agent design is an important area of research, other approaches such as learning to reason from humans may offer more promising paths to AI alignment. His post provides valuable insight into ongoing debates about AI safety strategies.[143]
2017 July 14 Outside Review The timelines wiki page on MIRI is publicly circulated. This wiki page documents the historical developments of MIRI's work, making it a valuable resource for understanding the evolution of AI safety research at the institute.
2017 October 13 Publication The paper "Functional Decision Theory: A New Theory of Instrumental Rationality" by Eliezer Yudkowsky and Nate Soares is posted on arXiv. This paper introduces Functional Decision Theory (FDT), a new framework for AI decision-making that differs from traditional decision theories. The authors argue that FDT offers better solutions to certain types of decision problems and could lead to safer AI systems. This paper marks a significant contribution to AI alignment research.[144][145]
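As a small worked illustration of the kind of problem where decision theories diverge, consider Newcomb's problem with the conventional payoffs (an opaque box containing $1,000,000 if one-boxing was predicted, plus a transparent box always containing $1,000) and an assumed predictor accuracy of 0.99; these numbers are the standard illustrative ones, not figures from the paper:

\[ \mathbb{E}[\text{one-box}] = 0.99 \times 1{,}000{,}000 = 990{,}000, \qquad \mathbb{E}[\text{two-box}] = 0.01 \times 1{,}000{,}000 + 1{,}000 = 11{,}000. \]

Causal decision theory nonetheless recommends two-boxing, since the boxes' contents are already fixed at the time of choice, whereas FDT recommends one-boxing because it treats the agent's decision procedure and the predictor's model of that procedure as subjunctively linked.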
2017 October 13 Publication Eliezer Yudkowsky publishes the blog post "There’s No Fire Alarm for Artificial General Intelligence" on the MIRI blog and on the newly relaunched LessWrong platform. In this post, Yudkowsky argues that there will be no clear "warning" or fire alarm to signal the arrival of AGI, making it crucial to prepare for AGI's development ahead of time. This post sparks significant discussion in the AI safety community.[146][147]
2017 October Financial Open Philanthropy awards MIRI a $3.75 million grant over three years ($1.25 million per year). The decision to award the grant is partly due to the positive reception of MIRI's "Logical Induction" paper, as well as the increased number of grants Open Philanthropy had made in the area of AI safety, allowing them to provide support to MIRI without it appearing as an outsized endorsement of one approach. This grant is a major financial boost for MIRI, enabling them to continue their work on AI safety and alignment.[148][149]
2017 November 16 Publication Eliezer Yudkowsky's sequence/book Inadequate Equilibria is fully published. The book was published chapter-by-chapter on LessWrong 2.0 and the Effective Altruism Forum starting October 28.[150][151][152] The book is reviewed on multiple blogs including Slate Star Codex (Scott Alexander),[153] Shtetl-Optimized (Scott Aaronson),[154] and Overcoming Bias (Robin Hanson).[155] The book outlines Yudkowsky's approach to epistemology, covering topics such as whether to trust expert consensus and whether one can expect to do better than average.
2017 November 25, November 26 Publication A two-part series "Security Mindset and Ordinary Paranoia" and "Security Mindset and the Logistic Success Curve" by Eliezer Yudkowsky is published. The series uses the analogy of "security mindset" to highlight the importance and non-intuitiveness of AI safety. This is based on Eliezer Yudkowsky's 2016 talk "AI Alignment: Why It’s Hard, and Where to Start."[156][157]
2017 December 1 Financial MIRI's 2017 fundraiser begins. The announcement post describes MIRI's fundraising targets, recent work at MIRI (including recent hires), and MIRI's strategic background (which gives a high-level overview of how MIRI's work relates to long-term outcomes).[158] The fundraiser would conclude with $2.5 million raised from over 300 distinct donors. The largest donation would be from Vitalik Buterin ($763,970 worth of Ethereum).[159]
2018 February Workshop MIRI and the Center for Applied Rationality (CFAR) conduct the first AI Risk for Computer Scientists (AIRCS) workshop. This would be the first of several AIRCS workshops, with seven more in 2018 and many more in 2019.[160] The page about AIRCS says: "The material at the workshop is a mixture of human rationality content that’s loosely similar to some CFAR material, and a variety of topics related to AI risk, including thinking about forecasting, different people’s ideas of where the technical problems are, and various potential paths for research."[161]
2018 October 29 Project The launch of the AI Alignment Forum (often abbreviated to just "Alignment Forum") is announced on the MIRI blog. The Alignment Forum is built and maintained by the LessWrong 2.0 team (which is distinct from MIRI), but with help from MIRI. The Alignment Forum replaces MIRI's existing Intelligent Agent Foundations Forum, and is intended as "a single online hub for alignment researchers to have conversations about all ideas in the field".[162][163] The Alignment Forum had previously launched in beta on July 10, 2018,[164] with the day of launch chosen as the first "AI Alignment Writing Day" for the MIRI Summer Fellows Program (beginning an annual tradition).[165]
2018 October 29 – November 15 Publication The Embedded Agency sequence, by MIRI researchers Abram Demski and Scott Garrabrant, is published on the MIRI blog (text version),[166] on LessWrong 2.0 (illustrated version),[167] and on the Alignment Forum (illustrated version)[168] in serialized installments from October 29 to November 8; on November 15 a full-text version containing the entire sequence is published.[169] The term "embedded agency" is a renaming of an existing concept researched at MIRI, called "naturalized agency".[170]
2018 November 22 Strategy Nate Soares, executive director of MIRI, publishes MIRI's 2018 update post (the post was not written exclusively by Soares; see footnote 1, which begins "This post is an amalgam put together by a variety of MIRI staff"). The post describes new research directions at MIRI (which are not explained in detail due to MIRI's nondisclosure policy); explains the concept of "deconfusion" and why MIRI values it; announces MIRI's "nondisclosed-by-default" policy for most of its research; and gives a recruitment pitch for people to join MIRI.[171]
2018 November 26 Financial MIRI's 2018 fundraiser begins.[160] The fundraiser would conclude on December 31 with $951,817 raised from 348 donors.[172]
2018 August (joining), November 28 (announcement), December 1 (AMA) Staff MIRI announces that prolific Haskell developer Edward Kmett has joined.[173] Kmett participates in an Ask Me Anything (AMA) on Reddit's Haskell subreddit on December 1, 2018. In reply to questions, he clarifies that MIRI's nondisclosure policy will not affect the openness of his work, but notes that, as the main MIRI researcher publishing openly, he will feel more pressure to produce high-quality output, since the whole organization may be judged by it.[174]
2018 December 15 Publication MIRI announces a new edition of Eliezer Yudkowsky's Rationality: From AI to Zombies (i.e. the book version of "the Sequences"). At the time of the announcement, new editions of only two sequences, Map and Territory and How to Actually Change Your Mind, are available.[175][176]
2019 February Financial Open Philanthropy grants MIRI $2,112,500 over two years. The grant amount is decided by the Committee for Effective Altruism Support, which also decides on amounts for grants to 80,000 Hours and the Centre for Effective Altruism at around the same time.[177] The Berkeley Existential Risk Initiative (BERI) grants $600,000 to MIRI at around the same time. MIRI discusses both grants in a blog post.[178]
2019 April 23 Financial The Long-Term Future Fund announces that it is donating $50,000 to MIRI as part of this grant round. Oliver Habryka, the main grant investigator, explains the reasoning in detail, including his general positive impression of MIRI and his thoughts on funding gaps.[179]
2019 December Financial MIRI's 2019 fundraiser raises $601,120 from over 259 donors. A retrospective blog post on the fundraiser, published February 2020, discusses possible reasons the fundraiser raised less money than fundraisers in previous years, particularly 2017. Reasons include: lower cryptocurrency prices causing fewer donations from cryptocurrency donors, nondisclosed-by-default policy making it harder for donors to evaluate research, US tax law changes in 2018 causing more donation-bunching across years, fewer counterfactual matching opportunities, donor perception of reduced marginal value of donations, skew in donations from a few big donors, previous donors moving from earning-to-give to direct work, and donors responding to MIRI's urgent need for funds in previous years by donating in those years and having less to donate now.[180]
2020 February Financial Open Philanthropy grants $7,703,750 to MIRI over two years, with the amount determined by the Committee for Effective Altruism Support (CEAS). Of the funding, $6.24 million comes from Good Ventures (the usual funding source) and $1.46 million comes from Ben Delo, co-founder of BitMEX and a recent Giving Pledge signatory, via a co-funding partnership. Other organizations receiving money based on CEAS recommendations at around the same time are Ought (also focused on AI safety), the Centre for Effective Altruism, and 80,000 Hours.[181] MIRI would blog about the grant in April 2020, calling it "our largest grant to date."[182]
2020 March 2 Financial The Berkeley Existential Risk Initiative (BERI) grants $300,000 to MIRI. Writing about the grant in April 2020, MIRI says: "at the time of our 2019 fundraiser, we expected to receive a grant from BERI in early 2020, and incorporated this into our reserves estimates. However, we predicted the grant size would be $600k; now that we know the final grant amount, that estimate should be $300k lower."[182]
2020 April 14 Financial The Long-Term Future Fund grants $100,000 to MIRI.[183][182]
2020 May Financial The Survival and Flourishing Fund publishes the outcome of its recommendation S-process for the first half of 2020. This includes three grant recommendations to MIRI: $20,000 from SFF, $280,000 from Jaan Tallinn, and $40,000 from Jed McCaleb.[184] The grant from SFF to MIRI would also be included in SFF's grant list with a grant date of May 2020.[185]
2020 October 9 A Facebook post by Rob Bensinger, MIRI's research communications manager, says that MIRI is considering moving its office from its current location in Berkeley, California (in the San Francisco Bay Area) to another location in the United States or Canada. Two areas under active consideration are the Northeastern US (New Hampshire in particular) and the area surrounding Toronto. In response to a question about reasons, Bensinger clarifies that he cannot disclose reasons yet, but that he wanted to announce preemptively so that people can factor this uncertainty into any plans to move or to start new rationalist hubs.[186]
2020 October 22 Publication Scott Garrabrant publishes a blog post titled "Introduction to Cartesian Frames" (cross-posted to LessWrong and the Effective Altruism Forum), the first post in a sequence about Cartesian frames, a new conceptual framework for thinking about agency.[187][188]
2020 November (announcement) Financial Jaan Tallinn grants $543,000 to MIRI as an outcome of the S-process carried out by the Survival and Flourishing Fund for the second half of 2020.[189]
2020 November 30 (announcement) Financial In the November newsletter, MIRI announces that it will not be running a formal fundraiser this year, but that it will continue participating in Giving Tuesday and other matching opportunities.[190]
2020 December 21 Strategy Malo Bourgon publishes MIRI's "2020 Updates and Strategy" blog post. The post discusses MIRI's efforts to relocate staff following the onset of the COVID-19 pandemic, the generally positive result of those changes, and the possible implications for MIRI itself moving out of the Bay Area. It also reports slow progress on the research directions initiated in 2017, leading MIRI to feel the need to change course, and covers the public part of MIRI's progress in other research areas.[191]
2021 May 8 Rob Bensinger publishes a post on LessWrong providing an update on MIRI's ongoing considerations regarding relocation from the San Francisco Bay Area. The post opens a community discussion but does not reach a definitive conclusion or announce next steps, reflecting internal uncertainty about the decision.[192]
2021 May 13 Financial MIRI announces two major donations: $15,592,829 in MakerDAO (MKR) from an anonymous donor, restricted to spending a maximum of $2.5 million per year until 2024, and 1050 ETH from Vitalik Buterin, worth $4,378,159. While this is one of MIRI's largest donations to date, the restrictions on the use of funds limit MIRI's ability to make immediate strategic investments.[193]
2021 May 23 In a talk, MIRI researcher Scott Garrabrant introduces "finite factored sets" as an alternative to the Pearlian paradigm of causal inference. The concept generates some interest in the AI safety community, particularly on LessWrong, but does not significantly shift the broader landscape of causal inference research.[194]
2021 July 1 An update is added to Rob Bensinger's May 8 post about MIRI's potential relocation. The update links to a comment by board member Blake Borgeson, who had been tasked with coordinating MIRI's relocation decision. Ultimately, MIRI decides against relocating for the time being, citing uncertainty about its long-term strategy and taking a conservative approach amid organizational ambiguity.[192]
2021 November 15 Several private conversations between MIRI researchers (Eliezer Yudkowsky, Nate Soares, Rob Bensinger) and others in the AI safety community are published to the Alignment Forum and cross-posted to LessWrong and the Effective Altruism Forum. These conversations, titled "Late 2021 MIRI Conversations," attract moderate attention and foster some debate, particularly within niche AI safety circles, but do not significantly influence broader community consensus.[195]
2021 November 29 MIRI announces on the Alignment Forum that it is seeking assistance with its Visible Thoughts Project. Despite offering bounties for contributions, the project does not attract significant participation, indicating either a lack of interest or challenges in community engagement.[196]
2021 December Financial MIRI offers $200,000 to build an AI Dungeon-style writing dataset annotated with the authors' thoughts, and an additional $1,000,000 for scaling it 10x. Despite these substantial incentives, the Visible Thoughts Project struggles to attract contributors and fails to yield the expected output.[197]
2022 May 30 Publication Eliezer Yudkowsky publishes "Six Dimensions of Operational Adequacy in AGI Projects" on LessWrong. The post sparks some discussion among AI safety researchers but does not establish new standards or practices across broader AGI safety projects.[198]
2022 June 5 Publication Eliezer Yudkowsky's article "AGI Ruin: A List of Lethalities" is published on LessWrong. The post receives significant attention within the alignment community and reiterates Yudkowsky’s longstanding concerns about catastrophic AGI risks. It sparks debate, but the influence is largely confined to existing followers rather than drawing in broader public discourse.[199]
2022 April 25 Publication The article "Visible Thoughts Project and Bounty Announcement" is republished on LessWrong. Despite the sizable financial incentives offered, participation in the project remains low, and MIRI struggles to generate the expected level of interest and meaningful output.[200]
2022 July Strategy MIRI pauses its newsletter and public communications to refine internal strategies, an indication of both internal challenges and an effort to recalibrate its approach amid a rapidly evolving AI landscape.[201]
2022 December 1 Publication On behalf of his MIRI colleagues, Rob Bensinger publishes a blog post challenging organizations such as Anthropic and DeepMind to publicly write up their alignment plans. The challenge generates a mixed response, with some critiques of OpenAI’s plans emerging, but it does not spur any major public commitment from these organizations.[202]
2023 February 20 Publication Eliezer Yudkowsky appears on the Bankless podcast for an interview lasting a little under two hours, in which he shares his pessimistic views about the likelihood of catastrophe from AGI with his hosts, neither of whom is deeply involved in AI safety.[203] He also mentions that he is taking a sabbatical due to burnout and his view that doom is effectively inevitable. He mentions having considered working with other organizations such as Anthropic, Conjecture, or Redwood Research, noting that Redwood Research is "small" but that he trusts them and that they can focus on a single stream of work. A full transcript is published to LessWrong and the Alignment Forum a few days later.[204] The podcast gets a lot of traction, eliciting several reactions, and leads to a follow-up Q&A on Twitter Spaces.[205] A month later, a lengthy point-by-point response by alignment researcher Quintin Pope is published to LessWrong, attracting over 200 comments.[206]
2023 March 29 Publication An article by Eliezer Yudkowsky in Time Ideas, in response to the FLI Open Letter, argues that pausing AI for six months isn't enough. He says that what is needed won't happen in practice, but spells it out anyway: "The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. [...] Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. [...] Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. [...] Shut it all down."[207] The post is shared to LessWrong where it receives over 250 comments.[208]
2023 April Leadership MIRI undergoes a significant leadership change, with Malo Bourgon appointed as CEO, Nate Soares transitioning to President, Alex Vermeer becoming COO, and Eliezer Yudkowsky assuming the role of Chair of the Board. This restructuring is seen by some as an attempt to address stagnation and operational challenges.[209]
2023 June 19 Publication Paul Christiano publishes an article titled "Where I Agree and Disagree with Eliezer" on the AI Alignment Forum, outlining areas of alignment and divergence with Eliezer Yudkowsky's perspectives. The article is well-received within AI alignment circles and generates a productive debate, but does not directly influence the wider public narrative around AI safety.[210]
2024 January 14 Strategy MIRI publishes a comprehensive update on its mission and strategy for 2024. The update reaffirms its approach to AI alignment research and emphasizes collaboration. While the update receives positive feedback within existing networks, it does not attract wider attention or lead to notable changes in AI safety practices.[211][212]
2024 March 9 Publication An article in Semafor titled "The Risks of Expanding the Definition of AI Safety" discusses concerns raised by Eliezer Yudkowsky about the broadening scope of AI safety. While the article garners attention within specialized AI safety and alignment circles, it does not significantly alter the public narrative around AI governance, reflecting its niche impact.[213]
2024 April Project MIRI launches a new research team dedicated to technical AI governance. The team, initially consisting of Lisa Thiergart and Peter Barnett, aims to expand by the end of the year. Early traction is limited, highlighting recruitment challenges and the evolving demands of governance work in a rapidly changing AI landscape.[214]
2024 May Project MIRI's Technical Governance Team takes an active role in AI policy development by submitting responses to several key policy bodies. These include a response to the NTIA's request for comment on open-weight AI models, addressing the implications of making AI model weights publicly available and the potential risks and benefits of open-access AI technology;[215] a response to the United Nations' request for feedback on the "Governing AI for Humanity" interim report, offering input on how global AI governance frameworks can be structured to prioritize safety, transparency, and ethical considerations;[216] and a response to the Office of Management and Budget's request for information on AI procurement in government, providing recommendations on how AI technologies can be integrated responsibly within government infrastructure.[217] This engagement reflects MIRI's strategic involvement in shaping AI governance and its push to have safety and ethical standards maintained in the development and use of AI.[218]
2024 May 14 Project MIRI announces the shutdown of the Visible Thoughts Project, which was initiated in November 2021. The project faced several challenges, including evolving ML needs and limited community engagement, which ultimately led to its termination.[219]
2024 May 29 Publication MIRI publishes their 2024 Communications Strategy, focusing on halting the development of frontier AI systems worldwide. The strategy aims for direct, unvarnished communication with policymakers and the public. However, the approach avoids grassroots advocacy and receives mixed reactions, with limited evidence of a shift in AI policy or public sentiment.[220]
2024 June 7 Publication Rob Bensinger publishes a response to Daniel Kokotajlo's discussion of Aschenbrenner's views on situational awareness in AI. Bensinger critiques Kokotajlo’s interpretation, adding nuance to the debate on AI safety. While the discussion is valuable within the alignment community, it remains niche and does not lead to broader shifts in consensus.[221]
2024 June Research The Agent Foundations team, including Scott Garrabrant, departs MIRI to pursue independent work. The departure reflects a shift in MIRI's research priorities, as the organization reassesses its focus in response to rapid advances in the AI field.[222]

Numerical and visual data

Google Scholar

The following table summarizes the number of Google Scholar results per year that mention "Machine Intelligence Research Institute", as of October 1, 2021.

Year "Machine Intelligence Research Institute"
2000 0
2001 2
2002 0
2003 0
2004 1
2005 0
2006 0
2007 1
2008 0
2009 5
2010 7
2011 6
2012 6
2013 29
2014 61
2015 72
2016 93
2017 128
2018 134
2019 127
2020 138
Machine Intelligence Research Institute gsch.png

Google Trends

The comparative chart below shows Google Trends data for "Machine Intelligence Research Institute" (Research institute) and "Machine Intelligence Research Institute" (Search term), from January 2004 to March 2021, when the screenshot was taken. Interest is also ranked by country and displayed on a world map.[223]

Machine Intelligence Research Institute gt.png

Google Ngram Viewer

The chart below shows Google Ngram Viewer data for Machine Intelligence Research Institute, from 2000 to 2019.[224]

Machine Intelligence Research Institute ngram.png

Wikipedia desktop pageviews across the different names

The image below shows desktop pageviews of the page Machine Intelligence Research Institute and its predecessor pages, "Singularity Institute for Artificial Intelligence" and "Singularity Institute".[225] The change in names occurred on these dates:[226][227]

  • December 23, 2011: Two pages, "Singularity Institute" and "Singularity Institute for Artificial Intelligence", merged into the single page "Singularity Institute for Artificial Intelligence"
  • April 16, 2012: Page moved from "Singularity Institute for Artificial Intelligence" to "Singularity Institute" with the old name redirecting to the new name
  • February 1, 2013: Page moved from "Singularity Institute" to "Machine Intelligence Research Institute" with both old names redirecting to the new name

The red vertical line (for June 2015) represents a change in the method of estimating pageviews; specifically, pageviews by bots and spiders are excluded for months to the right of the line.

MIRI wv.jpeg
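
For readers who want to reproduce the post-2015 portion of this data, the sketch below is a rough illustration only; the chart above was generated with wikipediaviews.org, not with this code. It pulls monthly desktop pageviews by human users for the three page titles from the Wikimedia REST pageviews API and sums them. The date range and contact string are placeholder assumptions, and this API only covers data from July 2015 onward, so it cannot reproduce the earlier part of the chart.

# Minimal sketch (assumptions noted above): sum monthly desktop pageviews across
# the article's three historical titles using the Wikimedia REST pageviews API.
# The "user" agent filter excludes bots and spiders, matching the methodology
# used to the right of the red line in the chart.
import requests

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"
TITLES = [
    "Machine_Intelligence_Research_Institute",
    "Singularity_Institute_for_Artificial_Intelligence",
    "Singularity_Institute",
]
# Wikimedia asks API clients to identify themselves; this contact is a placeholder.
HEADERS = {"User-Agent": "MIRI-timeline-example/0.1 (contact: example@example.org)"}

def monthly_views(title, start="2015070100", end="2021030100"):
    """Return {YYYYMM: views} of desktop pageviews by human users for one title."""
    url = f"{API}/en.wikipedia/desktop/user/{title}/monthly/{start}/{end}"
    resp = requests.get(url, headers=HEADERS, timeout=30)
    if resp.status_code == 404:  # no pageview data for this title in the range
        return {}
    resp.raise_for_status()
    return {item["timestamp"][:6]: item["views"] for item in resp.json()["items"]}

def combined_views(titles):
    """Sum pageviews across all titles (redirect pages accrue views of their own)."""
    totals = {}
    for title in titles:
        for month, views in monthly_views(title).items():
            totals[month] = totals.get(month, 0) + views
    return dict(sorted(totals.items()))

if __name__ == "__main__":
    for month, views in combined_views(TITLES).items():
        print(month, views)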


Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

What the timeline is still missing

  • TODO Figure out how to cover publications
  • TODO mention kurzweil
  • TODO maybe include some of the largest donations (e.g. the XRP/ETH ones, tallinn, thiel)
  • TODO maybe fundraisers
  • TODO look more closely through some AMAs: [1], [2]
  • TODO maybe more info in this SSC post [3]
  • TODO more links at EA Wikia page [4]
  • TODO lots of things from strategy updates, annual reviews, etc. [5]
  • TODO Ben Goertzel talks about his involvement with MIRI [6], also more on opencog
  • TODO giant thread on Ozy's blog [7]
  • NOTE From 2017-07-06: "years that have few events so far: 2003 (one event), 2007 (one event), 2008 (three events), 2010 (three events), 2017 (three events)"
  • TODO possibly include more from the old MIRI volunteers site. Some of the volunteering opportunities like proofreading and promoting MIRI by giving it good web of trust ratings seem to give a good flavor of what MIRI was like, the specific challenges in terms of switching domains, and so on.
  • TODO cover Berkeley Existential Risk Initiative (BERI), kinda a successor to MIRI volunteers?
  • TODO cover launch of Center for Human-Compatible AI
  • TODO not sure how exactly to include this in the timeline, but something about MIRI's changing approach to funding certain types of contract work. e.g. Vipul says "I believe the work I did with Luke would no longer be sponsored by MIRI as their research agenda is now much more narrowly focused on the mathematical parts."
  • TODO who is Tyler Emerson?
  • modal combat and some other domains: [8], [9], [10]
  • https://www.lesswrong.com/posts/yGZHQYqWkLMbXy3z7/video-q-and-a-with-singularity-institute-executive-director
  • https://ea.greaterwrong.com/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates

Timeline update strategy

Some places to look on the MIRI blog:

Also general stuff like big news coverage.

See also

External links

References

  1. Nate Soares (June 3, 2015). "Taking the reins at MIRI". LessWrong. Retrieved July 5, 2017. 
  2. "lukeprog comments on "Thoughts on the Singularity Institute"". LessWrong. May 10, 2012. Retrieved July 15, 2012. 
  3. "Halfwitz comments on "Breaking the vicious cycle"". LessWrong. November 23, 2014. Retrieved August 3, 2017. 
  4. Eliezer S. Yudkowsky (August 31, 2000). "Eliezer, the person". Archived from the original on February 5, 2001. 
  5. "Yudkowsky - Staring into the Singularity 1.2.5". Retrieved June 1, 2017. 
  6. Eliezer S. Yudkowsky. "Coding a Transhuman AI". Retrieved July 5, 2017. 
  7. Eliezer S. Yudkowsky. "Singularitarian mailing list". Retrieved July 5, 2017. The "Singularitarian" mailing list was first launched on Sunday, March 11th, 1999, to assist in the common goal of reaching the Singularity. It will do so by pooling the resources of time, brains, influence, and money available to Singularitarians; by enabling us to draw on the advice and experience of the whole; by bringing together individuals with compatible ideas and complementary resources; and by binding the Singularitarians into a community. 
  8. 8.0 8.1 8.2 Eliezer S. Yudkowsky. "PtS: Version History". Retrieved July 4, 2017. 
  9. "Yudkowsky's Coming of Age". LessWrong. Retrieved January 30, 2018. 
  10. "My Naturalistic Awakening". LessWrong. Retrieved January 30, 2018. 
  11. "jacob_cannell comments on FLI's recommended project grants for AI safety research announced". LessWrong. Retrieved January 30, 2018. 
  12. Eliezer S. Yudkowsky. "Singularitarian Principles 1.0". Retrieved July 5, 2017. 
  13. "SL4: By Date". Retrieved June 1, 2017. 
  14. Eliezer S. Yudkowsky. "SL4 Mailing List". Retrieved June 1, 2017. 
  15. 15.0 15.1 Eliezer S. Yudkowsky. "Coding a Transhuman AI § Version History". Retrieved July 5, 2017. 
  16. "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017. Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999. 
  17. "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017. The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors. 
  18. Eliezer S. Yudkowsky. "Singularity Institute for Artificial Intelligence, Inc.". Retrieved July 4, 2017. 
  19. Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017. April 08, 2001: The Singularity Institute for Artificial Intelligence, Inc. announces that it has received tax-exempt status and is now accepting donations. 
  20. "Singularity Institute for Artificial Intelligence // News // Archive". Retrieved July 13, 2017. 
  21. Singularity Institute for Artificial Intelligence. "SIAI Guidelines on Friendly AI". Retrieved July 13, 2017. 
  22. Eliezer Yudkowsky (2001). "Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures" (PDF). The Singularity Institute. Retrieved July 5, 2017. 
  23. 23.0 23.1 Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017. 
  24. "SL4: By Thread". Retrieved July 1, 2017. 
  25. Eliezer S. Yudkowsky (April 7, 2002). "SL4: PAPER: Levels of Organization in General Intelligence". Retrieved July 5, 2017. 
  26. Singularity Institute for Artificial Intelligence. "Levels of Organization in General Intelligence". Retrieved July 5, 2017. 
  27. "Yudkowsky - Technical Explanation". Retrieved July 5, 2017. Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. 
  28. 28.0 28.1 28.2 28.3 28.4 Singularity Institute. "News of the Singularity Institute for Artificial Intelligence". Retrieved July 4, 2017. 
  29. 29.0 29.1 29.2 29.3 29.4 29.5 29.6 Brandon Reinhart. "SIAI - An Examination". LessWrong. Retrieved June 30, 2017. 
  30. "SL4: By Thread". Retrieved July 1, 2017. 
  31. "The Singularity Institute for Artificial Intelligence - 2006 $100,000 Singularity Challenge". Retrieved July 5, 2017. 
  32. "Twelve Virtues of Rationality". Retrieved July 5, 2017. 
  33. 33.0 33.1 "Singularity Summit". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  34. Dan Farber. "The great Singularity debate". ZDNet. Retrieved June 30, 2017. 
  35. Jerry Pournelle (May 20, 2006). "Chaos Manor Special Reports: The Stanford Singularity Summit". Retrieved June 30, 2017. 
  36. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  37. "Form 990 2007" (PDF). Retrieved July 8, 2017. 
  38. "Our History". Machine Intelligence Research Institute. 
  39. "Singularity Institute for Artificial Intelligence". YouTube. Retrieved July 8, 2017. 
  40. "The Power of Intelligence". Machine Intelligence Research Institute. July 10, 2007. Retrieved May 5, 2020. 
  41. "The Singularity Summit 2007". Retrieved June 30, 2017. 
  42. "Yudkowsky - The Simple Truth". Retrieved July 5, 2017. 
  43. "About". OpenCog Foundation. Retrieved July 6, 2017. 
  44. Goertzel, Ben (October 29, 2010). "The Singularity Institute's Scary Idea (and Why I Don't Buy It)". Retrieved September 15, 2019. 
  45. http://helldesign.net. "The Singularity Summit 2008: Opportunity, Risk, Leadership > Program". Retrieved June 30, 2017. 
  46. Elise Ackerman (October 23, 2008). "Annual A.I. conference to be held this Saturday in San Jose". The Mercury News. Retrieved July 5, 2017. 
  47. "The Hanson-Yudkowsky AI-Foom Debate". Lesswrongwiki. LessWrong. Retrieved July 1, 2017. 
  48. "Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved July 15, 2017. Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI. 
  49. 49.0 49.1 49.2 49.3 "Recent Singularity Institute Accomplishments". Singularity Institute for Artificial Intelligence. Retrieved July 6, 2017. 
  50. "FAQ - LessWrong Wiki". LessWrong. Retrieved June 1, 2017. 
  51. Michael Vassar (February 16, 2009). "Introducing Myself". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  52. 52.0 52.1 RobbBB (March 13, 2015). "Rationality: From AI to Zombies". LessWrong. Retrieved July 1, 2017. 
  53. "Singularity Institute (@singinst)". Twitter. Retrieved July 4, 2017. 
  54. 54.0 54.1 54.2 Amy Willey Labenz. Personal communication. May 27, 2022.
  55. "Wayback Machine". Retrieved July 2, 2017. 
  56. 56.0 56.1 McCabe, Thomas (February 4, 2011). "The Uncertain Future Forecasting Project Goes Open-Source". H Plus Magazine. Archived from the original on April 13, 2012. Retrieved July 15, 2017. 
  57. "Singularity Summit 2009 Program". Singularity Institute. Retrieved June 30, 2017. 
  58. Stuart Fox (October 2, 2009). "Singularity Summit 2009: The Singularity Is Near". Popular Science. 
  59. "Form 990 2009" (PDF). Retrieved July 8, 2017. 
  60. "Reply to Holden on The Singularity Institute". July 10, 2012. Retrieved June 30, 2017. 
  61. Michael Anissimov (December 12, 2009). "The Uncertain Future". The Singularity Institute Blog. Retrieved July 5, 2017. 
  62. "Form 990 2010" (PDF). Retrieved July 8, 2017. 
  63. "Harry Potter and the Methods of Rationality Chapter 1: A Day of Very Low Probability, a harry potter fanfic". FanFiction. Retrieved July 1, 2017. Updated: 3/14/2015 - Published: 2/28/2010 
  64. David Whelan (March 2, 2015). "The Harry Potter Fan Fiction Author Who Wants to Make Everyone a Little More Rational". Vice. Retrieved July 1, 2017. 
  65. "2013 in Review: Fundraising - Machine Intelligence Research Institute". Machine Intelligence Research Institute. August 13, 2014. Retrieved July 1, 2017. Recently, we asked (nearly) every donor who gave more than $3,000 in 2013 about the source of their initial contact with MIRI, their reasons for donating in 2013, and their preferred methods for staying in contact with MIRI. […] Four came into contact with MIRI via HPMoR. 
  66. Rees, Gareth (August 17, 2010). "Zendegi - Gareth Rees". Retrieved July 15, 2017. 
  67. Sotala, Kaj (October 7, 2010). "Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book". Retrieved July 15, 2017. 
  68. Hanson, Robin (March 25, 2012). "Egan's Zendegi". Retrieved July 15, 2017. 
  69. "Singularity Summit | Program". Retrieved June 30, 2017. 
  70. "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017. 
  71. "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017. 
  72. Louie Helm (December 21, 2010). "Announcing the Tallinn-Evans $125,000 Singularity Challenge". Machine Intelligence Research Institute. Retrieved July 7, 2017. 
  73. Kaj Sotala (December 26, 2010). "Tallinn-Evans $125,000 Singularity Challenge". LessWrong. Retrieved July 7, 2017. 
  74. "GiveWell conversation with SIAI". GiveWell. February 2011. Retrieved July 4, 2017. 
  75. Holden Karnofsky. "Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017. 
  76. "lukeprog comments on Thoughts on the Singularity Institute (SI)". LessWrong. Retrieved June 30, 2017. When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. 
  77. Holden Karnofsky. "Re: [givewell] Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017. 
  78. "singularity.org". Retrieved July 4, 2017. 
  79. 79.0 79.1 "Wayback Machine". Retrieved July 4, 2017. 
  80. "Singularity Institute Volunteering". Retrieved July 14, 2017. 
  81. "Singularity Summit | Program". Retrieved June 30, 2017. 
  82. "SingularitySummits". YouTube. Retrieved July 4, 2017. Joined Oct 17, 2011 
  83. Luke Muehlhauser (January 16, 2012). "Machine Intelligence Research Institute Progress Report, December 2011". Machine Intelligence Research Institute. Retrieved July 14, 2017. 
  84. lukeprog (December 12, 2011). "New 'landing page' website: Friendly-AI.com". LessWrong. Retrieved July 2, 2017. 
  85. Frank, Sam (January 1, 2015). "Come With Us If You Want to Live. Among the apocalyptic libertarians of Silicon Valley". Harper's Magazine. Retrieved July 15, 2017. 
  86. "Video Q&A with Singularity Institute Executive Director". LessWrong. December 10, 2011. Retrieved May 31, 2021. 
  87. "Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director". Machine Intelligence Research Institute. January 12, 2012. Retrieved May 31, 2021. 
  88. "Wayback Machine". Retrieved July 4, 2017. 
  89. Louie Helm (May 8, 2012). "Machine Intelligence Research Institute Progress Report, April 2012". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  90. Holden Karnofsky. "Thoughts on the Singularity Institute (SI)". LessWrong. Retrieved June 30, 2017. 
  91. Helm, Louie (August 6, 2012). "July 2012 Newsletter". Machine Intelligence Research Institute. Retrieved May 5, 2020. 
  92. David J. Hill (August 29, 2012). "Singularity Summit 2012 Is Coming To San Francisco October 13-14". Singularity Hub. Retrieved July 6, 2017. 
  93. 93.00 93.01 93.02 93.03 93.04 93.05 93.06 93.07 93.08 93.09 93.10 93.11 93.12 93.13 93.14 93.15 93.16 93.17 93.18 93.19 93.20 "Research Workshops - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  94. "Singularity University Acquires the Singularity Summit". Singularity University. December 9, 2012. Retrieved June 30, 2017. 
  95. Fox, Joshua (February 14, 2013). "The Singularity Wars". LessWrong. Retrieved July 15, 2017. 
  96. "Form 990 2013" (PDF). Retrieved July 8, 2017. 
  97. "Conversations Archives". Machine Intelligence Research Institute. Retrieved July 15, 2017. 
  98. Luke Muehlhauser (March 22, 2015). "2014 in review". Machine Intelligence Research Institute. Retrieved July 15, 2017. 
  99. "March Newsletter". Machine Intelligence Research Institute. March 7, 2013. Retrieved July 1, 2017. 
  100. "We are now the "Machine Intelligence Research Institute" (MIRI)". Machine Intelligence Research Institute. January 30, 2013. Retrieved June 30, 2017. 
  101. "Facing the Intelligence Explosion, Luke Muehlhauser". Amazon.com. Retrieved July 1, 2017. 
  102. "Machine Intelligence Research Institute - Coming soon...". Retrieved July 4, 2017. 
  103. Luke Muehlhauser (February 28, 2013). "Welcome to Intelligence.org". Machine Intelligence Research Institute. Retrieved May 5, 2020. 
  104. Luke Muehlhauser (April 25, 2013). ""Singularity Hypotheses" Published". Machine Intelligence Research Institute. Retrieved July 14, 2017. 
  105. "Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books". Amazon.com. Retrieved July 14, 2017. 
  106. Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  107. Jon Southurst (January 19, 2014). "Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity". CoinDesk. Retrieved July 6, 2017. 
  108. Luke Muehlhauser (January 27, 2014). "Existential Risk Strategy Conversation with Holden Karnofsky". Machine Intelligence Research Institute. Retrieved July 7, 2017. 
  109. "Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong". Amazon.com. Retrieved July 1, 2017. Publisher: Machine Intelligence Research Institute (February 1, 2014) 
  110. Rob Bensinger (August 10, 2015). "Assessing Our Past and Potential Impact". Machine Intelligence Research Institute. Retrieved July 6, 2017. 
  111. "Recent Hires at MIRI". Machine Intelligence Research Institute. March 13, 2014. Retrieved July 13, 2017. 
  112. "MIRI's March 2014 Newsletter". Machine Intelligence Research Institute. March 18, 2014. Retrieved May 27, 2018. 
  113. "Machine Intelligence Research Institute - Photos". Facebook. Retrieved May 27, 2018. 
  114. "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017. 
  115. "Recent Site Activity - AI Impacts". Retrieved June 30, 2017. Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines 
  116. "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017. 
  117. Benja Fallenstein. "Welcome!". Intelligent Agent Foundations Forum. Retrieved June 30, 2017. Post by Benja Fallenstein 969 days ago 
  118. Luke Muehlhauser (January 11, 2015). "An improved "AI Impacts" website". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  119. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  120. Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017. 
  121. Ryan Carey. "Rationality: From AI to Zombies was released today!". Effective Altruism Forum. Retrieved July 1, 2017. 
  122. Luke Muehlhauser (May 6, 2015). "A fond farewell and a new Executive Director". Machine Intelligence Research Institute. Retrieved June 30, 2017. 
  123. "Self-prediction in Decision Theory and Artificial Intelligence — Faculty of Philosophy". Retrieved February 24, 2018. 
  124. Nate Soares (June 3, 2015). "Taking the reins at MIRI". LessWrong. Retrieved July 5, 2017. 
  125. "I am Nate Soares, AMA!". Effective Altruism Forum. Retrieved July 5, 2017. 
  126. "MIRI Summer Fellows 2015". CFAR. June 21, 2015. Retrieved July 8, 2017. 
  127. "Center for Applied Rationality — General Support". Open Philanthropy. Retrieved July 8, 2017. We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI. 
  128. "Library/Machine Intelligence Research Institute". Effective Altruism Wikia. September 26, 2015. Retrieved July 15, 2017. 
  129. Larks (December 13, 2016). "2017 AI Risk Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved July 8, 2017. 
  130. "Arbital AI Alignment Exploration". Retrieved January 30, 2018. 
  131. Soares, Nate (March 30, 2016). "MIRI has a new COO: Malo Bourgon". Machine Intelligence Research Institute. Retrieved September 15, 2019. 
  132. "The AI Alignment Problem: Why It's Hard, and Where to Start". May 6, 2016. Retrieved May 7, 2020. 
  133. Yudkowsky, Eliezer (December 28, 2016). "AI Alignment: Why It's Hard, and Where to Start". Retrieved May 7, 2020. 
  134. Rob Bensinger (July 27, 2016). "New paper: "Alignment for advanced machine learning systems"". Machine Intelligence Research Institute. Retrieved July 1, 2017. 
  135. "Machine Intelligence Research Institute — General Support". Open Philanthropy. Retrieved June 30, 2017. 
  136. "New paper: "Logical induction"". Machine Intelligence Research Institute. September 12, 2016. Retrieved July 1, 2017. 
  137. Scott Aaronson (October 9, 2016). "Shtetl-Optimized » Blog Archive » Stuff That's Happened". Retrieved July 1, 2017. Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction". 
  138. Rob Bensinger (October 11, 2016). "Ask MIRI Anything (AMA)". Effective Altruism Forum. Retrieved July 5, 2017. 
  139. "AI Impacts — General Support". Open Philanthropy. Retrieved June 30, 2017. 
  140. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  141. "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017. 
  142. "Updates to the research team, and a major donation - Machine Intelligence Research Institute". Machine Intelligence Research Institute. July 4, 2017. Retrieved July 4, 2017. 
  143. Daniel Dewey (July 7, 2017). "My Current Thoughts on MIRI's "Highly Reliable Agent Design" Work". Effective Altruism Forum. Retrieved July 7, 2017. 
  144. Yudkowsky, Eliezer; Soares, Nate. "[1710.05060] Functional Decision Theory: A New Theory of Instrumental Rationality". Retrieved October 22, 2017. Submitted on 13 Oct 2017 
  145. Matthew Graves (October 22, 2017). "New Paper: "Functional Decision Theory"". Machine Intelligence Research Institute. Retrieved October 22, 2017. 
  146. "There's No Fire Alarm for Artificial General Intelligence". Machine Intelligence Research Institute. October 13, 2017. Retrieved April 19, 2020. 
  147. "There's No Fire Alarm for Artificial General Intelligence". LessWrong. October 13, 2017. Retrieved April 19, 2020. 
  148. Malo Bourgon (November 8, 2017). "A Major Grant from Open Philanthropy". Machine Intelligence Research Institute. Retrieved November 11, 2017. 
  149. "Machine Intelligence Research Institute — General Support (2017)". Open Philanthropy. November 8, 2017. Retrieved November 11, 2017. 
  150. "Inadequacy and Modesty". Retrieved October 29, 2017. 
  151. "Inadequacy and Modesty". Effective Altruism Forum. Retrieved October 29, 2017. 
  152. "Discussion - Inadequate Equilibria". Inadequate Equilibria. Retrieved December 12, 2017. 
  153. "Book Review: Inadequate Equilibria". Slate Star Codex. December 9, 2017. Retrieved December 12, 2017. 
  154. "Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky". Retrieved December 12, 2017. 
  155. Robin Hanson (November 25, 2017). "Overcoming Bias : Why Be Contrarian?". Retrieved December 12, 2017. 
  156. Yudkowsky, Eliezer (November 25, 2017). "Security Mindset and Ordinary Paranoia". Machine Intelligence Research Institute. Retrieved May 7, 2020. 
  157. Yudkowsky, Eliezer (November 26, 2017). "Security Mindset and the Logistic Success Curve". Machine Intelligence Research Institute. Retrieved May 7, 2020. 
  158. Malo Bourgon (December 1, 2017). "MIRI's 2017 Fundraiser". Machine Intelligence Research Institute. Retrieved December 12, 2017. 
  159. Malo Bourgon (January 10, 2018). "Fundraising success!". Machine Intelligence Research Institute. Retrieved January 30, 2018. 
  160. 160.0 160.1 "MIRI's 2018 Fundraiser". Machine Intelligence Research Institute. November 26, 2018. Retrieved February 14, 2019. 
  161. "AI Risk for Computer Scientists. Join us for four days of leveling up thinking on AI risk.". Machine Intelligence Research Institute. Retrieved September 14, 2019. 
  162. Oliver Habryka (October 29, 2018). "Announcing the new AI Alignment Forum". Machine Intelligence Research Institute. Retrieved February 14, 2019. 
  163. "Introducing the AI Alignment Forum (FAQ)". AI Alignment Forum. Retrieved February 14, 2019. 
  164. Arnold, Raymond (July 10, 2018). "Announcing AlignmentForum.org Beta". LessWrong. Retrieved April 18, 2020. 
  165. "AI Alignment Writing Day 2018". Retrieved April 19, 2020. 
  166. "Embedded Agency". Machine Intelligence Research Institute. Retrieved February 14, 2019. 
  167. "Embedded Agency". LessWrong 2.0. October 29, 2018. Retrieved February 14, 2019. 
  168. "Embedded Agency". AI Alignment Forum. October 29, 2018. Retrieved February 14, 2019. 
  169. "MIRI on Twitter". Twitter. Retrieved February 14, 2019. "Embedded Agency" in finished form, with new material on self-reference and logical uncertainty 
  170. Rob Bensinger. "Rob Bensinger comments on Embedded Agents". LessWrong 2.0 viewer. Retrieved February 14, 2019. 
  171. "2018 Update: Our New Research Directions - Machine Intelligence Research Institute". Machine Intelligence Research Institute. November 22, 2018. Retrieved February 14, 2019. 
  172. "Our 2018 Fundraiser Review - Machine Intelligence Research Institute". Machine Intelligence Research Institute. February 11, 2019. Retrieved February 14, 2019. 
  173. Bensinger, Rob (November 28, 2018). "MIRI's newest recruit: Edward Kmett!". Machine Intelligence Research Institute. Retrieved September 14, 2019. 
  174. "MIRI's newest recruit: Edward Kmett!". Reddit. December 1, 2018. Retrieved September 14, 2019. 
  175. "Announcing a new edition of "Rationality: From AI to Zombies"". Machine Intelligence Research Institute. December 16, 2018. Retrieved February 14, 2019. 
  176. Rob Bensinger. "New edition of "Rationality: From AI to Zombies"". LessWrong 2.0. Retrieved February 14, 2019. 
  177. "Machine Intelligence Research Institute — General Support (2019)". Open Philanthropy. April 1, 2019. Retrieved September 14, 2019. 
  178. Bensinger, Rob (April 1, 2019). "New grants from the Open Philanthropy Project and BERI". Machine Intelligence Research Institute. Retrieved September 14, 2019. 
  179. Habryka, Oliver (April 23, 2019). "MIRI ($50,000)". Effective Altruism Forum. Retrieved September 15, 2019. 
  180. Colm Ó Riain (February 13, 2020). "Our 2019 Fundraiser Review". Machine Intelligence Research Institute. Retrieved April 19, 2020. 
  181. "Open Philanthropy donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017. 
  182. 182.0 182.1 182.2 Bensinger, Rob (April 27, 2020). "MIRI's largest grant to date!". Machine Intelligence Research Institute. Retrieved May 2, 2020. 
  183. "Fund Payout Report: April 2020 – Long-Term Future Fund Grant Recommendations". Effective Altruism Funds. April 14, 2020. Retrieved May 5, 2020. 
  184. "SFF-2020-H1 S-process Recommendations Announcement". Survival and Flourishing Fund. May 29, 2020. Retrieved October 10, 2020. 
  185. "Survival and Flourishing Fund". Retrieved October 10, 2020. 
  186. Bensinger, Rob (October 9, 2020). "MIRI, the place where I work, is very seriously considering moving to a different country soon (most likely Canada), or moving to elsewhere in the US.". Facebook. Retrieved October 10, 2020. 
  187. Garrabrant, Scott (October 22, 2020). "Introduction to Cartesian Frames". LessWrong. Retrieved December 20, 2020. 
  188. Bensinger, Rob (October 23, 2020). "October 2020 Newsletter". Machine Intelligence Research Institute. Retrieved December 20, 2020. 
  189. "SFF-2020-H2 S-process Recommendations Announcement". Survival and Flourishing Fund. Retrieved December 10, 2020. 
  190. Bensinger, Rob (November 30, 2020). "November 2020 Newsletter". Machine Intelligence Research Institute. Retrieved December 20, 2020. 
  191. Bourgon, Malo (December 21, 2020). "2020 Updates and Strategy". Machine Intelligence Research Institute. Retrieved December 22, 2020. 
  192. 192.0 192.1 Bensinger, Rob (May 8, 2021). "MIRI location optimization (and related topics) discussion". LessWrong. Retrieved May 31, 2021. 
  193. Colm Ó Riain (May 13, 2021). "Our all-time largest donation, and major crypto support from Vitalik Buterin". Retrieved May 31, 2021. 
  194. Garrabrant, Scott (May 23, 2021). "Finite Factored Sets". Machine Intelligence Research Institute. Retrieved May 31, 2021. 
  195. Bensinger, Rob (November 15, 2021). "Late 2021 MIRI Conversations". Alignment Forum. Retrieved December 1, 2021. 
  196. Soares, Nate (November 29, 2021). "Visible Thoughts Project and Bounty Announcement". Retrieved December 2, 2021. 
  197. "December 2021 Newsletter". Machine Intelligence Research Institute. December 31, 2021. Retrieved September 2, 2024. 
  198. "Six Dimensions of Operational Adequacy in AGI Projects". LessWrong. May 30, 2022. Retrieved September 5, 2024. 
  199. "AGI Ruin: A List of Lethalities". LessWrong. June 5, 2022. Retrieved September 5, 2024. 
  200. "Visible Thoughts Project and Bounty Announcement". LessWrong. April 25, 2023. Retrieved September 6, 2024. 
  201. "July 2022 Newsletter". Machine Intelligence Research Institute. July 30, 2022. Retrieved September 2, 2024. 
  202. Eliezer Yudkowsky (January 1, 2020). "Eliezer Yudkowsky's AI Plan Challenge". Retrieved October 8, 2024. 
  203. "159 - We're All Gonna Die with Eliezer Yudkowsky". YouTube. February 20, 2023. Retrieved April 14, 2024. 
  204. "Full Transcript: Eliezer Yudkowsky on the Bankless podcast". LessWrong. February 23, 2023. Retrieved April 14, 2024. 
  205. "Transcript: Yudkowsky on Bankless follow-up Q&A". LessWrong. February 27, 2023. Retrieved April 14, 2024. 
  206. Pope, Quintin (March 20, 2023). "My Objections to "We're All Gonna Die with Eliezer Yudkowsky"". LessWrong. Retrieved May 17, 2024. 
  207. Yudkowsky, Eliezer (March 29, 2023). "Pausing AI Developments Isn't Enough. We Need to Shut it All Down". Time Magazine. Retrieved May 17, 2024. 
  208. "Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky". March 29, 2023. Retrieved May 17, 2024. 
  209. "Announcing MIRI's New CEO and Leadership Team". Machine Intelligence Research Institute. October 10, 2023. Retrieved September 2, 2024. 
  210. "Where I Agree and Disagree with Eliezer". LessWrong. August 10, 2023. Retrieved September 5, 2024. 
  211. "MIRI 2024 Mission and Strategy Update". LessWrong. May 14, 2024. Retrieved September 5, 2024. 
  212. "April 2024 Newsletter". Machine Intelligence Research Institute. April 12, 2024. Retrieved September 2, 2024. 
  213. "The Risks of Expanding the Definition of AI Safety". Semafor. March 9, 2024. Retrieved September 5, 2024. 
  214. "April 2024 Newsletter". Machine Intelligence Research Institute. April 12, 2024. Retrieved September 2, 2024. 
  215. "NTIA Request for Comment on Open-Weight AI Models". Regulations.gov. Retrieved September 10, 2024. 
  216. "Governing AI for Humanity Interim Report" (PDF). United Nations. Retrieved September 10, 2024. 
  217. "OMB Request for Information on AI Procurement in Government". Federal Register. Retrieved September 10, 2024. 
  218. "May 2024 Newsletter". Machine Intelligence Research Institute. May 14, 2024. Retrieved September 5, 2024. 
  219. "May 2024 Newsletter". Machine Intelligence Research Institute. May 14, 2024. Retrieved September 5, 2024. 
  220. "MIRI 2024 Communications Strategy". Machine Intelligence Research Institute. May 29, 2024. Retrieved September 5, 2024. 
  221. "Response to Aschenbrenner's Situational Awareness". Effective Altruism Forum. May 15, 2023. Retrieved September 5, 2024. 
  222. "June 2024 Newsletter". Machine Intelligence Research Institute. June 14, 2024. Retrieved September 2, 2024. 
  223. "Machine Intelligence Research Institute". Google Trends. Retrieved 11 March 2021. 
  224. "Machine Intelligence Research Institute". books.google.com. Retrieved 11 March 2021. 
  225. "Singularity Institute". wikipediaviews.org. Retrieved 17 March 2021. 
  226. "Singularity Institute for Artificial Intelligence: Revision history". Retrieved July 15, 2017. 
  227. "All public logs: search Singularity Institute". Retrieved July 15, 2017.