Difference between revisions of "Timeline of Machine Intelligence Research Institute"
(→Full timeline) |
(5 intermediate revisions by 2 users not shown) | |||
Line 47: | Line 47: | ||
|- | |- | ||
− | | 2023–present || Leadership transitions | + | | 2023–present || Leadership transitions and response to LLM advancements || In 2023, MIRI undergoes major leadership changes, with Nate Soares transitioning to President, Malo Bourgon becoming CEO, and Alex Vermeer taking on the role of COO. This period coincides with the rapid adoption of large language models (LLMs) like OpenAI's ChatGPT, which transforms public and institutional awareness of AI capabilities. These developments drive MIRI to refine its focus, emphasizing systemic risks and governance in a landscape dominated by increasingly powerful AI systems. The organization prioritizes collaborations with policymakers, researchers, and other AI safety groups to address emerging challenges. |
+ | |} | ||
+ | |||
+ | ===Highlights by year (2013 onward)=== | ||
+ | |||
+ | {| class="wikitable" | ||
+ | ! Year !! Highlights | ||
+ | |- | ||
+ | | 2013 || MIRI (formerly SIAI) continues its focus on AI alignment research and community-building. Collaboration with the rationalist and Effective Altruism movements deepens. MIRI establishes itself as a key organization for long-term AI safety, setting the groundwork for its agent foundations research agenda. | ||
+ | |- | ||
+ | | 2014 || MIRI publishes several key technical research papers on decision theory and logical uncertainty. The Effective Altruism community increasingly recognizes MIRI's role in addressing existential risks from AI. The Intelligent Agent Foundations Forum is launched to foster collaboration among AI alignment researchers. | ||
+ | |- | ||
+ | | 2015 || MIRI co-organizes the Puerto Rico AI Safety Conference with the Future of Life Institute, a pivotal event that brings mainstream attention to AI risks. Nate Soares succeeds Luke Muehlhauser as MIRI’s Executive Director, signaling a new phase for the organization. MIRI holds multiple workshops on logical decision theory, logical uncertainty, and Vingean reflection, solidifying its research agenda. | ||
+ | |- | ||
+ | | 2016 || MIRI shifts its research focus toward highly reliable agent design and alignment for advanced AI systems. Scott Garrabrant and collaborators publish the "Logical Induction" paper, a major breakthrough in reasoning under uncertainty. Open Philanthropy awards MIRI a $500,000 grant for general support, acknowledging its role in reducing AI-related risks. | ||
+ | |- | ||
+ | | 2017 || Cryptocurrency donations surge, boosting MIRI’s funding, including a significant contribution from Ethereum co-founder Vitalik Buterin. Open Philanthropy grants MIRI $3.75 million, its largest single grant to date. The organization also begins exploring new research directions while maintaining its focus on AI safety and alignment. | ||
+ | |- | ||
+ | | 2018 || MIRI announces its nondisclosure-by-default research policy, marking a strategic shift to safeguard alignment progress. Edward Kmett, a prolific Haskell developer, joins MIRI to contribute to its research. The Embedded Agency sequence, exploring naturalized agency concepts, is published, becoming a foundational reference for AI alignment discussions. | ||
+ | |- | ||
+ | | 2019 || Open Philanthropy and the Survival and Flourishing Fund provide substantial grants to MIRI, supporting its ongoing AI safety research. MIRI’s research agenda focuses on building robust agents capable of reasoning under logical uncertainty, with continued emphasis on solving core AI alignment challenges. | ||
+ | |- | ||
+ | | 2020 || MIRI receives its largest grant to date—$7.7 million over two years—from Open Philanthropy, reinforcing its position as a leader in AI safety research. Internal discussions about relocating MIRI’s operations emerge but conclude with a decision to remain in Berkeley, California. | ||
+ | |- | ||
+ | | 2021 || Major cryptocurrency donations, including from Vitalik Buterin, provide critical funding for MIRI’s research. Scott Garrabrant introduces "Finite Factored Sets" as a novel approach to causal inference, generating interest in the alignment research community. | ||
+ | |- | ||
+ | | 2022 || Eliezer Yudkowsky publishes "AGI Ruin: A List of Lethalities," renewing discussions on catastrophic risks from AI systems. MIRI refines its internal strategy, pausing public communications to focus on advancing its research agenda amid a rapidly evolving AI landscape. | ||
+ | |- | ||
+ | | 2023 || MIRI undergoes leadership changes, with Malo Bourgon appointed CEO and Nate Soares transitioning to President. Eliezer Yudkowsky’s public advocacy for halting advanced AI development garners significant media attention, amplifying calls for stricter AI governance. | ||
+ | |- | ||
+ | | 2024 || MIRI launches a new technical governance research team, engaging with international AI policy initiatives and contributing to global discussions on AI safety. The organization announces the termination of the Visible Thoughts Project due to evolving research priorities and limited engagement. | ||
|} | |} | ||
Line 85: | Line 115: | ||
|- | |- | ||
− | | 2000 || {{dts|September 1}} || Publication || Large portions of "The Plan to Singularity" are declared obsolete following the formation of the Singularity Institute and a fundamental shift in AI strategy after the publication of "Coding a Transhuman AI" (CaTAI) version 2.<ref name="plan_to_singularity_20011121" /> This marks a pivotal moment in MIRI's (then known as the Singularity Institute) focus. Earlier discussions about the Singularity give way to a more precise, strategic approach to developing safe, self-improving AI. The obsoletion reflects | + | | 2000 || {{dts|September 1}} || Publication || Large portions of "The Plan to Singularity" are declared obsolete following the formation of the Singularity Institute and a fundamental shift in AI strategy after the publication of "Coding a Transhuman AI" (CaTAI) version 2.<ref name="plan_to_singularity_20011121" /> This marks a pivotal moment in the focus of MIRI (then known as the Singularity Institute). Earlier discussions about the Singularity give way to a more precise, strategic approach to developing safe, self-improving AI. The deprecation reflects how new insights are rapidly reshaping the institute's path.
|- | |- | ||
| 2000 || {{dts|September 7}} || Publication || Version 2.2.0 of "Coding a Transhuman AI" (CaTAI) is published.<ref name="CaTAI_20010202" /> CaTAI is a detailed technical document outlining the architecture for creating a transhuman-level artificial intelligence. It covers key ideas on how an AI can be designed to improve itself safely without deviating from its original, human-aligned goals. This text serves as a core theoretical foundation for MIRI's mission, advocating for AI development grounded in ethical and rational decision-making frameworks. | | 2000 || {{dts|September 7}} || Publication || Version 2.2.0 of "Coding a Transhuman AI" (CaTAI) is published.<ref name="CaTAI_20010202" /> CaTAI is a detailed technical document outlining the architecture for creating a transhuman-level artificial intelligence. It covers key ideas on how an AI can be designed to improve itself safely without deviating from its original, human-aligned goals. This text serves as a core theoretical foundation for MIRI's mission, advocating for AI development grounded in ethical and rational decision-making frameworks. | ||
Line 163: | Line 193: | ||
|- | |- | ||
− | | 2005 || {{dts|January 4}} || Publication || "A Technical Explanation of Technical Explanation" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/technical |title=Yudkowsky - Technical Explanation |accessdate=July 5, 2017 |quote=Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute.}}</ref> Eliezer Yudkowsky explores the nature of technical explanations, emphasizing how we can communicate complex ideas with clarity and rigor. This work becomes foundational for those studying rationality and AI, offering insights into how we convey and understand deep technical topics. It plays an important role in grounding the theoretical framework behind AI safety research. MIRI announces its release, underlining its importance to their broader research agenda.<ref name=" | + | | 2005 || {{dts|January 4}} || Publication || "A Technical Explanation of Technical Explanation" is published.<ref>{{cite web |url=http://yudkowsky.net/rational/technical |title=Yudkowsky - Technical Explanation |accessdate=July 5, 2017 |quote=Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute.}}</ref> Eliezer Yudkowsky explores the nature of technical explanations, emphasizing how we can communicate complex ideas with clarity and rigor. This work becomes foundational for those studying rationality and AI, offering insights into how we convey and understand deep technical topics. It plays an important role in grounding the theoretical framework behind AI safety research. MIRI announces its release, underlining its importance to their broader research agenda.<ref name="singinst_feb_2006_news_unique">{{cite web |url=https://web.archive.org/web/20060220211402/http://www.singinst.org:80/news/ |title=News of the Singularity Institute for Artificial Intelligence |author=Singularity Institute |accessdate=July 4, 2017}}</ref> |
|- | |- | ||
Line 281: | Line 312: | ||
|- | |- | ||
− | | 2010 || || Mission || The | + | | 2010 || || Mission || The organization’s mission is updated to: "To develop the theory and particulars of safe self-improving Artificial Intelligence; to support novel research and foster the creation of a research community focused on safe Artificial General Intelligence; and to otherwise improve the probability of humanity surviving future technological advances." This mission statement is also used in 2011 and 2012.<ref>{{cite web |url=https://intelligence.org/files/2010-SIAI990.pdf |title=Form 990 2010 |accessdate=July 8, 2017}}</ref> |
|- | |- | ||
− | | 2010 || {{dts|February 28}} || Publication || | + | |
+ | | 2010 || {{dts|February 28}} || Publication || Eliezer Yudkowsky publishes the first chapter of ''{{w|Harry Potter and the Methods of Rationality}}'' (HPMoR), a fan fiction exploring rationalist themes. The story is published serially, concluding on March 14, 2015. Later surveys identify HPMoR as an initial point of contact with MIRI for at least four major donors in 2013.<ref>{{cite web |url=https://www.fanfiction.net/s/5782108/1/Harry-Potter-and-the-Methods-of-Rationality |title=Harry Potter and the Methods of Rationality Chapter 1: A Day of Very Low Probability, a harry potter fanfic |publisher=FanFiction |accessdate=July 1, 2017 |quote=Updated: 3/14/2015 - Published: 2/28/2010}}</ref><ref>{{cite web |url=https://www.vice.com/en_us/article/gq84xy/theres-something-weird-happening-in-the-world-of-harry-potter-168 |publisher=Vice |title=The Harry Potter Fan Fiction Author Who Wants to Make Everyone a Little More Rational |date=March 2, 2015 |author=David Whelan |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2014/04/02/2013-in-review-fundraising/#identifier_2_10812 |title=2013 in Review: Fundraising - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=August 13, 2014 |accessdate=July 1, 2017}}</ref> | ||
|- | |- | ||
− | | 2010 || {{dts|April}} || Staff || Amy Willey Labenz is promoted to Chief Operating Officer | + | |
+ | | 2010 || {{dts|April}} || Staff || Amy Willey Labenz is promoted to Chief Operating Officer (COO) of MIRI. She also serves as the Executive Producer of the Singularity Summits from 2010 to 2012.<ref name="amy-email-2022-05-27"/> | ||
|- | |- | ||
− | | 2010 || {{dts|June 17}} || Popular culture || ''{{w|Zendegi}}'' | + | |
+ | | 2010 || {{dts|June 17}} || Popular culture || Greg Egan’s science fiction novel ''{{w|Zendegi}}'' is published. The book includes characters and concepts inspired by the rationalist and AI safety communities, such as the Friendly AI project, the ''Overcoming Bias'' blog, and ''LessWrong.''<ref>{{cite web|url = http://gareth-rees.livejournal.com/31182.html|title = Zendegi - Gareth Rees|date = August 17, 2010|accessdate = July 15, 2017|last = Rees|first = Gareth}}</ref><ref>{{Cite web|url = http://lesswrong.com/lw/2ti/greg_egan_disses_standins_for_overcoming_bias/|title = Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book|last = Sotala|first = Kaj|date = October 7, 2010|accessdate = July 15, 2017}}</ref><ref>{{cite web|url = http://www.overcomingbias.com/2012/03/egans-zendegi.html|title = Egan’s Zendegi|date = March 25, 2012|accessdate = July 15, 2017|last = Hanson|first = Robin}}</ref> | ||
|- | |- | ||
− | | 2010 || {{dts|August 14}}–15 || Conference || The Singularity Summit 2010 | + | |
+ | | 2010 || {{dts|August 14}}–15 || Conference || The Singularity Summit 2010 is held in San Francisco. The event features speakers from AI research, technology, and futurism communities.<ref>{{cite web |url=https://web.archive.org/web/20110107222220/http://www.singularitysummit.com/program |title=Singularity Summit {{!}} Program |accessdate=June 30, 2017}}</ref> | ||
|- | |- | ||
− | |||
− | | | + | | 2010 || {{dts|December 21}} || Social media || MIRI posts to its Facebook page for the first time. This marks the organization’s entry into social media platforms.<ref>{{cite web |url=https://www.facebook.com/MachineIntelligenceResearchInstitute/posts/176049615748742 |title=Machine Intelligence Research Institute - Posts |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://www.facebook.com/pg/MachineIntelligenceResearchInstitute/posts/?ref=page_internal |title=Machine Intelligence Research Institute - Posts |accessdate=July 4, 2017}}</ref> |
− | |||
|- | |- | ||
− | | | + | | 2010–2011 || {{dts|December 21, 2010}}{{snd}}{{dts|January 20, 2011}} || Financial || The Tallinn–Evans $125,000 Singularity Challenge fundraiser takes place. Donations to MIRI are matched dollar-for-dollar by Edwin Evans and Jaan Tallinn, up to $125,000.<ref>{{cite web |url=https://intelligence.org/2010/12/21/announcing-the-tallinn-evans-125000-singularity-holiday-challenge/ |title=Announcing the Tallinn-Evans $125,000 Singularity Challenge |author=Louie Helm |publisher=Machine Intelligence Research Institute |date=December 21, 2010 |accessdate=July 7, 2017}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/3gy/tallinnevans_125000_singularity_challenge/ |title=Tallinn-Evans $125,000 Singularity Challenge |date=December 26, 2010 |author=Kaj Sotala |accessdate=July 7, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> |
|- | |- | ||
− | | 2011 || {{dts|February}} || | + | | 2011 || {{dts|February 4}} || Project || ''The Uncertain Future'', a web-based tool for estimating probabilities of various future scenarios involving AI and other technologies, is made open-source. The project is aimed at fostering public understanding of the uncertainties surrounding future technological advancements.<ref name=hplus-tuf/> |
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|February}} || Outside review || Holden Karnofsky, co-founder of GiveWell, holds a discussion with MIRI staff to assess the organization’s strategy, priorities, and effectiveness. Key topics include MIRI's research focus, its ability to produce actionable results, and its approach to donor communication. Karnofsky critiques speculative initiatives like the "Persistent Problems Group" (PPG), which aimed to assemble expert panels on misunderstood topics, questioning its relevance to MIRI’s stated goal of addressing existential risks from AI. The conversation transcript, released on April 30, prompts broader discussions in the rationalist and philanthropic communities about MIRI’s focus and alignment with its mission.<ref>{{cite web |url=http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc |title=GiveWell conversation with SIAI |date=February 2011 |publisher=GiveWell |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://groups.yahoo.com/neo/groups/givewell/conversations/topics/270 |publisher=Yahoo! Groups |title=Singularity Institute for Artificial Intelligence |author=Holden Karnofsky |accessdate=July 4, 2017}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|April}} || Staff || Luke Muehlhauser begins working as an intern at MIRI. In reflections shared later, Muehlhauser notes operational and organizational inefficiencies at the time, which shape his vision for improving MIRI’s structure when he becomes Executive Director.<ref>{{cite web |url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/6l4h |title=lukeprog comments on Thoughts on the Singularity Institute (SI) |accessdate=June 30, 2017 |quote=When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts|June 24}} || | + | |
+ | | 2011 || {{dts|May 10}}{{snd}}{{dts|June 24}} || Outside review || Holden Karnofsky and Jaan Tallinn correspond about MIRI’s activities, with Dario Amodei participating in an initial phone conversation. Their discussion touches on MIRI’s research goals and the broader implications of AI safety. The correspondence is shared on the GiveWell mailing list on July 18, furthering public engagement with AI safety issues.<ref>{{cite web |url=https://groups.yahoo.com/neo/groups/givewell/conversations/messages/287 |title=Re: [givewell] Singularity Institute for Artificial Intelligence |author=Holden Karnofsky |publisher=Yahoo! Groups |accessdate=July 4, 2017}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|June 24}} || Domain || A Wayback Machine snapshot shows that <code>singularity.org</code> has become a GoDaddy.com placeholder. Previously, the domain appears to have hosted an unrelated blog.<ref>{{cite web |url=https://web.archive.org/web/20110624011222/http://singularity.org:80/ |title=singularity.org |accessdate=July 4, 2017}}</ref><ref name="singularity_org_2011">{{cite web |url=https://web.archive.org/web/20111001000000*/singularity.org |title=Wayback Machine |accessdate=July 4, 2017}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|July 18}}{{snd}}{{dts|October 20}} || Domain || During this period, the <code>singularity.org</code> domain redirects to <code>singinst.org/singularityfaq</code>, which hosts FAQs about the singularity and MIRI’s approach to AI safety.<ref name="singularity_org_2011" /> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|September 6}} || Domain || The first Wayback Machine capture of <code>singularityvolunteers.org</code> is made. This site is used to coordinate volunteer efforts for MIRI’s projects and events.<ref>{{cite web |url=https://web.archive.org/web/20110906193713/http://www.singularityvolunteers.org/ |title=Singularity Institute Volunteering |accessdate=July 14, 2017}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts|October | + | |
+ | | 2011 || {{dts|October 15}}–16 || Conference || The Singularity Summit 2011 is held in New York. The event features talks from researchers and thinkers on AI, futurism, and technology, attracting attention from both academic and public audiences.<ref>{{cite web |url=https://web.archive.org/web/20111031090701/http://www.singularitysummit.com:80/program |title=Singularity Summit {{!}} Program |accessdate=June 30, 2017}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|October 17}} || Social media || The Singularity Summit YouTube account, "SingularitySummits," is created to share recorded talks and materials from the summit and promote public engagement with AI and technology-related topics.<ref>{{cite web |url=https://www.youtube.com/user/SingularitySummits/about |title=SingularitySummits |publisher=YouTube |accessdate=July 4, 2017 |quote=Joined Oct 17, 2011}}</ref> | ||
|- | |- | ||
− | | 2011 || {{dts| | + | |
+ | | 2011 || {{dts|November}} || Staff || Luke Muehlhauser is appointed Executive Director of MIRI. Muehlhauser’s tenure is marked by efforts to professionalize the organization, improve donor relations, and refocus on foundational research in AI safety.<ref>{{cite web |url=https://intelligence.org/2012/01/16/singularity-institute-progress-report-december-2011/ |title=Machine Intelligence Research Institute Progress Report, December 2011 |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 16, 2012 |accessdate=July 14, 2017}}</ref> | ||
|- | |- | ||
− | || | + | | 2011 || {{dts|December 12}} || Project || Luke Muehlhauser announces the launch of Friendly-AI.com, a website dedicated to explaining the concept of Friendly AI. Friendly AI refers to Artificial General Intelligence (AGI) systems that are designed to align with human values and operate safely, avoiding harmful unintended consequences. The website serves as an introductory resource for the public and AI researchers.<ref>{{cite web |url=http://lesswrong.com/lw/8t6/new_landing_page_website_friendlyaicom/ |title=New 'landing page' website: Friendly-AI.com |author=lukeprog |date=December 12, 2011 |accessdate=July 2, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> |
|- | |- | ||
− | | | + | | 2012 || || Staff || Michael Vassar steps down from his role at MIRI to co-found MetaMed, a personalized medical advisory company. With Skype co-creator Jaan Tallinn and $500,000 in funding from Peter Thiel, MetaMed seeks to revolutionize healthcare by applying rational decision-making and advanced data analysis to personalized medical care. The company targets wealthy clients, offering custom literature reviews and health studies tailored to individual needs. Supporters see the venture as a demonstration of rationality’s potential in complex domains like medicine, though its exclusivity raises questions about broader accessibility and impact.<ref>{{cite web|url=https://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/|title=Come With Us If You Want to Live. Among the apocalyptic libertarians of Silicon Valley|last=Frank|first=Sam|date=January 1, 2015|accessdate=July 15, 2017|publisher=Harper's Magazine}}</ref> |
|- | |- | ||
− | | | + | |
+ | | 2011–2012 || || Opinion || In a two-part Q&A series, Luke Muehlhauser, MIRI’s Executive Director, shares insights into the organization’s evolving focus. He emphasizes a transition away from broader singularity advocacy toward a concentration on AI alignment research, arguing that MIRI’s most important contribution lies in developing foundational theories to guide safe AI development. Muehlhauser discusses challenges such as the speculative nature of the field, the difficulty in attracting top researchers, and the absence of empirical milestones. The series provides a transparent view of MIRI’s priorities and ambitions, helping to build trust among donors and the research community.<ref>{{cite web|url=https://www.lesswrong.com/posts/yGZHQYqWkLMbXy3z7/video-q-and-a-with-singularity-institute-executive-director|title=Video Q&A with Singularity Institute Executive Director|date=December 10, 2011|accessdate=May 31, 2021|publisher=LessWrong}}</ref><ref>{{cite web|url=https://intelligence.org/2012/01/12/qa-2-with-luke-muehlhauser-singularity-institute-executive-director/|title=Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director|date=January 12, 2012|accessdate=May 31, 2021|publisher=Machine Intelligence Research Institute}}</ref> | ||
|- | |- | ||
− | | 2012 || | + | |
+ | | 2012 || || Domain || Between February 4 and May 4, 2012, the singularity.org domain begins redirecting to singinst.org, MIRI’s primary site. This change reflects MIRI’s strategic shift from engaging in public advocacy for the singularity to focusing exclusively on AI safety and technical research, consolidating visitors on the site that communicates MIRI’s research priorities.<ref>{{cite web|url=https://web.archive.org/web/20120501000000*/singularity.org|title=Wayback Machine|accessdate=July 4, 2017}}</ref> | ||
|- | |- | ||
− | | 2012 || {{dts|May | + | |
+ | | 2012 || {{dts|May 8}} || Progress report || MIRI publishes its April 2012 progress report, announcing the formal establishment of the Center for Applied Rationality (CFAR). Previously known as the "Rationality Group," CFAR focuses on creating training programs to enhance reasoning and decision-making skills. This rebranding highlights CFAR’s role in institutionalizing rationality techniques, which later become integral to the Effective Altruism movement. CFAR’s mission aligns with MIRI’s overarching goal of fostering better decision-making in high-stakes domains like AI safety.<ref>{{cite web|url=https://intelligence.org/2012/05/08/singularity-institute-progress-report-april-2012/|title=Machine Intelligence Research Institute Progress Report, April 2012|publisher=Machine Intelligence Research Institute|date=May 8, 2012|author=Louie Helm|accessdate=June 30, 2017}}</ref> | ||
|- | |- | ||
− | | 2012 || {{dts| | + | |
+ | | 2012 || {{dts|May 11}} || Outside review || Holden Karnofsky, co-founder of GiveWell and later Open Philanthropy, publishes "Thoughts on the Singularity Institute (SI)" on LessWrong. Karnofsky critiques MIRI’s speculative approach, questioning its ability to deliver actionable insights and measurable outcomes. He highlights concerns about the lack of empirical grounding in AI safety research and its reliance on theoretical models. The review is influential within the Effective Altruism and existential risk communities, prompting MIRI to reflect on its research priorities and improve its communication with donors.<ref>{{cite web|url=http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/|title=Thoughts on the Singularity Institute (SI)|accessdate=June 30, 2017|author=Holden Karnofsky|publisher=LessWrong}}</ref> | ||
|- | |- | ||
− | | 2012 || {{dts| | + | |
+ | | 2012 || {{dts|August 6}} || Newsletter || MIRI begins publishing monthly newsletters, starting with the July 2012 edition. These newsletters provide updates on research progress, organizational changes, and events, offering supporters greater transparency. The regular cadence helps MIRI engage more effectively with its community of donors and researchers.<ref>{{cite web|url=https://intelligence.org/2012/08/06/july-2012-newsletter/|title=July 2012 Newsletter|last=Helm|first=Louie|date=August 6, 2012|accessdate=May 5, 2020|publisher=Machine Intelligence Research Institute}}</ref> | ||
|- | |- | ||
− | | 2012 || {{dts| | + | |
+ | | 2012 || {{dts|October 13}}–14 || Conference || The Singularity Summit 2012 is held in San Francisco. Speakers include Eliezer Yudkowsky, Ray Kurzweil, and other leading voices in AI and futurism. Topics range from AI safety and neuroscience to human enhancement, attracting a broad audience from academia, technology, and the public. The event serves as a platform for discussing the future impact of AI and technological advancements on society.<ref>{{cite web|url=https://singularityhub.com/2012/08/29/singularity-summit-2012-is-coming-to-san-francisco-october-13-14/|author=David J. Hill|title=Singularity Summit 2012 Is Coming To San Francisco October 13-14|publisher=Singularity Hub|date=August 29, 2012|accessdate=July 6, 2017}}</ref> | ||
|- | |- | ||
− | | 2012 || {{dts| | + | |
+ | | 2012 || {{dts|November 11}}–18 || Workshop || MIRI organizes the 1st Workshop on Logic, Probability, and Reflection. This event gathers researchers to explore foundational challenges in AI alignment, focusing on how AI systems can reason under uncertainty and make decisions reliably. The workshop’s outcomes help shape MIRI’s approach to developing mathematically sound frameworks for AI safety.<ref name="workshops">{{cite web|url=https://intelligence.org/workshops/|title=Research Workshops - Machine Intelligence Research Institute|publisher=Machine Intelligence Research Institute|accessdate=July 1, 2017}}</ref> | ||
|- | |- | ||
− | | | + | | 2012 || {{dts|December 6}} || Singularity Summit Acquisition || Singularity University announces its acquisition of the Singularity Summit from MIRI, marking the end of MIRI’s direct involvement in the event. Some commentators, including Joshua Fox, praise the decision as a way for MIRI to focus exclusively on AI safety research. However, others express concerns that the summit’s emphasis on fostering long-term thinking may be diluted under Singularity University’s broader programming. The summit’s original themes of technological foresight and existential risks are eventually inherited by events like EA Global.<ref>{{cite web|url=http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/|title=Singularity University Acquires the Singularity Summit|publisher=Singularity University|date=December 9, 2012|accessdate=June 30, 2017}}</ref><ref name="singularity-wars">{{cite web|url=http://lesswrong.com/lw/gn4/the_singularity_wars/|title=The Singularity Wars|last=Fox|first=Joshua|date=February 14, 2013|accessdate=July 15, 2017|publisher=LessWrong}}</ref> |
|- | |- | ||
− | | | + | |
+ | | 2013 || || Mission || MIRI revises its mission statement to reflect a sharper focus on AI safety: "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders, and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals." This new wording underscores MIRI's commitment to both technical research and broader engagement with key stakeholders to address global risks associated with advanced AI.<ref>{{cite web |url=https://intelligence.org/wp-content/uploads/2012/06/2013-990.pdf |title=Form 990 2013 |accessdate=July 8, 2017}}</ref> | ||
|- | |- | ||
− | | | + | |
+ | | 2013–2014 || || Project || MIRI conducts numerous expert interviews on AI safety, strategy, and existential risks, recording 75 of its 80 total listed conversations during this time (19 in 2013 and 56 in 2014). These discussions involve leading thinkers in fields like AI alignment, decision theory, and risk mitigation. While valuable, the initiative is deprioritized in mid-2014 after diminishing returns, as noted by executive director Luke Muehlhauser in MIRI’s 2014 review. Nonetheless, these conversations shape the direction of AI safety dialogue during this period.<ref>{{cite web |url=https://intelligence.org/category/conversations/ |title=Conversations Archives |publisher=Machine Intelligence Research Institute |accessdate=July 15, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2015/03/22/2014-review/ |title=2014 in review |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=March 22, 2015 |accessdate=July 15, 2017}}</ref> | ||
|- | |- | ||
− | | 2013 || {{dts|January | + | |
+ | | 2013 || {{dts|January}} || Staff || Michael Anissimov departs MIRI after the acquisition of the Singularity Summit by Singularity University and a major strategic shift at MIRI. Anissimov had played a key role in public outreach and advocacy for the singularity. Following his departure, MIRI pivots away from broader public engagement and focuses more heavily on technical research in AI alignment and decision theory. Despite leaving MIRI, Anissimov remains an active supporter and volunteer.<ref>{{cite web |url=https://intelligence.org/2013/03/07/march-newsletter/ |title=March Newsletter |publisher=Machine Intelligence Research Institute |date=March 7, 2013 |accessdate=July 1, 2017}}</ref> | ||
|- | |- | ||
− | | 2013 || {{dts| | + | |
+ | | 2013 || {{dts|January 30}} || Rebranding || MIRI officially renames itself from the Singularity Institute for Artificial Intelligence (SIAI) to the Machine Intelligence Research Institute (MIRI). The change signals MIRI’s narrowed focus on machine intelligence and technical AI safety challenges, distancing itself from broader discussions about the singularity. This rebranding clarifies the organization’s mission to external stakeholders and aligns with its research-driven goals.<ref>{{cite web |url=https://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/ |title=We are now the "Machine Intelligence Research Institute" (MIRI) |publisher=Machine Intelligence Research Institute |date=January 30, 2013 |accessdate=June 30, 2017}}</ref> | ||
|- | |- | ||
− | | 2013 || {{dts|February | + | |
+ | | 2013 || {{dts|February 1}} || Publication || MIRI publishes "Facing the Intelligence Explosion," a book by executive director Luke Muehlhauser. This work introduces readers to the potential risks posed by advanced AI systems and emphasizes the importance of research into AI alignment and safety. The book frames the discussion around the existential risks of misaligned AI and MIRI’s role in addressing these challenges.<ref>{{cite web |url=https://www.amazon.com/facing-the-intelligence-explosion/dp/B00C7YOR5Q |title=Facing the Intelligence Explosion, Luke Muehlhauser |publisher=Amazon.com |accessdate=July 1, 2017}}</ref> | ||
|- | |- | ||
− | | 2013 || {{dts| | + | |
+ | | 2013 || {{dts|February 11}}{{snd}}{{dts|February 28}} || Domain || MIRI launches its new website, intelligence.org, during this period. The redesigned website features a professional layout emphasizing machine intelligence research and AI safety. Executive director Luke Muehlhauser announces the change in a blog post, positioning the new site as a cornerstone of MIRI’s updated branding and communication strategy.<ref>{{cite web |url=https://web.archive.org/web/20130211105954/http://intelligence.org:80/ |title=Machine Intelligence Research Institute - Coming soon... |accessdate=July 4, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2013/02/28/welcome-to-intelligence-org/ |title=Welcome to Intelligence.org |author=Luke Muehlhauser |date=February 28, 2013 |accessdate=May 5, 2020 |publisher=Machine Intelligence Research Institute}}</ref> | ||
|- | |- | ||
− | |||
− | + | | 2013 || {{dts|April 3}} || Publication || Springer publishes "Singularity Hypotheses: A Scientific and Philosophical Assessment," a collection of essays examining the potential trajectories of AI and the singularity. MIRI researchers and associates contribute to this volume, which explores the societal implications and challenges of smarter-than-human intelligence. The book is positioned as a resource for academics and policymakers seeking to understand the scientific and philosophical issues surrounding advanced AI.<ref>{{cite web |url=https://intelligence.org/2013/04/25/singularity-hypotheses-published/ |title="Singularity Hypotheses" Published |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=April 25, 2013 |accessdate=July 14, 2017}}</ref><ref>{{cite web |url=https://www.amazon.com/Singularity-Hypotheses-Scientific-Philosophical-Assessment/dp/3642325599/ |title=Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books |publisher=Amazon.com |accessdate=July 14, 2017}}</ref> | |
− | | 2013 || {{dts|April | ||
|- | |- | ||
− | | | + | | 2013 || {{dts|April 3}}–24 || Workshop || MIRI hosts the 2nd Workshop on Logic, Probability, and Reflection, advancing research on decision theory, formal reasoning, and AI alignment. The workshop builds on MIRI’s foundational research strategy, fostering collaboration among experts to address critical challenges in creating safe AI systems.<ref name="workshops" /> |
|- | |- | ||
− | | | + | | 2013 || {{dts|April 13}} || Strategy || MIRI publishes a strategic update outlining its increased emphasis on Friendly AI mathematics and research while scaling back public outreach. This shift aims to concentrate resources on technical research areas with the highest potential to influence AI safety and alignment, reflecting a more focused approach to existential risk reduction.<ref>{{cite web |url=https://intelligence.org/2013/04/13/miris-strategy-for-2013/ |title=MIRI's Strategy for 2013 |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=December 11, 2013 |accessdate=July 6, 2017}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|January}} (approximate) || Financial || Jed McCaleb, creator of Ripple and founder of Mt. Gox, donates $500,000 worth of XRP cryptocurrency to MIRI. This significant financial contribution supports AI safety research and highlights the growing crossover between the cryptocurrency community and existential risk initiatives. McCaleb's donation underscores the recognition of AI safety as a crucial issue by technologists in fields outside traditional AI research.<ref>{{cite web |url=http://www.coindesk.com/ripple-creator-500000-xrp-donation-ai-research-charity/ |date=January 19, 2014 |title=Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity |author=Jon Southurst |publisher=CoinDesk |accessdate=July 6, 2017}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|January 16}} || Outside Review || MIRI staff meet with Holden Karnofsky, co-founder of GiveWell, for a strategic discussion on existential risks and AI safety. The meeting focuses on MIRI’s approach to managing long-term risks and explores potential collaboration opportunities between MIRI and other organizations in the Effective Altruism (EA) and philanthropic communities. This conversation is part of MIRI’s broader effort to build alliances and align its strategy with the priorities of influential stakeholders.<ref>{{cite web |url=https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/ |title=Existential Risk Strategy Conversation with Holden Karnofsky |publisher=Machine Intelligence Research Institute |author=Luke Muehlhauser |date=January 27, 2014 |accessdate=July 7, 2017}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|February 1}} || Publication || MIRI publishes "Smarter Than Us: The Rise of Machine Intelligence" by Stuart Armstrong. The book introduces key concepts in AI alignment and explores the challenges posed by advanced AI systems. Written for a general audience, it serves as an accessible entry point into the field of AI safety and aligns with MIRI’s mission to raise awareness about existential risks.<ref>{{cite web |url=https://www.amazon.com/Smarter-Than-Us-Machine-Intelligence-ebook/dp/B00IB4N4KU |title=Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong |publisher=Amazon.com |accessdate=July 1, 2017 |quote=Publisher: Machine Intelligence Research Institute (February 1, 2014)}}</ref> |
− | |||
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|March}}–May || Influence || The Future of Life Institute (FLI) is co-founded by Max Tegmark, Jaan Tallinn, Meia Chita-Tegmark, and Anthony Aguirre, with MIRI playing a foundational role in its creation. Tallinn, one of MIRI’s key supporters and an FLI co-founder, cites MIRI as instrumental in shaping his views on AI safety. FLI focuses on existential risks, particularly those associated with advanced AI, expanding the global conversation on AI alignment and societal impact.<ref>{{cite web |url=https://intelligence.org/2015/08/10/assessing-our-past-and-potential-impact/ |title=Assessing Our Past and Potential Impact |publisher=Machine Intelligence Research Institute |author=Rob Bensinger |date=August 10, 2015 |accessdate=July 6, 2017}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|March 12}}–13 || Staff || MIRI hires several new researchers, including Nate Soares, who would later become its executive director in 2015. To celebrate this organizational growth, MIRI hosts an Expansion Party, highlighting its increased capacity for tackling ambitious AI safety projects. The event strengthens connections with local supporters and researchers.<ref name="recent_hires_at_miri_mar_2014">{{cite web |url=https://intelligence.org/2014/03/13/hires/ |title=Recent Hires at MIRI |publisher=Machine Intelligence Research Institute |date=March 13, 2014 |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2014/03/18/miris-march-2014-newsletter/ |title=MIRI's March 2014 Newsletter |publisher=Machine Intelligence Research Institute |date=March 18, 2014 |accessdate=May 27, 2018}}</ref><ref>{{cite web |url=https://www.facebook.com/pg/MachineIntelligenceResearchInstitute/photos/?tab=album&album_id=655204764516911 |title=Machine Intelligence Research Institute - Photos |publisher=Facebook |accessdate=May 27, 2018}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|May 3}}–11 || Workshop || MIRI hosts its 7th Workshop on Logic, Probability, and Reflection. Participants collaborate on problems related to decision theory, reasoning under uncertainty, and formal AI alignment techniques. These workshops play a key role in advancing the theoretical foundations of safe AI development.<ref name="workshops" /> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|July}}–September || Influence || Nick Bostrom publishes "Superintelligence: Paths, Dangers, Strategies," a landmark work on AI alignment and existential risk. Bostrom, a MIRI advisor, builds on concepts developed by MIRI researchers, significantly contributing to global discussions on managing advanced AI. The book solidifies AI safety as a crucial area of focus for researchers and policymakers.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |date=September 17, 2015 |accessdate=July 6, 2017}}</ref> |
|- | |- | ||
− | | 2014 || {{dts| | + | | 2014 || {{dts|July 4}} || Project || AI Impacts, an initiative analyzing societal implications of AI development, emerges with Katja Grace playing a leading role. The project focuses on AI timelines, economic effects, and strategic considerations, contributing to the broader AI safety community’s understanding of future challenges.<ref>{{cite web |url=https://web.archive.org/web/20141129001325/http://www.aiimpacts.org:80/system/app/pages/recentChanges |title=Recent Site Activity - AI Impacts |accessdate=June 30, 2017 |quote=Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines}}</ref> |
− | |||
− | |||
|- | |- | ||
− | | | + | | 2014 || {{dts|August}} || Project || The AI Impacts website officially launches. Led by Paul Christiano and Katja Grace, the platform provides data-driven analyses and forecasts about AI development, serving as a resource for researchers and policymakers concerned with the long-term societal impacts of AI.<ref>{{cite web |url=https://intelligence.org/2014/09/01/september-newsletter-2/ |title=MIRI's September Newsletter |publisher=Machine Intelligence Research Institute |date=September 1, 2014 |accessdate=July 15, 2017}}</ref> |
|- | |- | ||
− | | | + | | 2014 || {{dts|November 4}} || Project || The Intelligent Agent Foundations Forum launches under MIRI’s management. This forum provides a collaborative space for researchers to discuss foundational problems in decision theory and agent design, central to developing aligned AI systems. It attracts contributions from academics and independent researchers worldwide.<ref>{{cite web |url=https://agentfoundations.org/item?id=1 |website=Intelligent Agent Foundations Forum |title=Welcome! |author=Benja Fallenstein |accessdate=June 30, 2017 |quote=Post by Benja Fallenstein 969 days ago}}</ref> |
|- | |- | ||
− | + | | 2015 || {{dts|January}} || Project || AI Impacts rolls out a redesigned website. The revamped site aims to make research on AI risks, timelines, and governance issues more accessible to the public. Led by MIRI, this initiative reflects a broader effort to improve public engagement and communication about the long-term societal implications of artificial intelligence.<ref>{{cite web |url=https://intelligence.org/2015/01/11/improved-ai-impacts-website/ |title=An improved "AI Impacts" website |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=January 11, 2015 |accessdate=June 30, 2017}}</ref> | |
− | | 2015 || {{dts| | ||
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|January 2}}–5 || Conference || ''The Future of AI: Opportunities and Challenges,'' an AI safety conference organized by the Future of Life Institute, takes place in Puerto Rico. Attendees include influential figures like Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares from MIRI, alongside leading AI researchers and academics. The event galvanizes interest in AI safety, with Nate Soares describing it as a pivotal moment for academia’s recognition of AI existential risks. Discussions center on the potential for unaligned AI to pose catastrophic threats to humanity.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|March 11}} || Influence || ''Rationality: From AI to Zombies,'' a compilation of Eliezer Yudkowsky's influential writings on rational thinking and decision-making, is published. Drawing from "The Sequences" on LessWrong, the book explores topics from cognitive biases to AI safety, positioning itself as a foundational text within the Effective Altruism and rationality movements. It serves as both an introduction to AI alignment challenges and a guide for improving human reasoning.<ref name="rationality_zombies">{{cite web |url=https://www.lesswrong.com/posts/rationality-from-ai-to-zombies |title=Rationality: From AI to Zombies |author=RobbBB |date=March 13, 2015 |publisher=LessWrong |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=http://effective-altruism.com/ea/g6/rationality_from_ai_to_zombies_was_released_today/ |title=Rationality: From AI to Zombies was released today! |publisher=Effective Altruism Forum |author=Ryan Carey |accessdate=July 1, 2017}}</ref> |
|- | |- | ||
− | | 2015 || {{dts|May | + | | 2015 || {{dts|May 4}}–6 || Workshop || MIRI hosts the 1st Introductory Workshop on Logical Decision Theory. This workshop educates researchers on advanced decision theories relevant to AI alignment, tackling problems such as Newcomb's paradox and exploring how AI agents can predict and influence outcomes in logical environments.<ref name="workshops" /> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|May 6}} || Staff || Luke Muehlhauser resigns as MIRI’s executive director to join the Open Philanthropy Project as a research analyst. In his farewell post, he expresses confidence in Nate Soares, a MIRI researcher known for his work on decision theory and AI alignment, as his successor. Soares takes over leadership with the goal of advancing MIRI’s technical research agenda.<ref>{{cite web |url=https://intelligence.org/2015/05/06/a-fond-farewell-and-a-new-executive-director/ |title=A fond farewell and a new Executive Director |author=Luke Muehlhauser |publisher=Machine Intelligence Research Institute |date=May 6, 2015 |accessdate=June 30, 2017}}</ref> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|May 13}}–19 || Conference || MIRI collaborates with the Centre for the Study of Existential Risk (CSER) to co-organize the Self-Prediction in Decision Theory and Artificial Intelligence Conference. This event focuses on how AI systems can predict and incorporate their own actions into decision-making, a critical aspect of ensuring alignment and safety in advanced AI.<ref>{{cite web |url=https://www.phil.cam.ac.uk/events/decision-theory-conf |title=Self-prediction in Decision Theory and Artificial Intelligence — Faculty of Philosophy |accessdate=February 24, 2018}}</ref> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|May 29}}–31 || Workshop || The 1st Introductory Workshop on Logical Uncertainty explores how AI systems can reason under uncertainty in formal, logic-based frameworks. Researchers tackle foundational challenges in ensuring AI reliability in dynamic and unpredictable environments.<ref name="workshops" /> |
|- | |- | ||
− | | 2015 || {{dts|June | + | | 2015 || {{dts|June 3}}–4 || Staff || Nate Soares officially begins as MIRI’s executive director, succeeding Luke Muehlhauser. Soares emphasizes MIRI’s mission to address core AI alignment challenges through focused technical research and collaboration with the broader AI safety community.<ref>{{cite web |url=https://www.lesswrong.com/posts/Taking-the-reins-at-MIRI |title=Taking the reins at MIRI |author=Nate Soares |date=June 3, 2015 |publisher=LessWrong |accessdate=July 5, 2017}}</ref> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|June 11}} || AMA || Nate Soares hosts an "ask me anything" (AMA) session on the Effective Altruism Forum, engaging with the community on AI safety, MIRI’s research agenda, and his vision for the organization’s future under his leadership.<ref>{{cite web |url=http://effective-altruism.com/ea/ju/i_am_nate_soares_ama/ |title=I am Nate Soares, AMA! |publisher=Effective Altruism Forum |accessdate=July 5, 2017}}</ref> |
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|June 12}}–14 || Workshop || MIRI hosts the 2nd Introductory Workshop on Logical Decision Theory. The workshop builds on the previous event, offering deeper insights into decision theories critical for AI alignment, particularly in uncertain and strategic environments.<ref name="workshops" /> |
− | |||
− | |||
|- | |- | ||
− | | 2015 || {{dts| | + | | 2015 || {{dts|June 26}}–28 || Workshop || The 1st Introductory Workshop on Vingean Reflection focuses on developing frameworks for AI systems to reflect on and improve their decision-making procedures without compromising safety or alignment. Researchers address challenges in creating systems that can safely modify their own decision algorithms.<ref name="workshops" /> |
− | |||
− | |||
|- | |- | ||
| 2015 || {{dts|July 7}}–26 || Project || MIRI collaborates with the Center for Applied Rationality (CFAR) to host the MIRI Summer Fellows Program. This initiative aims to cultivate new talent for AI safety research and is described as "relatively successful" in recruiting staff for MIRI.<ref>{{cite web |url=https://web.archive.org/web/20150717025843/http://rationality.org/miri-summer-fellows-2015 |title=MIRI Summer Fellows 2015 |publisher=CFAR |date=June 21, 2015 |accessdate=July 8, 2017}}</ref><ref>{{cite web |url=http://www.openphilanthropy.org/giving/grants/center-applied-rationality-general-support |title=Center for Applied Rationality — General Support |publisher=Open Philanthropy |accessdate=July 8, 2017 |quote=We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI.}}</ref>
|-
| 2015 || {{dts|August 7}}–9 || Workshop || The 2nd Introductory Workshop on Logical Uncertainty continues exploring how AI systems can navigate uncertain and incomplete information, ensuring reliability in real-world applications.<ref name="workshops" />
|-
| 2015 || {{dts|August 28}}–30 || Workshop || The 3rd Introductory Workshop on Logical Decision Theory delves into refining decision-making frameworks for AI systems, with a focus on tackling strategic scenarios with limited information.<ref name="workshops" />
|-
| 2015 || {{dts|September 26}} || Outside Review || The Effective Altruism Wiki publishes a detailed page on MIRI, summarizing its work on reducing existential risks from AI. This page serves as an accessible resource for the EA community to better understand MIRI's mission and projects.<ref>{{cite web|url = http://effective-altruism.wikia.com/wiki/Library/Machine_Intelligence_Research_Institute?oldid=4576|title = Library/Machine Intelligence Research Institute|publisher = Effective Altruism Wikia|date = September 26, 2015|accessdate = July 15, 2017}}</ref>
|-
| 2016 || || Publication || MIRI commissions Eliezer Yudkowsky to create AI alignment content for Arbital, a platform designed to simplify complex technical topics for a general audience. The project aims to address gaps in public understanding of AI alignment by providing accessible explanations of technical concepts, including risks posed by unaligned AI. Arbital is part of a broader effort to improve outreach and education on AI safety.<ref>{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison |publisher=Effective Altruism Forum |author=Larks |date=December 13, 2016 |accessdate=July 8, 2017}}</ref><ref>{{cite web |url=https://arbital.com/explore/ai_alignment/ |title=Arbital AI Alignment Exploration |accessdate=January 30, 2018}}</ref>
|-
| 2016 || {{dts|March 30}} || Staff || MIRI promotes two key staff members to leadership roles: Malo Bourgon becomes Chief Operating Officer (COO), and Rob Bensinger is named Research Communications Manager. These changes reflect MIRI's growing emphasis on operational efficiency and effective communication as it scales up its AI alignment research.<ref>{{cite web|url=https://intelligence.org/2016/03/30/miri-has-a-new-coo-malo-bourgon/|title=MIRI has a new COO: Malo Bourgon |last=Soares |first=Nate |date=March 30, 2016 |accessdate=September 15, 2019 |publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2016 || {{dts|April 1}}–3 || Workshop || The Self-Reference, Type Theory, and Formal Verification Workshop focuses on applying formal methods to AI systems. Researchers explore how self-referential AI can be verified to ensure alignment with human values, leveraging type theory and formal verification techniques. This workshop advances MIRI's goal of creating provably safe AI systems.<ref name="workshops" />
|-
| 2016 || {{dts|May 6}} (talk), {{dts|December 28}} (transcript release) || Publication || Eliezer Yudkowsky delivers a talk at Stanford University titled "AI Alignment: Why It's Hard, and Where to Start," addressing the technical challenges of aligning AI with human values. The transcript, released on MIRI's blog in December, becomes a foundational resource for researchers grappling with alignment problems.<ref>{{cite web|url=https://intelligence.org/stanford-talk/|title=The AI Alignment Problem: Why It's Hard, and Where to Start |date=May 6, 2016 |accessdate=May 7, 2020}}</ref><ref>{{cite web|url=https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/|title=AI Alignment: Why It's Hard, and Where to Start |last=Yudkowsky |first=Eliezer |date=December 28, 2016 |accessdate=May 7, 2020}}</ref>
|-
− | | 2016 || {{dts| | + | | 2016 || {{dts|May 28}}–29 || Workshop || The CSRBAI Workshop on Transparency explores methods for making AI systems interpretable and understandable. Researchers examine how transparency can contribute to trustworthiness and alignment in advanced AI, especially in high-stakes applications.<ref name="workshops" /> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|June 4}}–5 || Workshop || The CSRBAI Workshop on Robustness and Error-Tolerance addresses how to design AI systems capable of handling uncertainty and errors without catastrophic failures. Robustness is identified as a key factor for deploying AI systems in unpredictable environments.<ref name="workshops" /> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|June 11}}–12 || Workshop || The CSRBAI Workshop on Preference Specification focuses on accurately encoding human values and preferences into AI systems, a foundational challenge in AI alignment.<ref name="workshops" /> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|June 17}} || Workshop || The CSRBAI Workshop on Agent Models and Multi-Agent Dilemmas delves into how AI systems interact in multi-agent scenarios. Researchers examine ways to ensure cooperation and prevent conflicts among agents with potentially competing goals.<ref name="workshops" /> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|July 27}} || Publication || MIRI releases the technical agenda paper "Alignment for Advanced Machine Learning Systems," outlining key challenges in aligning machine learning models with human values. This document marks MIRI’s formal pivot to addressing machine learning-specific safety issues.<ref>{{cite web |url=https://intelligence.org/2016/07/27/alignment-machine-learning/ |title=New paper: "Alignment for advanced machine learning systems" |publisher=Machine Intelligence Research Institute |date=July 27, 2016 |author=Rob Bensinger |accessdate=July 1, 2017}}</ref> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|August}} || Financial || Open Philanthropy awards MIRI a $500,000 grant for general support. The grant acknowledges MIRI’s contributions to AI safety while expressing differing views on the technical approaches employed by the organization.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support |title=Machine Intelligence Research Institute — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|September 12}} || Publication || MIRI publishes "Logical Induction," a groundbreaking paper by Scott Garrabrant and co-authors. The paper introduces a framework for reasoning under uncertainty in a mathematically rigorous way, earning widespread acclaim as a significant advancement in formal AI research.<ref>{{cite web |url=https://intelligence.org/2016/09/12/new-paper-logical-induction/ |title=New paper: "Logical induction" |publisher=Machine Intelligence Research Institute |date=September 12, 2016 |accessdate=July 1, 2017}}</ref><ref>{{cite web |url=http://www.scottaaronson.com/blog/?p=2918 |title=Shtetl-Optimized » Blog Archive » Stuff That's Happened |date=October 9, 2016 |author=Scott Aaronson |accessdate=July 1, 2017 |quote=Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction".}}</ref> |
|- | |- | ||
− | | 2016 || {{dts| | + | | 2016 || {{dts|October 12}} || AMA || MIRI hosts an AMA on the Effective Altruism Forum, inviting questions on AI safety, alignment challenges, and research strategies. Nate Soares, Rob Bensinger, and other MIRI staff participate, offering insights into ongoing projects.<ref>{{cite web |url=http://effective-altruism.com/ea/12r/ask_miri_anything_ama/ |title=Ask MIRI Anything (AMA) |publisher=Effective Altruism Forum |date=October 11, 2016 |author=Rob Bensinger |accessdate=July 5, 2017}}</ref> |
|- | |- | ||
− | | 2016 || {{dts|December | + | | 2016 || {{dts|December}} || Financial || Open Philanthropy awards AI Impacts a $32,000 grant to support research on AI development timelines and potential risks. This funding enables the project to expand its analyses and outreach efforts.<ref>{{cite web |url=http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/ai-impacts-general-support |title=AI Impacts — General Support |publisher=Open Philanthropy |accessdate=June 30, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts| | + | |
+ | | 2017 || {{dts|April 1}}–2 || Workshop || The 4th Workshop on Machine Learning and AI Safety continues the exploration of how to align machine learning models with human values. Key topics include enhancing adversarial robustness, mitigating unintended consequences of AI behavior, and improving safe reinforcement learning techniques. This workshop plays a crucial role in addressing challenges posed by increasingly complex AI systems.<ref name="workshops" /> | ||
|- | |- | ||
− | | 2017 || {{dts| | + | | 2017 || {{dts|May 24}} || Publication || The paper "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on arXiv. Co-authored by AI Impacts researchers, the paper surveys AI experts on timelines for when AI will surpass human abilities in various domains. The findings generate significant media coverage, with over 20 outlets discussing its implications for AI development and existential risks. This work highlights the uncertainty surrounding AI timelines and emphasizes the importance of proactive AI safety measures.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=http://aiimpacts.org/media-discussion-of-2016-espai/ |title=Media discussion of 2016 ESPAI |publisher=AI Impacts |date=June 14, 2017 |accessdate=July 13, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts| | + | | 2017 || {{dts|July 4}} || Strategy || MIRI announces a strategic pivot, scaling back work on its "Alignment for Advanced Machine Learning Systems" agenda. This shift is attributed to key researchers Patrick LaVictoire and Jessica Taylor departing, and Andrew Critch taking a leave of absence. MIRI refocuses its research priorities, reaffirming its commitment to foundational AI safety work while adjusting to the evolving landscape of AI research.<ref>{{cite web |url=https://intelligence.org/2017/07/04/updates-to-the-research-team-and-a-major-donation/ |title=Updates to the research team, and a major donation - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=July 4, 2017 |accessdate=July 4, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts|July | + | | 2017 || {{dts|July 7}} || Outside Review || Daniel Dewey, a program officer at Open Philanthropy, publishes "My Current Thoughts on MIRI's Highly Reliable Agent Design Work" on the Effective Altruism Forum. Dewey critiques MIRI’s focus on agent design, suggesting alternative approaches like learning from human behavior may offer more practical paths to AI alignment. His review sparks broader discussions on the merits of different AI safety strategies.<ref>{{cite web |url=http://effective-altruism.com/ea/1ca/my_current_thoughts_on_miris_highly_reliable/ |title=My Current Thoughts on MIRI's "Highly Reliable Agent Design" Work |author=Daniel Dewey |date=July 7, 2017 |publisher=Effective Altruism Forum |accessdate=July 7, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts|July | + | | 2017 || {{dts|July 14}} || Outside Review || A publicly accessible timeline of MIRI’s work is circulated on the timelines wiki. This document outlines the history and evolution of MIRI’s research and strategies, offering insights into the development of AI safety as a field. |
|- | |- | ||
− | | 2017 || {{dts| | + | | 2017 || {{dts|October 13}} || Publication || Eliezer Yudkowsky and Nate Soares publish "Functional Decision Theory: A New Theory of Instrumental Rationality" on arXiv. This paper introduces Functional Decision Theory (FDT), which offers a new approach to decision-making for AI systems. FDT addresses limitations of existing theories and is positioned as a promising framework for developing safer AI. The paper is a milestone in MIRI's theoretical research.<ref>{{cite web |url=https://arxiv.org/abs/1710.05060 |title=[1710.05060] Functional Decision Theory: A New Theory of Instrumental Rationality |accessdate=October 22, 2017 |quote=Submitted on 13 Oct 2017 |first1=Eliezer |last1=Yudkowsky |first2=Nate |last2=Soares}}</ref><ref>{{cite web |url=https://intelligence.org/2017/10/22/fdt/ |title=New Paper: "Functional Decision Theory" |publisher=Machine Intelligence Research Institute |author=Matthew Graves |date=October 22, 2017 |accessdate=October 22, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts|October 13}} || Publication || | + | | 2017 || {{dts|October 13}} || Publication || Eliezer Yudkowsky publishes "There’s No Fire Alarm for Artificial General Intelligence" on MIRI’s blog and the relaunched LessWrong platform. In this influential post, Yudkowsky argues that there will be no clear, universal signal for the emergence of AGI, stressing the need to prepare proactively. The essay prompts substantial debate within the AI safety and Effective Altruism communities.<ref>{{cite web|url=https://intelligence.org/2017/10/13/fire-alarm/|title=There’s No Fire Alarm for Artificial General Intelligence |date=October 13, 2017 |publisher=Machine Intelligence Research Institute |accessdate=April 19, 2020}}</ref><ref>{{cite web|url=https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence|title=There's No Fire Alarm for Artificial General Intelligence |date=October 13, 2017 |publisher=LessWrong |accessdate=April 19, 2020}}</ref> |
|- | |- | ||
− | | 2017 || {{dts|October | + | | 2017 || {{dts|October}} || Financial || Open Philanthropy awards MIRI a $3.75 million grant over three years, a major financial boost. The grant reflects Open Philanthropy’s acknowledgment of MIRI’s role in advancing AI safety research, particularly following the success of the "Logical Induction" paper. This funding supports ongoing research and staff expansion at MIRI.<ref>{{cite web |url=https://intelligence.org/2017/11/08/major-grant-open-phil/ |title=A Major Grant from Open Philanthropy |author=Malo Bourgon |publisher=Machine Intelligence Research Institute |date=November 8, 2017 |accessdate=November 11, 2017}}</ref><ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2017 |publisher=Open Philanthropy |title=Machine Intelligence Research Institute — General Support (2017) |date=November 8, 2017 |accessdate=November 11, 2017}}</ref> |
|- | |- | ||
− | | 2017 || {{dts| | + | | 2017 || {{dts|November 16}} || Publication || Eliezer Yudkowsky’s book ''Inadequate Equilibria'' is fully published after serialized releases on LessWrong and the Effective Altruism Forum. The book discusses epistemology, expert consensus, and decision-making in complex systems. It receives reviews from prominent bloggers and researchers, including Scott Alexander, Scott Aaronson, and Robin Hanson, who engage with its core ideas.<ref>{{cite web |url=http://slatestarcodex.com/2017/11/30/book-review-inadequate-equilibria/ |title=Book Review: Inadequate Equilibria |date=December 9, 2017 |publisher=Slate Star Codex |accessdate=December 12, 2017}}</ref><ref>{{cite web |url=https://www.scottaaronson.com/blog/?p=3535 |title=Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky |accessdate=December 12, 2017}}</ref><ref>{{cite web |url=http://www.overcomingbias.com/2017/11/why-be-contrarian.html |title=Overcoming Bias : Why Be Contrarian? |date=November 25, 2017 |author=Robin Hanson |accessdate=December 12, 2017}}</ref> |
+ | |||
|- | |- | ||
− | | 2017 || {{dts|November | + | | 2017 || {{dts|November 25}}–26 || Publication || Eliezer Yudkowsky publishes the two-part series "Security Mindset and Ordinary Paranoia" and "Security Mindset and the Logistic Success Curve." These posts discuss the importance of adopting a security mindset in AI safety, a continuation of themes from his 2016 talk "AI Alignment: Why It’s Hard, and Where to Start." The series emphasizes the counterintuitive nature of preparing for potential AI risks.<ref>{{cite web|url = https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/|title = Security Mindset and Ordinary Paranoia|date = November 25, 2017|accessdate = May 7, 2020|publisher = Machine Intelligence Research Institute|last = Yudkowsky|first = Eliezer}}</ref><ref>{{cite web|url = https://intelligence.org/2017/11/26/security-mindset-and-the-logistic-success-curve/|title = Security Mindset and the Logistic Success Curve|date = November 26, 2017|accessdate = May 7, 2020|publisher = Machine Intelligence Research Institute|last = Yudkowsky|first = Eliezer}}</ref> |
+ | |||
|- | |- | ||
− | | 2017 || {{dts| | + | |
+ | | 2017 || {{dts|December 1}} || Financial || MIRI launches its 2017 fundraiser, setting ambitious targets to expand its research capabilities. By the fundraiser’s conclusion, over $2.5 million is raised from more than 300 donors, including a $763,970 donation in Ethereum from Vitalik Buterin. This success solidifies MIRI’s financial stability and supports its ongoing AI safety research.<ref>{{cite web |url=https://intelligence.org/2017/12/01/miris-2017-fundraiser/ |title=MIRI's 2017 Fundraiser |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |author=Malo Bourgon |date=December 1, 2017 |accessdate=December 12, 2017}}</ref><ref>{{cite web |url=https://intelligence.org/2018/01/10/fundraising-success/ |title=Fundraising success! |author=Malo Bourgon |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=January 10, 2018 |accessdate=January 30, 2018}}</ref> | ||
|- | |- | ||
− | | | + | | 2018 || {{dts|February}} || Workshop || MIRI and the Center for Applied Rationality (CFAR) conduct the first AI Risk for Computer Scientists (AIRCS) workshop. Designed to engage technical professionals with the challenges of AI safety, the workshops blend rationality training with in-depth discussions on forecasting, AI risks, technical problems, and potential research directions. AIRCS becomes a recurring event, with seven more workshops held in 2018 and a significant expansion in 2019.<ref name=miris-2018-fundraiser /> <ref>{{cite web|url=https://intelligence.org/ai-risk-for-computer-scientists/|title=AI Risk for Computer Scientists. Join us for four days of leveling up thinking on AI risk.|publisher=Machine Intelligence Research Institute}}</ref> |
+ | |||
|- | |- | ||
− | | 2018 || {{dts| | + | | 2018 || {{dts|August}} (joining), {{dts|November 28}} (announcement), {{dts|December 1}} (AMA) || Staff || Prolific Haskell developer Edward Kmett joins MIRI. Kmett, renowned for his work in programming and functional languages, emphasizes that his research will remain open despite MIRI’s nondisclosure policy. In an AMA on Reddit, he clarifies that he will strive to produce high-quality work, as his outputs may influence perceptions of MIRI's broader efforts.<ref>{{cite web|url=https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/|title=MIRI’s newest recruit: Edward Kmett! |publisher=Machine Intelligence Research Institute}}</ref><ref>{{cite web|url=https://www.reddit.com/r/haskell/comments/a24hw7/miris_newest_recruit_edward_kmett/|title=MIRI's newest recruit: Edward Kmett! |publisher=Reddit}}</ref> |
+ | |||
|- | |- | ||
− | | 2018 || {{dts|October 29}} || Project || The | + | |
| 2018 || {{dts|October 29}} || Project || The AI Alignment Forum, a centralized hub for alignment researchers, is officially launched. Developed by the LessWrong 2.0 team with support from MIRI, the forum replaces MIRI's Intelligent Agent Foundations Forum. It provides a space for researchers to engage in detailed discussions on AI alignment challenges, fostering collaboration across the field. The forum had previously launched in beta on July 10, 2018, coinciding with the inaugural AI Alignment Writing Day during the MIRI Summer Fellows Program.<ref>{{cite web |url=https://intelligence.org/2018/10/29/announcing-the-ai-alignment-forum/ |title=Announcing the new AI Alignment Forum |publisher=Machine Intelligence Research Institute}}</ref><ref>{{cite web|url=https://www.alignmentforum.org/posts/JiMAMNAb55Qq24nES/announcing-alignmentforum-org-beta |title=Announcing AlignmentForum.org Beta |author=Raymond Arnold}}</ref>
|-
| 2018 || {{dts|October 29}}{{snd}}November 15 || Publication || MIRI publishes the "Embedded Agency" sequence by researchers Abram Demski and Scott Garrabrant. This series redefines the concept of naturalized agency as embedded agency, offering insights into how AI systems can operate as agents situated within and interacting with the environments they model. The serialized installments, released across MIRI’s blog, LessWrong 2.0, and the Alignment Forum, culminate in a full-text version on November 15. The sequence introduces foundational ideas in self-reference, logical uncertainty, and the limitations of traditional agent models.<ref>{{cite web|url=https://intelligence.org/embedded-agency/|title=Embedded Agency |publisher=Machine Intelligence Research Institute}}</ref><ref>{{cite web|url=https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh|title=Embedded Agency |publisher=LessWrong 2.0}}</ref><ref>{{cite web|url=https://twitter.com/miriberkeley/status/1063166929899159552|title=Embedded Agency (Full Version) |publisher=Twitter}}</ref>
|-
| 2018 || {{dts|November 22}} || Strategy || Nate Soares publishes MIRI’s 2018 update, outlining new research directions under MIRI’s nondisclosure-by-default policy. The post emphasizes "deconfusion," a research approach aimed at clarifying foundational AI alignment problems. MIRI also issues a call for recruits, signaling a growing need for expertise in their evolving focus areas.<ref>{{cite web|url=https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/|title=2018 Update: Our New Research Directions |publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2018 || {{dts|November 26}} || Financial || MIRI launches its 2018 fundraiser, which runs through December 31, raising $951,817 from 348 donors. The funds support MIRI’s expanding research efforts, including its nondisclosed-by-default projects and AIRCS workshops.<ref name=miris-2018-fundraiser>{{cite web|url=https://intelligence.org/2018/11/26/miris-2018-fundraiser/|title=MIRI's 2018 Fundraiser |publisher=Machine Intelligence Research Institute}}</ref><ref>{{cite web|url=https://intelligence.org/2019/02/11/our-2018-fundraiser-review/|title=Our 2018 Fundraiser Review |publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2018 || {{dts|August}} (joining), {{dts|November 28}} (announcement), {{dts|December 1}} (AMA) || Staff || Prolific Haskell developer Edward Kmett joins MIRI. Kmett, renowned for his work in programming and functional languages, emphasizes that his research will remain open despite MIRI’s nondisclosure policy. In an AMA on Reddit, he clarifies that he will strive to produce high-quality work, as his outputs may influence perceptions of MIRI's broader efforts.<ref>{{cite web|url=https://intelligence.org/2018/11/28/miris-newest-recruit-edward-kmett/|title=MIRI’s newest recruit: Edward Kmett! |publisher=Machine Intelligence Research Institute}}</ref><ref>{{cite web|url=https://www.reddit.com/r/haskell/comments/a24hw7/miris_newest_recruit_edward_kmett/|title=MIRI's newest recruit: Edward Kmett! |publisher=Reddit}}</ref>
|-
| 2018 || {{dts|December 15}} || Publication || MIRI announces a new edition of Eliezer Yudkowsky's ''Rationality: From AI to Zombies'' (i.e. the book version of "the Sequences"). At the time of the announcement, only two sequences, ''Map and Territory'' and ''How to Actually Change Your Mind'', are available in the new edition.<ref>{{cite web |url=https://intelligence.org/2018/12/15/announcing-new-raz/ |title=Announcing a new edition of "Rationality: From AI to Zombies" |publisher=Machine Intelligence Research Institute |date=December 16, 2018 |accessdate=February 14, 2019}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/NjFgqv8bzjhXFaELP/new-edition-of-rationality-from-ai-to-zombies |title=New edition of "Rationality: From AI to Zombies" |publisher=LessWrong 2.0 |author=Rob Bensinger |accessdate=February 14, 2019}}</ref>
|-
| 2019 || {{dts|February}} || Financial || Open Philanthropy awards MIRI a grant of $2,112,500 over two years. This grant, decided by the Committee for Effective Altruism Support, aligns with grants provided to other Effective Altruism (EA) organizations, including 80,000 Hours and the Centre for Effective Altruism, reflecting a broader EA funding strategy. Around the same time, the Berkeley Existential Risk Initiative (BERI) grants $600,000 to MIRI. These combined contributions signify continued institutional confidence in MIRI’s work, supporting its nondisclosed-by-default AI safety research and operational capacity. MIRI discusses these grants in a blog post, noting their significance in bolstering its research agenda.<ref>{{cite web|url = https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support-2019|title = Machine Intelligence Research Institute — General Support (2019)|publisher = Open Philanthropy|date = April 1, 2019|accessdate = September 14, 2019}}</ref><ref>{{cite web|url = https://intelligence.org/2019/04/01/new-grants-open-phil-beri/|title = New grants from the Open Philanthropy Project and BERI|date = April 1, 2019|accessdate = September 14, 2019|publisher = Machine Intelligence Research Institute|last = Bensinger|first = Rob}}</ref>
|-
| 2019 || {{dts|April 23}} || Financial || The Long-Term Future Fund announces a $50,000 grant to MIRI as part of its April 2019 grant round. Oliver Habryka, the lead investigator for this grant, outlines the rationale behind the decision, praising MIRI's contributions to AI safety and addressing perceived funding gaps. This grant highlights MIRI's ongoing role in the Effective Altruism community as a vital organization tackling existential risks from AI.<ref>{{cite web|url = https://forum.effectivealtruism.org/posts/CJJDwgyqT4gXktq6g/long-term-future-fund-april-2019-grant-decisions#MIRI___50_000_|title = MIRI ($50,000)|last = Habryka|first = Oliver|date = April 23, 2019|accessdate = September 15, 2019|publisher = Effective Altruism Forum}}</ref>
|-
| 2019 || {{dts|December}} || Financial || MIRI's 2019 fundraiser raises $601,120 from over 259 donors, a significant decline compared to past fundraisers. A retrospective blog post, published in February 2020, analyzes factors contributing to the lower total, including reduced cryptocurrency donations due to market conditions, challenges posed by MIRI’s nondisclosed research policies, and shifts in donor behavior such as bunching donations across years to maximize tax benefits. Other contributing factors include fewer matching opportunities, a reduced perception of MIRI’s marginal need, and prior donors transitioning from earning-to-give strategies to direct work. The analysis underscores evolving dynamics in MIRI's donor base and external conditions affecting its fundraising outcomes.<ref>{{cite web|url = https://intelligence.org/2020/02/13/our-2019-fundraiser-review/|title = Our 2019 Fundraiser Review|date = February 13, 2020|accessdate = April 19, 2020|author = Colm Ó Riain|publisher = Machine Intelligence Research Institute}}</ref>
|-
| 2020 || {{dts|February}} || Financial || Open Philanthropy awards MIRI a $7,703,750 grant over two years, marking MIRI’s largest grant to date. The funds include $6.24 million from Good Ventures and $1.46 million from Ben Delo, co-founder of BitMEX and a Giving Pledge signatory, under a co-funding partnership. This grant reflects Open Philanthropy’s continued support for AI safety research, with similar grants awarded to Ought, the Centre for Effective Altruism, and 80,000 Hours during the same period. MIRI notes in its April blog post that this funding strengthens its capacity to pursue long-term research directions.<ref name="donations-portal-open-phil-ai-safety">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy&cause_area_filter=AI+safety |title=Open Philanthropy donations made (filtered to cause areas matching AI risk) |accessdate=July 27, 2017}}</ref><ref name=miris-largest-grant-to-date>{{cite web|url=https://intelligence.org/2020/04/27/miris-largest-grant-to-date/|title=MIRI’s largest grant to date!|last=Bensinger|first=Rob|date=April 27, 2020|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2020 || {{dts|March 2}} || Financial || The Berkeley Existential Risk Initiative (BERI) grants $300,000 to MIRI. This amount is lower than the $600,000 MIRI had projected during its 2019 fundraiser. MIRI incorporates the adjustment into its reserves estimates and publicly acknowledges the grant as part of its financial transparency efforts.<ref name=miris-largest-grant-to-date/>
|-
| 2020 || {{dts|April 14}} || Financial || The Long-Term Future Fund grants $100,000 to MIRI. This grant reflects ongoing support from the Effective Altruism community for MIRI’s AI safety research.<ref name=ltf-april-2020>{{cite web|url=https://app.effectivealtruism.org/funds/far-future/payouts/3waQ7rp3Bfy4Lwr5sZP9TP|title=Fund Payout Report: April 2020 – Long-Term Future Fund Grant Recommendations|date=April 14, 2020|publisher=Effective Altruism Funds}}</ref><ref name=miris-largest-grant-to-date/>
|-
| 2020 || {{dts|May}} || Financial || The Survival and Flourishing Fund (SFF) announces three grants to MIRI as part of its recommendation process for the first half of 2020: $20,000 from SFF directly, $280,000 from Jaan Tallinn, and $40,000 from Jed McCaleb. These grants underscore sustained philanthropic interest in MIRI’s AI safety initiatives.<ref name=sff-2020-h1>{{cite web|url=http://survivalandflourishing.fund/sff-2020-h1|title=SFF-2020-H1 S-process Recommendations Announcement|date=May 29, 2020|publisher=Survival and Flourishing Fund}}</ref>
|-
| 2020 || {{dts|October 9}} || Strategy || Rob Bensinger, MIRI’s research communications manager, announces on Facebook that MIRI is considering relocating its office from Berkeley, California, to another location in the United States or Canada. Potential areas include New Hampshire and Toronto. Bensinger indicates that while specific reasons for the move cannot yet be disclosed, the preemptive announcement is intended to help stakeholders consider this uncertainty in their own planning.<ref>{{cite web|url=https://www.facebook.com/robbensinger/posts/10163981893970447/|title=MIRI, the place where I work, is very seriously considering moving to a different country soon (most likely Canada), or moving to elsewhere in the US.|last=Bensinger|first=Rob|date=October 9, 2020|publisher=Facebook}}</ref>
|-
| 2020 || {{dts|October 22}} || Publication || Scott Garrabrant publishes "Introduction to Cartesian Frames," the first post in a new sequence on LessWrong and the Effective Altruism Forum. Cartesian frames provide a novel conceptual framework for understanding agency, and this sequence contributes to foundational research on AI alignment and decision-making.<ref>{{cite web|url=https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames|title=Introduction to Cartesian Frames|last=Garrabrant|first=Scott|date=October 22, 2020|publisher=LessWrong}}</ref><ref>{{cite web|url=https://intelligence.org/2020/10/23/october-2020-newsletter/|title=October 2020 Newsletter|last=Bensinger|first=Rob|date=October 23, 2020|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2020 || {{dts|November}} (announcement) || Financial || Jaan Tallinn donates $543,000 to MIRI through the Survival and Flourishing Fund's second-half 2020 recommendation process. Tallinn’s contributions have consistently supported MIRI’s AI safety work.<ref>{{cite web|url=https://survivalandflourishing.fund/sff-2020-h2-recommendations|title=SFF-2020-H2 S-process Recommendations Announcement|publisher=Survival and Flourishing Fund}}</ref>
|-
| 2020 || {{dts|November 30}} || Financial || MIRI announces that it will not conduct a formal fundraiser in 2020 but will participate in Giving Tuesday and other matching opportunities. This decision reflects changes in MIRI’s fundraising approach amid uncertainties caused by the COVID-19 pandemic.<ref>{{cite web|url=https://intelligence.org/2020/11/30/november-2020-newsletter/|title=November 2020 Newsletter|last=Bensinger|first=Rob|date=November 30, 2020|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2020 || {{dts|December 21}} || Strategy || Malo Bourgon, MIRI’s Chief Operating Officer, publishes a strategy update. The blog post reflects on the impact of the COVID-19 pandemic, MIRI’s relocation efforts, and its decision to potentially leave the Bay Area. It also discusses slower-than-expected progress in certain research directions initiated in 2017, leading MIRI to consider new strategic approaches. The post highlights public-facing progress in other research areas, affirming MIRI’s continued commitment to foundational AI safety challenges.<ref>{{cite web|url=https://intelligence.org/2020/12/21/2020-updates-and-strategy/|title=2020 Updates and Strategy|last=Bourgon|first=Malo|date=December 21, 2020|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2021 || {{dts|May 8}} || Strategy || Rob Bensinger posts on LessWrong about MIRI’s potential relocation from the Bay Area. The post outlines factors under consideration, such as proximity to AI research hubs and cost of living, and invites community input. Despite the discussion, no decisions are made, leaving MIRI’s location unchanged for the time being.<ref name=miri-move-from-bay-area>{{cite web|url=https://www.lesswrong.com/posts/SgszmZwrDHwG3qurr/miri-location-optimization-and-related-topics-discussion|title=MIRI location optimization (and related topics) discussion|date=May 8, 2021|accessdate=May 31, 2021|last=Bensinger|first=Rob|publisher=LessWrong}}</ref>
|-
| 2021 || {{dts|May 13}} || Financial || MIRI receives two cryptocurrency donations: $15.6 million in MakerDAO (MKR), with restrictions limiting spending to $2.5 million annually, and $4.4 million in Ethereum (ETH) from Vitalik Buterin. The MakerDAO donation provides stability but restricts immediate use, shaping how MIRI plans its future spending.<ref>{{cite web|url=https://intelligence.org/2021/05/13/two-major-donations/|title=Our all-time largest donation, and major crypto support from Vitalik Buterin|author=Colm Ó Riain|date=May 13, 2021|accessdate=May 31, 2021}}</ref>
|-
| 2021 || {{dts|May 23}} || Research || Scott Garrabrant presents "finite factored sets," a new framework for modeling causality. The approach, which uses factored sets rather than graphs, receives attention in niche AI safety circles but does not lead to broader adoption in causal inference research.<ref>{{cite web|url=https://intelligence.org/2021/05/23/finite-factored-sets/|title=Finite Factored Sets|last=Garrabrant|first=Scott|date=May 23, 2021|accessdate=May 31, 2021|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2021 || {{dts|July 1}} || Strategy || MIRI decides against relocating, citing unresolved strategic considerations. This announcement follows earlier discussions about potential moves to areas like Toronto or New Hampshire.<ref name=miri-move-from-bay-area/>
|-
| 2021 || {{dts|November 15}} || Collaboration || MIRI publishes a series of internal discussions, "Late 2021 MIRI Conversations," on topics including AI timelines and alignment research priorities. The discussions draw limited attention outside the AI safety community.<ref>{{cite web|url=https://www.alignmentforum.org/s/n945eovrA3oDueqtq|title=Late 2021 MIRI Conversations|last=Bensinger|first=Rob|date=November 15, 2021|accessdate=December 1, 2021|publisher=Alignment Forum}}</ref>
|-
| 2021 || {{dts|November 29}} || Project || MIRI announces the Visible Thoughts Project, offering bounties for AI-dungeon-style datasets annotated with visible thoughts. Despite financial incentives, the project attracts few contributors and generates limited results.<ref>{{cite web|url=https://www.alignmentforum.org/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement|title=Visible Thoughts Project and Bounty Announcement|last=Soares|first=Nate|date=November 29, 2021|accessdate=December 2, 2021}}</ref>
|-
| 2021 || {{dts|December}} || Financial || MIRI allocates $200,000 for creating annotated datasets and $1,000,000 for scaling the Visible Thoughts Project. However, the project struggles to gain traction, and few submissions are received.<ref>{{cite web|url=https://intelligence.org/2021/12/31/december-2021-newsletter/|title=December 2021 Newsletter|date=December 31, 2021|publisher=Machine Intelligence Research Institute}}</ref>
|-
| 2022 || {{dts|May 30}} || Publication || Eliezer Yudkowsky publishes "Six Dimensions of Operational Adequacy in AGI Projects" on LessWrong. The post sparks some discussion among AI safety researchers but does not establish new standards or practices across broader AGI safety projects.<ref>{{cite web|url=https://www.lesswrong.com/posts/keiYkaeoLHoKK4LYA/six-dimensions-of-operational-adequacy-in-agi-projects|title=Six Dimensions of Operational Adequacy in AGI Projects|date=May 30, 2022|accessdate=September 5, 2024|publisher=LessWrong}}</ref>
Latest revision as of 18:41, 19 December 2024
This is a timeline of Machine Intelligence Research Institute. Machine Intelligence Research Institute (MIRI) is a nonprofit organization that does work related to AI safety.
==Sample questions==
This is an experimental section that provides some sample questions for readers, similar to the reading questions that might accompany a book. Some readers might arrive at this page without a clear idea of what they want to get out of it; a few interesting questions can help them read the page with more purpose and see why the timeline is a useful tool to have.
The following are some interesting questions that can be answered by reading this timeline:
* Which Singularity Summits did MIRI host, and when did they happen? (Sort by the "Event type" column and look at the rows labeled "Conference".)
* What was MIRI up to for the first ten years of its existence (before Luke Muehlhauser joined, before Holden Karnofsky wrote his critique of the organization)? (Scan the years 2000–2009.)
* How has MIRI's explicit mission changed over the years? (Sort by the "Event type" column and look at the rows labeled "Mission".)
The following are some interesting questions that are difficult or impossible to answer just by reading the current version of this timeline, but might be possible to answer using a future version of this timeline:
* When did some big donations to MIRI take place (for instance, the one by Peter Thiel)?
* Has MIRI "done more things" between 2010–2013 or between 2014–2017? (More information)
==Big picture==
{| class="wikitable"
! Time period !! Development summary !! More details
|-
| 1998–2002 || Various publications related to creating a superhuman AI || During this period, Eliezer Yudkowsky publishes a series of foundational documents about designing superhuman AI. Key works include "Coding a Transhuman AI," "The Plan to Singularity," and "Creating Friendly AI." These writings lay the groundwork for the AI alignment problem. Additionally, the Flare Programming Language project is launched to assist in the creation of a superhuman AI, marking the early technical ambitions of the movement.
|-
| 2004–2009 || Tyler Emerson's tenure as executive director || Under Emerson’s leadership, MIRI (then known as the Singularity Institute) experiences growth and increased visibility. Emerson launches the Singularity Summit, a major event that brings together AI researchers, futurists, and thought leaders. MIRI relocates to the San Francisco Bay Area, gaining a strong foothold in the tech industry. During this period, Peter Thiel becomes a key donor and public advocate, lending credibility and significant financial support to the institute.
|-
| 2006–2009 || Modern rationalist community forms || This period sees the formation of the modern rationalist community. Eliezer Yudkowsky contributes by founding the websites Overcoming Bias and LessWrong. These platforms become central hubs for discussions on rationality, AI safety, and existential risks. Yudkowsky's Sequences, a comprehensive collection of essays on rationality, are written and gain a wide following, helping shape the philosophy of many within the AI safety and rationalist movements.
|-
| 2006–2012 || The Singularity Summits take place annually || The Singularity Summit takes place annually during this period, attracting both prominent thinkers and the general public interested in AI, technology, and futurism. In 2012, the organization changes its name from "Singularity Institute for Artificial Intelligence" to the "Machine Intelligence Research Institute" (MIRI) to better reflect its focus on AI research rather than broader technological futurism. MIRI also sells the Singularity Summit to Singularity University, signaling a shift toward a more focused research agenda.
|-
| 2009–2012 || Michael Vassar's tenure as president || Michael Vassar serves as president during this period, continuing to build on the progress made by previous leadership. Vassar focuses on strategic development and positions MIRI within the broader intellectual landscape, further cementing its role as a leader in AI safety research.
|-
| 2011–2015 || Luke Muehlhauser's tenure as executive director || Luke Muehlhauser takes over as executive director and is credited with professionalizing the organization and improving donor relations. Under his leadership, MIRI undergoes significant changes, including a name change, a shift in focus from outreach to research, and stronger connections with the Effective Altruism community. Muehlhauser builds relationships with the AI research community, laying the groundwork for future collaborations and funding opportunities.[1][2][3]
2013–2015 | Change of focus | MIRI shifts its research focus to AI safety and technical math-based research into Friendly AI. During this period, MIRI reduces its public outreach efforts to concentrate on solving fundamental problems in AI safety. It stops hosting major public events like the Singularity Summit and begins focusing almost exclusively on research efforts to address the alignment problem and existential risks from advanced AI systems. |
2015–2023 | Nate Soares's tenure as executive director | Nate Soares, who takes over as executive director in 2015, continues to steer MIRI toward more technical and research-based work on AI safety. Soares expands MIRI’s collaboration with other AI safety organizations and risk researchers. During this time, MIRI receives major funding boosts from cryptocurrency donations and the Open Philanthropy Project in 2017. In 2018, MIRI adopts a "nondisclosed-by-default" policy for much of its research to prevent potential misuse or risks from the dissemination of sensitive AI safety work. |
2023–present | Leadership transitions and response to LLM advancements | In 2023, MIRI undergoes major leadership changes, with Nate Soares transitioning to President, Malo Bourgon becoming CEO, and Alex Vermeer taking on the role of COO. This period coincides with the rapid adoption of large language models (LLMs) like OpenAI's ChatGPT, which transforms public and institutional awareness of AI capabilities. These developments drive MIRI to refine its focus, emphasizing systemic risks and governance in a landscape dominated by increasingly powerful AI systems. The organization prioritizes collaborations with policymakers, researchers, and other AI safety groups to address emerging challenges. |
Highlights by year (2013 onward)
Year | Highlights |
---|---|
2013 | MIRI (formerly SIAI) continues its focus on AI alignment research and community-building. Collaboration with the rationalist and Effective Altruism movements deepens. MIRI establishes itself as a key organization for long-term AI safety, setting the groundwork for its agent foundations research agenda. |
2014 | MIRI publishes several key technical research papers on decision theory and logical uncertainty. The Effective Altruism community increasingly recognizes MIRI's role in addressing existential risks from AI. The Intelligent Agent Foundations Forum is launched to foster collaboration among AI alignment researchers. |
2015 | MIRI co-organizes the Puerto Rico AI Safety Conference with the Future of Life Institute, a pivotal event that brings mainstream attention to AI risks. Nate Soares succeeds Luke Muehlhauser as MIRI’s Executive Director, signaling a new phase for the organization. MIRI holds multiple workshops on logical decision theory, logical uncertainty, and Vingean reflection, solidifying its research agenda. |
2016 | MIRI shifts its research focus toward highly reliable agent design and alignment for advanced AI systems. Scott Garrabrant and collaborators publish the "Logical Induction" paper, a major breakthrough in reasoning under uncertainty. Open Philanthropy awards MIRI a $500,000 grant for general support, acknowledging its role in reducing AI-related risks. |
2017 | Cryptocurrency donations surge, boosting MIRI’s funding, including a significant contribution from Ethereum co-founder Vitalik Buterin. Open Philanthropy grants MIRI $3.75 million, its largest single grant to date. The organization also begins exploring new research directions while maintaining its focus on AI safety and alignment. |
2018 | MIRI announces its nondisclosure-by-default research policy, marking a strategic shift to safeguard alignment progress. Edward Kmett, a prolific Haskell developer, joins MIRI to contribute to its research. The Embedded Agency sequence, exploring naturalized agency concepts, is published, becoming a foundational reference for AI alignment discussions. |
2019 | Open Philanthropy and the Survival and Flourishing Fund provide substantial grants to MIRI, supporting its ongoing AI safety research. MIRI’s research agenda focuses on building robust agents capable of reasoning under logical uncertainty, with continued emphasis on solving core AI alignment challenges. |
2020 | MIRI receives its largest grant to date—$7.7 million over two years—from Open Philanthropy, reinforcing its position as a leader in AI safety research. Internal discussions about relocating MIRI’s operations emerge but conclude with a decision to remain in Berkeley, California. |
2021 | Major cryptocurrency donations, including from Vitalik Buterin, provide critical funding for MIRI’s research. Scott Garrabrant introduces "Finite Factored Sets" as a novel approach to causal inference, generating interest in the alignment research community. |
2022 | Eliezer Yudkowsky publishes "AGI Ruin: A List of Lethalities," renewing discussions on catastrophic risks from AI systems. MIRI refines its internal strategy, pausing public communications to focus on advancing its research agenda amid a rapidly evolving AI landscape. |
2023 | MIRI undergoes leadership changes, with Malo Bourgon appointed CEO and Nate Soares transitioning to President. Eliezer Yudkowsky’s public advocacy for halting advanced AI development garners significant media attention, amplifying calls for stricter AI governance. |
2024 | MIRI launches a new technical governance research team, engaging with international AI policy initiatives and contributing to global discussions on AI safety. The organization announces the termination of the Visible Thoughts Project due to evolving research priorities and limited engagement. |
Full timeline
Year | Month and date | Event type | Details |
---|---|---|---|
1979 | September 11 | | Eliezer Yudkowsky is born.[4] |
1996 | November 18 | | Eliezer Yudkowsky writes the first version of "Staring into the Singularity".[5] |
1998 | | Publication | The initial version of "Coding a Transhuman AI" (CaTAI) is published.[6] |
1999 | March 11 | | The Singularitarian mailing list is launched. The mailing list page notes that although hosted on MIRI's website, the mailing list "should be considered as being controlled by the individual Eliezer Yudkowsky".[7] |
1999 | September 17 | | The Singularitarian mailing list is first informed (by Yudkowsky?) of "The Plan to Singularity" (called "Creating the Singularity" at the time).[8] |
2000–2003 | | | Eliezer Yudkowsky's "coming of age" (including his "naturalistic awakening," in which he realizes that a superintelligence would not necessarily follow human morality) takes place during this period.[9][10][11] |
2000 | January 1 | Publication | "The Plan to Singularity" version 1.0 is written and published by Eliezer Yudkowsky, and posted to the Singularitarian, Extropians, and transhuman mailing lists.[8] |
2000 | January 1 | Publication | "The Singularitarian Principles" version 1.0 by Eliezer Yudkowsky is published.[12] |
2000 | February 6 | | The first email is sent on SL4 ("Shock Level Four"), a mailing list about transhumanism, superintelligent AI, existential risks, and so on.[13][14] |
2000 | May 18 | Publication | "Coding a Transhuman AI" (CaTAI) version 2.0a is "rushed out in time for the Foresight Gathering".[15] |
2000 | July 27 | Mission | Machine Intelligence Research Institute is founded as the Singularity Institute for Artificial Intelligence by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The organization's mission ("organization's primary exempt purpose" on Form 990) at the time is "Create a Friendly, self-improving Artificial Intelligence"; this mission would be in use during 2000–2006 and would change in 2007.[16]:3[17] |
2000 | September 1 | Publication | Large portions of "The Plan to Singularity" are declared obsolete following the formation of the Singularity Institute and a fundamental shift in AI strategy after the publication of "Coding a Transhuman AI" (CaTAI) version 2.[8] This marks a pivotal moment for the Singularity Institute (later MIRI): earlier, looser discussions of the Singularity give way to a more precise, strategic approach to developing safe, self-improving AI, and the declaration reflects how quickly new insights are reshaping the institute's path. |
2000 | September 7 | Publication | Version 2.2.0 of "Coding a Transhuman AI" (CaTAI) is published.[15] CaTAI is a detailed technical document outlining the architecture for creating a transhuman-level artificial intelligence. It covers key ideas on how an AI can be designed to improve itself safely without deviating from its original, human-aligned goals. This text serves as a core theoretical foundation for MIRI's mission, advocating for AI development grounded in ethical and rational decision-making frameworks. |
2000 | September 14 | Project | The first Wayback Machine snapshot of MIRI's website is captured, using the singinst.org domain name.[18] The launch of the website signals MIRI’s formal entry into the public discourse on AI safety and existential risks. It becomes a hub for sharing research, ideas, and resources aimed at academics, technologists, and the broader community interested in the ethical implications of advanced AI. |
2001 | April 8 | Project | MIRI begins accepting donations after receiving tax-exempt status.[19] Receiving tax-exempt status is a critical milestone for MIRI, allowing it to officially solicit and receive donations from the public. This status helps secure the financial support necessary to expand their research efforts and build a formal research team. |
2001 | April 18 | Publication | Version 0.9 of "Creating Friendly AI" is released.[20] This early draft outlines the first comprehensive framework for developing "Friendly AI" — an AI system designed to operate under constraints that ensure its goals remain aligned with human interests. It is an important early step in formalizing the institute’s approach to safe AI development. |
2001 | June 14 | Publication | The "SIAI Guidelines on Friendly AI" is published.[21] These guidelines serve as a set of ethical and technical principles meant to guide AI researchers in designing systems that prioritize human well-being. The guidelines represent MIRI’s effort to communicate the necessity of carefully managing AI's development and potential risks. |
2001 | June 15 | Publication | Version 1.0 of "Creating Friendly AI" is published.[22] This version is the first full publication of MIRI’s flagship research document. It provides a detailed analysis of how to design AI systems that remain aligned with human values, even as they gain the ability to self-improve. It is considered one of the key early texts in the AI safety field. |
2001 | July 23 | Project | MIRI formally launches the development of the Flare programming language under Dmitriy Myshkin.[23] The Flare project is conceived as a way to build a programming language optimized for AI development and safety. Though it is eventually canceled, it shows MIRI’s early commitment to exploring technical approaches to developing safer AI systems. |
2001 | December 21 | Domain | MIRI secures the flare.org domain name for its Flare programming language project.[23] This acquisition signifies MIRI's focus on developing tools that assist in the creation of AI, though Flare itself is eventually shelved due to technical challenges and shifting priorities. |
2002 | March 8 | AI box | Eliezer Yudkowsky conducts the first AI box experiment, with Nathan Russell as gatekeeper. The AI is released.[24] The experiment tests whether a hypothetical AI can convince a human "gatekeeper" to let it out of a confined environment, highlighting the persuasive abilities that a sufficiently advanced AI might possess even when theoretically controlled. |
2002 | April 7 | Publication | A draft of "Levels of Organization in General Intelligence" is announced on SL4.[25][26] This paper explores theoretical foundations for creating AI that mimics general human intelligence, contributing to the field’s understanding of how to structure and organize machine learning systems. |
2002 | July 4–5 | AI box | The second AI box experiment by Eliezer Yudkowsky, against David McFadzean as gatekeeper, takes place. The AI is released, showcasing the potential persuasive power of advanced AI in overcoming human-imposed restrictions, even in a controlled experimental setting.[27] |
2002 | September 6 | Staff | Christian Rovner is appointed as MIRI's volunteer coordinator, formalizing efforts to engage volunteers in advancing the institute's mission of Friendly AI development.[23] |
2002 | October 1 | | MIRI "releases a major new site upgrade" with various new pages, reflecting its growing presence and commitment to outreach and transparency in its research efforts.[23] |
2002 | October 7 | Project | MIRI announces the creation of its volunteers mailing list to streamline communication and foster collaboration among its expanding network of supporters.[23] |
2003 | | Project | The Flare programming language project is officially canceled, marking a strategic shift in MIRI's focus to other priorities in the pursuit of advanced AI research.[28] |
2003 | | Publication | Eliezer Yudkowsky's "An Intuitive Explanation of Bayesian Reasoning" is published. This accessible explanation of Bayesian statistics becomes a foundational resource for those interested in rational decision-making and probability.[29] |
2003 | April 30 | | Eliezer Yudkowsky posts an update about MIRI to the SL4 mailing list. The update highlights the need for an executive director and bright programmers, and mentions plans for a rationality-focused book to attract talent.[30] |
2004 | March 4–11 | Staff | MIRI announces Tyler Emerson as executive director. Emerson's expertise in nonprofit management and leadership aims to strengthen the organization’s capacity to achieve its mission.[31][32] |
2004 | April 7 | Staff | Michael Anissimov is announced as MIRI's advocacy director. A dedicated volunteer and influential voice in the transhumanist community, Anissimov is tasked with leading advocacy initiatives.[33] |
2004 | April 14 | Outside review | The first version of the Wikipedia page for MIRI is created. This marks a step in MIRI's broader public visibility and transparency.[34] |
2004 | May | Publication | Eliezer Yudkowsky's paper "Coherent Extrapolated Volition" is published around this time, outlining a vision for aligning AI development with human values. Originally called "Collective Volition," it is later announced on the MIRI website on August 16.[35][31] |
2004 | August 5–8 | Conference | TransVision 2004 takes place. TransVision, the World Transhumanist Association's annual event, sees MIRI participating as a sponsor, reflecting its growing influence in the transhumanist and AI communities.[31] |
2005 | January 4 | Publication | "A Technical Explanation of Technical Explanation" is published.[36] Eliezer Yudkowsky explores the nature of technical explanations, emphasizing how we can communicate complex ideas with clarity and rigor. This work becomes foundational for those studying rationality and AI, offering insights into how we convey and understand deep technical topics. It plays an important role in grounding the theoretical framework behind AI safety research. MIRI announces its release, underlining its importance to their broader research agenda.[37] |
2005 | | Conference | MIRI gives presentations on AI and existential risks at Stanford University, the Immortality Institute’s Life Extension Conference, and the Terasem Foundation.[38] These presentations help MIRI broaden the conversation about the risks associated with AI development. By engaging academic audiences at Stanford and futurist communities at the Life Extension Conference, MIRI establishes itself as a critical voice in discussions about how AI can impact humanity’s future. These events also allow MIRI to connect its mission with broader existential concerns, including life extension and the future of human intelligence. |
2005 | | Publication | Eliezer Yudkowsky contributes chapters to Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković.[38] Although the book is officially published in 2008, Yudkowsky’s early contributions focus on the potential dangers of advanced AI and global catastrophic risks. His chapters play a vital role in shaping the emerging field of AI safety, providing critical perspectives on how advanced AI could shape or threaten humanity’s future. This collaboration with prominent scholars like Nick Bostrom helps solidify MIRI's reputation within the existential risk community. |
2005 | February 2 | Relocation | MIRI relocates from the Atlanta metropolitan area in Georgia to the Bay Area of California.[31] This move is strategic, placing MIRI at the heart of Silicon Valley, where technological advancements are rapidly accelerating. By moving to the Bay Area, MIRI positions itself closer to influential tech companies and research institutions, allowing it to form stronger partnerships and participate more actively in the conversations around AI development and safety. The relocation also signals MIRI’s commitment to influencing the future of AI in a global technology hub. |
2005 | July 22–24 | Conference | MIRI sponsors TransVision 2005 in Caracas, Venezuela.[31] TransVision is one of the world’s leading transhumanist conferences, focusing on how emerging technologies, including AI, can impact humanity’s evolution. MIRI’s sponsorship of this event highlights its dedication to transhumanist goals, such as safe AI and human enhancement. The sponsorship also enables MIRI to reach new international audiences, solidifying its role as a global leader in the field of AI safety and existential risk. |
2005 | August 21 | AI box | Eliezer Yudkowsky conducts the third AI box experiment, with Carl Shulman as the gatekeeper.[39] This experiment explores the theoretical dangers of an advanced AI persuading a human to release it from confinement. Yudkowsky’s successful manipulation as the AI in this experiment further demonstrates the potential risks posed by highly intelligent systems. The AI box experiment serves as a thought-provoking exercise in AI safety, illustrating the need for stringent safeguards in future AI development. |
2005–2006 | December 20, 2005 – February 19, 2006 | Financial | The 2006 $100,000 Singularity Challenge, led by Peter Thiel, successfully matches donations up to $100,000.[31][40] Peter Thiel’s donation marks the beginning of his significant financial support for MIRI, which continues for many years. The Singularity Challenge helps MIRI raise critical funds for its research, enabling the organization to expand its efforts in AI safety and existential risk mitigation. |
2006 | January | Publication | "Twelve Virtues of Rationality" is published.[41] This essay, written by Eliezer Yudkowsky, lays out twelve core principles or virtues meant to guide rational thinkers. It highlights values like curiosity, empiricism, and precision in thinking, which Yudkowsky frames as essential for clear, logical analysis. The publication is relatively short and structured as a set of concise principles, making it an easily digestible guide for those interested in improving their rational thinking skills. |
2006 | February 13 | Staff | Peter Thiel joins MIRI’s Board of Advisors.[31] Peter Thiel, the tech entrepreneur and venture capitalist, becomes a part of MIRI’s leadership by joining its Board of Advisors. Thiel’s addition to the board follows his growing interest in existential risks and advanced AI, which aligns with MIRI’s mission. His role primarily involves advising MIRI on its strategic direction and helping the organization secure long-term financial support for its AI safety research. |
2006 | May 13 | Conference | The first Singularity Summit takes place at Stanford University.[42][43][44] The Singularity Summit is held as a one-day event at Stanford University and gathers leading scientists, technologists, and thinkers to discuss the rapid pace of technological development and the potential for artificial intelligence to surpass human intelligence. The agenda includes a series of talks and panel discussions, with topics ranging from AI safety to the philosophical implications of superintelligent machines. Attendees include a mix of academics, entrepreneurs, and futurists, marking it as a landmark event for those interested in the technological singularity. |
2006 | November | Project | Robin Hanson launches the blog Overcoming Bias.[45] This project is a personal blog started by Robin Hanson, focusing on cognitive biases and rationality. It is a platform for Hanson and guest contributors to write about topics such as human decision-making, bias in everyday life, and how individuals can improve their thinking. Overcoming Bias quickly gains a readership among academics, technologists, and rationality enthusiasts. |
2007 | May | Mission | MIRI updates its mission statement to focus on "developing safe, stable, and self-modifying Artificial General Intelligence." This reflects the organization’s shift in focus to ensuring that future AI systems remain aligned with human values.[46] |
2007 | July | Project | MIRI launches its outreach blog. The blog serves to engage the public in discussions around AI safety and rationality. It provides a platform for MIRI staff and guest writers to share research updates, existential risk concerns, and general AI news.[38] |
2007 | August | Project | MIRI begins its Interview Series, publishing interviews with leading figures in AI, cognitive science, and existential risk. These interviews offer insights into AGI safety and foster connections within the academic community.[38] |
2007 | September | Staff | Ben Goertzel becomes Director of Research at MIRI, bringing formal leadership to MIRI’s research agenda. He focuses on advancing research in AGI safety, leveraging his expertise in cognitive architectures.[47] |
2007 | May 16 | Project | MIRI publishes its first introductory video on YouTube.[48] The video is created as an introduction to MIRI’s mission and the field of AI safety. It explains the basic concepts of AI risk and outlines MIRI’s role in researching the challenges posed by advanced AI systems. The video is designed to be accessible to a general audience, helping MIRI reach people who might not be familiar with the nuances of AI development. |
2007 | July 10 | Publication | The oldest post on MIRI’s blog, "The Power of Intelligence", is published by Eliezer Yudkowsky.[49] This blog post explores the fundamental concept of intelligence and how it shapes the world. It discusses the role of intelligence in achieving goals and solving problems, emphasizing its potential impact on the future. The post serves as an introduction to Yudkowsky’s broader work on AI safety and rationality, marking the start of MIRI’s ongoing blog efforts. |
2007 | September 8–9 | Conference | The Singularity Summit 2007 is held in the San Francisco Bay Area.[42][50] The second Singularity Summit takes place over two days and features presentations from leading thinkers in AI and technology. Topics include the future of artificial intelligence, the ethics of AI development, and the technological singularity. The event builds on the success of the previous year’s summit, expanding in both size and scope, and attracting a broader audience from academia and the tech industry. |
2008 | January | Publication | "The Simple Truth" is published. This short, fictional story by Eliezer Yudkowsky explains the basic concepts of truth and rationality, illustrating how humans can understand objective reality through evidence and reasoning. It serves as an introduction to epistemology, making complex ideas about knowledge more accessible to a general audience.[51] |
2008 | March | Project | MIRI expands its Interview Series, broadening its scope to include a wider range of experts in AI safety, cognitive science, and philosophy of technology. This expansion provides a more comprehensive view of the diverse research efforts and opinions shaping AGI and existential risk discussions.[38] |
2008 | June | Project | MIRI launches its summer intern program, engaging young researchers and students in AI safety research. The program allows participants to work with MIRI’s research staff, contributing to ongoing projects and gaining hands-on experience in AGI research. It becomes a key method for developing talent and integrating fresh perspectives.[38] |
2008 | July | Project | OpenCog is founded with support from MIRI and Novamente LLC, directed by Ben Goertzel. OpenCog receives additional funding from Google Summer of Code, allowing 11 interns to work on the project in the summer of 2008. The initiative focuses on cognitive architectures and remains central to Goertzel's research efforts at MIRI until 2010.[52][53] |
2008 | October 25 | Conference | The Singularity Summit 2008 takes place in San Jose.[54][55] |
2008 | November–December | Outside review | The AI-Foom debate between Robin Hanson and Eliezer Yudkowsky takes place. The blog posts from the debate would later be turned into an ebook by MIRI.[56][57] |
2009 | | Project | MIRI launches the Visiting Fellows Program. This initiative allows individuals from various backgrounds to spend several weeks at MIRI, engaging in collaborative research and contributing to projects related to Friendly AI and rationality. The program becomes a key method of recruitment for future MIRI researchers.[38] |
2009 (early) | | Staff | Tyler Emerson, who served as executive director of MIRI, steps down early in the year. His departure marks a leadership transition that eventually sees Michael Vassar take on a more prominent role within the organization.[58] |
2009 (early) | | Staff | Michael Anissimov is hired as Media Director. Having served as MIRI’s Advocacy Director in previous years, it is unclear whether he briefly left the organization or simply transitioned into a new role.[58] |
2009 | February | Project | Eliezer Yudkowsky establishes LessWrong, a community blog dedicated to discussing topics related to rationality, decision theory, and the development of Friendly AI. The site serves as a spiritual successor to his posts on Overcoming Bias and quickly becomes a central hub for Singularity and Effective Altruism communities. It is described as instrumental in MIRI's recruitment efforts, with many participants of MIRI's Visiting Fellows Program having first encountered the organization through LessWrong.[59][58] |
2009 | February 16 | Staff | Michael Vassar announces his role as President of MIRI in a blog post titled "Introducing Myself." Vassar, who was a key figure in the organization’s outreach efforts, remains president until 2012, focusing on strategic vision and external partnerships.[60] |
2009 | April | Publication | Eliezer Yudkowsky completes the Sequences, a series of blog posts on LessWrong that cover topics ranging from epistemology and rationality to AI safety. These posts are later compiled into the book Rationality: From AI to Zombies.[61] |
2009 | August 13 | Social media | MIRI establishes its official Twitter account under the handle @singinst. This move marks the beginning of MIRI's broader efforts to engage with the public through social media channels.[62] |
2009 | September | Staff | Amy Willey Labenz begins an internship at MIRI, focusing on administrative and operational tasks. Her attention to detail in financial oversight leads her, in November, to discover a significant case of embezzlement, identifying discrepancies that had gone unnoticed; her findings prompt an internal investigation, and her role in resolving the issue is seen as critical in protecting MIRI's financial stability. By the end of the year, MIRI promotes her to Chief Compliance Officer, tasked with ensuring the organization's financial integrity and compliance with legal standards.[63] |
2009 | October | Project | MIRI launches The Uncertain Future, a website that allows users to build mathematically rigorous models to predict the impact of future technologies. The project began development in 2008 and is seen as an innovative tool for those interested in exploring the potential trajectories of technological progress.[64][65] |
2009 | October 3–4 | Conference | The Singularity Summit 2009 takes place in New York, bringing together leading thinkers in technology, AI, and futurism. This annual event, hosted by MIRI, serves as a major platform for discussions about the Singularity and the implications of rapidly advancing technologies.[66][67] |
2009 | November | Financial | An embezzlement scandal involving a former contractor is uncovered, resulting in a reported theft of $118,803. The discovery leads to significant internal changes within MIRI and the eventual recovery of some funds through legal action.[68][69] |
2009 | December | Staff | Following the embezzlement case, Amy Willey Labenz, who uncovered the theft during her internship, is promoted to Chief Compliance Officer. Her role focuses on strengthening MIRI’s financial and operational compliance.[58][63] |
2009 | December 11 | Influence | The third edition of Artificial Intelligence: A Modern Approach, a seminal textbook by Stuart J. Russell and Peter Norvig, is published. In this edition, Friendly AI and Eliezer Yudkowsky are mentioned for the first time, marking an important moment for MIRI's ideas within mainstream AI literature. |
2009 | December 12 | Project | MIRI announces that The Uncertain Future has reached beta status. The tool, which allows users to explore scenarios of technological progress, is unveiled on the MIRI blog.[70] |
2010 | Mission | The organization’s mission is updated to: "To develop the theory and particulars of safe self-improving Artificial Intelligence; to support novel research and foster the creation of a research community focused on safe Artificial General Intelligence; and to otherwise improve the probability of humanity surviving future technological advances." This mission statement is also used in 2011 and 2012.[71] | |
2010 | February 28 | Publication | Eliezer Yudkowsky publishes the first chapter of Harry Potter and the Methods of Rationality (HPMoR), a fan fiction exploring rationalist themes. The story is published serially, concluding on March 14, 2015. Later surveys identify HPMoR as an initial point of contact with MIRI for at least four major donors in 2013.[72][73][74] |
2010 | April | Staff | Amy Willey Labenz is promoted to Chief Operating Officer (COO) of MIRI. She also serves as the Executive Producer of the Singularity Summits from 2010 to 2012.[63] |
2010 | June 17 | Popular culture | Greg Egan’s science fiction novel Zendegi is published. The book includes characters and concepts inspired by the rationalist and AI safety communities, such as the Friendly AI project, the Overcoming Bias blog, and LessWrong.[75][76][77] |
2010 | August 14–15 | Conference | The Singularity Summit 2010 is held in San Francisco. The event features speakers from AI research, technology, and futurism communities.[78] |
2010 | December 21 | Social media | MIRI posts to its Facebook page for the first time. This marks the organization’s entry into social media platforms.[79][80] |
2010–2011 | December 21, 2010 – January 20, 2011 | Financial | The Tallinn–Evans $125,000 Singularity Challenge fundraiser takes place. Donations to MIRI are matched dollar-for-dollar by Edwin Evans and Jaan Tallinn, up to $125,000.[81][82] |
2011 | February 4 | Project | The Uncertain Future, a web-based tool for estimating probabilities of various future scenarios involving AI and other technologies, is made open-source. The project is aimed at fostering public understanding of the uncertainties surrounding future technological advancements.[65] |
2011 | February | Outside review | Holden Karnofsky, co-founder of GiveWell, holds a discussion with MIRI staff to assess the organization’s strategy, priorities, and effectiveness. Key topics include MIRI's research focus, its ability to produce actionable results, and its approach to donor communication. Karnofsky critiques speculative initiatives like the "Persistent Problems Group" (PPG), which aimed to assemble expert panels on misunderstood topics, questioning its relevance to MIRI’s stated goal of addressing existential risks from AI. The conversation transcript, released on April 30, prompts broader discussions in the rationalist and philanthropic communities about MIRI’s focus and alignment with its mission.[83][84] |
2011 | April | Staff | Luke Muehlhauser begins working as an intern at MIRI. In reflections shared later, Muehlhauser notes operational and organizational inefficiencies at the time, which shape his vision for improving MIRI’s structure when he becomes Executive Director.[85] |
2011 | May 10 – June 24 | Outside review | Holden Karnofsky and Jaan Tallinn correspond about MIRI’s activities, with Dario Amodei participating in an initial phone conversation. Their discussion touches on MIRI’s research goals and the broader implications of AI safety. The correspondence is shared on the GiveWell mailing list on July 18, furthering public engagement with AI safety issues.[86] |
2011 | June 24 | Domain | A Wayback Machine snapshot shows that singularity.org has become a GoDaddy.com placeholder. Previously, the domain appears to have hosted an unrelated blog.[87][88] |
2011 | July 18 – October 20 | Domain | During this period, the singularity.org domain redirects to singinst.org/singularityfaq, which hosts FAQs about the singularity and MIRI’s approach to AI safety.[88] |
2011 | September 6 | Domain | The first Wayback Machine capture of singularityvolunteers.org is made. This site is used to coordinate volunteer efforts for MIRI’s projects and events.[89] |
2011 | October 15–16 | Conference | The Singularity Summit 2011 is held in New York. The event features talks from researchers and thinkers on AI, futurism, and technology, attracting attention from both academic and public audiences.[90] |
2011 | October 17 | Social media | The Singularity Summit YouTube account, "SingularitySummits," is created to share recorded talks and materials from the summit and promote public engagement with AI and technology-related topics.[91] |
2011 | November | Staff | Luke Muehlhauser is appointed Executive Director of MIRI. Muehlhauser’s tenure is marked by efforts to professionalize the organization, improve donor relations, and refocus on foundational research in AI safety.[92] |
2011 | December 12 | Project | Luke Muehlhauser announces the launch of Friendly-AI.com, a website dedicated to explaining the concept of Friendly AI. Friendly AI refers to Artificial General Intelligence (AGI) systems that are designed to align with human values and operate safely, avoiding harmful unintended consequences. The website serves as an introductory resource for the public and AI researchers.[93] |
2012 | Staff | Michael Vassar steps down from his role at MIRI to co-found MetaMed, a personalized medical advisory company. With Skype co-creator Jaan Tallinn and $500,000 in funding from Peter Thiel, MetaMed seeks to revolutionize healthcare by applying rational decision-making and advanced data analysis to personalized medical care. The company targets wealthy clients, offering custom literature reviews and health studies tailored to individual needs. Supporters see the venture as a demonstration of rationality’s potential in complex domains like medicine, though its exclusivity raises questions about broader accessibility and impact.[94] | |
2011–2012 | Opinion | In a two-part Q&A series, Luke Muehlhauser, MIRI’s Executive Director, shares insights into the organization’s evolving focus. He emphasizes a transition away from broader singularity advocacy toward a concentration on AI alignment research, arguing that MIRI’s most important contribution lies in developing foundational theories to guide safe AI development. Muehlhauser discusses challenges such as the speculative nature of the field, the difficulty in attracting top researchers, and the absence of empirical milestones. The series provides a transparent view of MIRI’s priorities and ambitions, helping to build trust among donors and the research community.[95][96] | |
2012 | Domain | Between February 4 and May 4, 2012, MIRI moves its primary domain from singinst.org to singularity.org, consolidating its web presence under a single domain. The move is short-lived: the organization rebrands and adopts intelligence.org as its primary domain in early 2013.[97] |
2012 | May 8 | Progress Report | MIRI publishes its April 2012 progress report, announcing the formal establishment of the Center for Applied Rationality (CFAR). Previously known as the "Rationality Group," CFAR focuses on creating training programs to enhance reasoning and decision-making skills. This rebranding highlights CFAR’s role in institutionalizing rationality techniques, which later become integral to the Effective Altruism movement. CFAR’s mission aligns with MIRI’s overarching goal of fostering better decision-making in high-stakes domains like AI safety.[98] |
2012 | May 11 | Outside Review | Holden Karnofsky, co-founder of GiveWell and later Open Philanthropy, publishes "Thoughts on the Singularity Institute (SI)" on LessWrong. Karnofsky critiques MIRI’s speculative approach, questioning its ability to deliver actionable insights and measurable outcomes. He highlights concerns about the lack of empirical grounding in AI safety research and its reliance on theoretical models. The review is influential within the Effective Altruism and existential risk communities, prompting MIRI to reflect on its research priorities and improve its communication with donors.[99] |
2012 | August 6 | Newsletter | MIRI begins publishing monthly newsletters, starting with the July 2012 edition. These newsletters provide updates on research progress, organizational changes, and events, offering supporters greater transparency. The regular cadence helps MIRI engage more effectively with its community of donors and researchers.[100] |
2012 | October 13–14 | Conference | The Singularity Summit 2012 is held in San Francisco. Speakers include Eliezer Yudkowsky, Ray Kurzweil, and other leading voices in AI and futurism. Topics range from AI safety and neuroscience to human enhancement, attracting a broad audience from academia, technology, and the public. The event serves as a platform for discussing the future impact of AI and technological advancements on society.[101] |
2012 | November 11–18 | Workshop | MIRI organizes the 1st Workshop on Logic, Probability, and Reflection. This event gathers researchers to explore foundational challenges in AI alignment, focusing on how AI systems can reason under uncertainty and make decisions reliably. The workshop’s outcomes help shape MIRI’s approach to developing mathematically sound frameworks for AI safety.[102] |
2012 | December 6 | Singularity Summit Acquisition | Singularity University announces its acquisition of the Singularity Summit from MIRI, marking the end of MIRI’s direct involvement in the event. Some commentators, including Joshua Fox, praise the decision as a way for MIRI to focus exclusively on AI safety research. However, others express concerns that the summit’s emphasis on fostering long-term thinking may be diluted under Singularity University’s broader programming. The summit’s original themes of technological foresight and existential risks are eventually inherited by events like EA Global.[103][104] |
2013 | Mission | MIRI revises its mission statement to reflect a sharper focus on AI safety: "To ensure that the creation of smarter-than-human intelligence has a positive impact. Thus, the charitable purpose of the organization is to: a) perform research relevant to ensuring that smarter-than-human intelligence has a positive impact; b) raise awareness of this important issue; c) advise researchers, leaders, and laypeople around the world; d) as necessary, implement a smarter-than-human intelligence with humane, stable goals." The new wording underscores MIRI's commitment to both technical research and broader engagement with key stakeholders to address global risks associated with advanced AI.[105] |
2013–2014 | Project | MIRI conducts numerous expert interviews on AI safety, strategy, and existential risks, recording 75 of its 80 total listed conversations during this time (19 in 2013 and 56 in 2014). These discussions involve leading thinkers in fields like AI alignment, decision theory, and risk mitigation. While valuable, the initiative is deprioritized in mid-2014 after diminishing returns, as noted by executive director Luke Muehlhauser in MIRI’s 2014 review. Nonetheless, these conversations shape the direction of AI safety dialogue during this period.[106][107] |
2013 | January | Staff | Michael Anissimov departs MIRI after the acquisition of the Singularity Summit by Singularity University and a major strategic shift at MIRI. Anissimov had played a key role in public outreach and advocacy for the singularity. Following his departure, MIRI pivots away from broader public engagement and focuses more heavily on technical research in AI alignment and decision theory. Despite leaving MIRI, Anissimov remains an active supporter and volunteer.[108] |
2013 | January 30 | Rebranding | MIRI officially renames itself from the Singularity Institute for Artificial Intelligence (SIAI) to the Machine Intelligence Research Institute (MIRI). The change signals MIRI’s narrowed focus on machine intelligence and technical AI safety challenges, distancing itself from broader discussions about the singularity. This rebranding clarifies the organization’s mission to external stakeholders and aligns with its research-driven goals.[109] |
2013 | February 1 | Publication | MIRI publishes "Facing the Intelligence Explosion," a book by executive director Luke Muehlhauser. This work introduces readers to the potential risks posed by advanced AI systems and emphasizes the importance of research into AI alignment and safety. The book frames the discussion around the existential risks of misaligned AI and MIRI’s role in addressing these challenges.[110] |
2013 | February 11 – February 28 | Domain | MIRI launches its new website, intelligence.org, during this period. The redesigned website features a professional layout emphasizing machine intelligence research and AI safety. Executive director Luke Muehlhauser announces the change in a blog post, positioning the new site as a cornerstone of MIRI’s updated branding and communication strategy.[111][112] |
2013 | April 3 | Publication | Springer publishes "Singularity Hypotheses: A Scientific and Philosophical Assessment," a collection of essays examining the potential trajectories of AI and the singularity. MIRI researchers and associates contribute to this volume, which explores the societal implications and challenges of smarter-than-human intelligence. The book is positioned as a resource for academics and policymakers seeking to understand the scientific and philosophical issues surrounding advanced AI.[113][114] |
2013 | April 3–24 | Workshop | MIRI hosts the 2nd Workshop on Logic, Probability, and Reflection, advancing research on decision theory, formal reasoning, and AI alignment. The workshop builds on MIRI’s foundational research strategy, fostering collaboration among experts to address critical challenges in creating safe AI systems.[102] |
2013 | April 13 | Strategy | MIRI publishes a strategic update outlining its increased emphasis on Friendly AI mathematics and research while scaling back public outreach. This shift aims to concentrate resources on technical research areas with the highest potential to influence AI safety and alignment, reflecting a more focused approach to existential risk reduction.[115] |
2014 | January (approximate) | Financial | Jed McCaleb, creator of Ripple and founder of Mt. Gox, donates $500,000 worth of XRP cryptocurrency to MIRI. This significant financial contribution supports AI safety research and highlights the growing crossover between the cryptocurrency community and existential risk initiatives. McCaleb's donation underscores the recognition of AI safety as a crucial issue by technologists in fields outside traditional AI research.[116] |
2014 | January 16 | Outside Review | MIRI staff meet with Holden Karnofsky, co-founder of GiveWell, for a strategic discussion on existential risks and AI safety. The meeting focuses on MIRI’s approach to managing long-term risks and explores potential collaboration opportunities between MIRI and other organizations in the Effective Altruism (EA) and philanthropic communities. This conversation is part of MIRI’s broader effort to build alliances and align its strategy with the priorities of influential stakeholders.[117] |
2014 | February 1 | Publication | MIRI publishes "Smarter Than Us: The Rise of Machine Intelligence" by Stuart Armstrong. The book introduces key concepts in AI alignment and explores the challenges posed by advanced AI systems. Written for a general audience, it serves as an accessible entry point into the field of AI safety and aligns with MIRI’s mission to raise awareness about existential risks.[118] |
2014 | March–May | Influence | The Future of Life Institute (FLI) is co-founded by Max Tegmark, Jaan Tallinn, Meia Chita-Tegmark, and Anthony Aguirre, with MIRI playing a foundational role in its creation. Tallinn, one of MIRI’s key supporters and an FLI co-founder, cites MIRI as instrumental in shaping his views on AI safety. FLI focuses on existential risks, particularly those associated with advanced AI, expanding the global conversation on AI alignment and societal impact.[119] |
2014 | March 12–13 | Staff | MIRI hires several new researchers, including Nate Soares, who would later become its executive director in 2015. To celebrate this organizational growth, MIRI hosts an Expansion Party, highlighting its increased capacity for tackling ambitious AI safety projects. The event strengthens connections with local supporters and researchers.[120][121][122] |
2014 | May 3–11 | Workshop | MIRI hosts its 7th Workshop on Logic, Probability, and Reflection. Participants collaborate on problems related to decision theory, reasoning under uncertainty, and formal AI alignment techniques. These workshops play a key role in advancing the theoretical foundations of safe AI development.[102] |
2014 | July–September | Influence | Nick Bostrom publishes "Superintelligence: Paths, Dangers, Strategies," a landmark work on AI alignment and existential risk. Bostrom, a MIRI advisor, builds on concepts developed by MIRI researchers, significantly contributing to global discussions on managing advanced AI. The book solidifies AI safety as a crucial area of focus for researchers and policymakers.[123] |
2014 | July 4 | Project | AI Impacts, an initiative analyzing societal implications of AI development, emerges with Katja Grace playing a leading role. The project focuses on AI timelines, economic effects, and strategic considerations, contributing to the broader AI safety community’s understanding of future challenges.[124] |
2014 | August | Project | The AI Impacts website officially launches. Led by Paul Christiano and Katja Grace, the platform provides data-driven analyses and forecasts about AI development, serving as a resource for researchers and policymakers concerned with the long-term societal impacts of AI.[125] |
2014 | November 4 | Project | The Intelligent Agent Foundations Forum launches under MIRI’s management. This forum provides a collaborative space for researchers to discuss foundational problems in decision theory and agent design, central to developing aligned AI systems. It attracts contributions from academics and independent researchers worldwide.[126] |
2015 | January | Project | AI Impacts rolls out a redesigned website. The revamped site aims to make research on AI risks, timelines, and governance issues more accessible to the public. Led by MIRI, this initiative reflects a broader effort to improve public engagement and communication about the long-term societal implications of artificial intelligence.[127] |
2015 | January 2–5 | Conference | The Future of AI: Opportunities and Challenges, an AI safety conference organized by the Future of Life Institute, takes place in Puerto Rico. Attendees include influential figures like Luke Muehlhauser, Eliezer Yudkowsky, and Nate Soares from MIRI, alongside leading AI researchers and academics. The event galvanizes interest in AI safety, with Nate Soares describing it as a pivotal moment for academia’s recognition of AI existential risks. Discussions center on the potential for unaligned AI to pose catastrophic threats to humanity.[128][129] |
2015 | March 11 | Influence | Rationality: From AI to Zombies, a compilation of Eliezer Yudkowsky's influential writings on rational thinking and decision-making, is published. Drawing from "The Sequences" on LessWrong, the book explores topics from cognitive biases to AI safety, positioning itself as a foundational text within the Effective Altruism and rationality movements. It serves as both an introduction to AI alignment challenges and a guide for improving human reasoning.[61][130] |
2015 | May 4–6 | Workshop | MIRI hosts the 1st Introductory Workshop on Logical Decision Theory. This workshop educates researchers on advanced decision theories relevant to AI alignment, tackling problems such as Newcomb's paradox and exploring how AI agents can predict and influence outcomes in logical environments.[102] |
2015 | May 6 | Staff | Luke Muehlhauser resigns as MIRI’s executive director to join the Open Philanthropy Project as a research analyst. In his farewell post, he expresses confidence in Nate Soares, a MIRI researcher known for his work on decision theory and AI alignment, as his successor. Soares takes over leadership with the goal of advancing MIRI’s technical research agenda.[131] |
2015 | May 13–19 | Conference | MIRI collaborates with the Centre for the Study of Existential Risk (CSER) to co-organize the Self-Prediction in Decision Theory and Artificial Intelligence Conference. This event focuses on how AI systems can predict and incorporate their own actions into decision-making, a critical aspect of ensuring alignment and safety in advanced AI.[132] |
2015 | May 29–31 | Workshop | The 1st Introductory Workshop on Logical Uncertainty explores how AI systems can reason under uncertainty in formal, logic-based frameworks. Researchers tackle foundational challenges in ensuring AI reliability in dynamic and unpredictable environments.[102] |
2015 | June 3–4 | Staff | Nate Soares officially begins as MIRI’s executive director, succeeding Luke Muehlhauser. Soares emphasizes MIRI’s mission to address core AI alignment challenges through focused technical research and collaboration with the broader AI safety community.[133] |
2015 | June 11 | AMA | Nate Soares hosts an "ask me anything" (AMA) session on the Effective Altruism Forum, engaging with the community on AI safety, MIRI’s research agenda, and his vision for the organization’s future under his leadership.[134] |
2015 | June 12–14 | Workshop | MIRI hosts the 2nd Introductory Workshop on Logical Decision Theory. The workshop builds on the previous event, offering deeper insights into decision theories critical for AI alignment, particularly in uncertain and strategic environments.[102] |
2015 | June 26–28 | Workshop | The 1st Introductory Workshop on Vingean Reflection focuses on developing frameworks for AI systems to reflect on and improve their decision-making procedures without compromising safety or alignment. Researchers address challenges in creating systems that can safely modify their own decision algorithms.[102] |
2015 | July 7–26 | Project | MIRI collaborates with the Center for Applied Rationality (CFAR) to host the MIRI Summer Fellows Program. This initiative aims to cultivate new talent for AI safety research and is described as "relatively successful" in recruiting staff for MIRI.[135][136] |
2015 | August 7–9 | Workshop | The 2nd Introductory Workshop on Logical Uncertainty continues exploring how AI systems can navigate uncertain and incomplete information, ensuring reliability in real-world applications.[102] |
2015 | August 28–30 | Workshop | The 3rd Introductory Workshop on Logical Decision Theory delves into refining decision-making frameworks for AI systems, with a focus on tackling strategic scenarios with limited information.[102] |
2015 | September 26 | External Review | The Effective Altruism Wiki publishes a detailed page on MIRI, summarizing its work on reducing existential risks from AI. This page serves as an accessible resource for the EA community to better understand MIRI’s mission and projects.[137] |
2016 | Publication | MIRI commissions Eliezer Yudkowsky to create AI alignment content for Arbital, a platform designed to simplify complex technical topics for a general audience. The project aims to address gaps in public understanding of AI alignment by providing accessible explanations of technical concepts, including the risks posed by unaligned AI. Arbital is part of a broader effort to improve outreach and education on AI safety.[138][139] |
2016 | March 30 | Staff | MIRI promotes two key staff members to leadership roles: Malo Bourgon becomes Chief Operating Officer (COO), and Rob Bensinger is named Research Communications Manager. These changes reflect MIRI’s growing emphasis on operational efficiency and effective communication as it scales up its AI alignment research.[140] |
2016 | April 1–3 | Workshop | The Self-Reference, Type Theory, and Formal Verification Workshop focuses on applying formal methods to AI systems. Researchers explore how self-referential AI can be verified to ensure alignment with human values, leveraging type theory and formal verification techniques. This workshop advances MIRI’s goal of creating provably safe AI systems.[102] |
2016 | May 6 (talk), December 28 (transcript release) | Publication | Eliezer Yudkowsky delivers a talk at Stanford University titled "AI Alignment: Why It’s Hard, and Where to Start," addressing the technical challenges of aligning AI with human values. The transcript, released on MIRI's blog in December, becomes a foundational resource for researchers grappling with alignment problems.[141][142] |
2016 | May 28–29 | Workshop | The CSRBAI Workshop on Transparency explores methods for making AI systems interpretable and understandable. Researchers examine how transparency can contribute to trustworthiness and alignment in advanced AI, especially in high-stakes applications.[102] |
2016 | June 4–5 | Workshop | The CSRBAI Workshop on Robustness and Error-Tolerance addresses how to design AI systems capable of handling uncertainty and errors without catastrophic failures. Robustness is identified as a key factor for deploying AI systems in unpredictable environments.[102] |
2016 | June 11–12 | Workshop | The CSRBAI Workshop on Preference Specification focuses on accurately encoding human values and preferences into AI systems, a foundational challenge in AI alignment.[102] |
2016 | June 17 | Workshop | The CSRBAI Workshop on Agent Models and Multi-Agent Dilemmas delves into how AI systems interact in multi-agent scenarios. Researchers examine ways to ensure cooperation and prevent conflicts among agents with potentially competing goals.[102] |
2016 | July 27 | Publication | MIRI releases the technical agenda paper "Alignment for Advanced Machine Learning Systems," outlining key challenges in aligning machine learning models with human values. This document marks MIRI’s formal pivot to addressing machine learning-specific safety issues.[143] |
2016 | August | Financial | Open Philanthropy awards MIRI a $500,000 grant for general support. The grant acknowledges MIRI’s contributions to AI safety while expressing differing views on the technical approaches employed by the organization.[144] |
2016 | September 12 | Publication | MIRI publishes "Logical Induction," a groundbreaking paper by Scott Garrabrant and co-authors. The paper introduces a framework for reasoning under uncertainty in a mathematically rigorous way, earning widespread acclaim as a significant advancement in formal AI research.[145][146] |
2016 | October 12 | AMA | MIRI hosts an AMA on the Effective Altruism Forum, inviting questions on AI safety, alignment challenges, and research strategies. Nate Soares, Rob Bensinger, and other MIRI staff participate, offering insights into ongoing projects.[147] |
2016 | December | Financial | Open Philanthropy awards AI Impacts a $32,000 grant to support research on AI development timelines and potential risks. This funding enables the project to expand its analyses and outreach efforts.[148] |
2017 | April 1–2 | Workshop | The 4th Workshop on Machine Learning and AI Safety continues the exploration of how to align machine learning models with human values. Key topics include enhancing adversarial robustness, mitigating unintended consequences of AI behavior, and improving safe reinforcement learning techniques. This workshop plays a crucial role in addressing challenges posed by increasingly complex AI systems.[102] |
2017 | May 24 | Publication | The paper "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on arXiv. Co-authored by AI Impacts researchers, the paper surveys AI experts on timelines for when AI will surpass human abilities in various domains. The findings generate significant media coverage, with over 20 outlets discussing its implications for AI development and existential risks. This work highlights the uncertainty surrounding AI timelines and emphasizes the importance of proactive AI safety measures.[149][150] |
2017 | July 4 | Strategy | MIRI announces a strategic pivot, scaling back work on its "Alignment for Advanced Machine Learning Systems" agenda. The shift follows the departures of key researchers Patrick LaVictoire and Jessica Taylor and a leave of absence by Andrew Critch. MIRI refocuses its research priorities, reaffirming its commitment to foundational AI safety work while adjusting to the evolving landscape of AI research.[151] |
2017 | July 7 | Outside Review | Daniel Dewey, a program officer at Open Philanthropy, publishes "My Current Thoughts on MIRI's Highly Reliable Agent Design Work" on the Effective Altruism Forum. Dewey critiques MIRI’s focus on agent design, suggesting alternative approaches like learning from human behavior may offer more practical paths to AI alignment. His review sparks broader discussions on the merits of different AI safety strategies.[152] |
2017 | July 14 | Outside Review | A publicly accessible timeline of MIRI’s work is circulated on the timelines wiki. This document outlines the history and evolution of MIRI’s research and strategies, offering insights into the development of AI safety as a field. |
2017 | October 13 | Publication | Eliezer Yudkowsky and Nate Soares publish "Functional Decision Theory: A New Theory of Instrumental Rationality" on arXiv. This paper introduces Functional Decision Theory (FDT), which offers a new approach to decision-making for AI systems. FDT addresses limitations of existing theories and is positioned as a promising framework for developing safer AI. The paper is a milestone in MIRI's theoretical research.[153][154] |
2017 | October 13 | Publication | Eliezer Yudkowsky publishes "There’s No Fire Alarm for Artificial General Intelligence" on MIRI’s blog and the relaunched LessWrong platform. In this influential post, Yudkowsky argues that there will be no clear, universal signal for the emergence of AGI, stressing the need to prepare proactively. The essay prompts substantial debate within the AI safety and Effective Altruism communities.[155][156] |
2017 | October | Financial | Open Philanthropy awards MIRI a $3.75 million grant over three years, a major financial boost. The grant reflects Open Philanthropy’s acknowledgment of MIRI’s role in advancing AI safety research, particularly following the success of the "Logical Induction" paper. This funding supports ongoing research and staff expansion at MIRI.[157][158] |
2017 | November 16 | Publication | Eliezer Yudkowsky’s book Inadequate Equilibria is fully published after serialized releases on LessWrong and the Effective Altruism Forum. The book discusses epistemology, expert consensus, and decision-making in complex systems. It receives reviews from prominent bloggers and researchers, including Scott Alexander, Scott Aaronson, and Robin Hanson, who engage with its core ideas.[159][160][161] |
2017 | November 25–26 | Publication | Eliezer Yudkowsky publishes the two-part series "Security Mindset and Ordinary Paranoia" and "Security Mindset and the Logistic Success Curve." These posts discuss the importance of adopting a security mindset in AI safety, a continuation of themes from his 2016 talk "AI Alignment: Why It’s Hard, and Where to Start." The series emphasizes the counterintuitive nature of preparing for potential AI risks.[162][163] |
2017 | December 1 | Financial | MIRI launches its 2017 fundraiser, setting ambitious targets to expand its research capabilities. By the fundraiser’s conclusion, over $2.5 million is raised from more than 300 donors, including a $763,970 donation in Ethereum from Vitalik Buterin. This success solidifies MIRI’s financial stability and supports its ongoing AI safety research.[164][165] |
2018 | February | Workshop | MIRI and the Center for Applied Rationality (CFAR) conduct the first AI Risk for Computer Scientists (AIRCS) workshop. Designed to engage technical professionals with the challenges of AI safety, the workshops blend rationality training with in-depth discussions on forecasting, AI risks, technical problems, and potential research directions. AIRCS becomes a recurring event, with seven more workshops held in 2018 and a significant expansion in 2019.[166][167]
2018 | August (joining), November 28 (announcement), December 1 (AMA) | Staff | Prolific Haskell developer Edward Kmett joins MIRI. Kmett, renowned for his work in programming and functional languages, emphasizes that his research will remain open despite MIRI’s nondisclosure policy. In an AMA on Reddit, he clarifies that he will strive to produce high-quality work, as his outputs may influence perceptions of MIRI's broader efforts.[168][169] |
2018 | October 29 | Project | The AI Alignment Forum, a centralized hub for alignment researchers, is officially launched. Developed by the LessWrong 2.0 team with support from MIRI, the forum replaces MIRI's Intelligent Agent Foundations Forum. It provides a space for researchers to engage in detailed discussions on AI alignment challenges, fostering collaboration across the field. The forum had previously launched in beta on July 10, 2018, coinciding with the inaugural AI Alignment Writing Day during the MIRI Summer Fellows Program.[170][171] |
2018 | October 29 – November 15 | Publication | MIRI publishes the "Embedded Agency" sequence by researchers Abram Demski and Scott Garrabrant. This series redefines the concept of naturalized agency as embedded agency, offering insights into how AI systems can operate as agents situated within and interacting with the environments they model. The serialized installments, released across MIRI’s blog, LessWrong 2.0, and the Alignment Forum, culminate in a full-text version on November 15. The sequence introduces foundational ideas in self-reference, logical uncertainty, and the limitations of traditional agent models.[172][173][174] |
2018 | November 22 | Strategy | Nate Soares publishes MIRI’s 2018 update, outlining new research directions under MIRI’s nondisclosure-by-default policy. The post emphasizes "deconfusion," a research approach aimed at clarifying foundational AI alignment problems. MIRI also issues a call for recruits, signaling a growing need for expertise in their evolving focus areas.[175] |
2018 | November 26 | Financial | MIRI launches its 2018 fundraiser, which runs through December 31, raising $951,817 from 348 donors. The funds support MIRI’s expanding research efforts, including its nondisclosed-by-default projects and AIRCS workshops.[166][176] |
2018 | December 15 | Publication | MIRI announces a new edition of Eliezer Yudkowsky's Rationality: From AI to Zombies (i.e. the book version of "the Sequences"). At the time of the announcement, only the new editions of two sequences, Map and Territory and How to Actually Change Your Mind, are available.[179][180]
2019 | February | Financial | Open Philanthropy awards MIRI a grant of $2,112,500 over two years. This grant, decided by the Committee for Effective Altruism Support, aligns with grants provided to other Effective Altruism (EA) organizations, including 80,000 Hours and the Centre for Effective Altruism, reflecting a broader EA funding strategy. Around the same time, the Berkeley Existential Risk Initiative (BERI) grants $600,000 to MIRI. These combined contributions signify continued institutional confidence in MIRI’s work, supporting its nondisclosed-by-default AI safety research and operational capacity. MIRI discusses these grants in a blog post, noting their significance in bolstering its research agenda.[181][182] |
2019 | April 23 | Financial | The Long-Term Future Fund announces a $50,000 grant to MIRI as part of its April 2019 grant round. Oliver Habryka, the lead investigator for this grant, outlines the rationale behind the decision, praising MIRI's contributions to AI safety and addressing perceived funding gaps. This grant highlights MIRI's ongoing role in the Effective Altruism community as a vital organization tackling existential risks from AI.[183] |
2019 | December | Financial | MIRI's 2019 fundraiser raises $601,120 from over 259 donors, a significant decline compared to past fundraisers. A retrospective blog post, published in February 2020, analyzes factors contributing to the lower total, including reduced cryptocurrency donations due to market conditions, challenges posed by MIRI’s nondisclosed research policies, and shifts in donor behavior such as bunching donations across years to maximize tax benefits. Other contributing factors include fewer matching opportunities, a reduced perception of MIRI’s marginal need, and prior donors transitioning from earning-to-give strategies to direct work. The analysis underscores evolving dynamics in MIRI's donor base and external conditions affecting its fundraising outcomes.[184] |
2020 | February | Financial | Open Philanthropy awards MIRI a $7,703,750 grant over two years, marking MIRI’s largest grant to date. The funds include $6.24 million from Good Ventures and $1.46 million from Ben Delo, co-founder of BitMEX and a Giving Pledge signatory, under a co-funding partnership. This grant reflects Open Philanthropy’s continued support for AI safety research, with similar grants awarded to Ought, the Centre for Effective Altruism, and 80,000 Hours during the same period. MIRI notes in its April blog post that this funding strengthens its capacity to pursue long-term research directions.[185][186] |
2020 | March 2 | Financial | The Berkeley Existential Risk Initiative (BERI) grants $300,000 to MIRI. This amount is lower than the $600,000 MIRI had projected during its 2019 fundraiser. MIRI incorporates the adjustment into its reserves estimates and publicly acknowledges the grant as part of its financial transparency efforts.[186] |
2020 | April 14 | Financial | The Long-Term Future Fund grants $100,000 to MIRI. This grant reflects ongoing support from the Effective Altruism community for MIRI’s AI safety research.[187][186] |
2020 | May | Financial | The Survival and Flourishing Fund (SFF) announces three grants to MIRI as part of its recommendation process for the first half of 2020: $20,000 from SFF directly, $280,000 from Jaan Tallinn, and $40,000 from Jed McCaleb. These grants underscore sustained philanthropic interest in MIRI’s AI safety initiatives.[188] |
2020 | October 9 | Strategy | Rob Bensinger, MIRI’s research communications manager, announces on Facebook that MIRI is considering relocating its office from Berkeley, California, to another location in the United States or Canada. Potential areas include New Hampshire and Toronto. Bensinger indicates that while specific reasons for the move cannot yet be disclosed, the preemptive announcement is intended to help stakeholders consider this uncertainty in their own planning.[189] |
2020 | October 22 | Publication | Scott Garrabrant publishes "Introduction to Cartesian Frames," the first post in a new sequence on LessWrong and the Effective Altruism Forum. Cartesian frames provide a novel conceptual framework for understanding agency, and this sequence contributes to foundational research on AI alignment and decision-making.[190][191] |
2020 | November (announcement) | Financial | Jaan Tallinn donates $543,000 to MIRI through the Survival and Flourishing Fund's second-half 2020 recommendation process. Tallinn’s contributions have consistently supported MIRI’s AI safety work.[192] |
2020 | November 30 | Financial | MIRI announces that it will not conduct a formal fundraiser in 2020 but will participate in Giving Tuesday and other matching opportunities. This decision reflects changes in MIRI’s fundraising approach amid uncertainties caused by the COVID-19 pandemic.[193] |
2020 | December 21 | Strategy | Malo Bourgon, MIRI’s Chief Operating Officer, publishes a strategy update. The blog post reflects on the impact of the COVID-19 pandemic, MIRI’s relocation efforts, and its decision to potentially leave the Bay Area. It also discusses slower-than-expected progress in certain research directions initiated in 2017, leading MIRI to consider new strategic approaches. The post highlights public-facing progress in other research areas, affirming MIRI’s continued commitment to foundational AI safety challenges.[194] |
2021 | May 8 | Strategy | Rob Bensinger posts on LessWrong about MIRI’s potential relocation from the Bay Area. The post outlines factors under consideration, such as proximity to AI research hubs and cost of living, and invites community input. Despite the discussion, no decisions are made, leaving MIRI’s location unchanged for the time being.[195] |
2021 | May 13 | Financial | MIRI receives two cryptocurrency donations: $15.6 million in MakerDAO (MKR), with restrictions limiting spending to $2.5 million annually, and $4.4 million in Ethereum (ETH) from Vitalik Buterin. The MakerDAO donation provides stability but restricts immediate use, shaping how MIRI plans its future spending.[196] |
2021 | May 23 | Research | Scott Garrabrant presents "finite factored sets," a new framework for modeling causality. The approach, which uses factored sets rather than graphs, receives attention in niche AI safety circles but does not lead to broader adoption in causal inference research.[197]
2021 | July 1 | Strategy | MIRI decides against relocating, citing unresolved strategic considerations. This announcement follows earlier discussions about potential moves to areas like Toronto or New Hampshire.[195] |
2021 | November 15 | Collaboration | MIRI publishes a series of internal discussions, "Late 2021 MIRI Conversations," on topics including AI timelines and alignment research priorities. The discussions draw limited attention outside the AI safety community.[198] |
2021 | November 29 | Project | MIRI announces the Visible Thoughts Project, offering bounties for AI-dungeon-style datasets annotated with visible thoughts. Despite financial incentives, the project attracts few contributors and generates limited results.[199] |
2021 | December | Financial | MIRI allocates $200,000 for creating annotated datasets and $1,000,000 for scaling the Visible Thoughts Project. However, the project struggles to gain traction, and few submissions are received.[200] |
2022 | April 25 | Publication | The article "Visible Thoughts Project and Bounty Announcement" is republished on LessWrong. Despite the sizable financial incentives offered, participation in the project remains low, and MIRI struggles to generate the expected level of interest and meaningful output.[203]
2022 | May 30 | Publication | Eliezer Yudkowsky publishes "Six Dimensions of Operational Adequacy in AGI Projects" on LessWrong. The post sparks some discussion among AI safety researchers but does not establish new standards or practices across broader AGI safety projects.[201]
2022 | June 5 | Publication | Eliezer Yudkowsky's article "AGI Ruin: A List of Lethalities" is published on LessWrong. The post receives significant attention within the alignment community and reiterates Yudkowsky's longstanding concerns about catastrophic AGI risks. It sparks debate, but its influence is largely confined to existing followers rather than drawing in broader public discourse.[202]
2022 | July | Strategy | MIRI pauses its newsletter and public communications to refine internal strategies, an indication of both internal challenges and an effort to recalibrate its approach amid a rapidly evolving AI landscape.[204] |
2022 | December 1 | Publication | On behalf of his MIRI colleagues, Rob Bensinger publishes a blog post challenging organizations such as Anthropic and DeepMind to publicly write up their alignment plans. The challenge generates a mixed response, with some critiques of OpenAI’s plans emerging, but it does not spur any major public commitment from these organizations.[205] |
2023 | February 20 | Publication | Eliezer Yudkowsky appears on the Bankless podcast for an interview lasting a little under two hours, in which he shares his pessimistic views on the likelihood of catastrophic AGI with hosts who are not deeply versed in AI safety.[206] He also mentions that he is taking a sabbatical, citing burnout and his view that doom is inevitable. He mentions considering working with other organizations such as Anthropic, Conjecture, or Redwood Research, noting that Redwood Research is "small" but that he trusts them and that they can also focus on one stream. A full transcript is published to LessWrong and the Alignment Forum a few days later.[207] The podcast gains significant traction, eliciting several reactions, and leads to a follow-up Q&A on Twitter Spaces.[208] A month later, a lengthy point-by-point response by alignment researcher Quintin Pope is published to LessWrong, attracting over 200 comments.[209]
2023 | March 29 | Publication | An article by Eliezer Yudkowsky in Time Ideas, in response to the FLI Open Letter, argues that pausing AI for six months isn't enough. He says that what is needed won't happen in practice, but spells it out anyway: "The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. [...] Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. [...] Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. [...] Shut it all down."[210] The post is shared to LessWrong where it receives over 250 comments.[211] |
2023 | April | Leadership | MIRI undergoes a significant leadership change, with Malo Bourgon appointed as CEO, Nate Soares transitioning to President, Alex Vermeer becoming COO, and Eliezer Yudkowsky assuming the role of Chair of the Board. This restructuring is seen by some as an attempt to address stagnation and operational challenges.[212] |
2023 | June 19 | Publication | Paul Christiano publishes an article titled "Where I Agree and Disagree with Eliezer" on the AI Alignment Forum, outlining areas of alignment and divergence with Eliezer Yudkowsky's perspectives. The article is well-received within AI alignment circles and generates a productive debate, but does not directly influence the wider public narrative around AI safety.[213] |
2024 | January 14 | Strategy | MIRI publishes a comprehensive update on its mission and strategy for 2024. The update reaffirms its approach to AI alignment research and emphasizes collaboration. While the update receives positive feedback within existing networks, it does not attract wider attention or lead to notable changes in AI safety practices.[214][215]
2024 | March 9 | Publication | An article in Semafor titled "The Risks of Expanding the Definition of AI Safety" discusses concerns raised by Eliezer Yudkowsky about the broadening scope of AI safety. While the article garners attention within specialized AI safety and alignment circles, it does not significantly alter the public narrative around AI governance, reflecting its niche impact.[216] |
2024 | April | Project | MIRI launches a new research team dedicated to technical AI governance. The team, initially consisting of Lisa Thiergart and Peter Barnett, aims to expand by the end of the year. Early traction is limited, highlighting recruitment challenges and the evolving demands of governance work in a rapidly changing AI landscape.[217] |
2024 | May | Project | The Technical Governance Team at MIRI takes an active role in contributing to AI policy development by submitting responses to multiple key policy bodies. These submissions include the NTIA's request for comment on open-weight AI models, focusing on the implications of making AI model weights publicly available and the potential risks and benefits associated with open-access AI technology.[218] They also respond to the United Nations’ request for feedback on the "Governing AI for Humanity" interim report, offering insights on global AI governance frameworks and how they can be structured to prioritize safety, transparency, and ethical considerations.[219] Additionally, the team addresses the Office of Management and Budget’s request for information on AI procurement in government, providing recommendations on how AI technologies can be integrated responsibly within government infrastructures.[220] This proactive engagement highlights MIRI’s strategic involvement in shaping international AI governance and ensuring that safety and ethical standards are maintained in the development and use of AI technologies.[221] |
2024 | May 14 | Project | MIRI announces the shutdown of the Visible Thoughts Project, which was initiated in November 2021. The project faced several challenges, including evolving ML needs and limited community engagement, which ultimately led to its termination.[222] |
2024 | May 29 | Publication | MIRI publishes their 2024 Communications Strategy, focusing on halting the development of frontier AI systems worldwide. The strategy aims for direct, unvarnished communication with policymakers and the public. However, the approach avoids grassroots advocacy and receives mixed reactions, with limited evidence of a shift in AI policy or public sentiment.[223] |
2024 | June 7 | Publication | Rob Bensinger publishes a response to Daniel Kokotajlo's discussion of Aschenbrenner's views on situational awareness in AI. Bensinger critiques Kokotajlo’s interpretation, adding nuance to the debate on AI safety. While the discussion is valuable within the alignment community, it remains niche and does not lead to broader shifts in consensus.[224] |
2024 | June | Research | The Agent Foundations team, including Scott Garrabrant, departs MIRI to pursue independent work. This signals a shift in focus for MIRI, as it prioritizes other areas in response to rapid AI advancements. The departure is seen as an outcome of MIRI reassessing its research priorities amid changing circumstances in the AI field.[225]
==Numerical and visual data==

===Google Scholar===
The following table summarizes per-year mentions on Google Scholar as of October 1, 2021.
{| class="wikitable"
! Year !! "Machine Intelligence Research Institute"
|-
| 2000 || 0
|-
| 2001 || 2
|-
| 2002 || 0
|-
| 2003 || 0
|-
| 2004 || 1
|-
| 2005 || 0
|-
| 2006 || 0
|-
| 2007 || 1
|-
| 2008 || 0
|-
| 2009 || 5
|-
| 2010 || 7
|-
| 2011 || 6
|-
| 2012 || 6
|-
| 2013 || 29
|-
| 2014 || 61
|-
| 2015 || 72
|-
| 2016 || 93
|-
| 2017 || 128
|-
| 2018 || 134
|-
| 2019 || 127
|-
| 2020 || 138
|-
| 2021 || 120
|-
| 2022 || 117
|-
| 2023 || 122
|}
===Google Trends===
The comparative chart below shows Google Trends data for Machine Intelligence Research Institute (Research institute) and Machine Intelligence Research Institute (Search term), from January 2004 to November 2024, when the screenshot was taken. Interest is also ranked by country and displayed on a world map.[226]
===Google Ngram Viewer===
The chart below shows Google Ngram Viewer data for Machine Intelligence Research Institute, from 2000 to 2024.[227]
===Wikipedia desktop pageviews across the different names===
The image below shows desktop pageviews of the page Machine Intelligence Research Institute and its predecessor pages, "Singularity Institute for Artificial Intelligence" and "Singularity Institute".[228] The change in names occurred on these dates:[229][230]
- December 23, 2011: Two pages "Singularity Institute" and "Singularity Institute for Artificial Intelligence" merged into the single page "Singularity Institute for Artificial Intelligence"
- April 16, 2012: Page moved from "Singularity Institute for Artificial Intelligence" to "Singularity Institute" with the old name redirecting to the new name
- February 1, 2013: Page moved from "Singularity Institute" to "Machine Intelligence Research Institute" with both old names redirecting to the new name
The red vertical line (for June 2015) represents a change in the method of estimating pageviews; specifically, pageviews by bots and spiders are excluded for months to the right of the line.
==Meta information on the timeline==

===How the timeline was built===
The initial version of the timeline was written by Issa Rice.
Issa likes to work locally and track changes with Git, so the revision history on this wiki only shows changes in bulk. To see more incremental changes, refer to the commit history.
Funding information for this timeline is available.
===Feedback and comments===
Feedback for the timeline can be provided at the following places:
===What the timeline is still missing===
- TODO Figure out how to cover publications
- TODO mention kurzweil
- TODO maybe include some of the largest donations (e.g. the XRP/ETH ones, tallinn, thiel)
- TODO maybe fundraisers
- TODO look more closely through some AMAs: [1], [2]
- TODO maybe more info in this SSC post [3]
- TODO more links at EA Wikia page [4]
- TODO lots of things from strategy updates, annual reviews, etc. [5]
- TODO Ben Goertzel talks about his involvement with MIRI [6], also more on opencog
- TODO giant thread on Ozy's blog [7]
- NOTE From 2017-07-06: "years that have few events so far: 2003 (one event), 2007 (one event), 2008 (three events), 2010 (three events), 2017 (three events)"
- TODO possibly include more from the old MIRI volunteers site. Some of the volunteering opportunities like proofreading and promoting MIRI by giving it good web of trust ratings seem to give a good flavor of what MIRI was like, the specific challenges in terms of switching domains, and so on.
- TODO cover Berkeley Existential Risk Initiative (BERI), kinda a successor to MIRI volunteers?
- TODO cover launch of Center for Human-Compatible AI
- TODO not sure how exactly to include this in the timeline, but something about MIRI's changing approach to funding certain types of contract work. e.g. Vipul says "I believe the work I did with Luke would no longer be sponsored by MIRI as their research agenda is now much more narrowly focused on the mathematical parts."
- TODO who is Tyler Emerson?
- modal combat and some other domains: [8], [9], [10]
- https://www.lesswrong.com/posts/yGZHQYqWkLMbXy3z7/video-q-and-a-with-singularity-institute-executive-director
- https://ea.greaterwrong.com/posts/NBgpPaz5vYe3tH4ga/on-deference-and-yudkowsky-s-ai-risk-estimates
===Timeline update strategy===
Some places to look on the MIRI blog:
Also general stuff like big news coverage.
==See also==
- Timeline of AI safety
- Timeline of Against Malaria Foundation
- Timeline of Center for Applied Rationality
- Timeline of decision theory
- Timeline of Future of Humanity Institute
==External links==
- Official website
- Intelligent Agent Foundations Forum
- LessWrong
- Machine Intelligence Research Institute (Wikipedia)
- The Singularity Wars (LessWrong) covers some of the early history of MIRI and the differences with Singularity University
- Donations information and other relevant documents, compiled by Vipul Naik
- Staff history and list of products on AI Watch
==References==
- ↑ Nate Soares (June 3, 2015). "Taking the reins at MIRI". LessWrong. Retrieved July 5, 2017.
- ↑ "lukeprog comments on "Thoughts on the Singularity Institute"". LessWrong. May 10, 2012. Retrieved July 15, 2012.
- ↑ "Halfwitz comments on "Breaking the vicious cycle"". LessWrong. November 23, 2014. Retrieved August 3, 2017.
- ↑ Eliezer S. Yudkowsky (August 31, 2000). "Eliezer, the person". Archived from the original on February 5, 2001.
- ↑ "Yudkowsky - Staring into the Singularity 1.2.5". Retrieved June 1, 2017.
- ↑ Eliezer S. Yudkowsky. "Coding a Transhuman AI". Retrieved July 5, 2017.
- ↑ Eliezer S. Yudkowsky. "Singularitarian mailing list". Retrieved July 5, 2017.
The "Singularitarian" mailing list was first launched on Sunday, March 11th, 1999, to assist in the common goal of reaching the Singularity. It will do so by pooling the resources of time, brains, influence, and money available to Singularitarians; by enabling us to draw on the advice and experience of the whole; by bringing together individuals with compatible ideas and complementary resources; and by binding the Singularitarians into a community.
- ↑ 8.0 8.1 8.2 Eliezer S. Yudkowsky. "PtS: Version History". Retrieved July 4, 2017.
- ↑ "Yudkowsky's Coming of Age". LessWrong. Retrieved January 30, 2018.
- ↑ "My Naturalistic Awakening". LessWrong. Retrieved January 30, 2018.
- ↑ "jacob_cannell comments on FLI's recommended project grants for AI safety research announced". LessWrong. Retrieved January 30, 2018.
- ↑ Eliezer S. Yudkowsky. "Singularitarian Principles 1.0". Retrieved July 5, 2017.
- ↑ "SL4: By Date". Retrieved June 1, 2017.
- ↑ Eliezer S. Yudkowsky. "SL4 Mailing List". Retrieved June 1, 2017.
- ↑ 15.0 15.1 Eliezer S. Yudkowsky. "Coding a Transhuman AI § Version History". Retrieved July 5, 2017.
- ↑ "Form 990-EZ 2000" (PDF). Retrieved June 1, 2017.
Organization was incorporated in July 2000 and does not have a financial history for years 1996-1999.
- ↑ "About the Singularity Institute for Artificial Intelligence". Retrieved July 1, 2017.
The Singularity Institute for Artificial Intelligence, Inc. (SIAI) was incorporated on July 27th, 2000 by Brian Atkins, Sabine Atkins (then Sabine Stoeckel) and Eliezer Yudkowsky. The Singularity Institute is a nonprofit corporation governed by the Georgia Nonprofit Corporation Code, and is federally tax-exempt as a 501(c)(3) public charity. At this time, the Singularity Institute is funded solely by individual donors.
- ↑ Eliezer S. Yudkowsky. "Singularity Institute for Artificial Intelligence, Inc.". Retrieved July 4, 2017.
- ↑ Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017.
April 08, 2001: The Singularity Institute for Artificial Intelligence, Inc. announces that it has received tax-exempt status and is now accepting donations.
- ↑ "Singularity Institute for Artificial Intelligence // News // Archive". Retrieved July 13, 2017.
- ↑ Singularity Institute for Artificial Intelligence. "SIAI Guidelines on Friendly AI". Retrieved July 13, 2017.
- ↑ Eliezer Yudkowsky (2001). "Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures" (PDF). The Singularity Institute. Retrieved July 5, 2017.
- ↑ 23.0 23.1 23.2 23.3 23.4 Eliezer S. Yudkowsky. "Singularity Institute: News". Retrieved July 1, 2017.
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ Eliezer S. Yudkowsky (April 7, 2002). "SL4: PAPER: Levels of Organization in General Intelligence". Retrieved July 5, 2017.
- ↑ Singularity Institute for Artificial Intelligence. "Levels of Organization in General Intelligence". Retrieved July 5, 2017.
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ "FlareProgrammingLanguage". SL4 Wiki. September 14, 2007. Retrieved July 13, 2017.
- ↑ "Yudkowsky - Bayes' Theorem". Retrieved July 5, 2017.
Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute. If you've found Yudkowsky's pages on rationality useful, please consider donating to the Machine Intelligence Research Institute.
- ↑ Yudkowsky, Eliezer (April 30, 2003). "Singularity Institute - update". SL4.
- ↑ 31.0 31.1 31.2 31.3 31.4 31.5 31.6 "News of the Singularity Institute for Artificial Intelligence". Retrieved July 4, 2017.
- ↑ "Singularity Institute for Artificial Intelligence // The SIAI Voice". Retrieved July 4, 2017.
On March 4, 2004, the Singularity Institute announced Tyler Emerson as our Executive Director. Emerson will be responsible for guiding the Institute. His focus is in nonprofit management, marketing, relationship fundraising, leadership and planning. He will seek to cultivate a larger and more cohesive community that has the necessary resources to develop Friendly AI.
- ↑ Tyler Emerson (April 7, 2004). "SL4: Michael Anissimov - SIAI Advocacy Director". Retrieved July 1, 2017.
The Singularity Institute announces Michael Anissimov as our Advocacy Director. Michael has been an active volunteer for two years, and one of the more prominent voices in the singularity community. He is committed and thoughtful, and we feel very fortunate to have him help lead our advocacy.
- ↑ "Machine Intelligence Research Institute: This is an old revision of this page, as edited by 63.201.36.156 (talk) at 19:28, 14 April 2004.". Retrieved July 15, 2017.
- ↑ Eliezer Yudkowsky. "Coherent Extrapolated Volition" (PDF). Retrieved July 1, 2017.
The information is current as of May 2004, and should not become dreadfully obsolete until late June, when I plan to have an unexpected insight.
- ↑ "Yudkowsky - Technical Explanation". Retrieved July 5, 2017.
Eliezer Yudkowsky's work is supported by the Machine Intelligence Research Institute.
- ↑ Singularity Institute. "News of the Singularity Institute for Artificial Intelligence". Retrieved July 4, 2017.
- ↑ 38.0 38.1 38.2 38.3 38.4 38.5 38.6 Brandon Reinhart. "SIAI - An Examination". LessWrong. Retrieved June 30, 2017.
- ↑ "SL4: By Thread". Retrieved July 1, 2017.
- ↑ "The Singularity Institute for Artificial Intelligence - 2006 $100,000 Singularity Challenge". Retrieved July 5, 2017.
- ↑ "Twelve Virtues of Rationality". Retrieved July 5, 2017.
- ↑ 42.0 42.1 "Singularity Summit". Machine Intelligence Research Institute. Retrieved June 30, 2017.
- ↑ Dan Farber. "The great Singularity debate". ZDNet. Retrieved June 30, 2017.
- ↑ Jerry Pournelle (May 20, 2006). "Chaos Manor Special Reports: The Stanford Singularity Summit". Retrieved June 30, 2017.
- ↑ "Overcoming Bias : Bio". Retrieved June 1, 2017.
- ↑ "Form 990 2007" (PDF). Retrieved July 8, 2017.
- ↑ "Our History". Machine Intelligence Research Institute.
- ↑ "Singularity Institute for Artificial Intelligence". YouTube. Retrieved July 8, 2017.
- ↑ "The Power of Intelligence". Machine Intelligence Research Institute. July 10, 2007. Retrieved May 5, 2020.
- ↑ "The Singularity Summit 2007". Retrieved June 30, 2017.
- ↑ "Yudkowsky - The Simple Truth". Retrieved July 5, 2017.
- ↑ "About". OpenCog Foundation. Retrieved July 6, 2017.
- ↑ Goertzel, Ben (October 29, 2010). "The Singularity Institute's Scary Idea (and Why I Don't Buy It)". Retrieved September 15, 2019.
- ↑ "The Singularity Summit 2008: Opportunity, Risk, Leadership > Program". Retrieved June 30, 2017.
- ↑ Elise Ackerman (October 23, 2008). "Annual A.I. conference to be held this Saturday in San Jose". The Mercury News. Retrieved July 5, 2017.
- ↑ "The Hanson-Yudkowsky AI-Foom Debate". Lesswrongwiki. LessWrong. Retrieved July 1, 2017.
- ↑ "Eliezer_Yudkowsky comments on Thoughts on the Singularity Institute (SI) - Less Wrong". LessWrong. Retrieved July 15, 2017.
Nonetheless, it already has a warm place in my heart next to the debate with Robin Hanson as the second attempt to mount informed criticism of SIAI.
- ↑ 58.0 58.1 58.2 58.3 "Recent Singularity Institute Accomplishments". Singularity Institute for Artificial Intelligence. Retrieved July 6, 2017.
- ↑ "FAQ - LessWrong Wiki". LessWrong. Retrieved June 1, 2017.
- ↑ Michael Vassar (February 16, 2009). "Introducing Myself". Machine Intelligence Research Institute. Retrieved July 1, 2017.
- ↑ 61.0 61.1 RobbBB (March 13, 2015). "Rationality: From AI to Zombies". LessWrong. Retrieved July 1, 2017.
- ↑ "Singularity Institute (@singinst)". Twitter. Retrieved July 4, 2017.
- ↑ 63.0 63.1 63.2 Amy Willey Labenz. Personal communication. May 27, 2022.
- ↑ "Wayback Machine". Retrieved July 2, 2017.
- ↑ 65.0 65.1 McCabe, Thomas (February 4, 2011). "The Uncertain Future Forecasting Project Goes Open-Source". H Plus Magazine. Archived from the original on April 13, 2012. Retrieved July 15, 2017.
- ↑ "Singularity Summit 2009 Program". Singularity Institute. Retrieved June 30, 2017.
- ↑ Stuart Fox (October 2, 2009). "Singularity Summit 2009: The Singularity Is Near". Popular Science.
- ↑ "Form 990 2009" (PDF). Retrieved July 8, 2017.
- ↑ "Reply to Holden on The Singularity Institute". July 10, 2012. Retrieved June 30, 2017.
- ↑ Michael Anissimov (December 12, 2009). "The Uncertain Future". The Singularity Institute Blog. Retrieved July 5, 2017.
- ↑ "Form 990 2010" (PDF). Retrieved July 8, 2017.
- ↑ "Harry Potter and the Methods of Rationality Chapter 1: A Day of Very Low Probability, a harry potter fanfic". FanFiction. Retrieved July 1, 2017.
Updated: 3/14/2015 - Published: 2/28/2010
- ↑ David Whelan (March 2, 2015). "The Harry Potter Fan Fiction Author Who Wants to Make Everyone a Little More Rational". Vice. Retrieved July 1, 2017.
- ↑ "2013 in Review: Fundraising - Machine Intelligence Research Institute". Machine Intelligence Research Institute. August 13, 2014. Retrieved July 1, 2017.
- ↑ Rees, Gareth (August 17, 2010). "Zendegi - Gareth Rees". Retrieved July 15, 2017.
- ↑ Sotala, Kaj (October 7, 2010). "Greg Egan disses stand-ins for Overcoming Bias, SIAI in new book". Retrieved July 15, 2017.
- ↑ Hanson, Robin (March 25, 2012). "Egan's Zendegi". Retrieved July 15, 2017.
- ↑ "Singularity Summit | Program". Retrieved June 30, 2017.
- ↑ "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017.
- ↑ "Machine Intelligence Research Institute - Posts". Retrieved July 4, 2017.
- ↑ Louie Helm (December 21, 2010). "Announcing the Tallinn-Evans $125,000 Singularity Challenge". Machine Intelligence Research Institute. Retrieved July 7, 2017.
- ↑ Kaj Sotala (December 26, 2010). "Tallinn-Evans $125,000 Singularity Challenge". LessWrong. Retrieved July 7, 2017.
- ↑ "GiveWell conversation with SIAI". GiveWell. February 2011. Retrieved July 4, 2017.
- ↑ Holden Karnofsky. "Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017.
- ↑ "lukeprog comments on Thoughts on the Singularity Institute (SI)". LessWrong. Retrieved June 30, 2017.
When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty.
- ↑ Holden Karnofsky. "Re: [givewell] Singularity Institute for Artificial Intelligence". Yahoo! Groups. Retrieved July 4, 2017.
- ↑ "singularity.org". Retrieved July 4, 2017.
- ↑ 88.0 88.1 "Wayback Machine". Retrieved July 4, 2017.
- ↑ "Singularity Institute Volunteering". Retrieved July 14, 2017.
- ↑ "Singularity Summit | Program". Retrieved June 30, 2017.
- ↑ "SingularitySummits". YouTube. Retrieved July 4, 2017.
Joined Oct 17, 2011
- ↑ Luke Muehlhauser (January 16, 2012). "Machine Intelligence Research Institute Progress Report, December 2011". Machine Intelligence Research Institute. Retrieved July 14, 2017.
- ↑ lukeprog (December 12, 2011). "New 'landing page' website: Friendly-AI.com". LessWrong. Retrieved July 2, 2017.
- ↑ Frank, Sam (January 1, 2015). "Come With Us If You Want to Live. Among the apocalyptic libertarians of Silicon Valley". Harper's Magazine. Retrieved July 15, 2017.
- ↑ "Video Q&A with Singularity Institute Executive Director". LessWrong. December 10, 2011. Retrieved May 31, 2021.
- ↑ "Q&A #2 with Luke Muehlhauser, Machine Intelligence Research Institute Executive Director". Machine Intelligence Research Institute. January 12, 2012. Retrieved May 31, 2021.
- ↑ "Wayback Machine". Retrieved July 4, 2017.
- ↑ Louie Helm (May 8, 2012). "Machine Intelligence Research Institute Progress Report, April 2012". Machine Intelligence Research Institute. Retrieved June 30, 2017.
- ↑ Holden Karnofsky. "Thoughts on the Singularity Institute (SI)". LessWrong. Retrieved June 30, 2017.
- ↑ Helm, Louie (August 6, 2012). "July 2012 Newsletter". Machine Intelligence Research Institute. Retrieved May 5, 2020.
- ↑ David J. Hill (August 29, 2012). "Singularity Summit 2012 Is Coming To San Francisco October 13-14". Singularity Hub. Retrieved July 6, 2017.
- ↑ 102.00 102.01 102.02 102.03 102.04 102.05 102.06 102.07 102.08 102.09 102.10 102.11 102.12 102.13 102.14 "Research Workshops - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved July 1, 2017.
- ↑ "Singularity University Acquires the Singularity Summit". Singularity University. December 9, 2012. Retrieved June 30, 2017.
- ↑ Fox, Joshua (February 14, 2013). "The Singularity Wars". LessWrong. Retrieved July 15, 2017.
- ↑ "Form 990 2013" (PDF). Retrieved July 8, 2017.
- ↑ "Conversations Archives". Machine Intelligence Research Institute. Retrieved July 15, 2017.
- ↑ Luke Muehlhauser (March 22, 2015). "2014 in review". Machine Intelligence Research Institute. Retrieved July 15, 2017.
- ↑ "March Newsletter". Machine Intelligence Research Institute. March 7, 2013. Retrieved July 1, 2017.
- ↑ "We are now the "Machine Intelligence Research Institute" (MIRI)". Machine Intelligence Research Institute. January 30, 2013. Retrieved June 30, 2017.
- ↑ "Facing the Intelligence Explosion, Luke Muehlhauser". Amazon.com. Retrieved July 1, 2017.
- ↑ "Machine Intelligence Research Institute - Coming soon...". Retrieved July 4, 2017.
- ↑ Luke Muehlhauser (February 28, 2013). "Welcome to Intelligence.org". Machine Intelligence Research Institute. Retrieved May 5, 2020.
- ↑ Luke Muehlhauser (April 25, 2013). ""Singularity Hypotheses" Published". Machine Intelligence Research Institute. Retrieved July 14, 2017.
- ↑ "Singularity Hypotheses: A Scientific and Philosophical Assessment (The Frontiers Collection): 9783642325595: Medicine & Health Science Books". Amazon.com. Retrieved July 14, 2017.
- ↑ Luke Muehlhauser (December 11, 2013). "MIRI's Strategy for 2013". Machine Intelligence Research Institute. Retrieved July 6, 2017.
- ↑ Jon Southurst (January 19, 2014). "Ripple Creator Donates $500k in XRP to Artificial Intelligence Research Charity". CoinDesk. Retrieved July 6, 2017.
- ↑ Luke Muehlhauser (January 27, 2014). "Existential Risk Strategy Conversation with Holden Karnofsky". Machine Intelligence Research Institute. Retrieved July 7, 2017.
- ↑ "Smarter Than Us: The Rise of Machine Intelligence, Stuart Armstrong". Amazon.com. Retrieved July 1, 2017.
Publisher: Machine Intelligence Research Institute (February 1, 2014)
- ↑ Rob Bensinger (August 10, 2015). "Assessing Our Past and Potential Impact". Machine Intelligence Research Institute. Retrieved July 6, 2017.
- ↑ "Recent Hires at MIRI". Machine Intelligence Research Institute. March 13, 2014. Retrieved July 13, 2017.
- ↑ "MIRI's March 2014 Newsletter". Machine Intelligence Research Institute. March 18, 2014. Retrieved May 27, 2018.
- ↑ "Machine Intelligence Research Institute - Photos". Facebook. Retrieved May 27, 2018.
- ↑ "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017.
- ↑ "Recent Site Activity - AI Impacts". Retrieved June 30, 2017.
Jul 4, 2014, 10:39 AM Katja Grace edited Predictions of human-level AI timelines
- ↑ "MIRI's September Newsletter". Machine Intelligence Research Institute. September 1, 2014. Retrieved July 15, 2017.
- ↑ Benja Fallenstein. "Welcome!". Intelligent Agent Foundations Forum. Retrieved June 30, 2017.
- ↑ Luke Muehlhauser (January 11, 2015). "An improved "AI Impacts" website". Machine Intelligence Research Institute. Retrieved June 30, 2017.
- ↑ "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017.
- ↑ Nate Soares (July 16, 2015). "An Astounding Year". Machine Intelligence Research Institute. Retrieved July 13, 2017.
- ↑ Ryan Carey. "Rationality: From AI to Zombies was released today!". Effective Altruism Forum. Retrieved July 1, 2017.
- ↑ Luke Muehlhauser (May 6, 2015). "A fond farewell and a new Executive Director". Machine Intelligence Research Institute. Retrieved June 30, 2017.
- ↑ "Self-prediction in Decision Theory and Artificial Intelligence — Faculty of Philosophy". Retrieved February 24, 2018.
- ↑ Nate Soares (June 3, 2015). "Taking the reins at MIRI". LessWrong. Retrieved July 5, 2017.
- ↑ "I am Nate Soares, AMA!". Effective Altruism Forum. Retrieved July 5, 2017.
- ↑ "MIRI Summer Fellows 2015". CFAR. June 21, 2015. Retrieved July 8, 2017.
- ↑ "Center for Applied Rationality — General Support". Open Philanthropy. Retrieved July 8, 2017.
We have some doubts about CFAR's management and operations, and we see CFAR as having made only limited improvements over the last two years, with the possible exception of running the MIRI Summer Fellows Program in 2015, which we understand to have been relatively successful at recruiting staff for MIRI.
- ↑ "Library/Machine Intelligence Research Institute". Effective Altruism Wikia. September 26, 2015. Retrieved July 15, 2017.
- ↑ Larks (December 13, 2016). "2017 AI Risk Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved July 8, 2017.
- ↑ "Arbital AI Alignment Exploration". Retrieved January 30, 2018.
- ↑ Soares, Nate (March 30, 2016). "MIRI has a new COO: Malo Bourgon". Machine Intelligence Research Institute. Retrieved September 15, 2019.
- ↑ "The AI Alignment Problem: Why It's Hard, and Where to Start". May 6, 2016. Retrieved May 7, 2020.
- ↑ Yudkowsky, Eliezer (December 28, 2016). "AI Alignment: Why It's Hard, and Where to Start". Retrieved May 7, 2020.
- ↑ Rob Bensinger (July 27, 2016). "New paper: "Alignment for advanced machine learning systems"". Machine Intelligence Research Institute. Retrieved July 1, 2017.
- ↑ "Machine Intelligence Research Institute — General Support". Open Philanthropy. Retrieved June 30, 2017.
- ↑ "New paper: "Logical induction"". Machine Intelligence Research Institute. September 12, 2016. Retrieved July 1, 2017.
- ↑ Scott Aaronson (October 9, 2016). "Shtetl-Optimized » Blog Archive » Stuff That's Happened". Retrieved July 1, 2017.
Some of you will also have seen that folks from the Machine Intelligence Research Institute (MIRI)—Scott Garrabrant, Tsvi Benson-Tilsen, Andrew Critch, Nate Soares, and Jessica Taylor—recently put out a major 130-page paper entitled "Logical Induction".
- ↑ Rob Bensinger (October 11, 2016). "Ask MIRI Anything (AMA)". Effective Altruism Forum. Retrieved July 5, 2017.
- ↑ "AI Impacts — General Support". Open Philanthropy. Retrieved June 30, 2017.
- ↑ "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017.
- ↑ "Media discussion of 2016 ESPAI". AI Impacts. June 14, 2017. Retrieved July 13, 2017.
- ↑ "Updates to the research team, and a major donation - Machine Intelligence Research Institute". Machine Intelligence Research Institute. July 4, 2017. Retrieved July 4, 2017.
- ↑ Daniel Dewey (July 7, 2017). "My Current Thoughts on MIRI's "Highly Reliable Agent Design" Work". Effective Altruism Forum. Retrieved July 7, 2017.
- ↑ Yudkowsky, Eliezer; Soares, Nate. "[1710.05060] Functional Decision Theory: A New Theory of Instrumental Rationality". Retrieved October 22, 2017.
Submitted on 13 Oct 2017
- ↑ Matthew Graves (October 22, 2017). "New Paper: "Functional Decision Theory"". Machine Intelligence Research Institute. Retrieved October 22, 2017.
- ↑ "There's No Fire Alarm for Artificial General Intelligence". Machine Intelligence Research Institute. October 13, 2017. Retrieved April 19, 2020.
- ↑ "There's No Fire Alarm for Artificial General Intelligence". LessWrong. October 13, 2017. Retrieved April 19, 2020.
- ↑ Malo Bourgon (November 8, 2017). "A Major Grant from Open Philanthropy". Machine Intelligence Research Institute. Retrieved November 11, 2017.
- ↑ "Machine Intelligence Research Institute — General Support (2017)". Open Philanthropy. November 8, 2017. Retrieved November 11, 2017.
- ↑ "Book Review: Inadequate Equilibria". Slate Star Codex. December 9, 2017. Retrieved December 12, 2017.
- ↑ "Shtetl-Optimized » Blog Archive » Review of "Inadequate Equilibria," by Eliezer Yudkowsky". Retrieved December 12, 2017.
- ↑ Robin Hanson (November 25, 2017). "Overcoming Bias : Why Be Contrarian?". Retrieved December 12, 2017.
- ↑ Yudkowsky, Eliezer (November 25, 2017). "Security Mindset and Ordinary Paranoia". Machine Intelligence Research Institute. Retrieved May 7, 2020.
- ↑ Yudkowsky, Eliezer (November 26, 2017). "Security Mindset and the Logistic Success Curve". Machine Intelligence Research Institute. Retrieved May 7, 2020.
- ↑ Malo Bourgon (December 1, 2017). "MIRI's 2017 Fundraiser". Machine Intelligence Research Institute. Retrieved December 12, 2017.
- ↑ Malo Bourgon (January 10, 2018). "Fundraising success!". Machine Intelligence Research Institute. Retrieved January 30, 2018.
- ↑ 166.0 166.1 "MIRI's 2018 Fundraiser". Machine Intelligence Research Institute.
- ↑ "AI Risk for Computer Scientists. Join us for four days of leveling up thinking on AI risk.". Machine Intelligence Research Institute.
- ↑ "MIRI's newest recruit: Edward Kmett!". Machine Intelligence Research Institute.
- ↑ "MIRI's newest recruit: Edward Kmett!". Reddit.
- ↑ "Announcing the new AI Alignment Forum". Machine Intelligence Research Institute.
- ↑ Raymond Arnold. "Announcing AlignmentForum.org Beta".
- ↑ "Embedded Agency". Machine Intelligence Research Institute.
- ↑ "Embedded Agency". LessWrong 2.0.
- ↑ "Embedded Agency (Full Version)". Twitter.
- ↑ "2018 Update: Our New Research Directions". Machine Intelligence Research Institute.
- ↑ "Our 2018 Fundraiser Review". Machine Intelligence Research Institute.
- ↑ "MIRI's newest recruit: Edward Kmett!". Machine Intelligence Research Institute.
- ↑ "MIRI's newest recruit: Edward Kmett!". Reddit.
- ↑ "Announcing a new edition of "Rationality: From AI to Zombies"". Machine Intelligence Research Institute. December 16, 2018. Retrieved February 14, 2019.
- ↑ Rob Bensinger. "New edition of "Rationality: From AI to Zombies"". LessWrong 2.0. Retrieved February 14, 2019.
- ↑ "Machine Intelligence Research Institute — General Support (2019)". Open Philanthropy. April 1, 2019. Retrieved September 14, 2019.
- ↑ Bensinger, Rob (April 1, 2019). "New grants from the Open Philanthropy Project and BERI". Machine Intelligence Research Institute. Retrieved September 14, 2019.
- ↑ Habryka, Oliver (April 23, 2019). "MIRI ($50,000)". Effective Altruism Forum. Retrieved September 15, 2019.
- ↑ Colm Ó Riain (February 13, 2020). "Our 2019 Fundraiser Review". Machine Intelligence Research Institute. Retrieved April 19, 2020.
- ↑ "Open Philanthropy donations made (filtered to cause areas matching AI risk)". Retrieved July 27, 2017.
- ↑ 186.0 186.1 186.2 Bensinger, Rob (April 27, 2020). "MIRI's largest grant to date!". Machine Intelligence Research Institute.
- ↑ "Fund Payout Report: April 2020 – Long-Term Future Fund Grant Recommendations". Effective Altruism Funds. April 14, 2020.
- ↑ "SFF-2020-H1 S-process Recommendations Announcement". Survival and Flourishing Fund. May 29, 2020.
- ↑ Bensinger, Rob (October 9, 2020). "MIRI, the place where I work, is very seriously considering moving to a different country soon (most likely Canada), or moving to elsewhere in the US.". Facebook.
- ↑ Garrabrant, Scott (October 22, 2020). "Introduction to Cartesian Frames". LessWrong.
- ↑ Bensinger, Rob (October 23, 2020). "October 2020 Newsletter". Machine Intelligence Research Institute.
- ↑ "SFF-2020-H2 S-process Recommendations Announcement". Survival and Flourishing Fund.
- ↑ Bensinger, Rob (November 30, 2020). "November 2020 Newsletter". Machine Intelligence Research Institute.
- ↑ Bourgon, Malo (December 21, 2020). "2020 Updates and Strategy". Machine Intelligence Research Institute.
- ↑ 195.0 195.1 Bensinger, Rob (May 8, 2021). "MIRI location optimization (and related topics) discussion". LessWrong. Retrieved May 31, 2021.
- ↑ Colm Ó Riain (May 13, 2021). "Our all-time largest donation, and major crypto support from Vitalik Buterin". Retrieved May 31, 2021.
- ↑ Garrabrant, Scott (May 23, 2021). "Finite Factored Sets". Machine Intelligence Research Institute. Retrieved May 31, 2021.
- ↑ Bensinger, Rob (November 15, 2021). "Late 2021 MIRI Conversations". Alignment Forum. Retrieved December 1, 2021.
- ↑ Soares, Nate (November 29, 2021). "Visible Thoughts Project and Bounty Announcement". Retrieved December 2, 2021.
- ↑ "December 2021 Newsletter". Machine Intelligence Research Institute. December 31, 2021.
- ↑ "Six Dimensions of Operational Adequacy in AGI Projects". LessWrong. May 30, 2022. Retrieved September 5, 2024.
- ↑ "AGI Ruin: A List of Lethalities". LessWrong. June 5, 2022. Retrieved September 5, 2024.
- ↑ "Visible Thoughts Project and Bounty Announcement". LessWrong. April 25, 2023. Retrieved September 6, 2024.
- ↑ "July 2022 Newsletter". Machine Intelligence Research Institute. July 30, 2022. Retrieved September 2, 2024.
- ↑ Eliezer Yudkowsky (January 1, 2020). "Eliezer Yudkowsky's AI Plan Challenge". Retrieved October 8, 2024.
- ↑ "159 - We're All Gonna Die with Eliezer Yudkowsky". YouTube. February 20, 2023. Retrieved April 14, 2024.
- ↑ "Full Transcript: Eliezer Yudkowsky on the Bankless podcast". LessWrong. February 23, 2023. Retrieved April 14, 2024.
- ↑ "Transcript: Yudkowsky on Bankless follow-up Q&A". LessWrong. February 27, 2023. Retrieved April 14, 2024.
- ↑ Pope, Quintin (March 20, 2023). "My Objections to "We're All Gonna Die with Eliezer Yudkowsky"". LessWrong. Retrieved May 17, 2024.
- ↑ Yudkowsky, Eliezer (March 29, 2023). "Pausing AI Developments Isn't Enough. We Need to Shut it All Down". Time Magazine. Retrieved May 17, 2024.
- ↑ "Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky". March 29, 2023. Retrieved May 17, 2024.
- ↑ "Announcing MIRI's New CEO and Leadership Team". Machine Intelligence Research Institute. October 10, 2023. Retrieved September 2, 2024.
- ↑ "Where I Agree and Disagree with Eliezer". LessWrong. August 10, 2023. Retrieved September 5, 2024.
- ↑ "MIRI 2024 Mission and Strategy Update". LessWrong. May 14, 2024. Retrieved September 5, 2024.
- ↑ "April 2024 Newsletter". Machine Intelligence Research Institute. April 12, 2024. Retrieved September 2, 2024.
- ↑ "The Risks of Expanding the Definition of AI Safety". Semafor. March 9, 2024. Retrieved September 5, 2024.
- ↑ "April 2024 Newsletter". Machine Intelligence Research Institute. April 12, 2024. Retrieved September 2, 2024.
- ↑ "NTIA Request for Comment on Open-Weight AI Models". Regulations.gov. Retrieved September 10, 2024.
- ↑ "Governing AI for Humanity Interim Report" (PDF). United Nations. Retrieved September 10, 2024.
- ↑ "OMB Request for Information on AI Procurement in Government". Federal Register. Retrieved September 10, 2024.
- ↑ "May 2024 Newsletter". Machine Intelligence Research Institute. May 14, 2024. Retrieved September 5, 2024.
- ↑ "May 2024 Newsletter". Machine Intelligence Research Institute. May 14, 2024. Retrieved September 5, 2024.
- ↑ "MIRI 2024 Communications Strategy". Machine Intelligence Research Institute. May 29, 2024. Retrieved September 5, 2024.
- ↑ "Response to Aschenbrenner's Situational Awareness". Effective Altruism Forum. May 15, 2023. Retrieved September 5, 2024.
- ↑ "June 2024 Newsletter". Machine Intelligence Research Institute. June 14, 2024. Retrieved September 2, 2024.
- ↑ "Machine Intelligence Research Institute". Google Trends. Retrieved 11 March 2021.
- ↑ "Machine Intelligence Research Institute". books.google.com. Retrieved 11 March 2021.
- ↑ "Singularity Institute". wikipediaviews.org. Retrieved 17 March 2021.
- ↑ "Singularity Institute for Artificial Intelligence: Revision history". Retrieved July 15, 2017.
- ↑ "All public logs: search Singularity Institute". Retrieved July 15, 2017.