Timeline of Future of Humanity Institute

| 2011–2015 || More development and publication of ''Superintelligence'' || FHI continues to publish, hold workshops, and advise policymakers, with a growing focus on existential risks, in particular risks from advanced artificial intelligence. The most notable accomplishment of this period appears to be the publication of Bostrom's book ''Superintelligence''. FHI does not appear to have published any Annual or Achievements Reports during this period, so it is difficult to tell what else it considers its greatest accomplishments of these years.
|-
| 2015–present || More development || FHI launches both the Strategic AI Research Center and the Governance of AI Program during this period. Its staff count continues to grow. FHI also begins collaborating with {{W|Google DeepMind}}.
|-
| 2021–2024 || AI Governance, Pandemic Research, and Closure || FHI contributes significantly to AI governance and publishes important research on existential risks and pandemic preparedness. In 2023, FHI faces internal challenges, including a controversy involving Nick Bostrom, which leads to its closure in April 2024. Despite this, FHI's legacy in AI safety, biosecurity, and global risk mitigation continues to influence ongoing research and the development of policy frameworks.
|}
 
==Full timeline==
 
* For "Social media", the intention is to include all social media account creations (where the date is known) and Reddit AMAs.
 
* For "Social media", the intention is to include all social media account creations (where the date is known) and Reddit AMAs.
 
* Events about FHI staff giving policy advice (to e.g. government bodies) are not included, as there are many such events and it is difficult to tell which ones are more important.
 
* Events about FHI staff giving policy advice (to e.g. government bodies) are not included, as there are many such events and it is difficult to tell which ones are more important.
 +
* For "Project Announcement" or "Intiatives", the intention is to include announcements of major initiatives and research programs launched by FHI, especially those aimed at training researchers or advancing existential risk mitigation.
 +
* For "Collaboration", the intention is to include significant collaborations with other institutions where FHI co-authored reports, conducted joint research, or played a major role in advising.
  
 
{| class="sortable wikitable"
 
{| class="sortable wikitable"
Line 110: Line 111:
 
| 2005–2007 || || Project || Lighthill Risk Network is created by Peter Taylor of FHI.<ref name="fhi-report" />
|-
| 2007 || {{dts|April}} || Internal review || Issue 4 of the FHI Progress Report (apparently renamed from "Bimonthly Progress Report") is published. This issue highlights key developments across the institute’s projects, focusing on topics like existential risk reduction and emerging technologies. It serves as an internal checkpoint to assess the direction of FHI’s ongoing work and to provide strategic updates on project milestones achieved in early 2007.<ref name="report-april-2007">{{cite web |url=http://www.fhi.ox.ac.uk:80/newsletters/April%202007%20final.pdf |title=Progress Report - Issue 4 |publisher=Future of Humanity Institute |accessdate=March 18, 2018 |archiveurl=https://web.archive.org/web/20081221082328/http://www.fhi.ox.ac.uk:80/newsletters/April%202007%20final.pdf |archivedate=December 21, 2008 |dead-url=yes}}</ref>
|-
| 2007 || {{dts|May 26}}–27 || Workshop || The Whole Brain Emulation Workshop is hosted by FHI. This two-day event brings together experts in neuroscience, computational modeling, and artificial intelligence to discuss the feasibility and ethical considerations of emulating a human brain in a computer. It lays a foundation for whole brain emulation research, with discussions ranging from technical challenges to long-term applications. The workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008, establishing FHI’s ongoing influence in the brain emulation field.<ref name="fhi-report" />{{rp|62}}<ref name="report-april-2007" />{{rp|2}}<ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2007 || {{dts|June 4}} || Conference || Nick Shackel of FHI organizes the Bayesian Approaches to Agreement Conference. This event gathers leading thinkers to explore Bayesian principles in achieving agreement in uncertain conditions. By examining methods for assessing probabilities and evidence in differing viewpoints, this conference contributes to FHI's mission of improving decision-making frameworks and bolstering rational discourse in high-stakes scenarios.<ref name="fhi-report" />{{rp|63}}
|-
| 2007 || {{dts|July 18}} || Internal review || The first FHI Achievements Report, covering November 2005 to July 2007, is published. This report outlines FHI’s major accomplishments, including research initiatives and institutional growth, and reflects the organization’s commitment to transparency and accountability in its existential risk work. It highlights FHI’s rapid progress in interdisciplinary research and provides a roadmap for future directions.<ref name="fhi-report" />
|-
| 2007 || {{dts|August 24}} || Publication || ''Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker'' is published. This book, co-edited by FHI’s Guy Kahane, pays tribute to philosopher Gordon Baker and includes essays on Wittgenstein’s legacy. The publication underscores FHI’s dedication to supporting diverse intellectual pursuits, including philosophy, which informs the ethical underpinnings of its work on humanity’s long-term prospects.<ref>{{cite web |url=https://www.amazon.co.uk/Wittgenstein-His-Interpreters-Essays-Memory/dp/1405129220/ |title=Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books">{{cite web |url=http://www.fhi.ox.ac.uk:80/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20101103223749/http://www.fhi.ox.ac.uk:80/selected_outputs/recent_books |archivedate=November 3, 2010 |dead-url=yes}}</ref>
|-
| 2007 || Autumn || Workshop || Nick Bostrom and Rafaela Hillerbrand of FHI organize an Existential Risk Workshop around this time. This event addresses the critical risks threatening humanity's survival, such as advanced AI, biotechnological dangers, and catastrophic events. Scholars and practitioners convene to evaluate these risks and strategize preventive measures, reinforcing FHI’s position as a leader in existential risk research.<ref name="fhi-report" />{{rp|74}}<ref name="bostrom-cv" />{{rp|17}}
|-
| 2007 || {{dts|November}} || Website || The ''Practical Ethics'' blog, managed by FHI’s Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches. This blog serves as a platform for analyzing ethical issues related to scientific advancements and public policy. Over time, it adopts several names, such as ''Practical Ethics in the News'' and ''Practical Ethics: Ethical Perspectives on the News,'' with its initial URL hosted at <code>ethicsinthenews.typepad.com/practicalethics</code>. The blog evolves to become a critical resource for ethical perspectives on new technological developments and bioethics issues.<ref name="annual-report-oct-2008-to-sep-2009">{{cite web |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/19901/FHI_Annual_Report.pdf |title=Wayback Machine |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20120413031223/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/19901/FHI_Annual_Report.pdf |archivedate=April 13, 2012 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/updates.html |title=Future of Humanity Institute Updates |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20080915151519/http://www.fhi.ox.ac.uk:80/updates.html |archivedate=September 15, 2008 |dead-url=yes}}</ref>
 
|-
| 2008 || || Publication || "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published. This report lays out a comprehensive technical framework for creating a whole-brain emulation, discussing the requirements in neuroscience, computer science, and ethical considerations. It becomes a cornerstone publication for FHI, guiding future research in cognitive science and AI alignment.<ref name="annual-report-oct-2008-to-sep-2009" /> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2008–2009 || || Financial || FHI reports donations from three unnamed philanthropists and the Bright Horizons Foundation, which help sustain research activities and expand efforts on existential risk and ethical technology development.<ref name="annual-report-oct-2008-to-sep-2009" />{{rp|23}}
|-
| 2008–2010 || || Workshop || FHI hosts a Cognitive Enhancement Workshop during this period, convening experts to discuss methods for enhancing cognitive abilities through technology and their ethical implications. The workshop is part of FHI’s broader inquiry into human enhancement and aims to inform ethical frameworks around cognitive interventions.<ref name="achievements-report-2008-to-2010" />
|-
| 2008–2010 || || Workshop || FHI organizes a symposium on "Cognitive Enhancement and Related Ethical and Policy Issues," which gathers scholars to explore the social and policy ramifications of cognitive enhancement technologies. This event reinforces FHI’s role in leading discussions on the ethics of emerging technologies.<ref name="achievements-report-2008-to-2010" />
|-
| 2008–2010 || || Workshop || FHI co-hosts an event titled "Uncertainty, Lags, and Nonlinearity: Challenges to Governance in a Turbulent World," addressing the difficulties in governing complex systems under uncertainty. The workshop explores strategies for managing global risks where outcomes are unpredictable, reinforcing FHI’s interdisciplinary approach to governance and risk management.<ref name="achievements-report-2008-to-2010" />
|-
| 2008–2010 || || Financial || FHI reports receiving "about 10" philanthropic donations from private individuals, which contribute to funding ongoing projects on global catastrophic risks, AI safety, and ethical technology development.<ref name="achievements-report-2008-to-2010" />
|-
| 2008 || {{dts|January 22}} || Website || The domain name for the Global Catastrophic Risks website, <code>global-catastrophic-risks.com</code>, is registered. This domain serves as an information hub for FHI’s work on existential threats. The first snapshot on the Internet Archive is recorded on May 5, 2008.<ref>{{cite web |url=https://whois.icann.org/en/lookup?name=global-catastrophic-risks.com |title=Showing results for: global-catastrophic-risks.com |publisher=ICANN WHOIS |accessdate=March 11, 2018 |quote=Creation Date: 2008-01-22T20:47:11Z}}</ref><ref>{{cite web |url=https://web.archive.org/web/20080701000000*/global-catastrophic-risks.com |title=global-catastrophic-risks.com |accessdate=March 10, 2018}}</ref>
|-
| 2008 || {{dts|September 15}} || Publication || ''Global Catastrophic Risks'' is published. Edited by Nick Bostrom and Milan M. Ćirković, this book consolidates research on existential risks, offering a multi-faceted analysis of threats ranging from climate change to artificial intelligence. It becomes an influential text within FHI and broader academic circles on preventing catastrophic events.<ref>{{cite web |url=https://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503 |title=Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2009 || || Publication || "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published. This paper addresses the difficulty of assessing rare but impactful events, offering methods to better evaluate these high-stakes risks. It is recognized as a featured FHI publication, reflecting the institute’s commitment to improving risk assessment methodologies.<ref name="annual-report-oct-2008-to-sep-2009" /><ref name="selected-publications-archive" />
|-
| 2009 || {{dts|January 1}} || Publication || On the blog ''Overcoming Bias'', Nick Bostrom publishes a post proposing the "Parliamentary Model" for addressing moral uncertainty. This model suggests weighing different ethical perspectives as if they were political parties, allowing for structured decision-making amid moral ambiguity. Although Bostrom mentioned an ongoing paper with Toby Ord on this topic, it seems unpublished as of 2018. The idea is frequently referenced in philosophical discussions on ''LessWrong'' and other platforms.<ref name="achievements-report-2008-to-2010" /><ref>{{cite web |url=http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html |title=Overcoming Bias : Moral uncertainty – towards a solution? |accessdate=March 10, 2018}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/l55/is_the_potential_astronomical_waste_in_our/ |title=Is the potential astronomical waste in our universe too small to care about? |first=Wei |last=Dai |date=October 21, 2014 |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://reflectivedisequilibrium.blogspot.com/2014/08/population-ethics-and-inaccessible.html |title=Population ethics and inaccessible populations |accessdate=March 16, 2018 |first=Carl |last=Shulman |date=August 21, 2014 |website=Reflective Disequilibrium |quote=Some approaches, such as Nick Bostrom and Toby Ord's Parliamentary Model, consider what would happen if each normative option had resources to deploy on its own (related to its plausibility or appeal), and look for Pareto-improvements.}}</ref>
|-
| 2009 || {{dts|January 22}} || Publication || ''Human Enhancement'' is published. Edited by Julian Savulescu and Nick Bostrom, this book compiles essays exploring the ethical implications of enhancing human capacities through technology. The publication deepens FHI’s contributions to bioethics and human enhancement debates, especially concerning cognitive, physical, and moral augmentation.<ref>{{cite web |url=https://www.amazon.co.uk/Human-Enhancement-Julian-Savulescu/dp/0199299722/ |title=Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" /><ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2009 || {{dts|February}} || Website || The group blog ''LessWrong'' launches, dedicated to rationality and cognitive improvement. Sponsored by FHI, ''LessWrong'' becomes a community space for discussing decision theory, existential risks, and ethics. Though FHI’s direct contributions are minimal, the blog is highly influential among FHI researchers and the wider rationalist community.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/FAQ#Where_did_Less_Wrong_come_from.3F |title=FAQ - Lesswrongwiki |accessdate=June 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref name="annual-report-oct-2008-to-sep-2009" /><ref name="sotala-siai-vs-fhi" />
|-
| 2009 || {{dts|March 6}} || Social media || The FHI YouTube account, FHIOxford, is created. This channel hosts videos related to FHI’s research, public lectures, and discussions on existential risk, allowing FHI to extend its educational outreach and share insights into its work with a wider audience.<ref>{{cite web |url=https://www.youtube.com/user/FHIOxford/about |publisher=YouTube |title=FHIOxford - YouTube |accessdate=March 15, 2018}}</ref>
|-
| 2009 || {{dts|June 19}} || Publication || "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" by Nick Bostrom and Anders Sandberg is published in the journal ''Science and Engineering Ethics''. This paper explores the ethical challenges of cognitive enhancement technologies and the societal implications of augmenting human abilities. By 2011, it becomes the most-cited article from FHI, underscoring its impact in bioethics and policy discussions.<ref>{{cite web |url=https://nickbostrom.com/cognitive.pdf |title=Cognitive Enhancement: Methods, Ethics, Regulatory Challenges |first1=Nick |last1=Bostrom |first2=Anders |last2=Sandberg |year=2009 |accessdate=March 15, 2018}}</ref><ref name="selected-publications-archive" /><ref name="sotala-siai-vs-fhi" />
|-
| 2009 || {{dts|September}} || Internal review || The FHI Annual Report, covering the period October 1, 2008 to September 30, 2009, is likely published during this month. This report details FHI’s research advancements, financial statements, and strategic directions, reinforcing its commitment to transparency and scholarly excellence.<ref name="annual-report-oct-2008-to-sep-2009" />
 
|-
| 2010 || || Internal review || The FHI Achievements Report, covering the years 2008 to 2010, is likely published. This report provides an overview of FHI’s activities, research outputs, and organizational growth, summarizing the institute’s efforts in global catastrophic risk mitigation and ethics.<ref name="achievements-report-2008-to-2010" />
|-
| 2010 || {{dts|June 21}} || Publication || ''Anthropic Bias'' by Nick Bostrom is published. The book delves into reasoning under observation selection effects, exploring how knowledge of one's existence as an observer can impact probabilistic reasoning.<ref>{{cite web |url=https://www.amazon.co.uk/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946/ |title=Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2010 || {{dts|June}} || Staff || Eric Mandelbaum joins FHI as a Postdoctoral Research Fellow, contributing to interdisciplinary research on cognitive science and philosophy. He would remain at FHI until July 2011.<ref>{{cite web |url=https://static1.squarespace.com/static/54c160eae4b060a8974e59cc/t/59b05ac5f7e0ab27e55d54ee/1504729797699/CV+May+2017.doc |title=Eric Mandelbaum |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20180316012900/https://static1.squarespace.com/static/54c160eae4b060a8974e59cc/t/59b05ac5f7e0ab27e55d54ee/1504729797699/CV+May+2017.doc |archivedate=March 16, 2018 |dead-url=no}}</ref>
|-
| 2011 || {{dts|January 14}}–17 || Conference || The Winter Intelligence Conference, organized by FHI, takes place. The conference brings together experts and students in philosophy, cognitive science, and artificial intelligence for discussions on intelligence, ethical AI, and cognitive enhancement.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0013/20173/Winter_Intelligence_Conference_Report_280111.pdf |title=Winter Intelligence |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20110711082741/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0013/20173/Winter_Intelligence_Conference_Report_280111.pdf |archivedate=July 11, 2011 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/archived_events/winter_conference |title=Future of Humanity Institute - Winter Intelligence Conference |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20130116104313/http://www.fhi.ox.ac.uk/archived_events/winter_conference |archivedate=January 16, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/winter-intelligence-conference-2011-2/ |author=Future of Humanity Institute - FHI |title=Winter Intelligence Conference 2011 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=November 8, 2017 |accessdate=March 16, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/winter-intelligence-conference-2011/ |author=Future of Humanity Institute - FHI |title=Winter Intelligence Conference 2011 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 14, 2011 |accessdate=March 16, 2018}}</ref>
|-
| 2011 || {{dts|March 18}} || Publication || ''Enhancing Human Capacities'' is published. This book, co-edited by Julian Savulescu, Ruud ter Meulen, and FHI's Guy Kahane, examines various forms of human enhancement, including cognitive, physical, and moral augmentation, and discusses ethical, social, and policy implications.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
|-
| 2011 || {{dts|June 9}} || External review || On a comment thread on ''LessWrong'', a discussion unfolds about FHI’s funding needs, productivity, and research focus. This thread covers topics like existential risk prioritization, marginal productivity of hires, and the role of private funding in sustaining FHI’s research initiatives.<ref>{{cite web |url=http://lesswrong.com/lw/634/safety_culture_and_the_marginal_effect_of_a_dollar/4bnx |title=CarlShulman comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2011 || {{dts|September}} || Project || The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches. Directed by Nick Bostrom and working closely with FHI, this interdisciplinary initiative within the Oxford Martin School investigates the societal impacts of emerging technologies, with a focus on potential risks and governance.<ref>{{cite web |url=http://www.futuretech.ox.ac.uk/www.futuretech.ox.ac.uk/index.html |title=Welcome |publisher=Oxford Martin Programme on the Impacts of Future Technology |accessdate=July 26, 2017 |quote=The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.}}</ref>
|-
| 2011 || {{dts|September}} || Staff || Stuart Armstrong joins FHI as a Research Fellow, contributing to research on AI alignment and existential risk mitigation, particularly in the areas of forecasting and decision theory.<ref>{{cite web |url=https://www.linkedin.com/in/stuart-armstrong-2447743/ |title=Stuart Armstrong |accessdate=March 15, 2018 |publisher=LinkedIn}}</ref>
|-
| 2011 || {{dts|September 25}} || External review || Kaj Sotala posts "SIAI vs. FHI achievements, 2008–2010" on ''LessWrong'', providing a comparative analysis of the Future of Humanity Institute and the Machine Intelligence Research Institute (previously the Singularity Institute for Artificial Intelligence), evaluating their respective outputs and contributions over recent years.<ref name="sotala-siai-vs-fhi">{{cite web |url=http://lesswrong.com/lw/7sc/siai_vs_fhi_achievements_20082010/ |title=SIAI vs. FHI achievements, 2008-2010 - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2012 || || Staff || Daniel Dewey joins FHI as a Research Fellow, bringing expertise in AI and machine ethics, focusing on long-term safety and alignment of artificial intelligence systems.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Daniel-Dewey.pdf |title=Daniel-Dewey.pdf |accessdate=March 15, 2018}}</ref>
|-
| 2012 || {{dts|June 6}} || Publication || The technical report "Indefinite Survival Through Backup Copies" by Anders Sandberg and Stuart Armstrong is published. This paper examines the feasibility of maintaining a high survival probability through self-copying, proposing a model in which the number of copies grows logarithmically over time.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0004/26482/2012-1.pdf |title=Indefinite survival through backup copies |date=June 6, 2012 |first1=Anders |last1=Sandberg |first2=Stuart |last2=Armstrong |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20130116012326/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0004/26482/2012-1.pdf |archivedate=January 16, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs |title=Future of Humanity Institute - Publications |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20130112235857/http://www.fhi.ox.ac.uk/selected_outputs |archivedate=January 12, 2013 |dead-url=yes}}</ref>
|-
| 2012 || {{dts|August 15}} || Website || The first Internet Archive snapshot of the Winter Intelligence Conference website is from this day. This site hosts information about the event and related resources for attendees and interested researchers.<ref>{{cite web |url=http://www.winterintelligence.org:80/ |title=Winter Intelligence Conferences {{!}} The future of artificial general intelligence |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20120815232147/http://www.winterintelligence.org:80/ |archivedate=August 15, 2012 |dead-url=yes}}</ref>
|-
| 2012 || {{dts|September 5}} || Social media || The FHI Twitter account, @FHIOxford, is registered, marking the institute’s entry into social media for outreach and public engagement on topics of existential risk, bioethics, and AI safety.<ref>{{cite web |url=https://twitter.com/fhioxford?lang=en |title=Future of Humanity Institute (@FHIOxford) |publisher=Twitter |accessdate=March 11, 2018}}</ref>
|-
| 2012 || {{dts|November 16}} || External review || John Maxwell IV posts "Room for more funding at the Future of Humanity Institute" on ''LessWrong'', initiating a public discussion on FHI’s funding requirements, allocation, and the potential impact of additional resources.<ref>{{cite web |url=http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/ |title=Room for more funding at the Future of Humanity Institute - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2012 || {{dts|December 10}}–11 || Conference || FHI hosts the 2012 conference on Impacts and Risks of Artificial General Intelligence, one of two events in the Winter Intelligence Multi-Conference 2012. Attendees discuss AGI development, associated risks, and strategies for mitigating potential hazards in artificial intelligence advancement.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/archive_news |title=Future of Humanity Institute - News Archive |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20130112235735/http://www.fhi.ox.ac.uk/archive_news |archivedate=January 12, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.winterintelligence.org/oxford2012/agi-impacts/ |title=AGI Impacts {{!}} Winter Intelligence Conferences |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20121030120754/http://www.winterintelligence.org/oxford2012/agi-impacts/ |archivedate=October 30, 2012 |dead-url=yes}}</ref>
 
|-
| 2013 || || Staff || Carl Frey and Vincent Müller join FHI as Research Fellows, focusing on topics related to technology and existential risk.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/about/staff/ |title=Staff {{!}} Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20130615192159/http://www.fhi.ox.ac.uk:80/about/staff/ |archivedate=June 15, 2013 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|February}} || Publication || "Existential Risk Prevention as Global Priority" by Nick Bostrom is published in ''Global Policy''. This paper examines the significance of existential risk reduction and suggests that preventing catastrophic outcomes should be a global priority.<ref>{{cite web |url=http://www.existential-risk.org/concept.pdf |title=Existential Risk Prevention as Global Priority |first=Nick |last=Bostrom |accessdate=March 14, 2018}}</ref>
|-
| 2013 || {{dts|February 25}} || External review || "Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?" is published in the digital magazine ''Aeon''. The piece, authored by Ross Andersen, covers FHI, existential risk, and Nick Bostrom's views on humanity's future.<ref>{{cite web |url=https://aeon.co/essays/will-humans-be-around-in-a-billion-years-or-a-trillion |title=Omens When we peer into the fog of the deep future what do we see – human extinction or a future among the stars? |author=Ross Andersen |publisher=Aeon |date=February 25, 2013 |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/gvb/link_wellwritten_article_on_the_future_of/ |title=[LINK] Well-written article on the Future of Humanity Institute and Existential Risk |date=March 2, 2013 |author=ESRogs |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/aeon-magazine-feature-omens/ |author=Future of Humanity Institute - FHI |title=Aeon Magazine Feature: "Omens" - Future of Humanity Institute |publisher=Future of Humanity Institute |date=February 25, 2013 |accessdate=March 16, 2018}}</ref>
|-
| 2013 || {{dts|March 12}} || Publication || "Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox" by Stuart Armstrong and Anders Sandberg is published. The paper discusses models for the rapid spread of intelligent life across the galaxy and examines the Fermi Paradox.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |title=Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox |first1=Stuart |last1=Armstrong |first2=Anders |last2=Sandberg |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140409031029/http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |archivedate=April 9, 2014 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|May 30}} || Collaboration || A collaboration between FHI and the insurance company Amlin is announced. This partnership focuses on research into systemic risks, particularly how they may affect society and how insurance can help mitigate such risks.<ref>{{cite web |url=https://www.oxfordmartin.ox.ac.uk/news/201305AmlinFHI |publisher=Oxford Martin School |title=FHI & Amlin join forces to understand systemic risk |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/amlin/ |author=Future of Humanity Institute - FHI |title=FHI-Amlin Collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/research/amlin/ |title=FHI-Amlin Collaboration {{!}} Future of Humanity Institute |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140523110804/http://www.fhi.ox.ac.uk:80/research/amlin/ |archivedate=May 23, 2014 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|June}} || Staff || Nick Beckstead joins FHI as a Research Fellow, focusing on long-term priorities in effective altruism and existential risk. He would remain at FHI until November 2014.<ref>{{cite web |url=https://www.linkedin.com/in/nick-beckstead-7aa54374/ |title=Nick Beckstead |accessdate=March 15, 2018 |publisher=LinkedIn}}</ref>
|-
| 2013 || {{dts|September 17}} || Publication || "The Future of Employment: How Susceptible are Jobs to Computerisation?" by Carl Benedikt Frey and Michael A. Osborne is published. The study assesses the potential impacts of automation on various jobs and predicts which occupations are most at risk of computerization.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/The-Future-of-Employment-How-Susceptible-Are-Jobs-to-Computerization.pdf |first1=Carl Benedikt |last1=Frey |first2=Michael A. |last2=Osborne |title=The Future of Employment: How Susceptible are Jobs to Computerisation? |accessdate=March 14, 2018}}</ref>
|-
| 2013 || {{dts|November}} || Workshop || FHI hosts a week-long math workshop led by the Machine Intelligence Research Institute (MIRI). This workshop brings together mathematicians and researchers to develop technical approaches for AI alignment and safety.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/miri/ |author=Future of Humanity Institute - FHI |title=FHI Hosts Machine Intelligence Research Institute Maths Workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=November 26, 2013 |accessdate=March 16, 2018}}</ref>
|-
| 2013 || {{dts|December 27}} || External review || Chris Hallquist posts "Donating to MIRI vs. FHI vs. CEA vs. CFAR" on ''LessWrong'', comparing the merits of donating to each organization. Seán Ó hÉigeartaigh from FHI participates in the discussion to address questions about FHI's funding needs.<ref>{{cite web |url=http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/ |title=Donating to MIRI vs. FHI vs. CEA vs. CFAR - Less Wrong Discussion |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2014 || {{dts|January}} || Project || The Global Priorities Project (GPP) launches as a pilot within the Centre for Effective Altruism. FHI researchers Owen Cotton-Barratt and Toby Ord are key members of the project, which later becomes a collaboration between the Centre for Effective Altruism and FHI.<ref>{{cite web |url=http://globalprioritiesproject.org/wp-content/uploads/2015/03/GPP-Strategy-Overview-February-2015.pdf |title=Global Priorities Project Strategy Overview |accessdate=March 10, 2018}}</ref><ref>{{cite web |url=http://globalprioritiesproject.org/ |publisher=The Global Priorities Project |title=HOME |accessdate=March 10, 2018}}</ref><ref>{{cite web |url=https://www.centreforeffectivealtruism.org/history |title=Our history |publisher=Centre For Effective Altruism |accessdate=March 10, 2018}}</ref>
|-
| 2014 || {{dts|February 4}} || Workshop || FHI hosts a workshop on agent-based modeling. The workshop aims to explore complex systems and the application of agent-based models in understanding social and economic dynamics.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/abm-workshop/ |author=Future of Humanity Institute - FHI |title=FHI hosts Agent Based Modelling workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=February 7, 2014 |accessdate=March 16, 2018}}</ref>
|-
| 2014 || || Staff || Toby Ord joins FHI as a Research Fellow.<ref>{{cite web |url=http://www.amirrorclear.net/files/toby-ord-cv.pdf |title=Toby Ord - CV |accessdate=March 15, 2018}}</ref>
|-
| 2014 || {{dts|February 11}}–12 || Conference || FHI holds the FHI–Amlin conference on systemic risk. This event convenes experts from academia and the insurance industry to discuss potential systemic threats to global stability and examine how insurance might play a role in risk management.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/systemic-risk-conference/ |author=Future of Humanity Institute - FHI |title=FHI-Amlin Conference on Systemic Risk - Future of Humanity Institute |publisher=Future of Humanity Institute |date=February 10, 2014 |accessdate=March 16, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/risk-conference-2014/home/ |title=Home {{!}} Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20140717123146/http://www.fhi.ox.ac.uk/risk-conference-2014/home/ |archivedate=July 17, 2014 |dead-url=yes}}</ref>
|-
| 2014 || || Staff || John Cusbert joins FHI as a Research Fellow, for work on the ''Population Ethics: Theory and Practice'' project.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/about/staff/ |title=Staff {{!}} Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20141209042320/http://www.fhi.ox.ac.uk:80/about/staff/ |archivedate=December 9, 2014 |dead-url=yes}}</ref>
|-
| 2014 || {{dts|May 12}} || Social media || FHI researchers Anders Sandberg and Andrew Snyder-Beattie participate in a Reddit AMA ("Ask Me Anything") on the platform's science forum, where they address public questions about FHI's work and existential risks.<ref>{{cite web |url=https://www.reddit.com/r/science/comments/25cnbr/science_ama_series_we_are_researchers_at_the/ |publisher=reddit |title=Science AMA Series: We are researchers at the Future of Humanity Institute at Oxford University, ask us anything! • r/science |accessdate=March 14, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/reddit/ |author=Future of Humanity Institute - FHI |title=Future of Humanity Institute answers questions from the public - Future of Humanity Institute |publisher=Future of Humanity Institute |date=May 16, 2014 |accessdate=March 14, 2018}}</ref>
|-
| 2014–2017 || || Staff || Hilary Greaves joins as principal investigator for ''Population Ethics: Theory and Practice'' (organized by FHI).<ref>{{cite web |url=http://users.ox.ac.uk/~mert2255/cv.pdf |title=Curriculum Vitae - Hilary Graves |accessdate=March 16, 2018}}</ref>
|-
| 2014 || {{dts|July}} || Workshop || FHI hosts a MIRIx Workshop in collaboration with the Machine Intelligence Research Institute (MIRI) to develop a technical agenda for AI safety. This workshop involves experts in machine learning, mathematics, and philosophy, aiming to advance the AI alignment field.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/mirix-at-fhi/ |author=Future of Humanity Institute - FHI |title=MIRIx at FHI - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 16, 2014 |accessdate=March 16, 2018}}</ref>
|-
| 2014 || {{dts|July}}–September || Publication || Nick Bostrom's book ''Superintelligence: Paths, Dangers, Strategies'' is published. The book explores potential risks from advanced AI and offers strategic analysis on preventing catastrophic outcomes associated with AI.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |accessdate=July 6, 2017 |date=September 17, 2015}}</ref> In March 2017, the {{W|Open Philanthropy Project}} would consider this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."<ref name="open-phil-grant-march-2017" />
|-
| 2014 || {{dts|September}} || Publication || The policy brief "Unprecedented Technological Risks" by Nick Beckstead et al. is published. This brief outlines strategies for managing high-stakes risks associated with new technologies and highlights areas for focused policy intervention.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Unprecedented-Technological-Risks.pdf |title=Unprecedented Technological Risks |first1=Nick |last1=Beckstead |first2=Nick |last2=Bostrom |first3=Niel |last3=Bowerman |first4=Owen |last4=Cotton-Barratt |first5=William |last5=MacAskill |first6=Seán Ó |last6=hÉigeartaigh |first7=Toby |last7=Ord |accessdate=March 14, 2018}}</ref>
|-
| 2014 || {{dts|September 24}} || Social media || Nick Bostrom participates in a Reddit AMA, where he answers questions from the public about his work at FHI and his book ''Superintelligence''.<ref>{{cite web |url=https://www.reddit.com/r/science/comments/2hbp21/science_ama_series_im_nick_bostrom_director_of/ |publisher=reddit |title=Science AMA Series: I'm Nick Bostrom, Director of the Future of Humanity Institute, and author of "Superintelligence: Paths, Dangers, Strategies", AMA • r/science |accessdate=March 14, 2018}}</ref>
|-
| 2014 || {{dts|September 26}} || External review || Daniel Dewey, a Research Fellow at FHI at the time, posts "The Future of Humanity Institute Could Make Use of Your Money" on ''LessWrong''. The post generates discussion about the benefits of donating to FHI in the comments section.<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Daniel+Dewey |date=March 1, 2018 |title=Daniel Dewey |publisher=AI Watch |accessdate=March 14, 2018}}</ref>
|-
| 2014 || {{dts|October 1}} || Financial || FHI posts a note of thanks to the Investling Group for a recent financial contribution. The specific donation amount and date are not disclosed.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/investling/ |author=Future of Humanity Institute - FHI |title=Thanks - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 1, 2014 |accessdate=March 16, 2018}}</ref>
|-
| 2014 || {{Dts|October 28}} || Website || The domain name for the "Population Ethics: Theory and Practice" website, <code>populationethics.org</code>, is registered. The project is organized by FHI and supported by the Leverhulme Trust. The first Internet Archive snapshot of the website would be taken on December 23, 2014.<ref>{{cite web |url=https://whois.icann.org/en/lookup?name=populationethics.org |title=Showing results for: POPULATIONETHICS.ORG |publisher=ICANN WHOIS |accessdate=March 15, 2018 |quote=Creation Date: 2014-10-28T08:53:08Z}}</ref><ref>{{cite web |url=http://www.populationethics.org:80/ |publisher=Population Ethics: Theory and Practice |title=Welcome |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20141223051017/http://www.populationethics.org:80/ |archivedate=December 23, 2014 |dead-url=yes}}</ref>
|-
| 2014 || {{dts|November 21}} || Publication || "Managing Existential Risk from Emerging Technologies" by Nick Beckstead and Toby Ord is published as part of the 2014 UK Chief Scientific Advisor's report "Innovation: Managing Risk, Not Avoiding It." This chapter addresses the risks associated with emerging technologies, such as artificial intelligence and biotechnology, and offers strategies for managing these potential threats.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Managing-existential-risks-from-Emerging-Technologies.pdf |title=Innovation: managing risk, not avoiding it |year=2014 |accessdate=March 14, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/go-science-report/ |title=FHI contributes chapter on existential risk to UK Chief Scientific Advisor's report |accessdate=March 14, 2018}}</ref>
|-
| 2015 || || Publication || "Learning the Preferences of Bounded Agents" is published, co-authored by Owain Evans of FHI. The paper explores methods for understanding the preferences of agents that operate under cognitive or resource constraints, with applications in machine learning and AI safety.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf |title=Learning the Preferences of Bounded Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2015 || || Publication || "Corrigibility," a collaborative work by researchers including Stuart Armstrong from FHI, is published. The paper addresses the problem of designing AI systems that can accept human intervention or corrections without resistance, a key element in AI safety.<ref name="selected-publications-archive" />
|-
| 2015 || || Staff || Owain Evans joins FHI as a postdoctoral research scientist, focusing on understanding human preferences and improving AI alignment approaches.<ref>{{cite web |url=https://www.linkedin.com/in/owain-evans-78b210133/ |title=Owain Evans |publisher=LinkedIn |accessdate=March 15, 2018}}</ref>
|-
| 2015 || || Staff || Ben Levinstein joins FHI as a Research Fellow, contributing to philosophical research in ethics and decision-making until his departure in 2016.<ref>{{cite web |url=http://www.levinstein.org/cv.html |publisher=Ben Levinstein |title=CV |accessdate=March 15, 2018}}</ref>
|-
| 2015 || || Staff || Feng Zhou joins FHI as a Research Fellow, working on the FHI–Amlin collaboration focused on systemic risk and mitigation strategies in the financial and technological domains.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/about/staff/ |title=Staff {{!}} Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20150413173829/http://www.fhi.ox.ac.uk/about/staff/ |archivedate=April 13, 2015 |dead-url=yes}}</ref>
|-
| 2015 || || Staff || Simon Beard joins FHI as a Research Fellow in philosophy, contributing to the "Population Ethics: Theory and Practice" project until 2016.<ref>{{cite web |url=http://sjbeard.weebly.com/cv.html |publisher=Simon Beard |title=CV |accessdate=March 15, 2018}}</ref>
|-
| 2015 || || Project || FHI establishes the Strategic Artificial Intelligence Research Centre, directed by Nick Bostrom, to focus on strategic issues and long-term safety of AI technologies.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/ |author=Future of Humanity Institute - FHI |title=Strategic Artificial Intelligence Research Centre - Future of Humanity Institute |publisher=Future of Humanity Institute |date=September 28, 2015 |accessdate=March 16, 2018}}</ref>
|-
| 2015 || {{dts|January 2}}–5 || Conference || The AI safety conference ''The Future of AI: Opportunities and Challenges'' takes place in Puerto Rico. Organized by the Future of Life Institute, the gathering includes FHI's Nick Bostrom as a speaker, discussing risks associated with advanced AI technologies.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref> Nate Soares, executive director of the {{W|Machine Intelligence Research Institute}}, would later call this conference the "turning point" at which top academics began to focus on AI risk.<ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref>
|-
| 2015 || {{dts|January 8}} || Internal review || FHI publishes a brief overview of its activities in 2014, summarizing its research contributions to existential risks and AI safety.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-in-2014/ |author=Future of Humanity Institute - FHI |title=FHI in 2014 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 8, 2015 |accessdate=March 16, 2018}}</ref>
|-
| 2015 || {{dts|July 1}} || Financial || The Future of Life Institute announces the recipients of its first round of AI safety grants, which would be disbursed on September 1. FHI's Nick Bostrom receives $1,500,000 to support the new Strategic Artificial Intelligence Research Centre, and Owain Evans of FHI receives $227,212 for a project on inferring human values.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-researcher-nick-bostrom/ |title=AI Researcher Nick Bostrom - Future of Life Institute |publisher=Future of Life Institute |accessdate=March 14, 2018}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-researcher-owain-evans/ |title=AI Researcher Owain Evans - Future of Life Institute |publisher=Future of Life Institute |accessdate=March 14, 2018}}</ref>
|-
| 2015 || {{dts|July 30}} || External review || A critique on ''LessWrong'' highlights the need for improved communication around existential risks on FHI's website, sparking discussions on accessible outreach strategies for existential risk topics.<ref>{{cite web |url=http://lesswrong.com/lw/mjy/help_build_a_landing_page_for_existential_risk/ |title=Help Build a Landing Page for Existential Risk? - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2015 || {{dts|September 1}} || Financial || FHI announces that Nick Bostrom has received a €2 million European Research Council Advanced Grant to further his research on AI safety and existential risk mitigation.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-awarded-prestigious-e2m-erc-grant/ |author=Future of Humanity Institute - FHI |title=FHI awarded prestigious €2m ERC Grant - Future of Humanity Institute |publisher=Future of Humanity Institute |date=September 25, 2015 |accessdate=March 16, 2018}}</ref>
|-
| 2015 || {{dts|September 15}} || Social media || Anders Sandberg from FHI hosts an AMA on Reddit, addressing questions about future studies, human enhancement, and global catastrophic risks.<ref>{{cite web |url=https://www.reddit.com/r/Futurology/comments/3l1jqs/i_am_a_researcher_at_the_future_of_humanity/ |publisher=reddit |title=I am a researcher at the Future of Humanity Institute in Oxford, working on future studies, human enhancement, global catastrophic risks, reasoning under uncertainty and everything else. Ask me anything! • r/Futurology |accessdate=March 14, 2018}}</ref>
|-
| 2015 || {{dts|October}} || Publication || "Moral Trade" by Toby Ord is published in the journal ''Ethics'', examining how moral goods can be exchanged under ethical systems.<ref>{{cite web |url=http://www.amirrorclear.net/files/moral-trade.pdf |title=Moral Trade |first=Toby |last=Ord |journal=Ethics |year=2015 |accessdate=March 14, 2018}}</ref>
|-
| 2015 || {{dts|November 23}} || External review || Nick Bostrom and FHI are featured in ''The New Yorker'' article "The Doomsday Invention," which explores the implications of artificial intelligence for humanity's future.<ref>{{cite web |url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? |publisher=The New Yorker |first=Raffi |last=Khatchadourian |date=November 23, 2015 |accessdate=March 15, 2018}}</ref>
|-
| 2016 || {{dts|January 26}} || Publication || "The Unilateralist's Curse and the Case for a Principle of Conformity" by Nick Bostrom, Thomas Douglas, and Anders Sandberg is published in the journal ''Social Epistemology''. The paper discusses risks associated with unilateral decisions in global contexts, highlighting the need for collective decision-making in high-stakes scenarios. This is a featured FHI publication.<ref>{{cite web |url=https://www.tandfonline.com/doi/full/10.1080/02691728.2015.1108373 |title=The Unilateralist's Curse and the Case for a Principle of Conformity |publisher=Taylor & Francis |accessdate=March 14, 2018}}</ref><ref name="selected-publications-archive" />
|-
| 2016 || {{dts|February 8}}–9 || Workshop || The Global Priorities Project (a collaboration between FHI and the Centre for Effective Altruism) hosts a policy workshop on existential risk. Attendees include "twenty leading academics and policymakers from the UK, USA, Germany, Finland, and Sweden".<ref>{{cite web |url=https://www.fhi.ox.ac.uk/workshop-hosted-on-existential-risk/ |author=Future of Humanity Institute - FHI |title=Policy workshop hosted on existential risk - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 25, 2016 |accessdate=March 13, 2018}}</ref><ref name="annual-review-2016" />
|-
| 2016 || {{dts|May}} || Publication || The Global Priorities Project, associated with FHI, releases the Global Catastrophic Risk Report 2016. This report examines various risks that could pose global threats to humanity, such as pandemics, nuclear war, and AI, and provides recommendations for international mitigation strategies.<ref name="newsletter-summer-2016">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-newsletter-july-2016/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Summer 2016 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || || Publication || Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf |title=Off-policy Monte Carlo agents with variable behaviour policies |first=Stuart |last=Armstrong |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2016 || {{dts|May}} || Workshop || FHI hosts a week-long workshop in Oxford titled "The Control Problem in AI", attended by ten members of the Machine Intelligence Research Institute. The workshop aims to tackle critical issues surrounding AI alignment and control.<ref name="annual-review-2016" />
|-
| 2016 || || Publication || "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://stuhlmueller.org/papers/preferences-aaai2016.pdf |title=Learning the Preferences of Ignorant, Inconsistent Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2016 || {{dts|May 27}}–{{dts|June 17}} || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the Machine Intelligence Research Institute and FHI, brings together professionals to discuss technical challenges associated with AI robustness and reliability. Talks are presented by FHI researchers Jan Leike and Stuart Armstrong.<ref>{{cite web |url=https://intelligence.org/colloquium-series/ |title=Colloquium Series on Robust and Beneficial AI - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |accessdate=March 13, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/colloquium-series-on-robust-and-beneficial-ai/ |author=Future of Humanity Institute - FHI |title=Colloquium Series on Robust and Beneficial AI - Future of Humanity Institute |publisher=Future of Humanity Institute |date=August 5, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || || Project || The Global Politics of AI Research Group is established by Carrick Flynn and Allan Dafoe (both of whom are affiliated with FHI). The group "consists of eleven research members [and] more than thirty volunteers" and "has the mission of helping researchers and political actors to adopt the best possible strategy around the development of AI."<ref name="annual-review-2016" /> (It's not clear where the group is based or if it even meets physically.)
|-
| 2016 || {{Dts|June}} || Staff || FHI recruits William MacAskill and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.<ref name="newsletter-summer-2016" /> (This programme appears to have become the Global Priorities Institute, which is not to be confused with the Global Priorities Project.)
|-
| 2016 || || Publication || "Thompson Sampling is Asymptotically Optimal in General Environments" by Leike et al. is published.<ref>{{cite web |url=http://auai.org/uai2016/proceedings/papers/20.pdf |title=Thompson Sampling is Asymptotically Optimal in General Environments |first1=Jan |last1=Leike |first2=Tor |last2=Lattimore |first3=Laurent |last3=Orseau |first4=Marcus |last4=Hutter |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2016 || {{dts|June}} || Publication || ''The Age of Em: Work, Love, and Life When Robots Rule the Earth'', a book about the implications of whole brain emulation by FHI research associate Robin Hanson, is published.<ref>{{cite web |url=http://ageofem.com/ |title=The Age of Em, A Book |accessdate=March 13, 2018}}</ref> In October, FHI and Hanson organize a workshop about the book.<ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/robin-hanson-and-fhi-hold-seminar-and-public-talk-on-the-age-of-em/ |author=Future of Humanity Institute - FHI |title=Robin Hanson and FHI hold seminar and public talk on "The Age of Em" - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 25, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || || Staff || Owen Cotton-Barratt joins FHI as a Research Fellow.<ref name="team-page-2016-06-27">{{cite web |url=https://www.fhi.ox.ac.uk/about/the-team/ |author=www.alz.consulting |title=Future of Humanity Institute |publisher=The Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20160627021237/https://www.fhi.ox.ac.uk/about/the-team/ |archivedate=June 27, 2016 |dead-url=yes}}</ref>
|-
| 2016 || {{dts|June 1}} || Publication || The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. The paper, a collaboration between Google DeepMind and FHI, features Stuart Armstrong of FHI among the authors. It is presented at the Conference on Uncertainty in Artificial Intelligence (UAI).<ref>{{cite web |url=https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/ |title=New paper: "Safely interruptible agents" - Machine Intelligence Research Institute |publisher=Machine Intelligence Research Institute |date=September 12, 2016 |first=Rob |last=Bensinger |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review">{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum |accessdate=March 10, 2018}}</ref><ref name="selected-publications-archive" />
|-
| 2016 || || Staff || Eric Drexler becomes an Oxford Martin Senior Fellow at FHI, and later a Senior Research Fellow. Previously, he was an Academic Visitor and then an Academic Advisor at FHI.<ref name="team-page-2016-06-27" /><ref name="team-page-2016-11-23" />
|-
| 2016 || {{dts|August}} || Staff || Piers Millett joins FHI as Senior Research Fellow, focusing on biosecurity and pandemic preparedness.<ref>{{cite web |url=https://www.linkedin.com/in/pdmillett/ |title=Piers Millett |publisher=LinkedIn |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-hire-first-biotech-expert/ |author=Future of Humanity Institute - FHI |title=FHI hires first biotech policy specialist - Future of Humanity Institute |publisher=Future of Humanity Institute |date=December 5, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || || Staff || Jan Leike joins FHI as a Research Fellow.<ref name="team-page-2016-11-23" />
|-
| 2016 || {{dts|September}} || Financial || The Open Philanthropy Project recommends a grant of $115,652 to FHI to support Piers Millett’s work on biosecurity and pandemic preparedness.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-biosecurity-and-pandemic-preparedness |publisher=Open Philanthropy Project |title=Future of Humanity Institute — Biosecurity and Pandemic Preparedness |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2016 || || Staff || Miles Brundage joins FHI as a Research Fellow.<ref name="team-page-2016-11-23">{{cite web |url=https://www.fhi.ox.ac.uk/about/the-team/ |author=www.alz.consulting |title=Future of Humanity Institute |publisher=The Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20161123155814/https://www.fhi.ox.ac.uk/about/the-team/ |archivedate=November 23, 2016 |dead-url=yes}}</ref>
|-
| 2016 || {{dts|September 16}} || Publication || Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1609.04994 |title=[1609.04994] Exploration Potential |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/new-paper-exploration-potential/ |author=Future of Humanity Institute - FHI |title=Exploration potential - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 5, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || {{dts|September 22}} || Collaboration || FHI’s webpage for its collaboration with Google DeepMind is published, though the exact start date of the collaboration is unspecified.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/deepmind-collaboration/ |author=Future of Humanity Institute - FHI |title=DeepMind collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|September}} (approximate) || Financial || FHI receives a funding offer from Luke Ding to fund Hilary Greaves for four years starting mid-2017 (in case a proposed new institute is unable to raise academic funds for her) and William MacAskill's full salary for five years.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/q3-newsletter/ |author=Future of Humanity Institute |title=Quarterly Update Autumn 2016 |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|November}} || Workshop || The biotech horizon scanning workshop, co-hosted by the Centre for the Study of Existential Risk (CSER) and the Future of Humanity Institute (FHI), identifies potential high-impact developments in biological engineering. The workshop aims to assess emerging biotechnologies' risks and benefits, with findings intended for peer-reviewed publication.<ref name="annual-review-2016"></ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/biotech-horizon-scanning-workshop/ |author=Future of Humanity Institute - FHI |title=Biotech Horizon Scanning Workshop |publisher=Future of Humanity Institute |date=November 2016 |accessdate=March 13, 2018}}</ref>  
|-
| 2016 || {{dts|December}} || Workshop || FHI hosts a workshop on "AI Safety and Blockchain," featuring prominent attendees such as Nick Bostrom, Vitalik Buterin, Jaan Tallinn, Wei Dai, Gwern Branwen, and Allan Dafoe. The workshop explores potential overlaps between AI safety and blockchain technologies, investigating how blockchain could improve global coordination in AI risk management.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-holds-workshop-on-ai-safety-and-blockchain/ |author=Future of Humanity Institute - FHI |title=FHI holds workshop on AI safety and blockchain - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2017 |accessdate=March 13, 2018}}</ref><ref name="annual-review-2016"></ref>  
 
|-
| 2017 || {{dts|January 15}} || Publication || "Agent-Agnostic Human-in-the-Loop Reinforcement Learning" is uploaded to the arXiv. The paper proposes placing human oversight in the interaction between a reinforcement learning agent and its environment, so that a human can intervene on unsafe behavior without requiring any knowledge of, or changes to, the specific learning agent.<ref>{{cite web |url=https://arxiv.org/abs/1701.04079v1 |title=[1701.04079v1] Agent-Agnostic Human-in-the-Loop Reinforcement Learning |accessdate=March 14, 2018}}</ref><ref name="newsletter-spring-2017" />
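
The core idea can be illustrated with a small sketch (an illustrative reconstruction under stated assumptions, not code from the paper): the oversight lives in a wrapper around the environment, so any off-the-shelf agent can be supervised without modification. The <code>is_unsafe</code>, <code>safe_action</code>, and <code>penalty</code> details below are hypothetical stand-ins for the human's judgment.
<pre>
# Illustrative sketch: agent-agnostic human-in-the-loop RL.
# Oversight is implemented as an environment wrapper, so the learning
# agent needs no modification and no knowledge of the overseer.

class HumanOversightWrapper:
    """Wraps any environment exposing reset() and step(action)."""

    def __init__(self, env, is_unsafe, safe_action, penalty=-1.0):
        self.env = env
        self.is_unsafe = is_unsafe      # stand-in for the human's judgment
        self.safe_action = safe_action  # fallback action chosen on intervention
        self.penalty = penalty          # reward shaping applied when intervening
        self.state = None

    def reset(self):
        self.state = self.env.reset()
        return self.state

    def step(self, action):
        if self.is_unsafe(self.state, action):
            # The overseer blocks the proposed action and substitutes a safe one;
            # the agent only observes the modified reward, never the overseer.
            next_state, reward, done, info = self.env.step(self.safe_action(self.state))
            reward = reward + self.penalty
        else:
            next_state, reward, done, info = self.env.step(action)
        self.state = next_state
        return next_state, reward, done, info
</pre>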
 
|-
| 2017 || {{dts|January 23}} || Publication || The report "Existential Risk: Diplomacy and Governance" is published by the Global Priorities Project, whose policy work has since joined FHI. The report gives an overview of existential risks and presents three recommendations, selected from more than 100 proposals, for reducing them: (1) develop governance of geoengineering research; (2) establish international scenario plans and exercises for severe engineered pandemics; and (3) build international attention and support for existential risk reduction.<ref name="newsletter-spring-2017" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Existential-Risks-2017-01-23.pdf |title=Existential Risk: Diplomacy and Governance |year=2017 |first1=Sebastian |last1=Farquhar |first2=John |last2=Halstead |first3=Owen |last3=Cotton-Barratt |first4=Stefan |last4=Schubert |first5=Haydn |last5=Belfield |first6=Andrew |last6=Snyder-Beattie |publisher=Global Priorities Project |accessdate=March 14, 2018}}</ref>
 
|-
| 2017 || {{dts|January 25}} || Publication || The FHI Annual Review 2016 is published. This review highlights FHI's major accomplishments and research focus areas for 2016, including AI safety, biosecurity, and policy recommendations on existential risk.<ref name="annual-review-2016">{{cite web |url=https://www.fhi.ox.ac.uk/fhi-annual-review-2016/ |author=Future of Humanity Institute - FHI |title=FHI Annual Review 2016 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 13, 2018}}</ref>  
 
|-
| 2017 || {{dts|February 9}} || Publication || Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal ''{{W|Global Policy}}''. The paper covers long-term AI development, singleton versus multipolar scenarios, race dynamics, responsible AI development, and identification of possible failure modes, and examines how openness in AI development affects each of these.<ref>{{cite web |url=http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/abstract |title=Strategic Implications of Openness in AI Development |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /><ref name="newsletter-spring-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-spring-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Spring 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
 
|-
| 2017 || {{dts|February 10}} || Workshop || FHI hosts a workshop on normative uncertainty, examining uncertainty about moral frameworks and ethical theories, which impacts decision-making in existential risk policy.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-normative-uncertainty-workshop/ |author=Future of Humanity Institute - FHI |title=Workshop on Normative Uncertainty |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 16, 2018}}</ref>  
 
|-
| 2017 || {{dts|February 19}}–20 || Workshop || FHI, in collaboration with the {{W|Centre for the Study of Existential Risk}} and the [[wikipedia:Leverhulme Centre for the Future of Intelligence|Leverhulme Centre for the Future of Intelligence]], hosts a workshop on potential risks from malicious use of artificial intelligence, including discussion of strategies to mitigate such risks.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/bad-actors-and-artificial-intelligence-workshop/ |author=Future of Humanity Institute - FHI |title=Bad Actors and AI Workshop |publisher=Future of Humanity Institute |date=November 4, 2017 |accessdate=March 16, 2018}}</ref>
 
|-
| 2017 || {{dts|March}} || Financial || The {{W|Open Philanthropy Project}} recommends a grant of $1,995,425 to FHI for general support, aimed at expanding FHI's research capacity in existential risk and related policy work.<ref name="open-phil-guide-grant-seekers" /><ref name="vipul-comment" /><ref name="open-phil-grant-march-2017">{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support |publisher=Open Philanthropy Project |title=Future of Humanity Institute — General Support |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
 
|-
| 2017 || {{dts|April 26}} || Publication || The online book ''Modeling Agents with Probabilistic Programs'' by Owain Evans (FHI Research Fellow), Andreas Stuhlmüller, John Salvatier (FHI intern), and Daniel Filan (FHI intern) is published at [https://agentmodels.org/ <code>https://agentmodels.org/</code>]. The book explains how to model agent behavior using probabilistic programs, with an emphasis on inverse reinforcement learning (IRL); its stated aims are to popularize IRL beyond the machine learning research community and to explain the authors' approach to IRL to the existing AI safety and AI/ML communities.<ref>{{cite web |url=https://agentmodels.org/ |title=Modeling Agents with Probabilistic Programs |accessdate=March 13, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/new-interactive-tutorial-planning-reinforcement-learning/ |author=Future of Humanity Institute - FHI |title=New Interactive Tutorial: Modeling Agents with Probabilistic Programs - Future of Humanity Institute |publisher=Future of Humanity Institute |date=April 26, 2017 |accessdate=March 13, 2018}}</ref><ref name="fli-grant-owain-evans" />
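
The book's models are interactive probabilistic programs (written in WebPPL); the following is a rough Python analogue, an illustrative sketch rather than code from the book. A "softmax-optimal" agent chooses actions with probability proportional to exp(utility), and an observer inverts the model to infer which utility function best explains observed choices, which is the inverse-reinforcement-learning direction the book emphasizes. The food-choice example and candidate utilities are hypothetical.
<pre>
import math
import random

# Illustrative sketch: a softmax (noisily optimal) agent, plus a crude observer
# that infers which candidate utility function best explains observed choices.

ACTIONS = ["pizza", "salad", "noodles"]

def softmax_policy(utilities, temperature=1.0):
    weights = [math.exp(utilities[a] / temperature) for a in ACTIONS]
    total = sum(weights)
    return {a: w / total for a, w in zip(ACTIONS, weights)}

def simulate_choices(utilities, n=50):
    policy = softmax_policy(utilities)
    return random.choices(ACTIONS, weights=[policy[a] for a in ACTIONS], k=n)

def log_likelihood(utilities, observed):
    policy = softmax_policy(utilities)
    return sum(math.log(policy[a]) for a in observed)

# Candidate utility functions the observer considers.
HYPOTHESES = {
    "prefers pizza": {"pizza": 2.0, "salad": 0.0, "noodles": 1.0},
    "prefers salad": {"pizza": 0.0, "salad": 2.0, "noodles": 1.0},
}

observed = simulate_choices(HYPOTHESES["prefers pizza"])
best = max(HYPOTHESES, key=lambda h: log_likelihood(HYPOTHESES[h], observed))
print("Inferred preference:", best)
</pre>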
 
|-
| 2017 || {{dts|April 27}} || Publication || "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv. The paper suggests that advanced civilizations may be aestivating (lying dormant) until a far colder future era of the universe, when the same physical resources would allow vastly more computation, which could explain the apparent absence of visible extraterrestrial activity.<ref>{{cite web |url=https://arxiv.org/abs/1705.03394 |title=[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review" /><ref name="newsletter-summer-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-summer-2017/ |author=Future of Humanity Institute - FHI |title=FHI Quarterly Update Summer 2017 |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 14, 2018}}</ref>
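
A rough back-of-the-envelope calculation (an illustration of the physics motivating the paper, with round numbers chosen for this sketch rather than the authors' figures) shows why waiting could pay off: by Landauer's principle, erasing one bit costs at least kT ln 2 of energy, so a fixed energy budget buys more computation the colder the universe gets.
<pre>
import math

# Landauer's principle: erasing one bit costs at least k * T * ln(2) joules,
# so the number of bit erasures a fixed energy budget can pay for scales as 1/T.
k = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature_kelvin):
    return 1.0 / (k * temperature_kelvin * math.log(2))

t_now = 3.0        # roughly the current cosmic background temperature, K
t_future = 1e-10   # an illustrative far-future temperature, K

gain = bits_per_joule(t_future) / bits_per_joule(t_now)
print(f"Computation per joule improves by a factor of about {gain:.1e}")
# With these illustrative numbers the same energy buys ~3e10 times more
# computation, the kind of multiplier that makes waiting for a colder era attractive.
</pre>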
 
|-
| 2017 || {{dts|May}} || || FHI announces that it will be joining the {{W|Partnership on AI}}, a multi-stakeholder collaboration aimed at promoting responsible AI development and fostering discussion of ethics, safety, and transparency.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/join-partnership-ai/ |author=Future of Humanity Institute - FHI |title=FHI is joining the Partnership on AI |publisher=Future of Humanity Institute |date=May 17, 2017 |accessdate=March 16, 2018}}</ref>
 
|-
| 2017 || {{dts|May 24}} || Publication || "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the {{w|arXiv}}. Three of the paper's authors (Katja Grace, Allan Dafoe, and Owain Evans) are affiliated with FHI. The paper surveys machine learning researchers on when AI systems are expected to reach and surpass human performance across a range of tasks.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts |accessdate=July 13, 2017}}</ref>
 
|-
| 2017 || {{dts|July}} || Financial || The {{W|Open Philanthropy Project}} recommends (to the Open Philanthropy Project fund, Good Ventures, or some other entity)<ref name="open-phil-guide-grant-seekers" /><ref name="vipul-comment" /> a grant of $299,320 to Yale University "to support research on the global politics of advanced artificial intelligence". The work will be led by Allan Dafoe, who will conduct part of the work at FHI.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe |publisher=Open Philanthropy Project |title=Yale University — Research on the Global Politics of AI |date=December 15, 2017 |accessdate=March 11, 2018}}</ref>
|-
| 2017 || {{dts|July 3}} || Publication || Slides for an upcoming paper by FHI researchers Anders Sandberg, Eric Drexler, and Toby Ord, titled "Dissolving the Fermi Paradox", are posted online. The slides argue that, once scientific uncertainty in the underlying parameters is taken into account, the absence of observed extraterrestrial civilizations is much less surprising than commonly assumed.<ref>{{cite web |url=http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html |title=Has the Fermi paradox been resolved? - Marginal REVOLUTION |publisher=Marginal REVOLUTION |date=July 3, 2017 |accessdate=March 13, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 13, 2018}}</ref>
 
|-
| 2017 || {{dts|July 17}} || Publication || "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv. The paper proposes a scheme in which a human overseer blocks catastrophic actions during the early phase of training, and a supervised "blocker" model trained on the human's decisions then takes over, so that reinforcement learning agents can be trained without executing catastrophic actions.<ref>{{cite web |url=https://arxiv.org/abs/1707.05173 |title=[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review">{{cite web |url=http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ |title=2018 AI Safety Literature Review and Charity Comparison |author=Larks |publisher=Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
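
A minimal toy version of the two-phase scheme (an illustrative sketch, not the paper's setup, which trains a neural-network blocker on Atari games) looks like this: a simulated human labels catastrophic moves during an initial phase, and a trivially "learned" blocker built from those labels then takes over.
<pre>
import random

# Toy illustration: a 1-D corridor where stepping past position 9 is a
# "catastrophe". In phase 1 a simulated human blocks such moves and the
# decisions are recorded; in phase 2 a trivial learned blocker (here, a
# memorized lookup) reproduces the human's interventions automatically.

def human_blocks(pos, move):
    return pos + move > 9          # the human's judgment of "catastrophic"

def run_episode(use_human, blocker_data):
    pos = 0
    for _ in range(20):
        move = random.choice([-1, 1])
        if use_human:
            blocked = human_blocks(pos, move)
            blocker_data[(pos, move)] = blocked             # training data
        else:
            blocked = blocker_data.get((pos, move), False)  # learned blocker
        if not blocked:
            pos = max(0, pos + move)
    return pos

blocker_data = {}
for _ in range(200):
    run_episode(use_human=True, blocker_data=blocker_data)       # phase 1
final = run_episode(use_human=False, blocker_data=blocker_data)  # phase 2
print("Final position after phase 2:", final)
</pre>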
 
|-
| 2017 || {{dts|August 25}} || Publication || FHI announces three new forthcoming papers in the latest issue of ''Health Security''. These papers address biosecurity challenges, proposing frameworks to mitigate risks associated with advanced biological technologies.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-publishes-three-new-biosecurity-papers-health-security/ |author=Future of Humanity Institute - FHI |title=FHI publishes three new biosecurity papers in 'Health Security' - Future of Humanity Institute |publisher=Future of Humanity Institute |date=August 25, 2017 |accessdate=March 14, 2018}}</ref><ref name="newsletter-autumn-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-autumn-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Autumn 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 10, 2017 |accessdate=March 14, 2018}}</ref>
 
|-
| 2017 || {{dts|September 27}} || || Carrick Flynn, a research project manager at FHI,<ref>{{cite web |url=https://www.fhi.ox.ac.uk/team/carrick-flynn/ |author=Future of Humanity Institute - FHI |title=Carrick Flynn - Future of Humanity Institute |publisher=Future of Humanity Institute |accessdate=March 15, 2018}}</ref> posts his thoughts on careers in AI policy and strategy on the Effective Altruism Forum. Although written in a personal capacity, the post is informed by his experience at FHI.<ref>{{cite web |url=http://effective-altruism.com/ea/1fa/personal_thoughts_on_careers_in_ai_policy_and/ |title=Personal thoughts on careers in AI policy and strategy |first=Carrick |last=Flynn |publisher=Effective Altruism Forum |accessdate=March 15, 2018}}</ref>
 
|-
| 2017 || {{dts|September 29}} || Financial || Effective Altruism Grants fall 2017 recipients are announced. Gregory Lewis, one of the recipients, receives £15,000 (around $20,000) for research on biological risk mitigation with FHI.<ref>{{cite web |url=https://docs.google.com/spreadsheets/d/1iBy–zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk/edit#gid=0 |title=EA Grants Fall 2017 Recipients |publisher=Google Docs |accessdate=March 11, 2018}}</ref>  
 
|-
| 2017 || {{dts|October}}–December || Project || FHI launches its Governance of AI Program, co-directed by Nick Bostrom and Allan Dafoe. This initiative seeks to address policy, ethical, and regulatory questions related to AI governance.<ref name="newsletter-winter-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-winter-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Winter 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2018 |accessdate=March 14, 2018}}</ref>  
 
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts the malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at FHI, {{W|Centre for the Study of Existential Risk}}, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
 
|-
| 2018 || February || Initiative || FHI launches the Governance of Artificial Intelligence Program (GovAI), building on the Governance of AI Program announced in late 2017 (see above). GovAI is designed to address the growing challenges of AI development, focusing on the political, economic, and societal impacts of advanced AI technologies. The initiative brings together experts from technology, policy, ethics, and law to explore ways to ensure that AI development is conducted safely and responsibly, and its research engages policymakers and industry leaders in developing strategies and policy frameworks that prioritize transparency, accountability, and global governance.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/govai-launch/ |title=Governance of AI Program Launched |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20180225000000/http://www.fhi.ox.ac.uk/govai-launch/ |archivedate=February 25, 2018 |deadurl=yes}}</ref>
 
 
 
|-
| 2018 || March 1 || Publication || "Deciphering China's AI Dream", written by FHI researcher Jeffrey Ding, is published on the FHI website. One of the first comprehensive examinations of China's AI ambitions, the paper analyzes government policy documents, speeches, and investment data to identify the key drivers of China's AI development, such as national security concerns, economic growth, and political influence, and discusses what China's approach means for global AI governance and competition with other powers such as the United States.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf |title=Deciphering China's AI Dream |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20180305000000/http://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf |archivedate=March 5, 2018 |deadurl=yes}}</ref>
 
 
 
|-
| 2018 || March || Event || Nick Bostrom delivers a keynote address at the South by Southwest (SXSW) Conference, one of the world’s premier tech and cultural events. His presentation focuses on the transformative potential of artificial intelligence, particularly the rise of superintelligence. Bostrom discusses the dangers of AI systems that might surpass human intelligence without proper safety measures, framing the conversation around the existential risks these systems pose. He emphasizes the urgency of AI safety research and international cooperation, arguing that the unchecked rise of AI could lead to disastrous outcomes for humanity. Bostrom's speech captures widespread attention and drives home the need for robust AI regulation, influencing thought leaders and policymakers alike.<ref>{{cite web |url=https://schedule.sxsw.com/2018/speakers/1936484 |title=Nick Bostrom at SXSW 2018 |access-date=September 13, 2024}}</ref>  
 
 
 
|-
| 2018 || April 1 || Publication || "Opportunities for Individual Donors in AI Safety" is published on LessWrong, encouraging individuals to fund AI safety initiatives. The publication highlights the crucial role that even small donations can play in advancing critical research on AI alignment, particularly at a time when funding for such research is limited compared to the enormous financial resources behind AI development. The piece details how individual donors can fund research groups, scholarships, and independent researchers working on AI alignment, emphasizing the need for distributed and strategic funding to support global efforts in ensuring AI safety. It serves as a rallying call for the broader public to engage with and support the AI safety movement.<ref>{{cite web |url=https://www.lesswrong.com/posts/cXbXR7QCqWvmPzjki/opportunities-for-individual-donors-in-ai-safety |title=Opportunities for Individual Donors in AI Safety |access-date=September 13, 2024}}</ref>  
  
 
|-
| 2018 || June 1 || Publication || The survey paper "When Will AI Exceed Human Performance? Evidence from AI Experts" (first posted to the arXiv in May 2017; see above) compiles responses from hundreds of AI researchers on the timeline for AI to surpass human capabilities in a range of fields. Respondents broadly expect AI to outperform humans in tasks such as language translation, driving, and complex decision-making within the next few decades, while areas such as creativity and emotional intelligence are expected to take longer. The publication contributes to debates about the ethical and practical implications of AI surpassing human performance and how society can prepare for such a shift.<ref>{{cite web |url=https://arxiv.org/abs/1705.08807 |title=When Will AI Exceed Human Performance? Evidence from AI Experts |access-date=September 13, 2024}}</ref>
  
 
|-
| 2018 || June 29 || Project Announcement || FHI launches the "Research Scholars Programme", a project designed to train emerging researchers in areas critical to the long-term survival of humanity, such as AI safety, biosecurity, and existential risk mitigation. The program offers participants the opportunity to work directly with FHI’s senior researchers, contributing to major projects that focus on addressing global challenges. Participants engage in cross-disciplinary research, ranging from the technical aspects of AI alignment to the societal implications of biotechnology and pandemic risks. This initiative is seen as a step in building a talent pipeline dedicated to existential risk reduction, as it cultivates a new generation of thought leaders in these areas.<ref>{{cite web |url=https://www.lesswrong.com/posts/nLMX7zWGoNdnm9yHP/fhi-research-scholars-programme |title=FHI Research Scholars Programme |access-date=September 13, 2024}}</ref>  
  
 
|-
| 2018 || June 30 || Publication || "Dissolving the Fermi Paradox", by Anders Sandberg, Eric Drexler, and Toby Ord, is published on the arXiv. The paper reexamines the Fermi paradox (the question of why we have not observed evidence of extraterrestrial civilizations despite naive estimates suggesting they should be common). The authors replace the point estimates commonly plugged into Drake-style equations with probability distributions that reflect current scientific uncertainty, and find that the resulting estimates leave a substantial probability that we are alone, making the absence of observed civilizations far less paradoxical than previously thought.<ref>{{cite web |url=https://arxiv.org/abs/1806.02404 |title=Dissolving the Fermi Paradox |access-date=September 13, 2024}}</ref>
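
The paper's key move can be reproduced in miniature (an illustrative sketch; the parameter ranges below are invented for the example, not the authors' distributions): draw each Drake-style factor from a wide distribution instead of using a point estimate, and look at the full distribution of outcomes. Even when the mean number of civilizations is large, a big share of the probability mass can sit at effectively zero, which is the sense in which the paradox dissolves.
<pre>
import math
import random

# Illustrative Monte Carlo in the spirit of the paper: sample Drake-equation-like
# factors from wide (log-uniform) uncertainty ranges instead of using point
# estimates, and inspect the distribution of the number of civilizations.

def log_uniform(low, high):
    return math.exp(random.uniform(math.log(low), math.log(high)))

def sample_civilizations():
    stars = 1e11                       # stars in the galaxy (held fixed here)
    f_planet = log_uniform(0.1, 1.0)   # fraction of stars with suitable planets
    f_life = log_uniform(1e-30, 1.0)   # probability life arises (huge uncertainty)
    f_intel = log_uniform(1e-3, 1.0)   # life leads to intelligence
    f_detect = log_uniform(1e-3, 1.0)  # intelligence leads to a detectable civilization
    return stars * f_planet * f_life * f_intel * f_detect

samples = [sample_civilizations() for _ in range(100_000)]
p_alone = sum(1 for n in samples if n < 1) / len(samples)
print(f"Mean number of civilizations: {sum(samples) / len(samples):.3g}")
print(f"Fraction of draws with fewer than one other civilization: {p_alone:.2f}")
</pre>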
  
 
|-
| 2018 || November 7 || Publication || Nick Bostrom publishes "The Vulnerable World Hypothesis", a provocative exploration of the dangers posed by future technological advancements. Bostrom argues that the increasing power of emerging technologies may soon enable the creation of catastrophic tools or weapons that could endanger civilization. He calls for the development of new global governance structures and surveillance systems to prevent the misuse of these technologies.<ref>{{cite web |url=https://www.lesswrong.com/posts/Tx6dGzYLtfzzkuGtF/the-vulnerable-world-hypothesis-by-bostrom |title=The Vulnerable World Hypothesis |access-date=September 13, 2024}}</ref>  
  
 
|-
| 2018 || December 18 || Publication || The "2018 AI Alignment Literature Review and Charity Comparison" is published, offering a detailed assessment of the state of AI alignment research. The report reviews the progress made by various organizations working on AI safety and provides guidance for donors on where to allocate resources to maximize their impact. By outlining the challenges and potential breakthroughs in AI safety research, the publication serves as an essential resource for both researchers and philanthropists looking to contribute to the field. It also underscores the importance of strategic funding to maintain the momentum of AI alignment research and ensure the safe development of AI systems.<ref>{{cite web |url=https://www.lesswrong.com/posts/a72owS5hz3acBK5xc/2018-ai-alignment-literature-review-and-charity-comparison |title=2018 AI Alignment Literature Review and Charity Comparison |access-date=September 13, 2024}}</ref>  
 
|-
| 2019 || January 1 || Publication || "Long-Term Trajectories of Human Civilization", co-authored by Seth Baum, Stuart Armstrong, and collaborators, is posted on the FHI website. The paper investigates possible futures for human civilization over the coming millennia, exploring scenarios that range from flourishing long-term futures to existential catastrophes, and argues that strategic foresight about these trajectories should inform policy decisions that affect humanity's long-term survival.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/publications/long-term-trajectories-of-human-civilization/ |title=Long-Term Trajectories of Human Civilization |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20190110000000/http://www.fhi.ox.ac.uk/publications/long-term-trajectories-of-human-civilization/ |archivedate=January 10, 2019 |deadurl=yes}}</ref>
 
 
|-
| 2019 || February 1 || Publication || "An Upper Bound for the Background Rate of Human Extinction" is posted on the FHI website. The paper estimates how high the rate of human extinction from natural (non-anthropogenic) causes could plausibly be, using the fact that humanity has already survived for a long time, and finds that this background rate is very low, underscoring the relative importance of addressing human-made existential threats.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/publications/an-upper-bound-for-the-background-rate-of-human-extinction/ |title=An Upper Bound for the Background Rate of Human Extinction |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20190215000000/http://www.fhi.ox.ac.uk/publications/an-upper-bound-for-the-background-rate-of-human-extinction/ |archivedate=February 15, 2019 |deadurl=yes}}</ref>
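
The style of argument can be illustrated with a short calculation (a simplified sketch, not the paper's exact model or figures): if extinction from natural causes were a constant-rate process, then any rate high enough to make roughly 200,000 years of survival very unlikely can be treated as ruled out.
<pre>
import math

# Simplified illustration of a survival-based bound on the natural extinction rate.
# If extinction were a constant-rate (Poisson) process with annual rate r, the
# probability of surviving T years is exp(-r * T). Rates that make the observed
# survival too improbable are therefore implausibly high.

T = 200_000  # approximate age of Homo sapiens in years

def survival_probability(annual_rate, years=T):
    return math.exp(-annual_rate * years)

for rate in [1e-4, 1e-5, 1e-6]:
    print(f"annual rate {rate:.0e}: P(survive {T} years) = {survival_probability(rate):.3g}")

# A one-sided bound: the largest rate still giving at least a 10% chance of the
# observed survival is -ln(0.1) / T, i.e. roughly 1 in 87,000 per year.
print(f"90% upper bound on the annual rate: {-math.log(0.1) / T:.2e}")
</pre>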
  
 
|-
| 2019 || March 9 || Collaboration || FHI collaborates with the Centre for the Study of Existential Risk (CSER) to provide advice to the United Nations High-Level Panel on Digital Cooperation. Their joint report addresses key global digital risks, such as cybersecurity threats and the ethical development of AI technologies. The report emphasizes the importance of establishing international standards for the governance of AI, as well as ensuring that digital technologies benefit all of humanity rather than creating new forms of inequality or insecurity. The collaboration between FHI and CSER reflects the increasing recognition of AI as a global issue that requires coordinated, cross-border solutions.<ref>{{cite web |url=https://forum.effectivealtruism.org/posts/whDMv4NjsMcPrLq2b/cser-and-fhi-advice-to-un-high-level-panel-on-digital |title=CSER and FHI Advice to UN High-Level Panel on Digital Cooperation |access-date=September 13, 2024}}</ref>  
  
 
|-
| 2019 || May 9 || Publication || The article "Claims & Assumptions Made in Eternity in Six Hours" critiques the overly optimistic hypothesis that humanity could rapidly expand across the universe. The paper examines the technological, physical, and energy constraints that would make such expansion far more difficult than anticipated. By challenging the assumptions underlying rapid human expansion, the authors promote a more cautious and realistic approach to space exploration and long-term human survival.<ref>{{cite web |url=https://www.lesswrong.com/posts/8WCPDk3RJ6SLP2ZuR/claims-and-assumptions-made-in-eternity-in-six-hours |title=Claims & Assumptions Made in Eternity in Six Hours |access-date=September 13, 2024}}</ref>  
  
 
|-
| 2019 || July || Event || FHI researchers participate in the 2019 Beneficial AI Conference organized by the Future of Life Institute, where experts from across the AI field discuss aligning AI development with human values. Topics include preventing the misuse of AI, mitigating bias, and ensuring fairness in AI systems. The conference serves as a platform for exchanging ideas about global safety standards for AI development and sets the stage for further international collaboration on AI safety.<ref>{{cite web |url=https://futureoflife.org/beneficial-ai-2019/ |title=Beneficial AI 2019 |access-date=September 13, 2024}}</ref>
|-
| 2019 || September || Workshop || FHI hosts the Policy, Governance, and Ethics for AI workshop at the University of Oxford, attracting scholars, policymakers, and industry experts. Participants discuss key challenges surrounding AI accountability, transparency, and governance, with a focus on policy frameworks that encourage responsible AI development, distribute its benefits widely, and minimize its risks.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/events/ |title=FHI Events |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20190915000000/http://www.fhi.ox.ac.uk/events/ |archivedate=September 15, 2019 |deadurl=yes}}</ref>
|-
| 2019 || October 1 || Publication || The paper "Artificial Intelligence: American Attitudes and Trends", authored by Baobao Zhang and Allan Dafoe, provides an in-depth analysis of public opinion on AI in the United States. The study reveals a range of views, with the public expressing both optimism about AI’s potential and concerns about issues such as job displacement, privacy, and security. The findings offer valuable insights for policymakers, helping them understand public sentiment toward AI and how it may affect future regulatory frameworks. The research contributes to the broader discourse on AI governance by providing data on how the American public views the risks and benefits of AI technologies.<ref>{{cite web |url=https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3312874 |title=Artificial Intelligence: American Attitudes and Trends |access-date=September 13, 2024}}</ref>  
|-
| 2019 || December 19 || Publication || The 2019 AI Alignment Literature Review and Charity Comparison is published, providing a detailed evaluation of the progress made in AI alignment research. It reviews the efforts of organizations involved in AI safety, offering recommendations for donors to maximize their impact in this critical area. The report underscores the importance of sustained funding for AI safety research and highlights both challenges and potential breakthroughs.<ref>{{cite web |url=https://www.lesswrong.com/posts/SmDziGM9hBjW9DKmf/2019-ai-alignment-literature-review-and-charity-comparison |title=2019 AI Alignment Literature Review and Charity Comparison |access-date=September 22, 2024}}</ref>
|-
 
 
 
| 2020 || March || Initiative || In response to the COVID-19 pandemic, FHI's Biosecurity Research Group intensifies its efforts to study global biological risks. The group collaborates with international health organizations, including the World Health Organization (WHO), to model pandemic trajectories and offer containment strategies. FHI’s expertise in pandemic preparedness and response is leveraged to provide real-time advice on government interventions, helping shape global health policies. The group also works on improving pandemic preparedness for future outbreaks by studying the biological and social factors that contribute to the spread of infectious diseases. <ref>{{cite web |url=http://www.fhi.ox.ac.uk/research/biosecurity/ |title=Biosecurity Research at FHI |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20200315000000/http://www.fhi.ox.ac.uk/research/biosecurity/ |archivedate=March 15, 2020 |deadurl=yes}}</ref>
 
 
 
 
|-
| 2020 || December 17 || Publication || A paper titled "Ranking the Effectiveness of Worldwide COVID-19 Government Interventions" is published in Nature Human Behaviour. This research, co-authored by Nancy Haug, Oliver Gatalo, Alisa J. Kozik, and Courtney Kuhlman, provides a comprehensive assessment of the effectiveness of various non-pharmaceutical interventions (NPIs) during the pandemic. The study rigorously analyzes data from different countries to rank interventions such as social distancing, mask mandates, and lockdowns based on their impact on transmission rates. The findings offer governments critical, evidence-based insights for managing the spread of COVID-19 by highlighting the most effective strategies under varying circumstances.<ref>{{cite journal |url=https://www.nature.com/articles/s41562-020-01009-0 |title=Ranking the Effectiveness of Worldwide COVID-19 Government Interventions |journal=Nature Human Behaviour |volume=4 |pages=1303–1312 |date=December 2020}}</ref>
|-
| 2021 || February || Contribution || FHI plays a crucial role in contributing to the European Commission's Guidelines on Trustworthy AI. These guidelines focus on ensuring that AI systems are developed with ethical, legal, and social considerations at the forefront. FHI's input emphasizes the importance of long-term safety and the alignment of AI systems with human values. The guidelines cover core principles such as fairness, accountability, transparency, and robustness, aiming to create a regulatory framework that addresses the societal impacts of AI and promotes public trust in AI technologies.<ref>{{cite web |url=https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai |title=Ethics Guidelines for Trustworthy AI |access-date=September 13, 2024}}</ref>
|-
| 2021 || July || Event || FHI hosts the Global Forum on AI Safety, an international virtual conference that gathers leading experts from AI research, policy, and ethics. The forum features in-depth discussions on critical issues such as mitigating biases in AI systems, addressing the risks of advanced AI misuse, and fostering cross-border cooperation on AI safety governance. Attendees work together on developing collaborative research agendas and proposing policy frameworks that ensure AI development serves societal interests.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/events/ |title=FHI Events |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20210715000000/http://www.fhi.ox.ac.uk/events/ |archivedate=July 15, 2021 |deadurl=yes}}</ref>
|-
| 2021 || August 27 || Publication || The Future of Humanity Institute (FHI) publishes the paper "Quantitative National Risk Reports (QNRs)". This paper introduces methodologies for assessing national-level risks related to existential threats, such as pandemics, AI misalignment, and biosecurity risks. The report offers guidelines for improving global coordination and preparedness by providing a quantifiable model for assessing these risks, aiming to foster better understanding and management of potential global threats.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/QNRs/ |title=Quantitative National Risk Reports (QNRs) |access-date=September 22, 2024}}</ref>
|-
| 2022 || March || Research || FHI researchers publish multiple papers addressing key challenges in aligning AI systems with human values, particularly through technical improvements. Topics include reinforcement learning from human feedback, enhancing the interpretability of complex AI decision-making processes, and designing scalable oversight mechanisms for advanced AI systems. These papers contribute to practical frameworks for AI alignment, helping ensure that AI systems can operate safely and transparently even as they grow in complexity.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/publications/ |title=FHI Publications |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20220315000000/http://www.fhi.ox.ac.uk/publications/ |archivedate=March 15, 2022 |deadurl=yes}}</ref>
|-
| 2022 || June || Conference Participation || FHI participates in the International Conference on Machine Learning (ICML), presenting work on AI safety, fairness, and robustness. FHI researchers contribute to workshops emphasizing the integration of safety mechanisms into AI systems, particularly highlighting risks associated with unregulated advancements in fields like autonomous decision-making and algorithmic bias. Their contributions reinforce the need for AI to be both transparent and socially beneficial.<ref>{{cite web |url=https://icml.cc/Conferences/2022/Schedule |title=ICML 2022 Schedule |access-date=September 22, 2024}}</ref>
|-
| 2022 || September || Program Expansion || FHI expands its Research Scholars Programme, adding specialized tracks in AI governance, biosecurity, and macrostrategy. The programme aims to train the next generation of scholars to tackle critical global challenges, fostering interdisciplinary research on existential risks and long-term human survival. The expansion reflects FHI’s dedication to building a robust research community equipped to address AI safety, biosecurity, and global governance issues.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/research-scholars-programme/ |title=FHI Research Scholars Programme |access-date=September 22, 2024 |archiveurl=https://web.archive.org/web/20220915000000/http://www.fhi.ox.ac.uk/research-scholars-programme/ |archivedate=September 15, 2022}}</ref>
|-
| 2023 || January || Public Statement || FHI Director Nick Bostrom issues a public apology following the resurfacing of a controversial email from the 1990s. In the email, Bostrom had made offensive remarks that sparked backlash within academic and tech communities. His apology addresses the harm caused by the comments and reaffirms his commitment to diversity and inclusion. However, the incident leads to broader discussions about accountability in leadership roles at institutions like FHI.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/nick-bostroms-statement/ |title=Nick Bostrom's Statement |access-date=September 22, 2024 |archiveurl=https://web.archive.org/web/20230115000000/http://www.fhi.ox.ac.uk/nick-bostroms-statement/ |archivedate=January 15, 2023}}</ref><ref>{{cite web |url=https://www.pasteurscube.com/why-im-personally-upset-with-nick-bostrom-right-now/ |title=Why I'm Personally Upset with Nick Bostrom Right Now |access-date=September 22, 2024}}</ref>
|-
| 2023 || April || Workshop || FHI co-hosts a high-level international workshop on AI Governance and Policy. The workshop brings together leading researchers, policymakers, and industry leaders to explore strategies for governing advanced AI technologies. Key topics include developing ethical regulatory frameworks, negotiating international AI treaties, and implementing responsible AI deployment in diverse sectors. Participants engage in collaborative policy discussions aimed at ensuring that AI technologies are both innovative and aligned with societal values, with a strong emphasis on transparency, accountability, and safety.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/events/ |title=FHI Events |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20230415000000/http://www.fhi.ox.ac.uk/events/ |archivedate=April 15, 2023 |deadurl=yes}}</ref>
|-
| 2023 || July || Research Initiative || FHI launches an initiative focusing on the dual-use risks associated with advanced biotechnology and synthetic biology. Acknowledging the potential for these technologies to be weaponized or misused, the initiative aims to identify key biosecurity threats and develop robust prevention strategies. This research addresses the urgent need for responsible oversight of rapidly advancing biotechnologies that can be used both to benefit public health and to pose significant risks. The initiative also emphasizes collaboration with global policymakers to ensure safe research practices and prevent unintended consequences.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/research/ |title=FHI Research Areas |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20230715000000/http://www.fhi.ox.ac.uk/research/ |archivedate=July 15, 2023 |deadurl=yes}}</ref>
|-
| 2023 || August 15 || Publication || Anders Sandberg publishes a chapter titled "The Lifespan of Civilizations: Do Societies 'Age,' or Is Collapse Just Bad Luck?" in the book "Existential Risk Studies". In this work, Sandberg investigates whether the collapse of civilizations is driven by internal factors such as societal aging or by external, unpredictable events such as natural disasters or wars. The chapter analyzes historical civilizations and the reasons behind their collapse, exploring potential parallels for modern societies facing existential risks, and adds to the broader discourse on the vulnerabilities of civilizations and the importance of resilience in global governance.<ref>{{cite web |url=https://www.taylorfrancis.com/chapters/edit/10.4324/9781003331384-24/lifespan-civilizations-anders-sandberg |title=The Lifespan of Civilizations |access-date=September 22, 2024}}</ref>
|-
| 2023 || August 28 || Publication || A collaboration of researchers including FHI's Anders Sandberg publishes "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness" on arXiv. The paper tackles the problem of consciousness in AI systems, proposing methodologies for assessing whether AI systems could possess or simulate conscious experience. The study draws on current theories in the science of consciousness, comparing human cognitive processes with machine learning models, and aims to address key ethical and practical questions in AI development, particularly around the moral status of AI systems and the potential risks associated with conscious or quasi-conscious AI agents.<ref>{{cite web |url=https://arxiv.org/abs/2308.08708 |title=Consciousness in Artificial Intelligence |access-date=September 22, 2024}}</ref>
|-
| 2023 || September 5 || Publication || The paper "Truthful AI: Towards Developing and Governing AI that Does Not Lie", by a team including Anders Sandberg, is published. The research focuses on creating AI systems designed to uphold truthfulness and transparency, outlining governance frameworks and technical measures to ensure that AI systems do not propagate misinformation, even unintentionally. The work sits within the broader context of AI safety research at the Future of Humanity Institute, addressing concerns about the integrity and trustworthiness of AI systems in fields such as journalism, governance, and public decision-making.<ref>{{cite web |url=https://arxiv.org/abs/2308.09045 |title=Truthful AI |access-date=September 22, 2024}}</ref>
|-
| 2023 || October || Publication || FHI releases the Global Catastrophic Risks 2023 report, providing an updated analysis of existential threats, including artificial intelligence, pandemics, climate change, and nuclear warfare. The report synthesizes current data and research, evaluating the likelihood and impact of these risks, offering detailed risk assessments for each category, and outlining strategies to mitigate or manage these global challenges. It includes input from international experts and serves as a resource for policymakers, scholars, and organizations working to prevent large-scale global disasters.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/publications/ |title=FHI Publications |access-date=September 13, 2024 |archiveurl=https://web.archive.org/web/20231015000000/http://www.fhi.ox.ac.uk/publications/ |archivedate=October 15, 2023 |deadurl=yes}}</ref>
|-
| 2024 || January || Report || Anders Sandberg publishes "Future of Humanity Institute 2005-2024: Final Report", which offers a retrospective on FHI's 19 years of operation. The report documents the institute’s contributions to existential risk research, highlighting its key projects in AI safety, biosecurity, and the study of long-term human futures. Sandberg provides reflections on FHI’s collaborations with global institutions and its evolving focus over the years, as well as the internal and external challenges it faced leading up to its closure.<ref>{{cite web |url=https://ora.ox.ac.uk/objects/uuid:8c1ab46a-061c-479d-b587-8909989e4f51 |title=Future of Humanity Institute 2005-2024: Final Report |access-date=September 13, 2024}}</ref>
|-
| 2024 || January || Publication || Anders Sandberg publishes "Thoughts at the End of an Era" on his blog, reflecting on the closure of the Future of Humanity Institute. In the post, Sandberg shares his personal thoughts on FHI’s legacy, the global impact of its research, and the reasons for its closure. He also discusses the future of existential risk research, expressing optimism for continued work in the field despite the challenges faced by FHI.<ref>{{cite web |url=https://aleph.se/andart2/personal/thoughts-at-the-end-of-an-era/ |title=Thoughts at the End of an Era |access-date=September 13, 2024}}</ref>
|-
| 2024 || April 16 || Closure || After nearly two decades of pioneering research, the Future of Humanity Institute is officially closed by Oxford University due to administrative disagreements and controversies surrounding its research direction and affiliations. The closure marks the end of a highly influential institution, but FHI's legacy endures through its contributions to global existential risk research, particularly in areas like AI safety, biosecurity, and global governance.<ref>{{cite web |url=https://www.lesswrong.com/posts/tu3CH22nFLLKouMKw/fhi-future-of-humanity-institute-has-shut-down-2005-2024 |title=FHI: Future of Humanity Institute Has Shut Down |access-date=September 13, 2024}}</ref>
|}
  

Latest revision as of 05:21, 10 November 2024

This is a timeline of the Future of Humanity Institute (FHI).

Sample questions

This is an experimental section that provides some sample questions for readers, similar to the reading questions that might accompany a book. Some readers of this timeline may arrive at the page without a clear idea of what they want to get out of it; a few "interesting" questions can help them read with more purpose and convey why the timeline is a useful tool to have.

The following are some interesting questions that can be answered by reading this timeline:

  • What was FHI up to for the first ten years of its existence (roughly up to the time when Superintelligence was published)? (Scan the years 2005–2014.)
  • What are the websites FHI has been associated with? (Sort by the "Event type" column and look at the rows labeled "Website".)
  • Who were some of the early research staff at FHI? (Sort by the "Event type" column and look at the first several rows labeled "Staff".)

Many questions are still difficult or impossible to answer just by reading the current version of this timeline. See Representativeness of events in timelines for more information.

Big picture

Time period Development summary More details
Before 2005 Pre-FHI days This is the period leading up to FHI's existence. Nick Bostrom, who would become FHI's first (and so far only) director, is born and completes his education. Also happening in this period are the development of transhumanism, the creation of various transhumanism-related mailing lists, and Bostrom's development of his early ideas.
2005–2011 Early days of FHI FHI is established and begins its research. Compared to later periods, this period seems to have a greater focus on the ethics of human enhancement. (Bostrom mentions three "work streams" in his welcome event speech at the founding of FHI: human enhancement, global catastrophic risks, and improvement of methodological tools for studying big picture questions. Of these, the second and third "work streams" seem to dominate later periods.)[1] FHI publishes three Annual/Achievement Reports during this period.
2011–2015 More development and publication of Superintelligence FHI continues to publish, hold workshops, and advise policymakers. There is more focus on existential risks, in particular risks from advanced artificial intelligence, during this period. The most notable accomplishment of FHI during this period seems to be the publication of Bostrom's book Superintelligence. FHI did not seem to publish any Annual/Achievement Reports during this period, so it is somewhat difficult to tell what FHI considers its greatest accomplishments during this period (other than the publication of Superintelligence).
2021–2024 AI Governance, Pandemic Research, and Closure FHI contributes significantly to AI governance and publishes important research on existential risks and pandemic preparedness. In 2023, FHI faces internal challenges, including a controversy involving Nick Bostrom, which leads to its closure in April 2024. Despite this, FHI's legacy in AI safety, biosecurity, and global risk mitigation continues to influence ongoing research and the development of policy frameworks.

Full timeline

Here are the inclusion criteria for various event types:

  • For "Publication", the intention is to include the most notable publications. This usually means that if a publication has been featured by FHI itself or has been discussed by some outside sources, it is included. There are too many publications to include all of them.
  • For "Website", the intention is to include all websites associated with FHI. There are not that many such websites, so this is doable.
  • For "Staff", the intention is to include all Research Fellows and leadership positions (so far, Nick Bostrom has been the only director so not much to record here).
  • For "Workshop" and "Conference", the intention is to include all events organized or hosted by FHI, but not events where FHI staff only attended or only helped with organizing.
  • For "Internal review", the intention is to include all annual review documents.
  • For "External review", the intention is to include all reviews that seem substantive (judged by intuition). For mainstream media articles, only ones that treat FHI/Bostrom at length are included.
  • For "Financial", the intention is to include all substantial (say, over $10,000) donations, including aggregated donations and donations of unknown amounts.
  • For "Nick Bostrom", the intention is to include events sufficient to give a rough overview of Bostrom's development prior to the founding of FHI.
  • For "Social media", the intention is to include all social media account creations (where the date is known) and Reddit AMAs.
  • Events about FHI staff giving policy advice (to e.g. government bodies) are not included, as there are many such events and it is difficult to tell which ones are more important.
  • For "Project Announcement" or "Intiatives", the intention is to include announcements of major initiatives and research programs launched by FHI, especially those aimed at training researchers or advancing existential risk mitigation.
  • For "Collaboration", the intention is to include significant collaborations with other institutions where FHI co-authored reports, conducted joint research, or played a major role in advising.
Year Month and date Event type Details
1973 March 10 Nick Bostrom Nick Bostrom is born.
1992–1994 Nick Bostrom Nick Bostrom completes his undergraduate degree in philosophy, mathematics, mathematical logic, and artificial intelligence at the University of Gothenburg.[2]
1994–1996 Nick Bostrom Nick Bostrom completes his master's degree in philosophy and physics at Stockholm University.[2]
1996 Nick Bostrom Nick Bostrom completes his master's degree (?) in astrophysics, general relativity, and computational neuroscience at King's College London.[2]
1996–2000 Nick Bostrom Nick Bostrom completes his PhD in philosophy at the London School of Economics.[2]
1998 August 30 Website The domain name for the Anthropic Principle website, anthropic-principle.com, is registered.[3] The first Internet Archive snapshot of the website is from January 25, 1999.[4]
1998 August 30 Website The domain name for Nick Bostrom's Future Studies website, future-studies.com, is registered.[5] The first Internet Archive snapshot of the website is from October 12, 1999.[6]
1998 December 14 Website The domain name for Nick Bostrom's analytic philosophy website, analytic.org, is registered.[7] The first Internet Archive snapshot of the website is from November 28, 1999.[8] As of March 2018, the website is not maintained and points to Bostrom's main website, nickbostrom.com.[9]
2000–2002 Nick Bostrom Nick Bostrom is a lecturer at Yale University during this period.[2]
2001 October 31 Website The Simulation Argument website's domain name, simulation-argument.com, is registered.[10] The first Internet Archive snapshot of the website would be on December 5, 2001.[11] The website hosts information about the simulation hypothesis, especially as articulated by Bostrom. In the FHI Achievements Report for 2008–2010, the Simulation Argument website is listed under websites maintained by FHI members.[12]
2003 Publication Nick Bostrom's "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" is published in the journal Utilitas.[13] This is a featured FHI publication.[14]
2003–2005 Nick Bostrom Nick Bostrom is a British Academy Postdoctoral Fellow in the Faculty of Philosophy at Oxford University during this period.[2]
2005 June 1 or October 4 or November 29 The Future of Humanity Institute is established.[15][16][17][1]
2005 Financial At its founding, FHI receives funding from James Martin, the Bright Horizons Foundation, and one anonymous philanthropist.[17]
2005 December 18 Publication "How Unlikely is a Doomsday Catastrophe?" by Max Tegmark and Nick Bostrom is published.[18] This is a featured FHI publication.[14]
2006 Publication "What is a Singleton?" by Nick Bostrom is published in the journal Linguistic and Philosophical Investigations. The paper introduces the idea of a singleton, a hypothetical "world order in which there is a single decision-making agency at the highest level".[19]
2006 Staff Rebecca Roache joins FHI as a Research Fellow. Her topic of research is ethical issues regarding human enhancement and new technology.[20][21]
2006 January Staff Anders Sandberg joins FHI. As of March 2018 he is a Senior Research Fellow at FHI.[22]
2006 March 2 Website The ENHANCE project website is created[23] by Anders Sandberg.[20]
2006 March 13 Workshop FHI hosts the International Methodology Workshop.[17]:3[20]:59
2006 April Internal review Issue 1 of FHI's Bimonthly Progress Report is published.[24]
2006 and 2007 May 4 (2006) and March 27–28 (2007) Workshop Anders Sandberg of FHI helps to organize the ENHANCE Workshops on cognition enhancement.[20]:62
2006 July Publication "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" by Nick Bostrom and Toby Ord is published.[25] The paper introduces the reversal test in the context of bioethics of human enhancement. This is a featured FHI publication.[14]
2006 July Internal review Issue 2 of FHI's Bimonthly Progress Report is published.[26]
2006 July 19 Website The domain name for the existential risk website, existential-risk.org, is registered on this day.[27]
2006 October Workshop FHI and the Program on the Ethics of the New Biosciences host a workshop "to initiate new collaborations and to celebrate their first few months".[26]
2006 October Internal review Issue 3 of FHI's Bimonthly Progress Report is published.[1]
2006 November 20 Website Robin Hanson starts Overcoming Bias.[28] The first post on the blog seems to be from November 20.[29] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[30] The blog seems to have received support from FHI in the beginning.[31][20]
2006 December Staff Rafaela Hillerbrand joins FHI as a Research Fellow for "work on epistemological and ethical problems for decisions under risk and uncertainty". She would remain at FHI until October 2008.[32][1]
2006 December Staff Nicholas Shackel joins FHI as a Research Fellow in Theoretical Ethics.[33][1]
2006 December 17 External review The initial version of the Wikipedia page for FHI is created.[34]
2005–2007 Project Lighthill Risk Network is created by Peter Taylor of FHI.[20]
2007 April Internal review Issue 4 of the FHI Progress Report (apparently renamed from "Bimonthly Progress Report") is published. This issue highlights key developments across the institute’s projects, focusing on topics like existential risk reduction and emerging technologies. It serves as an internal checkpoint to assess the direction of FHI’s ongoing work and to provide strategic updates on project milestones achieved in early 2007.[35]
2007 May 26–27 Workshop The Whole Brain Emulation Workshop is hosted by FHI. This two-day event brings together experts in neuroscience, computational modeling, and artificial intelligence to discuss the feasibility and ethical considerations of emulating a human brain in a computer. It lays a foundation for whole-brain emulation research, with discussions ranging from technical challenges to long-term applications. This workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008, establishing FHI’s ongoing influence in the brain emulation field.[20]:62[35]:2[36]
2007 June 4 Conference Nick Shackel of FHI organizes the Bayesian Approaches to Agreement Conference. This event gathers leading thinkers to explore Bayesian principles in achieving agreement in uncertain conditions. By examining methods for assessing probabilities and evidence in differing viewpoints, this conference contributes to FHI's mission of improving decision-making frameworks and bolstering rational discourse in high-stakes scenarios.[20]:63
2007 July 18 Internal review The first FHI Achievements Report, covering November 2005 to July 2007, is published. This report outlines FHI’s major accomplishments, including research initiatives and institutional growth, and reflects the organization’s commitment to transparency and accountability in its existential risk work. It highlights FHI’s rapid progress in interdisciplinary research and provides a roadmap for future directions.[20]
2007 August 24 Publication Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published. This book, co-edited by FHI’s Guy Kahane, pays tribute to philosopher Gordon Baker and includes essays on Wittgenstein’s legacy. The publication underscores FHI’s dedication to supporting diverse intellectual pursuits, including philosophy, which informs the ethical underpinnings of its work on humanity’s long-term prospects.[37][38]
2007 Autumn Workshop Nick Bostrom and Rafaela Hillerbrand of FHI organize an Existential Risk Workshop around this time. This event addresses the critical risks threatening humanity's survival, such as advanced AI, biotechnological dangers, and catastrophic events. Scholars and practitioners convene to evaluate these risks and strategize preventive measures, reinforcing FHI’s position as a leader in existential risk research.[20]:74[2]:17
2007 November Website The Practical Ethics blog, managed by FHI’s Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches. This blog serves as a platform for analyzing ethical issues related to scientific advancements and public policy. Over time, it adopts several names, such as Practical Ethics in the News and Practical Ethics: Ethical Perspectives on the News, with its initial URL hosted at ethicsinthenews.typepad.com/practicalethics. The blog evolves to become a critical resource for ethical perspectives on new technological developments and bioethics issues.[36][39]
2008 Publication "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published. This report lays out a comprehensive technical framework for creating a whole-brain emulation, discussing the requirements in neuroscience, computer science, and ethical considerations. It becomes a cornerstone publication for FHI, guiding future research in cognitive science and AI alignment.[36] This is a featured FHI publication.[14]
2008–2009 Financial FHI reports donations from three unnamed philanthropists and the Bright Horizons Foundation, which help sustain research activities and expand efforts on existential risk and ethical technology development.[36]:23
2008–2010 Workshop FHI hosts a Cognitive Enhancement Workshop during this period, convening experts to discuss methods for enhancing cognitive abilities through technology and their ethical implications. The workshop is part of FHI’s broader inquiry into human enhancement and aims to inform ethical frameworks around cognitive interventions.[12]
2008–2010 Workshop FHI organizes a symposium on "Cognitive Enhancement and Related Ethical and Policy Issues," which gathers scholars to explore the social and policy ramifications of cognitive enhancement technologies. This event reinforces FHI’s role in leading discussions on the ethics of emerging technologies.[12]
2008–2010 Workshop FHI co-hosts an event titled "Uncertainty, Lags, and Nonlinearity: Challenges to Governance in a Turbulent World," addressing the difficulties in governing complex systems under uncertainty. The workshop explores strategies for managing global risks where outcomes are unpredictable, reinforcing FHI’s interdisciplinary approach to governance and risk management.[12]
2008–2010 Financial FHI reports receiving "about 10" philanthropic donations from private individuals, which contribute to funding ongoing projects on global catastrophic risks, AI safety, and ethical technology development.[12]
2008 January 22 Website The domain name for the Global Catastrophic Risks website, global-catastrophic-risks.com, is registered. This domain serves as an information hub for FHI’s work on existential threats. The first snapshot on the Internet Archive is recorded on May 5, 2008.[40][41]
2008 September 15 Publication Global Catastrophic Risks is published. Edited by Nick Bostrom and Milan M. Ćirković, this book consolidates research on existential risks, offering a multi-faceted analysis of threats ranging from climate change to artificial intelligence. It becomes an influential text within FHI and broader academic circles on preventing catastrophic events.[42][38]
2009 Publication "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published. This paper addresses the difficulty of assessing rare but impactful events, offering methods to better evaluate these high-stakes risks. It is recognized as a featured FHI publication, reflecting the institute’s commitment to improving risk assessment methodologies.[36][14]
2009 January 1 Publication On the blog Overcoming Bias, Nick Bostrom publishes a post proposing the "Parliamentary Model" for addressing moral uncertainty. This model suggests weighing different ethical perspectives as if they were political parties, allowing for structured decision-making amid moral ambiguity. Although Bostrom mentioned an ongoing paper with Toby Ord on this topic, it seems unpublished as of 2018. The idea is frequently referenced in philosophical discussions on LessWrong and other platforms.[12][43][44][45]
2009 January 22 Publication Human Enhancement is published. Edited by Julian Savulescu and Nick Bostrom, this book compiles essays exploring the ethical implications of enhancing human capacities through technology. The publication deepens FHI’s contributions to bioethics and human enhancement debates, especially concerning cognitive, physical, and moral augmentation.[46][38][36]
2009 February Website The group blog LessWrong launches, dedicated to rationality and cognitive improvement. Sponsored by FHI, LessWrong becomes a community space for discussing decision theory, existential risks, and ethics. Though FHI’s direct contributions are minimal, the blog is highly influential among FHI researchers and the wider rationalist community.[47][36][48]
2009 March 6 Social media The FHI YouTube account, FHIOxford, is created. This channel hosts videos related to FHI’s research, public lectures, and discussions on existential risk, allowing FHI to extend its educational outreach and share insights into its work with a wider audience.[49]
2009 June 19 Publication "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" by Nick Bostrom and Anders Sandberg is published in the journal Science and Engineering Ethics. This paper explores the ethical challenges of cognitive enhancement technologies and the societal implications of augmenting human abilities. By 2011, it becomes the most-cited article from FHI, underscoring its impact in bioethics and policy discussions.[50][14][48]
2009 September Internal review The FHI Annual Report, covering the period October 1, 2008 to September 30, 2009, is likely published during this month. This report details FHI’s research advancements, financial statements, and strategic directions, reinforcing its commitment to transparency and scholarly excellence.[36]
2010 Internal review The FHI Achievements Report, covering the years 2008 to 2010, is likely published. This report provides an overview of FHI’s activities, research outputs, and organizational growth, summarizing the institute’s efforts in global catastrophic risk mitigation and ethics.[12]
2010 June 21 Publication Anthropic Bias by Nick Bostrom is published. The book delves into reasoning under observation selection effects, exploring how knowledge of one's existence as an observer can impact probabilistic reasoning.[51][38]
2010 June Staff Eric Mandelbaum joins FHI as a Postdoctoral Research Fellow, contributing to interdisciplinary research on cognitive science and philosophy. He would remain at FHI until July 2011.[52]
2011 January 14–17 Conference The Winter Intelligence Conference, organized by FHI, takes place. The conference brings together experts and students in philosophy, cognitive science, and artificial intelligence for discussions on intelligence, ethical AI, and cognitive enhancement.[53][54][55][56]
2011 March 18 Publication Enhancing Human Capacities is published. This book, co-edited by Julian Savulescu, Ruud ter Meulen, and FHI's Guy Kahane, examines various forms of human enhancement, including cognitive, physical, and moral augmentation, and discusses ethical, social, and policy implications.[57][58]
2011 June 9 External review On a comment thread on LessWrong, a discussion unfolds about FHI’s funding needs, productivity, and research focus. This thread covers topics like existential risk prioritization, marginal productivity of hires, and the role of private funding in sustaining FHI’s research initiatives.[59]
2011 September Project The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches. Directed by Nick Bostrom and working closely with FHI, this interdisciplinary initiative within the Oxford Martin School investigates the societal impacts of emerging technologies, with a focus on potential risks and governance.[60]
2011 September Staff Stuart Armstrong joins FHI as a Research Fellow, contributing to research on AI alignment and existential risk mitigation, particularly in the areas of forecasting and decision theory.[61]
2011 September 25 External review Kaj Sotala posts "SIAI vs. FHI achievements, 2008–2010" on LessWrong, providing a comparative analysis of the Future of Humanity Institute and the Machine Intelligence Research Institute (previously the Singularity Institute for Artificial Intelligence), evaluating their respective outputs and contributions over recent years.[48]
2012 Staff Daniel Dewey joins FHI as a Research Fellow, bringing expertise in AI and machine ethics, focusing on long-term safety and alignment of artificial intelligence systems.[62]
2012 June 6 Publication The technical report "Indefinite Survival Through Backup Copies" by Anders Sandberg and Stuart Armstrong is published. This paper examines the feasibility of maintaining a high survival probability through self-copying, proposing a model in which the number of copies grows logarithmically over time.[63][64]
2012 August 15 Website The first Internet Archive snapshot of the Winter Intelligence Conference website is from this day. This site hosts information about the event and related resources for attendees and interested researchers.[65]
2012 September 5 Social media The FHI Twitter account, @FHIOxford, is registered, marking the institute’s entry into social media for outreach and public engagement on topics of existential risk, bioethics, and AI safety.[66]
2012 November 16 External review John Maxwell IV posts "Room for More Funding at the Future of Humanity Institute" on LessWrong, initiating a public discussion on FHI’s funding requirements, allocation, and potential impact of additional resources.[67]
2012 December 10–11 Conference FHI hosts the 2012 conference on Impacts and Risks of Artificial General Intelligence, one of two events in the Winter Intelligence Multi-Conference 2012. Attendees discuss AGI development, associated risks, and strategies for mitigating potential hazards in artificial intelligence advancement.[68][69]
2013 Staff Carl Frey and Vincent Müller join FHI as Research Fellows, focusing on topics related to technology and existential risk.[70]
2013 February Publication "Existential Risk Prevention as Global Priority" by Nick Bostrom is published in Global Policy. This paper examines the significance of existential risk reduction and suggests that preventing catastrophic outcomes should be a global priority.[71]
2013 February 25 External review "Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?" is published on Aeon. The piece, authored by Ross Andersen, covers FHI, existential risk, and Nick Bostrom's views on humanity's future.[72][73][74]
2013 March 12 Publication "Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox" by Stuart Armstrong and Anders Sandberg is published. The paper discusses models for the rapid spread of intelligent life across the galaxy and examines the Fermi Paradox.[75]
2013 May 30 Collaboration A collaboration between FHI and the insurance company Amlin is announced. This partnership focuses on research into systemic risks, particularly how they may affect society and how insurance can help mitigate such risks.[76][77][78]
2013 June Staff Nick Beckstead joins FHI as a Research Fellow, focusing on long-term priorities in effective altruism and existential risk. He would remain at FHI until November 2014.[79]
2013 September 17 Publication "The Future of Employment: How Susceptible are Jobs to Computerisation?" by Carl Benedikt Frey and Michael A. Osborne is published. The study assesses the potential impacts of automation on various jobs and predicts which occupations are most at risk of computerization.[80]
2013 November Workshop FHI hosts a week-long math workshop led by the Machine Intelligence Research Institute (MIRI). This workshop brings together mathematicians and researchers to develop technical approaches for AI alignment and safety.[81]
2013 December 27 External review Chris Hallquist posts "Donating to MIRI vs. FHI vs. CEA vs. CFAR" on LessWrong, comparing the merits of donating to each organization. Seán Ó hÉigeartaigh from FHI participates in the discussion to address questions about FHI’s funding needs.[82]
2014 January Project The Global Priorities Project (GPP) launches as a pilot within the Centre for Effective Altruism. FHI researchers Owen Cotton-Barratt and Toby Ord are key members of the project, which later becomes a collaboration between the Centre for Effective Altruism and FHI.[83][84][85]
2014 February 4 Workshop FHI hosts a workshop on agent-based modeling. The workshop aims to explore complex systems and the application of agent-based models in understanding social and economic dynamics.[86]
2014 February 11–12 Conference FHI holds the FHI–Amlin conference on systemic risk. This event convenes experts from academia and the insurance industry to discuss potential systemic threats to global stability and examine how insurance might play a role in risk management.[87][88]
2014 May 12 Social media FHI researchers Anders Sandberg and Andrew Snyder-Beattie participate in a Reddit AMA ("Ask Me Anything") on the platform's science forum, where they address public questions about FHI's work and existential risks.[89][90]
2014 July Workshop FHI hosts a MIRIx Workshop in collaboration with the Machine Intelligence Research Institute (MIRI) to develop a technical agenda for AI safety. This workshop involves experts in machine learning, mathematics, and philosophy, aiming to advance the AI alignment field.[91]
2014 July–September Publication Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies is published. The book explores potential risks from advanced AI and offers strategic analysis on preventing catastrophic outcomes associated with AI.[92]
2014 September Publication The policy brief "Unprecedented Technological Risks" by Nick Beckstead et al. is published. This brief outlines strategies for managing high-stakes risks associated with new technologies and highlights areas for focused policy intervention.[93]
2014 September 24 Social media Nick Bostrom participates in a Reddit AMA, where he answers questions from the public about his work at FHI and his book Superintelligence.[94]
2014 September 26 External review Daniel Dewey posts "The Future of Humanity Institute Could Make Use of Your Money" on LessWrong. This post generates discussions on the platform about the benefits of donating to FHI.[95]
2014 October 1 Financial FHI posts a note of thanks to the Investling Group for a recent financial contribution. The specific donation amount and date are not disclosed.[96]
2014 October 28 Website The domain name for the "Population Ethics: Theory and Practice" website, populationethics.org, is registered. The project is organized by FHI and supported by the Leverhulme Trust. The first Internet Archive snapshot of the website would be taken on December 23, 2014.[97][98]
2014 November 21 Publication "Managing Existential Risk from Emerging Technologies" by Nick Beckstead and Toby Ord is published as part of the 2014 UK Chief Scientific Advisor's report, titled "Innovation: Managing Risk, Not Avoiding It." This chapter addresses the risks associated with emerging technologies, such as artificial intelligence and biotechnology, and offers strategies for managing these potential threats.[99]
2015 Publication "Learning the Preferences of Bounded Agents" is published, authored by Owain Evans from FHI. The paper explores methods for understanding the preferences of agents that operate under cognitive or resource constraints, which has applications in machine learning and AI safety.[100][101]
2015 Publication "Corrigibility," a collaborative work by researchers including Stuart Armstrong from FHI, is published. The paper addresses the problem of designing AI systems that can accept human intervention or corrections without resistance, a key element in AI safety.[14]
2015 Staff Owain Evans joins FHI as a postdoctoral research scientist, focusing on understanding human preferences and improving AI alignment approaches.[102]
2015 Staff Ben Levinstein joins FHI as a Research Fellow, contributing to philosophical research in ethics and decision-making until his departure in 2016.[103]
2015 Staff Feng Zhou joins FHI as a Research Fellow, working on the FHI–Amlin collaboration focused on systemic risk and mitigation strategies in the financial and technological domains.[104]
2015 Staff Simon Beard joins FHI as a Research Fellow in philosophy, contributing to the "Population Ethics: Theory and Practice" project until 2016.[105]
2015 Project FHI establishes the Strategic Artificial Intelligence Research Centre, directed by Nick Bostrom, to focus on strategic issues and long-term safety of AI technologies.[106]
2015 January 2–5 Conference The AI safety conference The Future of AI: Opportunities and Challenges takes place in Puerto Rico. Organized by the Future of Life Institute, this gathering includes FHI’s Nick Bostrom as a speaker, discussing risks associated with advanced AI technologies.[107]
2015 January 8 Internal review FHI publishes a brief overview of its activities in 2014, summarizing its research contributions to existential risks and AI safety.[108]
2015 July 1 Financial The Future of Life Institute announces grant recipients for AI safety research, with FHI’s Nick Bostrom receiving $1.5 million to support the new Strategic Artificial Intelligence Research Centre. Owain Evans also receives funding for a project on inferring human values, marking significant support for FHI’s research.[109]
2015 July 30 External review A critique on LessWrong highlights the need for improved communication around existential risks on FHI's website, sparking discussions on accessible outreach strategies for existential risk topics.[110]
2015 September 1 Financial FHI announces that Nick Bostrom has received a €2 million European Research Council Advanced Grant to further his research on AI safety and existential risk mitigation.[111]
2015 September 15 Social media Anders Sandberg from FHI hosts an AMA on Reddit, addressing questions about future studies, human enhancement, and global catastrophic risks.[112]
2015 October Publication "Moral Trade" by Toby Ord is published in Ethics, examining how moral goods can be exchanged under ethical systems.[113]
2015 November 23 External review Nick Bostrom and FHI are featured in The New Yorker article "The Doomsday Invention," exploring the implications of artificial intelligence on humanity’s future.[114]
2016 January 26 Publication "The Unilateralist's Curse and the Case for a Principle of Conformity" by Nick Bostrom, Thomas Douglas, and Anders Sandberg is published in the journal Social Epistemology. The paper discusses risks associated with unilateral decisions in global contexts, highlighting the need for collective decision-making in high-stakes scenarios. This is a featured FHI publication.[115][14]
2016 February 8–9 Workshop The Global Priorities Project (a collaboration between FHI and the Centre for Effective Altruism) hosts a policy workshop on existential risk. Attendees include "twenty leading academics and policymakers from the UK, USA, Germany, Finland, and Sweden".[116][117]
2016 May Publication The Global Priorities Project, associated with FHI, releases the Global Catastrophic Risk Report 2016. This report examines various risks that could pose global threats to humanity, such as pandemics, nuclear war, and AI, and provides recommendations for international mitigation strategies.[118]
2016 May Workshop FHI hosts a week-long workshop in Oxford titled "The Control Problem in AI", attended by ten members of the Machine Intelligence Research Institute. The workshop aims to tackle critical issues surrounding AI alignment and control.[117]
2016 May 27 – June 17 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the Machine Intelligence Research Institute and FHI, brings together professionals to discuss technical challenges associated with AI robustness and reliability. Talks are presented by FHI researchers Jan Leike and Stuart Armstrong.[119][120]
2016 June Staff FHI recruits William MacAskill and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.[118]
2016 June Publication The Age of Em: Work, Love, and Life When Robots Rule the Earth, a book about the implications of whole brain emulation by FHI research associate Robin Hanson, is published.[121] In October, FHI and Hanson organize a workshop about the book.[117][122]
2016 June 1 Publication The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. The paper, a collaboration between Google DeepMind and FHI, features Stuart Armstrong of FHI among the authors. It is presented at the Conference on Uncertainty in Artificial Intelligence (UAI).[123][101][14]
2016 August Staff Piers Millett joins FHI as Senior Research Fellow, focusing on biosecurity and pandemic preparedness.[124][125]
2016 September Financial The Open Philanthropy Project recommends a grant of $115,652 to FHI to support Piers Millett’s work on biosecurity and pandemic preparedness.[126]
2016 September 16 Publication Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.[127][101][128]
2016 September 22 Collaboration FHI’s webpage for its collaboration with Google DeepMind is published, though the exact start date of the collaboration is unspecified.[129]
2016 November Workshop The biotech horizon scanning workshop, co-hosted by the Centre for the Study of Existential Risk (CSER) and the Future of Humanity Institute (FHI), identifies potential high-impact developments in biological engineering. The workshop aims to assess emerging biotechnologies' risks and benefits, with findings intended for peer-reviewed publication.[117][130]
2016 December Workshop FHI hosts a workshop on "AI Safety and Blockchain," featuring prominent attendees such as Nick Bostrom, Vitalik Buterin, Jaan Tallinn, Wei Dai, Gwern Branwen, and Allan Dafoe. The workshop explores potential overlaps between AI safety and blockchain technologies, investigating how blockchain could improve global coordination in AI risk management.[131][117]
2017 January 15 Publication "Agent-Agnostic Human-in-the-Loop Reinforcement Learning" is uploaded to the arXiv. This paper discusses a framework allowing humans to influence reinforcement learning processes, without needing agent-specific knowledge, enhancing safety and adaptability in AI systems.[132][133]
2017 January 23 Publication The report "Existential Risk: Diplomacy and Governance" is published by the Global Priorities Project, which later integrates into FHI's policy work. This report outlines key existential risks, including AI, biotechnology, and climate change, and makes three policy recommendations: (1) establish governance for geoengineering; (2) initiate global pandemic scenario planning; and (3) boost international efforts in existential risk reduction.[133][134]
2017 January 25 Publication The FHI Annual Review 2016 is published. This review highlights FHI's major accomplishments and research focus areas for 2016, including AI safety, biosecurity, and policy recommendations on existential risk.[117]
2017 February 9 Publication Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal Global Policy. This paper addresses AI development's long-term strategic concerns, including the effects of openness on development speed, singleton vs. multipolar outcomes, and possible failure modes.[135][101][133]
2017 February 10 Workshop FHI hosts a workshop on normative uncertainty, examining uncertainty about moral frameworks and ethical theories, which impacts decision-making in existential risk policy.[136]
2017 February 19–20 Workshop FHI, in collaboration with CSER and the Leverhulme Centre for the Future of Intelligence, hosts a workshop on risks from malicious uses of AI. The event addresses concerns about AI's potential use by malicious actors, including discussions on strategies to mitigate such risks.[137]
2017 March Financial The Open Philanthropy Project grants $1,995,425 to FHI for general support, aimed at expanding FHI's research capabilities in existential risk and policy recommendations.[138][139][140]
2017 April 26 Publication The online book Modeling Agents with Probabilistic Programs by Owain Evans, Andreas Stuhlmüller, John Salvatier, and Daniel Filan is published. This book provides a detailed explanation of using probabilistic programming for modeling agent behaviors, with an emphasis on inverse reinforcement learning.[141][142]
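A minimal sketch can convey the flavor of the book's approach (the book itself uses the WebPPL probabilistic programming language; the parameters, observations, and rationality constant below are illustrative assumptions, not code or examples from the book): treat the agent as approximately softmax-rational and infer its utilities from observed choices by Bayesian updating.

 import numpy as np

 # Hypothetical setup: infer how much the agent prefers option A over B
 # from a handful of observed choices, assuming softmax-rational behavior.
 theta_grid = np.linspace(-2, 2, 401)           # candidate values of u(A) - u(B)
 prior = np.ones_like(theta_grid) / theta_grid.size

 observed = [1, 1, 0, 1, 1]                     # 1 = chose A, 0 = chose B (made-up data)
 beta = 2.0                                     # assumed rationality (inverse temperature)

 p_choose_A = 1 / (1 + np.exp(-beta * theta_grid))
 likelihood = np.ones_like(theta_grid)
 for c in observed:
     likelihood *= p_choose_A if c == 1 else (1 - p_choose_A)

 posterior = prior * likelihood
 posterior /= posterior.sum()
 print("posterior mean of u(A) - u(B):", (theta_grid * posterior).sum())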
2017 April 27 Publication "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv. The paper presents the aestivation hypothesis, suggesting advanced civilizations might be in a state of dormancy to conserve resources.[143][144]
2017 May Collaboration FHI announces that it will be joining the Partnership on AI. This collaboration aims to promote responsible AI development and foster discussions on ethical, safety, and transparency issues.[145]
2017 May 24 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv, with authors including Katja Grace, Allan Dafoe, and FHI researcher Owain Evans. This survey of AI researchers presents predictions about AI capabilities and the timeline for surpassing human performance.[146]
2017 July 3 Publication Slides for an upcoming paper by FHI researchers Anders Sandberg, Eric Drexler, and Toby Ord, titled "Dissolving the Fermi Paradox," are posted online. The presentation suggests new interpretations for the Fermi Paradox, addressing why we haven’t observed extraterrestrial civilizations despite the high probability of their existence.[147][148]
2017 July 17 Publication "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv. The paper discusses a novel reinforcement learning approach where human feedback is integrated to prevent unsafe behaviors during training.[149][150]
2017 August 25 Publication FHI announces three new forthcoming papers in the latest issue of Health Security. These papers address biosecurity challenges, proposing frameworks to mitigate risks associated with advanced biological technologies.[151][152]
2017 September 27 Carrick Flynn, a research project manager at FHI, posts his thoughts on AI policy and strategy on the Effective Altruism Forum. Although shared in a personal capacity, his insights reflect his experiences at FHI.[153]
2017 September 29 Financial Effective Altruism Grants fall 2017 recipients are announced. Gregory Lewis, one of the recipients, receives £15,000 (around $20,000) for research on biological risk mitigation with FHI.[154]
2017 October–December Project FHI launches its Governance of AI Program, co-directed by Nick Bostrom and Allan Dafoe. This initiative seeks to address policy, ethical, and regulatory questions related to AI governance.[155]
2018 February 20 Publication The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts the malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at FHI, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.[156][157][158]
2018 February Initiative The Governance of Artificial Intelligence Program (GovAI), launched within FHI in late 2017 (see above), is publicly announced. GovAI addresses the political, economic, and societal impacts of advanced AI, bringing together researchers from technology, policy, ethics, and law. Its work extends beyond academia, engaging policymakers and industry leaders to develop strategies and policy frameworks that prioritize transparency, accountability, and global governance.[159]
2018 March 1 Publication "Deciphering China's AI Dream", written by FHI researcher Jeffrey Ding, is published on the FHI website. One of the first comprehensive examinations of China's AI ambitions, the paper analyzes China's AI strategies by combining government policy documents, speeches, and investment data. Ding breaks down the key drivers of China's AI development, such as national security concerns, economic growth, and political influence, and shows how AI is central to China's global strategy. The analysis also explores how AI shapes China's international posture and what this means for global AI governance, especially competition with other powers such as the United States.[160]
2018 March Event Nick Bostrom delivers a keynote address at the South by Southwest (SXSW) conference on the transformative potential of artificial intelligence, particularly the prospect of superintelligence. Bostrom discusses the dangers of AI systems that might surpass human intelligence without adequate safety measures, framing the conversation around existential risk, and emphasizes the urgency of AI safety research and international cooperation. The talk draws wide attention to these arguments among technologists and policymakers.[161]
2018 April 1 Publication "Opportunities for Individual Donors in AI Safety" is published on LessWrong, encouraging individuals to fund AI safety initiatives. The post argues that even relatively small donations can advance research on AI alignment, which remains underfunded compared with mainstream AI development, and describes how individual donors can support research groups, scholarships, and independent researchers, emphasizing distributed and strategic funding.[162]
2018 June 1 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts", the expert survey first posted to the arXiv in May 2017 (see above), is published in final form. The survey compiles responses from hundreds of AI researchers on when AI might surpass human capabilities across a range of tasks; aggregate forecasts place machine superiority at tasks such as language translation and driving within the coming decades, with full automation of all human work expected much later. The paper becomes a widely cited reference in discussions of the ethical and practical implications of AI surpassing human performance.[163]
2018 June 29 Project Announcement FHI launches the "Research Scholars Programme", a project designed to train emerging researchers in areas critical to the long-term survival of humanity, such as AI safety, biosecurity, and existential risk mitigation. The program offers participants the opportunity to work directly with FHI’s senior researchers, contributing to major projects that focus on addressing global challenges. Participants engage in cross-disciplinary research, ranging from the technical aspects of AI alignment to the societal implications of biotechnology and pandemic risks. This initiative is seen as a step in building a talent pipeline dedicated to existential risk reduction, as it cultivates a new generation of thought leaders in these areas.[164]
2018 June 30 Publication "Dissolving the Fermi Paradox", by Anders Sandberg, Eric Drexler, and Toby Ord, is published. The paper reinterprets the famed Fermi paradox, the question of why we have not observed evidence of extraterrestrial civilizations despite seemingly favorable odds of their existence. Using probabilistic models that propagate the large uncertainties in the underlying astrobiological and technological parameters, the authors argue that the absence of evidence for intelligent life may be far less paradoxical than previously thought.[165]
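The paper's central move can be illustrated with a short Monte Carlo sketch: instead of multiplying point estimates in the Drake equation, each factor is drawn from a distribution that reflects its uncertainty. The log-uniform ranges below are placeholder assumptions for illustration only, not the distributions derived in the paper; even so, the qualitative effect is visible, in that the mean number of civilizations can be large while the probability that we are effectively alone remains substantial.

 import numpy as np

 rng = np.random.default_rng(0)
 N = 1_000_000  # number of Monte Carlo samples

 def log_uniform(lo, hi, size):
     # draw samples spread evenly across orders of magnitude
     return np.exp(rng.uniform(np.log(lo), np.log(hi), size))

 # Purely illustrative ranges for the Drake-equation factors (assumptions, not the paper's):
 R_star = log_uniform(1, 100, N)     # star formation rate per year
 f_p    = log_uniform(0.1, 1, N)     # fraction of stars with planets
 n_e    = log_uniform(0.1, 1, N)     # habitable planets per planetary system
 f_l    = log_uniform(1e-30, 1, N)   # fraction on which life arises (hugely uncertain)
 f_i    = log_uniform(1e-3, 1, N)    # fraction developing intelligence
 f_c    = log_uniform(1e-2, 1, N)    # fraction becoming detectable
 L      = log_uniform(1e2, 1e10, N)  # years a civilization remains detectable

 n_civ = R_star * f_p * n_e * f_l * f_i * f_c * L  # detectable civilizations per galaxy

 print("mean number of civilizations:", n_civ.mean())       # can be enormous...
 print("P(fewer than 1 civilization):", (n_civ < 1).mean())  # ...while "alone" stays likely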
2018 November 7 Publication Nick Bostrom publishes "The Vulnerable World Hypothesis", a provocative exploration of the dangers posed by future technological advancements. Bostrom argues that the increasing power of emerging technologies may soon enable the creation of catastrophic tools or weapons that could endanger civilization. He calls for the development of new global governance structures and surveillance systems to prevent the misuse of these technologies.[166]
2018 December 18 Publication The "2018 AI Alignment Literature Review and Charity Comparison" is published, offering a detailed assessment of the state of AI alignment research. The report reviews the progress made by various organizations working on AI safety and provides guidance for donors on where to allocate resources to maximize their impact. By outlining the challenges and potential breakthroughs in AI safety research, the publication serves as an essential resource for both researchers and philanthropists looking to contribute to the field. It also underscores the importance of strategic funding to maintain the momentum of AI alignment research and ensure the safe development of AI systems.[167]
2019 January 1 Publication The paper "Long-Term Trajectories of Human Civilization", authored by Seth Baum, Stuart Armstrong, and colleagues, is posted on the FHI website. It investigates possible futures for human civilization over the coming millennia, focusing on both the opportunities and risks posed by technological advancement, and explores scenarios ranging from highly positive long-term outcomes to existential threats that could lead to human extinction. The paper emphasizes the importance of strategic foresight in shaping policy decisions that will influence humanity's long-term survival, and offers policymakers and scholars a framework for thinking about humanity's future in the context of existential risk and technological evolution.[168]
2019 February 1 Publication The paper "An Upper Bound for the Background Rate of Human Extinction" is posted on the FHI website. It estimates the rate of human extinction from natural (non-anthropogenic) causes, using humanity's survival record and probabilistic models. The paper finds that, absent human-induced risks such as nuclear war or environmental destruction, the background rate of human extinction is exceedingly low, which underscores the relative importance of addressing man-made existential threats. The research provides a foundation for future studies in existential risk and emphasizes the need for proactive strategies to mitigate risks arising from human activities.[169]
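The core survival-time argument can be sketched in a few lines of arithmetic. The figures below (200,000 years of Homo sapiens survival, a 10% rejection threshold) are illustrative assumptions rather than the paper's reported inputs or bounds.

 import math

 # Back-of-the-envelope version of the survival-time argument (not the paper's exact model):
 # if the annual natural-extinction rate were mu, the chance of surviving T years is roughly
 # (1 - mu)**T ~= exp(-mu * T). A long observed survival record rules out large values of mu.

 T = 200_000   # assumed years Homo sapiens has existed (rough, illustrative figure)
 alpha = 0.1   # reject rates that would make our observed survival less than 10% likely

 mu_upper = -math.log(alpha) / T
 print(f"upper bound on annual natural extinction rate: {mu_upper:.2e}")
 print(f"equivalently, less than about 1 in {1 / mu_upper:,.0f} per year")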
2019 March 9 Collaboration FHI collaborates with the Centre for the Study of Existential Risk (CSER) to provide advice to the United Nations High-Level Panel on Digital Cooperation. Their joint report addresses key global digital risks, such as cybersecurity threats and the ethical development of AI technologies. The report emphasizes the importance of establishing international standards for the governance of AI, as well as ensuring that digital technologies benefit all of humanity rather than creating new forms of inequality or insecurity. The collaboration between FHI and CSER reflects the increasing recognition of AI as a global issue that requires coordinated, cross-border solutions.[170]
2019 May 9 Publication The article "Claims & Assumptions Made in Eternity in Six Hours" examines the claims and assumptions behind "Eternity in Six Hours", the earlier FHI paper arguing that humanity could rapidly expand across the universe. It reviews the technological, physical, and energy constraints that could make such expansion far more difficult than anticipated, arguing for a more cautious and realistic approach to space expansion and long-term human survival.[171]
2019 July Event FHI researchers participate in the 2019 Beneficial AI Conference organized by the Future of Life Institute, where experts from across the AI field discuss aligning AI development with human values. Topics include preventing the misuse of AI, mitigating bias, and ensuring fairness in AI systems. The conference serves as a platform for exchanging ideas about how to create global safety standards for AI development and sets the stage for future international collaborations on AI safety.[172]
2019 October 1 Publication The paper "Artificial Intelligence: American Attitudes and Trends", authored by Baobao Zhang and Allan Dafoe, provides an in-depth analysis of public opinion on AI in the United States. The study reveals a range of views, with the public expressing both optimism about AI’s potential and concerns about issues such as job displacement, privacy, and security. The findings offer valuable insights for policymakers, helping them understand public sentiment toward AI and how it may affect future regulatory frameworks. The research contributes to the broader discourse on AI governance by providing data on how the American public views the risks and benefits of AI technologies.[173]
2019 December 19 Publication The 2019 AI Alignment Literature Review and Charity Comparison is published, providing a detailed evaluation of the progress made in AI alignment research. It reviews the efforts of organizations involved in AI safety, offering recommendations for donors to maximize their impact in this critical area. The report underscores the importance of sustained funding for AI safety research and highlights both challenges and potential breakthroughs.[174]
2020 December 17 Publication A paper titled "Ranking the Effectiveness of Worldwide COVID-19 Government Interventions" is published in Nature Human Behaviour. The study provides a comprehensive assessment of the effectiveness of non-pharmaceutical interventions (NPIs) during the pandemic, analyzing data from many countries to rank measures such as social distancing, mask mandates, and lockdowns by their impact on transmission rates. The findings offer governments evidence-based guidance on the most effective strategies under varying circumstances.[175]
2021 August 27 Publication The Future of Humanity Institute (FHI) publishes the paper "Quantitative National Risk Reports (QNRs)". This paper introduces methodologies for assessing national-level risks related to existential threats, such as pandemics, AI misalignment, and biosecurity risks. The report offers comprehensive guidelines for improving global coordination and preparedness by providing a quantifiable model for assessing these risks. It aims to foster better understanding and management of potential global threats.[176]
2022 June Conference Participation FHI actively participates in the International Conference on Machine Learning (ICML), presenting work on AI safety, fairness, and robustness. FHI researchers contribute to workshops emphasizing the integration of safety mechanisms into AI systems, particularly highlighting risks associated with unregulated advancements in fields like autonomous decision-making and algorithmic biases. Their contributions reinforce the need for AI to be both transparent and socially beneficial.[177]
2022 September Program Expansion FHI expands its Research Scholars Programme, adding specialized tracks in AI governance, biosecurity, and macrostrategy. This program aims to train the next generation of scholars to tackle critical global challenges, fostering interdisciplinary research on existential risks and long-term human survival. The expansion reflects FHI’s dedication to building a robust research community equipped to address AI safety, biosecurity, and global governance issues.[178]
2023 January Public Statement FHI Director Nick Bostrom issues a public apology following the resurfacing of a controversial email from the 1990s. In the email, Bostrom made offensive remarks that sparked backlash within academic and tech communities. His apology addresses the harm caused by the comments and reaffirms his commitment to diversity and inclusion. However, the incident leads to broader discussions about accountability in leadership roles at institutions like FHI.[179] [180]
2023 August 15 Publication Anders Sandberg releases a chapter titled "The Lifespan of Civilizations: Do Societies 'Age,' or Is Collapse Just Bad Luck?" in the book "Existential Risk Studies." In this work, Sandberg investigates whether the collapse of civilizations is driven by internal factors like societal aging or external, unpredictable events such as natural disasters or wars. The chapter provides an analysis of historical civilizations and the reasons behind their collapse, offering a detailed exploration of potential parallels for modern societies facing existential risks. It adds to the broader discourse on the vulnerabilities of civilizations and the importance of resilience in global governance. This publication continues Sandberg's research into existential risks, a central focus of the Future of Humanity Institute.[181]
2023 August 28 Publication "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness", a multi-institution collaboration that includes FHI-affiliated researchers, is posted to the arXiv. The paper tackles the problem of consciousness in AI systems, proposing methodologies for assessing whether AI systems could possess or simulate conscious experience. The study draws on leading theories in the science of consciousness, comparing human cognitive processes with machine learning models, and aims to address key ethical and practical questions in AI development, particularly the moral status of AI systems and the potential risks associated with conscious or quasi-conscious AI agents.[182]
2023 September 5 Publication The paper "Truthful AI: Towards Developing and Governing AI that Does Not Lie", co-authored by FHI-affiliated researchers, focuses on the creation of AI systems designed to uphold truthfulness and transparency. The paper outlines governance frameworks and technical measures to ensure that AI systems do not propagate misinformation, even unintentionally. The work sits within the broader context of AI safety research, a primary focus of the Future of Humanity Institute, addressing concerns about the integrity and trustworthiness of AI systems in fields such as journalism, governance, and public decision-making.[183]
2024 January Report Anders Sandberg publishes "Future of Humanity Institute 2005-2024: Final Report", which offers a retrospective on FHI's 19 years of operation. The report documents the institute’s contributions to existential risk research, highlighting its key projects in AI safety, biosecurity, and the study of long-term human futures. Sandberg provides reflections on FHI’s collaborations with global institutions and its evolving focus over the years, as well as the internal and external challenges it faced leading up to its closure.[184]
2024 January Publication Anders Sandberg publishes "Thoughts at the End of an Era" on his blog, reflecting on the closure of the Future of Humanity Institute. In the post, Sandberg shares his personal thoughts on FHI’s legacy, the global impact of its research, and the reasons for its closure. He also discusses the future of existential risk research, expressing optimism for continued work in the field despite the challenges faced by FHI.[185]
2024 April 16 Closure After nearly two decades of pioneering research, the Future of Humanity Institute is officially closed by Oxford University due to administrative disagreements and controversies surrounding its research direction and affiliations. The closure marks the end of a highly influential institution, but FHI's legacy endures through its contributions to global existential risk research, particularly in areas like AI safety, biosecurity, and global governance.[186]

Numerical and visual data

Google Scholar

The following table summarizes per-year mentions on Google Scholar as of December 14, 2021.

Year "Future of Humanity Institute"
2006 16
2007 29
2008 53
2009 48
2010 50
2011 88
2012 75
2013 95
2014 113
2015 139
2016 177
2017 230
2018 315
2019 355
2020 496
Future of Humanity Institute gscho.png

Google Trends

The image below shows Google Trends data for Future of Humanity Institute (Research institute), from January 2005 to February 2021, when the screenshot was taken. Interest is also ranked by country and displayed on a world map.[187]

Future of Humanity Institute gt.jpg


Google Ngram Viewer

The chart below shows Google Ngram Viewer data for Future of Humanity Institute, from 2005 to 2019.[188]

Future of humanity ngram.jpeg

Wikipedia pageviews for FHI page

The following plots pageviews for the Future of Humanity Institute Wikipedia page. The image is generated on Wikipedia Views.

Future of Humanity Institute Wikipedia pageviews.png

Wikipedia pageviews for Nick Bostrom page

The following plots pageviews for the Nick Bostrom Wikipedia page. The image is generated on Wikipedia Views.

Nick Bostrom Wikipedia pageviews.png


Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

See the commit history on GitHub for a more detailed revision history.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

What the timeline is still missing

Timeline update strategy

See also

External links

  • Official website
  • Future of Humanity Institute (Wikipedia)
  • LessWrong Wiki page on FHI. The LessWrong Wiki is the wiki associated with the group blog LessWrong. The pages on the wiki have a rationalist/effective altruist audience in mind and are often more useful than the corresponding Wikipedia page on a topic.
  • Donations List Website (donee). The Donations List Website is a website by Vipul Naik that collects data on donations in the effective altruism and rationality spheres. This is the donee page for FHI, listing donations made to FHI.
  • AI Watch. AI Watch is a website by Issa Rice that tracks people and organizations in AI safety. This is the organization page for FHI, showing some basic information as well as a list of AI safety-related positions at FHI.

References

  1. 1.0 1.1 1.2 1.3 1.4 "Bimonthly Progress Report - Issue 3" (PDF). Future of Humanity Institute. Archived from the original (PDF) on January 17, 2009. Retrieved March 18, 2018. 
  2. 2.0 2.1 2.2 2.3 2.4 2.5 2.6 "Microsoft Word - CV Nick Bostrom (19AugNBrevised).docx - cv.pdf" (PDF). Retrieved March 16, 2018. 
  3. "Showing results for: anthropic-principle.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 1998-08-30T04:00:00Z 
  4. "anthropic-principle.com". Archived from the original on January 25, 1999. Retrieved March 11, 2018. 
  5. "Showing results for: future-studies.com". ICANN WHOIS. Retrieved March 15, 2018. Creation Date: 1998-08-30T04:00:00Z 
  6. "Future Studies". Archived from the original on October 12, 1999. Retrieved March 15, 2018. 
  7. "Showing results for: ANALYTIC.ORG". ICANN WHOIS. Retrieved March 15, 2018. Creation Date: 1998-12-14T05:00:00Z 
  8. "Nick Bostrom's thinking in analytic philosophy". Archived from the original on November 28, 1999. Retrieved March 15, 2018. 
  9. "Nick Bostrom's thinking in analytic philosophy". Retrieved March 15, 2018. 
  10. "Showing results for: simulation-argument.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2001-10-31T08:55:28Z 
  11. "simulation-argument.com". Internet Archive. Retrieved March 10, 2018. 
  12. 12.0 12.1 12.2 12.3 12.4 12.5 12.6 "Wayback Machine" (PDF). Archived from the original (PDF) on May 16, 2011. Retrieved March 11, 2018. 
  13. Bostrom, Nick. "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" (PDF). Retrieved March 14, 2018. 
  14. 14.0 14.1 14.2 14.3 14.4 14.5 14.6 14.7 14.8 Future of Humanity Institute - FHI. "Selected Publications Archive - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  15. "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018. 
  16. "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018. 
  17. 17.0 17.1 17.2 "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018. 
  18. Tegmark, Max; Bostrom, Nick. "How Unlikely is a Doomsday Catastrophe?" (PDF). Retrieved March 14, 2018. 
  19. Bostrom, Nick. "What is a Singleton?". Retrieved March 11, 2018. 
  20. 20.0 20.1 20.2 20.3 20.4 20.5 20.6 20.7 20.8 20.9 "Future of Humanity Report" (PDF). July 18, 2007. Archived from the original (PDF) on September 8, 2011. Retrieved February 7, 2018. 
  21. "Rebecca Roache" (PDF). Archived from the original (PDF) on July 4, 2007. Retrieved March 16, 2018. 
  22. "Anders Sandberg". LinkedIn. Retrieved March 15, 2018. 
  23. Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018. 
  24. "Bimonthly Progress Report - Issue 1" (PDF). Future of Humanity Institute. Archived from the original (PDF) on January 17, 2009. Retrieved March 18, 2018. 
  25. "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Retrieved March 11, 2018. 
  26. 26.0 26.1 "Bimonthly Progress Report - Issue 2" (PDF). Future of Humanity Institute. Archived from the original (PDF) on January 17, 2009. Retrieved March 18, 2018. 
  27. "Showing results for: EXISTENTIAL-RISK.ORG". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2006-07-19T23:23:38Z 
  28. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  29. "Overcoming Bias: How To Join". Retrieved September 26, 2017. 
  30. "Overcoming Bias". Retrieved September 26, 2017. 
  31. "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018. 
  32. "Rafaela Hillerbrand". LinkedIn. Retrieved March 15, 2018. 
  33. "FHI Staff". Archived from the original on January 16, 2007. Retrieved March 16, 2018. 
  34. "Future of Humanity Institute: Revision history - Wikipedia". English Wikipedia. Retrieved March 14, 2018. 
  35. 35.0 35.1 "Progress Report - Issue 4" (PDF). Future of Humanity Institute. Archived from the original (PDF) on December 21, 2008. Retrieved March 18, 2018. 
  36. 36.0 36.1 36.2 36.3 36.4 36.5 36.6 36.7 "Wayback Machine" (PDF). Archived from the original (PDF) on April 13, 2012. Retrieved March 11, 2018. 
  37. "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018. 
  38. 38.0 38.1 38.2 38.3 "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018. 
  39. "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018. 
  40. "Showing results for: global-catastrophic-risks.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2008-01-22T20:47:11Z 
  41. "global-catastrophic-risks.com". Retrieved March 10, 2018. 
  42. "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018. 
  43. "Overcoming Bias : Moral uncertainty – towards a solution?". Retrieved March 10, 2018. 
  44. Dai, Wei (October 21, 2014). "Is the potential astronomical waste in our universe too small to care about?". LessWrong. Retrieved March 15, 2018.
  45. Shulman, Carl (August 21, 2014). "Population ethics and inaccessible populations". Reflective Disequilibrium. Retrieved March 16, 2018. Some approaches, such as Nick Bostrom and Toby Ord's Parliamentary Model, consider what would happen if each normative option had resources to deploy on its own (related to its plausibility or appeal), and look for Pareto-improvements. 
  46. "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018. 
  47. "FAQ - Lesswrongwiki". [[wikipedia |LessWrong]]. Retrieved June 1, 2017.  line feed character in |publisher= at position 12 (help)
  48. 48.0 48.1 48.2 "SIAI vs. FHI achievements, 2008-2010 - Less Wrong". LessWrong. Retrieved March 14, 2018.
  49. "FHIOxford - YouTube". YouTube. Retrieved March 15, 2018. 
  50. Bostrom, Nick; Sandberg, Anders (2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" (PDF). Retrieved March 15, 2018. 
  51. "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018. 
  52. "Eric Mandelbaum". Archived from the original on March 16, 2018. Retrieved March 16, 2018. 
  53. "Winter Intelligence" (PDF). Archived from the original (PDF) on July 11, 2011. Retrieved March 15, 2018. 
  54. "Future of Humanity Institute - Winter Intelligence Conference". Archived from the original on January 16, 2013. Retrieved March 15, 2018. 
  55. Future of Humanity Institute - FHI (November 8, 2017). "Winter Intelligence Conference 2011 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  56. Future of Humanity Institute - FHI (January 14, 2011). "Winter Intelligence Conference 2011 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  57. "Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books". Retrieved February 8, 2018. 
  58. "Future of Humanity Institute - Books". Archived from the original on January 16, 2013. Retrieved February 8, 2018. 
  59. "CarlShulman comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong". [[wikipedia |LessWrong]]. Retrieved March 15, 2018.  line feed character in |publisher= at position 12 (help)
  60. "Welcome". Oxford Martin Programme on the Impacts of Future Technology. Retrieved July 26, 2017. The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University. 
  61. "Stuart Armstrong". LinkedIn. Retrieved March 15, 2018. 
  62. "Daniel-Dewey.pdf" (PDF). Retrieved March 15, 2018. 
  63. Sandberg, Anders; Armstrong, Stuart (June 6, 2012). "Indefinite survival through backup copies" (PDF). Archived from the original (PDF) on January 16, 2013. Retrieved March 15, 2018. 
  64. "Future of Humanity Institute - Publications". Archived from the original on January 12, 2013. Retrieved March 15, 2018. 
  65. "Winter Intelligence Conferences | The future of artificial general intelligence". Archived from the original on August 15, 2012. Retrieved March 11, 2018. 
  66. "Future of Humanity Institute (@FHIOxford)". Twitter. Retrieved March 11, 2018. 
  67. "Room for more funding at the Future of Humanity Institute - Less Wrong". [[wikipedia |LessWrong]]. Retrieved March 14, 2018.  line feed character in |publisher= at position 12 (help)
  68. "Future of Humanity Institute - News Archive". Archived from the original on January 12, 2013. Retrieved March 11, 2018. 
  69. "AGI Impacts | Winter Intelligence Conferences". Archived from the original on October 30, 2012. Retrieved March 15, 2018. 
  70. "Staff | Future of Humanity Institute". Archived from the original on June 15, 2013. Retrieved March 16, 2018. 
  71. Bostrom, Nick. "Existential Risk Prevention as Global Priority" (PDF). Retrieved March 14, 2018. 
  72. Ross Andersen (February 25, 2013). "Omens: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?". Aeon. Retrieved March 15, 2018.
  73. ESRogs (March 2, 2013). "[LINK] Well-written article on the Future of Humanity Institute and Existential Risk". LessWrong. Retrieved March 15, 2018.
  74. Future of Humanity Institute - FHI (February 25, 2013). "Aeon Magazine Feature: "Omens" - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  75. Armstrong, Stuart; Sandberg, Anders. "Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox" (PDF). Archived from the original (PDF) on April 9, 2014. Retrieved March 15, 2018. 
  76. "FHI & Amlin join forces to understand systemic risk". Oxford Martin School. Retrieved March 15, 2018. 
  77. Future of Humanity Institute - FHI. "FHI-Amlin Collaboration - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 15, 2018. 
  78. "FHI-Amlin Collaboration | Future of Humanity Institute". Archived from the original on May 23, 2014. Retrieved March 15, 2018. 
  79. "Nick Beckstead". LinkedIn. Retrieved March 15, 2018. 
  80. Frey, Carl Benedikt; Osborne, Michael A. "The Future of Employment: How Susceptible are Jobs to Computerisation?" (PDF). Retrieved March 14, 2018. 
  81. Future of Humanity Institute - FHI (November 26, 2013). "FHI Hosts Machine Intelligence Research Institute Maths Workshop - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  82. "Donating to MIRI vs. FHI vs. CEA vs. CFAR - Less Wrong Discussion". [[wikipedia |LessWrong]]. Retrieved March 14, 2018.  line feed character in |publisher= at position 12 (help)
  83. "Global Priorities Project Strategy Overview" (PDF). Retrieved March 10, 2018. 
  84. "HOME". The Global Priorities Project. Retrieved March 10, 2018. 
  85. "Our history". Centre For Effective Altruism. Retrieved March 10, 2018. 
  86. Future of Humanity Institute - FHI (February 7, 2014). "FHI hosts Agent Based Modelling workshop - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  87. Future of Humanity Institute - FHI (February 10, 2014). "FHI-Amlin Conference on Systemic Risk - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  88. "Home | Future of Humanity Institute". Archived from the original on July 17, 2014. Retrieved March 16, 2018. 
  89. "Science AMA Series: We are researchers at the Future of Humanity Institute at Oxford University, ask us anything! • r/science". reddit. Retrieved March 14, 2018. 
  90. Future of Humanity Institute - FHI (May 16, 2014). "Future of Humanity Institute answers questions from the public - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  91. Future of Humanity Institute - FHI (July 16, 2014). "MIRIx at FHI - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  92. "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017. 
  93. Beckstead, Nick; Bostrom, Nick; Bowerman, Niel; Cotton-Barratt, Owen; MacAskill, William; hÉigeartaigh, Seán Ó; Ord, Toby. "Unprecedented Technological Risks" (PDF). Retrieved March 14, 2018.
  94. "Science AMA Series: I'm Nick Bostrom, Director of the Future of Humanity Institute, and author of "Superintelligence: Paths, Dangers, Strategies", AMA • r/science". reddit. Retrieved March 14, 2018. 
  95. "Daniel Dewey". AI Watch. March 1, 2018. Retrieved March 14, 2018. 
  96. Future of Humanity Institute - FHI (October 1, 2014). "Thanks - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  97. "Showing results for: POPULATIONETHICS.ORG". ICANN WHOIS. Retrieved March 15, 2018. Creation Date: 2014-10-28T08:53:08Z 
  98. "Welcome". Population Ethics: Theory and Practice. Archived from the original on December 23, 2014. Retrieved March 15, 2018. 
  99. "FHI contributes chapter on existential risk to UK Chief Scientific Advisor's report". Retrieved March 14,.  Check date values in: |access-date= (help)
  100. "Learning the Preferences of Bounded Agents" (PDF). Retrieved March 10, 2018. 
  101. 101.0 101.1 101.2 101.3 "2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum". Retrieved March 10, 2018. 
  102. "Owain Evans". LinkedIn. Retrieved March 15, 2018. 
  103. "CV". Ben Levinstein. Retrieved March 15, 2018. 
  104. "Staff | Future of Humanity Institute". Archived from the original on April 13, 2015. Retrieved March 16, 2018. 
  105. "CV". Simon Beard. Retrieved March 15, 2018. 
  106. Future of Humanity Institute - FHI (September 28, 2015). "Strategic Artificial Intelligence Research Centre - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  107. "AI safety conference in Puerto Rico". Future of Life Institute. October 12, 2015. Retrieved July 13, 2017. 
  108. Future of Humanity Institute - FHI (January 8, 2015). "FHI in 2014 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  109. "Grants Timeline - Future of Life Institute". Future of Life Institute. Retrieved July 13, 2017. 
  110. "Help Build a Landing Page for Existential Risk? - Less Wrong". [[wikipedia |LessWrong]]. Retrieved March 14, 2018.  line feed character in |publisher= at position 12 (help)
  111. Future of Humanity Institute - FHI (September 25, 2015). "FHI awarded prestigious €2m ERC Grant - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  112. "I am a researcher at the Future of Humanity Institute in Oxford, working on future studies, human enhancement, global catastrophic risks, reasoning under uncertainty and everything else. Ask me anything! • r/Futurology". reddit. Retrieved March 14, 2018. 
  113. Ord, Toby (2015). "Moral Trade" (PDF). Ethics. Retrieved March 14, 2018. 
  114. Khatchadourian, Raffi (November 23, 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. Retrieved March 15, 2018. 
  115. "The Unilateralist's Curse and the Case for a Principle of Conformity". Taylor & Francis. Retrieved March 14, 2018. 
  116. Future of Humanity Institute - FHI (October 25, 2016). "Policy workshop hosted on existential risk - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  117. 117.0 117.1 117.2 117.3 117.4 117.5 Future of Humanity Institute - FHI (July 31, 2017). "FHI Annual Review 2016 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  118. 118.0 118.1 Future of Humanity Institute - FHI (July 31, 2017). "Quarterly Update Summer 2016 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  119. "Colloquium Series on Robust and Beneficial AI - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 13, 2018. 
  120. Future of Humanity Institute - FHI (August 5, 2016). "Colloquium Series on Robust and Beneficial AI - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  121. "The Age of Em, A Book". Retrieved March 13, 2018. 
  122. Future of Humanity Institute - FHI (October 25, 2016). "Robin Hanson and FHI hold seminar and public talk on "The Age of Em" - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  123. Bensinger, Rob (September 12, 2016). "New paper: "Safely interruptible agents" - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 10, 2018. 
  124. "Piers Millett". LinkedIn. Retrieved March 15, 2018. 
  125. Future of Humanity Institute - FHI (December 5, 2016). "FHI hires first biotech policy specialist - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  126. "Future of Humanity Institute — Biosecurity and Pandemic Preparedness". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018. 
  127. "[1609.04994] Exploration Potential". Retrieved March 10, 2018. 
  128. Future of Humanity Institute - FHI (October 5, 2016). "Exploration potential - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 
  129. Future of Humanity Institute - FHI (March 8, 2017). "DeepMind collaboration - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  130. Future of Humanity Institute - FHI (November 2016). "Biotech Horizon Scanning Workshop". Future of Humanity Institute. Retrieved March 13, 2018. 
  131. Future of Humanity Institute - FHI (January 19, 2017). "FHI holds workshop on AI safety and blockchain - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  132. "[1701.04079v1] Agent-Agnostic Human-in-the-Loop Reinforcement Learning". Retrieved March 14, 2018. 
  133. 133.0 133.1 133.2 Future of Humanity Institute - FHI (July 31, 2017). "Quarterly Update Spring 2017 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  134. Farquhar, Sebastian; Halstead, John; Cotton-Barratt, Owen; Schubert, Stefan; Belfield, Haydn; Snyder-Beattie, Andrew (2017). "Existential Risk: Diplomacy and Governance" (PDF). Global Priorities Project. Retrieved March 14, 2018. 
  135. "Strategic Implications of Openness in AI Development". Retrieved March 10, 2018. 
  136. Future of Humanity Institute - FHI (March 8, 2017). "Workshop on Normative Uncertainty". Future of Humanity Institute. Retrieved March 16, 2018. 
  137. Future of Humanity Institute - FHI (November 4, 2017). "Bad Actors and AI Workshop". Future of Humanity Institute. Retrieved March 16, 2018. 
  138. Cite error: Invalid <ref> tag; no text was provided for refs named open-phil-guide-grant-seekers
  139. Cite error: Invalid <ref> tag; no text was provided for refs named vipul-comment
  140. "Future of Humanity Institute — General Support". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018. 
  141. "Modeling Agents with Probabilistic Programs". Retrieved March 13, 2018. 
  142. Future of Humanity Institute - FHI (April 26, 2017). "New Interactive Tutorial: Modeling Agents with Probabilistic Programs - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018. 
  143. "[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox". Retrieved March 10, 2018. 
  144. Future of Humanity Institute - FHI (July 31, 2017). "FHI Quarterly Update Summer 2017". Future of Humanity Institute. Retrieved March 14, 2018. 
  145. Future of Humanity Institute - FHI (May 17, 2017). "FHI is joining the Partnership on AI". Future of Humanity Institute. Retrieved March 16, 2018. 
  146. "[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved July 13, 2017. 
  147. "Has the Fermi paradox been resolved? - Marginal REVOLUTION". Marginal REVOLUTION. July 3, 2017. Retrieved March 13, 2018. 
  148. gwern (August 16, 2017). "September 2017 news - Gwern.net". Retrieved March 13, 2018. 
  149. "[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention". Retrieved March 10, 2018. 
  150. Larks. "2018 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved March 10, 2018. 
  151. Future of Humanity Institute - FHI (August 25, 2017). "FHI publishes three new biosecurity papers in 'Health Security' - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  152. Future of Humanity Institute - FHI (October 10, 2017). "Quarterly Update Autumn 2017 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  153. Flynn, Carrick. "Personal thoughts on careers in AI policy and strategy". Effective Altruism Forum. Retrieved March 15, 2018. 
  154. "EA Grants Fall 2017 Recipients". Google Docs. Retrieved March 11, 2018. 
  155. Future of Humanity Institute - FHI (January 19, 2018). "Quarterly Update Winter 2017 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  156. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018. 
  157. "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018. 
  158. Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018. 
  159. "Governance of AI Program Launched". Archived from the original on February 25, 2018. Retrieved September 13, 2024. 
  160. "Deciphering China's AI Dream" (PDF). Archived from the original (PDF) on March 5, 2018. Retrieved September 13, 2024. 
  161. "Nick Bostrom at SXSW 2018". Retrieved September 13, 2024. 
  162. "Opportunities for Individual Donors in AI Safety". Retrieved September 13, 2024. 
  163. "When Will AI Exceed Human Performance? Evidence from AI Experts". Retrieved September 13, 2024. 
  164. "FHI Research Scholars Programme". Retrieved September 13, 2024. 
  165. "Dissolving the Fermi Paradox". Retrieved September 13, 2024. 
  166. "The Vulnerable World Hypothesis". Retrieved September 13, 2024. 
  167. "2018 AI Alignment Literature Review and Charity Comparison". Retrieved September 13, 2024. 
  168. "Long-Term Trajectories of Human Civilization". Archived from the original on January 10, 2019. Retrieved September 13, 2024. 
  169. "An Upper Bound for the Background Rate of Human Extinction". Archived from the original on February 15, 2019. Retrieved September 13, 2024. 
  170. "CSER and FHI Advice to UN High-Level Panel on Digital Cooperation". Retrieved September 13, 2024. 
  171. "Claims & Assumptions Made in Eternity in Six Hours". Retrieved September 13, 2024. 
  172. "Beneficial AI 2019". Retrieved September 13, 2024. 
  173. "Artificial Intelligence: American Attitudes and Trends". Retrieved September 13, 2024. 
  174. "2019 AI Alignment Literature Review and Charity Comparison". Retrieved September 22, 2024. 
  175. "Ranking the Effectiveness of Worldwide COVID-19 Government Interventions". Nature Human Behaviour. 4: 1303–1312. December 2020. 
  176. "Quantitative National Risk Reports (QNRs)". Retrieved September 22, 2024. 
  177. "ICML 2022 Schedule". Retrieved September 22, 2024. 
  178. "FHI Research Scholars Programme". Archived from the original on September 15, 2022. Retrieved September 22, 2024. 
  179. "Nick Bostrom's Statement". Archived from the original on January 15, 2023. Retrieved September 22, 2024. 
  180. "Why I'm Personally Upset with Nick Bostrom Right Now". Retrieved September 22, 2024. 
  181. "The Lifespan of Civilizations". Retrieved September 22, 2024. 
  182. "Consciousness in Artificial Intelligence". Retrieved September 22, 2024. 
  183. "Truthful AI". Retrieved September 22, 2024. 
  184. "Future of Humanity Institute 2005-2024: Final Report". Retrieved September 13, 2024. 
  185. "Thoughts at the End of an Era". Retrieved September 13, 2024. 
  186. "FHI: Future of Humanity Institute Has Shut Down". Retrieved September 13, 2024. 
  187. "Future of Humanity Institute". Google Trends. Retrieved 16 February 2021. 
  188. "Future of Humanity Institute". books.google.com. Retrieved 20 February 2021.