Timeline of Future of Humanity Institute

This is a timeline of the Future of Humanity Institute (FHI).

Sample questions

Big picture

Time period Development summary More details
Before 2005 This is the period leading up to FHI's existence. Transhumanism, various mailing lists, Bostrom developing his early ideas, etc.
2005–?? FHI is founded
2014–?? Superintelligence is published

Visual data

Wikipedia pageviews for FHI page

The following plots pageviews for the Future of Humanity Institute Wikipedia page. The image is generated on Wikipedia Views.

Future of Humanity Institute Wikipedia pageviews.png
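The image above is generated on the Wikipedia Views tool; the monthly counts behind such a plot can also be pulled directly from the public Wikimedia Pageviews REST API, which covers roughly mid-2015 onward. The following minimal Python sketch is illustrative only: it is not the code behind the image, and the article title, date range, and User-Agent string are arbitrary examples.

import json
import urllib.request

# Illustrative sketch: fetch monthly pageview counts for the FHI article from
# the public Wikimedia Pageviews REST API.
article = "Future_of_Humanity_Institute"
url = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    "en.wikipedia/all-access/user/" + article + "/monthly/20150701/20180301"
)
request = urllib.request.Request(url, headers={"User-Agent": "timeline-pageviews-example/0.1"})
with urllib.request.urlopen(request) as response:
    data = json.loads(response.read().decode("utf-8"))

# Each item carries a timestamp (YYYYMMDDHH) and the view count for that month.
for item in data["items"]:
    print(item["timestamp"][:6], item["views"])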

Wikipedia pageviews for Nick Bostrom page

The following plots pageviews for the Nick Bostrom Wikipedia page. The image is generated on Wikipedia Views.

Nick Bostrom Wikipedia pageviews.png

Full timeline

Year Month and date Event type Details
1973 March 10 Nick Bostrom is born.
1998 August 30 Website The domain name for the Anthropic Principle website, anthropic-principle.com, is registered.[1] The first Internet Archive snapshot of the website is from January 25, 1999.[2]
1998 August 30 Website The domain name for Nick Bostrom's Future Studies website, future-studies.com, is registered.[3] The first Internet Archive snapshot of the website is from October 12, 1999.[4]
1998 December 14 Website The domain name for Nick Bostrom's analytic philosophy website, analytic.org, is registered.[5] The first Internet Archive snapshot of the website is from November 28, 1999.[6] As of March 2018, the website is not maintained and points to Bostrom's main website, nickbostrom.com.[7]
2001 October 31 Website The Simulation Argument website's domain name, simulation-argument.com, is registered.[8] The first Internet Archive snapshot of the website would be on December 5, 2001.[9] The website hosts information about the simulation hypothesis, especially as articulated by Bostrom. In the FHI Achievements Report for 2008–2010, the Simulation Argument website is listed under websites maintained by FHI members.[10]
2003 Publication Nick Bostrom's "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" is published in the journal Utilitas.[11] This is a featured FHI publication.[12]
2005 June 1 or November 29 The Future of Humanity Institute is established.[13][14][15]
2005 December 18 Publication "How Unlikely is a Doomsday Catastrophe?" by Max Tegmark and Nick Bostrom is published.[16] This is a featured FHI publication.[12]
2006 Publication "What is a Singleton?" by Nick Bostrom is published in the journal Linguistic and Philosophical Investigations. The paper introduces the idea of a singleton, a hypothetical "world order in which there is a single decision-making agency at the highest level".[17]
2006 Staff Rebecca Roache joins FHI as a Research Fellow. Her topic of research is ethical issues regarding human enhancement and new technology.[18][19]
2006 January Staff Anders Sandberg joins FHI. As of March 2018 he is a Senior Research Fellow at FHI.[20]
2006 March 2 Website The ENHANCE project website is created[21] by Anders Sandberg.[18]
2006 July Publication "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" by Nick Bostrom and Toby Ord is published.[22] The paper introduces the reversal test in the context of bioethics of human enhancement. This is a featured FHI publication.[12]
2006 July 19 Website The domain name for the existential risk website, existential-risk.org, is registered on this day.[23]
2006 November 20 Website Robin Hanson starts Overcoming Bias.[24] The first post on the blog seems to be from November 20.[25] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[26] The blog seems to have received support from FHI in the beginning.[27][18]
2006 December Staff Rafaela Hillerbrand joins FHI as a Research Fellow for "work on epistemological and ethical problems for decisions under risk and uncertainty". She would remain at FHI until October 2008.[28]
2006 December 17 Outside review The initial version of the Wikipedia page for FHI is created.[29]
Late 2006 or early 2007 Staff Nicholas Shackel joins FHI as a Research Fellow in Theoretical Ethics.[30]
2005–2007 Lighthill Risk Network is created by Peter Taylor of FHI.[18]
2007 May Workshop The Whole Brain Emulation Workshop is hosted by FHI.[18] The workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008.[31]
2007 July 18 Internal review The first FHI Achievements Report, covering November 2005 to July 2007, is published.[18]
2007 August 24 Publication Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published.[32][33]
2007 November Website Practical Ethics in the News (at www.practicalethicsnews.com) launches.[31] (I think this is the same as Practical Ethics mentioned below.) At some point the site begins redirecting to http://blog.practicalethics.ox.ac.uk/ but as of March 2018 the site is "temporarily offline for maintenance" (for several years now).
2008 Website Practical Ethics, a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.[34]
2008 Publication "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published.[31] This is a featured FHI publication.[12]
2008 January 22 Website The domain name for the Global Catastrophic Risks website, global-catastrophic-risks.com, is registered.[35] The first snapshot on the Internet Archive would be on May 5, 2008.[36]
2008 September 15 Publication Global Catastrophic Risks is published.[37][33]
2009 Publication "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published.[31] This is a featured FHI publication.[12]
2009 January 1 Publication On Overcoming Bias (at the time a group blog), Nick Bostrom publishes a blog post proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper seems never to have been published. The paper title might be "Fundamental Moral Uncertainty".[10][38] Despite the idea not being published in full, it is often referenced in discussions.[39][40][41]
2009 January 22 Publication Human Enhancement is published.[42][33][31]
2009 February Website LessWrong, the group blog about rationality, launches.[43] The blog is sponsored and endorsed by FHI, although FHI's written contributions to it seem to be minimal.[31][44]
2009 March 6 Social media The FHI YouTube account, FHIOxford, is created.[45]
2009 June 19 Publication "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" by Nick Bostrom and Anders Sandberg is published in the journal Science and Engineering Ethics.[46][12] By 2011, this would be the "overwhelmingly most cited article" from FHI.[44]
2009 September Internal review The FHI Annual Report, covering the period October 1, 2008 to September 30, 2009, is probably published during this month. (The report does not have a date.)[31]
2010 Internal review The FHI Achievements Report, covering the years 2008 to 2010, is probably published during this year. (The report does not have a date so it is unclear when it was published.)[10]
2010 June 21 Publication Anthropic Bias by Nick Bostrom is published. The book covers the topic of reasoning under observation selection effects.[47][33]
2010 June Staff Eric Mandelbaum joins FHI as a Postdoctoral Research Fellow. He would remain at FHI until July 2011.[48]
2011 January 14–17 The Winter Intelligence Conference, organized by FHI, takes place. The conference brings together experts and students in philosophy, cognitive science, and artificial intelligence for discussions about intelligence.[49][50][51][52]
2011 March 18 Publication Enhancing Human Capacities is published.[53][54]
2011 June 9 Outside review On a comment thread on LessWrong, a discussion takes place regarding FHI's funding needs, productivity of marginal hires, dispersion of research topics (i.e. lack of focus on existential risks), and other topics related to funding FHI.[55]
2011 September Organization The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.[56] The Programme is directed by Nick Bostrom and works closely with FHI, among other organizations.
2011 September Staff Stuart Armstrong joins FHI as Research Fellow.[57]
2011 September 25 Outside review Kaj Sotala posts "SIAI vs. FHI achievements, 2008–2010" on LessWrong, comparing the outputs of FHI and the Machine Intelligence Research Institute (which used to be called the Singularity Institute for Artificial Intelligence, abbreviated SIAI).[44]
2012 Staff Daniel Dewey joins FHI as a Research Fellow.[58]
2012 June 6 Publication The technical report "Indefinite survival through backup copies" by Anders Sandberg and Stuart Armstrong is published. The paper shows that if an individual entity copies itself so that the number of copies grows logarithmically with time, it will have a nonzero probability of ultimate survival (a back-of-the-envelope version of this argument appears after the timeline).[59] This report used to be a featured FHI publication.[60]
2012 August 15 Website The first Internet Archive snapshot of the Winter Intelligence Conference website is from this day.[61]
2012 September 5 Social media The FHI Twitter account, @FHIOxford, is registered.[62]
2012 November 16 Outside review John Maxwell IV posts "Room for more funding at the Future of Humanity Institute" on LessWrong.[63]
2012 December 10–11 FHI hosts the 2012 conference on Impacts and Risks of Artificial General Intelligence. This is one of the two conferences that make up the Winter Intelligence Multi-Conference 2012, also hosted by FHI.[64][65]
2013 Staff Carl Frey and Vincent Müller join FHI as Research Fellows sometime around this year.[66]
2013 February Publication "Existential Risk Prevention as Global Priority" by Nick Bostrom is published in Global Policy.[67] This is a featured FHI publication.[12]
2013 February 25 Outside review "Omens When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?" is published in the digital magazine Aeon. The piece covers FHI, existential risk, Nick Bostrom, and some of his ideas.[68][69][70]
2013 March 12 Publication "Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox" by Stuart Armstrong and Anders Sandberg is published.[71] This paper is a featured FHI publication in 2014.[72]
2013 May 30 A collaboration between FHI and the insurance company Amlin is announced. The collaboration is for research into systemic risks.[73][74][75]
2013 June Staff Nick Beckstead joins FHI as a Research Fellow. He would remain at FHI until November 2014.[76]
2013 September 17 Publication "The Future of Employment: How Susceptible are Jobs to Computerisation?" by Carl Benedikt Frey and Michael A. Osborne is published.[77] This is a featured FHI publication.[12]
2013 November Workshop FHI hosts a week-long math workshop led by the Machine Intelligence Research Institute.[78]
2013 December 27 Outside review Chris Hallquist posts "Donating to MIRI vs. FHI vs. CEA vs. CFAR" on LessWrong about the relative merits of donating to the listed organizations. The discussion thread includes a comment from Seán Ó hÉigeartaigh about the funding needs of FHI.[79]
2014 The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.[80] GPP would also eventually become a collaboration between Centre for Effective Altruism and FHI.[81]
2014 Publication "Managing existential risk from emerging technologies" by Nick Beckstead and Toby Ord is published in the report "Innovation: Managing Risk, Not Avoiding It. Evidence and Case Studies."[82] This is a featured FHI publication.[12]
2014 Staff Toby Ord joins FHI as Research Fellow.[83]
2014 Staff John Cusbert joins FHI as a Research Fellow, for work on the Population Ethics: Theory and Practice project.[84]
2014–2017 Staff Hilary Greaves joins as principal investigator for Population Ethics: Theory and Practice (organized by FHI).[85]
2014 February 4 Workshop FHI hosts a workshop on agent based modelling.[86]
2014 February 11–12 Conference FHI announces the FHI–Amlin conference on systemic risk.[87][88]
2014 May 12 Social media FHI researchers Anders Sandberg and Andrew Snyder-Beattie do an AMA ("ask me anything") on Reddit.[89][90]
2014 July Workshop FHI hosts a MIRIx Workshop in collaboration with the Machine Intelligence Research Institute "to develop the technical agenda for AI safety".[91]
2014 July–September Publication Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.[92] In March 2017, the Open Philanthropy Project considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."[93]
2014 September Publication The policy brief "Unprecedented Technological Risks" by Nick Beckstead et al. is published.[94] This is a featured FHI publication.[12]
2014 September 24 Social media Nick Bostrom does an AMA ("ask me anything") on Reddit.[95]
2014 September 26 Outside review Daniel Dewey (who is a Research Fellow for FHI at the time)[96] posts "The Future of Humanity Institute could make use of your money" on LessWrong. The post results in some discussion about donating to FHI in the comments section.
2014 October 1 Financial FHI posts a note of thanks to the Investling Group for a "recent financial contribution". The amount is not listed.[97]
2014 October 28 Website The domain name for the Population Ethics: Theory and Practice website, populationethics.org, is registered.[98] The first Internet Archive snapshot of the website would be on December 23, 2014. The "project is organised by the Future of Humanity Institute and supported by the Leverhulme Trust."[99]
2015 The Strategic AI Research Center starts some time after this period.[100]
2015 Publication "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans at FHI.[101][102] This is a featured FHI publication.[12]
2015 Publication "Corrigibility" by Soares et al. is published. One of the authors, Stuart Armstrong, is affiliated with FHI. This is a featured FHI publication.[12]
2015 Staff Owain Evans joins FHI as a postdoctoral research scientist.[103]
2015 Staff Ben Levinstein joins FHI as a Research Fellow. He would stay at FHI until 2016.[104]
2015 Staff Feng Zhou joins FHI as a Research Fellow, for work on the FHI–Amlin collaboration on systemic risk.[105]
2015 Staff Simon Beard joins FHI as a Research Fellow in philosophy, for work on Population Ethics: Theory and Practice. He would remain at FHI until 2016.[106]
2015 January 2–5 Conference The Future of AI: Opportunities and Challenges, an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute, but speakers include Nick Bostrom, the director of FHI.[107] Nate Soares (executive director of Machine Intelligence Research Institute) would later call this the "turning point" of when top academics begin to focus on AI risk.[108]
2015 January 8 Internal review FHI publishes a one-paragraph review of its work in 2014.[109]
2015 July 1 Financial The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.[110][111][112] One of the grantees is Nick Bostrom, the director of FHI, who receives a grant of $1,500,000 for the creation of a new research center focused on AI safety.[113] Another grantee is Owain Evans of FHI, who receives a grant of $227,212 for his project on inferring human values.[114]
2015 July 30 Outside review A post critiquing the lack of intuitive explanation of existential risks on the FHI website (among other places) is posted on LessWrong.[115]
2015 September 1 Financial FHI announces that Nick Bostrom has been awarded a €2 million (about $2,247,200 at the time)[116] European Research Council Advanced Grant.[117]
2015 September 15 Social media Anders Sandberg does an AMA ("ask me anything") on Reddit.[118]
2015 October Publication "Moral Trade" by Toby Ord is published in the journal Ethics.[119] This is a featured FHI publication.[12]
2015 November 23 Outside review A piece in The New Yorker featuring Nick Bostrom and FHI is published.[120]
2016 Publication Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.[121][102] This is a featured FHI publication.[12]
2016 Publication "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.[122][102] This is a featured FHI publication.[12]
2016 The Global Politics of AI Research Group is established by Carrick Flynn and Allan Dafoe (both of whom are affiliated with FHI). The group "consists of eleven research members [and] more than thirty volunteers" and "has the mission of helping researchers and political actors to adopt the best possible strategy around the development of AI."[123] (It's not clear where the group is based or if it even meets physically.)
2016 Publication "Thompson Sampling is Asymptotically Optimal in General Environments" by Leike et al. is published.[124] This is a featured FHI publication.[12]
2016 Staff Owen Cotton-Barratt joins FHI as a Research Fellow.[125]
2016 Staff Eric Drexler becomes an Oxford Martin Senior Fellow at FHI, and later a Senior Research Fellow. Previously, he was an Academic Visitor and then an Academic Advisor.[125][126]
2016 Staff Jan Leike joins FHI as a Research Fellow.[126]
2016 Staff Miles Brundage joins FHI as a Research Fellow.[126]
2016 January 26 Publication "The Unilateralist's Curse and the Case for a Principle of Conformity" by Nick Bostrom, Thomas Douglas, and Anders Sandberg is published in the journal Social Epistemology.[127] This is a featured FHI publication.[12]
2016 February 8–9 Workshop The Global Priorities Project (a collaboration between FHI and the Centre for Effective Altruism) hosts a policy workshop on existential risk. Attendees included "twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden".[128][123]
2016 May Publication The Global Priorities Project (associated with FHI) releases the Global Catastrophic Report 2016.[129]
2016 May Workshop FHI hosts a week-long workshop in Oxford called "The Control Problem in AI", attended by ten members of Machine Intelligence Research Institute.[123]
2016 May 27 – June 17 Workshop The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the Machine Intelligence Research Institute and FHI, takes place. The program brings "together a variety of academics and professionals to address the technical challenges associated with AI robustness and reliability, with a goal of facilitating conversations between people interested in a number of different approaches." At the program, Jan Leike and Stuart Armstrong of FHI each give a talk.[130][131]
2016 June (approximate) FHI recruits William MacAskill and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.[129] (It seems like this became the Global Priorities Institute.)
2016 June Publication The Age of Em: Work, Love and Life When Robots Rule the Earth, a book about the implications of whole brain emulation by FHI research associate Robin Hanson, is published.[132] In October, FHI and Hanson would organize a workshop about the book.[123][133]
2016 June 1 Publication The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. The paper is a collaboration between Google DeepMind and FHI, and one of the paper's authors is Stuart Armstrong of FHI.[134][102] The paper is also presented at the Conference on Uncertainty in Artificial Intelligence (UAI).[129][135] This is a featured FHI publication.[12]
2016 August Staff Piers Millett joins FHI as Senior Research Fellow.[136][137]
2016 September Financial The Open Philanthropy Project recommends (to Good Ventures?) a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.[138]
2016 September (approximate) Financial FHI receives a funding offer from Luke Ding to fund Hilary Greaves for four years starting mid-2017 (in case a proposed new institute is unable to raise academic funds for her) and William MacAskill's full salary for five years.[139]
2016 September 16 Publication Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.[140][102][123][141]
2016 September 22 FHI's page on its collaboration with Google DeepMind is published. However, it is unclear when the actual collaboration began.[142]
2016 November Workshop The biotech horizon scanning workshop, co-hosted by the Centre for the Study of Existential Risk and FHI, takes place. The workshop and the overall "biological engineering horizon scanning" process is intended to lead up to "a peer-reviewed publication highlighting 15–20 developments of greatest likely impact."[123][143]
2016 December Workshop FHI hosts a workshop on "AI Safety and Blockchain". Attendees include Nick Bostrom, Vitalik Buterin, Jaan Tallinn, Wei Dai, Gwern Branwen, and Allan Dafoe. "The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination."[144][123] It is unclear whether any output resulted from this workshop.
2017 Publication Slides for an upcoming paper by FHI researchers Anders Sandberg, Eric Drexler, and Toby Ord, "Dissolving the Fermi Paradox", are posted.[145][146]
2017 Publication The report "Existential Risk: Diplomacy and Governance" is published. "This work began at the Global Priorities Project, whose policy work has now joined FHI."[147] The report gives an overview of existential risks and presents three recommendations for ways to reduce existential risks (chosen out of more than 100 proposals): (1) developing governance of geoengineering research; (2) establishing scenario plans and exercises for severe engineered pandemics at the international level; and (3) building international attention and support for existential risk reduction.[148]
2017 January 15 Publication "Agent-Agnostic Human-in-the-Loop Reinforcement Learning" is uploaded to the arXiv.[149][147]
2017 January 25 Publication The FHI Annual Review 2016 is published.[123]
2017 February 9 Publication Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal Global Policy.[150][102][147] The paper "covers a breadth of areas including long-term AI development, singleton versus multipolar scenarios, race dynamics, responsible AI development, and identification of possible failure modes."[123] This is a featured FHI publication.[12]
2017 February 10 Workshop FHI hosts a workshop on normative uncertainty (i.e. uncertainty regarding moral frameworks).[151]
2017 February 19–20 Workshop FHI hosts a workshop on potential risks from malicious use of artificial intelligence. The workshop is organized by FHI, the Centre for the Study of Existential Risk, and the Centre for the Future of Intelligence.[152]
2017 March Financial The Open Philanthropy Project recommends (to Good Ventures?) a grant of $1,995,425 to FHI for general support.[93]
2017 April 26 Publication The online book Modeling Agents with Probabilistic Programs by Owain Evans (FHI Research Fellow), Andreas Stuhlmüller, John Salvatier (FHI intern), and Daniel Filan (FHI intern) is published. The book is available at https://agentmodels.org/.[153][154] Work on the book began in the spring of 2016. The main motivations for writing the book are: (1) to popularize inverse reinforcement learning (IRL) to a broader audience than machine learning researchers; and (2) "to give a detailed explanation of the authors' approach to IRL to the existing AI Safety and AI/ML communities."[114] (A toy illustration of the preference-inference problem IRL addresses appears after the timeline.)
2017 April 27 Publication "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.[155][156][157]
2017 May FHI announces that it will be joining the Partnership on AI.[158]
2017 May 24 Publication "When Will AI Exceed Human Performance? Evidence from AI Experts" is published on the arXiv. Three of the authors of this paper are affiliated with FHI: Katja Grace, Allan Dafoe, and Owain Evans.[159]
2017 July Financial The Open Philanthropy Project recommends (to Good Ventures?) a grant of $299,320 to Yale University "to support research on the global politics of advanced artificial intelligence". The work will be led by Allan Dafoe, who will conduct part of the work at FHI.[160]
2017 July 17 Publication "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.[161][156]
2017 August 25 Publication FHI announces three new forthcoming papers in the latest issue of Health Security.[162][163]
2017 September 27 Carrick Flynn, a research project manager at FHI,[164] posts his thoughts on AI policy and strategy on the Effective Altruism Forum. Although he only writes in a personal capacity in the post, it is informed by his experience at FHI.[165]
2017 September 29 Financial Effective Altruism Grants fall 2017 recipients are announced. One of the recipients is Gregory Lewis, who will use the grant for "Research into biological risk mitigation with the Future of Humanity Institute." The grant amount for Lewis is £15,000 (about $20,000).[166]
2017 October–December FHI launches its Governance of AI Program, co-directed by Nick Bostrom and Allan Dafoe.[167]
2018 February 20 Publication The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.[168][169][170]
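The June 2012 entry above on "Indefinite survival through backup copies" states that logarithmic growth in the number of copies is enough for a nonzero probability of indefinite survival. A back-of-the-envelope calculation, given here as a simplified illustration rather than the paper's exact model, shows why. Assume each copy is independently destroyed with probability p in any given period, that surviving copies can recreate destroyed ones, and that at least c ln t copies are maintained at period t. Permanent loss then requires every copy to fail within a single period, so by a union bound

\Pr[\text{ever wiped out}] \;\le\; \sum_{t \ge 2} p^{\,c \ln t} \;=\; \sum_{t \ge 2} t^{-c \ln(1/p)},

which converges whenever c > 1/\ln(1/p). Making c large enough, or starting the process with enough copies, pushes this bound below 1, leaving a strictly positive probability of surviving forever even though the number of copies grows only logarithmically.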
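The April 2017 entry above on Modeling Agents with Probabilistic Programs mentions inverse reinforcement learning (IRL), i.e. inferring an agent's preferences from its observed behaviour. The book's interactive examples are written in the WebPPL probabilistic programming language; the following self-contained Python sketch is not code from the book, only a toy version of the underlying inference problem, with made-up options, observations, and numbers.

import math

# Toy preference inference: an agent repeatedly chooses between two options and is
# assumed to be approximately rational (softmax choice). We infer a posterior over
# how much it values the "vegetarian" option relative to the "steakhouse" option.
options = ["vegetarian", "steakhouse"]
observed_choices = ["vegetarian", "vegetarian", "steakhouse", "vegetarian"]

def choice_probability(choice, vegetarian_utility, temperature=1.0):
    # The steakhouse utility is fixed at 0; the vegetarian utility is the unknown.
    utilities = {"vegetarian": vegetarian_utility, "steakhouse": 0.0}
    weights = {option: math.exp(utilities[option] / temperature) for option in options}
    return weights[choice] / sum(weights.values())

# Discrete uniform prior over candidate utility values, updated by enumeration.
candidates = [-2.0, -1.0, 0.0, 1.0, 2.0]
posterior = {}
for utility in candidates:
    likelihood = 1.0
    for choice in observed_choices:
        likelihood *= choice_probability(choice, utility)
    posterior[utility] = likelihood / len(candidates)

normaliser = sum(posterior.values())
posterior = {utility: weight / normaliser for utility, weight in posterior.items()}
print(posterior)  # most of the posterior mass falls on positive utilities

Scaling this kind of enumeration up to sequential decision problems, and to agents with biases or bounded rationality, is roughly what the probabilistic-programming approach described in the book is meant to handle.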

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

See the commit history on GitHub for a more detailed revision history.

Funding information for this timeline is available.

What the timeline is still missing

* when did FHI start doing more ML-based stuff? was it after it hired Owain Evans?
* https://web.archive.org/web/20110908095411/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0003/19902/Final_Complete_FHI_Report.pdf (see pages 76-77)

Timeline update strategy

See also

External links

References

  1. "Showing results for: anthropic-principle.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 1998-08-30T04:00:00Z 
  2. "anthropic-principle.com". Archived from the original on January 25, 1999. Retrieved March 11, 2018. 
  3. "Showing results for: future-studies.com". ICANN WHOIS. Retrieved March 15, 2018. Creation Date: 1998-08-30T04:00:00Z 
  4. "Future Studies". Archived from the original on October 12, 1999. Retrieved March 15, 2018. 
  5. "Showing results for: ANALYTIC.ORG". ICANN WHOIS. Retrieved March 15, 2018. Creation Date: 1998-12-14T05:00:00Z 
  6. "Nick Bostrom's thinking in analytic philosophy". Archived from the original on November 28, 1999. Retrieved March 15, 2018. 
  7. "Nick Bostrom's thinking in analytic philosophy". Retrieved March 15, 2018. 
  8. "Showing results for: simulation-argument.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2001-10-31T08:55:28Z 
  9. "simulation-argument.com". Internet Archive. Retrieved March 10, 2018. 
  10. "Wayback Machine" (PDF). Archived from the original (PDF) on May 16, 2011. Retrieved March 11, 2018. 
  11. Bostrom, Nick. "Astronomical Waste: The Opportunity Cost of Delayed Technological Development" (PDF). Retrieved March 14, 2018. 
  12. Future of Humanity Institute - FHI. "Selected Publications Archive - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 14, 2018. 
  13. "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018. 
  14. "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018. 
  15. "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018. 
  16. Tegmark, Max; Bostrom, Nick. "How Unlikely is a Doomsday Catastrophe?" (PDF). Retrieved March 14, 2018. 
  17. Bostrom, Nick. "What is a Singleton?". Retrieved March 11, 2018. 
  18. "Wayback Machine" (PDF). Archived from the original (PDF) on January 17, 2009. Retrieved February 7, 2018. 
  19. "Rebecca Roache" (PDF). Archived from the original (PDF) on July 4, 2007. Retrieved March 16, 2018. 
  20. "Anders Sandberg". LinkedIn. Retrieved March 15, 2018. 
  21. Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018. 
  22. "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Retrieved March 11, 2018. 
  23. "Showing results for: EXISTENTIAL-RISK.ORG". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2006-07-19T23:23:38Z 
  24. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  25. "Overcoming Bias: How To Join". Retrieved September 26, 2017. 
  26. "Overcoming Bias". Retrieved September 26, 2017. 
  27. "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018. 
  28. "Rafaela Hillerbrand". LinkedIn. Retrieved March 15, 2018. 
  29. "Future of Humanity Institute: Revision history - Wikipedia". English Wikipedia. Retrieved March 14, 2018. 
  30. "FHI Staff". Archived from the original on January 16, 2007. Retrieved March 16, 2018. 
  31. "Wayback Machine" (PDF). Archived from the original (PDF) on April 13, 2012. Retrieved March 11, 2018. 
  32. "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018. 
  33. "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018. 
  34. "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018. 
  35. "Showing results for: global-catastrophic-risks.com". ICANN WHOIS. Retrieved March 11, 2018. Creation Date: 2008-01-22T20:47:11Z 
  36. "global-catastrophic-risks.com". Retrieved March 10, 2018. 
  37. "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018. 
  38. "Overcoming Bias : Moral uncertainty – towards a solution?". Retrieved March 10, 2018. 
  39. Dai, Wei (October 21, 2014). "Is the potential astronomical waste in our universe too small to care about?". LessWrong. Retrieved March 15, 2018. 
  40. Bostrom, Nick (September 24, 2014). "Science AMA Series: I'm Nick Bostrom, Director of the Future of Humanity Institute, and author of "Superintelligence: Paths, Dangers, Strategies", AMA • r/science". reddit. Retrieved March 15, 2018. 
  41. Shulman, Carl (August 21, 2014). "Population ethics and inaccessible populations". Reflective Disequilibrium. Retrieved March 16, 2018. Some approaches, such as Nick Bostrom and Toby Ord's Parliamentary Model, consider what would happen if each normative option had resources to deploy on its own (related to its plausibility or appeal), and look for Pareto-improvements. 
  42. "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018. 
  43. "FAQ - Lesswrongwiki". LessWrong. Retrieved June 1, 2017. 
  44. "SIAI vs. FHI achievements, 2008-2010 - Less Wrong". LessWrong. Retrieved March 14, 2018. 
  45. "FHIOxford - YouTube". YouTube. Retrieved March 15, 2018. 
  46. Bostrom, Nick; Sandberg, Anders (2009). "Cognitive Enhancement: Methods, Ethics, Regulatory Challenges" (PDF). Retrieved March 15, 2018. 
  47. "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018. 
  48. "Eric Mandelbaum". Archived from the original on March 16, 2018. Retrieved March 16, 2018. 
  49. "Winter Intelligence" (PDF). Archived from the original (PDF) on July 11, 2011. Retrieved March 15, 2018. 
  50. "Future of Humanity Institute - Winter Intelligence Conference". Archived from the original on January 16, 2013. Retrieved March 15, 2018. 
  51. Future of Humanity Institute - FHI (November 8, 2017). "Winter Intelligence Conference 2011 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 16, 2018. 