Difference between revisions of "Timeline of Future of Humanity Institute"

 
|-
| 2008 || {{Dts|September 15}} || Publication || ''[[w:Global Catastrophic Risks (book)|Global Catastrophic Risks]]'' is published.<ref>{{cite web |url=https://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503 |title=Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2009 || {{dts|January 1}} || || On ''Overcoming Bias'' (at the time a group blog), Nick Bostrom publishes a post proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper appears never to have been published.<ref>{{cite web |url=http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html |title=Overcoming Bias : Moral uncertainty – towards a solution? |accessdate=March 10, 2018}}</ref>
 
|-
| 2009 || {{dts|January 22}} || || ''Human Enhancement'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Human-Enhancement-Julian-Savulescu/dp/0199299722/ |title=Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2011 || {{dts|March 18}} || || ''Enhancing Human Capacities'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
 
|-
| 2014 || {{dts|July}}–September || Influence || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |accessdate=July 6, 2017 |date=September 17, 2015}}</ref> In March 2017, the {{W|Open Philanthropy Project}} considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."<ref name="open-phil-grant-march-2017" />
 
|-
| 2015 || || || The Strategic AI Research Center is started at some point during or after 2015.<ref>{{cite web |url=https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/qa-philosopher-nick-bostrom-on-superintelligence-human-enhancement-and-existential-risk/?utm_term=.1dd45715e8bd |publisher=[[wikipedia:The Washington Post|The Washington Post]] |title=Opinion {{!}} Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk |accessdate=February 8, 2018}}</ref>
|-
| 2015 || || || "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf |title=Learning the Preferences of Bounded Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || || || Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf |title=Off-policy Monte Carlo agents with variable behaviour policies |first=Stuart |last=Armstrong |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || || || "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://stuhlmueller.org/papers/preferences-aaai2016.pdf |title=Learning the Preferences of Ignorant, Inconsistent Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || {{dts|June 1}} || || The paper "Safely interruptible agents" is announced on the {{W|Machine Intelligence Research Institute}} blog. One of the paper's authors is Stuart Armstrong of FHI.<ref>{{cite web |url=https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/ |title=New paper: "Safely interruptible agents" - Machine Intelligence Research Institute |publisher=[[Machine Intelligence Research Institute]] |date=September 12, 2016 |first=Rob |last=Bensinger |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review">{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
|-
| 2016 || {{dts|September}} || || The {{W|Open Philanthropy Project}} recommends (to Good Ventures?) a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-biosecurity-and-pandemic-preparedness |publisher=Open Philanthropy Project |title=Future of Humanity Institute — Biosecurity and Pandemic Preparedness |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2016 || {{dts|September 16}} || || Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1609.04994 |title=[1609.04994] Exploration Potential |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2017 || {{dts|February 9}} || || Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal ''{{W|Global Policy}}''.<ref>{{cite web |url=http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/abstract |title=Strategic Implications of Openness in AI Development |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2017 || {{dts|March}} || || The {{W|Open Philanthropy Project}} recommends (to Good Ventures?) a grant of $1,995,425 to FHI for general support.<ref name="open-phil-grant-march-2017">{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support |publisher=Open Philanthropy Project |title=Future of Humanity Institute — General Support |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2017 || {{dts|April 27}} || || "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1705.03394 |title=[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review" />
|-
| 2017 || {{Dts|July 17}} || || "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1707.05173 |title=[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review">{{cite web |url=http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ |title=2018 AI Safety Literature Review and Charity Comparison |author=Larks |publisher=Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
 
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations for mitigating these risks. The report is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
 
* {{W|Future of Humanity Institute}} (Wikipedia)
* [https://wiki.lesswrong.com/wiki/Future_of_Humanity_Institute LessWrong Wiki]
* [https://donations.vipulnaik.com/donee.php?donee=Future+of+Humanity+Institute Donations List Website (donee)]
* [https://aiwatch.issarice.com/?organization=Future+of+Humanity+Institute AI Watch]
  
 
==References==

{{Reflist|30em}}

Revision as of 15:59, 10 March 2018

This is a timeline of the Future of Humanity Institute (FHI).

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
2005 | June 1 or November 29 | | The Future of Humanity Institute is established.[1][2][3]
2006 | March 2 | | The ENHANCE project website is created[4] by Anders Sandberg.[5]
2006 | November 20 | | Robin Hanson starts Overcoming Bias.[6] The first post on the blog seems to be from November 20.[7] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[8] The blog seems to have received support from FHI in the beginning.[9][5]
2005–2007 | | | Lighthill Risk Network is created by Peter Taylor of FHI.[5]
2007 | May | | The Whole Brain Emulation Workshop is hosted by FHI.[5]
2007 | August 24 | | Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published.[10][11]
2008 | | | Practical Ethics, a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.[12]
2008 | September 15 | Publication | Global Catastrophic Risks is published.[13][11]
2009 | January 1 | | On Overcoming Bias (at the time a group blog), Nick Bostrom publishes a post proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper appears never to have been published.[14]
2009 | January 22 | | Human Enhancement is published.[15][11]
2010 | June 21 | | Anthropic Bias by Nick Bostrom is published. The book covers the topic of reasoning under observation selection effects.[16][11]
2011 | March 18 | | Enhancing Human Capacities is published.[17][18]
2014 | July–September | Influence | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.[19] In March 2017, the Open Philanthropy Project considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."[20]
2015 | | | The Strategic AI Research Center is started at some point during or after 2015.[21]
2015 "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans at FHI.[22][23]
2016 Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.[24][23]
2016 "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.[25][23]
2016 June 1 The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. One of the paper's authors is Stuart Armstrong of FHI.[26][23]
2016 September The Open Philanthropy Project recommends (to Good Ventures?) a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.[27]
2016 September 16 Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.[28][23]
2017 February 9 Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal Global Policy.[29][23]
2017 March The Open Philanthropy Project recommends (to Good Ventures?) a grant of $1,995,425 to FHI for general support.[20]
2017 April 27 "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.[30][31]
2017 July 17 "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.[32][31]
2018 | February 20 | Publication | The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations for mitigating these risks. The report is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.[33][34][35]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018. 
  2. "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018. 
  3. "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018. 
  4. Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018. 
  5. "Wayback Machine" (PDF). Archived from the original (PDF) on January 17, 2009. Retrieved February 7, 2018. 
  6. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  7. "Overcoming Bias: How To Join". Retrieved September 26, 2017. 
  8. "Overcoming Bias". Retrieved September 26, 2017. 
  9. "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018. 
  10. "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018. 
  11. "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018. 
  12. "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018. 
  13. "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018. 
  14. "Overcoming Bias : Moral uncertainty – towards a solution?". Retrieved March 10, 2018. 
  15. "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018. 
  16. "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018. 
  17. "Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books". Retrieved February 8, 2018. 
  18. "Future of Humanity Institute - Books". Archived from the original on January 16, 2013. Retrieved February 8, 2018. 
  19. "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017. 
  20. "Future of Humanity Institute — General Support". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018. 
  21. "Opinion | Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk". The Washington Post. Retrieved February 8, 2018. 
  22. "Learning the Preferences of Bounded Agents" (PDF). Retrieved March 10, 2018. 
  23. "2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum". Retrieved March 10, 2018. 
  24. Armstrong, Stuart. "Off-policy Monte Carlo agents with variable behaviour policies" (PDF). Retrieved March 10, 2018. 
  25. "Learning the Preferences of Ignorant, Inconsistent Agents" (PDF). Retrieved March 10, 2018. 
  26. Bensinger, Rob (September 12, 2016). "New paper: "Safely interruptible agents" - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 10, 2018. 
  27. "Future of Humanity Institute — Biosecurity and Pandemic Preparedness". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018. 
  28. "[1609.04994] Exploration Potential". Retrieved March 10, 2018. 
  29. "Strategic Implications of Openness in AI Development". Retrieved March 10, 2018. 
  30. "[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox". Retrieved March 10, 2018. 
  31. Larks. "2018 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved March 10, 2018. 
  32. "[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention". Retrieved March 10, 2018. 
  33. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018. 
  34. "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018. 
  35. Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018.