Timeline of Future of Humanity Institute

{| class="sortable wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 2008 || {{Dts|September 15}} || Publication || ''[[w:Global Catastrophic Risks (book)|Global Catastrophic Risks]]'' is published.<ref>{{cite web |url=https://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503 |title=Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2009 || {{dts|January 1}} || || Nick Bostrom publishes a post on ''Overcoming Bias'' (at the time a group blog) proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper does not appear to have been published.<ref>{{cite web |url=http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html |title=Overcoming Bias : Moral uncertainty – towards a solution? |accessdate=March 10, 2018}}</ref>
|-
| 2009 || {{dts|January 22}} || || ''Human Enhancement'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Human-Enhancement-Julian-Savulescu/dp/0199299722/ |title=Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2011 || {{dts|March 18}} || || ''Enhancing Human Capacities'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
|-
| 2014 || {{dts|July}}–September || Influence || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published. Although Bostrom has never worked for MIRI, he is a research advisor to the organization, and MIRI contributed substantially to the publication of the book.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |accessdate=July 6, 2017 |date=September 17, 2015}}</ref> In March 2017, the {{W|Open Philanthropy Project}} considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."<ref name="open-phil-grant-march-2017" />
|-
| 2015 || || || The Strategic AI Research Center starts around this time.<ref>{{cite web |url=https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/qa-philosopher-nick-bostrom-on-superintelligence-human-enhancement-and-existential-risk/?utm_term=.1dd45715e8bd |publisher=[[wikipedia:The Washington Post|The Washington Post]] |title=Opinion {{!}} Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk |accessdate=February 8, 2018}}</ref>
|-
| 2015 || || || "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/nips-workshop-2015-website.pdf |title=Learning the Preferences of Bounded Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || || || Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf |title=Off-policy Monte Carlo agents with variable behaviour policies |first=Stuart |last=Armstrong |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || || || "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://stuhlmueller.org/papers/preferences-aaai2016.pdf |title=Learning the Preferences of Ignorant, Inconsistent Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || {{dts|June 1}} || || The paper "Safely interruptible agents" is announced on the {{W|Machine Intelligence Research Institute}} blog. One of the paper's authors is Stuart Armstrong of FHI.<ref>{{cite web |url=https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/ |title=New paper: "Safely interruptible agents" - Machine Intelligence Research Institute |publisher=[[Machine Intelligence Research Institute]] |date=September 12, 2016 |first=Rob |last=Bensinger |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review">{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
|-
| 2016 || {{dts|September}} || || The {{W|Open Philanthropy Project}} recommends a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-biosecurity-and-pandemic-preparedness |publisher=Open Philanthropy Project |title=Future of Humanity Institute — Biosecurity and Pandemic Preparedness |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2016 || {{dts|September 16}} || || Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1609.04994 |title=[1609.04994] Exploration Potential |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2017 || {{dts|February 9}} || || Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal ''{{W|Global Policy}}''.<ref>{{cite web |url=http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/abstract |title=Strategic Implications of Openness in AI Development |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2017 || {{dts|March}} || || The {{W|Open Philanthropy Project}} recommends a grant of $1,995,425 to FHI for general support.<ref name="open-phil-grant-march-2017">{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support |publisher=Open Philanthropy Project |title=Future of Humanity Institute — General Support |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2017 || {{dts|April 27}} || || "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1705.03394 |title=[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review" />
|-
| 2017 || {{Dts|July 17}} || || "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1707.05173 |title=[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review">{{cite web |url=http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ |title=2018 AI Safety Literature Review and Charity Comparison |author=Larks |publisher=Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations on how to mitigate these risks. It is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
|}

==External links==
* {{W|Future of Humanity Institute}} (Wikipedia)
* [https://wiki.lesswrong.com/wiki/Future_of_Humanity_Institute LessWrong Wiki]
* [https://donations.vipulnaik.com/donee.php?donee=Future+of+Humanity+Institute Donations List Website (donee)]
* [https://aiwatch.issarice.com/?organization=Future+of+Humanity+Institute AI Watch]
==References==
{{Reflist|30em}}
