Timeline of Future of Humanity Institute

{| class="sortable wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 1973 || {{dts|March 10}} || || {{W|Nick Bostrom}} is born.
|-
| 2005 || {{dts|June 1}} or {{dts|November 29}} || || The Future of Humanity Institute is established.<ref>{{cite web |url=https://www.oxfordmartin.ox.ac.uk/research/programmes/future-humanity/ |publisher=Oxford Martin School |title=About {{!}} Future of Humanity Institute {{!}} Programmes |accessdate=February 7, 2018}}</ref><ref>{{cite web |url=http://fhi.ox.ac.uk/ |title=Future of Humanity Institute |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20051013060521/fhi.ox.ac.uk/ |archivedate=October 13, 2005 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/Papers/FHI%20Newsletter%201%20-%20April%20200611.pdf |title=Wayback Machine |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20060512085807/http://www.fhi.ox.ac.uk:80/Papers/FHI%20Newsletter%201%20-%20April%20200611.pdf |archivedate=May 12, 2006 |dead-url=yes}}</ref>
|-
| 2006 || || || "What is a Singleton?" by Nick Bostrom is published in the journal ''{{W|Linguistic and Philosophical Investigations}}''. The paper introduces the idea of a [[wikipedia:Singleton (global governance)|singleton]], a hypothetical "world order in which there is a single decision-making agency at the highest level".<ref>{{cite web |url=https://nickbostrom.com/fut/singleton.html |title=What is a Singleton? |first=Nick |last=Bostrom |accessdate=March 11, 2018}}</ref>
|-
| 2006 || {{dts|March 2}} || || The ENHANCE project website is created<ref>{{cite web |url=http://www.enhanceproject.org:80/ |author=Anders Sandberg |title=ENHANCE Project Site |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20060406192957/http://www.enhanceproject.org:80/ |archivedate=April 6, 2006 |dead-url=yes}}</ref> by Anders Sandberg.<ref name="fhi-report" />
|-
| 2006 || {{dts|July}} || || "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" by Nick Bostrom and Toby Ord is published.<ref>{{cite web |url=https://nickbostrom.com/ethics/statusquo.pdf |title=The Reversal Test: Eliminating Status Quo Bias in Applied Ethics |accessdate=March 11, 2018}}</ref> The paper introduces the [[wikipedia:Reversal test|reversal test]] in the context of bioethics of human enhancement.
|-
| 2006 || {{dts|November 20}} || || [[wikipedia:Robin Hanson|Robin Hanson]] starts ''[[wikipedia:Overcoming Bias|Overcoming Bias]]''.<ref>{{cite web |url=http://www.overcomingbias.com/bio |title=Overcoming Bias : Bio |accessdate=June 1, 2017}}</ref> The first post on the blog seems to be from November 20.<ref>{{cite web |url=https://web.archive.org/web/20070119013818/http://robinhanson.typepad.com:80/overcomingbias/2006/11/introduction.html |title=Overcoming Bias: How To Join |accessdate=September 26, 2017}}</ref> On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.<ref>{{cite web |url=https://web.archive.org/web/20061207103140/http://overcomingbias.com/ |title=Overcoming Bias |accessdate=September 26, 2017}}</ref> The blog seems to have received support from FHI in the beginning.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/updates.html |title=FHI Updates |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20070705000635/http://www.fhi.ox.ac.uk:80/updates.html#blog |archivedate=July 5, 2007 |dead-url=yes}}</ref><ref name="fhi-report">{{cite web |url=http://www.fhi.ox.ac.uk:80/newsletters/Final%20Complete%20FHI%20Report.pdf |title=Wayback Machine |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20090117141825/http://www.fhi.ox.ac.uk:80/newsletters/Final%20Complete%20FHI%20Report.pdf |archivedate=January 17, 2009 |dead-url=yes}}</ref>
|-
| 2005–2007 || || || Lighthill Risk Network is created by Peter Taylor of FHI.<ref name="fhi-report" />
|-
| 2007 || {{dts|May}} || || The Whole Brain Emulation Workshop is hosted by FHI.<ref name="fhi-report" /> The workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008.<ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2007 || {{dts|August 24}} || || ''Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Wittgenstein-His-Interpreters-Essays-Memory/dp/1405129220/ |title=Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books">{{cite web |url=http://www.fhi.ox.ac.uk:80/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20101103223749/http://www.fhi.ox.ac.uk:80/selected_outputs/recent_books |archivedate=November 3, 2010 |dead-url=yes}}</ref>
|-
| 2007 || {{dts|November}} || || ''Practical Ethics in the News'' (at <code>www.practicalethicsnews.com</code>) launches.<ref name="annual-report-oct-2008-to-sep-2009">{{cite web |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/19901/FHI_Annual_Report.pdf |title=Wayback Machine |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20120413031223/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0020/19901/FHI_Annual_Report.pdf |archivedate=April 13, 2012 |dead-url=yes}}</ref> (This appears to be the same blog as ''Practical Ethics'', mentioned in the 2008 row below.) At some point the site begins redirecting to http://blog.practicalethics.ox.ac.uk/, but as of March 2018 that site has displayed a "temporarily offline for maintenance" notice for several years.
|-
| 2008 || || || ''Practical Ethics'', a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/updates.html |title=Future of Humanity Institute Updates |accessdate=February 7, 2018 |archiveurl=https://web.archive.org/web/20080915151519/http://www.fhi.ox.ac.uk:80/updates.html |archivedate=September 15, 2008 |dead-url=yes}}</ref>
|-
| 2008 || || || "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published.<ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2008 || {{Dts|September 15}} || Publication || ''[[w:Global Catastrophic Risks (book)|Global Catastrophic Risks]]'' is published.<ref>{{cite web |url=https://www.amazon.com/Global-Catastrophic-Risks-Martin-Rees/dp/0198570503 |title=Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2009 || || || "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published.<ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2009 || {{dts|January 1}} || || On ''Overcoming Bias'' (at the time a group blog), Nick Bostrom publishes a post proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper appears never to have been published.<ref>{{cite web |url=http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html |title=Overcoming Bias : Moral uncertainty – towards a solution? |accessdate=March 10, 2018}}</ref>
|-
| 2009 || {{dts|January 22}} || || ''Human Enhancement'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Human-Enhancement-Julian-Savulescu/dp/0199299722/ |title=Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" /><ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2009 || {{dts|February}} || Project || ''{{W|LessWrong}}'', the group blog about rationality, launches.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/FAQ#Where_did_Less_Wrong_come_from.3F |title=FAQ - Lesswrongwiki |accessdate=June 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> The blog is sponsored by FHI,<ref name="annual-report-oct-2008-to-sep-2009" /> although it is unclear to what extent FHI was involved in its creation.<ref>{{cite web |url=http://lesswrong.com/lw/7sc/siai_vs_fhi_achievements_20082010/ |title=SIAI vs. FHI achievements, 2008-2010 - Less Wrong |accessdate=March 11, 2018 |quote=However, since LW is to such a huge extent Eliezer's creation, and I'm not sure of what exactly the FHI contribution to LW ''is'', I'm counting it as an SIAI and not a joint achievement. |publisher=[[LessWrong]]}}</ref>
|-
| 2010 || {{dts|June 21}} || || ''Anthropic Bias'' by Nick Bostrom is reissued in paperback (the book was originally published in 2002). It covers reasoning under observation selection effects.<ref>{{cite web |url=https://www.amazon.co.uk/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946/ |title=Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2011 || {{dts|March 18}} || || ''Enhancing Human Capacities'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
|-
| 2014 || || || The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.<ref>{{cite web |url=http://globalprioritiesproject.org/wp-content/uploads/2015/03/GPP-Strategy-Overview-February-2015.pdf |title=Global Priorities Project Strategy Overview |accessdate=March 10, 2018}}</ref> GPP would eventually become a collaboration between the Centre for Effective Altruism and FHI.<ref>{{cite web |url=http://globalprioritiesproject.org/ |publisher=The Global Priorities Project |title=HOME |accessdate=March 10, 2018}}</ref>
|-
| 2014 || {{dts|July}}–September || Influence || [[wikipedia:Nick Bostrom|Nick Bostrom]]'s book ''[[wikipedia:Superintelligence: Paths, Dangers, Strategies|Superintelligence: Paths, Dangers, Strategies]]'' is published.<ref name="shulman_miri_causal_influences">{{cite web |url=http://effective-altruism.com/ea/ns/my_cause_selection_michael_dickens/50b |title=Carl_Shulman comments on My Cause Selection: Michael Dickens |publisher=Effective Altruism Forum |accessdate=July 6, 2017 |date=September 17, 2015}}</ref> In March 2017, the {{W|Open Philanthropy Project}} considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."<ref name="open-phil-grant-march-2017" />
|-
| 2017 || {{dts|April 27}} || || "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1705.03394 |title=[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review" />
|-
| 2017 || {{dts|July}} || || The {{W|Open Philanthropy Project}} recommends (to Good Ventures?) a grant of $299,320 to Yale University "to support research on the global politics of advanced artificial intelligence". The research will be led by Allan Dafoe, who will conduct part of the work at FHI.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/yale-university-global-politics-of-ai-dafoe |publisher=Open Philanthropy Project |title=Yale University — Research on the Global Politics of AI |date=December 15, 2017 |accessdate=March 11, 2018}}</ref>
|-
| 2017 || {{Dts|July 17}} || || "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1707.05173 |title=[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention |accessdate=March 10, 2018}}</ref><ref name="larks-december-2017-review">{{cite web |url=http://effective-altruism.com/ea/1iu/2018_ai_safety_literature_review_and_charity/ |title=2018 AI Safety Literature Review and Charity Comparison |author=Larks |publisher=Effective Altruism Forum |accessdate=March 10, 2018}}</ref>
|-
| 2017 || {{dts|September 29}} || || Effective Altruism Grants fall 2017 recipients are announced. One of the recipients is Gregory Lewis, who will use the grant for "Research into biological risk mitigation with the Future of Humanity Institute." The grant amount for Lewis is £15,000 (about $20,000).<ref>{{cite web |url=https://docs.google.com/spreadsheets/d/1iBy–zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk/edit#gid=0 |title=EA Grants Fall 2017 Recipients |publisher=Google Docs |accessdate=March 11, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations on how to mitigate these risks. It is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
|}