Timeline of Future of Humanity Institute

From Timelines

Revision as of 17:33, 10 March 2018

This is a timeline of the Future of Humanity Institute (FHI).

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
1973 | March 10 | | Nick Bostrom is born.
2005 | June 1 or November 29 | | The Future of Humanity Institute is established.[1][2][3]
2006 | | | "What is a Singleton?" by Nick Bostrom is published in the journal Linguistic and Philosophical Investigations. The paper introduces the idea of a singleton, a hypothetical "world order in which there is a single decision-making agency at the highest level".[4]
2006 | March 2 | | The ENHANCE project website is created[5] by Anders Sandberg.[6]
2006 | July | | "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" by Nick Bostrom and Toby Ord is published.[7] The paper introduces the reversal test in the context of the bioethics of human enhancement.
2006 | November 20 | | Robin Hanson starts Overcoming Bias.[8] The first post on the blog appears to be from November 20.[9] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[10] The blog appears to have received support from FHI early on.[11][6]
2005–2007 | | | Lighthill Risk Network is created by Peter Taylor of FHI.[6]
2007 | May | | The Whole Brain Emulation Workshop is hosted by FHI.[6] The workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008.[12]
2007 | August 24 | | Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published.[13][14]
2007 | November | | Practical Ethics in the News (at www.practicalethicsnews.com) launches.[12] This appears to be the same blog as Practical Ethics mentioned below. At some point the site began redirecting to http://blog.practicalethics.ox.ac.uk/, but as of March 2018 that site has been "temporarily offline for maintenance" for several years.
2008 | | | Practical Ethics, a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.[15]
2008 | | | "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published.[12]
2008 | September 15 | Publication | Global Catastrophic Risks is published.[16][14]
2009 | | | "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published.[12]
2009 | January 1 | | On Overcoming Bias (a group blog at the time), Nick Bostrom publishes a post proposing the Parliamentary Model for dealing with moral uncertainty. The post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper appears never to have been published.[17]
2009 | January 22 | | Human Enhancement is published.[18][14][12]
2009 | February | Project | LessWrong, the group blog about rationality, launches.[19] The blog is sponsored by FHI,[12] although it is unclear to what extent FHI was involved in its creation.[20]
2010 | June 21 | | Anthropic Bias by Nick Bostrom is published. The book covers reasoning under observation selection effects.[21][14]
2011 | March 18 | | Enhancing Human Capacities is published.[22][23]
2014 | | | The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.[24] GPP would eventually become a collaboration between the Centre for Effective Altruism and FHI.[25]
2014 | July–September | Influence | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.[26] In March 2017, the Open Philanthropy Project considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."[27]
2015 | | | The Strategic AI Research Center starts sometime after this period.[28]
2015 | | | "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans at FHI.[29][30]
2016 | | | Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.[31][30]
2016 | | | "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.[32][30]
2016 | June 1 | | The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. One of the paper's authors is Stuart Armstrong of FHI.[33][30]
2016 | September | | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.[34]
2016 | September 16 | | Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.[35][30]
2017 | February 9 | | Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal Global Policy.[36][30]
2017 | March | | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $1,995,425 to FHI for general support.[27]
2017 | April 27 | | "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.[37][38]
2017 | July | | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $299,320 to Yale University "to support research on the global politics of advanced artificial intelligence". The work will be led by Allan Dafoe, who will conduct part of it at FHI.[39]
2017 | July 17 | | "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.[40][38]
2017 | September 29 | | Effective Altruism Grants fall 2017 recipients are announced. One of the recipients is Gregory Lewis, who will use the grant for "Research into biological risk mitigation with the Future of Humanity Institute." The grant amount for Lewis is £15,000 (about $20,000).[41]
2018 | February 20 | Publication | The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations on how to mitigate these risks. The report is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.[42][43][44]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018. 
  2. "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018. 
  3. "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018. 
  4. Bostrom, Nick. "What is a Singleton?". Retrieved March 11, 2018. 
  5. Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018. 
  6. "Wayback Machine" (PDF). Archived from the original (PDF) on January 17, 2009. Retrieved February 7, 2018.
  7. "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Retrieved March 11, 2018. 
  8. "Overcoming Bias : Bio". Retrieved June 1, 2017. 
  9. "Overcoming Bias: How To Join". Retrieved September 26, 2017. 
  10. "Overcoming Bias". Retrieved September 26, 2017. 
  11. "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018. 
  12. "Wayback Machine" (PDF). Archived from the original (PDF) on April 13, 2012. Retrieved March 11, 2018.
  13. "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018. 
  14. "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018.
  15. "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018. 
  16. "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018. 
  17. "Overcoming Bias : Moral uncertainty – towards a solution?". Retrieved March 10, 2018. 
  18. "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018. 
  19. "FAQ - Lesswrongwiki". LessWrong. Retrieved June 1, 2017. 
  20. "SIAI vs. FHI achievements, 2008-2010 - Less Wrong". LessWrong. Retrieved March 11, 2018. However, since LW is to such a huge extent Eliezer's creation, and I'm not sure of what exactly the FHI contribution to LW is, I'm counting it as an SIAI and not a joint achievement. 
  21. "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018. 
  22. "Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books". Retrieved February 8, 2018. 
  23. "Future of Humanity Institute - Books". Archived from the original on January 16, 2013. Retrieved February 8, 2018. 
  24. "Global Priorities Project Strategy Overview" (PDF). Retrieved March 10, 2018. 
  25. "HOME". The Global Priorities Project. Retrieved March 10, 2018. 
  26. "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017. 
  27. "Future of Humanity Institute — General Support". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018.
  28. "Opinion | Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk". The Washington Post. Retrieved February 8, 2018. 
  29. "Learning the Preferences of Bounded Agents" (PDF). Retrieved March 10, 2018. 
  30. "2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum". Retrieved March 10, 2018.
  31. Armstrong, Stuart. "Off-policy Monte Carlo agents with variable behaviour policies" (PDF). Retrieved March 10, 2018. 
  32. "Learning the Preferences of Ignorant, Inconsistent Agents" (PDF). Retrieved March 10, 2018. 
  33. Bensinger, Rob (September 12, 2016). "New paper: "Safely interruptible agents" - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 10, 2018. 
  34. "Future of Humanity Institute — Biosecurity and Pandemic Preparedness". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018. 
  35. "[1609.04994] Exploration Potential". Retrieved March 10, 2018. 
  36. "Strategic Implications of Openness in AI Development". Retrieved March 10, 2018. 
  37. "[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox". Retrieved March 10, 2018. 
  38. Larks. "2018 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved March 10, 2018.
  39. "Yale University — Research on the Global Politics of AI". Open Philanthropy Project. December 15, 2017. Retrieved March 11, 2018. 
  40. "[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention". Retrieved March 10, 2018. 
  41. "EA Grants Fall 2017 Recipients". Google Docs. Retrieved March 11, 2018. 
  42. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018. 
  43. "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018. 
  44. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018.