Timeline of Future of Humanity Institute
This is a timeline of the Future of Humanity Institute (FHI).
Big picture
Time period | Development summary | More details |
---|---|---|
Full timeline
Year | Month and date | Event type | Details |
---|---|---|---|
1973 | March 10 | | Nick Bostrom is born. |
1998 | August 30 | Website | The domain name for the Anthropic Principle website, anthropic-principle.com, is registered.[1] The first Internet Archive snapshot of the website is from January 25, 1999.[2] |
2001 | October 31 | Website | The Simulation Argument website's domain name, simulation-argument.com, is registered.[3] The first Internet Archive snapshot of the website would be on December 5, 2001.[4] The website hosts information about the simulation hypothesis, especially as articulated by Bostrom. In the FHI Achievements Report for 2008–2010, the Simulation Argument website is listed under websites maintained by FHI members.[5] |
2005 | June 1 or November 29 | | The Future of Humanity Institute is established.[6][7][8] |
2006 | | Publication | "What is a Singleton?" by Nick Bostrom is published in the journal Linguistic and Philosophical Investigations. The paper introduces the idea of a singleton, a hypothetical "world order in which there is a single decision-making agency at the highest level".[9] |
2006 | March 2 | Website | The ENHANCE project website is created[10] by Anders Sandberg.[11] |
2006 | July | Publication | "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" by Nick Bostrom and Toby Ord is published.[12] The paper introduces the reversal test in the context of bioethics of human enhancement. |
2006 | July 19 | Website | The domain name for the existential risk website, existential-risk.org, is registered.[13] |
2006 | November 20 | Website | Robin Hanson starts Overcoming Bias.[14] The first post on the blog seems to be from November 20.[15] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[16] The blog seems to have received support from FHI in the beginning.[17][11] |
2005–2007 | | | Lighthill Risk Network is created by Peter Taylor of FHI.[11] |
2007 | May | Workshop | The Whole Brain Emulation Workshop is hosted by FHI.[11] The workshop would eventually lead to the publication of "Whole Brain Emulation: A Technical Roadmap" in 2008.[18] |
2007 | August 24 | Publication | Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published.[19][20] |
2007 | November | Website | Practical Ethics in the News (at www.practicalethicsnews.com) launches.[18] (This appears to be the same as Practical Ethics, mentioned below.) At some point the site begins redirecting to http://blog.practicalethics.ox.ac.uk/ , but as of March 2018 the site has been "temporarily offline for maintenance" for several years. |
2008 | | Website | Practical Ethics, a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.[21] |
2008 | | Publication | "Whole Brain Emulation: A Technical Roadmap" by Anders Sandberg and Nick Bostrom is published.[18] |
2008 | January 22 | Website | The domain name for the Global Catastrophic Risks website, global-catastrophic-risks.com, is registered.[22] The first snapshot on the Internet Archive would be on May 5, 2008.[23] |
2008 | September 15 | Publication | Global Catastrophic Risks is published.[24][20] |
2009 | | Publication | "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes" by Rafaela Hillerbrand, Toby Ord, and Anders Sandberg is published.[18] |
2009 | January 1 | Publication | On the group blog (at the time) Overcoming Bias, Nick Bostrom publishes a blog post proposing the Parliamentary Model for dealing with moral uncertainty. The blog post mentions that he is writing a paper on the topic with Toby Ord, but as of March 2018 the paper seems never to have been published. The paper's title might be "Fundamental Moral Uncertainty".[5][25] Despite never being published in full, the idea is often referenced in discussions. |
2009 | January 22 | Publication | Human Enhancement is published.[26][20][18] |
2009 | February | Website | LessWrong, the group blog about rationality, launches.[27] The blog is sponsored by FHI,[18] although it is unclear to what extent FHI was involved in its creation.[28] |
2010 | June 21 | Publication | Anthropic Bias by Nick Bostrom is published. The book covers the topic of reasoning under observation selection effects.[29][20] |
2011 | March 18 | Publication | Enhancing Human Capacities is published.[30][31] |
2012 | August 15 | Website | The first Internet Archive snapshot of the Winter Intelligence Conference website is from this day.[32] FHI hosts the conference for 2012.[33] |
2012 | September 5 | Social media | The FHI Twitter account, @FHIOxford, is registered.[34] |
2014 | | | The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.[35] GPP would also eventually become a collaboration between the Centre for Effective Altruism and FHI.[36] |
2014 | July–September | Publication | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published.[37] In March 2017, the Open Philanthropy Project considered this book FHI's "most significant output so far and the best strategic analysis of potential risks from advanced AI to date."[38] |
2015 | | | The Strategic AI Research Center starts sometime after this year.[39] |
2015 | | Publication | "Learning the Preferences of Bounded Agents" is published. One of the paper's authors is Owain Evans of FHI.[40][41] |
2016 | | Publication | Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.[42][41] |
2016 | | Publication | "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans of FHI.[43][41] |
2016 | | | The Global Politics of AI Research Group is established by Carrick Flynn and Allan Dafoe (both affiliated with FHI). The group "consists of eleven research members [and] more than thirty volunteers" and "has the mission of helping researchers and political actors to adopt the best possible strategy around the development of AI."[44] (It is not clear where the group is based or whether it meets physically.) |
2016 | February 8–9 | Workshop | The Global Priorities Project (a collaboration between FHI and the Centre for Effective Altruism) hosts a policy workshop on existential risk. Attendees include "twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden".[45][44] |
2016 | May | Publication | The Global Priorities Project (associated with FHI) releases the Global Catastrophic Risks 2016 report.[46] |
2016 | May | Workshop | FHI hosts a week-long workshop in Oxford called "The Control Problem in AI", attended by ten members of the Machine Intelligence Research Institute.[44] |
2016 | May 27 – June 17 | Workshop | The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the Machine Intelligence Research Institute and FHI, takes place. The program brings "together a variety of academics and professionals to address the technical challenges associated with AI robustness and reliability, with a goal of facilitating conversations between people interested in a number of different approaches." At the program, Jan Leike and Stuart Armstrong of FHI each give a talk.[47] |
2016 | June (approximate) | | FHI recruits William MacAskill and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.[46] |
2016 | June | Publication | The Age of Em: Work, Love and Life When Robots Rule the Earth, a book about the implications of whole brain emulation by FHI research associate Robin Hanson, is published.[48] In October, FHI and Hanson would organize a workshop about the book.[44] |
2016 | June 1 | Publication | The paper "Safely interruptible agents" is announced on the Machine Intelligence Research Institute blog. The paper is a collaboration between Google DeepMind and FHI, and one of the paper's authors is Stuart Armstrong of FHI.[49][41] The paper is also presented at the Conference on Uncertainty in Artificial Intelligence (UAI).[46] |
2016 | September | Financial | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.[50] |
2016 | September (approximate) | Financial | FHI receives a funding offer from Luke Ding to fund Hilary Greaves for four years starting mid-2017 (in case a proposed new institute is unable to raise academic funds for her) and William MacAskill's full salary for five years.[51] |
2016 | September 16 | Publication | Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.[52][41][44] |
2016 | September 22 | | FHI's page on its collaboration with Google DeepMind is published. However, it is unclear when the actual collaboration began.[53] |
2016 | November | Workshop | The biotech horizon scanning workshop, co-hosted by the Centre for the Study of Existential Risk and FHI, takes place. The workshop and the overall "biological engineering horizon scanning" process is intended to lead up to "a peer-reviewed publication highlighting 15–20 developments of greatest likely impact."[44][54] |
2016 | December | Workshop | FHI hosts a workshop on "AI Safety and Blockchain". Attendees include Nick Bostrom, Vitalik Buterin, Jaan Tallinn, Wei Dai, Gwern Branwen, and Allan Dafoe. "The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination."[55][44] It is unclear whether any output resulted from this workshop. |
2017 | | Publication | Slides for an upcoming paper by FHI researchers Anders Sandberg, Eric Drexler, and Toby Ord, "Dissolving the Fermi Paradox", are posted.[56][57] |
2017 | | Publication | The report "Existential Risk: Diplomacy and Governance" is published. "This work began at the Global Priorities Project, whose policy work has now joined FHI."[58] The report gives an overview of existential risks and presents three recommendations (chosen from more than 100 proposals) for reducing them: (1) developing governance of geoengineering research; (2) establishing scenario plans and exercises for severe engineered pandemics at the international level; and (3) building international attention and support for existential risk reduction.[59] |
2017 | January 15 | Publication | "Agent-Agnostic Human-in-the-Loop Reinforcement Learning" is uploaded to the arXiv.[60][58] |
2017 | January 25 | Publication | The FHI Annual Review 2016 is published.[44] |
2017 | February 9 | Publication | Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal Global Policy.[61][41][58] The paper "covers a breadth of areas including long-term AI development, singleton versus multipolar scenarios, race dynamics, responsible AI development, and identification of possible failure modes."[44] |
2017 | March | Financial | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $1,995,425 to FHI for general support.[38] |
2017 | April 26 | Publication | The online book Modeling Agents with Probabilistic Programs by Owain Evans (FHI research fellow), Andreas Stuhlmüller, John Salvatier (FHI intern), and Daniel Filan (FHI intern) is published. The book is available at https://agentmodels.org/ .[62][63] |
2017 | April 27 | Publication | "That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox" is uploaded to the arXiv.[64][65] |
2017 | July | Financial | The Open Philanthropy Project recommends (to Good Ventures?) a grant of $299,320 to Yale University "to support research on the global politics of advanced artificial intelligence". The work will be led by Allan Dafoe, who will conduct part of it at FHI.[66] |
2017 | July 17 | Publication | "Trial without Error: Towards Safe Reinforcement Learning via Human Intervention" is uploaded to the arXiv.[67][65] |
2017 | September 29 | Financial | Effective Altruism Grants fall 2017 recipients are announced. One of the recipients is Gregory Lewis, who will use the grant for "Research into biological risk mitigation with the Future of Humanity Institute." The grant amount for Lewis is £15,000 (about $20,000).[68] |
2018 | February 20 | Publication | The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.[69][70][71] |
Meta information on the timeline
How the timeline was built
The initial version of the timeline was written by Issa Rice.
Funding information for this timeline is available.
What the timeline is still missing
- the entries in https://www.fhi.ox.ac.uk/reporting/
- check if all featured publications are in the timeline https://web.archive.org/web/20130112235857/http://www.fhi.ox.ac.uk/selected_outputs
- I'm curious what output is associated with the "Applied epistemology" research agenda https://web.archive.org/web/20130116011525/http://www.fhi.ox.ac.uk/research/rationality_and_wisdom
- http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/
- http://lesswrong.com/lw/7sc/siai_vs_fhi_achievements_20082010/
- featured publications on https://www.fhi.ox.ac.uk/publications/
- http://effective-altruism.com/ea/1fa/personal_thoughts_on_careers_in_ai_policy_and/ ?
- When did FHI start doing more ML-based stuff? Was it after it hired Owain Evans?
- Rebecca Roache enhanced punishment stuff?
- https://web.archive.org/web/20110908095411/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0003/19902/Final_Complete_FHI_Report.pdf (see pages 76-77)
- http://lesswrong.com/lw/5il/siai_an_examination/4fy5
Timeline update strategy
- FHI posts new quarterly updates here: https://www.fhi.ox.ac.uk/reporting/
See also
- Timeline of AI safety
- Timeline of Machine Intelligence Research Institute
- Timeline of the rationality community
External links
References
- ↑ "Showing results for: anthropic-principle.com". ICANN WHOIS. Retrieved March 11, 2018.
Creation Date: 1998-08-30T04:00:00Z
- ↑ "anthropic-principle.com". Archived from the original on January 25, 1999. Retrieved March 11, 2018.
- ↑ "Showing results for: simulation-argument.com". ICANN WHOIS. Retrieved March 11, 2018.
Creation Date: 2001-10-31T08:55:28Z
- ↑ "simulation-argument.com". Internet Archive. Retrieved March 10, 2018.
- ↑ 5.0 5.1 "Wayback Machine" (PDF). Archived from the original (PDF) on May 16, 2011. Retrieved March 11, 2018.
- ↑ "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018.
- ↑ "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018.
- ↑ "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018.
- ↑ Bostrom, Nick. "What is a Singleton?". Retrieved March 11, 2018.
- ↑ Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018.
- ↑ 11.0 11.1 11.2 11.3 "Wayback Machine" (PDF). Archived from the original (PDF) on January 17, 2009. Retrieved February 7, 2018.
- ↑ "The Reversal Test: Eliminating Status Quo Bias in Applied Ethics" (PDF). Retrieved March 11, 2018.
- ↑ "Showing results for: EXISTENTIAL-RISK.ORG". ICANN WHOIS. Retrieved March 11, 2018.
Creation Date: 2006-07-19T23:23:38Z
- ↑ "Overcoming Bias : Bio". Retrieved June 1, 2017.
- ↑ "Overcoming Bias: How To Join". Retrieved September 26, 2017.
- ↑ "Overcoming Bias". Retrieved September 26, 2017.
- ↑ "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018.
- ↑ 18.0 18.1 18.2 18.3 18.4 18.5 "Wayback Machine" (PDF). Archived from the original (PDF) on April 13, 2012. Retrieved March 11, 2018.
- ↑ "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018.
- ↑ 20.0 20.1 20.2 20.3 "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018.
- ↑ "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018.
- ↑ "Showing results for: global-catastrophic-risks.com". ICANN WHOIS. Retrieved March 11, 2018.
Creation Date: 2008-01-22T20:47:11Z
- ↑ "global-catastrophic-risks.com". Retrieved March 10, 2018.
- ↑ "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018.
- ↑ "Overcoming Bias : Moral uncertainty – towards a solution?". Retrieved March 10, 2018.
- ↑ "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018.
- ↑ "FAQ - Lesswrongwiki". LessWrong. Retrieved June 1, 2017.
- ↑ "SIAI vs. FHI achievements, 2008-2010 - Less Wrong". LessWrong. Retrieved March 11, 2018.
However, since LW is to such a huge extent Eliezer's creation, and I'm not sure of what exactly the FHI contribution to LW is, I'm counting it as an SIAI and not a joint achievement.
- ↑ "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018.
- ↑ "Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books". Retrieved February 8, 2018.
- ↑ "Future of Humanity Institute - Books". Archived from the original on January 16, 2013. Retrieved February 8, 2018.
- ↑ "Winter Intelligence Conferences | The future of artificial general intelligence". Archived from the original on August 15, 2012. Retrieved March 11, 2018.
- ↑ "Future of Humanity Institute - News Archive". Archived from the original on January 12, 2013. Retrieved March 11, 2018.
- ↑ "Future of Humanity Institute (@FHIOxford)". Twitter. Retrieved March 11, 2018.
- ↑ "Global Priorities Project Strategy Overview" (PDF). Retrieved March 10, 2018.
- ↑ "HOME". The Global Priorities Project. Retrieved March 10, 2018.
- ↑ "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017.
- ↑ 38.0 38.1 "Future of Humanity Institute — General Support". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018.
- ↑ "Opinion | Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk". The Washington Post. Retrieved February 8, 2018.
- ↑ "Learning the Preferences of Bounded Agents" (PDF). Retrieved March 10, 2018.
- ↑ 41.0 41.1 41.2 41.3 41.4 41.5 "2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum". Retrieved March 10, 2018.
- ↑ Armstrong, Stuart. "Off-policy Monte Carlo agents with variable behaviour policies" (PDF). Retrieved March 10, 2018.
- ↑ "Learning the Preferences of Ignorant, Inconsistent Agents" (PDF). Retrieved March 10, 2018.
- ↑ 44.0 44.1 44.2 44.3 44.4 44.5 44.6 44.7 44.8 Future of Humanity Institute - FHI (July 31, 2017). "FHI Annual Review 2016 - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ Future of Humanity Institute - FHI (October 25, 2016). "Policy workshop hosted on existential risk - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ "Colloquium Series on Robust and Beneficial AI - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 13, 2018.
- ↑ "The Age of Em, A Book". Retrieved March 13, 2018.
- ↑ Bensinger, Rob (September 12, 2016). "New paper: "Safely interruptible agents" - Machine Intelligence Research Institute". Machine Intelligence Research Institute. Retrieved March 10, 2018.
- ↑ "Future of Humanity Institute — Biosecurity and Pandemic Preparedness". Open Philanthropy Project. December 15, 2017. Retrieved March 10, 2018.
- ↑ Future of Humanity Institute (July 31, 2017). "Quarterly Update Autumn 2016". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ "[1609.04994] Exploration Potential". Retrieved March 10, 2018.
- ↑ Future of Humanity Institute - FHI (March 8, 2017). "DeepMind collaboration - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ Future of Humanity Institute - FHI (December 12, 2016). "Biotech horizon scanning workshop - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ Future of Humanity Institute - FHI (January 19, 2017). "FHI holds workshop on AI safety and blockchain - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ "Has the Fermi paradox been resolved? - Marginal REVOLUTION". Marginal REVOLUTION. July 3, 2017. Retrieved March 13, 2018.
- ↑ gwern (August 16, 2017). "September 2017 news - Gwern.net". Retrieved March 13, 2018.
- ↑ Farquhar, Sebastian; Halstead, John; Cotton-Barratt, Owen; Schubert, Stefan; Belfield, Haydn; Snyder-Beattie, Andrew (2017). "Existential Risk: Diplomacy and Governance" (PDF). Global Priorities Project. Retrieved March 14, 2018.
- ↑ "[1701.04079v1] Agent-Agnostic Human-in-the-Loop Reinforcement Learning". Retrieved March 14, 2018.
- ↑ "Strategic Implications of Openness in AI Development". Retrieved March 10, 2018.
- ↑ "Modeling Agents with Probabilistic Programs". Retrieved March 13, 2018.
- ↑ Future of Humanity Institute - FHI (April 26, 2017). "New Interactive Tutorial: Modeling Agents with Probabilistic Programs - Future of Humanity Institute". Future of Humanity Institute. Retrieved March 13, 2018.
- ↑ "[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox". Retrieved March 10, 2018.
- ↑ 65.0 65.1 Larks. "2018 AI Safety Literature Review and Charity Comparison". Effective Altruism Forum. Retrieved March 10, 2018.
- ↑ "Yale University — Research on the Global Politics of AI". Open Philanthropy Project. December 15, 2017. Retrieved March 11, 2018.
- ↑ "[1707.05173] Trial without Error: Towards Safe Reinforcement Learning via Human Intervention". Retrieved March 10, 2018.
- ↑ "EA Grants Fall 2017 Recipients". Google Docs. Retrieved March 11, 2018.
- ↑ "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018.
- ↑ "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018.
- ↑ Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018.