Timeline of Future of Humanity Institute

| 2009 || {{dts|January 22}} || || ''Human Enhancement'', edited by Julian Savulescu and Nick Bostrom, is published.<ref>{{cite web |url=https://www.amazon.co.uk/Human-Enhancement-Julian-Savulescu/dp/0199299722/ |title=Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" /><ref name="annual-report-oct-2008-to-sep-2009" />
|-
| 2009 || {{dts|February}} || Project || ''{{W|LessWrong}}'', the group blog about rationality, launches.<ref>{{cite web |url=https://wiki.lesswrong.com/wiki/FAQ#Where_did_Less_Wrong_come_from.3F |title=FAQ - Lesswrongwiki |accessdate=June 1, 2017 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref> The blog is sponsored by FHI,<ref name="annual-report-oct-2008-to-sep-2009" /> although it is unclear to what extent FHI was involved in its creation.<ref>{{cite web |url=http://lesswrong.com/lw/7sc/siai_vs_fhi_achievements_20082010/ |title=SIAI vs. FHI achievements, 2008-2010 - Less Wrong |accessdate=March 11, 2018 |quote=However, since LW is to such a huge extent Eliezer's creation, and I'm not sure of what exactly the FHI contribution to LW ''is'', I'm counting it as an SIAI and not a joint achievement. |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2010 || {{dts|June 21}} || || The paperback edition of ''Anthropic Bias'' by Nick Bostrom is published. The book, first published in 2002, covers the topic of reasoning under observation selection effects.<ref>{{cite web |url=https://www.amazon.co.uk/Anthropic-Bias-Observation-Selection-Philosophy/dp/0415883946/ |title=Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books |accessdate=February 8, 2018}}</ref><ref name="2010-11-03-books" />
|-
| 2016 || || || "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://stuhlmueller.org/papers/preferences-aaai2016.pdf |title=Learning the Preferences of Ignorant, Inconsistent Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" />
|-
| 2016 || || || The Global Politics of AI Research Group is established by Carrick Flynn and Allan Dafoe (both of whom are affiliated with FHI). The group "consists of eleven research members [and] more than thirty volunteers" and "has the mission of helping researchers and political actors to adopt the best possible strategy around the development of AI."<ref name="annual-review-2016" /> (It is not clear where the group is based or whether it meets in person.)
|-
| 2016 || {{dts|February 8}}–9 || || The Global Priorities Project (a collaboration between FHI and the Centre for Effective Altruism) hosts a policy workshop on existential risk. Attendees include "twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden".<ref>{{cite web |url=https://www.fhi.ox.ac.uk/workshop-hosted-on-existential-risk/ |author=Future of Humanity Institute - FHI |title=Policy workshop hosted on existential risk - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 25, 2016 |accessdate=March 13, 2018}}</ref><ref name="annual-review-2016" />
|-
| 2016 || {{dts|May}} || || The Global Priorities Project (associated with FHI) releases the Global Catastrophic Risks 2016 report.<ref name="newsletter-summer-2016">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-newsletter-july-2016/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Summer 2016 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|May}} || || FHI hosts a week-long workshop in Oxford called "The Control Problem in AI", attended by ten members of the {{W|Machine Intelligence Research Institute}}.<ref name="annual-review-2016" />
|-
| 2016 || {{dts|May 27}}{{snd}}{{dts|June 17}} || || The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the {{w|Machine Intelligence Research Institute}} and FHI, takes place. The program brings "together a variety of academics and professionals to address the technical challenges associated with AI robustness and reliability, with a goal of facilitating conversations between people interested in a number of different approaches." At the program, Jan Leike and Stuart Armstrong of FHI each give a talk.<ref>{{cite web |url=https://intelligence.org/colloquium-series/ |title=Colloquium Series on Robust and Beneficial AI - Machine Intelligence Research Institute |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|June}} (approximate) || || FHI recruits {{W|William MacAskill}} and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.<ref name="newsletter-summer-2016" />
|-
| 2016 || {{dts|June}} || || ''[[wikipedia:The Age of Em|The Age of Em: Work, Love and Life When Robots Rule the Earth]]'', a book about the implications of whole brain emulation by FHI research associate {{W|Robin Hanson}}, is published.<ref>{{cite web |url=http://ageofem.com/ |title=The Age of Em, A Book |accessdate=March 13, 2018}}</ref> In October, FHI and Hanson would organize a workshop about the book.<ref name="annual-review-2016" />
|-
| 2016 || {{dts|June 1}} || || The paper "Safely interruptible agents" is announced on the {{W|Machine Intelligence Research Institute}} blog. The paper is a collaboration between {{W|Google DeepMind}} and FHI, and one of the paper's authors is Stuart Armstrong of FHI.<ref>{{cite web |url=https://intelligence.org/2016/06/01/new-paper-safely-interruptible-agents/ |title=New paper: "Safely interruptible agents" - Machine Intelligence Research Institute |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |date=September 12, 2016 |first=Rob |last=Bensinger |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review">{{cite web |url=http://effective-altruism.com/ea/14w/2017_ai_risk_literature_review_and_charity/ |title=2017 AI Risk Literature Review and Charity Comparison - Effective Altruism Forum |accessdate=March 10, 2018}}</ref> The paper is also presented at the Conference on Uncertainty in Artificial Intelligence (UAI).<ref name="newsletter-summer-2016" />
|-
| 2016 || {{dts|September}} || || The {{W|Open Philanthropy Project}} recommends a grant of $115,652 to FHI to support the hiring of Piers Millett, who will work on biosecurity and pandemic preparedness.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/biosecurity/future-humanity-institute-biosecurity-and-pandemic-preparedness |publisher=Open Philanthropy Project |title=Future of Humanity Institute — Biosecurity and Pandemic Preparedness |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2016 || {{dts|September 22}} || || FHI publishes a page describing its collaboration with {{W|Google DeepMind}}. However, it is unclear when the collaboration itself began.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/deepmind-collaboration/ |author=Future of Humanity Institute - FHI |title=DeepMind collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|November}} || || The biotech horizon scanning workshop, co-hosted by the Centre for the Study of Existential Risk and FHI, takes place. The workshop and the overall "biological engineering horizon scanning" process is intended to lead up to "a peer-reviewed publication highlighting 15–20 developments of greatest likely impact."<ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/biotech-horizon-scanning-workshop/ |author=Future of Humanity Institute - FHI |title=Biotech horizon scanning workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=December 12, 2016 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|December}} || || FHI hosts a workshop on "AI Safety and Blockchain". Attendees include Nick Bostrom, Vitalik Buterin, {{W|Jaan Tallinn}}, {{W|Wei Dai}}, Gwern Branwen, and Allan Dafoe. "The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination."<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-holds-workshop-on-ai-safety-and-blockchain/ |author=Future of Humanity Institute - FHI |title=FHI holds workshop on AI safety and blockchain - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2017 |accessdate=March 13, 2018}}</ref><ref name="annual-review-2016" /> It is unclear whether any output resulted from this workshop.
|-
| 2017 || || || Slides for an upcoming paper by FHI researchers Anders Sandberg, Eric Drexler, and Toby Ord, "Dissolving the Fermi Paradox", are posted.<ref>{{cite web |url=http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html |title=Has the Fermi paradox been resolved? - Marginal REVOLUTION |publisher=Marginal REVOLUTION |date=July 3, 2017 |accessdate=March 13, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 13, 2018}}</ref>
|-
| 2017 || || || The report "Existential Risk: Diplomacy and Governance" is published. "This work began at the Global Priorities Project, whose policy work has now joined FHI."<ref name="newsletter-spring-2017" /> The report gives an overview of existential risks and presents three recommendations for ways to reduce existential risks (chosen out of more than 100 proposals): (1) developing governance of geoengineering research; (2) establishing scenario plans and exercises for severe engineered pandemics at the international level; and (3) building international attention and support for existential risk reduction.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Existential-Risks-2017-01-23.pdf |title=Existential Risk: Diplomacy and Governance |year=2017 |first1=Sebastian |last1=Farquhar |first2=John |last2=Halstead |first3=Owen |last3=Cotton-Barratt |first4=Stefan |last4=Schubert |first5=Haydn |last5=Belfield |first6=Andrew |last6=Snyder-Beattie |publisher=Global Priorities Project |accessdate=March 14, 2018}}</ref>
|-
| 2017 || {{dts|January 15}} || || "Agent-Agnostic Human-in-the-Loop Reinforcement Learning" is uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1701.04079v1 |title=[1701.04079v1] Agent-Agnostic Human-in-the-Loop Reinforcement Learning |accessdate=March 14, 2018}}</ref><ref name="newsletter-spring-2017" />
|-
| 2017 || {{dts|January 25}} || || The FHI Annual Review 2016 is published.<ref name="annual-review-2016">{{cite web |url=https://www.fhi.ox.ac.uk/fhi-annual-review-2016/ |author=Future of Humanity Institute - FHI |title=FHI Annual Review 2016 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2017 || {{dts|February 9}} || || Nick Bostrom's paper "Strategic Implications of Openness in AI Development" is published in the journal ''{{W|Global Policy}}''.<ref>{{cite web |url=http://onlinelibrary.wiley.com/doi/10.1111/1758-5899.12403/abstract |title=Strategic Implications of Openness in AI Development |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /><ref name="newsletter-spring-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-spring-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Spring 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=July 31, 2017 |accessdate=March 14, 2018}}</ref> The paper "covers a breadth of areas including long-term AI development, singleton versus multipolar scenarios, race dynamics, responsible AI development, and identification of possible failure modes."<ref name="annual-review-2016" />
|-
| 2017 || {{dts|March}} || || The {{W|Open Philanthropy Project}} recommends a grant of $1,995,425 to FHI for general support.<ref name="open-phil-grant-march-2017">{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support |publisher=Open Philanthropy Project |title=Future of Humanity Institute — General Support |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
==See also==
* [[Timeline of AI safety]]
* [[Timeline of Machine Intelligence Research Institute]]
* [[Timeline of OpenAI]]
* [[Timeline of the rationality community]]
==External links==