Timeline of Future of Humanity Institute

* What was FHI up to for the first ten years of its existence (roughly up to the time when ''Superintelligence'' was published)? (Scan the years 2005–2014.)
* What are the websites FHI has been associated with? (Sort by the "Event type" column and look at the rows labeled "Website".)
* Who were some of the early research staff at FHI? (Sort by the "Event type" column and look at the first several rows labeled "Staff".)
| 2006 || {{dts|December 17}} || External review || The initial version of the [[wikipedia:Future of Humanity Institute|Wikipedia page for FHI]] is created.<ref>{{cite web |url=https://en.wikipedia.org/w/index.php?title=Future_of_Humanity_Institute&dir=prev&action=history |title=Future of Humanity Institute: Revision history - Wikipedia |accessdate=March 14, 2018 |publisher=[[wikipedia:English Wikipedia|English Wikipedia]]}}</ref>
|-
| 2005–2007 || || Project || Lighthill Risk Network is created by Peter Taylor of FHI.<ref name="fhi-report" />
|-
| 2007 || {{dts|April}} || Internal review || Issue 4 of the FHI Progress Report (apparently renamed from "Bimonthly Progress Report") is published.<ref name="report-april-2007">{{cite web |url=http://www.fhi.ox.ac.uk:80/newsletters/April%202007%20final.pdf |title=Progress Report - Issue 4 |publisher=Future of Humanity Institute |accessdate=March 18, 2018 |archiveurl=https://web.archive.org/web/20081221082328/http://www.fhi.ox.ac.uk:80/newsletters/April%202007%20final.pdf |archivedate=December 21, 2008 |dead-url=yes}}</ref>
|-
| 2010 || {{dts|June}} || Staff || Eric Mandelbaum joins FHI as a Postdoctoral Research Fellow. He would remain at FHI until July 2011.<ref>{{cite web |url=https://static1.squarespace.com/static/54c160eae4b060a8974e59cc/t/59b05ac5f7e0ab27e55d54ee/1504729797699/CV+May+2017.doc |title=Eric Mandelbaum |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20180316012900/https://static1.squarespace.com/static/54c160eae4b060a8974e59cc/t/59b05ac5f7e0ab27e55d54ee/1504729797699/CV+May+2017.doc |archivedate=March 16, 2018 |dead-url=no}}</ref>
|-
| 2011 || {{dts|January 14}}–17 || Conference || The Winter Intelligence Conference, organized by FHI, takes place. The conference brings together experts and students in philosophy, cognitive science, and artificial intelligence for discussions about intelligence.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0013/20173/Winter_Intelligence_Conference_Report_280111.pdf |title=Winter Intelligence |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20110711082741/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0013/20173/Winter_Intelligence_Conference_Report_280111.pdf |archivedate=July 11, 2011 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/archived_events/winter_conference |title=Future of Humanity Institute - Winter Intelligence Conference |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20130116104313/http://www.fhi.ox.ac.uk/archived_events/winter_conference |archivedate=January 16, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/winter-intelligence-conference-2011-2/ |author=Future of Humanity Institute - FHI |title=Winter Intelligence Conference 2011 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=November 8, 2017 |accessdate=March 16, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/winter-intelligence-conference-2011/ |author=Future of Humanity Institute - FHI |title=Winter Intelligence Conference 2011 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 14, 2011 |accessdate=March 16, 2018}}</ref>
|-
| 2011 || {{dts|March 18}} || Publication || ''Enhancing Human Capacities'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
|-
| 2011 || {{dts|June 9}} || External review || On a comment thread on ''{{W|LessWrong}}'', a discussion takes place regarding FHI's funding needs, the productivity of marginal hires, the dispersion of its research topics (i.e. lack of focus on existential risks), and related topics.<ref>{{cite web |url=http://lesswrong.com/lw/634/safety_culture_and_the_marginal_effect_of_a_dollar/4bnx |title=CarlShulman comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2011 || {{dts|September}} || Project || The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.<ref>{{cite web |url=http://www.futuretech.ox.ac.uk/www.futuretech.ox.ac.uk/index.html |title=Welcome |publisher=Oxford Martin Programme on the Impacts of Future Technology |accessdate=July 26, 2017 |quote=The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.}}</ref> The Programme is directed by Nick Bostrom and works closely with FHI, among other organizations.
|-
| 2011 || {{dts|September}} || Staff || Stuart Armstrong joins FHI as a Research Fellow.<ref>{{cite web |url=https://www.linkedin.com/in/stuart-armstrong-2447743/ |title=Stuart Armstrong |accessdate=March 15, 2018 |publisher=LinkedIn}}</ref>
|-
| 2012 || {{dts|November 16}} || External review || John Maxwell IV posts "Room for more funding at the Future of Humanity Institute" on ''{{W|LessWrong}}''.<ref>{{cite web |url=http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/ |title=Room for more funding at the Future of Humanity Institute - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2012 || {{dts|December 10}}–11 || Conference || FHI hosts the 2012 conference on Impacts and Risks of Artificial General Intelligence, one of the two conferences making up the Winter Intelligence Multi-Conference 2012, also hosted by FHI.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/archive_news |title=Future of Humanity Institute - News Archive |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20130112235735/http://www.fhi.ox.ac.uk/archive_news |archivedate=January 12, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.winterintelligence.org/oxford2012/agi-impacts/ |title=AGI Impacts {{!}} Winter Intelligence Conferences |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20121030120754/http://www.winterintelligence.org/oxford2012/agi-impacts/ |archivedate=October 30, 2012 |dead-url=yes}}</ref>
|-
| 2013 || || Staff || [[wikipedia:Carl Benedikt Frey|Carl Frey]] and {{W|Vincent Müller}} join FHI as Research Fellows sometime around this year.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/about/staff/ |title=Staff {{!}} Future of Humanity Institute |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20130615192159/http://www.fhi.ox.ac.uk:80/about/staff/ |archivedate=June 15, 2013 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|March 12}} || Publication || "Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox" by Stuart Armstrong and Anders Sandberg is published.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |title=Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox |first1=Stuart |last1=Armstrong |first2=Anders |last2=Sandberg |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140409031029/http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |archivedate=April 9, 2014 |dead-url=yes}}</ref> This paper is a featured FHI publication in 2014.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/research/publications/ |title=Publications {{!}} Future of Humanity Institute |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140523110809/http://www.fhi.ox.ac.uk:80/research/publications/ |archivedate=May 23, 2014 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|May 30}} || Collaboration || A collaboration between FHI and the insurance company {{W|Amlin}} is announced. The collaboration is for research into systemic risks.<ref>{{cite web |url=https://www.oxfordmartin.ox.ac.uk/news/201305AmlinFHI |publisher=Oxford Martin School |title=FHI & Amlin join forces to understand systemic risk |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/amlin/ |author=Future of Humanity Institute - FHI |title=FHI-Amlin Collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/research/amlin/ |title=FHI-Amlin Collaboration {{!}} Future of Humanity Institute |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140523110804/http://www.fhi.ox.ac.uk:80/research/amlin/ |archivedate=May 23, 2014 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|June}} || Staff || Nick Beckstead joins FHI as a Research Fellow. He would remain at FHI until November 2014.<ref>{{cite web |url=https://www.linkedin.com/in/nick-beckstead-7aa54374/ |title=Nick Beckstead |accessdate=March 15, 2018 |publisher=LinkedIn}}</ref>
|-
| 2013 || {{dts|December 27}} || External review || Chris Hallquist posts "Donating to MIRI vs. FHI vs. CEA vs. CFAR" on ''{{W|LessWrong}}'' about the relative merits of donating to the listed organizations. The discussion thread includes a comment from Seán Ó hÉigeartaigh of FHI about the funding needs of FHI.<ref>{{cite web |url=http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/ |title=Donating to MIRI vs. FHI vs. CEA vs. CFAR - Less Wrong Discussion |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2014 || || Project || The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.<ref>{{cite web |url=http://globalprioritiesproject.org/wp-content/uploads/2015/03/GPP-Strategy-Overview-February-2015.pdf |title=Global Priorities Project Strategy Overview |accessdate=March 10, 2018}}</ref> GPP would also eventually become a collaboration between Centre for Effective Altruism and FHI.<ref>{{cite web |url=http://globalprioritiesproject.org/ |publisher=The Global Priorities Project |title=HOME |accessdate=March 10, 2018}}</ref>
|-
| 2014 || || Publication || "Managing existential risk from emerging technologies" by Nick Beckstead and Toby Ord is published in the report "Innovation: Managing Risk, Not Avoiding It. Evidence and Case Studies."<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Managing-existential-risks-from-Emerging-Technologies.pdf |title=Innovation: managing risk, not avoiding it |year=2014 |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2015 || || Staff || Simon Beard joins FHI as a Research Fellow in philosophy, for work on ''Population Ethics: Theory and Practice''. He would remain at FHI until 2016.<ref>{{cite web |url=http://sjbeard.weebly.com/cv.html |publisher=Simon Beard |title=CV |accessdate=March 15, 2018}}</ref>
|-
| 2015 || || Project || The Strategic Artificial Intelligence Research Centre is established.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/research/research-areas/strategic-centre-for-artificial-intelligence-policy/ |author=Future of Humanity Institute - FHI |title=Strategic Artificial Intelligence Research Centre - Future of Humanity Institute |publisher=Future of Humanity Institute |date=September 28, 2015 |accessdate=March 16, 2018}}</ref><ref>{{cite web |url=https://nickbostrom.com/ |title=Nick Bostrom's Home Page |accessdate=March 16, 2018 |quote=Since 2015, I also direct the Strategic Artificial Intelligence Research Center.}}</ref><ref>{{cite web |url=https://www.washingtonpost.com/news/in-theory/wp/2015/11/05/qa-philosopher-nick-bostrom-on-superintelligence-human-enhancement-and-existential-risk/ |publisher=[[wikipedia:The Washington Post|The Washington Post]] |title=Opinion {{!}} Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk |accessdate=February 8, 2018}}</ref>
|-
| 2015 || {{dts|January 2}}–5 || Conference || ''The Future of AI: Opportunities and Challenges'', an AI safety conference, takes place in Puerto Rico. The conference is organized by the Future of Life Institute, but speakers include {{W|Nick Bostrom}}, the director of FHI.<ref>{{cite web |url=https://futureoflife.org/2015/10/12/ai-safety-conference-in-puerto-rico/ |title=AI safety conference in Puerto Rico |publisher=Future of Life Institute |date=October 12, 2015 |accessdate=July 13, 2017}}</ref> Nate Soares (executive director of {{W|Machine Intelligence Research Institute}}) would later call this conference the "turning point" at which top academics began to focus on AI risk.<ref>{{cite web |url=https://intelligence.org/2015/07/16/an-astounding-year/ |title=An Astounding Year |publisher=Machine Intelligence Research Institute |author=Nate Soares |date=July 16, 2015 |accessdate=July 13, 2017}}</ref>
|-
| 2016 || || Publication || "Learning the Preferences of Ignorant, Inconsistent Agents" is published. One of the paper's authors is Owain Evans at FHI.<ref>{{cite web |url=https://stuhlmueller.org/papers/preferences-aaai2016.pdf |title=Learning the Preferences of Ignorant, Inconsistent Agents |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2016 || || Project || The Global Politics of AI Research Group is established by Carrick Flynn and Allan Dafoe (both of whom are affiliated with FHI). The group "consists of eleven research members [and] more than thirty volunteers" and "has the mission of helping researchers and political actors to adopt the best possible strategy around the development of AI."<ref name="annual-review-2016" /> (It's not clear where the group is based or if it even meets physically.)
|-
| 2016 || || Publication || "Thompson Sampling is Asymptotically Optimal in General Environments" by Leike et al. is published.<ref>{{cite web |url=http://auai.org/uai2016/proceedings/papers/20.pdf |title=Thompson Sampling is Asymptotically Optimal in General Environments |first1=Jan |last1=Leike |first2=Tor |last2=Lattimore |first3=Laurent |last3=Orseau |first4=Marcus |last4=Hutter |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2016 || {{dts|May 27}}{{snd}}{{dts|June 17}} || Workshop || The Colloquium Series on Robust and Beneficial AI (CSRBAI), co-hosted by the {{w|Machine Intelligence Research Institute}} and FHI, takes place. The program brings "together a variety of academics and professionals to address the technical challenges associated with AI robustness and reliability, with a goal of facilitating conversations between people interested in a number of different approaches." At the program, Jan Leike and Stuart Armstrong of FHI each give a talk.<ref>{{cite web |url=https://intelligence.org/colloquium-series/ |title=Colloquium Series on Robust and Beneficial AI - Machine Intelligence Research Institute |publisher=[[wikipedia:Machine Intelligence Research Institute|Machine Intelligence Research Institute]] |accessdate=March 13, 2018}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/colloquium-series-on-robust-and-beneficial-ai/ |author=Future of Humanity Institute - FHI |title=Colloquium Series on Robust and Beneficial AI - Future of Humanity Institute |publisher=Future of Humanity Institute |date=August 5, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || {{dts|June}} (approximate) || Staff || FHI recruits {{W|William MacAskill}} and Hilary Greaves to start a new "Programme on the Philosophical Foundations of Effective Altruism" as a collaboration between FHI and the Centre for Effective Altruism.<ref name="newsletter-summer-2016" /> (It seems like this became the Global Priorities Institute, which is not to be confused with the Global Priorities Project.)
|-
| 2016 || {{dts|June}} || Publication || ''[[wikipedia:The Age of Em|The Age of Em: Work, Love and Life When Robots Rule the Earth]]'', a book about the implications of whole brain emulation by FHI research associate {{W|Robin Hanson}}, is published.<ref>{{cite web |url=http://ageofem.com/ |title=The Age of Em, A Book |accessdate=March 13, 2018}}</ref> In October, FHI and Hanson would organize a workshop about the book.<ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/robin-hanson-and-fhi-hold-seminar-and-public-talk-on-the-age-of-em/ |author=Future of Humanity Institute - FHI |title=Robin Hanson and FHI hold seminar and public talk on "The age of em" - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 25, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || {{dts|September 16}} || Publication || Jan Leike's paper "Exploration Potential" is first uploaded to the arXiv.<ref>{{cite web |url=https://arxiv.org/abs/1609.04994 |title=[1609.04994] Exploration Potential |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /><ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/new-paper-exploration-potential/ |author=Future of Humanity Institute - FHI |title=Exploration potential - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 5, 2016 |accessdate=March 16, 2018}}</ref>
|-
| 2016 || {{dts|September 22}} || Collaboration || FHI's page on its collaboration with Google DeepMind is published. However, it is unclear when the actual collaboration began.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/deepmind-collaboration/ |author=Future of Humanity Institute - FHI |title=DeepMind collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|November}} || Workshop || The biotech horizon scanning workshop, co-hosted by the {{W|Centre for the Study of Existential Risk}} and FHI, takes place. The workshop and the overall "biological engineering horizon scanning" process is intended to lead up to "a peer-reviewed publication highlighting 15–20 developments of greatest likely impact."<ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/biotech-horizon-scanning-workshop/ |author=Future of Humanity Institute - FHI |title=Biotech horizon scanning workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=December 12, 2016 |accessdate=March 13, 2018}}</ref>
|-
| 2017 || {{dts|September 29}} || Financial || Effective Altruism Grants fall 2017 recipients are announced. One of the recipients is Gregory Lewis, who would use the grant for "Research into biological risk mitigation with the Future of Humanity Institute." The grant amount for Lewis is £15,000 (about $20,000).<ref>{{cite web |url=https://docs.google.com/spreadsheets/d/1iBy–zMyIiTgybYRUQZIm11WKGQZcixaCmIaysRmGvk/edit#gid=0 |title=EA Grants Fall 2017 Recipients |publisher=Google Docs |accessdate=March 11, 2018}}</ref>
|-
| 2017 || {{dts|October}}–December || Project || FHI launches its Governance of AI Program, co-directed by Nick Bostrom and Allan Dafoe.<ref name="newsletter-winter-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-winter-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Winter 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2018 |accessdate=March 14, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at FHI, {{W|Centre for the Study of Existential Risk}}, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
