Timeline of Future of Humanity Institute

{| class="wikitable"
! Time period !! Development summary !! More details
|-
| Before 2005 || Pre-FHI days || This is the period leading up to FHI's existence. {{W|Nick Bostrom}}, who would become FHI's first (and so far only) director, is born and completes his education. Also happening in this period are the development of transhumanism, the creation of various transhumanism-related mailing lists, and Bostrom's development of his early ideas.
|-
| 2005–2011 || FHI is founded; early days || FHI is established and begins its research. Compared to later periods, this period seems to have a greater focus on the ethics of enhancement. FHI publishes three Annual/Achievement Reports during this period.
|-
| 2011–2015 || More development and publication of ''Superintelligence'' || More focus on existential risks, in particular risks from advanced artificial intelligence. FHI does not seem to publish any Annual/Achievement Reports during this period, so it is somewhat difficult to tell what FHI considers its greatest accomplishments during this period (other than the publication of ''Superintelligence'').
|-
| 2015–present || More development || Collaboration with DeepMind, launch of the Strategic AI Research Center and the Governance of AI Program.
|}
* For "Staff", the intention is to include all Research Fellows and leadership positions (so far, Nick Bostrom has been the only director so not much to record here).
* For "Workshop" and "Conference", the intention is to include all events organized or hosted by FHI, but not events where FHI staff only attended or only helped with organizing.
* For "Internal review", the intention is to include all annual review documents.
* For "External review", the intention is to include all reviews that seem substantive (judged by intuition). For mainstream media articles, only ones that treat FHI/Bostrom at length are included.
* For "Financial", the intention is to include all substantial (say, over $10,000) donations, including aggregated donations and donations of unknown amounts.
* For "Nick Bostrom", the intention is to include events sufficient to give a rough overview of Bostrom's development prior to the founding of FHI.
* For "Social media", the intention is to include all Reddit AMAs and account creations (where the date is known).
* Events about FHI staff giving policy advice (to e.g. government bodies) are not included, as there are many such events and it is difficult to tell which ones are more important.
{| class="sortable wikitable"
! Year !! Month and date !! Event type !! Details
|-
| 2006 || {{dts|December}} || Staff || Rafaela Hillerbrand joins FHI as a Research Fellow for "work on epistemological and ethical problems for decisions under risk and uncertainty". She would remain at FHI until October 2008.<ref>{{cite web |url=https://www.linkedin.com/in/rafaela-hillerbrand-a1759b4/ |title=Rafaela Hillerbrand |publisher=LinkedIn |accessdate=March 15, 2018}}</ref>
|-
| 2006 || {{dts|December 17}} || External review || The initial version of the [[wikipedia:Future of Humanity Institute|Wikipedia page for FHI]] is created.<ref>{{cite web |url=https://en.wikipedia.org/w/index.php?title=Future_of_Humanity_Institute&dir=prev&action=history |title=Future of Humanity Institute: Revision history - Wikipedia |accessdate=March 14, 2018 |publisher=[[wikipedia:English Wikipedia|English Wikipedia]]}}</ref>
|-
| Late 2006 or early 2007 || || Staff || Nicholas Shackel joins FHI as a Research Fellow in Theoretical Ethics.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/staff.html |title=FHI Staff |accessdate=March 16, 2018 |archiveurl=https://web.archive.org/web/20070116112648/http://www.fhi.ox.ac.uk:80/staff.html |archivedate=January 16, 2007 |dead-url=yes}}</ref>
|-
| 2011 || {{dts|March 18}} || Publication || ''Enhancing Human Capacities'' is published.<ref>{{cite web |url=https://www.amazon.co.uk/Enhancing-Human-Capacities-Julian-Savulescu/dp/1405195819/ |title=Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books |accessdate=February 8, 2018}}</ref><ref>{{cite web |url=http://www.fhi.ox.ac.uk/selected_outputs/recent_books |title=Future of Humanity Institute - Books |accessdate=February 8, 2018 |archiveurl=https://web.archive.org/web/20130116012459/http://www.fhi.ox.ac.uk/selected_outputs/recent_books |archivedate=January 16, 2013 |dead-url=yes}}</ref>
|-
| 2011 || {{dts|June 9}} || External review || On a comment thread on ''{{W|LessWrong}}'', a discussion takes place regarding FHI's funding needs, productivity of marginal hires, dispersion of research topics (i.e. lack of focus on existential risks), and other topics related to funding FHI.<ref>{{cite web |url=http://lesswrong.com/lw/634/safety_culture_and_the_marginal_effect_of_a_dollar/4bnx |title=CarlShulman comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2011 || {{dts|September}} || Organization || The Oxford Martin Programme on the Impacts of Future Technology (FutureTech) launches.<ref>{{cite web |url=http://www.futuretech.ox.ac.uk/www.futuretech.ox.ac.uk/index.html |title=Welcome |publisher=Oxford Martin Programme on the Impacts of Future Technology |accessdate=July 26, 2017 |quote=The Oxford Martin Programme on the Impacts of Future Technology, launched in September 2011, is an interdisciplinary horizontal Programme within the Oxford Martin School in collaboration with the Faculty of Philosophy at Oxford University.}}</ref> The Programme is directed by Nick Bostrom and works closely with FHI, among other organizations.
|-
| 2011 || {{dts|September}} || Staff || Stuart Armstrong joins FHI as a Research Fellow.<ref>{{cite web |url=https://www.linkedin.com/in/stuart-armstrong-2447743/ |title=Stuart Armstrong |accessdate=March 15, 2018 |publisher=LinkedIn}}</ref>
|-
| 2011 || {{dts|September 25}} || External review || Kaj Sotala posts "SIAI vs. FHI achievements, 2008–2010" on ''{{W|LessWrong}}'', comparing the outputs of FHI and the {{W|Machine Intelligence Research Institute}} (which used to be called the Singularity Institute for Artificial Intelligence, abbreviated SIAI).<ref name="sotala-siai-vs-fhi">{{cite web |url=http://lesswrong.com/lw/7sc/siai_vs_fhi_achievements_20082010/ |title=SIAI vs. FHI achievements, 2008-2010 - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2012 || || Staff || Daniel Dewey joins FHI as a Research Fellow.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/Daniel-Dewey.pdf |title=Daniel-Dewey.pdf |accessdate=March 15, 2018}}</ref>
|-
| 2012 || {{dts|September 5}} || Social media || The FHI {{W|Twitter}} account, @FHIOxford, is registered.<ref>{{cite web |url=https://twitter.com/fhioxford?lang=en |title=Future of Humanity Institute (@FHIOxford) |publisher=Twitter |accessdate=March 11, 2018}}</ref>
|-
| 2012 || {{dts|November 16}} || External review || John Maxwell IV posts "Room for more funding at the Future of Humanity Institute" on ''{{W|LessWrong}}''.<ref>{{cite web |url=http://lesswrong.com/lw/faa/room_for_more_funding_at_the_future_of_humanity/ |title=Room for more funding at the Future of Humanity Institute - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2012 || {{dts|December 10}}–11 || Conference || FHI hosts the 2012 conference on Impacts and Risks of Artificial General Intelligence. This is one of two conferences in the Winter Intelligence Multi-Conference 2012, also hosted by FHI.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/archive_news |title=Future of Humanity Institute - News Archive |accessdate=March 11, 2018 |archiveurl=https://web.archive.org/web/20130112235735/http://www.fhi.ox.ac.uk/archive_news |archivedate=January 12, 2013 |dead-url=yes}}</ref><ref>{{cite web |url=http://www.winterintelligence.org/oxford2012/agi-impacts/ |title=AGI Impacts {{!}} Winter Intelligence Conferences |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20121030120754/http://www.winterintelligence.org/oxford2012/agi-impacts/ |archivedate=October 30, 2012 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|February}} || Publication || "Existential Risk Prevention as Global Priority" by Nick Bostrom is published in ''{{W|Global Policy}}''.<ref>{{cite web |url=http://www.existential-risk.org/concept.pdf |title=Existential Risk Prevention as Global Priority |first=Nick |last=Bostrom |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2013 || {{dts|February 25}} || External review || "Omens", subtitled "When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?", is published in the digital magazine ''[[wikipedia:Aeon (digital magazine)|Aeon]]''. The piece covers FHI, existential risk, Nick Bostrom, and some of his ideas.<ref>{{cite web |url=https://aeon.co/essays/will-humans-be-around-in-a-billion-years-or-a-trillion |title=Omens When we peer into the fog of the deep future what do we see – human extinction or a future among the stars? |author=Ross Andersen |publisher=Aeon |date=February 25, 2013 |accessdate=March 15, 2018}}</ref><ref>{{cite web |url=http://lesswrong.com/lw/gvb/link_wellwritten_article_on_the_future_of/ |title=[LINK] Well-written article on the Future of Humanity Institute and Existential Risk |date=March 2, 2013 |author=ESRogs |accessdate=March 15, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.fhi.ox.ac.uk/aeon-magazine-feature-omens/ |author=Future of Humanity Institute - FHI |title=Aeon Magazine Feature: "Omens" - Future of Humanity Institute |publisher=Future of Humanity Institute |date=February 25, 2013 |accessdate=March 16, 2018}}</ref>
|-
| 2013 || {{dts|March 12}} || Publication || "Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox" by Stuart Armstrong and Anders Sandberg is published.<ref>{{cite web |url=http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |title=Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox |first1=Stuart |last1=Armstrong |first2=Anders |last2=Sandberg |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140409031029/http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf |archivedate=April 9, 2014 |dead-url=yes}}</ref> This paper is a featured FHI publication in 2014.<ref>{{cite web |url=http://www.fhi.ox.ac.uk:80/research/publications/ |title=Publications {{!}} Future of Humanity Institute |accessdate=March 15, 2018 |archiveurl=https://web.archive.org/web/20140523110809/http://www.fhi.ox.ac.uk:80/research/publications/ |archivedate=May 23, 2014 |dead-url=yes}}</ref>
|-
| 2013 || {{dts|November}} || Workshop || FHI hosts a week-long math workshop led by the {{W|Machine Intelligence Research Institute}}.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/miri/ |author=Future of Humanity Institute - FHI |title=FHI Hosts Machine Intelligence Research Institute Maths Workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=November 26, 2013 |accessdate=March 16, 2018}}</ref>
|-
| 2013 || {{dts|December 27}} || External review || Chris Hallquist posts "Donating to MIRI vs. FHI vs. CEA vs. CFAR" on ''{{W|LessWrong}}'' about the relative merits of donating to the listed organizations. The discussion thread includes a comment from Seán Ó hÉigeartaigh about the funding needs of FHI.<ref>{{cite web |url=http://lesswrong.com/r/discussion/lw/je9/donating_to_miri_vs_fhi_vs_cea_vs_cfar/ |title=Donating to MIRI vs. FHI vs. CEA vs. CFAR - Less Wrong Discussion |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2014 || || || The Global Priorities Project (GPP) runs as a pilot project within the Centre for Effective Altruism. Team members of GPP include Owen Cotton-Barratt and Toby Ord of the Future of Humanity Institute.<ref>{{cite web |url=http://globalprioritiesproject.org/wp-content/uploads/2015/03/GPP-Strategy-Overview-February-2015.pdf |title=Global Priorities Project Strategy Overview |accessdate=March 10, 2018}}</ref> GPP would also eventually become a collaboration between Centre for Effective Altruism and FHI.<ref>{{cite web |url=http://globalprioritiesproject.org/ |publisher=The Global Priorities Project |title=HOME |accessdate=March 10, 2018}}</ref>
|-
| 2014 || {{dts|September 24}} || Social media || {{W|Nick Bostrom}} does an AMA ("ask me anything") on {{W|Reddit}}.<ref>{{cite web |url=https://www.reddit.com/r/science/comments/2hbp21/science_ama_series_im_nick_bostrom_director_of/ |publisher=reddit |title=Science AMA Series: I'm Nick Bostrom, Director of the Future of Humanity Institute, and author of "Superintelligence: Paths, Dangers, Strategies", AMA • r/science |accessdate=March 14, 2018}}</ref>
|-
| 2014 || {{dts|September 26}} || External review || Daniel Dewey (a Research Fellow at FHI at the time)<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Daniel+Dewey |date=March 1, 2018 |title=Daniel Dewey |publisher=AI Watch |accessdate=March 14, 2018}}</ref> posts "The Future of Humanity Institute could make use of your money" on ''{{W|LessWrong}}''. The post results in some discussion about donating to FHI in the comments section.
|-
| 2014 || {{dts|October 1}} || Financial || FHI posts a note of thanks to the Investling Group for a "recent financial contribution". The amount is not listed.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/investling/ |author=Future of Humanity Institute - FHI |title=Thanks - Future of Humanity Institute |publisher=Future of Humanity Institute |date=October 1, 2014 |accessdate=March 16, 2018}}</ref>
|-
| 2015 || {{dts|July 1}} || Financial || The Future of Life Institute's Grant Recommendations for its first round of AI safety grants are publicly announced. The grants would be disbursed on September 1.<ref>{{cite web |url=https://futureoflife.org/grants-timeline/ |title=Grants Timeline - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/2015selection/ |title=New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial: Press release for FLI grant awardees. - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref><ref>{{cite web |url=https://futureoflife.org/ai-safety-research/ |title=AI Safety Research - Future of Life Institute |publisher=Future of Life Institute |accessdate=July 13, 2017}}</ref> One of the grantees is {{W|Nick Bostrom}}, the director of FHI, who receives a grant of $1,500,000 for the creation of a new research center focused on AI safety.<ref>{{cite web |url=https://futureoflife.org/ai-researcher-nick-bostrom/ |title=AI Researcher Nick Bostrom - Future of Life Institute |publisher=Future of Life Institute |accessdate=March 14, 2018}}</ref> Another grantee is Owain Evans of FHI, who receives a grant of $227,212 for his project on inferring human values.<ref name="fli-grant-owain-evans">{{cite web |url=https://futureoflife.org/ai-researcher-owain-evans/ |title=AI Researcher Owain Evans - Future of Life Institute |publisher=Future of Life Institute |accessdate=March 14, 2018}}</ref>
|-
| 2015 || {{dts|July 30}} || External review || A post critiquing the lack of intuitive explanation of existential risks on the FHI website (among other places) is posted on ''{{W|LessWrong}}''.<ref>{{cite web |url=http://lesswrong.com/lw/mjy/help_build_a_landing_page_for_existential_risk/ |title=Help Build a Landing Page for Existential Risk? - Less Wrong |accessdate=March 14, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref>
|-
| 2015 || {{dts|September 1}} || Financial || FHI announces that Nick Bostrom has been awarded a €2 million (about $2,247,200 at the time)<ref>{{cite web |url=https://api.fixer.io/2015-09-01?base=EUR |title=Currency conversion from EUR on 2015-09-01 |publisher=Fixer.io |accessdate=March 16, 2018}}</ref> [[wikipedia:European Research Council#Grants offered|European Research Council Advanced Grant]].<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-awarded-prestigious-e2m-erc-grant/ |author=Future of Humanity Institute - FHI |title=FHI awarded prestigious €2m ERC Grant - Future of Humanity Institute |publisher=Future of Humanity Institute |date=September 25, 2015 |accessdate=March 16, 2018}}</ref>
|-
| 2015 || {{dts|October}} || Publication || "Moral Trade" by Toby Ord is published in the journal ''[[wikipedia:Ethics (journal)|Ethics]]''.<ref>{{cite web |url=http://www.amirrorclear.net/files/moral-trade.pdf |title=Moral Trade |first=Toby |last=Ord |journal=Ethics |year=2015 |accessdate=March 14, 2018}}</ref> This is a featured FHI publication.<ref name="selected-publications-archive" />
|-
| 2015 || {{dts|November 23}} || External review || A piece featuring Nick Bostrom and FHI is published in ''{{W|The New Yorker}}''.<ref>{{cite web |url=https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? |publisher=The New Yorker |first=Raffi |last=Khatchadourian |date=November 23, 2015 |accessdate=March 15, 2018}}</ref>
|-
| 2016 || || Publication || Stuart Armstrong's paper "Off-policy Monte Carlo agents with variable behaviour policies" is published.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/wp-content/uploads/monte_carlo_arXiv.pdf |title=Off-policy Monte Carlo agents with variable behaviour policies |first=Stuart |last=Armstrong |accessdate=March 10, 2018}}</ref><ref name="larks-december-2016-review" /> This is a featured FHI publication.<ref name="selected-publications-archive" />
|}
