Timeline of Future of Humanity Institute
This is a timeline of the Future of Humanity Institute (FHI).
Big picture
| Time period | Development summary | More details |
|---|---|---|
Full timeline
| Year | Month and date | Event type | Details |
|---|---|---|---|
| 2005 | June 1 or November 29 | | The Future of Humanity Institute is established.[1][2][3] |
| 2006 | March 2 | | The ENHANCE project website is created[4] by Anders Sandberg.[5] |
| 2006 | November 20 | | Robin Hanson starts Overcoming Bias.[6] The first post on the blog seems to be from November 20.[7] On one of the earliest snapshots of the blog, the listed contributors are: Nick Bostrom, Eliezer Yudkowsky, Robin Hanson, Eric Schliesser, Hal Finney, Nicholas Shackel, Mike Huemer, Guy Kahane, Rebecca Roache, Eric Zitzewitz, Peter McCluskey, Justin Wolfers, Erik Angner, David Pennock, Paul Gowder, Chris Hibbert, David Balan, Patri Friedman, Lee Corbin, Anders Sandberg, and Carl Shulman.[8] The blog seems to have received support from FHI in the beginning.[9][5] |
| 2005–2007 | | | Lighthill Risk Network is created by Peter Taylor of FHI.[5] |
| 2007 | May | | The Whole Brain Emulation Workshop is hosted by FHI.[5] |
| 2007 | August 24 | | Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker is published.[10][11] |
| 2008 | | | Practical Ethics, a blog about ethics by FHI's Program on Ethics of the New Biosciences and the Uehiro Centre for Practical Ethics, launches.[12] |
| 2008 | September 15 | Publication | Global Catastrophic Risks is published.[13][11] |
| 2009 | January 22 | | Human Enhancement is published.[14][11] |
| 2010 | June 21 | | Anthropic Bias by Nick Bostrom is published. The book covers the topic of reasoning under observation selection effects.[15][11] |
| 2011 | March 18 | | Enhancing Human Capacities is published.[16][17] |
| 2014 | July–September | Influence | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies is published. While Bostrom has never worked for MIRI, he is a research advisor to MIRI, and MIRI contributed substantially to the publication of the book.[18] |
| 2015 | | | The Strategic AI Research Center starts sometime after this period.[19] |
| 2018 | February 20 | Publication | The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations on how to mitigate these risks. The report is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.[20][21][22] |
Meta information on the timeline
How the timeline was built
The initial version of the timeline was written by Issa Rice.
Funding information for this timeline is available.
What the timeline is still missing
Timeline update strategy
See also
External links
- Future of Humanity Institute (Wikipedia)
- Future of Humanity Institute (LessWrong Wiki)
References
- ↑ "About | Future of Humanity Institute | Programmes". Oxford Martin School. Retrieved February 7, 2018.
- ↑ "Future of Humanity Institute". Archived from the original on October 13, 2005. Retrieved February 7, 2018.
- ↑ "Wayback Machine" (PDF). Archived from the original (PDF) on May 12, 2006. Retrieved February 7, 2018.
- ↑ Anders Sandberg. "ENHANCE Project Site". Archived from the original on April 6, 2006. Retrieved February 7, 2018.
- ↑ 5.0 5.1 5.2 5.3 "Wayback Machine" (PDF). Archived from the original (PDF) on January 17, 2009. Retrieved February 7, 2018.
- ↑ "Overcoming Bias : Bio". Retrieved June 1, 2017.
- ↑ "Overcoming Bias: How To Join". Retrieved September 26, 2017.
- ↑ "Overcoming Bias". Retrieved September 26, 2017.
- ↑ "FHI Updates". Archived from the original on July 5, 2007. Retrieved February 7, 2018.
- ↑ "Wittgenstein and His Interpreters: Essays in Memory of Gordon Baker: Amazon.co.uk: Guy Kahane, Edward Kanterian, Oskari Kuusela: 9781405129220: Books". Retrieved February 8, 2018.
- ↑ 11.0 11.1 11.2 11.3 "Future of Humanity Institute - Books". Archived from the original on November 3, 2010. Retrieved February 8, 2018.
- ↑ "Future of Humanity Institute Updates". Archived from the original on September 15, 2008. Retrieved February 7, 2018.
- ↑ "Global Catastrophic Risks: Nick Bostrom, Milan M. Ćirković: 9780198570509: Amazon.com: Books". Retrieved February 8, 2018.
- ↑ "Human Enhancement: Amazon.co.uk: Julian Savulescu, Nick Bostrom: 9780199299720: Books". Retrieved February 8, 2018.
- ↑ "Anthropic Bias (Studies in Philosophy): Amazon.co.uk: Nick Bostrom: 9780415883948: Books". Retrieved February 8, 2018.
- ↑ "Enhancing Human Capacities: Amazon.co.uk: Julian Savulescu, Ruud ter Meulen, Guy Kahane: 9781405195812: Books". Retrieved February 8, 2018.
- ↑ "Future of Humanity Institute - Books". Archived from the original on January 16, 2013. Retrieved February 8, 2018.
- ↑ "Carl_Shulman comments on My Cause Selection: Michael Dickens". Effective Altruism Forum. September 17, 2015. Retrieved July 6, 2017.
- ↑ "Opinion | Q&A: Philosopher Nick Bostrom on superintelligence, human enhancement and existential risk". The Washington Post. Retrieved February 8, 2018.
- ↑ "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018.
- ↑ "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018.
- ↑ Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018.