Timeline of Future of Humanity Institute

| 2016 || {{dts|September 22}} || || FHI's page on its collaboration with Google DeepMind is published. However, it is unclear when the collaboration itself began.<ref>{{cite web |url=https://www.fhi.ox.ac.uk/deepmind-collaboration/ |author=Future of Humanity Institute - FHI |title=DeepMind collaboration - Future of Humanity Institute |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|November}} || Workshop || The biotech horizon scanning workshop, co-hosted by the {{W|Centre for the Study of Existential Risk}} and FHI, takes place. The workshop and the overall "biological engineering horizon scanning" process are intended to lead up to "a peer-reviewed publication highlighting 15–20 developments of greatest likely impact."<ref name="annual-review-2016" /><ref>{{cite web |url=https://www.fhi.ox.ac.uk/biotech-horizon-scanning-workshop/ |author=Future of Humanity Institute - FHI |title=Biotech horizon scanning workshop - Future of Humanity Institute |publisher=Future of Humanity Institute |date=December 12, 2016 |accessdate=March 13, 2018}}</ref>
|-
| 2016 || {{dts|December}} || Workshop || FHI hosts a workshop on "AI Safety and Blockchain". Attendees include Nick Bostrom, Vitalik Buterin, {{W|Jaan Tallinn}}, {{W|Wei Dai}}, Gwern Branwen, and Allan Dafoe. "The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination."<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-holds-workshop-on-ai-safety-and-blockchain/ |author=Future of Humanity Institute - FHI |title=FHI holds workshop on AI safety and blockchain - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2017 |accessdate=March 13, 2018}}</ref><ref name="annual-review-2016" /> It is unclear whether any output resulted from this workshop.
|-
| 2017 || {{dts|February 10}} || Workshop || FHI hosts a workshop on normative uncertainty (i.e. uncertainty regarding moral frameworks).<ref>{{cite web |url=https://www.fhi.ox.ac.uk/fhi-normative-uncertainty-workshop/ |author=Future of Humanity Institute - FHI |title=Workshop on Normative Uncertainty |publisher=Future of Humanity Institute |date=March 8, 2017 |accessdate=March 16, 2018}}</ref>
|-
| 2017 || {{dts|February 19}}–20 || Workshop || FHI hosts a workshop on potential risks from malicious use of artificial intelligence. The workshop is organized by FHI, the {{W|Centre for the Study of Existential Risk}}, and the [[wikipedia:Leverhulme Centre for the Future of Intelligence|Centre for the Future of Intelligence]].<ref>{{cite web |url=https://www.fhi.ox.ac.uk/bad-actors-and-artificial-intelligence-workshop/ |author=Future of Humanity Institute - FHI |title=Bad Actors and AI Workshop |publisher=Future of Humanity Institute |date=November 4, 2017 |accessdate=March 16, 2018}}</ref>
|-
| 2017 || {{dts|March}} || Financial || The {{W|Open Philanthropy Project}} recommends (to the Open Philanthropy Project fund, Good Ventures, or some other entity)<ref name="open-phil-guide-grant-seekers" /><ref name="vipul-comment" /> a grant of $1,995,425 to FHI for general support.<ref name="open-phil-grant-march-2017">{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-humanity-institute-general-support |publisher=Open Philanthropy Project |title=Future of Humanity Institute — General Support |date=December 15, 2017 |accessdate=March 10, 2018}}</ref>
|-
| 2017 || {{dts|October}}–December || || FHI launches its Governance of AI Program, co-directed by Nick Bostrom and Allan Dafoe.<ref name="newsletter-winter-2017">{{cite web |url=https://www.fhi.ox.ac.uk/quarterly-update-winter-2017/ |author=Future of Humanity Institute - FHI |title=Quarterly Update Winter 2017 - Future of Humanity Institute |publisher=Future of Humanity Institute |date=January 19, 2018 |accessdate=March 14, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, {{W|Centre for the Study of Existential Risk}}, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.openai.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref>
|}
==External links==
* [https://www.fhi.ox.ac.uk/ Official website]
* {{W|Future of Humanity Institute}} (Wikipedia)
* [https://wiki.lesswrong.com/wiki/Future_of_Humanity_Institute LessWrong Wiki page on FHI]. The LessWrong Wiki is the wiki associated with the group blog ''{{W|LessWrong}}''. Its pages are written with a rationalist/effective altruist audience in mind and are often more useful than the corresponding Wikipedia page on a topic.
* [https://donations.vipulnaik.com/donee.php?donee=Future+of+Humanity+Institute Donations List Website (donee)]. The Donations List Website is a website by Vipul Naik that collects data on many donations in the effective altruist and rationality spheres. This is the donee page for FHI, listing donations made to FHI.
* [https://aiwatch.issarice.com/?organization=Future+of+Humanity+Institute AI Watch]. AI Watch is a website by Issa Rice that tracks people and organizations in AI safety. This is the organization page for FHI, showing basic information as well as a list of AI safety-related positions at FHI.
==References==
{{Reflist|30em}}
