Timeline of DeepMind

| 2016 || April || Controversy || ''{{w|New Scientist}}'' obtains a copy of a data-sharing agreement between DeepMind and the {{w|Royal Free London NHS Foundation Trust}}. The latter operates three London hospitals where an estimated 1.6 million patients are treated annually. The agreement shows DeepMind Health had access to data on admissions, discharges and transfers, accident and emergency, pathology and radiology, and critical care at these hospitals, including personal details such as whether patients had been diagnosed with [[w:HIV/AIDS|HIV]], suffered from [[w:major depressive disorder|depression]] or had ever undergone an {{w|abortion}}, in order to conduct research seeking better outcomes for various health conditions.<ref>{{cite news |url=https://www.newscientist.com/article/2086454-revealed-google-ai-has-access-to-huge-haul-of-nhs-patient-data |title=Revealed: Google AI has access to huge haul of NHS patient data |first=Hal |last=Hodson |work={{w|New Scientist}} |date=29 April 2016 |accessdate= 28 May 2019}}</ref><ref>{{cite news |url=https://www.newscientist.com/article/mg23030722-900-big-data-if-theres-nothing-to-hide-why-be-secretive/ |title=Leader: If Google has nothing to hide about NHS data, why so secretive? |work={{w|New Scientist}} |date=4 May 2016 |accessdate= 28 May 2019}}</ref>
|-
| 2016 || June 3 || {{w|AI}} development || DeepMind develops a ‘big red button’ to stop AIs from causing harm, using a framework for "safely interruptible" artificial intelligence. The framework guarantees that a machine will not learn to resist attempts by humans to intervene in its learning processes.<ref>{{cite web |last1=Byrne |first1=Michael |title=Google DeepMind Researchers Develop AI Kill Switch |url=https://www.vice.com/en_us/article/bmv7x5/google-researchers-have-come-up-with-an-ai-kill-switch |website=vice.com |accessdate=30 May 2019}}</ref><ref>{{cite web |last1=Li |first1=Abner |title=Google DeepMind has developed a ‘big red button’ to stop AIs from causing harm |url=https://9to5google.com/2016/06/03/google-deepmind-big-red-button-ai/ |website=9to5google.com |accessdate=30 May 2019}}</ref><ref>{{cite web |last1=Shead |first1=Sam |title=Google has developed a 'big red button' that can be used to interrupt artificial intelligence and stop it from causing harm |url=https://www.businessinsider.com/google-deepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6 |website=businessinsider.com |accessdate=30 May 2019}}</ref> The framework is described in a paper by Laurent Orseau from DeepMind and Stuart Armstrong from the {{w|Future of Humanity Institute}}.<ref>{{cite web |title=Safely Interruptible Agents |url=https://intelligence.org/files/Interruptibility.pdf |website=intelligence.org |accessdate=11 June 2019}}</ref>
|-
| 2016 || July || {{w|AI}} development || DeepMind announces the ability to cut Google's data centers' energy consumption by 15%, using a {{w|machine learning}} algorithm.<ref>{{cite web |last1=Burgess |first1=Matt |title=Google's DeepMind trains AI to cut its energy bills by 40% |url=https://www.wired.co.uk/article/google-deepmind-data-centres-efficiency |website=wired.co.uk |accessdate=29 May 2019}}</ref><ref>{{cite web |last1=Wakefield |first1=Jane |title=Google uses AI to save on electricity from data centres |url=https://www.bbc.com/news/technology-36845978 |website=bbc.com |accessdate=29 May 2019}}</ref>