Timeline of OpenAI

Full timeline
| 2016 || {{dts|June 16}} || Generative models || Publication || OpenAI publishes post describing four projects on generative models, a branch of {{w|unsupervised learning}} techniques in machine learning.<ref>{{cite web |title=Generative Models |url=https://openai.com/blog/generative-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|June 21}} || || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}. The paper explores practical problems in machine learning systems.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would receive a shoutout from the Open Philanthropy Project.<ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref> It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
|-
| 2016 || {{Dts|July}} || || Team || Dario Amodei joins OpenAI, working as Team Lead for AI Safety.<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref><ref name="Dario Amodeiy"/><ref name="orgwatch.issarice.com"/>
|-
| 2020 || {{dts|February 5}} || Safety || Publication || Beth Barnes and Paul Christiano publish ''Writeup: Progress on AI Safety via Debate'' on <code>lesswrong.com</code>, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.<ref>{{cite web |title=Writeup: Progress on AI Safety via Debate |url=https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Things_we_did_in_Q3 |website=lesswrong.com |accessdate=16 May 2020}}</ref>
|-
| 2020 || {{dts|February 17}} || || Coverage || AI reporter Karen Hao at ''MIT Technology Review'' publishes an article on OpenAI titled ''The messy, secretive reality behind OpenAI’s bid to save the world'', which suggests the company is abandoning its stated commitment to transparency in order to outpace competitors. In response, {{w|Elon Musk}} criticizes OpenAI, saying it lacks transparency.<ref name="Aaron">{{cite web |last1=Holmes |first1=Aaron |title=Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high' |url=https://www.businessinsider.com/elon-musk-criticizes-OpenAI-dario-amodei-artificial-intelligence-safety-2020-2 |website=businessinsider.com |accessdate=29 February 2020}}</ref> On his {{w|Twitter}} account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.<ref>{{cite web |title=Elon Musk |url=https://twitter.com/elonmusk/status/1229546206948462597 |website=twitter.com |accessdate=29 February 2020}}</ref>
|-
|}