Timeline of OpenAI

* What are some significant partnerships involving OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
** You will see collaborations with organizations like {{w|DeepMind}} and {{w|Microsoft}}.
* What are some significant donations granted to OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
** You will see names like the {{w|Open Philanthropy Project}} and {{w|Nvidia}}, among others.
| 2016 || {{dts|June 16}} || Generative models || Publication || OpenAI publishes a post describing four projects on generative models, a branch of {{w|unsupervised learning}} techniques in machine learning.<ref>{{cite web |title=Generative Models |url=https://openai.com/blog/generative-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|June 21}} || || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}. The paper explores practical problems in machine learning systems.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would receive a shoutout from the Open Philanthropy Project.<ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref> It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
|-
| 2016 || {{dts|July}} || || Team || Dario Amodei joins OpenAI, working as the Team Lead for AI Safety.<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref><ref name="Dario Amodeiy"/><ref name="orgwatch.issarice.com"/>
|-
| 2019 || {{dts|December 3}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Procgen Benchmark, a set of 16 simple-to-use procedurally generated environments (CoinRun, StarPilot, CaveFlyer, Dodgeball, FruitBot, Chaser, Miner, Jumper, Leaper, Maze, BigFish, Heist, Climber, Plunder, Ninja, and BossFight) which provide a direct measure of how quickly a {{w|reinforcement learning}} agent learns generalizable skills. Because new levels are generated procedurally, the benchmark helps prevent AI models from overfitting to a fixed set of environments (a minimal usage sketch appears below the table).<ref>{{cite web |title=Procgen Benchmark |url=https://openai.com/blog/procgen-benchmark/ |website=openai.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=OpenAI’s Procgen Benchmark prevents AI model overfitting |url=https://venturebeat.com/2019/12/03/openais-procgen-benchmark-overfitting/ |website=venturebeat.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=GENERALIZATION IN REINFORCEMENT LEARNING – EXPLORATION VS EXPLOITATION |url=https://analyticsindiamag.com/generalization-in-reinforcement-learning-exploration-vs-exploitation/ |website=analyticsindiamag.com |accessdate=2 March 2020}}</ref>
|-
| 2019 || {{dts|December 4}} || || Publication || "Deep Double Descent: Where Bigger Models and More Data Hurt" is submitted to the {{w|arXiv}}. The paper shows that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as the model size increases, performance first gets worse and then gets better (an illustrative sketch of the phenomenon appears below the table).<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Bansal |first3=Yamini |last4=Yang |first4=Tristan |last5=Barak |first5=Boaz |last6=Sutskever |first6=Ilya |title=Deep Double Descent: Where Bigger Models and More Data Hurt |website=arxiv.org |url=https://arxiv.org/abs/1912.02292|accessdate=5 April 2020}}</ref> The paper is summarized on the OpenAI blog.<ref>{{cite web|url = https://openai.com/blog/deep-double-descent/|title = Deep Double Descent|publisher = OpenAI|date = December 5, 2019|accessdate = May 23, 2020}}</ref> MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,<ref>{{cite web|url = https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent|title = Understanding “Deep Double Descent”|date = December 5, 2019|accessdate = 24 May 2020|publisher = LessWrong|last = Hubinger|first = Evan}}</ref> and follows up with a post on the AI safety implications.<ref>{{cite web|url = https://www.lesswrong.com/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around|title = Inductive biases stick around|date = December 18, 2019|accessdate = 24 May 2020|last = Hubinger|first = Evan}}</ref>
|-
| 2019 || {{dts|December}} || || Team || Dario Amodei is promoted to Vice President of Research at OpenAI.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2020 || {{dts|February 5}} || Safety || Publication || Beth Barnes and Paul Christiano publish ''Writeup: Progress on AI Safety via Debate'' on <code>lesswrong.com</code>, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.<ref>{{cite web |title=Writeup: Progress on AI Safety via Debate |url=https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Things_we_did_in_Q3 |website=lesswrong.com |accessdate=16 May 2020}}</ref>
|-
| 2020 || {{dts|February 17}} || || Coverage || AI reporter Karen Hao of ''MIT Technology Review'' publishes a review of OpenAI titled ''The messy, secretive reality behind OpenAI’s bid to save the world'', which suggests the company is abandoning its declared commitment to transparency in order to outpace competitors. In response, {{w|Elon Musk}} criticizes OpenAI, saying it lacks transparency.<ref name="Aaron">{{cite web |last1=Holmes |first1=Aaron |title=Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high' |url=https://www.businessinsider.com/elon-musk-criticizes-OpenAI-dario-amodei-artificial-intelligence-safety-2020-2 |website=businessinsider.com |accessdate=29 February 2020}}</ref> On his {{w|Twitter}} account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.<ref>{{cite web |title=Elon Musk |url=https://twitter.com/elonmusk/status/1229546206948462597 |website=twitter.com |accessdate=29 February 2020}}</ref>
|-
|}
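===Illustrative code sketches===
The entry for December 3, 2019 describes the Procgen Benchmark release. The sketch below shows how one of its environments might be instantiated and stepped; it assumes the <code>procgen</code> package is installed (<code>pip install procgen</code>) and uses the classic pre-0.26 Gym step API, following the parameter names in the package's documentation. It is a minimal illustration, not the benchmark's official training setup.
<syntaxhighlight lang="python">
# Minimal sketch: stepping a Procgen environment with random actions.
# Assumes `pip install procgen` and the classic (pre-0.26) Gym API.
import gym

# num_levels=0 requests unlimited procedurally generated levels, which is
# what lets the benchmark measure generalization rather than memorization
# of a fixed level set.
env = gym.make("procgen:procgen-coinrun-v0",
               num_levels=0, start_level=0, distribution_mode="easy")

obs = env.reset()
done = False
episode_return = 0.0
while not done:
    # A real agent would sample actions from a learned policy; random
    # actions are enough to illustrate the interaction loop.
    obs, reward, done, info = env.step(env.action_space.sample())
    episode_return += reward
print("episode return:", episode_return)
</syntaxhighlight>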
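The double-descent phenomenon described in the December 4, 2019 entry can be reproduced in miniature with random-feature regression. The sketch below is an illustrative construction, not the paper's deep-learning experiments: it uses NumPy's minimum-norm least squares so that, once the number of random features exceeds the number of training points, test error can fall again after peaking near the interpolation threshold. The task, feature counts, and noise level are arbitrary choices, and the peak may only show up cleanly when averaged over several random seeds.
<syntaxhighlight lang="python">
# Illustrative sketch of double descent with random ReLU features.
# Test error typically rises as the feature count approaches the number
# of training points, then falls again past the interpolation threshold.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: noisy samples of a sine wave.
n_train = 20
x_train = rng.uniform(-1.0, 1.0, n_train)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(n_train)
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(np.pi * x_test)

def relu_features(x, w, b):
    # Random ReLU features: phi_j(x) = max(0, w_j * x + b_j).
    return np.maximum(0.0, np.outer(x, w) + b)

for n_features in [5, 10, 15, 20, 25, 40, 100, 400]:
    w = rng.standard_normal(n_features)
    b = rng.standard_normal(n_features)
    phi_train = relu_features(x_train, w, b)
    phi_test = relu_features(x_test, w, b)
    # lstsq returns the minimum-norm solution when n_features > n_train,
    # i.e. in the over-parameterized regime where the second descent occurs.
    theta, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)
    test_mse = np.mean((phi_test @ theta - y_test) ** 2)
    print(f"{n_features:4d} features: test MSE = {test_mse:.3f}")
</syntaxhighlight>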
===How the timeline was built===
The initial version of the timeline was written by [[User:Issa|Issa Rice]]. It has been expanded considerably by [[User:Sebastian|Sebastian]].
{{funding info}} is available.