Timeline of OpenAI

| 2015 || {{dts|December}} || Coverage || The article "{{w|OpenAI}}" is created on {{w|Wikipedia}}.<ref>{{cite web |title=OpenAI: Revision history |url=https://en.wikipedia.org/w/index.php?title=OpenAI&dir=prev&action=history |website=wikipedia.org |accessdate=6 April 2020}}</ref>
|-
| 2015 || {{dts|December}} || Staff Team || OpenAI announces {{w|Y Combinator}} founding partner {{w|Jessica Livingston}} as one of its financial backers.<ref>{{cite web |url=https://www.forbes.com/sites/theopriestley/2015/12/11/elon-musk-and-peter-thiel-launch-OpenAI-a-non-profit-artificial-intelligence-research-company/ |title=Elon Musk And Peter Thiel Launch OpenAI, A Non-Profit Artificial Intelligence Research Company |first1=Theo |last1=Priestly |date=December 11, 2015 |publisher=''{{w|Forbes}}'' |access-date=8 July 2019 }}</ref>
|-
| 2016 || {{dts|January}} || Staff Team || {{W|Ilya Sutskever}} joins OpenAI as Research Director.<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Ilya+Sutskever |date=April 8, 2018 |title=Ilya Sutskever |publisher=AI Watch |accessdate=May 6, 2018}}</ref>
|-
| 2016 || {{dts|January 9}} || Education || The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.<ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_OpenAI_research_team/ |publisher=reddit |title=AMA: the OpenAI Research Team • r/MachineLearning |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|February 25}} || Publication || "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", a paper on optimization, is first submitted to the {{w|ArXiv}}. The paper presents weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction (see the illustrative sketch below the timeline).<ref>{{cite web |last1=Salimans |first1=Tim |last2=Kingma |first2=Diederik P. |title=Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks |url=https://arxiv.org/abs/1602.07868 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|March 31}} || Staff Team || A blog post from this day announces that {{W|Ian Goodfellow}} has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/team-plus-plus/ |publisher=OpenAI Blog |title=Team++ |date=March 22, 2017 |first=Greg |last=Brockman |accessdate=May 6, 2018}}</ref>
|-
| 2016 || {{Dts|April 26}} || Staff Team || A blog post from this day announces that Pieter Abbeel has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/welcome-pieter-and-shivon/ |publisher=OpenAI Blog |title=Welcome, Pieter and Shivon! |date=March 20, 2017 |first=Ilya |last=Sutskever |accessdate=May 6, 2018}}</ref>
|-
| 2016 || {{dts|April}} || Staff Team || Shivon Zilis joins OpenAI as Advisor.<ref>{{cite web |title=Shivon Zilis |url=https://www.linkedin.com/in/shivonzilis/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2016 || {{dts|April 27}} || Software release || The public beta of OpenAI Gym, an open source toolkit that provides environments to test AI bots, is released (see the usage sketch below the timeline).<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-gym-beta/ |publisher=OpenAI Blog |title=OpenAI Gym Beta |date=March 20, 2017 |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/04/OpenAI-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ |title=Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free |date=April 27, 2016 |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning"}}</ref><ref>{{cite web |url=http://www.businessinsider.com/OpenAI-has-launched-a-gym-where-developers-can-train-their-computers-2016-4?op=1 |first=Sam |last=Shead |date=April 28, 2016 |title=Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|June 21}} || Publication || "Concrete Problems in AI Safety" is submitted to the {{w|arXiv}}. The paper explores practical problems in machine learning systems.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref>
|-
| 2016 || {{Dts|July}} || Staff Team || Dario Amodei joins OpenAI<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref>, where he serves as Team Lead for AI Safety.<ref name="Dario Amodeiy"/>
|-
| 2016 || {{dts|July 8}} || Publication || "Adversarial Examples in the Physical World" is published. One of the authors is {{W|Ian Goodfellow}}, who is at OpenAI at the time.<ref>{{cite web |url=https://www.wired.com/2016/07/fool-ai-seeing-something-isnt/ |title=How To Fool AI Into Seeing Something That Isn't There |publisher=[[wikipedia:WIRED|WIRED]] |date=July 29, 2016 |first=Cade |last=Metz |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|August 29}} || Publication || "Infrastructure for Deep Learning" is published. The post describes how deep learning research typically proceeds, outlines the infrastructure choices OpenAI made to support it, and announces the open-sourcing of kubernetes-ec2-autoscaler, a batch-optimized scaling manager for {{w|Kubernetes}}.<ref>{{cite web |title=Infrastructure for Deep Learning |url=https://openai.com/blog/infrastructure-for-deep-learning/ |website=openai.com |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|September}} || Staff Team || Alexander Ray joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Alexander Ray |url=https://www.linkedin.com/in/machinaut/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2016 || {{dts|October 11}} || Publication || "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shah |first2=Zain |last3=Mordatch |first3=Igor |last4=Schneider |first4=Jonas |last5=Blackwell |first5=Trevor |last6=Tobin |first6=Joshua |last7=Abbeel |first7=Pieter |last8=Zaremba |first8=Wojciech |title=Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model |url=https://arxiv.org/abs/1610.03518 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October 18}} || Publication || "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the {{w|ArXiv}}. It shows an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE).<ref>{{cite web |last1=Papernot |first1=Nicolas |last2=Abadi |first2=Martín |last3=Erlingsson |first3=Úlfar |last4=Goodfellow |first4=Ian |last5=Talwar |first5=Kunal |title=Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |url=https://arxiv.org/abs/1610.05755 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October}} || Staff Team || Jack Clark joins OpenAI.<ref>{{cite web |title=Jack Clark |url=https://www.linkedin.com/in/jack-clark-5a320317/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2016 || {{dts|October}} || Staff Team || OpenAI Research Scientist Harri Edwards joins the organization.<ref>{{cite web |title=Harri Edwards |url=https://www.linkedin.com/in/harri-edwards-7b199375/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2016 || {{dts|November 2}} || Publication || "Extensions and Limitations of the Neural GPU" is first submitted to the {{w|ArXiv}}. The paper shows that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size.<ref>{{cite web |last1=Price |first1=Eric |last2=Zaremba |first2=Wojciech |last3=Sutskever |first3=Ilya |title=Extensions and Limitations of the Neural GPU |url=https://arxiv.org/abs/1611.00736 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|December 21}} || Publication || "Faulty Reward Functions in the Wild" is published. The post examines a case in which a misspecified reward function leads a {{w|reinforcement learning}} agent to score highly without accomplishing the intended task.<ref>{{cite web |title=Faulty Reward Functions in the Wild |url=https://openai.com/blog/faulty-reward-functions/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || ? || Staff Team || Tom Brown joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Tom Brown |url=https://www.linkedin.com/in/nottombrown/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2017 || {{dts|January}} || Staff Team || Paul Christiano joins OpenAI to work on AI alignment.<ref>{{cite web |url=https://paulfchristiano.com/ai/ |title=AI Alignment |date=May 13, 2017 |publisher=Paul Christiano |accessdate=May 6, 2018}}</ref> He was previously an intern at OpenAI in 2016.<ref>{{cite web |url=https://blog.OpenAI.com/team-update/ |publisher=OpenAI Blog |title=Team Update |date=March 22, 2017 |accessdate=May 6, 2018}}</ref>
|-
| 2017 || {{dts|January 19}} || Publication || "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications", a paper on generative models, is submitted to the {{w|ArXiv}}.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Karpathy |first2=Andrej |last3=Chen |first3=Xi |last4=Kingma |first4=Diederik P. |title=PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications |url=https://arxiv.org/abs/1701.05517 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2017 || {{dts|February 8}} || Publication || "Adversarial Attacks on Neural Network Policies" is submitted to the {{w|ArXiv}}. The paper shows that adversarial attacks are effective when targeting neural network policies in reinforcement learning.<ref>{{cite web |last1=Huang |first1=Sandy |last2=Papernot |first2=Nicolas |last3=Goodfellow |first3=Ian |last4=Duan |first4=Yan |last5=Abbeel |first5=Pieter |title=Adversarial Attacks on Neural Network Policies |url=https://arxiv.org/abs/1702.02284 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2017 || {{dts|February}} || Staff Team || OpenAI Research Scientist Prafulla Dhariwal joins the organization.<ref>{{cite web |title=Prafulla Dhariwal |url=https://www.linkedin.com/in/prafulladhariwal/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|February}} || Staff Team || OpenAI Researcher Jakub Pachocki joins the organization.<ref>{{cite web |title=Jakub Pachocki |url=https://www.linkedin.com/in/jakub-pachocki/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|March 6}} || Publication || "Third-Person Imitation Learning", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It presents a method for unsupervised third-person imitation learning.<ref>{{cite web |last1=Stadie |first1=Bradly C. |last2=Abbeel |first2=Pieter |last3=Sutskever |first3=Ilya |title=Third-Person Imitation Learning |url=https://arxiv.org/abs/1703.01703 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2017 || {{dts|March}} || Reorganization || Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to {{w|artificial general intelligence}}. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.<ref name="technologyreview.comñ">{{cite web |title=The messy, secretive reality behind OpenAI’s bid to save the world |url=https://www.technologyreview.com/s/615181/ai-OpenAI-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ |website=technologyreview.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|March}} || Staff Team || Christopher Berner joins OpenAI as Head of Infrastructure.<ref>{{cite web |title=Christopher Berner |url=https://www.linkedin.com/in/christopherbernerberkeley/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2017 || {{dts|April}} || Coverage || An article entitled "The People Behind OpenAI" is published on {{W|Red Hat}}'s ''Open Source Stories'' website, covering work at OpenAI.<ref>{{cite web |url=https://www.redhat.com/en/open-source-stories/ai-revolutionaries/people-behind-OpenAI |title=Open Source Stories: The People Behind OpenAI |accessdate=May 5, 2018 |first1=Brent |last1=Simoneaux |first2=Casey |last2=Stegman}} In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.</ref><ref>{{cite web |url=https://www.reddit.com/r/OpenAI/comments/63xr4p/profile_of_the_people_behind_OpenAI/ |publisher=reddit |title=Profile of the people behind OpenAI • r/OpenAI |date=April 7, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=14832524 |title=The People Behind OpenAI |website=Hacker News |accessdate=May 5, 2018 |date=July 23, 2017}}</ref>
|-
| 2017 || {{dts|April 6}} || Research progress || OpenAI unveils research revisiting an old field called “neuroevolution”, and a subset of its algorithms called “evolution strategies,” which are aimed at solving optimization problems. In one hour of training on an Atari challenge, the algorithm reaches a level of mastery that took a reinforcement-learning system published by DeepMind in 2016 a whole day to learn. On a simulated walking task, the system takes 10 minutes, compared to 10 hours for DeepMind's approach.<ref>{{cite web |title=OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s |url=https://singularityhub.com/2017/04/06/OpenAI-just-beat-the-hell-out-of-deepmind-with-an-algorithm-from-the-80s/ |website=singularityhub.com |accessdate=29 June 2019}}</ref>
|-
| 2017 || {{dts|April}} || Staff Team || Matthias Plappert joins OpenAI as Researcher.<ref>{{cite web |title=Matthias Plappert |url=https://www.linkedin.com/in/matthiasplappert/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|May 15}} || Software release || OpenAI releases Roboschool, an open-source software for robot simulation, integrated with OpenAI Gym.<ref>{{cite web |title=Roboschool |url=https://openai.com/blog/roboschool/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|May 24}} || Software release || OpenAI releases Baselines, a set of implementations of reinforcement learning algorithms.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-baselines-dqn/ |publisher=OpenAI Blog |title=OpenAI Baselines: DQN |date=November 28, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://github.com/OpenAI/baselines |publisher=GitHub |title=OpenAI/baselines |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|May}} || Staff Team || {{w|Kevin Frans}} joins OpenAI as Research Intern.<ref>{{cite web |title=Kevin Frans |url=https://www.linkedin.com/in/kevinfrans/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|June 7}} || Publication || "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" is submitted to the {{w|ArXiv}}. The paper explores deep {{w|reinforcement learning}} methods for multi-agent domains.<ref>{{cite web |title=Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments |url=https://arxiv.org/abs/1706.02275 |website=arxiv.org |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|June}} || Partnership || OpenAI partners with {{w|DeepMind}}’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The learning algorithm uses small amounts of human feedback to solve modern {{w|reinforcement learning}} environments.<ref>{{cite web |title=Learning from Human Preferences |url=https://OpenAI.com/blog/deep-reinforcement-learning-from-human-preferences/ |website=OpenAI.com |accessdate=29 June 2019}}</ref>
|-
| 2017 || {{dts|July}} || Staff Team || OpenAI Research Scientist Joshua Achiam joins the organization.<ref>{{cite web |title=Joshua Achiam |url=https://www.linkedin.com/in/joshua-achiam-13887199/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|July 27}} || Research progress || OpenAI announces having found that adding adaptive noise to the parameters of {{w|reinforcement learning}} algorithms frequently boosts performance.<ref>{{cite web |title=Better Exploration with Parameter Noise |url=https://openai.com/blog/better-exploration-with-parameter-noise/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{Dts|September 13}} || Publication || "Learning with Opponent-Learning Awareness" is first uploaded to the {{w|ArXiv}}. The paper presents Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in an environment.<ref>{{cite web |url=https://arxiv.org/abs/1709.04326 |title=[1709.04326] Learning with Opponent-Learning Awareness |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|September}} || Staff Team || OpenAI Research Scientist Bowen Baker joins the organization.<ref>{{cite web |title=Bowen Baker |url=https://www.linkedin.com/in/bowen-baker-59b48a65/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|October 11}} || Software release || RoboSumo, a game that simulates {{W|sumo wrestling}} for AI to learn to play, is released.<ref>{{cite web |url=https://www.wired.com/story/ai-sumo-wrestlers-could-make-future-robots-more-nimble/ |title=AI Sumo Wrestlers Could Make Future Robots More Nimble |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 3, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/elon-musk-OpenAI-virtual-robots-learn-sumo-wrestle-soccer-sports-ai-tech-science-2017-10 |first1=Alexandra |last1=Appolonia |first2=Justin |last2=Gmoser |date=October 20, 2017 |title=Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{dts|October 31}} || Publication || "Backpropagation through the Void: Optimizing control variates for black-box gradient estimation", a paper on {{w|reinforcement learning}}, is first submitted to the {{w|ArXiv}}. It introduces a general framework for learning low-variance, unbiased gradient estimators for black-box functions of random variables.<ref>{{cite web |last1=Grathwohl |first1=Will |last2=Choi |first2=Dami |last3=Wu |first3=Yuhuai |last4=Roeder |first4=Geoffrey |last5=Duvenaud |first5=David |title=Backpropagation through the Void: Optimizing control variates for black-box gradient estimation |url=https://arxiv.org/abs/1711.00123 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2017 || {{dts|October}} || Staff Team || Jonathan Raiman joins OpenAI as Research Scientist.<ref>{{cite web |title=Jonathan Raiman |url=https://www.linkedin.com/in/jonathan-raiman-36694123/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|November 2}} || Publication || "Interpretable and Pedagogical Examples", a paper on language, is first submitted to the {{w|ArXiv}}. It shows that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies.<ref>{{cite web |last1=Milli |first1=Smitha |last2=Abbeel |first2=Pieter |last3=Mordatch |first3=Igor |title=Interpretable and Pedagogical Examples |url=https://arxiv.org/abs/1711.00694 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2017 || {{Dts|November 6}} || Staff Team || ''{{W|The New York Times}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.<ref>{{cite web |url=https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html |date=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|December 4}} || Publication || "Learning Sparse Neural Networks through ''L<sub>0</sub>'' Regularization", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}. It describes a method which allows for straightforward and efficient learning of model structures with stochastic gradient descent.<ref>{{cite web |last1=Louizos |first1=Christos |last2=Welling |first2=Max |last3=Kingma |first3=Diederik P. |title=Learning Sparse Neural Networks through L0 Regularization |url=https://arxiv.org/abs/1712.01312 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2017 || {{dts|December}} || Publication || The 2017 AI Index is published. OpenAI contributed to the report.<ref>{{cite web |url=https://www.theverge.com/2017/12/1/16723238/ai-artificial-intelligence-progress-index |date=December 1, 2017 |publisher=The Verge |title=Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat |first=James |last=Vincent |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|December}} || Staff Team || David Luan joins OpenAI as Director of Engineering.<ref>{{cite web |title=David Luan |url=https://www.linkedin.com/in/jluan/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|January}} || Staff Team || Christy Dennison joins OpenAI as Machine Learning Engineer.<ref>{{cite web |title=Christy Dennison |url=https://www.linkedin.com/in/christydennison/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|January}} || Staff Team || David Farhi joins OpenAI as Researcher.<ref>{{cite web |title=David Farhi |url=https://www.linkedin.com/in/david-farhi-13824175/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|January}} || Staff Team || Mathew Shrwed joins OpenAI as Software Engineer.<ref>{{cite web |title=Mathew Shrwed |url=https://www.linkedin.com/in/mshrwed/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|February 3}} || Publication || "DeepType: Multilingual Entity Linking by Neural Type System Evolution", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}.<ref>{{cite web |last1=Raiman |first1=Jonathan |last2=Raiman |first2=Olivier |title=DeepType: Multilingual Entity Linking by Neural Type System Evolution |url=https://arxiv.org/abs/1802.01021 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|February 26}} || Publication || "Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research" is first submitted to the {{w|ArXiv}}. The paper introduces a suite of challenging continuous control tasks based on currently existing robotics hardware, and presents a set of concrete research ideas for improving {{w|reinforcement learning}} algorithms.<ref>{{cite web |title=Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research |url=https://arxiv.org/abs/1802.09464 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|February}} || Staff Team || Lilian Weng joins OpenAI as Research Scientist.<ref>{{cite web |title=Lilian Weng |url=https://www.linkedin.com/in/lilianweng/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|March 3}} || Publication || "Some Considerations on Learning to Explore via Meta-Reinforcement Learning", a paper on {{w|reinforcement learning}}, is first submitted to {{w|ArXiv}}. It considers the problem of exploration in meta reinforcement learning.<ref>{{cite web |last1=Stadie |first1=Bradly C. |last2=Yang |first2=Ge |last3=Houthooft |first3=Rein |last4=Chen |first4=Xi |last5=Duan |first5=Yan |last6=Wu |first6=Yuhuai |last7=Abbeel |first7=Pieter |last8=Sutskever |first8=Ilya |title=Some Considerations on Learning to Explore via Meta-Reinforcement Learning |url=https://arxiv.org/abs/1803.01118 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|March 20}} || Publication || "Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}. The paper shows that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.<ref>{{cite web |last1=Wu |first1=Cathy |last2=Rajeswaran |first2=Aravind |last3=Duan |first3=Yan |last4=Kumar |first4=Vikash |last5=Bayen |first5=Alexandre M |last6=Kakade |first6=Sham |last7=Mordatch |first7=Igor |last8=Abbeel |first8=Pieter |title=Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines |url=https://arxiv.org/abs/1803.07246 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|March}} || Staff Team || Diane Yoon joins OpenAI as Operations Manager.<ref>{{cite web |title=Diane Yoon |url=https://www.linkedin.com/in/diane-yoon-a0a8911b/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{Dts|April 5}}{{snd}}June 5 || Event hosting || The OpenAI Retro Contest takes place.<ref>{{cite web |url=https://contest.OpenAI.com/ |title=OpenAI Retro Contest |publisher=OpenAI |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/retro-contest/ |publisher=OpenAI Blog |title=Retro Contest |date=April 13, 2018 |accessdate=May 5, 2018}}</ref> As a result of the release of the Gym Retro library, OpenAI's Universe becomes deprecated.<ref>{{cite web |url=https://github.com/OpenAI/universe/commit/cc9ce6ec241821bfb0f3b85dd455bd36e4ee7a8c |publisher=GitHub |title=OpenAI/universe |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|April 19}} || Financial || ''{{W|The New York Times}}'' publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 {{W|Form 990}}. The salaries include $1.9 million paid to {{W|Ilya Sutskever}} and $800,000 paid to {{W|Ian Goodfellow}} (hired in March of that year).<ref>{{cite web |url=https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-OpenAI.html |date=April 19, 2018 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit |author=Cade Metz |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/dxnc76j/ |publisher=reddit |title="A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16880447 |title=gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit |website=Hacker News |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{dts|April}} || Staff Team || Peter Zhokhov joins OpenAI as Member of the Technical Staff.<ref>{{cite web |title=Peter Zhokhov |url=https://www.linkedin.com/in/peter-zhokhov-b68525b3/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{Dts|May 2}} || Publication || The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv. The paper proposes training agents via self-play on a zero-sum debate game, in order to address tasks that are too complicated for a human to directly judge.<ref>{{cite web |url=https://arxiv.org/abs/1805.00899 |title=[1805.00899] AI safety via debate |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/debate/ |publisher=OpenAI Blog |title=AI Safety via Debate |date=May 3, 2018 |first1=Geoffrey |last1=Irving |first2=Dario |last2=Amodei |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{dts|May 16}} || Publication || OpenAI releases an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (see the arithmetic note below the timeline).<ref>{{cite web |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{dts|May}} || Staff Team || Susan Zhang joins OpenAI as Research Engineer.<ref>{{cite web |title=Susan Zhang |url=https://www.linkedin.com/in/suchenzang/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|May}} || Staff Team || Daniel Ziegler joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Daniel Ziegler |url=https://www.linkedin.com/in/daniel-ziegler-b4b61882/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2018 || {{dts|June 2}} || Publication || OpenAI publishes "GamePad: A Learning Environment for Theorem Proving" on the {{w|arXiv}}. The paper introduces a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.<ref>{{cite web |last1=Huang |first1=Daniel |last2=Dhariwal |first2=Prafulla |last3=Song |first3=Dawn |last4=Sutskever |first4=Ilya |title=GamePad: A Learning Environment for Theorem Proving |url=https://arxiv.org/abs/1806.00608 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{Dts|June 26}} || Notable comment || {{w|Bill Gates}} comments on {{w|Twitter}}: {{Quote|AI bots just beat humans at the video game Dota 2. That’s a big deal, because their victory required teamwork and collaboration – a huge milestone in advancing artificial intelligence.}}<ref>{{cite web |last1=Papadopoulos |first1=Loukia |title=Bill Gates Praises Elon Musk-Founded OpenAI’s Latest Dota 2 Win as “Huge Milestone” in Field |url=https://interestingengineering.com/bill-gates-praises-elon-musk-founded-OpenAIs-latest-dota-2-win-as-huge-milestone-in-field |website=interestingengineering.com |accessdate=14 June 2019}}</ref>
|-
| 2018 || {{dts|June}} || Staff Team || Yilun Du joins OpenAI as Research Fellow.<ref>{{cite web |title=Yilun Du |url=https://www.linkedin.com/in/yilun-du-04a831112/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|June}} || Staff Team || Christine McLeavey Payne joins OpenAI's Deep Learning Scholars Program.<ref>{{cite web |title=Christine McLeavey Payne |url=https://www.linkedin.com/in/mcleavey/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|June}} || Staff Team || Johannes Otterbach joins OpenAI as Member Of Technical Staff (Fellow).<ref>{{cite web |title=Johannes Otterbach |url=https://www.linkedin.com/in/jotterbach/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|June}} || Staff Team || Karl Cobbe joins OpenAI as Machine Learning Fellow.<ref>{{cite web |title=Karl Cobbe |url=https://www.linkedin.com/in/kcobbe/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|July 9}} || Publication || "Glow: Generative Flow with Invertible 1x1 Convolutions" is first submitted to the {{w|ArXiv}}. The paper proposes a method for obtaining a significant improvement in log-likelihood on standard benchmarks.<ref>{{cite web |last1=Kingma |first1=Diederik P. |last2=Dhariwal |first2=Prafulla |title=Glow: Generative Flow with Invertible 1x1 Convolutions |url=https://arxiv.org/abs/1807.03039 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{Dts|August 7}} || Achievement || OpenAI Five, a team of algorithmic agents, defeats a team of semi-professional {{w|Dota 2}} players ranked in the 99.95th percentile in the world, in its second public match in the traditional five-versus-five setting, hosted in {{w|San Francisco}}.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov |first1=Vlad |title=The OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |website=engadget.com |accessdate=15 June 2019}}</ref>
|-
| 2018 || {{dts|August}} || Staff Team || Ingmar Kanitscheider joins OpenAI as Research Scientist.<ref>{{cite web |title=Ingmar Kanitscheider |url=https://www.linkedin.com/in/ingmar-kanitscheider-148620127/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|August}} || Staff Team || Miles Brundage joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Miles Brundage |url=https://www.linkedin.com/in/miles-brundage-49b62a4/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|August}} || Staff Team || Jeffrey Wu joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Jeffrey Wu |url=https://www.linkedin.com/in/wu-the-jeff/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2018 || {{dts|August 16}} || Publication || OpenAI publishes a paper on constant arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.<ref>{{cite web |last1=Chu |first1=Timothy |last2=Cohen |first2=Michael B. |last3=Pachocki |first3=Jakub W. |last4=Peng |first4=Richard |title=Constant Arboricity Spectral Sparsifiers |url=https://arxiv.org/abs/1808.05662 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|September}} || Staff Team || Christopher Olah joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Christopher Olah |url=https://www.linkedin.com/in/christopher-olah-b574414a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|September}} || Staff Team || Taehoon Kim joins OpenAI as Research Engineer.<ref>{{cite web |title=Taehoon Kim |url=https://www.linkedin.com/in/carpedm20/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2018 || {{dts|September}} || Staff Team || Dario Amodei becomes OpenAI's Research Director.<ref name="Dario Amodeiy"/>
|-
| 2018 || {{dts|October 2}} || Publication || OpenAI publishes a paper on FFJORD (free-form continuous dynamics for scalable reversible generative models), demonstrating the approach on high-dimensional density estimation, image generation, and variational inference.<ref>{{cite web |last1=Grathwohl |first1=Will |last2=Chen |first2=Ricky T. Q. |last3=Bettencourt |first3=Jesse |last4=Sutskever |first4=Ilya |last5=Duvenaud |first5=David |title=FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models |url=https://arxiv.org/abs/1810.01367 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{dts|October 19}} || Publication || OpenAI publishes a paper proposing Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shlegeris |first2=Buck |last3=Amodei |first3=Dario |title=Supervising strong learners by amplifying weak experts |url=https://arxiv.org/abs/1810.08575 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{Dts|October}} || Staff Team || Daniela Amodei joins OpenAI as NLP Team Manager and Head of People Operations.<ref>{{cite web |title=Daniela Amodei |url=https://www.linkedin.com/in/daniela-amodei-790bb22a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|October}} || Staff Team || Lei Zhang joins OpenAI as Research Fellow.<ref>{{cite web |title=Lei Zhang |url=https://www.linkedin.com/in/lei-zhang-34a60910/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|October}} || Staff Team || Mark Chen joins OpenAI as Research Scientist.<ref>{{cite web |title=Mark Chen |url=https://www.linkedin.com/in/markchen90/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|October 31}} || Software release || OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging {{w|reinforcement learning}} agents to explore their environments through curiosity, which for the first time exceeds average human performance on the video game Montezuma’s Revenge.<ref>{{cite web |title=Reinforcement Learning with Prediction-Based Rewards |url=https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{Dts|November 9}} || Notable comment || {{w|Ilya Sutskever}} gives a speech at the AI Frontiers Conference in {{w|San Jose}} and declares: {{Quote|We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.}}<ref>{{cite web |title=OpenAI Founder: Short-Term AGI Is a Serious Possibility |url=https://syncedreview.com/2018/11/13/OpenAI-founder-short-term-agi-is-a-serious-possibility/ |website=syncedreview.com |accessdate=15 June 2019}}</ref>
|-
| 2018 || {{dts|November 18}} || Staff Team || Clemens Winter joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Clemens Winter |url=https://www.linkedin.com/in/clemens-winter-569887a9/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2018 || {{Dts|November 19}} || Partnership || OpenAI partners with {{w|DeepMind}} on a new paper that proposes a method to train {{w|reinforcement learning}} agents in ways that enable them to surpass human performance. The paper, titled ''Reward learning from human preferences and demonstrations in Atari'', introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.<ref>{{cite web |last1=Rodriguez |first1=Jesus |title=What’s New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning |url=https://towardsdatascience.com/whats-new-in-deep-learning-research-OpenAI-and-deepmind-join-forces-to-achieve-superhuman-48e7d1accf85 |website=towardsdatascience.com |accessdate=29 June 2019}}</ref>
|-
| 2018 || {{dts|November}} || Staff Team || Amanda Askell joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Amanda Askell |url=https://www.linkedin.com/in/amanda-askell-1ab457175/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2018 || {{dts|December 4}} || Research progress || OpenAI announces having discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.<ref>{{cite web |title=How AI Training Scales |url=https://openai.com/blog/science-of-ai/ |website=openai.com |accessdate=4 April 2020}}</ref>
|-
| 2018 || {{dts|December 14}} || Publication || OpenAI publishes a paper demonstrating that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of {{w|supervised learning}} datasets, {{w|reinforcement learning}} domains, and even generative model training (see the sketch below the timeline).<ref>{{cite web |last1=McCandlish |first1=Sam |last2=Kaplan |first2=Jared |last3=Amodei |first3=Dario |last4=OpenAI Dota Team |title=An Empirical Model of Large-Batch Training |url=https://arxiv.org/abs/1812.06162 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2018 || {{dts|December}} || Staff Team || Mateusz Litwin joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Mateusz Litwin |url=https://www.linkedin.com/in/mateusz-litwin-06b3a919/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|January}} || Staff Team || Bianca Martin joins OpenAI as Special Projects Manager.<ref>{{cite web |title=Bianca Martin |url=https://www.linkedin.com/in/biancamartin1/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|February 4}} || Publication || OpenAI publishes a paper showing computational limitations in robust classification and win-win results.<ref>{{cite web |last1=Degwekar |first1=Akshay |last2=Nakkiran |first2=Preetum |last3=Vaikuntanathan |first3=Vinod |title=Computational Limitations in Robust Classification and Win-Win Results |url=https://arxiv.org/abs/1902.01086 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|February 19}} || Publication || "AI Safety Needs Social Scientists" is published. The paper argues that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved.<ref>{{cite journal |last1=Irving |first1=Geoffrey |last2=Askell |first2=Amanda |title=AI Safety Needs Social Scientists |doi=10.23915/distill.00014 |url=https://distill.pub/2019/safety-needs-social-scientists/}}</ref><ref>{{cite web |title=AI Safety Needs Social Scientists |url=https://openai.com/blog/ai-safety-needs-social-scientists/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|February}} || Staff Team || Danny Hernandez joins OpenAI as Research Scientist.<ref>{{cite web |title=Danny Hernandez |url=https://www.linkedin.com/in/danny-hernandez-2b748823/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|March 2}} || Publication || OpenAI publishes a paper presenting an artificial intelligence research environment that aims to simulate the {{w|natural environment}} setting in microcosm.<ref>{{cite web |last1=Suarez |first1=Joseph |last2=Du |first2=Yilun |last3=Isola |first3=Phillip |last4=Mordatch |first4=Igor |title=Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents |url=https://arxiv.org/abs/1903.00784 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|March 21}} || Software release || OpenAI announces progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models.<ref>{{cite web |title=Implicit Generation and Generalization Methods for Energy-Based Models |url=https://openai.com/blog/energy-based-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|March}} || Staff Team || Ilge Akkaya joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Ilge Akkaya |url=https://www.linkedin.com/in/ilge-akkaya-311b4631/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{Dts|March}} || Staff Team || {{w|Sam Altman}} leaves his role as the president of {{w|Y Combinator}} to become the {{w|Chief executive officer}} of OpenAI.<ref>{{cite web |title=Sam Altman’s leap of faith |url=https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator |url=https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ |website=techcrunch.com |accessdate=24 February 2020}}</ref>
|-
| 2019 || {{dts|March}} || Staff Team || Alex Paino joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Alex Paino |url=https://www.linkedin.com/in/atpaino/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|March}} || Staff Team || Karson Elmgren joins OpenAI in People Operations.<ref>{{cite web |title=Karson Elmgren |url=https://www.linkedin.com/in/karson-elmgren-32417732/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2019 || {{Dts|April 23}} || Publication || OpenAI publishes a paper introducing Sparse Transformers, a deep neural network for predicting sequences of data, including text, sound, and images. It uses an improved attention-based algorithm that can extract patterns from sequences 30 times longer than previously possible.<ref>{{cite web |last1=Alford |first1=Anthony |title=OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences |url=https://www.infoq.com/news/2019/05/OpenAI-sparse-transformers/ |website=infoq.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x |url=https://medium.com/syncedreview/OpenAI-sparse-transformer-improves-predictable-sequence-length-by-30x-5a65ef2592b9 |website=medium.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=Generative Modeling with Sparse Transformers |url=https://OpenAI.com/blog/sparse-transformer/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{Dts|April 27}} || Event hosting || OpenAI hosts the OpenAI Robotics Symposium 2019.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref>
|-
| 2019 || {{dts|April}} || Staff Team || Todor Markov joins OpenAI as Machine Learning Researcher.<ref>{{cite web |title=Todor Markov |url=https://www.linkedin.com/in/todor-markov-4aa38a67/ |website=linkedin.com/ |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|May 3}} || Publication || OpenAI publishes a study on the transfer of adversarial robustness of [[w:deep learning|deep neural networks]] between different perturbation types.<ref>{{cite web |last1=Kang |first1=Daniel |last2=Sun |first2=Yi |last3=Brown |first3=Tom |last4=Hendrycks |first4=Dan |last5=Steinhardt |first5=Jacob |title=Transfer of Adversarial Robustness Between Perturbation Types |url=https://arxiv.org/abs/1905.01034 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|May 28}} || Publication || OpenAI publishes a study on the dynamics of Stochastic Gradient Descent (SGD) in learning [[w:Deep learning|deep neural networks]] for several real and synthetic classification tasks.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Kalimeris |first3=Dimitris |last4=Yang |first4=Tristan |last5=Edelman |first5=Benjamin L. |last6=Zhang |first6=Fred |last7=Barak |first7=Boaz |title=SGD on Neural Networks Learns Functions of Increasing Complexity |url=https://arxiv.org/abs/1905.11604 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|June}} || Staff Team || Long Ouyang joins OpenAI as Research Scientist.<ref>{{cite web |title=Long Ouyang |url=https://www.linkedin.com/in/longouyang/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|July 10}} || Publication || OpenAI publishes a paper arguing that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact.<ref>{{cite web |last1=Askell |first1=Amanda |last2=Brundage |first2=Miles |last3=Hadfield |first3=Gillian |title=The Role of Cooperation in Responsible AI Development |url=https://arxiv.org/abs/1907.04534 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|July 22}} || Partnership || OpenAI announces an exclusive partnership with {{w|Microsoft}}. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using {{w|Microsoft Azure}} (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft will also be OpenAI's "preferred partner for commercializing new AI technologies."<ref>{{cite web|url = https://OpenAI.com/blog/microsoft/|title = Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI|date = July 22, 2019|accessdate = July 26, 2019|publisher = OpenAI}}</ref><ref>{{cite web|url = https://news.microsoft.com/2019/07/22/OpenAI-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/|title = OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies|date = July 22, 2019|accessdate = July 26, 2019|publisher = Microsoft}}</ref><ref>{{cite web|url = https://www.businessinsider.com/microsoft-OpenAI-artificial-general-intelligence-investment-2019-7|title = Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence|last = Chan|first= Rosalie|date = July 22, 2019|accessdate = July 26, 2019|publisher = Business Insider}}</ref><ref>{{cite web|url = https://www.forbes.com/sites/mohanbirsawhney/2019/07/24/the-real-reasons-microsoft-invested-in-OpenAI/|title = The Real Reasons Microsoft Invested In OpenAI|last = Sawhney|first = Mohanbir|date = July 24, 2019|accessdate = July 26, 2019|publisher = Forbes}}</ref>
|-
| 2019 || {{dts|July}} || Staff Team || Irene Solaiman joins OpenAI as Policy Researcher.<ref>{{cite web |title=Irene Solaiman |url=https://www.linkedin.com/in/irene-solaiman/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|August 20}} || Software release || OpenAI announces plans to release a version of its language-generating system GPT-2, which had stirred controversy after its initial release in February.<ref>{{cite web |title=OpenAI releases curtailed version of GPT-2 language model |url=https://venturebeat.com/2019/08/20/OpenAI-releases-curtailed-version-of-gpt-2-language-model/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI Just Released an Even Scarier Fake News-Writing Algorithm |url=https://interestingengineering.com/OpenAI-just-released-an-even-scarier-fake-news-writing-algorithm |website=interestingengineering.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OPENAI JUST RELEASED A NEW VERSION OF ITS FAKE NEWS-WRITING AI |url=https://futurism.com/the-byte/OpenAI-new-version-writing-ai |website=futurism.com |accessdate=24 February 2020}}</ref>
|-
| 2019 || {{dts|August}} || Staff Team || Melanie Subbiah joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Melanie Subbiah |url=https://www.linkedin.com/in/melanie-subbiah-7b702a8a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|August}} || Staff Team || Cullen O'Keefe joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Cullen O'Keefe |url=https://www.linkedin.com/in/ccokeefe-law/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|September 17}} || Research progress || OpenAI announces having observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training, the agents build a series of six distinct strategies and counterstrategies, some of which the researchers did not know the environment supported.<ref>{{cite web |title=Emergent Tool Use from Multi-Agent Interaction |url=https://openai.com/blog/emergent-tool-use/ |website=openai.com |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Emergent Tool Use From Multi-Agent Autocurricula |url=https://arxiv.org/abs/1909.07528 |website=arxiv.org |accessdate=4 April 2020}}</ref>
|-
| 2019 || {{dts|November 5}} || Software release || OpenAI releases the largest version (1.5B parameters) of its language-generating system GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models.<ref>{{cite web |title=GPT-2: 1.5B Release |url=https://openai.com/blog/gpt-2-1-5b-release/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|November}} || Staff Team || Ryan Lowe joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Ryan Lowe |url=https://www.linkedin.com/in/ryan-lowe-ab67a267/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|November 21}} || Software release || OpenAI releases Safety Gym, a suite of environments and tools for measuring progress towards {{w|reinforcement learning}} agents that respect safety constraints while training.<ref>{{cite web |title=Safety Gym |url=https://openai.com/blog/safety-gym/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|December 4}} || Publication || "Deep Double Descent: Where Bigger Models and More Data Hurt" is submitted to the {{w|ArXiv}}. The paper shows that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as the model size increases, performance first gets worse and then gets better.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Bansal |first3=Yamini |last4=Yang |first4=Tristan |last5=Barak |first5=Boaz |last6=Sutskever |first6=Ilya |title=Deep Double Descent: Where Bigger Models and More Data Hurt |website=arxiv.org |url=https://arxiv.org/abs/1912.02292|accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|December}} || Staff Team || Dario Amodei is promoted to OpenAI's Vice President of Research.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2020 || {{dts|January 30}} || Software adoption || OpenAI announces that it will migrate to {{w|Facebook}}'s {{w|PyTorch}} {{w|machine learning}} framework for future projects, setting it as its new standard deep learning framework.<ref>{{cite web |title=OpenAI sets PyTorch as its new standard deep learning framework |url=https://jaxenter.com/OpenAI-pytorch-deep-learning-framework-167641.html |website=jaxenter.com |accessdate=23 February 2020}}</ref><ref>{{cite web |title=OpenAI goes all-in on Facebook’s Pytorch machine learning framework |url=https://venturebeat.com/2020/01/30/OpenAI-facebook-pytorch-google-tensorflow/ |website=venturebeat.com |accessdate=23 February 2020}}</ref>
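
Illustrative sketch for the February 25, 2016 "Weight Normalization" entry above: the reparameterization described there can be written as <math>w = \frac{g}{\|v\|} v</math>, where the scalar <math>g</math> carries the length of the weight vector and <math>v</math> its direction. The snippet below is a minimal numpy illustration of that formula; the variable names and sizes are arbitrary and are not taken from the paper's reference code.

<syntaxhighlight lang="python">
import numpy as np

# Weight normalization reparameterizes a weight vector w as w = g * v / ||v||,
# decoupling the length of w (the scalar g) from its direction (v).
rng = np.random.default_rng(0)
v = rng.normal(size=5)   # direction parameters
g = 2.0                  # scalar length parameter

w = g * v / np.linalg.norm(v)

# By construction, the norm of the reparameterized weight vector equals g.
assert np.isclose(np.linalg.norm(w), g)
</syntaxhighlight>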
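
Usage sketch for the April 27, 2016 OpenAI Gym entry above: the toolkit exposes each environment through a small reset/step interface. The snippet below shows the classic Gym loop with a random placeholder policy; the environment name "CartPole-v0" is an illustrative choice, and the 4-tuple return of <code>step</code> reflects the original (pre-2021) API rather than later revisions.

<syntaxhighlight lang="python">
import gym  # classic OpenAI Gym interface

env = gym.make("CartPole-v0")   # illustrative environment choice
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()           # random placeholder policy
    obs, reward, done, info = env.step(action)   # classic 4-tuple return
    total_reward += reward
env.close()
print("episode return:", total_reward)
</syntaxhighlight>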
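
Arithmetic note for the May 16, 2018 "AI and Compute" entry above: a 3.4-month doubling time amounts to about 3.5 doublings per year, i.e. roughly an order-of-magnitude increase in training compute per year, since <math>2^{12/3.4} \approx 11.5</math>.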
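
Sketch for the December 14, 2018 "An Empirical Model of Large-Batch Training" entry above: as we read the paper, the statistic in question is a gradient noise scale of roughly the form <math>\mathcal{B}_\mathrm{simple} = \mathrm{tr}(\Sigma) / \|G\|^2</math>, where <math>G</math> is the true gradient and <math>\Sigma</math> the per-example gradient covariance; batch sizes well below this scale give near-linear speedups from data parallelism, while batch sizes far above it yield diminishing returns. This is a paraphrase of the paper's claim, not a quotation.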