Timeline of OpenAI

 
The following are some interesting questions that can be answered by reading this timeline:

* What are some significant events preceding the creation of OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Prelude".
** You will see some events involving key people like {{w|Elon Musk}} and {{w|Sam Altman}} that would eventually lead to the creation of OpenAI.
* What are the various papers and posts published by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Publication".
** You will see mostly papers submitted to the {{w|ArXiv}} by OpenAI-affiliated researchers, as well as blog posts.
* What are the various toolkits, implementations, algorithms, systems, and other software released by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Software release".
** You will see a variety of releases, some of them open-sourced.
* What are some other significant events describing advances in research?
** Sort the full timeline by "Event type" and look for the group of rows with value "Research progress".
** You will see some discoveries and other significant results obtained by OpenAI.
* What is the staff composition and what are the different roles in the organization?
** Sort the full timeline by "Event type" and look for the group of rows with value "Staff".
** You will see the names of people brought on board and their roles.
* What are the various partnerships between OpenAI and other organizations?
** Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
** You will see collaborations with organizations like {{w|DeepMind}} and {{w|Microsoft}}.
* What are some significant donations granted to OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
** You will see donors like the {{w|Open Philanthropy Project}} and {{w|Nvidia}}, among others.
* What are some notable events hosted by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Event hosting".
* What are some notable publications by third parties about OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Coverage".
  
 
==Big picture==
 
 
| 2015 || Establishment || OpenAI is founded as a nonprofit and begins producing research.
|-
| 2019 || Reorganization || OpenAI shifts from nonprofit to ‘capped-profit’ in order to attract capital.
|-
|}
{| class="sortable wikitable"
! Year !! Month and date !! Domain !! Event type !! Details
|-
| 2014 || {{dts|October 22}}–24 || || Prelude || During an interview at the AeroAstro Centennial Symposium, {{W|Elon Musk}}, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[w:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
|-
| 2015 || {{dts|February 25}} || || Prelude || {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".<ref>{{cite web |url=http://blog.samaltman.com/machine-intelligence-part-1 |title=Machine intelligence, part 1 |publisher=Sam Altman |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|May 6}} || || Prelude || Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of [[wikipedia:Stripe (company)|Stripe]]. In the post, in the section "What comes next" he writes "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".<ref>{{cite web |url=https://blog.gregbrockman.com/leaving-stripe |title=Leaving Stripe |first=Greg |last=Brockman |publisher=Greg Brockman on Svbtle |date=May 6, 2015 |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/stripes-cto-greg-brockman-is-leaving-the-company-2015-5 |date=May 6, 2015 |first=Biz |last=Carson |title=One of the first employees of $3.5 billion startup Stripe is leaving to form his own company |publisher=Business Insider |accessdate=May 6, 2018}}</ref>
|-
| 2015 || {{dts|June}} || || Prelude || {{W|Sam Altman}} and Greg Brockman have a conversation about next steps for Brockman.<ref name="path-to-OpenAI">{{cite web |url=https://blog.gregbrockman.com/my-path-to-OpenAI |title=My path to OpenAI |date=May 3, 2016 |publisher=Greg Brockman on Svbtle |accessdate=May 8, 2018}}</ref>
|-
| 2015 || {{dts|June 4}} || || Prelude || At {{w|Airbnb}}'s Open Air 2015 conference, {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.<ref>{{cite web |url=http://www.businessinsider.com/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6 |author=Matt Weinberger |date=June 4, 2015 |title=Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last |publisher=Business Insider |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|July}} (approximate) || || Prelude || {{W|Sam Altman}} sets up a dinner in {{W|Menlo Park, California}} to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, {{W|Ilya Sutskever}}, and {{W|Elon Musk}}.<ref name="path-to-OpenAI" />
|-
| 2015 || {{dts|December 11}} || || Creation || {{w|OpenAI}} is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)<ref>{{cite web |url=https://www.nytimes.com/2015/12/12/science/artificial-intelligence-research-center-is-founded-by-silicon-valley-investors.html |date=December 11, 2015 |publisher=[[w:The New York Times|The New York Times]] |title=Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors |author=John Markoff |accessdate=July 26, 2017 |quote=The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/introducing-OpenAI/ |publisher=OpenAI Blog |title=Introducing OpenAI |date=December 11, 2015 |accessdate=July 26, 2017}}</ref><ref>{{cite web |url=https://techcrunch.com/2015/12/11/non-profit-OpenAI-launches-with-backing-from-elon-musk-and-sam-altman/ |date=December 11, 2015 |publisher=TechCrunch |title=Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman |author=Drew Olanoff |accessdate=March 2, 2018}}</ref> Co-founders include Wojciech Zaremba<ref>{{cite web |title=Wojciech Zaremba |url=https://www.linkedin.com/in/wojciech-zaremba-356568164/ |website=linkedin.com |accessdate=28 February 2020}}</ref>,
|-
| 2015 || {{dts|December}} || || Coverage || The article "{{w|OpenAI}}" is created on {{w|Wikipedia}}.<ref>{{cite web |title=OpenAI: Revision history |url=https://en.wikipedia.org/w/index.php?title=OpenAI&dir=prev&action=history |website=wikipedia.org |accessdate=6 April 2020}}</ref>
|-
| 2015 || {{dts|December}} || || Team || OpenAI announces {{w|Y Combinator}} founding partner {{w|Jessica Livingston}} as one of its financial backers.<ref>{{cite web |url=https://www.forbes.com/sites/theopriestley/2015/12/11/elon-musk-and-peter-thiel-launch-OpenAI-a-non-profit-artificial-intelligence-research-company/ |title=Elon Musk And Peter Thiel Launch OpenAI, A Non-Profit Artificial Intelligence Research Company |first1=Theo |last1=Priestly |date=December 11, 2015 |publisher=''{{w|Forbes}}'' |access-date=8 July 2019 }}</ref>
|-
| 2016 || {{dts|January}} || || Team || {{W|Ilya Sutskever}} joins OpenAI as Research Director.<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Ilya+Sutskever |date=April 8, 2018 |title=Ilya Sutskever |publisher=AI Watch |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com">{{cite web |title=Information for OpenAI |url=https://orgwatch.issarice.com/?organization=OpenAI |website=orgwatch.issarice.com |accessdate=5 May 2020}}</ref>
|-
| 2016 || {{dts|January 9}} || || Education || The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.<ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_OpenAI_research_team/ |publisher=reddit |title=AMA: the OpenAI Research Team • r/MachineLearning |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|February 25}} || Optimization || Publication || "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", a paper on optimization, is first submitted to the {{w|ArXiv}}. The paper presents weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Kingma |first2=Diederik P. |title=Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks |url=https://arxiv.org/abs/1602.07868 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|March 31}} || || Team || A blog post from this day announces that {{W|Ian Goodfellow}} has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/team-plus-plus/ |publisher=OpenAI Blog |title=Team++ |date=March 22, 2017 |first=Greg |last=Brockman |accessdate=May 6, 2018}}</ref> Previously, Goodfellow worked as Senior Research Scientist at {{w|Google}}.<ref>{{cite web |title=Ian Goodfellow |url=https://www.linkedin.com/in/ian-goodfellow-b7187213/ |website=linkedin.com |accessdate=24 April 2020}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{Dts|April 26}} || || Team || A blog post from this day announces that Pieter Abbeel has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/welcome-pieter-and-shivon/ |publisher=OpenAI Blog |title=Welcome, Pieter and Shivon! |date=March 20, 2017 |first=Ilya |last=Sutskever |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{dts|April 27}} || || Software release || The public beta of OpenAI Gym, an open source toolkit that provides environments to test AI bots, is released.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-gym-beta/ |publisher=OpenAI Blog |title=OpenAI Gym Beta |date=March 20, 2017 |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/04/OpenAI-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ |title=Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free |date=April 27, 2016 |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning"}}</ref><ref>{{cite web |url=http://www.businessinsider.com/OpenAI-has-launched-a-gym-where-developers-can-train-their-computers-2016-4?op=1 |first=Sam |last=Shead |date=April 28, 2016 |title=Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|May 25}} || Safety || Publication || "Adversarial Training Methods for Semi-Supervised Text Classification" is submitted to the {{w|ArXiv}}. The paper proposes a method that achieves better results on multiple benchmark semi-supervised and purely supervised tasks.<ref>{{cite web |last1=Miyato |first1=Takeru |last2=Dai |first2=Andrew M. |last3=Goodfellow |first3=Ian |title=Adversarial Training Methods for Semi-Supervised Text Classification |url=https://arxiv.org/abs/1605.07725 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|May 31}} || Generative models || Publication || "VIME: Variational Information Maximizing Exploration", a paper on generative models, is submitted to the {{w|ArXiv}}. The paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics.<ref>{{cite web |last1=Houthooft |first1=Rein |last2=Chen |first2=Xi |last3=Duan |first3=Yan |last4=Schulman |first4=John |last5=De Turck |first5=Filip |last6=Abbeel |first6=Pieter |title=VIME: Variational Information Maximizing Exploration |url=https://arxiv.org/abs/1605.09674 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 5}} || {{w|Reinforcement learning}} || Publication || "OpenAI Gym", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}. It presents OpenAI Gym as a toolkit for reinforcement learning research.<ref>{{cite web |last1=Brockman |first1=Greg |last2=Cheung |first2=Vicki |last3=Pettersson |first3=Ludwig |last4=Schneider |first4=Jonas |last5=Schulman |first5=John |last6=Tang |first6=Jie |last7=Zaremba |first7=Wojciech |title=OpenAI Gym |url=https://arxiv.org/abs/1606.01540 |website=arxiv.org |accessdate=27 March 2020}}</ref> OpenAI Gym is considered by some as "a huge opportunity for speeding up the progress in the creation of better reinforcement algorithms, since it provides an easy way of comparing them, on the same conditions, independently of where the algorithm is executed".<ref>{{cite web |title=OPENAI GYM |url=https://www.theconstructsim.com/tag/openai_gym/ |website=theconstructsim.com |accessdate=16 May 2020}}</ref>
|-
| 2016 || {{dts|June 10}} || Generative models || Publication || "Improved Techniques for Training GANs", a paper on generative models, is submitted to the {{w|ArXiv}}. It presents a variety of new architectural features and training procedures that OpenAI applies to the generative adversarial networks (GANs) framework.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |title=Improved Techniques for Training GANs |url=https://arxiv.org/abs/1606.03498 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 12}} || Generative models || Publication || "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", a paper on generative models, is submitted to {{w|ArXiv}}. It describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.<ref>{{cite web |title=InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets |url=https://arxiv.org/abs/1606.03657 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 15}} || Generative models || Publication || "Improving Variational Inference with Inverse Autoregressive Flow", a paper on generative models, is submitted to the {{w|ArXiv}}. The paper proposes a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces.<ref>{{cite web |last1=Kingma |first1=Diederik P. |last2=Salimans |first2=Tim |last3=Jozefowicz |first3=Rafal |last4=Chen |first4=Xi |last5=Sutskever |first5=Ilya |last6=Welling |first6=Max |title=Improving Variational Inference with Inverse Autoregressive Flow |url=https://arxiv.org/abs/1606.04934 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|June 16}} || Generative models || Publication || OpenAI publishes a post describing four projects on generative models, a branch of {{w|unsupervised learning}} techniques in machine learning.<ref>{{cite web |title=Generative Models |url=https://openai.com/blog/generative-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|June 21}} || || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}. The paper explores practical problems in machine learning systems.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would receive a shoutout from the Open Philanthropy Project.<ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref> It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
|-
| 2016 || {{Dts|July}} || Staff || Dario Amodei joins OpenAI<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref>, working as Team Lead for AI Safety.<ref name="Dario Amodeiy"/>
+
| 2016 || {{Dts|July}} || || Team || Dario Amodei joins OpenAI<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref>, working as Team Lead for AI Safety.<ref name="Dario Amodeiy"/><ref name="orgwatch.issarice.com"/>
 
|-
 
|-
| 2016 || {{dts|July 8}} || Publication || "Adversarial Examples in the Physical World" is published. One of the authors is {{W|Ian Goodfellow}}, who is at OpenAI at the time.<ref>{{cite web |url=https://www.wired.com/2016/07/fool-ai-seeing-something-isnt/ |title=How To Fool AI Into Seeing Something That Isn't There |publisher=[[wikipedia:WIRED|WIRED]] |date=July 29, 2016 |first=Cade |last=Metz |accessdate=March 3, 2018}}</ref>
+
| 2016 || {{dts|July 8}} || || Publication || "Adversarial Examples in the Physical World" is published. One of the authors is {{W|Ian Goodfellow}}, who is at OpenAI at the time.<ref>{{cite web |url=https://www.wired.com/2016/07/fool-ai-seeing-something-isnt/ |title=How To Fool AI Into Seeing Something That Isn't There |publisher=[[wikipedia:WIRED|WIRED]] |date=July 29, 2016 |first=Cade |last=Metz |accessdate=March 3, 2018}}</ref>
 
|-
 
|-
| 2016 || {{dts|July 28}} || || OpenAI publishes a post calling for applicants to work in the following problem areas of interest:
+
| 2016 || {{dts|July 28}} || || || OpenAI publishes a post calling for applicants to work in the following problem areas of interest:
 
* Detect if someone is using a covert breakthrough AI system in the world.
 
* Detect if someone is using a covert breakthrough AI system in the world.
 
* Build an agent to win online programming competitions.
 
* Build an agent to win online programming competitions.
Line 98: Line 114:
 
* A complex simulation with many long-lived agents.<ref>{{cite web |title=Special Projects |url=https://openai.com/blog/special-projects/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
* A complex simulation with many long-lived agents.<ref>{{cite web |title=Special Projects |url=https://openai.com/blog/special-projects/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
|-
| 2016 || {{dts|August 15}} || Donation || The technology company {{W|Nvidia}} announces that it has donated the first {{W|Nvidia DGX-1}} (a supercomputer) to OpenAI. OpenAI plans to use the supercomputer to train its AI on a corpus of conversations from {{W|Reddit}}.<ref>{{cite web |url=https://blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-OpenAI-elon-musk-deep-learning/ |title=NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI |publisher=The Official NVIDIA Blog |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=http://fortune.com/2016/08/15/elon-musk-artificial-intelligence-OpenAI-nvidia-supercomputer/ |title=Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group |publisher=Fortune |first=Jonathan |last=Vanian |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://futurism.com/elon-musks-OpenAI-is-using-reddit-to-teach-an-artificial-intelligence-how-to-speak/ |date=August 17, 2016 |title=Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak |first=Cecille |last=De Jesus |publisher=Futurism |accessdate=May 5, 2018}}</ref>
+
| 2016 || {{dts|August 15}} || || Donation || The technology company {{W|Nvidia}} announces that it has donated the first {{W|Nvidia DGX-1}} (a supercomputer) to OpenAI. OpenAI plans to use the supercomputer to train its AI on a corpus of conversations from {{W|Reddit}}.<ref>{{cite web |url=https://blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-OpenAI-elon-musk-deep-learning/ |title=NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI |publisher=The Official NVIDIA Blog |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=http://fortune.com/2016/08/15/elon-musk-artificial-intelligence-OpenAI-nvidia-supercomputer/ |title=Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group |publisher=Fortune |first=Jonathan |last=Vanian |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://futurism.com/elon-musks-OpenAI-is-using-reddit-to-teach-an-artificial-intelligence-how-to-speak/ |date=August 17, 2016 |title=Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak |first=Cecille |last=De Jesus |publisher=Futurism |accessdate=May 5, 2018}}</ref>
|-
 
| 2016 || {{dts|August 29}} || Publication || "Infrastructure for Deep Learning" is published. The post shows how deep learning research usually proceeds, describes the infrastructure choices OpenAI made to support it, and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for {{w|Kubernetes}}.<ref>{{cite web |title=Infrastructure for Deep Learning |url=https://openai.com/blog/infrastructure-for-deep-learning/ |website=openai.com |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|September}} || Staff || Alexander Ray joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Alexander Ray |url=https://www.linkedin.com/in/machinaut/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2016 || {{dts|October 11}} || Publication || "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shah |first2=Zain |last3=Mordatch |first3=Igor |last4=Schneider |first4=Jonas |last5=Blackwell |first5=Trevor |last6=Tobin |first6=Joshua |last7=Abbeel |first7=Pieter |last8=Zaremba |first8=Wojciech |title=Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model |url=https://arxiv.org/abs/1610.03518 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|October 18}} || Publication || "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the {{w|ArXiv}}. It shows an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE).<ref>{{cite web |last1=Papernot |first1=Nicolas |last2=Abadi |first2=Martín |last3=Erlingsson |first3=Úlfar |last4=Goodfellow |first4=Ian |last5=Talwar |first5=Kunal |title=Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |url=https://arxiv.org/abs/1610.05755 |website=arxiv.org |accessdate=28 March 2020}}</ref> 
 
|-
 
| 2016 || {{dts|October}} || Staff || Jack Clark joins OpenAI.<ref>{{cite web |title=Jack Clark |url=https://www.linkedin.com/in/jack-clark-5a320317/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2016 || {{dts|October}} || Staff || OpenAI Research Scientist Harri Edwards joins the organization.<ref>{{cite web |title=Harri Edwards |url=https://www.linkedin.com/in/harri-edwards-7b199375/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2016 || {{dts|November 2}} || Publication || "Extensions and Limitations of the Neural GPU" is first submitted to the {{w|ArXiv}}. The paper shows that there are two simple ways of improving the performance of the Neural GPU: by carefully designing a curriculum, and by increasing model size.<ref>{{cite web |last1=Price |first1=Eric |last2=Zaremba |first2=Wojciech |last3=Sutskever |first3=Ilya |title=Extensions and Limitations of the Neural GPU |url=https://arxiv.org/abs/1611.00736 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|November 8}} || Publication || "Variational Lossy Autoencoder", a paper on generative models, is submitted to the {{w|ArXiv}}. It presents a method to learn global representations by combining Variational Autoencoder (VAE) with neural autoregressive models.<ref>{{cite web |last1=Chen |first1=Xi |last2=Kingma |first2=Diederik P. |last3=Salimans |first3=Tim |last4=Duan |first4=Yan |last5=Dhariwal |first5=Prafulla |last6=Schulman |first6=John |last7=Sutskever |first7=Ilya |last8=Abbeel |first8=Pieter |title=Variational Lossy Autoencoder |url=https://arxiv.org/abs/1611.02731 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|November 9}} || Publication || "RL<sup>2</sup>: Fast Reinforcement Learning via Slow Reinforcement Learning", a paper on {{w|reinforcement learning}}, is first submitted to the {{w|ArXiv}}. It seeks to bridge the gap between machine learning algorithms, which require huge numbers of trials to learn a task, and animals, which can learn new tasks in just a few trials by drawing on prior knowledge about the world.<ref>{{cite web |last1=Duan |first1=Yan |last2=Schulman |first2=John |last3=Chen |first3=Xi |last4=Bartlett |first4=Peter L. |last5=Sutskever |first5=Ilya |last6=Abbeel |first6=Pieter |title=RL2: Fast Reinforcement Learning via Slow Reinforcement Learning |website=arxiv.org |url=https://arxiv.org/abs/1611.02779|accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|November 11}} || Publication || "A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models", a paper on generative models, is first submitted to the {{w|ArXiv}}.<ref>{{cite web |last1=Finn |first1=Chelsea |last2=Christiano |first2=Paul |last3=Abbeel |first3=Pieter |last4=Levine |first4=Sergey |title=A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models |website=arxiv.org |accessdate=28 March 2020|url=https://arxiv.org/abs/1611.03852}}</ref>
 
|-
 
| 2016 || {{dts|November 14}} || Publication || "On the Quantitative Analysis of Decoder-Based Generative Models", a paper on generative models, is submitted to the {{w|ArXiv}}. It introduces a technique to analyze the performance of decoder-based models.<ref>{{cite web |last1=Wu |first1=Yuhuai |last2=Burda |first2=Yuri |last3=Salakhutdinov |first3=Ruslan |last4=Grosse |first4=Roger |title=On the Quantitative Analysis of Decoder-Based Generative Models |url=https://arxiv.org/abs/1611.04273 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|November 15}} || Partnership || A partnership between OpenAI and Microsoft's artificial intelligence division is announced. As part of the partnership, Microsoft provides a price reduction on computing resources to OpenAI through {{W|Microsoft Azure}}.<ref>{{cite web |url=https://www.theverge.com/2016/11/15/13639904/microsoft-OpenAI-ai-partnership-elon-musk-sam-altman |date=November 15, 2016 |publisher=The Verge |first=Nick |last=Statt |title=Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/11/next-battles-clouds-ai-chips/ |title=The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win |publisher=[[wikipedia:WIRED|WIRED]] |first=Cade |last=Metz |accessdate=March 2, 2018 |quote=According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.}}</ref>
 
|-
 
| 2016 || {{dts|November 15}} || Publication || "#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning", a paper on {{w|reinforcement learning}}, is first submitted to the {{w|ArXiv}}.<ref>{{cite web |title=#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning |url=https://arxiv.org/abs/1611.04717 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2016 || {{dts|December 5}} || Software || OpenAI's Universe, "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications", is released.<ref>{{cite web |url=https://github.com/OpenAI/universe |accessdate=March 1, 2018 |publisher=GitHub |title=universe}}</ref><ref>{{cite web |url=https://techcrunch.com/2016/12/05/OpenAIs-universe-is-the-fun-parent-every-artificial-intelligence-deserves/ |date=December 5, 2016 |publisher=TechCrunch |title=OpenAI's Universe is the fun parent every artificial intelligence deserves |author=John Mannes |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/12/OpenAIs-universe-computers-learn-use-apps-like-humans/ |title=Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=13103742 |title=OpenAI Universe |website=Hacker News |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2016 || {{dts|December 21}} || Publication || "Faulty Reward Functions in the Wild" is published. The post explores a case in which a {{w|reinforcement learning}} agent fails because its reward function is misspecified.<ref>{{cite web |title=Faulty Reward Functions in the Wild |url=https://openai.com/blog/faulty-reward-functions/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2016 || ? || Staff || Tom Brown joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Tom Brown |url=https://www.linkedin.com/in/nottombrown/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
 
|-
 
| 2017 || {{dts|January}} || Staff || Paul Christiano joins OpenAI to work on AI alignment.<ref>{{cite web |url=https://paulfchristiano.com/ai/ |title=AI Alignment |date=May 13, 2017 |publisher=Paul Christiano |accessdate=May 6, 2018}}</ref> He was previously an intern at OpenAI in 2016.<ref>{{cite web |url=https://blog.OpenAI.com/team-update/ |publisher=OpenAI Blog |title=Team Update |date=March 22, 2017 |accessdate=May 6, 2018}}</ref>
 
|-
 
| 2017 || {{dts|January 19}} || Publication || "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications", a paper on generative models, is submitted to the {{w|ArXiv}}.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Karpathy |first2=Andrej |last3=Chen |first3=Xi |last4=Kingma |first4=Diederik P. |title=PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications |url=https://arxiv.org/abs/1701.05517 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2017 || {{dts|February 8}} || Publication || "Adversarial Attacks on Neural Network Policies" is submitted to the {{w|ArXiv}}. The paper shows that adversarial attacks are effective when targeting neural network policies in reinforcement learning.<ref>{{cite web |last1=Huang |first1=Sandy |last2=Papernot |first2=Nicolas |last3=Goodfellow |first3=Ian |last4=Duan |first4=Yan |last5=Abbeel |first5=Pieter |title=Adversarial Attacks on Neural Network Policies |url=https://arxiv.org/abs/1702.02284 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
| 2017 || {{dts|February}} || Staff || OpenAI Research Scientist Prafulla Dhariwal joins the organization.<ref>{{cite web |title=Prafulla Dhariwal |url=https://www.linkedin.com/in/prafulladhariwal/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
 
|-
 
|-
| 2017 || {{dts|February}} || Staff || OpenAI Researcher Jakub Pachocki joins the organization.<ref>{{cite web |title=Jakub Pachocki |url=https://www.linkedin.com/in/jakub-pachocki/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
+
| 2016 || {{dts|August 29}} || Infrastructure || Publication || "Infrastructure for Deep Learning" is published. The post shows how deep learning research usually proceeds, describes the infrastructure choices OpenAI made to support it, and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for {{w|Kubernetes}}.<ref>{{cite web |title=Infrastructure for Deep Learning |url=https://openai.com/blog/infrastructure-for-deep-learning/ |website=openai.com |accessdate=28 March 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 6}} || Publication || "Third-Person Imitation Learning", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It presents a method for unsupervised third-person imitation learning.<ref>{{cite web |last1=Stadie |first1=Bradly C. |last2=Abbeel |first2=Pieter |last3=Sutskever |first3=Ilya |title=Third-Person Imitation Learning |url=https://arxiv.org/abs/1703.01703 |website=arxiv.org |accessdate=28 March 2020}}</ref>
+
| 2016 || {{dts|October 11}} || {{w|Robotics}} || Publication || "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shah |first2=Zain |last3=Mordatch |first3=Igor |last4=Schneider |first4=Jonas |last5=Blackwell |first5=Trevor |last6=Tobin |first6=Joshua |last7=Abbeel |first7=Pieter |last8=Zaremba |first8=Wojciech |title=Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model |url=https://arxiv.org/abs/1610.03518 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 10}} || Publication || "Evolution Strategies as a Scalable Alternative to Reinforcement Learning" is submitted to the {{w|ArXiv}}. It explores the use of Evolution Strategies (ES), a class of black box optimization algorithms.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Ho |first2=Jonathan |last3=Chen |first3=Xi |last4=Sidor |first4=Szymon |last5=Sutskever |first5=Ilya |title=Evolution Strategies as a Scalable Alternative to Reinforcement Learning |url=https://arxiv.org/abs/1703.03864 |website=arxiv.org |accessdate=28 March 2020}}</ref>
+
| 2016 || {{dts|October 18}} || Safety || Publication || "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the {{w|ArXiv}}. It shows an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE).<ref>{{cite web |last1=Papernot |first1=Nicolas |last2=Abadi |first2=Martín |last3=Erlingsson |first3=Úlfar |last4=Goodfellow |first4=Ian |last5=Talwar |first5=Kunal |title=Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |url=https://arxiv.org/abs/1610.05755 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 12}} || Publication || "Prediction and Control with Temporal Segment Models", a paper on generative models, is first submitted to the {{w|ArXiv}}. It introduces a method for learning the dynamics of complex nonlinear systems based on deep generative models over temporal segments of states and actions.<ref>{{cite web |last1=Mishra |first1=Nikhil |last2=Abbeel |first2=Pieter |last3=Mordatch |first3=Igor |title=Prediction and Control with Temporal Segment Models |url=https://arxiv.org/abs/1703.04070 |website=arxiv.org |accessdate=28 March 2020}}</ref>
+
| 2016 || {{dts|November 14}} || Generative models || Publication || "On the Quantitative Analysis of Decoder-Based Generative Models", a paper on generative models, is submitted to the {{w|ArXiv}}. It introduces a technique to analyze the performance of decoder-based models.<ref>{{cite web |last1=Wu |first1=Yuhuai |last2=Burda |first2=Yuri |last3=Salakhutdinov |first3=Ruslan |last4=Grosse |first4=Roger |title=On the Quantitative Analysis of Decoder-Based Generative Models |url=https://arxiv.org/abs/1611.04273 |website=arxiv.org |accessdate=28 March 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March}} || Donation || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI safety) |accessdate=July 27, 2017}}</ref> The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which {{W|Holden Karnofsky}} (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/OpenAI-general-support |publisher=Open Philanthropy Project |title=OpenAI — General Support |date=December 15, 2017 |accessdate=May 6, 2018}}</ref> The grant is criticized by {{W|Maciej Cegłowski}}<ref>{{cite web |url=https://twitter.com/Pinboard/status/848009582492360704 |title=Pinboard on Twitter |publisher=Twitter |accessdate=May 8, 2018 |quote=What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law.  
Trumpy!}}</ref> and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")<ref>{{cite web |url=http://benjaminrosshoffman.com/OpenAI-makes-humanity-less-safe/ |title=OpenAI makes humanity less safe |date=April 13, 2017 |publisher=Compass Rose |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/OpenAI-makes-humanity-less-safe |title=OpenAI makes humanity less safe |accessdate=May 6, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://donations.vipulnaik.com/donee.php?donee=OpenAI |title=OpenAI donations received |accessdate=May 6, 2018}}</ref> among others.<ref>{{cite web |url=https://www.facebook.com/vipulnaik.r/posts/10211478311489366 |title=I'm having a hard time understanding the rationale... |accessdate=May 8, 2018 |first=Vipul |last=Naik}}</ref>
+
| 2016 || {{dts|November 15}} || || Partnership || A partnership between OpenAI and Microsoft's artificial intelligence division is announced. As part of the partnership, Microsoft provides a price reduction on computing resources to OpenAI through {{W|Microsoft Azure}}.<ref>{{cite web |url=https://www.theverge.com/2016/11/15/13639904/microsoft-OpenAI-ai-partnership-elon-musk-sam-altman |date=November 15, 2016 |publisher=The Verge |first=Nick |last=Statt |title=Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/11/next-battles-clouds-ai-chips/ |title=The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win |publisher=[[wikipedia:WIRED|WIRED]] |first=Cade |last=Metz |accessdate=March 2, 2018 |quote=According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 15}} || Publication || "Emergence of Grounded Compositional Language in Multi-Agent Populations" is first submitted to the {{w|ArXiv}}. The paper proposes a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language.<ref>{{cite web |last1=Mordatch |first1=Igor |last2=Abbeel |first2=Pieter |title=Emergence of Grounded Compositional Language in Multi-Agent Populations |url=https://arxiv.org/abs/1703.04908 |website=arxiv.org |accessdate=26 March 2020}}</ref>
+
| 2016 || {{dts|December 5}} || || Software release || OpenAI's Universe, "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications", is released.<ref>{{cite web |url=https://github.com/OpenAI/universe |accessdate=March 1, 2018 |publisher=GitHub |title=universe}}</ref><ref>{{cite web |url=https://techcrunch.com/2016/12/05/OpenAIs-universe-is-the-fun-parent-every-artificial-intelligence-deserves/ |date=December 5, 2016 |publisher=TechCrunch |title=OpenAI's Universe is the fun parent every artificial intelligence deserves |author=John Mannes |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/12/OpenAIs-universe-computers-learn-use-apps-like-humans/ |title=Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=13103742 |title=OpenAI Universe |website=Hacker News |accessdate=May 5, 2018}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 20}} || Publication || "Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator.<ref>{{cite web |last1=Tobin |first1=Josh |last2=Fong |first2=Rachel |last3=Ray |first3=Alex |last4=Schneider |first4=Jonas |last5=Zaremba |first5=Wojciech |last6=Abbeel |first6=Pieter |title=Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World |url=https://arxiv.org/abs/1703.06907 |website=arxiv.org |accessdate=28 March 2020}}</ref>
+
| 2017 || {{dts|January}} || || Staff || Paul Christiano joins OpenAI to work on AI alignment.<ref>{{cite web |url=https://paulfchristiano.com/ai/ |title=AI Alignment |date=May 13, 2017 |publisher=Paul Christiano |accessdate=May 6, 2018}}</ref> He was previously an intern at OpenAI in 2016.<ref>{{cite web |url=https://blog.openai.com/team-update/ |publisher=OpenAI Blog |title=Team Update |date=March 22, 2017 |accessdate=May 6, 2018}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 21}} || Publication || "One-Shot Imitation Learning", a paper on {{w|robotics}}, is first submitted to the {{w|ArXiv}}. The paper proposes a meta-learning framework for optimizing imitation learning.<ref>{{cite web |title=One-Shot Imitation Learning |url=https://arxiv.org/abs/1703.07326 |website=arxiv.org |accessdate=28 March 2020}}</ref>
+
| 2017 || {{dts|March}} || || Donation || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI safety) |accessdate=July 27, 2017}}</ref> The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which {{W|Holden Karnofsky}} (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/OpenAI-general-support |publisher=Open Philanthropy Project |title=OpenAI — General Support |date=December 15, 2017 |accessdate=May 6, 2018}}</ref> The grant is criticized by {{W|Maciej Cegłowski}}<ref>{{cite web |url=https://twitter.com/Pinboard/status/848009582492360704 |title=Pinboard on Twitter |publisher=Twitter |accessdate=May 8, 2018 |quote=What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law. 
Trumpy!}}</ref> and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")<ref>{{cite web |url=http://benjaminrosshoffman.com/OpenAI-makes-humanity-less-safe/ |title=OpenAI makes humanity less safe |date=April 13, 2017 |publisher=Compass Rose |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/OpenAI-makes-humanity-less-safe |title=OpenAI makes humanity less safe |accessdate=May 6, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://donations.vipulnaik.com/donee.php?donee=OpenAI |title=OpenAI donations received |accessdate=May 6, 2018}}</ref> among others.<ref>{{cite web |url=https://www.facebook.com/vipulnaik.r/posts/10211478311489366 |title=I'm having a hard time understanding the rationale... |accessdate=May 8, 2018 |first=Vipul |last=Naik}}</ref>
 
|-
 
|-
| 2017 || {{dts|March 24}} || Milestone || OpenAI announces having discovered that [[w:Evolution strategy|evolution strategies]] rival the performance of standard {{w|reinforcement learning}} techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL’s inconveniences.<ref>{{cite web |title=Evolution Strategies as a Scalable Alternative to Reinforcement Learning |url=https://openai.com/blog/evolution-strategies/ |website=openai.com |accessdate=5 April 2020}}</ref>
+
| 2017 || {{dts|March 24}} || || Research progress || OpenAI announces having discovered that [[w:Evolution strategy|evolution strategies]] rival the performance of standard {{w|reinforcement learning}} techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL’s inconveniences.<ref>{{cite web |title=Evolution Strategies as a Scalable Alternative to Reinforcement Learning |url=https://openai.com/blog/evolution-strategies/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March}} || Reorganization || Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to {{w|artificial general intelligence}}. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.<ref name="technologyreview.comñ">{{cite web |title=The messy, secretive reality behind OpenAI’s bid to save the world |url=https://www.technologyreview.com/s/615181/ai-OpenAI-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ |website=technologyreview.com |accessdate=28 February 2020}}</ref>
+
| 2017 || {{dts|March}} || || Reorganization || Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to {{w|artificial general intelligence}}. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.<ref name="technologyreview.comñ">{{cite web |title=The messy, secretive reality behind OpenAI’s bid to save the world |url=https://www.technologyreview.com/s/615181/ai-OpenAI-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ |website=technologyreview.com |accessdate=28 February 2020}}</ref>
 
|-
 
|-
| 2017 || {{dts|March}} || Staff || Christopher Berner joins OpenAI as Head of Infrastructure.<ref>{{cite web |title=Christopher Berner |url=https://www.linkedin.com/in/christopherbernerberkeley/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
+
| 2017 || {{dts|April}} || || Coverage || An article entitled "The People Behind OpenAI" is published on {{W|Red Hat}}'s ''Open Source Stories'' website, covering work at OpenAI.<ref>{{cite web |url=https://www.redhat.com/en/open-source-stories/ai-revolutionaries/people-behind-OpenAI |title=Open Source Stories: The People Behind OpenAI |accessdate=May 5, 2018 |first1=Brent |last1=Simoneaux |first2=Casey |last2=Stegman}} In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.</ref><ref>{{cite web |url=https://www.reddit.com/r/OpenAI/comments/63xr4p/profile_of_the_people_behind_OpenAI/ |publisher=reddit |title=Profile of the people behind OpenAI • r/OpenAI |date=April 7, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=14832524 |title=The People Behind OpenAI |website=Hacker News |accessdate=May 5, 2018 |date=July 23, 2017}}</ref>
|-
| 2017 || {{dts|April 6}} || || Software release || OpenAI unveils an unsupervised system able to perform excellent {{w|sentiment analysis}}, despite being trained only to predict the next character in the text of Amazon reviews.<ref>{{cite web |title=Unsupervised Sentiment Neuron |url=https://openai.com/blog/unsupervised-sentiment-neuron/ |website=openai.com |accessdate=5 April 2020}}</ref><ref>{{cite web |url=https://techcrunch.com/2017/04/07/OpenAI-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|April 6}} || || Publication || "Learning to Generate Reviews and Discovering Sentiment" is published.<ref>{{cite web |url=https://techcrunch.com/2017/04/07/openai-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|April 6}} || Neuroevolution || Research progress || OpenAI revisits an older field called “neuroevolution”, and a subset of its algorithms called “evolution strategies”, which are aimed at solving optimization problems. After one hour of training on an Atari challenge, the algorithm reaches a level of mastery that took a reinforcement-learning system published by DeepMind in 2016 a whole day to learn. On the walking problem, the system takes 10 minutes, compared to 10 hours for DeepMind's approach.<ref>{{cite web |title=OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s |url=https://singularityhub.com/2017/04/06/OpenAI-just-beat-the-hell-out-of-deepmind-with-an-algorithm-from-the-80s/ |website=singularityhub.com |accessdate=29 June 2019}}</ref>
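The core evolution-strategies idea can be sketched as follows (a toy illustration, not OpenAI's implementation; the quadratic <code>reward</code> function and all parameter values are invented for the example):

```python
import numpy as np

def reward(theta):
    # Hypothetical objective with its peak at theta = (3, -1)
    return -np.sum((theta - np.array([3.0, -1.0])) ** 2)

def evolution_strategies(iterations=300, pop=50, sigma=0.1, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    for _ in range(iterations):
        # Evaluate a population of Gaussian perturbations of the parameters
        noise = rng.standard_normal((pop, theta.size))
        rewards = np.array([reward(theta + sigma * n) for n in noise])
        # Step along the reward-weighted noise direction (no backpropagation)
        advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * (noise.T @ advantages)
    return theta

print(evolution_strategies())  # moves close to [3, -1]
```

Because only reward evaluations are needed, each perturbation can be scored on a separate worker, which is what made the approach so easy to parallelize.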
|-
| 2017 || {{dts|May 15}} || Robotics || Software release || OpenAI releases Roboschool, an open-source software for robot simulation, integrated with OpenAI Gym.<ref>{{cite web |title=Roboschool |url=https://openai.com/blog/roboschool/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|May 16}} || Robotics || Software release || OpenAI introduces a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.<ref>{{cite web |title=Robots that Learn |url=https://openai.com/blog/robots-that-learn/ |website=openai.com |accessdate=5 April 2020}}</ref>  
|-
| 2017 || {{dts|May 24}} || Reinforcement learning || Software release || OpenAI releases Baselines, a set of implementations of reinforcement learning algorithms.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-baselines-dqn/ |publisher=OpenAI Blog |title=OpenAI Baselines: DQN |date=November 28, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://github.com/OpenAI/baselines |publisher=GitHub |title=OpenAI/baselines |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|June 12}} || Safety || Publication || "Deep reinforcement learning from human preferences" is first uploaded to the arXiv. The paper is a collaboration between researchers at OpenAI and Google DeepMind.<ref>{{cite web |url=https://arxiv.org/abs/1706.03741 |title=[1706.03741] Deep reinforcement learning from human preferences |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/06 |author=gwern |date=June 3, 2017 |title=June 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/story/two-giants-of-ai-team-up-to-head-off-the-robot-apocalypse/ |title=Two Giants of AI Team Up to Head Off the Robot Apocalypse |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises.}}</ref>
|-
| 2017 || {{dts|June 28}} || Robotics || Open sourcing || OpenAI open-sources a high-performance [[w:Python (programming language)|Python]] library for robotic simulation using the MuJoCo engine, developed in the course of OpenAI's robotics research.<ref>{{cite web |title=Faster Physics in Python |url=https://openai.com/blog/faster-robot-simulation-in-python/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|June}} || {{w|Reinforcement learning}} || Partnership || OpenAI partners with {{w|DeepMind}}’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The learning algorithm uses small amounts of human feedback to solve modern {{w|reinforcement learning}} environments.<ref>{{cite web |title=Learning from Human Preferences |url=https://OpenAI.com/blog/deep-reinforcement-learning-from-human-preferences/ |website=OpenAI.com |accessdate=29 June 2019}}</ref>  
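The comparison-based learning signal can be illustrated with a minimal sketch (a linear reward model and synthetic preferences invented for this example; the actual system trains deep networks on human feedback):

```python
import numpy as np

# A reward model r(s) = w @ s is fit so that the segment a "human" preferred
# gets the higher predicted return, via a logistic (Bradley-Terry) likelihood.
rng = np.random.default_rng(0)
w = np.zeros(3)

def segment_return(w, segment):
    return sum(w @ s for s in segment)

def update(w, preferred, rejected, lr=0.5):
    # Probability the model assigns to the human's choice
    margin = segment_return(w, preferred) - segment_return(w, rejected)
    p = 1.0 / (1.0 + np.exp(-margin))
    # Gradient ascent on log p
    grad = (1.0 - p) * (np.sum(preferred, axis=0) - np.sum(rejected, axis=0))
    return w + lr * grad

# Synthetic "human" prefers segments with a larger first feature
for _ in range(200):
    seg_a = rng.standard_normal((5, 3))
    seg_b = rng.standard_normal((5, 3))
    if seg_a[:, 0].sum() >= seg_b[:, 0].sum():
        w = update(w, seg_a, seg_b)
    else:
        w = update(w, seg_b, seg_a)

print(np.round(w, 2))  # the first weight comes to dominate, matching the preference
```

The learned reward model can then supply the reward signal for an ordinary reinforcement-learning loop, replacing a hand-written reward function.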
|-
| 2017 || {{dts|July 27}} || {{w|Reinforcement learning}} || Research progress || OpenAI announces having found that adding adaptive noise to the parameters of {{w|reinforcement learning}} algorithms frequently boosts performance.<ref>{{cite web |title=Better Exploration with Parameter Noise |url=https://openai.com/blog/better-exploration-with-parameter-noise/ |website=openai.com |accessdate=5 April 2020}}</ref>
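The distinction from conventional action-space noise can be sketched as follows (a linear toy policy with invented dimensions; the announced results use deep reinforcement-learning policies):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((2, 4))  # toy deterministic policy: action = W @ obs

def action_noise_policy(obs, sigma=0.1):
    # Conventional exploration: independent noise added to every action
    return W @ obs + sigma * rng.standard_normal(2)

def parameter_noise_policy(sigma=0.1):
    # Parameter noise: perturb the weights once (e.g. per episode), then act
    # deterministically with the perturbed weights
    W_noisy = W + sigma * rng.standard_normal(W.shape)
    return lambda obs: W_noisy @ obs

obs = np.ones(4)
policy = parameter_noise_policy()
# The same observation maps to the same action for the whole episode,
# giving temporally consistent exploration
print(np.allclose(policy(obs), policy(obs)))                            # True
print(np.allclose(action_noise_policy(obs), action_noise_policy(obs)))  # False
```

The temporal consistency is the point: the agent commits to one perturbed behavior long enough to discover whether it earns more reward.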
|-
| 2017 || {{dts|August 12}} || || Achievement || OpenAI's Dota 2 bot beats Danil "Dendi" Ishutin, a professional human player, in one-on-one battles; contemporary reports state the bot also went undefeated against other top professional players at the event.<ref>{{cite web |url=https://techcrunch.com/2017/08/12/OpenAI-bot-remains-undefeated-against-worlds-greatest-dota-2-players/ |date=August 12, 2017 |publisher=TechCrunch |title=OpenAI bot remains undefeated against world's greatest Dota 2 players |author=Jordan Crook |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.theverge.com/2017/8/14/16143392/dota-ai-OpenAI-bot-win-elon-musk |date=August 14, 2017 |publisher=The Verge |title=Did Elon Musk's AI champ destroy humans at video games? It's complicated |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/the-international-dota-2-OpenAI-bot-beats-dendi-2017-8 |date=August 11, 2017 |title=Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{dts|August 13}} || || Coverage || ''{{W|The New York Times}}'' publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.<ref>{{cite web |url=https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html |date=August 13, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=Teaching A.I. Systems to Behave Themselves |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|August 18}} || {{w|Reinforcement learning}} || Software release || OpenAI releases two implementations: ACKTR, a {{w|reinforcement learning}} algorithm, and A2C, a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C).<ref>{{cite web |title=OpenAI Baselines: ACKTR & A2C |url=https://openai.com/blog/baselines-acktr-a2c/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{Dts|September 13}} || {{w|Reinforcement learning}} || Publication || "Learning with Opponent-Learning Awareness" is first uploaded to the {{w|ArXiv}}. The paper presents Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in an environment.<ref>{{cite web |url=https://arxiv.org/abs/1709.04326 |title=[1709.04326] Learning with Opponent-Learning Awareness |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|October 11}} || || Software release || RoboSumo, a game that simulates {{W|sumo wrestling}} for AI to learn to play, is released.<ref>{{cite web |url=https://www.wired.com/story/ai-sumo-wrestlers-could-make-future-robots-more-nimble/ |title=AI Sumo Wrestlers Could Make Future Robots More Nimble |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 3, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/elon-musk-OpenAI-virtual-robots-learn-sumo-wrestle-soccer-sports-ai-tech-science-2017-10 |first1=Alexandra |last1=Appolonia |first2=Justin |last2=Gmoser |date=October 20, 2017 |title=Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{Dts|November 6}} || || Team || ''{{W|The New York Times}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.<ref>{{cite web |url=https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html |date=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|December 6}} || {{w|Neural network}} || Software release || OpenAI releases highly optimized GPU kernels for networks with block-sparse weights, an underexplored class of neural network architectures. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE.<ref>{{cite web |title=Block-Sparse GPU Kernels |url=https://openai.com/blog/block-sparse-gpu-kernels/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|December}} || || Publication || The 2017 AI Index is published. OpenAI contributed to the report.<ref>{{cite web |url=https://www.theverge.com/2017/12/1/16723238/ai-artificial-intelligence-progress-index |date=December 1, 2017 |publisher=The Verge |title=Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat |first=James |last=Vincent |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Safety || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is submitted to the {{w|ArXiv}}. It forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref><ref name="musk-leaves" /><ref>{{cite web |url=https://www.wired.com/story/why-artificial-intelligence-researchers-should-be-more-paranoid/ |title=Why Artificial Intelligence Researchers Should Be More Paranoid |first=Tom |last=Simonite |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || || Donation || OpenAI announces changes in donors and advisors. New donors are: {{W|Jed McCaleb}}, {{W|Gabe Newell}}, {{W|Michael Seibel}}, {{W|Jaan Tallinn}}, and {{W|Ashton Eaton}} and {{W|Brianne Theisen-Eaton}}. {{W|Reid Hoffman}} is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), {{W|Julia Galef}}, and Maran Nelson become advisors. {{W|Elon Musk}} departs the board but remains as a donor and advisor.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-supporters/ |publisher=OpenAI Blog |title=OpenAI Supporters |date=February 21, 2018 |accessdate=March 1, 2018}}</ref><ref name="musk-leaves">{{cite web |url=https://www.theverge.com/2018/2/21/17036214/elon-musk-OpenAI-ai-safety-leaves-board |date=February 21, 2018 |publisher=The Verge |title=Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 26}} || Robotics || Software release || OpenAI releases eight simulated robotics environments and a Baselines implementation of Hindsight Experience Replay, all developed for OpenAI research over the previous year. These environments are used to train models that work on physical robots.<ref>{{cite web |title=Ingredients for Robotics Research |url=https://openai.com/blog/ingredients-for-robotics-research/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{dts|March 3}} || || Event hosting || OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to {{w|AGI}}.<ref>{{cite web |url=https://blog.OpenAI.com/hackathon/ |publisher=OpenAI Blog |title=OpenAI Hackathon |date=February 24, 2018 |accessdate=March 1, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/hackathon-follow-up/ |publisher=OpenAI Blog |title=Report from the OpenAI Hackathon |date=March 15, 2018 |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|April 5}}{{snd}}June 5 || || Event hosting || The OpenAI Retro Contest takes place.<ref>{{cite web |url=https://contest.OpenAI.com/ |title=OpenAI Retro Contest |publisher=OpenAI |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/retro-contest/ |publisher=OpenAI Blog |title=Retro Contest |date=April 13, 2018 |accessdate=May 5, 2018}}</ref> As a result of the release of the Gym Retro library, OpenAI's Universe becomes deprecated.<ref>{{cite web |url=https://github.com/OpenAI/universe/commit/cc9ce6ec241821bfb0f3b85dd455bd36e4ee7a8c |publisher=GitHub |title=OpenAI/universe |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{dts|April 9}} || || Commitment || OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-charter/ |publisher=OpenAI Blog |title=OpenAI Charter |date=April 9, 2018 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/e5mFQGMc7JpechJak/OpenAI-charter |title=OpenAI charter |accessdate=May 5, 2018 |date=April 9, 2018 |author=wunan |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/8azk2n/d_OpenAI_charter/ |publisher=reddit |title=[D] OpenAI Charter • r/MachineLearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16794194 |title=OpenAI Charter |website=Hacker News |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://thenextweb.com/artificial-intelligence/2018/04/10/the-ai-company-elon-musk-co-founded-is-trying-to-create-sentient-machines/ |title=The AI company Elon Musk co-founded intends to create machines with real intelligence |publisher=The Next Web |date=April 10, 2018 |author=Tristan Greene |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|April 19}} || || Financial || ''{{W|The New York Times}}'' publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 {{W|Form 990}}. The salaries include $1.9 million paid to {{W|Ilya Sutskever}} and $800,000 paid to {{W|Ian Goodfellow}} (hired in March of that year).<ref>{{cite web |url=https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-OpenAI.html |date=April 19, 2018 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit |author=Cade Metz |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/dxnc76j/ |publisher=reddit |title="A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16880447 |title=gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit |website=Hacker News |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|May 2}} || Safety || Publication || The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv. The paper proposes training agents via self-play on a zero-sum debate game, in order to address tasks that are too complicated for a human to judge directly.<ref>{{cite web |url=https://arxiv.org/abs/1805.00899 |title=[1805.00899] AI safety via debate |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/debate/ |publisher=OpenAI Blog |title=AI Safety via Debate |date=May 3, 2018 |first1=Geoffrey |last1=Irving |first2=Dario |last2=Amodei |accessdate=May 5, 2018}}</ref>
 
|-
 
|-
| 2017 || {{Dts|November 6}} || Staff || ''{{W|The New York Times}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.<ref>{{cite web |url=https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html |date=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2017 || {{dts|December 4}} || Publication || "Learning Sparse Neural Networks through ''L<sub>0</sub>'' Regularization", a paper on {{w|neural network}} sparsification, is submitted to the {{w|ArXiv}}. It describes a method for learning sparse model structures efficiently with stochastic gradient descent.<ref>{{cite web |last1=Louizos |first1=Christos |last2=Welling |first2=Max |last3=Kingma |first3=Diederik P. |title=Learning Sparse Neural Networks through L0 Regularization |url=https://arxiv.org/abs/1712.01312 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
 
| 2017 || {{dts|December 6}} || Software || OpenAI releases highly optimized GPU kernels for networks with block-sparse weights, an underexplored class of neural network architectures. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE.<ref>{{cite web |title=Block-Sparse GPU Kernels |url=https://openai.com/blog/block-sparse-gpu-kernels/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2017 || {{dts|December}} || Publication || The 2017 AI Index is published. OpenAI is among the contributors to the report.<ref>{{cite web |url=https://www.theverge.com/2017/12/1/16723238/ai-artificial-intelligence-progress-index |date=December 1, 2017 |publisher=The Verge |title=Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat |first=James |last=Vincent |accessdate=March 2, 2018}}</ref>
 
|-
 
| 2017 || {{dts|December}} || Staff || David Luan joins OpenAI as Director of Engineering.<ref>{{cite web |title=David Luan |url=https://www.linkedin.com/in/jluan/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|January}} || Staff || Christy Dennison joins OpenAI as Machine Learning Engineer.<ref>{{cite web |title=Christy Dennison |url=https://www.linkedin.com/in/christydennison/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|January}} || Staff || David Farhi joins OpenAI as Researcher.<ref>{{cite web |title=David Farhi |url=https://www.linkedin.com/in/david-farhi-13824175/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|January}} || Staff || Mathew Shrwed joins OpenAI as Software Engineer.<ref>{{cite web |title=Mathew Shrwed |url=https://www.linkedin.com/in/mshrwed/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|February 3}} || Publication || "DeepType: Multilingual Entity Linking by Neural Type System Evolution", a paper on {{w|natural language processing}}, is submitted to the {{w|ArXiv}}.<ref>{{cite web |last1=Raiman |first1=Jonathan |last2=Raiman |first2=Olivier |title=DeepType: Multilingual Entity Linking by Neural Type System Evolution |url=https://arxiv.org/abs/1802.01021 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|February 13}} || Publication || "Evolved Policy Gradients", a {{w|reinforcement learning}} paper, is first submitted to the {{w|ArXiv}}. It proposes a meta-learning approach for learning gradient-based reinforcement learning (RL) algorithms.<ref>{{cite web |last1=Houthooft |first1=Rein |last2=Chen |first2=Richard Y. |last3=Isola |first3=Phillip |last4=Stadie |first4=Bradly C. |last5=Wolski |first5=Filip |last6=Ho |first6=Jonathan |last7=Abbeel |first7=Pieter |title=Evolved Policy Gradients |url=https://arxiv.org/abs/1802.04821 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|February 20}} || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is submitted to the {{w|ArXiv}}. It forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref><ref name="musk-leaves" /><ref>{{cite web |url=https://www.wired.com/story/why-artificial-intelligence-researchers-should-be-more-paranoid/ |title=Why Artificial Intelligence Researchers Should Be More Paranoid |first=Tom |last=Simonite |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref>
 
|-
 
| 2018 || {{dts|February 20}} || Donation || OpenAI announces changes in donors and advisors. New donors are: {{W|Jed McCaleb}}, {{W|Gabe Newell}}, {{W|Michael Seibel}}, {{W|Jaan Tallinn}}, and {{W|Ashton Eaton}} and {{W|Brianne Theisen-Eaton}}. {{W|Reid Hoffman}} is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), {{W|Julia Galef}}, and Maran Nelson become advisors. {{W|Elon Musk}} departs the board but remains as a donor and advisor.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-supporters/ |publisher=OpenAI Blog |title=OpenAI Supporters |date=February 21, 2018 |accessdate=March 1, 2018}}</ref><ref name="musk-leaves">{{cite web |url=https://www.theverge.com/2018/2/21/17036214/elon-musk-OpenAI-ai-safety-leaves-board |date=February 21, 2018 |publisher=The Verge |title=Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla |accessdate=March 2, 2018}}</ref>
 
|-
 
| 2018 || {{dts|February 26}} || Software || OpenAI releases eight simulated robotics environments and a Baselines implementation of Hindsight Experience Replay, all developed for OpenAI research over the previous year. The environments are used to train models that work on physical robots.<ref>{{cite web |title=Ingredients for Robotics Research |url=https://openai.com/blog/ingredients-for-robotics-research/ |website=openai.com |accessdate=5 April 2020}}</ref>
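The core idea of Hindsight Experience Replay can be sketched in a few lines. The following is a toy illustration with a hypothetical transition format (dicts with made-up keys), not the OpenAI Baselines API:

```python
# Toy sketch of the goal relabeling behind Hindsight Experience Replay (HER).
# The transition format is illustrative, not the OpenAI Baselines API.
def relabel_with_hindsight(episode):
    """Duplicate each transition with its goal replaced by the state the
    episode actually reached last (HER's "final" strategy), so that even
    failed episodes yield successful training examples."""
    relabeled = []
    final_achieved = episode[-1]["achieved_goal"]
    for t in episode:
        relabeled.append(t)  # keep the original (possibly failed) goal
        copy = dict(t)
        copy["goal"] = final_achieved
        # Recompute the sparse reward under the substituted goal.
        copy["reward"] = 0.0 if copy["achieved_goal"] == final_achieved else -1.0
        relabeled.append(copy)
    return relabeled
```

The point of the trick is that sparse-reward tasks, where almost every episode fails, still produce informative (relabeled) successes for the replay buffer.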
 
|-
 
| 2018 || {{dts|February 26}} || Publication || "Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research" is first submitted to the {{w|ArXiv}}. The paper introduces a suite of challenging continuous control tasks based on currently existing robotics hardware, and presents a set of concrete research ideas for improving {{w|reinforcement learning}} algorithms.<ref>{{cite web |title=Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research |url=https://arxiv.org/abs/1802.09464 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|February}} || Staff || Lilian Weng joins OpenAI as Research Scientist.<ref>{{cite web |title=Lilian Weng |url=https://www.linkedin.com/in/lilianweng/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|March 3}} || Publication || "Some Considerations on Learning to Explore via Meta-Reinforcement Learning", a paper on {{w|reinforcement learning}}, is first submitted to the {{w|ArXiv}}. It considers the problem of exploration in meta-reinforcement learning.<ref>{{cite web |last1=Stadie |first1=Bradly C. |last2=Yang |first2=Ge |last3=Houthooft |first3=Rein |last4=Chen |first4=Xi |last5=Duan |first5=Yan |last6=Wu |first6=Yuhuai |last7=Abbeel |first7=Pieter |last8=Sutskever |first8=Ilya |title=Some Considerations on Learning to Explore via Meta-Reinforcement Learning |url=https://arxiv.org/abs/1803.01118 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|March 3}} || Event hosting || OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to {{w|AGI}}.<ref>{{cite web |url=https://blog.OpenAI.com/hackathon/ |publisher=OpenAI Blog |title=OpenAI Hackathon |date=February 24, 2018 |accessdate=March 1, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/hackathon-follow-up/ |publisher=OpenAI Blog |title=Report from the OpenAI Hackathon |date=March 15, 2018 |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2018 || {{dts|March 8}} || Publication || "On First-Order Meta-Learning Algorithms", a paper on {{w|reinforcement learning}}, is submitted to {{w|ArXiv}}. It analyzes meta-learning problems, where there is a distribution of tasks.<ref>{{cite web |last1=Nichol |first1=Alex |last2=Achiam |first2=Joshua |last3=Schulman |first3=John |title=On First-Order Meta-Learning Algorithms |url=https://arxiv.org/abs/1803.02999 |website=arxiv.org |accessdate=26 March 2020}}</ref>
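The first-order meta-update analyzed in this paper (the "Reptile" rule) can be illustrated on a toy one-parameter problem. The quadratic task losses and step sizes below are illustrative assumptions, not the paper's experimental setup:

```python
# Toy sketch of the first-order meta-update ("Reptile") from the paper,
# on a one-parameter problem with losses 0.5 * (theta - target)^2.
def adapt(theta, task_target, lr=0.1, steps=10):
    """Inner loop: a few gradient steps on one task's loss."""
    for _ in range(steps):
        theta -= lr * (theta - task_target)  # gradient of 0.5*(theta-target)^2
    return theta

theta = 0.0   # meta-initialization being learned
eps = 0.5     # meta step size
for task_target in [1.0, -1.0] * 100:
    phi = adapt(theta, task_target)
    theta += eps * (phi - theta)  # Reptile: move the init toward adapted weights
# theta settles near 0, the initialization from which both tasks adapt fastest.
```

Unlike MAML, no second derivatives are needed: the meta-gradient is just the difference between adapted and initial parameters.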
 
|-
 
| 2018 || {{dts|March 15}} || Publication || "Improving GANs Using Optimal Transport", a paper on generative models, is first submitted to the {{w|ArXiv}}. It presents Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Zhang |first2=Han |last3=Radford |first3=Alec |last4=Metaxas |first4=Dimitris |title=Improving GANs Using Optimal Transport |url=https://arxiv.org/abs/1803.05573 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|March 20}} || Publication || "Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}. The paper shows that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.<ref>{{cite web |last1=Wu |first1=Cathy |last2=Rajeswaran |first2=Aravind |last3=Duan |first3=Yan |last4=Kumar |first4=Vikash |last5=Bayen |first5=Alexandre M |last6=Kakade |first6=Sham |last7=Mordatch |first7=Igor |last8=Abbeel |first8=Pieter |title=Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines |url=https://arxiv.org/abs/1803.07246 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|March}} || Staff || Diane Yoon joins OpenAI as Operations Manager.<ref>{{cite web |title=Diane Yoon |url=https://www.linkedin.com/in/diane-yoon-a0a8911b/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{Dts|April 5}}{{snd}}June 5 || Event hosting || The OpenAI Retro Contest takes place.<ref>{{cite web |url=https://contest.OpenAI.com/ |title=OpenAI Retro Contest |publisher=OpenAI |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/retro-contest/ |publisher=OpenAI Blog |title=Retro Contest |date=April 13, 2018 |accessdate=May 5, 2018}}</ref> As a result of the release of the Gym Retro library, OpenAI's Universe becomes deprecated.<ref>{{cite web |url=https://github.com/OpenAI/universe/commit/cc9ce6ec241821bfb0f3b85dd455bd36e4ee7a8c |publisher=GitHub |title=OpenAI/universe |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2018 || {{dts|April 9}} || Commitment || OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-charter/ |publisher=OpenAI Blog |title=OpenAI Charter |date=April 9, 2018 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/e5mFQGMc7JpechJak/OpenAI-charter |title=OpenAI charter |accessdate=May 5, 2018 |date=April 9, 2018 |author=wunan |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/8azk2n/d_OpenAI_charter/ |publisher=reddit |title=[D] OpenAI Charter • r/MachineLearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16794194 |title=OpenAI Charter |website=Hacker News |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://thenextweb.com/artificial-intelligence/2018/04/10/the-ai-company-elon-musk-co-founded-is-trying-to-create-sentient-machines/ |title=The AI company Elon Musk co-founded intends to create machines with real intelligence |publisher=The Next Web |date=April 10, 2018 |author=Tristan Greene |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2018 || {{dts|April 10}} || Publication || "Gotta Learn Fast: A New Benchmark for Generalization in RL", a paper on {{w|reinforcement learning}}, is first submitted to the {{w|ArXiv}}. The paper presents a new benchmark intended to measure the performance of transfer learning and few-shot learning algorithms in the reinforcement learning domain.<ref>{{cite web |last1=Nichol |first1=Alex |last2=Pfau |first2=Vicki |last3=Hesse |first3=Christopher |last4=Klimov |first4=Oleg |last5=Schulman |first5=John |title=Gotta Learn Fast: A New Benchmark for Generalization in RL |url=https://arxiv.org/abs/1804.03720 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{Dts|April 19}} || Financial || ''{{W|The New York Times}}'' publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 {{W|Form 990}}. The salaries include $1.9 million paid to {{W|Ilya Sutskever}} and $800,000 paid to {{W|Ian Goodfellow}} (hired in March of that year).<ref>{{cite web |url=https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-OpenAI.html |date=April 19, 2018 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit |author=Cade Metz |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/dxnc76j/ |publisher=reddit |title="A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16880447 |title=gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit |website=Hacker News |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2018 || {{dts|April}} || Staff || Peter Zhokhov joins OpenAI as Member of the Technical Staff.<ref>{{cite web |title=Peter Zhokhov |url=https://www.linkedin.com/in/peter-zhokhov-b68525b3/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{Dts|May 2}} || Publication || The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv. The paper proposes training agents via self-play on a zero-sum debate game, in order to address tasks that are too complicated for a human to directly judge.<ref>{{cite web |url=https://arxiv.org/abs/1805.00899 |title=[1805.00899] AI safety via debate |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/debate/ |publisher=OpenAI Blog |title=AI Safety via Debate |date=May 3, 2018 |first1=Geoffrey |last1=Irving |first2=Dario |last2=Amodei |accessdate=May 5, 2018}}</ref>
 
|-
 
| 2018 || {{dts|May 16}} || Publication || OpenAI releases an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.<ref>{{cite web |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |website=openai.com |accessdate=5 April 2020}}</ref>
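The arithmetic implied by a 3.4-month doubling time is easy to check. The sketch below is an illustration of the exponent, not code from the OpenAI analysis:

```python
# Arithmetic sketch (not from the OpenAI analysis): what an exponential
# trend with a 3.4-month doubling time implies.
DOUBLING_MONTHS = 3.4

def growth_factor(months):
    """Multiplicative increase in training compute after `months` months."""
    return 2.0 ** (months / DOUBLING_MONTHS)

# One year at this rate is roughly an 11-12x increase in compute.
per_year = growth_factor(12)
```

For comparison, Moore's law's roughly two-year doubling gives only about a 1.4x increase per year, which is why the analysis drew attention.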
 
|-
 
| 2018 || {{dts|May}} || Staff || Susan Zhang joins OpenAI as Research Engineer.<ref>{{cite web |title=Susan Zhang |url=https://www.linkedin.com/in/suchenzang/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|May}} || Staff || Daniel Ziegler joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Daniel Ziegler |url=https://www.linkedin.com/in/daniel-ziegler-b4b61882/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|June 2}} || Publication || "GamePad: A Learning Environment for Theorem Proving" is submitted to the {{w|ArXiv}}. The paper introduces a system called GamePad that can be used to explore the application of machine learning methods to theorem proving in the Coq proof assistant.<ref>{{cite web |last1=Huang |first1=Daniel |last2=Dhariwal |first2=Prafulla |last3=Song |first3=Dawn |last4=Sutskever |first4=Ilya |title=GamePad: A Learning Environment for Theorem Proving |url=https://arxiv.org/abs/1806.00608 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|June 11}} || Research progress || OpenAI announces significant results on a suite of diverse language tasks, obtained with a scalable, task-agnostic system that combines transformers and unsupervised pre-training.<ref>{{cite web |title=Improving Language Understanding with Unsupervised Learning |url=https://openai.com/blog/language-unsupervised/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2018 || {{dts|June 17}} || Publication || OpenAI publishes paper on learning policy representations in multiagent systems. The paper proposes a general learning framework for modeling agent behavior in any multiagent system using only a small amount of interaction data.<ref>{{cite web |title=Learning Policy Representations in Multiagent Systems |url=https://arxiv.org/abs/1806.06464 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
 
|-

| 2018 || {{Dts|June 25}} || Software || OpenAI announces a set of AI algorithms able to hold their own as a team of five and defeat human amateur players at {{w|Dota 2}}, a multiplayer online battle arena video game popular in e-sports for its complexity and necessity for teamwork.<ref>{{cite web |last1=Gershgorn |first1=Dave |title=OpenAI built gaming bots that can work as a team with inhuman precision |url=https://qz.com/1311732/OpenAI-built-gaming-bots-that-can-work-as-a-team-with-inhuman-precision/ |website=qz.com |accessdate=14 June 2019}}</ref> On the algorithmic team, called OpenAI Five, each algorithm uses a {{w|neural network}} to learn both how to play the game and how to cooperate with its AI teammates.<ref>{{cite web |last1=Knight |first1=Will |title=A team of AI algorithms just crushed humans in a complex computer game |url=https://www.technologyreview.com/s/611536/a-team-of-ai-algorithms-just-crushed-expert-humans-in-a-complex-computer-game/ |website=technologyreview.com |accessdate=14 June 2019}}</ref><ref>{{cite web |title=OpenAI’s bot can now defeat skilled Dota 2 teams |url=https://venturebeat.com/2018/06/25/OpenAI-trains-ai-to-defeat-teams-of-skilled-dota-2-players/ |website=venturebeat.com |accessdate=14 June 2019}}</ref>
|-
 
| 2018 || {{Dts|June 26}} || Notable comment || {{w|Bill Gates}} comments on {{w|Twitter}}: {{Quote|AI bots just beat humans at the video game Dota 2. That’s a big deal, because their victory required teamwork and collaboration – a huge milestone in advancing artificial intelligence.}}<ref>{{cite web |last1=Papadopoulos |first1=Loukia |title=Bill Gates Praises Elon Musk-Founded OpenAI’s Latest Dota 2 Win as “Huge Milestone” in Field |url=https://interestingengineering.com/bill-gates-praises-elon-musk-founded-OpenAIs-latest-dota-2-win-as-huge-milestone-in-field |website=interestingengineering.com |accessdate=14 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|June}} || Staff || Yilun Du joins OpenAI as Research Fellow.<ref>{{cite web |title=Yilun Du |url=https://www.linkedin.com/in/yilun-du-04a831112/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|June}} || Staff || Christine McLeavey Payne joins OpenAI's Deep Learning Scholars Program.<ref>{{cite web |title=Christine McLeavey Payne |url=https://www.linkedin.com/in/mcleavey/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|June}} || Staff || Johannes Otterbach joins OpenAI as Member Of Technical Staff (Fellow).<ref>{{cite web |title=Johannes Otterbach |url=https://www.linkedin.com/in/jotterbach/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|June}} || Staff || Karl Cobbe joins OpenAI as Machine Learning Fellow.<ref>{{cite web |title=Karl Cobbe |url=https://www.linkedin.com/in/kcobbe/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|July 9}} || Publication || "Glow: Generative Flow with Invertible 1x1 Convolutions" is first submitted to the {{w|ArXiv}}. The paper proposes a method for obtaining a significant improvement in log-likelihood on standard benchmarks.<ref>{{cite web |last1=Kingma |first1=Diederik P. |last2=Dhariwal |first2=Prafulla |title=Glow: Generative Flow with Invertible 1x1 Convolutions |url=https://arxiv.org/abs/1807.03039 |website=arxiv.org |accessdate=26 March 2020}}</ref>
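The invertible 1x1 convolution at the core of Glow is, per pixel, multiplication of the channel vector by one shared invertible matrix, so it inverts exactly and its log-likelihood contribution has a closed form. A toy NumPy sketch with illustrative shapes (not the paper's code):

```python
import numpy as np

# Toy sketch of Glow's invertible 1x1 convolution: every pixel's channel
# vector is multiplied by the same invertible matrix W, so the map is
# exactly invertible and contributes h * w * log|det W| to the log-likelihood.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 3))        # channel-mixing matrix (3 channels)
x = rng.normal(size=(4, 4, 3))     # a 4x4 "image" with 3 channels

y = x @ W.T                        # forward: the 1x1 convolution
x_back = y @ np.linalg.inv(W).T    # inverse: input recovered exactly
logdet = 4 * 4 * np.log(abs(np.linalg.det(W)))  # log-det term, h = w = 4
```

The paper additionally parameterizes W via an LU decomposition so the determinant is cheap to compute; that detail is omitted here.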
 
|-
 
| 2018 || {{Dts|July 18}} || Commitment || {{w|Elon Musk}}, along with other tech leaders, sign a pledge promising to not develop “lethal autonomous weapons.” They also call on governments to institute laws against such technology. The pledge is organized by the {{w|Future of Life Institute}}, an outreach group focused on tackling existential risks.<ref>{{cite web |last1=Vincent |first1=James |title=Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems |url=https://www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute |website=theverge.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Locklear |first1=Mallory |title=DeepMind, Elon Musk and others pledge not to make autonomous AI weapons |url=https://www.engadget.com/2018/07/18/deepmind-elon-musk-pledge-autonomous-ai-weapons/ |website=engadget.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=Elon Musk, his arch nemesis DeepMind swear off AI weapons |url=https://www.theregister.co.uk/2018/07/19/keep_ai_nonlethal/ |website=theregister.co.uk |accessdate=1 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|July 26}} || Publication || OpenAI publishes paper on variational option discovery algorithms. The paper highlights a tight connection between variational option discovery methods and variational autoencoders, and introduces Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connection.<ref>{{cite web |last1=Achiam |first1=Joshua |last2=Edwards |first2=Harrison |last3=Amodei |first3=Dario |last4=Abbeel |first4=Pieter |title=Variational Option Discovery Algorithms |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{Dts|July 30}} || Software || OpenAI announces a robotics system that can manipulate objects with humanlike dexterity. The system is able to develop these behaviors all on its own. It uses {{w|reinforcement learning}}, in which the AI learns through trial and error, to direct robot hands in grasping and manipulating objects with great precision.<ref>{{cite web |title=OpenAI’s ‘state-of-the-art’ system gives robots humanlike dexterity |url=https://venturebeat.com/2018/07/30/OpenAIs-state-of-the-art-system-gives-robots-humanlike-dexterity/ |website=venturebeat.com |accessdate=14 June 2019}}</ref><ref>{{cite web |last1=Coldewey |first1=Devin |title=OpenAI’s robotic hand doesn’t need humans to teach it human behaviors |url=https://techcrunch.com/2018/07/30/OpenAIs-robotic-hand-doesnt-need-humans-to-teach-it-human-behaviors/ |website=techcrunch.com |accessdate=14 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|August 1}} || Publication || OpenAI publishes paper describing the use of {{w|reinforcement learning}} to learn dexterous in-hand manipulation policies which can perform vision-based object reorientation on a physical Shadow Dexterous Hand.<ref>{{cite web |title=Learning Dexterous In-Hand Manipulation |url=https://arxiv.org/abs/1808.00177 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{Dts|August 7}} || Achievement || The algorithmic team OpenAI Five defeats a team of semi-professional {{w|Dota 2}} players ranked in the 99.95th percentile in the world, in their second public match in the traditional five-versus-five setting, hosted in {{w|San Francisco}}.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov |first1=Vlad |title=The OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |website=engadget.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|August}} || Staff || Ingmar Kanitscheider joins OpenAI as Research Scientist.<ref>{{cite web |title=Ingmar Kanitscheider |url=https://www.linkedin.com/in/ingmar-kanitscheider-148620127/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|August}} || Staff || Miles Brundage joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Miles Brundage |url=https://www.linkedin.com/in/miles-brundage-49b62a4/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|August}} || Staff || Jeffrey Wu joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Jeffrey Wu |url=https://www.linkedin.com/in/wu-the-jeff/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|August 16}} || Publication || OpenAI publishes paper on constant arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.<ref>{{cite web |last1=Chu |first1=Timothy |last2=Cohen |first2=Michael B. |last3=Pachocki |first3=Jakub W. |last4=Peng |first4=Richard |title=Constant Arboricity Spectral Sparsifiers |url=https://arxiv.org/abs/1808.05662 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|September}} || Staff || Christopher Olah joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Christopher Olah |url=https://www.linkedin.com/in/christopher-olah-b574414a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|September}} || Staff || Taehoon Kim joins OpenAI as Research Engineer.<ref>{{cite web |title=Taehoon Kim |url=https://www.linkedin.com/in/carpedm20/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|September}} || Staff || Dario Amodei becomes OpenAI's Research Director.<ref name="Dario Amodeiy"/>
 
|-
 
| 2018 || {{dts|October 2}} || Publication || OpenAI publishes paper on FFJORD (free-form continuous dynamics for scalable reversible generative models), aiming to demonstrate their approach on high-dimensional density estimation, image generation, and variational inference.<ref>{{cite web |last1=Grathwohl |first1=Will |last2=Chen |first2=Ricky T. Q. |last3=Bettencourt |first3=Jesse |last4=Sutskever |first4=Ilya |last5=Duvenaud |first5=David |title=FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models |url=https://arxiv.org/abs/1810.01367 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|October 19}} || Publication || OpenAI publishes paper proposing Iterated Amplification, an alternative training strategy which progressively builds up a training signal for difficult problems by combining solutions to easier subproblems.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shlegeris |first2=Buck |last3=Amodei |first3=Dario |title=Supervising strong learners by amplifying weak experts |url=https://arxiv.org/abs/1810.08575 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{Dts|October}} || Staff || Daniela Amodei joins OpenAI as NLP Team Manager and Head of People Operations.<ref>{{cite web |title=Daniela Amodei |url=https://www.linkedin.com/in/daniela-amodei-790bb22a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|October}} || Staff || Lei Zhang joins OpenAI as Research Fellow.<ref>{{cite web |title=Lei Zhang |url=https://www.linkedin.com/in/lei-zhang-34a60910/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|October}} || Staff || Mark Chen joins OpenAI as Research Scientist.<ref>{{cite web |title=Mark Chen |url=https://www.linkedin.com/in/markchen90/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
| 2018 || {{dts|October 31}} || Software || OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging {{w|reinforcement learning}} agents to explore their environments through curiosity, which for the first time exceeds average human performance on the video game Montezuma’s Revenge.<ref>{{cite web |title=Reinforcement Learning with Prediction-Based Rewards |url=https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/ |website=openai.com |accessdate=5 April 2020}}</ref>
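RND's curiosity bonus can be sketched with linear "networks". The toy version below (not OpenAI's implementation; all shapes and step sizes are illustrative) shows the predictor's error against a fixed random target shrinking on familiar observations while staying high on novel ones:

```python
import numpy as np

# Toy linear sketch of Random Network Distillation (RND): a fixed,
# randomly initialized network is the target; a predictor is trained to
# imitate it on visited states; the predictor's error on an observation
# is the curiosity reward -- high for novel inputs, low for familiar ones.
rng = np.random.default_rng(0)
W_target = rng.normal(size=(4, 8))   # fixed random target (never trained)
W_pred = np.zeros((4, 8))            # predictor, trained on visited states

def bonus(obs):
    """Intrinsic reward: squared prediction error on this observation."""
    return float(np.sum((obs @ W_target - obs @ W_pred) ** 2))

def train_pred(obs, lr=0.05, steps=200):
    """Fit the predictor to the target's output on a visited observation."""
    global W_pred
    for _ in range(steps):
        err = obs @ W_pred - obs @ W_target
        W_pred -= lr * np.outer(obs, err)   # gradient step on squared error
```

Because the target is fixed and deterministic, the bonus avoids the "noisy TV" problem that plagues curiosity methods based on predicting stochastic environment dynamics.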
 
|-
 
| 2018 || {{Dts|November 1}} || Publication || OpenAI publishes a research paper detailing an AI able to defeat humans at the retro platformer [[w:Montezuma's Revenge (video game)|Montezuma’s Revenge]]. The top-performing iteration found 22 of the 24 rooms in the first level, and occasionally discovered all 24.<ref>{{cite web |last1=Wiggers |first1=Kyle |title=OpenAI made a system that’s better at Montezuma’s Revenge than humans |url=https://venturebeat.com/2018/11/01/OpenAI-made-a-system-thats-better-at-montezumas-revenge-than-humans/ |website=venturebeat.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Vincent |first1=James |title=New research from OpenAI uses curious AI to beat video games |url=https://www.theverge.com/2018/11/1/18051196/ai-artificial-intelligence-curiosity-OpenAI-montezumas-revenge-noisy-tv-problem |website=theverge.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|November 5}} || Publication || OpenAI publishes paper proposing a ''plan online, learn offline'' (POLO) framework for the setting where an agent, with an internal model, needs to continually act and learn in the world.<ref>{{cite web |last1=Lowrey |first1=Kendall |last2=Rajeswaran |first2=Aravind |last3=Kakade |first3=Sham |last4=Todorov |first4=Emanuel |last5=Mordatch |first5=Igor |title=Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{Dts|June 26}} || || Notable comment || {{w|Bill Gates}} comments on {{w|Twitter}}: {{Quote|AI bots just beat humans at the video game Dota 2. That’s a big deal, because their victory required teamwork and collaboration – a huge milestone in advancing artificial intelligence.}}<ref>{{cite web |last1=Papadopoulos |first1=Loukia |title=Bill Gates Praises Elon Musk-Founded OpenAI’s Latest Dota 2 Win as “Huge Milestone” in Field |url=https://interestingengineering.com/bill-gates-praises-elon-musk-founded-OpenAIs-latest-dota-2-win-as-huge-milestone-in-field |website=interestingengineering.com |accessdate=14 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|November 18}} || Staff || Clemens Winter joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Clemens Winter |url=https://www.linkedin.com/in/clemens-winter-569887a9/ |website=linkedin.com |accessdate=29 February 2020}}</ref>

|-

| 2018 || {{Dts|July 18}} || || Commitment || {{w|Elon Musk}}, along with other tech leaders, signs a pledge promising not to develop “lethal autonomous weapons.” The signatories also call on governments to institute laws against such technology. The pledge is organized by the {{w|Future of Life Institute}}, an outreach group focused on tackling existential risks.<ref>{{cite web |last1=Vincent |first1=James |title=Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems |url=https://www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute |website=theverge.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Locklear |first1=Mallory |title=DeepMind, Elon Musk and others pledge not to make autonomous AI weapons |url=https://www.engadget.com/2018/07/18/deepmind-elon-musk-pledge-autonomous-ai-weapons/ |website=engadget.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=Elon Musk, his arch nemesis DeepMind swear off AI weapons |url=https://www.theregister.co.uk/2018/07/19/keep_ai_nonlethal/ |website=theregister.co.uk |accessdate=1 June 2019}}</ref>
 
|-
 
| 2018 || {{Dts|July 30}} || Robotics || Software release || OpenAI announces a robotics system that can manipulate objects with humanlike dexterity. The system is able to develop these behaviors all on its own. It uses a reinforcement learning model, in which the AI learns through trial and error, to direct robot hands in grasping and manipulating objects with great precision.<ref>{{cite web |title=OpenAI’s ‘state-of-the-art’ system gives robots humanlike dexterity |url=https://venturebeat.com/2018/07/30/OpenAIs-state-of-the-art-system-gives-robots-humanlike-dexterity/ |website=venturebeat.com |accessdate=14 June 2019}}</ref><ref>{{cite web |last1=Coldewey |first1=Devin |title=OpenAI’s robotic hand doesn’t need humans to teach it human behaviors |url=https://techcrunch.com/2018/07/30/OpenAIs-robotic-hand-doesnt-need-humans-to-teach-it-human-behaviors/ |website=techcrunch.com |accessdate=14 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|November}} || Staff || Amanda Askell joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Amanda Askell |url=https://www.linkedin.com/in/amanda-askell-1ab457175/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2018 || {{Dts|August 7}} || || Achievement || OpenAI Five, a team of algorithmic agents, defeats a team of semi-professional {{w|Dota 2}} players ranked in the 99.95th percentile in the world, in its second public match in the traditional five-versus-five setting, hosted in {{w|San Francisco}}.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov |first1=Vlad |title=The OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |website=engadget.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|August 16}} || {{w|Arboricity}} || Publication || OpenAI publishes paper on constant arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.<ref>{{cite web |last1=Chu |first1=Timothy |last2=Cohen |first2=Michael B. |last3=Pachocki |first3=Jakub W. |last4=Peng |first4=Richard |title=Constant Arboricity Spectral Sparsifiers |url=https://arxiv.org/abs/1808.05662 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 
|-
 
| 2018 || {{dts|September}} || || Team || Dario Amodei becomes OpenAI's Research Director.<ref name="Dario Amodeiy"/>
 
|-
 
| 2018 || {{dts|December 14}} || Publication || OpenAI publishes paper demonstrating that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of {{w|supervised learning}} datasets, {{w|reinforcement learning}} domains, and even generative model training.<ref>{{cite web |last1=McCandlish |first1=Sam |last2=Kaplan |first2=Jared |last3=Amodei |first3=Dario |last4=OpenAI Dota Team |title=An Empirical Model of Large-Batch Training |url=https://arxiv.org/abs/1812.06162 |website=arxiv.org |accessdate=25 March 2020}}</ref>

|-

| 2018 || {{dts|October 31}} || {{w|Reinforcement learning}} || Software release || OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging {{w|reinforcement learning}} agents to explore their environments through curiosity, which for the first time exceeds average human performance on the video game [[w:Montezuma's Revenge (video game)|Montezuma’s Revenge]].<ref>{{cite web |title=Reinforcement Learning with Prediction-Based Rewards |url=https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2018 || {{dts|December}} || Staff || Mateusz Litwin joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Mateusz Litwin |url=https://www.linkedin.com/in/mateusz-litwin-06b3a919/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2018 || {{Dts|November 8}} || {{w|Reinforcement learning}} || Education || OpenAI launches Spinning Up, an educational resource designed to teach anyone deep reinforcement learning. The program consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.<ref>{{cite web |title=Spinning Up in Deep RL |url=https://OpenAI.com/blog/spinning-up-in-deep-rl/ |website=OpenAI.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Ramesh |first1=Prasad |title=OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners |url=https://hub.packtpub.com/OpenAI-launches-spinning-up-a-learning-resource-for-potential-deep-learning-practitioners/ |website=hub.packtpub.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches reinforcement learning training to prepare for artificial general intelligence |url=https://flipboard.com/@venturebeat/OpenAI-launches-reinforcement-learning-training-to-prepare-for-artificial-genera/a-TxuPmdApTGSzPr0ny7qXsw%3Aa%3A2919225365-bafeac8636%2Fventurebeat.com |website=flipboard.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2019 || {{dts|January}} || Staff || Bianca Martin joins OpenAI as Special Projects Manager.<ref>{{cite web |title=Bianca Martin |url=https://www.linkedin.com/in/biancamartin1/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2018 || {{Dts|November 9}} || || Notable comment || {{w|Ilya Sutskever}} gives a speech at the AI Frontiers Conference in {{w|San Jose}}, declaring: {{Quote|We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.}}<ref>{{cite web |title=OpenAI Founder: Short-Term AGI Is a Serious Possibility |url=https://syncedreview.com/2018/11/13/OpenAI-founder-short-term-agi-is-a-serious-possibility/ |website=syncedreview.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2019 || {{dts|February 4}} || Publication || OpenAI publishes paper showing computational limitations in robust classification and win-win results.<ref>{{cite web |last1=Degwekar |first1=Akshay |last2=Nakkiran |first2=Preetum |last3=Vaikuntanathan |first3=Vinod |title=Computational Limitations in Robust Classification and Win-Win Results |url=https://arxiv.org/abs/1902.01086 |website=arxiv.org |accessdate=25 March 2020}}</ref>  

|-

| 2018 || {{Dts|November 19}} || {{w|Reinforcement learning}} || Partnership || OpenAI partners with {{w|DeepMind}} on a paper proposing a new method to train {{w|reinforcement learning}} agents in ways that enable them to surpass human performance. The paper, titled ''Reward learning from human preferences and demonstrations in Atari'', introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.<ref>{{cite web |last1=Rodriguez |first1=Jesus |title=What’s New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning |url=https://towardsdatascience.com/whats-new-in-deep-learning-research-OpenAI-and-deepmind-join-forces-to-achieve-superhuman-48e7d1accf85 |website=towardsdatascience.com |accessdate=29 June 2019}}</ref>
 
|-
 
| 2018 || {{dts|December 4}} || {{w|Reinforcement learning}} || Research progress || OpenAI announces having discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.<ref>{{cite web |title=How AI Training Scales |url=https://openai.com/blog/science-of-ai/ |website=openai.com |accessdate=4 April 2020}}</ref>
 
|-
 
| 2018 || {{Dts|December 6}} || {{w|Reinforcement learning}} || Software release || OpenAI releases CoinRun, a training environment designed to test the adaptability of reinforcement learning agents.<ref>{{cite web |title=OpenAI teaches AI teamwork by playing hide-and-seek |url=https://venturebeat.com/2019/09/17/OpenAI-and-deepmind-teach-ai-to-work-as-a-team-by-playing-hide-and-seek/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI’s CoinRun tests the adaptability of reinforcement learning agents |url=https://venturebeat.com/2018/12/06/OpenAIs-coinrun-tests-the-adaptability-of-reinforcement-learning-agents/ |website=venturebeat.com |accessdate=24 February 2020}}</ref>
 
|-
 
| 2019 || {{dts|February}} || Staff || Danny Hernandez joins OpenAI as Research Scientist.<ref>{{cite web |title=Danny Hernandez |url=https://www.linkedin.com/in/danny-hernandez-2b748823/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2019 || {{Dts|February 14}} || {{w|Natural-language generation}} || Software release || OpenAI unveils its language-generating system called GPT-2, a system able to write news, answer reading comprehension questions, and show promise at tasks like translation.<ref>{{cite web |title=An AI helped us write this article |url=https://www.vox.com/future-perfect/2019/2/14/18222270/artificial-intelligence-open-ai-natural-language-processing |website=vox.com |accessdate=28 June 2019}}</ref> However, neither the data nor the parameters of the model are released, owing to expressed concerns about potential abuse.<ref>{{cite web |last1=Lowe |first1=Ryan |title=OpenAI’s GPT-2: the model, the hype, and the controversy |url=https://towardsdatascience.com/OpenAIs-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8 |website=towardsdatascience.com |accessdate=10 July 2019}}</ref> OpenAI initially tries to communicate the risk posed by this technology.<ref name="ssfr"/>
 
|-
 
| 2019 || {{dts|March 2}} || Publication || OpenAI publishes paper presenting an artificial intelligence research environment that aims to simulate the {{w|natural environment}} setting in microcosm.<ref>{{cite web |last1=Suarez |first1=Joseph |last2=Du |first2=Yilun |last3=Isola |first3=Phillip |last4=Mordatch |first4=Igor |title=Neural MMO: A Massively Multiagent Game Environment for Training and Evaluating Intelligent Agents |url=https://arxiv.org/abs/1903.00784 |website=arxiv.org |accessdate=25 March 2020}}</ref>

|-

| 2019 || {{dts|February 19}} || Safety || Publication || "AI Safety Needs Social Scientists" is published. The paper argues that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved.<ref>{{cite journal |last1=Irving |first1=Geoffrey |last2=Askell |first2=Amanda |title=AI Safety Needs Social Scientists |doi=10.23915/distill.00014 |url=https://distill.pub/2019/safety-needs-social-scientists/}}</ref><ref>{{cite web |title=AI Safety Needs Social Scientists |url=https://openai.com/blog/ai-safety-needs-social-scientists/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2019 || {{dts|March 4}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Neural MMO (massively multiplayer online), a multiagent game environment for {{w|reinforcement learning}} agents. The platform supports a large, variable number of agents within a persistent and open-ended task.<ref>{{cite web |title=Neural MMO: A Massively Multiagent Game Environment |url=https://openai.com/blog/neural-mmo/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2019 || {{dts|March 6}} || || Software release || OpenAI introduces activation atlases, created in collaboration with {{w|Google}} researchers. Activation atlases comprise a new technique for visualizing what interactions between neurons can represent.<ref>{{cite web |title=Introducing Activation Atlases |url=https://openai.com/blog/introducing-activation-atlases/ |website=openai.com |accessdate=5 April 2020}}</ref>  
 
|-
 
| 2019 || {{Dts|March 11}} || || Reorganization || OpenAI announces the creation of OpenAI LP, a new “capped-profit” company owned and controlled by the OpenAI nonprofit organization’s board of directors. The new company is intended to allow OpenAI to rapidly increase its investments in compute and talent while including checks and balances to actualize its mission.<ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches new company for funding safe artificial general intelligence |url=https://venturebeat.com/2019/03/11/OpenAI-launches-new-company-for-funding-safe-artificial-general-intelligence/ |website=venturebeat.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Trazzi |first1=Michaël |title=Considerateness in OpenAI LP Debate |url=https://medium.com/@MichaelTrazzi/considerateness-in-OpenAI-lp-debate-6eb3bf4c5341 |website=medium.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2019 || {{dts|March 20}} || Publication || OpenAI publishes paper presenting techniques to scale MCMC-based training of energy-based models on continuous neural networks.<ref>{{cite web |last1=Du |first1=Yilun |last2=Mordatch |first2=Igor |title=Implicit Generation and Generalization in Energy-Based Models |url=https://arxiv.org/abs/1903.08689 |website=arxiv.org |accessdate=25 March 2020}}</ref>

|-

| 2019 || {{dts|March 21}} || || Software release || OpenAI announces progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models.<ref>{{cite web |title=Implicit Generation and Generalization Methods for Energy-Based Models |url=https://openai.com/blog/energy-based-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2019 || {{Dts|March}} || || Team || {{w|Sam Altman}} leaves his role as the president of {{w|Y Combinator}} to become the {{w|Chief executive officer}} of OpenAI.<ref>{{cite web |title=Sam Altman’s leap of faith |url=https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator |url=https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref name="orgwatch.issarice.com"/>
 
|-
 
| 2019 || {{dts|March}} || Staff || Ilge Akkaya joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Ilge Akkaya |url=https://www.linkedin.com/in/ilge-akkaya-311b4631/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2019 || {{Dts|April 23}} || {{w|Deep learning}} || Publication || OpenAI publishes paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. It utilizes an improved algorithm based on the attention mechanism, able to extract patterns from sequences 30 times longer than previously possible.<ref>{{cite web |last1=Alford |first1=Anthony |title=OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences |url=https://www.infoq.com/news/2019/05/OpenAI-sparse-transformers/ |website=infoq.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x |url=https://medium.com/syncedreview/OpenAI-sparse-transformer-improves-predictable-sequence-length-by-30x-5a65ef2592b9 |website=medium.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=Generative Modeling with Sparse Transformers |url=https://OpenAI.com/blog/sparse-transformer/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2019 || {{Dts|April 25}} || {{w|Neural network}} || Software release || OpenAI announces MuseNet, a deep {{w|neural network}} able to generate 4-minute musical compositions with 10 different instruments and to combine styles from [[w:Country music|country]] to {{w|Mozart}} to {{w|The Beatles}}. The neural network uses general-purpose unsupervised technology.<ref>{{cite web |title=MuseNet |url=https://OpenAI.com/blog/musenet/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
 
|-
 
| 2019 || {{dts|March}} || Staff || Alex Paino joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Alex Paino |url=https://www.linkedin.com/in/atpaino/ |website=linkedin.com |accessdate=28 February 2020}}</ref>

|-

| 2019 || {{Dts|April 27}} || || Event hosting || OpenAI hosts the OpenAI Robotics Symposium 2019.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref>  
 
|-
 
| 2019 || {{dts|March}} || Staff || Karson Elmgren joins OpenAI at People Operations.<ref>{{cite web |title=Karson Elmgren |url=https://www.linkedin.com/in/karson-elmgren-32417732/ |website=linkedin.com |accessdate=29 February 2020}}</ref>

|-

| 2019 || {{Dts|May}} || {{w|Natural-language generation}} || Software release || OpenAI releases a limited version of its language-generating system GPT-2, more powerful than the heavily restricted initial release though still significantly limited compared to the full model, which was withheld over concerns that it would be abused.<ref>{{cite web |title=A poetry-writing AI has just been unveiled. It’s ... pretty good. |url=https://www.vox.com/2019/5/15/18623134/OpenAI-language-ai-gpt2-poetry-try-it |website=vox.com |accessdate=11 July 2019}}</ref> The potential of the new system is recognized by various experts.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI's new multitalented AI writes, translates, and slanders |url=https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-OpenAI-gpt2 |website=theverge.com |accessdate=11 July 2019}}</ref>
 
|-
 
| 2019 || {{dts|June 13}} || {{w|Natural-language generation}} || Coverage || Connor Leahy publishes an article entitled ''The Hacker Learns to Trust'', discussing the work of OpenAI and particularly the potential danger of its language-generating system GPT-2. Leahy highlights: "Because this isn’t just about GPT2. What matters is that at some point in the future, someone will create something truly dangerous and there need to be commonly accepted safety norms before that happens."<ref name="ssfr">{{cite web |title=The Hacker Learns to Trust |url=https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51 |website=medium.com |accessdate=5 May 2020}}</ref>
 
|-
 
| 2019 || {{dts|July 22}} || || Partnership || OpenAI announces an exclusive partnership with {{w|Microsoft}}. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using {{w|Microsoft Azure}} (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft will also be OpenAI's "preferred partner for commercializing new AI technologies."<ref>{{cite web|url = https://OpenAI.com/blog/microsoft/|title = Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI|date = July 22, 2019|accessdate = July 26, 2019|publisher = OpenAI}}</ref><ref>{{cite web|url = https://news.microsoft.com/2019/07/22/OpenAI-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/|title =  OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies|date = July 22, 2019|accessdate = July 26, 2019|publisher = Microsoft}}</ref><ref>{{cite web|url = https://www.businessinsider.com/microsoft-OpenAI-artificial-general-intelligence-investment-2019-7|title = Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence|last = Chan|first= Rosalie|date = July 22, 2019|accessdate = July 26, 2019|publisher = Business Insider}}</ref><ref>{{cite web|url = https://www.forbes.com/sites/mohanbirsawhney/2019/07/24/the-real-reasons-microsoft-invested-in-OpenAI/|title = The Real Reasons Microsoft Invested In OpenAI|last = Sawhney|first = Mohanbir|date = July 24, 2019|accessdate = July 26, 2019|publisher = Forbes}}</ref>
 
|-
 
| 2019 || {{dts|April 27}} || || Event hosting || OpenAI hosts the OpenAI Robotics Symposium 2019.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref>
|-
| 2019 || {{dts|August 20}} || {{w|Natural-language generation}} || Software release || OpenAI announces plans to release a larger version of its language-generating system GPT-2, which had stirred controversy after its initial release in February.<ref>{{cite web |title=OpenAI releases curtailed version of GPT-2 language model |url=https://venturebeat.com/2019/08/20/OpenAI-releases-curtailed-version-of-gpt-2-language-model/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI Just Released an Even Scarier Fake News-Writing Algorithm |url=https://interestingengineering.com/OpenAI-just-released-an-even-scarier-fake-news-writing-algorithm |website=interestingengineering.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OPENAI JUST RELEASED A NEW VERSION OF ITS FAKE NEWS-WRITING AI |url=https://futurism.com/the-byte/OpenAI-new-version-writing-ai |website=futurism.com |accessdate=24 February 2020}}</ref>
 
|-
 
| 2019 || {{dts|April}} || || Team || Todor Markov joins OpenAI as a Machine Learning Researcher.<ref>{{cite web |title=Todor Markov |url=https://www.linkedin.com/in/todor-markov-4aa38a67/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|September 17}} || || Research progress || OpenAI announces having observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training, the agents were able to build a series of six distinct strategies and counterstrategies, some of which were not previously known to be supported by the environment.<ref>{{cite web |title=Emergent Tool Use from Multi-Agent Interaction |url=https://openai.com/blog/emergent-tool-use/ |website=openai.com |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Emergent Tool Use From Multi-Agent Autocurricula |url=https://arxiv.org/abs/1909.07528 |website=arxiv.org |accessdate=4 April 2020}}</ref>
 
|-
 
| 2019 || {{dts|May 3}} || || Publication || OpenAI publishes a study on the transfer of adversarial robustness of [[w:deep learning|deep neural networks]] between different perturbation types.<ref>{{cite web |last1=Kang |first1=Daniel |last2=Sun |first2=Yi |last3=Brown |first3=Tom |last4=Hendrycks |first4=Dan |last5=Steinhardt |first5=Jacob |title=Transfer of Adversarial Robustness Between Perturbation Types |url=https://arxiv.org/abs/1905.01034 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|October 16}} || {{w|Neural network}}s || Research progress || OpenAI announces having trained a pair of {{w|neural network}}s to solve the {{w|Rubik’s Cube}} with a human-like robot hand. The experiment demonstrates that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot.<ref>{{cite web |title=Solving Rubik's Cube with a Robot Hand |url=https://arxiv.org/abs/1910.07113 |website=arxiv.org |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Solving Rubik’s Cube with a Robot Hand |url=https://openai.com/blog/solving-rubiks-cube/ |website=openai.com |accessdate=4 April 2020}}</ref>  
 
|-
 
| 2019 || {{dts|May}} || {{w|Natural-language generation}} || Software release || OpenAI releases a larger version of its language-generating system GPT-2. This version is more powerful than the extremely restricted initial release, though still significantly limited compared to the full model, which OpenAI withheld citing concerns that it would be abused.<ref>{{cite web |title=A poetry-writing AI has just been unveiled. It’s ... pretty good. |url=https://www.vox.com/2019/5/15/18623134/OpenAI-language-ai-gpt2-poetry-try-it |website=vox.com |accessdate=11 July 2019}}</ref> The potential of the new system is recognized by various experts.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI's new multitalented AI writes, translates, and slanders |url=https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-OpenAI-gpt2 |website=theverge.com |accessdate=11 July 2019}}</ref>
|-
| 2019 || {{dts|November 5}} || {{w|Natural-language generation}} || Software release || OpenAI releases the largest version (1.5B parameters) of its language-generating system GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models.<ref>{{cite web |title=GPT-2: 1.5B Release |url=https://openai.com/blog/gpt-2-1-5b-release/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2019 || {{dts|May 28}} || || Publication || OpenAI publishes a study on the dynamics of stochastic gradient descent (SGD) in learning [[w:Deep learning|deep neural networks]] for several real and synthetic classification tasks.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Kalimeris |first3=Dimitris |last4=Yang |first4=Tristan |last5=Edelman |first5=Benjamin L. |last6=Zhang |first6=Fred |last7=Barak |first7=Boaz |title=SGD on Neural Networks Learns Functions of Increasing Complexity |url=https://arxiv.org/abs/1905.11604 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|November 21}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Safety Gym, a suite of environments and tools for measuring progress towards {{w|reinforcement learning}} agents that respect safety constraints while training.<ref>{{cite web |title=Safety Gym |url=https://openai.com/blog/safety-gym/ |website=openai.com |accessdate=5 April 2020}}</ref>
 
|-
 
| 2019 || {{dts|June}} || || Team || Long Ouyang joins OpenAI as a Research Scientist.<ref>{{cite web |title=Long Ouyang |url=https://www.linkedin.com/in/longouyang/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2019 || {{dts|December 3}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Procgen Benchmark, a set of 16 simple-to-use procedurally-generated environments (CoinRun, StarPilot, CaveFlyer, Dodgeball, FruitBot, Chaser, Miner, Jumper, Leaper, Maze, BigFish, Heist, Climber, Plunder, Ninja, and BossFight) which provide a direct measure of how quickly a {{w|reinforcement learning}} agent learns generalizable skills. The procedural generation is intended to keep AI models from overfitting to any single environment.<ref>{{cite web |title=Procgen Benchmark |url=https://openai.com/blog/procgen-benchmark/ |website=openai.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=OpenAI’s Procgen Benchmark prevents AI model overfitting |url=https://venturebeat.com/2019/12/03/openais-procgen-benchmark-overfitting/ |website=venturebeat.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=GENERALIZATION IN REINFORCEMENT LEARNING – EXPLORATION VS EXPLOITATION |url=https://analyticsindiamag.com/generalization-in-reinforcement-learning-exploration-vs-exploitation/ |website=analyticsindiamag.com |accessdate=2 March 2020}}</ref>
 
|-
 
| 2019 || {{dts|July 10}} || || Publication || OpenAI publishes a paper arguing that competitive pressures could incentivize AI companies to underinvest in ensuring that their systems are safe, secure, and socially beneficial.<ref>{{cite web |last1=Askell |first1=Amanda |last2=Brundage |first2=Miles |last3=Hadfield |first3=Gillian |title=The Role of Cooperation in Responsible AI Development |url=https://arxiv.org/abs/1907.04534 |website=arxiv.org |accessdate=25 March 2020}}</ref>
|-
| 2019 || {{dts|December 4}} || || Publication || "Deep Double Descent: Where Bigger Models and More Data Hurt" is submitted to the {{w|ArXiv}}. The paper shows that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as the model size increases, performance first gets worse and then gets better.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Bansal |first3=Yamini |last4=Yang |first4=Tristan |last5=Barak |first5=Boaz |last6=Sutskever |first6=Ilya |title=Deep Double Descent: Where Bigger Models and More Data Hurt |website=arxiv.org |url=https://arxiv.org/abs/1912.02292|accessdate=5 April 2020}}</ref> The paper is summarized on the OpenAI blog.<ref>{{cite web|url = https://openai.com/blog/deep-double-descent/|title = Deep Double Descent|publisher = OpenAI|date = December 5, 2019|accessdate = May 23, 2020}}</ref> MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,<ref>{{cite web|url = https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent|title = Understanding “Deep Double Descent”|date = December 5, 2019|accessdate = 24 May 2020|publisher = LessWrong|last = Hubinger|first = Evan}}</ref> and follows up with a post on the AI safety implications.<ref>{{cite web|url = https://www.lesswrong.com/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around|title = Inductive biases stick around|date = December 18, 2019|accessdate = 24 May 2020|last = Hubinger|first = Evan}}</ref>
 
|-
 
| 2019 || {{dts|December}} || || Team || Dario Amodei is promoted to Vice President of Research at OpenAI.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
 
|-
 
| 2019 || {{dts|July}} || || Team || Irene Solaiman joins OpenAI as a Policy Researcher.<ref>{{cite web |title=Irene Solaiman |url=https://www.linkedin.com/in/irene-solaiman/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2020 || {{dts|January 30}} || {{w|Deep learning}} || Software adoption || OpenAI announces that it will migrate to {{w|Facebook}}'s {{w|PyTorch}} {{w|machine learning}} framework for future projects, setting it as its new standard deep learning framework.<ref>{{cite web |title=OpenAI sets PyTorch as its new standard deep learning framework |url=https://jaxenter.com/OpenAI-pytorch-deep-learning-framework-167641.html |website=jaxenter.com |accessdate=23 February 2020}}</ref><ref>{{cite web |title=OpenAI goes all-in on Facebook’s Pytorch machine learning framework |url=https://venturebeat.com/2020/01/30/OpenAI-facebook-pytorch-google-tensorflow/ |website=venturebeat.com |accessdate=23 February 2020}}</ref>
 
|-
 
| 2020 || {{dts|February 5}} || Safety || Publication || Beth Barnes and Paul Christiano publish ''Writeup: Progress on AI Safety via Debate'' on <code>lesswrong.com</code>, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.<ref>{{cite web |title=Writeup: Progress on AI Safety via Debate |url=https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Things_we_did_in_Q3 |website=lesswrong.com |accessdate=16 May 2020}}</ref>
 
|-
 
| 2019 || {{dts|August}} || || Team || Melanie Subbiah joins OpenAI as a Member of Technical Staff.<ref>{{cite web |title=Melanie Subbiah |url=https://www.linkedin.com/in/melanie-subbiah-7b702a8a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
|-
| 2020 || {{dts|February 17}} || || Coverage || AI reporter Karen Hao of ''MIT Technology Review'' publishes a piece on OpenAI titled ''The messy, secretive reality behind OpenAI’s bid to save the world'', which suggests the company is abandoning its stated commitment to transparency in order to outpace competitors. In response, {{w|Elon Musk}} criticizes OpenAI, saying it lacks transparency.<ref name="Aaron">{{cite web |last1=Holmes |first1=Aaron |title=Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high' |url=https://www.businessinsider.com/elon-musk-criticizes-OpenAI-dario-amodei-artificial-intelligence-safety-2020-2 |website=businessinsider.com |accessdate=29 February 2020}}</ref> On his {{w|Twitter}} account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.<ref>{{cite web |title=Elon Musk |url=https://twitter.com/elonmusk/status/1229546206948462597 |website=twitter.com |accessdate=29 February 2020}}</ref>
|-
 
| 2019 || {{dts|August}} || || Team || Cullen O'Keefe joins OpenAI as a Research Scientist (Policy).<ref>{{cite web |title=Cullen O'Keefe |url=https://www.linkedin.com/in/ccokeefe-law/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
 
 
 
| 2019 || {{dts|November}} || || Team || Ryan Lowe joins OpenAI as a Member of Technical Staff.<ref>{{cite web |title=Ryan Lowe |url=https://www.linkedin.com/in/ryan-lowe-ab67a267/ |website=linkedin.com |accessdate=28 February 2020}}</ref>
 
|-
 
 
 
 
 
 
 
| 2020 || {{dts|January 23}} || || Publication || OpenAI publishes a study on empirical scaling laws for language model performance on the cross-entropy loss.<ref>{{cite web |last1=Kaplan |first1=Jared |last2=McCandlish |first2=Sam |last3=Henighan |first3=Tom |last4=Brown |first4=Tom B. |last5=Chess |first5=Benjamin |last6=Child |first6=Rewon |last7=Gray |first7=Scott |last8=Radford |first8=Alec |last9=Wu |first9=Jeffrey |last10=Amodei |first10=Dario |title=Scaling Laws for Neural Language Models |url=https://arxiv.org/abs/2001.08361 |website=arxiv.org |accessdate=25 March 2020}}</ref>
 
 
 
|}
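
The scaling-laws study in the table above (January 23, 2020 entry) reports that language-model loss is well described by a power law in model size. As a minimal illustration, the functional form L(N) = (N_c/N)^α can be evaluated directly in plain Python; the constants below are illustrative stand-ins of the kind fitted in the paper, not authoritative values:

```python
def power_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Evaluate the power-law form L(N) = (N_c / N) ** alpha.

    n_c and alpha are placeholder constants of the kind the paper
    fits from data; treat them as assumptions, not reported values.
    """
    return (n_c / n_params) ** alpha

# Loss falls monotonically (but slowly) as parameter count grows.
losses = [power_law_loss(n) for n in (1e6, 1e8, 1e10)]
```

The qualitative takeaway matches the entry above: under a power law, each tenfold increase in parameter count multiplies the loss by the same fixed factor, 10^(-α).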
 
===How the timeline was built===
 
  
The initial version of the timeline was written by [[User:Issa|Issa Rice]]. It has been expanded considerably by [[User:Sebastian|Sebastian]].
  
 
{{funding info}} is available.
 
  
 
===What the timeline is still missing===
 
 
  
 
===Timeline update strategy===
 

Latest revision as of 08:22, 24 May 2020

This is a timeline of OpenAI, a safety-focused artificial intelligence research company. OpenAI was founded as a nonprofit in 2015 and reorganized in 2019 into a "capped-profit" structure.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

  • What are some significant events prior to the creation of OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Prelude".
    • You will see some events involving key people such as Elon Musk and Sam Altman that would eventually lead to the creation of OpenAI.
  • What are the various papers and posts published by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Publication".
    • You will see mostly papers submitted to the ArXiv by OpenAI-affiliated researchers, as well as blog posts.
  • What toolkits, implementations, algorithms, systems, and other software has OpenAI released?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Software release".
    • You will see a variety of releases, some of them open-sourced.
  • What are some other significant events describing advances in research?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Research progress".
    • You will see some discoveries and other significant results obtained by OpenAI.
  • What is the staff composition and what are the different roles in the organization?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Staff".
    • You will see the names of people who joined the organization and their roles.
  • What partnerships has OpenAI formed with other organizations?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
    • You will see collaborations with organizations like DeepMind and Microsoft.
  • What are some significant donations received by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
    • You will see names like the Open Philanthropy Project, and Nvidia, among others.
  • What are some notable events hosted by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Event hosting".
  • What are some notable publications by third parties about OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Coverage".

Big picture

{| class="sortable wikitable"
! Time period !! Development summary !! More details
|-
| 2014–2015 || Background || Nick Bostrom's book ''Superintelligence: Paths, Dangers, Strategies'', about the dangers of superhuman machine intelligence, is published. Soon after the book's publication, Elon Musk and Sam Altman, the two people who would become co-chairs and initial donors of OpenAI, publicly state their concern about superhuman machine intelligence.
|-
| 2015 || Establishment || OpenAI is founded as a nonprofit and begins producing research.
|-
| 2019 || Reorganization || OpenAI shifts from nonprofit to "capped-profit" status in order to attract capital.
|}

Visual data

Wikipedia Views

The image below shows Wikipedia Views data for the OpenAI entry on English Wikipedia across desktop, mobile web, mobile app, desktop-spider, and mobile-web-spider, from July 2015 (OpenAI was created around December 2015) to January 2020.[1]

OpenAI wikipedia views.png

Google Trends

The image below shows Google Trends data for the OpenAI entry from December 2015 (OpenAI's creation) to February 2020.[2]

OpenAI Google Trends.png

Full timeline

Year Month and date Domain Event type Details
2014 October 22–24 Prelude During an interview at the AeroAstro Centennial Symposium, Elon Musk, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".[3][4]
2015 February 25 Prelude Sam Altman, president of Y Combinator who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[5]
2015 May 6 Prelude Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of Stripe. In the post, in the section "What comes next" he writes "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".[6][7]
2015 June Prelude Sam Altman and Greg Brockman have a conversation about next steps for Brockman.[8]
2015 June 4 Prelude At Airbnb's Open Air 2015 conference, Sam Altman, president of Y Combinator who would later become a co-chair of OpenAI, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[9]
2015 July (approximate) Prelude Sam Altman sets up a dinner in Menlo Park, California to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, Ilya Sutskever, and Elon Musk.[8]
2015 December 11 Creation OpenAI is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)[10][11][12] Co-founders include Wojciech Zaremba[13],
2015 December Coverage The article "OpenAI" is created on Wikipedia.[14]
2015 December Team OpenAI announces Y Combinator founding partner Jessica Livingston as one of its financial backers.[15]
2016 January Team Ilya Sutskever joins OpenAI as Research Director.[16][17]
2016 January 9 Education The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.[18]
2016 February 25 Optimization Publication "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", a paper on optimization, is first submitted to the ArXiv. The paper presents weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction.[19]
2016 March 31 Team A blog post from this day announces that Ian Goodfellow has joined OpenAI.[20] Previously, Goodfellow worked as Senior Research Scientist at Google.[21][17]
2016 April 26 Team A blog post from this day announces that Pieter Abbeel has joined OpenAI.[22][17]
2016 April 27 Software release The public beta of OpenAI Gym, an open source toolkit that provides environments to test AI bots, is released.[23][24][25]
2016 May 25 Safety Publication "Adversarial Training Methods for Semi-Supervised Text Classification" is submitted to the ArXiv. The paper proposes a method that achieves better results on multiple benchmark semi-supervised and purely supervised tasks.[26]
2016 May 31 Generative models Publication "VIME: Variational Information Maximizing Exploration", a paper on generative models, is submitted to the ArXiv. The paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics.[27]
2016 June 5 Reinforcement learning Publication "OpenAI Gym", a paper on reinforcement learning, is submitted to the ArXiv. It presents OpenAI Gym as a toolkit for reinforcement learning research.[28] OpenAI Gym is considered by some as "a huge opportunity for speeding up the progress in the creation of better reinforcement algorithms, since it provides an easy way of comparing them, on the same conditions, independently of where the algorithm is executed".[29]
2016 June 10 Generative models Publication "Improved Techniques for Training GANs", a paper on generative models, is submitted to the ArXiv. It presents a variety of new architectural features and training procedures that OpenAI applies to the generative adversarial networks (GANs) framework.[30]
2016 June 12 Generative models Publication "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", a paper on generative models, is submitted to ArXiv. It describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.[31]
2016 June 15 Generative models Publication "Improving Variational Inference with Inverse Autoregressive Flow", a paper on generative models, is submitted to the ArXiv. We propose a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces.[32]
2016 June 16 Generative models Publication OpenAI publishes post describing four projects on generative models, a branch of unsupervised learning techniques in machine learning.[33]
2016 June 21 Publication "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the arXiv. The paper explores practical problems in machine learning systems.[34] The paper would receive a shoutout from the Open Philanthropy Project.[35] It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
2016 July Team Dario Amodei joins OpenAI[36], working on the Team Lead for AI Safety.[37][17]
2016 July 8 Publication "Adversarial Examples in the Physical World" is published. One of the authors is Ian Goodfellow, who is at OpenAI at the time.[38]
2016 July 28 OpenAI publishes post calling for applicants to work in the following problem areas of interest:
  • Detect if someone is using a covert breakthrough AI system in the world.
  • Build an agent to win online programming competitions.
  • Cyber-security defense.
  • A complex simulation with many long-lived agents.[39]
2016 August 15 Donation The technology company Nvidia announces that it has donated the first Nvidia DGX-1 (a supercomputer) to OpenAI. OpenAI plans to use the supercomputer to train its AI on a corpus of conversations from Reddit.[40][41][42]
2016 August 29 Infrastructure Publication "Infrastructure for Deep Learning" is published. The post shows how deep learning research usually proceeds. It also describes the infrastructure choices OpenAI made to support it, and announces the open-sourcing of kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes.[43]
2016 October 11 Robotics Publication "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on robotics, is submitted to the ArXiv. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.[44]
2016 October 18 Safety Publication "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the ArXiv. It shows an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE).[45]
2016 November 14 Generative models Publication "On the Quantitative Analysis of Decoder-Based Generative Models", a paper on generative models, is submitted to the ArXiv. It introduces a technique to analyze the performance of decoder-based models.[46]
2016 November 15 Partnership A partnership between OpenAI and Microsoft's artificial intelligence division is announced. As part of the partnership, Microsoft provides a price reduction on computing resources to OpenAI through Microsoft Azure.[47][48]
2016 December 5 Software release OpenAI's Universe, "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications", is released.[49][50][51][52]
2017 January Staff Paul Christiano joins OpenAI to work on AI alignment.[53] He was previously an intern at OpenAI in 2016.[54]
2017 March Donation The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[55] The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which Holden Karnofsky (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.[56] The grant is criticized by Maciej Cegłowski[57] and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")[58][59][60] among others.[61]
2017 March 24 Research progress OpenAI announces having discovered that evolution strategies rival the performance of standard reinforcement learning techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL’s inconveniences.[62]
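The approach can be illustrated in a few lines of code. The sketch below is a toy evolution strategy applied to a quadratic objective (the objective, hyperparameters, and target vector here are illustrative choices, not the Atari/MuJoCo benchmark setup):

```python
import numpy as np

# Toy evolution strategy: sample Gaussian perturbations of the parameter
# vector, score each perturbed candidate with the reward function, and move
# the parameters along the reward-weighted average of the noise. No
# backpropagation through the model is needed.
rng = np.random.default_rng(0)

solution = np.array([0.5, 0.1, -0.3])         # optimum the strategy should find

def f(w):
    return -np.sum((w - solution) ** 2)       # reward: negative squared distance

npop, sigma, alpha = 50, 0.1, 0.001           # population size, noise std, step size
w = np.zeros(3)                               # initial parameter guess

for _ in range(300):
    N = rng.standard_normal((npop, 3))        # one perturbation per population member
    R = np.array([f(w + sigma * n) for n in N])
    A = (R - R.mean()) / (R.std() + 1e-8)     # standardize rewards to advantages
    w = w + alpha / (npop * sigma) * N.T @ A  # reward-weighted step
```

After 300 generations, w lands close to solution even though the optimizer never computes a gradient, which is what makes the method easy to parallelize across workers.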
2017 March Reorganization Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to artificial general intelligence. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.[63]
2017 April Coverage An article entitled "The People Behind OpenAI" is published on Red Hat's Open Source Stories website, covering work at OpenAI.[64][65][66]
2017 April 6 Software release OpenAI unveils an unsupervised system able to perform excellent sentiment analysis, despite being trained only to predict the next character in the text of Amazon reviews.[67][68]
2017 April 6 Publication "Learning to Generate Reviews and Discovering Sentiment" is published.[69]
2017 April 6 Neuroevolution Research progress OpenAI revives an old field called “neuroevolution”, and a subset of its algorithms called “evolution strategies”, which are aimed at solving optimization problems. In one hour of training on an Atari challenge, the algorithm reaches a level of mastery that took a reinforcement learning system published by DeepMind in 2016 a whole day to learn. On the walking problem, the system takes 10 minutes, compared to 10 hours for DeepMind's approach.[70]
2017 May 15 Robotics Software release OpenAI releases Roboschool, an open-source software for robot simulation, integrated with OpenAI Gym.[71]
2017 May 16 Robotics Software release OpenAI introduces a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.[72]
2017 May 24 Reinforcement learning Software release OpenAI releases Baselines, a set of implementations of reinforcement learning algorithms.[73][74]
2017 June 12 Safety Publication "Deep reinforcement learning from human preferences" is first uploaded to the arXiv. The paper is a collaboration between researchers at OpenAI and Google DeepMind.[75][76][77]
2017 June 28 Robotics Open sourcing OpenAI open-sources a high-performance Python library for robotic simulation using the MuJoCo engine, developed in the course of OpenAI's research on robotics.[78]
2017 June Reinforcement learning Partnership OpenAI partners with DeepMind’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The learning algorithm uses small amounts of human feedback to solve modern reinforcement learning environments.[79]
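The mechanism behind this algorithm can be sketched with a toy reward model: the probability that a human prefers behavior A over B is modeled as a logistic function of the predicted reward difference, and the reward model is fit to the recorded comparisons. The example below uses a synthetic linear reward model, an illustrative simplification (the actual work trains neural network reward predictors on trajectory segments):

```python
import numpy as np

# Toy reward learning from pairwise preferences: under a Bradley-Terry-style
# model, P(A preferred over B) = sigmoid(r(A) - r(B)). Here r is a linear
# function of trajectory features, fit by gradient descent on the logistic
# loss over synthetic comparisons.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])                    # hidden "human" preference weights

a = rng.standard_normal((500, 2))                 # features of trajectory A per pair
b = rng.standard_normal((500, 2))                 # features of trajectory B
labels = (a @ true_w > b @ true_w).astype(float)  # 1 if A preferred

w = np.zeros(2)                                   # learned reward weights
for _ in range(2000):
    p = 1 / (1 + np.exp(-(a - b) @ w))            # model's P(A preferred)
    w -= 0.1 * ((p - labels)[:, None] * (a - b)).mean(axis=0)

# Fraction of comparisons the learned reward model ranks the same way:
agreement = np.mean((a @ w > b @ w) == (labels == 1))
```

The learned reward function ends up ranking behaviors the way the (simulated) human does, which is the quantity the downstream RL agent then optimizes.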
2017 July 27 Reinforcement learning Research progress OpenAI announces having found that adding adaptive noise to the parameters of reinforcement learning algorithms frequently boosts performance.[80]
2017 August 12 Achievement OpenAI's Dota 2 bot beats Danil "Dendi" Ishutin, a professional human player, and possibly other players, in one-on-one battles.[81][82][83]
2017 August 13 Coverage The New York Times publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.[84]
2017 August 18 Reinforcement learning Software release OpenAI releases two implementations: ACKTR, a reinforcement learning algorithm, and A2C, a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C).[85]
2017 September 13 Reinforcement learning Publication "Learning with Opponent-Learning Awareness" is first uploaded to the ArXiv. The paper presents Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in an environment.[86][87]
2017 October 11 Software release RoboSumo, a game that simulates sumo wrestling for AI to learn to play, is released.[88][89]
2017 November 6 Team The New York Times reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.[90]
2017 December 6 Neural network Software release OpenAI releases highly-optimized GPU kernels for networks with block-sparse weights, an underexplored class of neural network architectures. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE.[91]
2017 December Publication The 2017 AI Index is published. OpenAI contributed to the report.[92]
2018 February 20 Safety Publication The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is submitted to the ArXiv. It forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.[93][94][95][96][97]
2018 February 20 Donation OpenAI announces changes in donors and advisors. New donors are: Jed McCaleb, Gabe Newell, Michael Seibel, Jaan Tallinn, and Ashton Eaton and Brianne Theisen-Eaton. Reid Hoffman is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), Julia Galef, and Maran Nelson become advisors. Elon Musk departs the board but remains as a donor and advisor.[98][96]
2018 February 26 Robotics Software release OpenAI releases eight simulated robotics environments and a Baselines implementation of Hindsight Experience Replay, all developed for OpenAI research over the previous year. These environments were used to train models that work on physical robots.[99]
2018 March 3 Event hosting OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to AGI.[100][101]
2018 April 5 – June 5 Event hosting The OpenAI Retro Contest takes place.[102][103] As a result of the release of the Gym Retro library, OpenAI's Universe becomes deprecated.[104]
2018 April 9 Commitment OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.[105][106][107][108][109]
2018 April 19 Financial The New York Times publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 Form 990. The salaries include $1.9 million paid to Ilya Sutskever and $800,000 paid to Ian Goodfellow (hired in March of that year).[110][111][112]
2018 May 2 Safety Publication The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv. The paper proposes training agents via self-play on a zero-sum debate game, in order to address tasks that are too complicated for a human to judge directly.[113][114]
2018 May 16 Computation Publication OpenAI releases an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.[115]
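The arithmetic behind the doubling-time claim is simple: over a span of m months, compute grows by a factor of 2^(m/3.4), i.e. roughly an order of magnitude per year. A minimal illustration:

```python
# Growth factor implied by a doubling time: over `months` months, compute
# multiplies by 2 ** (months / doubling_time).
def growth_factor(months, doubling_time=3.4):
    return 2 ** (months / doubling_time)

per_year = growth_factor(12)        # ~11.5x per year
per_two_years = growth_factor(24)   # ~133x over two years
```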
2018 June 11 Unsupervised learning Research progress OpenAI announces having obtained significant results on a suite of diverse language tasks with a scalable, task-agnostic system, which uses a combination of transformers and unsupervised pre-training.[116]
2018 June 25 Neural network Software release OpenAI announces a set of AI algorithms able to hold their own as a team of five and defeat human amateur players at Dota 2, a multiplayer online battle arena video game popular in e-sports for its complexity and need for teamwork.[117] In the algorithmic team, called OpenAI Five, each algorithm uses a neural network to learn both how to play the game and how to cooperate with its AI teammates.[118][119]
2018 June 26 Notable comment Bill Gates comments on Twitter:
AI bots just beat humans at the video game Dota 2. That’s a big deal, because their victory required teamwork and collaboration – a huge milestone in advancing artificial intelligence.
[120]
2018 July 18 Commitment Elon Musk, along with other tech leaders, signs a pledge promising not to develop “lethal autonomous weapons.” They also call on governments to institute laws against such technology. The pledge is organized by the Future of Life Institute, an outreach group focused on tackling existential risks.[121][122][123]
2018 July 30 Robotics Software release OpenAI announces a robotics system that can manipulate objects with humanlike dexterity. The system develops these behaviors on its own, using a reinforcement learning model in which the AI learns through trial and error to direct robot hands in grasping and manipulating objects with great precision.[124][125]
2018 August 7 Achievement Algorithmic team OpenAI Five defeats a team of semi-professional Dota 2 players ranked in the 99.95th percentile in the world, in their second public match in the traditional five-versus-five settings, hosted in San Francisco.[126][127][128][129]
2018 August 16 Arboricity Publication OpenAI publishes a paper on constant-arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.[130]
2018 September Team Dario Amodei becomes OpenAI's Research Director.[37]
2018 October 31 Reinforcement learning Software release OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on the video game Montezuma’s Revenge.[131]
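The idea behind RND can be sketched in miniature: a fixed, randomly initialized "target" network embeds observations, a "predictor" network is trained to match it on states the agent has visited, and the remaining prediction error serves as the curiosity bonus. The toy example below uses linear networks and visited states confined to a subspace, which are illustrative assumptions (the actual method trains convolutional networks on game frames):

```python
import numpy as np

# Random Network Distillation in miniature. A fixed random "target" network
# embeds observations; a "predictor" network is trained to match it on
# visited states. Prediction error is the curiosity bonus: low on familiar
# states, high on novel ones the predictor has never fit.
rng = np.random.default_rng(0)
W_target = rng.standard_normal((8, 4))   # fixed, never trained
W_pred = np.zeros((8, 4))                # trained to imitate the target

# Visited states span only the first 2 of 4 observation dimensions, so the
# predictor only learns to match the target on that subspace.
familiar = np.zeros((64, 4))
familiar[:, :2] = rng.standard_normal((64, 2))

for _ in range(200):                     # fit predictor on visited states
    err = familiar @ (W_pred - W_target).T           # (64, 8) prediction errors
    W_pred -= 0.05 * err.T @ familiar / len(familiar)

def intrinsic_reward(obs):
    return float(np.sum((obs @ (W_pred - W_target).T) ** 2))

r_seen = intrinsic_reward(familiar[0])                      # low: trained on
r_novel = intrinsic_reward(np.array([0.0, 0.0, 1.0, 1.0]))  # high: unseen dims
```

The bonus naturally decays for states the agent revisits, which is what drives the agent toward unexplored parts of the environment.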
2018 November 8 Reinforcement learning Education OpenAI launches Spinning Up, an educational resource designed to teach anyone deep reinforcement learning. The program consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.[132][133][134]
2018 November 9 Notable comment Ilya Sutskever gives a speech at the AI Frontiers Conference in San Jose, declaring:
We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.
[135]
2018 November 19 Reinforcement learning Partnership OpenAI partners with DeepMind in a new paper that proposes a new method to train reinforcement learning agents in ways that enable them to surpass human performance. The paper, titled Reward learning from human preferences and demonstrations in Atari, introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.[136]
2018 December 4 Reinforcement learning Research progress OpenAI announces having discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.[137]
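In its simplest form, the gradient noise scale is the trace of the per-example gradient covariance divided by the squared norm of the mean gradient. Below is a rough sketch of estimating it from a batch of per-example gradients, using synthetic data (not the distributed estimator used in large-scale training):

```python
import numpy as np

# "Simple" gradient noise scale: trace of the per-example gradient
# covariance divided by the squared norm of the mean gradient. Large values
# suggest training can benefit from larger batch sizes.
def simple_noise_scale(per_example_grads):
    g = np.asarray(per_example_grads)        # shape (batch, n_params)
    mean_grad = g.mean(axis=0)
    trace_cov = g.var(axis=0, ddof=1).sum()  # tr(Sigma)
    return trace_cov / np.dot(mean_grad, mean_grad)

# Synthetic per-example gradients: a shared direction plus i.i.d. noise,
# so the expected value is tr(0.25 * I_3) / |G|^2 = 0.75 / 5.25 ~ 0.14.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
grads = true_grad + 0.5 * rng.standard_normal((256, 3))
b = simple_noise_scale(grads)
```

Intuitively, when per-example gradients mostly agree (small covariance relative to the mean gradient), small batches already give an accurate gradient and extra parallelism buys little.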
2018 December 6 Reinforcement learning Software release OpenAI releases CoinRun, a training environment designed to test the adaptability of reinforcement learning agents.[138][139]
2019 February 14 Natural-language generation Software release OpenAI unveils its language-generating system called GPT-2, a system able to write news articles, answer reading comprehension problems, and show promise at tasks like translation.[140] However, neither the training data nor the parameters of the model are released, with OpenAI citing concerns about potential abuse.[141] OpenAI initially tries to communicate the risk posed by this technology.[142]
2019 February 19 Safety Publication "AI Safety Needs Social Scientists" is published. The paper argues that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved.[143][144]
2019 March 4 Reinforcement learning Software release OpenAI releases Neural MMO (massively multiplayer online), a multiagent game environment for reinforcement learning agents. The platform supports a large, variable number of agents within a persistent and open-ended task.[145]
2019 March 6 Software release OpenAI introduces activation atlases, created in collaboration with Google researchers. Activation atlases are a new technique for visualizing what interactions between neurons can represent.[146]
2019 March 11 Reorganization OpenAI announces the creation of OpenAI LP, a new “capped-profit” company owned and controlled by the OpenAI nonprofit organization’s board of directors. The new company is intended to allow OpenAI to rapidly increase its investments in compute and talent while including checks and balances to actualize its mission.[147][148]
2019 March 21 Software release OpenAI announces progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models.[149]
2019 March Team Sam Altman leaves his role as president of Y Combinator to become the CEO of OpenAI.[150][151][17]
2019 April 23 Deep learning Publication OpenAI publishes a paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. It utilizes an improved algorithm based on the attention mechanism, and is able to extract patterns from sequences 30 times longer than previously possible.[152][153][154]
2019 April 25 Neural network Software release OpenAI announces MuseNet, a deep neural network able to generate 4-minute musical compositions with 10 different instruments and to combine styles ranging from country to Mozart to The Beatles. The neural network uses general-purpose unsupervised technology.[155]
2019 April 27 Event hosting OpenAI hosts the OpenAI Robotics Symposium 2019.[156]
2019 May Natural-language generation Software release OpenAI releases a limited version of its language-generating system GPT-2. This version is more powerful than the extremely restricted initial release, though still significantly limited compared to the full model, with OpenAI citing concerns that the system would be abused.[157] The potential of the new system is recognized by various experts.[158]
2019 June 13 Natural-language generation Coverage Connor Leahy publishes an article entitled The Hacker Learns to Trust, which discusses the work of OpenAI and particularly the potential danger of its language-generating system GPT-2. Leahy highlights: "Because this isn’t just about GPT2. What matters is that at some point in the future, someone will create something truly dangerous and there need to be commonly accepted safety norms before that happens."[142]
2019 July 22 Partnership OpenAI announces an exclusive partnership with Microsoft. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using Microsoft Azure (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft will also be OpenAI's "preferred partner for commercializing new AI technologies."[159][160][161][162]
2019 August 20 Natural-language generation Software release OpenAI announces a plan to release a version of its language-generating system GPT-2, which stirred controversy after its release in February.[163][164][165]
2019 September 17 Research progress OpenAI announces having observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training, the agents build a series of six distinct strategies and counterstrategies, some of which the researchers did not know the environment supported.[166][167]
2019 October 16 Neural networks Research progress OpenAI announces having trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The experiment demonstrates that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot.[168][169]
2019 November 5 Natural-language generation Software release OpenAI releases the largest version (1.5B parameters) of its language-generating system GPT-2 along with code and model weights to facilitate detection of outputs of GPT-2 models.[170]
2019 November 21 Reinforcement learning Software release OpenAI releases Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training.[171]
2019 December 3 Reinforcement learning Software release OpenAI releases Procgen Benchmark, a set of 16 simple-to-use procedurally-generated environments (CoinRun, StarPilot, CaveFlyer, Dodgeball, FruitBot, Chaser, Miner, Jumper, Leaper, Maze, BigFish, Heist, Climber, Plunder, Ninja, and BossFight) which provide a direct measure of how quickly a reinforcement learning agent learns generalizable skills. Procgen Benchmark prevents AI model overfitting.[172][173][174]
2019 December 4 Publication "Deep Double Descent: Where Bigger Models and More Data Hurt" is submitted to the ArXiv. The paper shows that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as the model size increases, performance first gets worse and then gets better.[175] The paper is summarized on the OpenAI blog.[176] MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,[177] and follows up with a post on the AI safety implications.[178]
2019 December Team Dario Amodei is promoted to OpenAI's Vice President of Research.[37]
2020 January 30 Deep learning Software adoption OpenAI announces its migration to Facebook's PyTorch machine learning framework for future projects, adopting it as its new standard deep learning framework.[179][180]
2020 February 5 Safety Publication Beth Barnes and Paul Christiano publish "Writeup: Progress on AI Safety via Debate" on lesswrong.com, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.[181]
2020 February 17 Coverage AI reporter Karen Hao at MIT Technology Review publishes an article on OpenAI titled The messy, secretive reality behind OpenAI’s bid to save the world, which suggests the company is abandoning its commitment to transparency in order to outpace competitors. In response, Elon Musk criticizes OpenAI, saying it lacks transparency.[182] On his Twitter account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.[183]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice. It has been expanded considerably by Sebastian.

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "OpenAI". wikipediaviews.org. Retrieved 1 March 2020. 
  2. "OpenAI". trends.google.com. Retrieved 1 March 2020. 
  3. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  4. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  5. "Machine intelligence, part 1". Sam Altman. Retrieved July 27, 2017. 
  6. Brockman, Greg (May 6, 2015). "Leaving Stripe". Greg Brockman on Svbtle. Retrieved May 6, 2018. 
  7. Carson, Biz (May 6, 2015). "One of the first employees of $3.5 billion startup Stripe is leaving to form his own company". Business Insider. Retrieved May 6, 2018. 
  8. 8.0 8.1 "My path to OpenAI". Greg Brockman on Svbtle. May 3, 2016. Retrieved May 8, 2018. 
  9. Matt Weinberger (June 4, 2015). "Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last". Business Insider. Retrieved July 27, 2017. 
  10. John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017. The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco. 
  11. "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017. 
  12. Drew Olanoff (December 11, 2015). "Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman". TechCrunch. Retrieved March 2, 2018. 
  13. "Wojciech Zaremba". linkedin.com. Retrieved 28 February 2020. 
  14. "OpenAI: Revision history". wikipedia.org. Retrieved 6 April 2020. 
  15. Priestly, Theo (December 11, 2015). "Elon Musk And Peter Thiel Launch OpenAI, A Non-Profit Artificial Intelligence Research Company". Forbes. Retrieved 8 July 2019. 
  16. "Ilya Sutskever". AI Watch. April 8, 2018. Retrieved May 6, 2018. 
  17. 17.0 17.1 17.2 17.3 17.4 "Information for OpenAI". orgwatch.issarice.com. Retrieved 5 May 2020. 
  18. "AMA: the OpenAI Research Team • r/MachineLearning". reddit. Retrieved May 5, 2018. 
  19. Salimans, Tim; Kingma, Diederik P. "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks". arxiv.org. Retrieved 27 March 2020. 
  20. Brockman, Greg (March 22, 2017). "Team++". OpenAI Blog. Retrieved May 6, 2018. 
  21. "Ian Goodfellow". linkedin.com. Retrieved 24 April 2020. 
  22. Sutskever, Ilya (March 20, 2017). "Welcome, Pieter and Shivon!". OpenAI Blog. Retrieved May 6, 2018. 
  23. "OpenAI Gym Beta". OpenAI Blog. March 20, 2017. Retrieved March 2, 2018. 
  24. "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". WIRED. April 27, 2016. Retrieved March 2, 2018. This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning" 
  25. Shead, Sam (April 28, 2016). "Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers". Business Insider. Retrieved March 3, 2018. 
  26. Miyato, Takeru; Dai, Andrew M.; Goodfellow, Ian. "Adversarial Training Methods for Semi-Supervised Text Classification". arxiv.org. Retrieved 28 March 2020. 
  27. Houthooft, Rein; Chen, Xi; Duan, Yan; Schulman, John; De Turck, Filip; Abbeel, Pieter. "VIME: Variational Information Maximizing Exploration". arxiv.org. Retrieved 27 March 2020. 
  28. Brockman, Greg; Cheung, Vicki; Pettersson, Ludwig; Schneider, Jonas; Schulman, John; Tang, Jie; Zaremba, Wojciech. "OpenAI Gym". arxiv.org. Retrieved 27 March 2020. 
  29. "OPENAI GYM". theconstructsim.com. Retrieved 16 May 2020. 
  30. Salimans, Tim; Goodfellow, Ian; Zaremba, Wojciech; Cheung, Vicki; Radford, Alec; Chen, Xi. "Improved Techniques for Training GANs". arxiv.org. Retrieved 27 March 2020. 
  31. "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets". arxiv.org. Retrieved 27 March 2020. 
  32. Kingma, Diederik P.; Salimans, Tim; Jozefowicz, Rafal; Chen, Xi; Sutskever, Ilya; Welling, Max. "Improving Variational Inference with Inverse Autoregressive Flow". arxiv.org. Retrieved 28 March 2020. 
  33. "Generative Models". openai.com. Retrieved 5 April 2020. 
  34. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  35. Karnofsky, Holden (June 23, 2016). "Concrete Problems in AI Safety". Retrieved April 18, 2020. 
  36. "Dario Amodei - Research Scientist @ OpenAI". Crunchbase. Retrieved May 6, 2018. 
  37. 37.0 37.1 37.2 "Dario Amodei". linkedin.com. Retrieved 29 February 2020. 
  38. Metz, Cade (July 29, 2016). "How To Fool AI Into Seeing Something That Isn't There". WIRED. Retrieved March 3, 2018. 
  39. "Special Projects". openai.com. Retrieved 5 April 2020. 
  40. "NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI". The Official NVIDIA Blog. August 15, 2016. Retrieved May 5, 2018. 
  41. Vanian, Jonathan (August 15, 2016). "Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group". Fortune. Retrieved May 5, 2018. 
  42. De Jesus, Cecille (August 17, 2016). "Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak". Futurism. Retrieved May 5, 2018. 
  43. "Infrastructure for Deep Learning". openai.com. Retrieved 28 March 2020. 
  44. Christiano, Paul; Shah, Zain; Mordatch, Igor; Schneider, Jonas; Blackwell, Trevor; Tobin, Joshua; Abbeel, Pieter; Zaremba, Wojciech. "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model". arxiv.org. Retrieved 28 March 2020. 
  45. Papernot, Nicolas; Abadi, Martín; Erlingsson, Úlfar; Goodfellow, Ian; Talwar, Kunal. "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data". arxiv.org. Retrieved 28 March 2020. 
  46. Wu, Yuhuai; Burda, Yuri; Salakhutdinov, Ruslan; Grosse, Roger. "On the Quantitative Analysis of Decoder-Based Generative Models". arxiv.org. Retrieved 28 March 2020. 
  47. Statt, Nick (November 15, 2016). "Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests". The Verge. Retrieved March 2, 2018. 
  48. Metz, Cade. "The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win". WIRED. Retrieved March 2, 2018. According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware. 
  49. "universe". GitHub. Retrieved March 1, 2018. 
  50. John Mannes (December 5, 2016). "OpenAI's Universe is the fun parent every artificial intelligence deserves". TechCrunch. Retrieved March 2, 2018. 
  51. "Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do". WIRED. Retrieved March 2, 2018. 
  52. "OpenAI Universe". Hacker News. Retrieved May 5, 2018. 
  53. "AI Alignment". Paul Christiano. May 13, 2017. Retrieved May 6, 2018. 
  54. "Team Update". OpenAI Blog. March 22, 2017. Retrieved May 6, 2018. 
  55. "Open Philanthropy Project donations made (filtered to cause areas matching AI safety)". Retrieved July 27, 2017. 
  56. "OpenAI — General Support". Open Philanthropy Project. December 15, 2017. Retrieved May 6, 2018. 
  57. "Pinboard on Twitter". Twitter. Retrieved May 8, 2018. What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law. Trumpy! 
  58. "OpenAI makes humanity less safe". Compass Rose. April 13, 2017. Retrieved May 6, 2018. 
  59. "OpenAI makes humanity less safe". LessWrong. Retrieved May 6, 2018. 
  60. "OpenAI donations received". Retrieved May 6, 2018. 
  61. Naik, Vipul. "I'm having a hard time understanding the rationale...". Retrieved May 8, 2018. 
  62. "Evolution Strategies as a Scalable Alternative to Reinforcement Learning". openai.com. Retrieved 5 April 2020. 
  63. "The messy, secretive reality behind OpenAI's bid to save the world". technologyreview.com. Retrieved 28 February 2020. 
  64. Simoneaux, Brent; Stegman, Casey. "Open Source Stories: The People Behind OpenAI". Retrieved May 5, 2018.  In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.
  65. "Profile of the people behind OpenAI • r/OpenAI". reddit. April 7, 2017. Retrieved May 5, 2018. 
  66. "The People Behind OpenAI". Hacker News. July 23, 2017. Retrieved May 5, 2018. 
  67. "Unsupervised Sentiment Neuron". openai.com. Retrieved 5 April 2020. 
  68. John Mannes (April 7, 2017). "OpenAI sets benchmark for sentiment analysis using an efficient mLSTM". TechCrunch. Retrieved March 2, 2018. 
  69. John Mannes (April 7, 2017). "OpenAI sets benchmark for sentiment analysis using an efficient mLSTM". TechCrunch. Retrieved March 2, 2018. 
  70. "OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s". singularityhub.com. Retrieved 29 June 2019. 
  71. "Roboschool". openai.com. Retrieved 5 April 2020. 
  72. "Robots that Learn". openai.com. Retrieved 5 April 2020. 
  73. "OpenAI Baselines: DQN". OpenAI Blog. November 28, 2017. Retrieved May 5, 2018. 
  74. "OpenAI/baselines". GitHub. Retrieved May 5, 2018. 
  75. "[1706.03741] Deep reinforcement learning from human preferences". Retrieved March 2, 2018. 
  76. gwern (June 3, 2017). "June 2017 news - Gwern.net". Retrieved March 2, 2018. 
  77. "Two Giants of AI Team Up to Head Off the Robot Apocalypse". WIRED. Retrieved March 2, 2018. A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises. 
  78. "Faster Physics in Python". openai.com. Retrieved 5 April 2020. 
  79. "Learning from Human Preferences". OpenAI.com. Retrieved 29 June 2019. 
  80. "Better Exploration with Parameter Noise". openai.com. Retrieved 5 April 2020. 
  81. Jordan Crook (12 August 2017). "OpenAI bot remains undefeated against world's greatest Dota 2 players". TechCrunch. Retrieved 2 March 2018.
  82. "Did Elon Musk's AI champ destroy humans at video games? It's complicated". The Verge. 14 August 2017. Retrieved 2 March 2018.
  83. "Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer". Business Insider. 11 August 2017. Retrieved 3 March 2018.
  84. Cade Metz (13 August 2017). "Teaching A.I. Systems to Behave Themselves". The New York Times. Retrieved 5 May 2018.
  85. "OpenAI Baselines: ACKTR & A2C". openai.com. Retrieved 5 April 2020. 
  86. "[1709.04326] Learning with Opponent-Learning Awareness". Retrieved 2 March 2018.
  87. gwern (16 August 2017). "September 2017 news - Gwern.net". Retrieved 2 March 2018.
  88. "AI Sumo Wrestlers Could Make Future Robots More Nimble". WIRED. Retrieved 3 March 2018.
  89. Appolonia, Alexandra; Gmoser, Justin (20 October 2017). "Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer". Business Insider. Retrieved 3 March 2018.
  90. Cade Metz (6 November 2017). "A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up". The New York Times. Retrieved 5 May 2018.
  91. "Block-Sparse GPU Kernels". openai.com. Retrieved 5 April 2020. 
  92. Vincent, James (1 December 2017). "Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat". The Verge. Retrieved 2 March 2018.
  93. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved 24 February 2018.
  94. "Preparing for Malicious Uses of AI". OpenAI Blog. 21 February 2018. Retrieved 24 February 2018.
  95. Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved 24 February 2018.
  96. "Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla". The Verge. 21 February 2018. Retrieved 2 March 2018.
  97. Simonite, Tom. "Why Artificial Intelligence Researchers Should Be More Paranoid". WIRED. Retrieved 2 March 2018.
  98. "OpenAI Supporters". OpenAI Blog. 21 February 2018. Retrieved 1 March 2018.
  99. "Ingredients for Robotics Research". openai.com. Retrieved 5 April 2020. 
  100. "OpenAI Hackathon". OpenAI Blog. 24 February 2018. Retrieved 1 March 2018.
  101. "Report from the OpenAI Hackathon". OpenAI Blog. 15 March 2018. Retrieved 5 May 2018.
  102. "OpenAI Retro Contest". OpenAI. Retrieved 5 May 2018.
  103. "Retro Contest". OpenAI Blog. 13 April 2018. Retrieved 5 May 2018.
  104. "OpenAI/universe". GitHub. Retrieved 5 May 2018.
  105. "OpenAI Charter". OpenAI Blog. 9 April 2018. Retrieved 5 May 2018.
  106. wunan (9 April 2018). "OpenAI charter". LessWrong. Retrieved 5 May 2018.
  107. "[D] OpenAI Charter • r/MachineLearning". reddit. Retrieved 5 May 2018.
  108. "OpenAI Charter". Hacker News. Retrieved 5 May 2018.
  109. Tristan Greene (10 April 2018). "The AI company Elon Musk co-founded intends to create machines with real intelligence". The Next Web. Retrieved 5 May 2018.
  110. Cade Metz (19 April 2018). "A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit". The New York Times. Retrieved 5 May 2018.
  111. "A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI] • r/reinforcementlearning". reddit. Retrieved 5 May 2018.
  112. "gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit". Hacker News. Retrieved 5 May 2018.
  113. "[1805.00899] AI safety via debate". Retrieved 5 May 2018.
  114. Irving, Geoffrey; Amodei, Dario (3 May 2018). "AI Safety via Debate". OpenAI Blog. Retrieved 5 May 2018.
  115. "AI and Compute". openai.com. Retrieved 5 April 2020. 
  116. "Improving Language Understanding with Unsupervised Learning". openai.com. Retrieved 5 April 2020. 
  117. Gershgorn, Dave. "OpenAI built gaming bots that can work as a team with inhuman precision". qz.com. Retrieved 14 June 2019. 
  118. Knight, Will. "A team of AI algorithms just crushed humans in a complex computer game". technologyreview.com. Retrieved 14 June 2019. 
  119. "OpenAI's bot can now defeat skilled Dota 2 teams". venturebeat.com. Retrieved 14 June 2019. 
  120. Papadopoulos, Loukia. "Bill Gates Praises Elon Musk-Founded OpenAI's Latest Dota 2 Win as "Huge Milestone" in Field". interestingengineering.com. Retrieved 14 June 2019. 
  121. Vincent, James. "Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems". theverge.com. Retrieved 1 June 2019. 
  122. Locklear, Mallory. "DeepMind, Elon Musk and others pledge not to make autonomous AI weapons". engadget.com. Retrieved 1 June 2019. 
  123. Quach, Katyanna. "Elon Musk, his arch nemesis DeepMind swear off AI weapons". theregister.co.uk. Retrieved 1 June 2019. 
  124. "OpenAI's 'state-of-the-art' system gives robots humanlike dexterity". venturebeat.com. Retrieved 14 June 2019. 
  125. Coldewey, Devin. "OpenAI's robotic hand doesn't need humans to teach it human behaviors". techcrunch.com. Retrieved 14 June 2019. 
  126. Whitwam, Ryan. "OpenAI Bots Crush the Best Human Dota 2 Players in the World". extremetech.com. Retrieved 15 June 2019. 
  127. Quach, Katyanna. "OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney". theregister.co.uk. Retrieved 15 June 2019. 
  128. Savov, Vlad. "The OpenAI Dota 2 bots just defeated a team of former pros". theverge.com. Retrieved 15 June 2019. 
  129. Rigg, Jamie. "'Dota 2' veterans steamrolled by AI team in exhibition match". engadget.com. Retrieved 15 June 2019. 
  130. Chu, Timothy; Cohen, Michael B.; Pachocki, Jakub W.; Peng, Richard. "Constant Arboricity Spectral Sparsifiers". arxiv.org. Retrieved 26 March 2020. 
  131. "Reinforcement Learning with Prediction-Based Rewards". openai.com. Retrieved 5 April 2020. 
  132. "Spinning Up in Deep RL". openai.com. Retrieved 15 June 2019.
  133. Ramesh, Prasad. "OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners". hub.packtpub.com. Retrieved 15 June 2019. 
  134. Johnson, Khari. "OpenAI launches reinforcement learning training to prepare for artificial general intelligence". flipboard.com. Retrieved 15 June 2019. 
  135. "OpenAI Founder: Short-Term AGI Is a Serious Possibility". syncedreview.com. Retrieved 15 June 2019. 
  136. Rodriguez, Jesus. "What's New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning". towardsdatascience.com. Retrieved 29 June 2019. 
  137. "How AI Training Scales". openai.com. Retrieved 4 April 2020. 
  138. "OpenAI teaches AI teamwork by playing hide-and-seek". venturebeat.com. Retrieved 24 February 2020. 
  139. "OpenAI's CoinRun tests the adaptability of reinforcement learning agents". venturebeat.com. Retrieved 24 February 2020. 
  140. "An AI helped us write this article". vox.com. Retrieved 28 June 2019. 
  141. Lowe, Ryan. "OpenAI's GPT-2: the model, the hype, and the controversy". towardsdatascience.com. Retrieved 10 July 2019. 
  142. "The Hacker Learns to Trust". medium.com. Retrieved 5 May 2020.
  143. Irving, Geoffrey; Askell, Amanda. "AI Safety Needs Social Scientists". doi:10.23915/distill.00014. 
  144. "AI Safety Needs Social Scientists". openai.com. Retrieved 5 April 2020. 
  145. "Neural MMO: A Massively Multiagent Game Environment". openai.com. Retrieved 5 April 2020. 
  146. "Introducing Activation Atlases". openai.com. Retrieved 5 April 2020. 
  147. Johnson, Khari. "OpenAI launches new company for funding safe artificial general intelligence". venturebeat.com. Retrieved 15 June 2019. 
  148. Trazzi, Michaël. "Considerateness in OpenAI LP Debate". medium.com. Retrieved 15 June 2019. 
  149. "Implicit Generation and Generalization Methods for Energy-Based Models". openai.com. Retrieved 5 April 2020. 
  150. "Sam Altman's leap of faith". techcrunch.com. Retrieved 24 February 2020. 
  151. "Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator". techcrunch.com. Retrieved 24 February 2020. 
  152. Alford, Anthony. "OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences". infoq.com. Retrieved 15 June 2019. 
  153. "OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x". medium.com. Retrieved 15 June 2019. 
  154. "Generative Modeling with Sparse Transformers". openai.com. Retrieved 15 June 2019.
  155. "MuseNet". openai.com. Retrieved 15 June 2019.
  156. "OpenAI Robotics Symposium 2019". openai.com. Retrieved 14 June 2019.
  157. "A poetry-writing AI has just been unveiled. It's ... pretty good.". vox.com. Retrieved 11 July 2019. 
  158. Vincent, James. "OpenAI's new multitalented AI writes, translates, and slanders". theverge.com. Retrieved 11 July 2019.
  159. "Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI". OpenAI. 22 July 2019. Retrieved 26 July 2019.
  160. "OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies". Microsoft. 22 July 2019. Retrieved 26 July 2019.
  161. Chan, Rosalie (22 July 2019). "Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence". Business Insider. Retrieved 26 July 2019.
  162. Sawhney, Mohanbir (24 July 2019). "The Real Reasons Microsoft Invested In OpenAI". Forbes. Retrieved 26 July 2019.
  163. "OpenAI releases curtailed version of GPT-2 language model". venturebeat.com. Retrieved 24 February 2020. 
  164. "OpenAI Just Released an Even Scarier Fake News-Writing Algorithm". interestingengineering.com. Retrieved 24 February 2020. 
  165. "OpenAI Just Released a New Version of Its Fake News-Writing AI". futurism.com. Retrieved 24 February 2020.
  166. "Emergent Tool Use from Multi-Agent Interaction". openai.com. Retrieved 4 April 2020. 
  167. "Emergent Tool Use From Multi-Agent Autocurricula". arxiv.org. Retrieved 4 April 2020. 
  168. "Solving Rubik's Cube with a Robot Hand". arxiv.org. Retrieved 4 April 2020. 
  169. "Solving Rubik's Cube with a Robot Hand". openai.com. Retrieved 4 April 2020. 
  170. "GPT-2: 1.5B Release". openai.com. Retrieved 5 April 2020. 
  171. "Safety Gym". openai.com. Retrieved 5 April 2020. 
  172. "Procgen Benchmark". openai.com. Retrieved 2 March 2020. 
  173. "OpenAI's Procgen Benchmark prevents AI model overfitting". venturebeat.com. Retrieved 2 March 2020. 
  174. "Generalization in Reinforcement Learning – Exploration vs Exploitation". analyticsindiamag.com. Retrieved 2 March 2020.
  175. Nakkiran, Preetum; Kaplun, Gal; Bansal, Yamini; Yang, Tristan; Barak, Boaz; Sutskever, Ilya. "Deep Double Descent: Where Bigger Models and More Data Hurt". arxiv.org. Retrieved 5 April 2020. 
  176. "Deep Double Descent". OpenAI. 5 December 2019. Retrieved 23 May 2020.
  177. Hubinger, Evan (5 December 2019). "Understanding "Deep Double Descent"". LessWrong. Retrieved 24 May 2020.
  178. Hubinger, Evan (18 December 2019). "Inductive biases stick around". Retrieved 24 May 2020.
  179. "OpenAI sets PyTorch as its new standard deep learning framework". jaxenter.com. Retrieved 23 February 2020. 
  180. "OpenAI goes all-in on Facebook's Pytorch machine learning framework". venturebeat.com. Retrieved 23 February 2020. 
  181. "Writeup: Progress on AI Safety via Debate". lesswrong.com. Retrieved 16 May 2020. 
  182. Holmes, Aaron. "Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high'". businessinsider.com. Retrieved 29 February 2020. 
  183. "Elon Musk". twitter.com. Retrieved 29 February 2020.