Timeline of OpenAI

The following are some interesting questions that can be answered by reading this timeline:
* What are some significant events prior to the creation of OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Prelude".
** You will see some events involving key people like {{w|Elon Musk}} and {{w|Sam Altman}} that would eventually lead to the creation of OpenAI.
* What are the various papers and posts published by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Publication".
** You will see mostly papers submitted to the {{w|ArXiv}} by OpenAI-affiliated researchers, as well as blog posts.
* What are the several toolkits, implementations, algorithms, systems and software in general released by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Software release".
** You will see a variety of releases, some of them open-sourced.
* What are some other significant events describing advances in research?
** Sort the full timeline by "Event type" and look for the group of rows with value "Research progress".
** You will see some discoveries and other significant results obtained by OpenAI.
* What is the staff composition and what are the different roles in the organization?
** Sort the full timeline by "Event type" and look for the group of rows with value "Team".
** You will see the names of people who joined the organization, and their roles.
* What are the several partnerships between OpenAI and other organizations?
** Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
** You will see collaborations with organizations like {{w|DeepMind}} and {{w|Microsoft}}.
* What are some significant fundings granted to OpenAI by donors?
** Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
** You will see names like the {{w|Open Philanthropy Project}} and {{w|Nvidia}}, among others.
* What are some notable events hosted by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Event hosting".
* What are some notable publications by third parties about OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Coverage".
==Big picture==
{| class="wikitable"
! Year !! Event type !! Details
|-
| 2015 || Establishment || OpenAI is founded as a nonprofit and begins producing research.
|-
| 2019 || Reorganization || OpenAI transitions from a nonprofit to a ‘capped-profit’ structure in order to attract capital.
|-
|}
{| class="sortable wikitable"
! Year !! Month and date !! Domain !! Event type !! Details
|-
| 2014 || {{dts|October 22}}–24 || || Prelude || During an interview at the AeroAstro Centennial Symposium, {{W|Elon Musk}}, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[w:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
|-
| 2015 || {{dts|February 25}} || || Prelude || {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".<ref>{{cite web |url=http://blog.samaltman.com/machine-intelligence-part-1 |title=Machine intelligence, part 1 |publisher=Sam Altman |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|May 6}} || || Prelude || Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of [[wikipedia:Stripe (company)|Stripe]]. In the post, in the section "What comes next", he writes "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".<ref>{{cite web |url=https://blog.gregbrockman.com/leaving-stripe |title=Leaving Stripe |first=Greg |last=Brockman |publisher=Greg Brockman on Svbtle |date=May 6, 2015 |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/stripes-cto-greg-brockman-is-leaving-the-company-2015-5 |date=May 6, 2015 |first=Biz |last=Carson |title=One of the first employees of $3.5 billion startup Stripe is leaving to form his own company |publisher=Business Insider |accessdate=May 6, 2018}}</ref>
|-
| 2015 || {{dts|June}} (approximate) || || Prelude || {{W|Sam Altman}} and Greg Brockman have a conversation about next steps for Brockman.<ref name="path-to-OpenAI">{{cite web |url=https://blog.gregbrockman.com/my-path-to-OpenAI |title=My path to OpenAI |date=May 3, 2016 |publisher=Greg Brockman on Svbtle |accessdate=May 8, 2018}}</ref>
|-
| 2015 || {{dts|June 4}} || || Prelude || At {{w|Airbnb}}'s Open Air 2015 conference, {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.<ref>{{cite web |url=http://www.businessinsider.com/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6 |author=Matt Weinberger |date=June 4, 2015 |title=Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last |publisher=Business Insider |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|July}} (approximate) || || Prelude || {{W|Sam Altman}} sets up a dinner in {{W|Menlo Park, California}} to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, {{W|Ilya Sutskever}}, and {{W|Elon Musk}}.<ref name="path-to-OpenAI" />
|-
| 2015 || {{dts|December 11}} || || Creation || {{w|OpenAI}} is announced to the public. (The news articles from this period make it sound like OpenAI launched sometime after this date.)<ref>{{cite web |url=https://www.nytimes.com/2015/12/12/science/artificial-intelligence-research-center-is-founded-by-silicon-valley-investors.html |date=December 11, 2015 |publisher=[[w:The New York Times|The New York Times]] |title=Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors |author=John Markoff |accessdate=July 26, 2017 |quote=The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco.}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/introducing-OpenAI/ |publisher=OpenAI Blog |title=Introducing OpenAI |date=December 11, 2015 |accessdate=July 26, 2017}}</ref><ref>{{cite web |url=https://techcrunch.com/2015/12/11/non-profit-OpenAI-launches-with-backing-from-elon-musk-and-sam-altman/ |date=December 11, 2015 |publisher=TechCrunch |title=Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman |author=Drew Olanoff |accessdate=March 2, 2018}}</ref> Co-founders include Wojciech Zaremba<ref>{{cite web |title=Wojciech Zaremba |url=https://www.linkedin.com/in/wojciech-zaremba-356568164/ |website=linkedin.com |accessdate=28 February 2020}}</ref>,
|-
| 2015 || {{dts|December}} || || Coverage || The article "{{w|OpenAI}}" is created on {{w|Wikipedia}}.<ref>{{cite web |title=OpenAI: Revision history |url=https://en.wikipedia.org/w/index.php?title=OpenAI&dir=prev&action=history |website=wikipedia.org |accessdate=6 April 2020}}</ref>
|-
| 2015 || {{dts|December}} || || Team || OpenAI announces {{w|Y Combinator}} founding partner {{w|Jessica Livingston}} as one of its financial backers.<ref>{{cite web |url=https://www.forbes.com/sites/theopriestley/2015/12/11/elon-musk-and-peter-thiel-launch-OpenAI-a-non-profit-artificial-intelligence-research-company/ |title=Elon Musk And Peter Thiel Launch OpenAI, A Non-Profit Artificial Intelligence Research Company |first1=Theo |last1=Priestly |date=December 11, 2015 |publisher=''{{w|Forbes}}'' |access-date=8 July 2019}}</ref>
|-
| 2016 || {{dts|January}} || || Team || {{W|Ilya Sutskever}} joins OpenAI as Research Director.<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Ilya+Sutskever |date=April 8, 2018 |title=Ilya Sutskever |publisher=AI Watch |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com">{{cite web |title=Information for OpenAI |url=https://orgwatch.issarice.com/?organization=OpenAI |website=orgwatch.issarice.com |accessdate=5 May 2020}}</ref>
|-
| 2016 || {{dts|January 9}} || || Education || The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.<ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_OpenAI_research_team/ |publisher=reddit |title=AMA: the OpenAI Research Team • r/MachineLearning |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|February 25}} || Optimization || Publication || "Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks", a paper on optimization, is first submitted to the {{w|ArXiv}}. The paper presents weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Kingma |first2=Diederik P. |title=Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks |url=https://arxiv.org/abs/1602.07868 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|March 31}} || || Team || A blog post from this day announces that {{W|Ian Goodfellow}} has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/team-plus-plus/ |publisher=OpenAI Blog |title=Team++ |date=March 22, 2017 |first=Greg |last=Brockman |accessdate=May 6, 2018}}</ref> Previously, Goodfellow worked as Senior Research Scientist at {{w|Google}}.<ref>{{cite web |title=Ian Goodfellow |url=https://www.linkedin.com/in/ian-goodfellow-b7187213/ |website=linkedin.com |accessdate=24 April 2020}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{Dts|April 26}} || || Team || A blog post from this day announces that Pieter Abbeel has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/welcome-pieter-and-shivon/ |publisher=OpenAI Blog |title=Welcome, Pieter and Shivon! |date=March 20, 2017 |first=Ilya |last=Sutskever |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{dts|April 27}} || || Software release || The public beta of OpenAI Gym, an open source toolkit that provides environments to test AI bots, is released.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-gym-beta/ |publisher=OpenAI Blog |title=OpenAI Gym Beta |date=March 20, 2017 |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/04/OpenAI-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ |title=Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free |date=April 27, 2016 |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning"}}</ref><ref>{{cite web |url=http://www.businessinsider.com/OpenAI-has-launched-a-gym-where-developers-can-train-their-computers-2016-4?op=1 |first=Sam |last=Shead |date=April 28, 2016 |title=Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|May 25}} || Safety || Publication || "Adversarial Training Methods for Semi-Supervised Text Classification" is submitted to the {{w|ArXiv}}. The paper proposes a method that achieves better results on multiple benchmark semi-supervised and purely supervised tasks.<ref>{{cite web |last1=Miyato |first1=Takeru |last2=Dai |first2=Andrew M. |last3=Goodfellow |first3=Ian |title=Adversarial Training Methods for Semi-Supervised Text Classification |url=https://arxiv.org/abs/1605.07725 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|May 31}} || Generative models || Publication || "VIME: Variational Information Maximizing Exploration", a paper on generative models, is submitted to the {{w|ArXiv}}. The paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximization of information gain about the agent's belief of environment dynamics.<ref>{{cite web |last1=Houthooft |first1=Rein |last2=Chen |first2=Xi |last3=Duan |first3=Yan |last4=Schulman |first4=John |last5=De Turck |first5=Filip |last6=Abbeel |first6=Pieter |title=VIME: Variational Information Maximizing Exploration |url=https://arxiv.org/abs/1605.09674 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 5}} || {{w|Reinforcement learning}} || Publication || "OpenAI Gym", a paper on {{w|reinforcement learning}}, is submitted to the {{w|ArXiv}}. It presents OpenAI Gym as a toolkit for reinforcement learning research.<ref>{{cite web |last1=Brockman |first1=Greg |last2=Cheung |first2=Vicki |last3=Pettersson |first3=Ludwig |last4=Schneider |first4=Jonas |last5=Schulman |first5=John |last6=Tang |first6=Jie |last7=Zaremba |first7=Wojciech |title=OpenAI Gym |url=https://arxiv.org/abs/1606.01540 |website=arxiv.org |accessdate=27 March 2020}}</ref> OpenAI Gym is considered by some as "a huge opportunity for speeding up the progress in the creation of better reinforcement algorithms, since it provides an easy way of comparing them, on the same conditions, independently of where the algorithm is executed".<ref>{{cite web |title=OPENAI GYM |url=https://www.theconstructsim.com/tag/openai_gym/ |website=theconstructsim.com |accessdate=16 May 2020}}</ref>
|-
| 2016 || {{dts|June 10}} || Generative models || Publication || "Improved Techniques for Training GANs", a paper on generative models, is submitted to the {{w|ArXiv}}. It presents a variety of new architectural features and training procedures that OpenAI applies to the generative adversarial networks (GANs) framework.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |title=Improved Techniques for Training GANs |url=https://arxiv.org/abs/1606.03498 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 12}} || Generative models || Publication || "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", a paper on generative models, is submitted to the {{w|ArXiv}}. It describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner.<ref>{{cite web |title=InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets |url=https://arxiv.org/abs/1606.03657 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|June 15}} || Generative models || Publication || "Improving Variational Inference with Inverse Autoregressive Flow", a paper on generative models, is submitted to the {{w|ArXiv}}. The paper proposes a new type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast to earlier published flows, scales well to high-dimensional latent spaces.<ref>{{cite web |last1=Kingma |first1=Diederik P. |last2=Salimans |first2=Tim |last3=Jozefowicz |first3=Rafal |last4=Chen |first4=Xi |last5=Sutskever |first5=Ilya |last6=Welling |first6=Max |title=Improving Variational Inference with Inverse Autoregressive Flow |url=https://arxiv.org/abs/1606.04934 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|June 16}} || Generative models || Publication || OpenAI publishes a post describing four projects on generative models, a branch of {{w|unsupervised learning}} techniques in machine learning.<ref>{{cite web |title=Generative Models |url=https://www.openai.com/blog/generative-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|June 21}} || || Publication || "Concrete Problems in AI Safety" by Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané is submitted to the {{w|arXiv}}. The paper explores practical problems in machine learning systems.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref> The paper would receive a shoutout from the Open Philanthropy Project.<ref>{{cite web |url=https://www.openphilanthropy.org/blog/concrete-problems-ai-safety |title=Concrete Problems in AI Safety |last=Karnofsky |first=Holden |date=June 23, 2016 |accessdate=April 18, 2020}}</ref> It would become a landmark in AI safety literature, and many of its authors would continue to do AI safety work at OpenAI in the years to come.
|-
| 2016 || {{Dts|July}} || || Team || Dario Amodei joins OpenAI,<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref> working as the Team Lead for AI Safety.<ref name="Dario Amodeiy"/><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{dts|July 8}} || || Publication || "Adversarial Examples in the Physical World" is published. One of the authors is {{W|Ian Goodfellow}}, who is at OpenAI at the time.<ref>{{cite web |url=https://www.wired.com/2016/07/fool-ai-seeing-something-isnt/ |title=How To Fool AI Into Seeing Something That Isn't There |publisher=[[wikipedia:WIRED|WIRED]] |date=July 29, 2016 |first=Cade |last=Metz |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|July 28}} || || Publication || OpenAI publishes a post calling for applicants to work in the following problem areas of interest:
* Detect if someone is using a covert breakthrough AI system in the world.
* Build an agent to win online programming competitions.
* Cyber-security defense.
* A complex simulation with many long-lived agents.<ref>{{cite web |title=Special Projects |url=https://www.openai.com/blog/special-projects/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|August 15}} || || Donation || The technology company {{W|Nvidia}} announces that it has donated the first {{W|Nvidia DGX-1}} (a supercomputer) to OpenAI. OpenAI plans to use the supercomputer to train its AI on a corpus of conversations from {{W|Reddit}}.<ref>{{cite web |url=https://blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-OpenAI-elon-musk-deep-learning/ |title=NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI |publisher=The Official NVIDIA Blog |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=http://fortune.com/2016/08/15/elon-musk-artificial-intelligence-OpenAI-nvidia-supercomputer/ |title=Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group |publisher=Fortune |first=Jonathan |last=Vanian |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://futurism.com/elon-musks-OpenAI-is-using-reddit-to-teach-an-artificial-intelligence-how-to-speak/ |date=August 17, 2016 |title=Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak |first=Cecille |last=De Jesus |publisher=Futurism |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|August 29}} || Infrastructure || Publication || "Infrastructure for Deep Learning" is published. The post shows how deep learning research usually proceeds. It also describes the infrastructure choices OpenAI made to support it, and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for {{w|Kubernetes}}.<ref>{{cite web |title=Infrastructure for Deep Learning |url=https://openai.com/blog/infrastructure-for-deep-learning/ |website=openai.com |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October 11}} || {{w|Robotics}} || Publication || "Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model", a paper on {{w|robotics}}, is submitted to the {{w|ArXiv}}. It investigates settings where the sequence of states traversed in simulation remains reasonable for the real world.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shah |first2=Zain |last3=Mordatch |first3=Igor |last4=Schneider |first4=Jonas |last5=Blackwell |first5=Trevor |last6=Tobin |first6=Joshua |last7=Abbeel |first7=Pieter |last8=Zaremba |first8=Wojciech |title=Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model |url=https://arxiv.org/abs/1610.03518 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October 18}} || Safety || Publication || "Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data", a paper on safety, is submitted to the {{w|ArXiv}}. It shows an approach to providing strong privacy guarantees for training data: Private Aggregation of Teacher Ensembles (PATE).<ref>{{cite web |last1=Papernot |first1=Nicolas |last2=Abadi |first2=Martín |last3=Erlingsson |first3=Úlfar |last4=Goodfellow |first4=Ian |last5=Talwar |first5=Kunal |title=Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |url=https://arxiv.org/abs/1610.05755 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|November 14}} || Generative models || Publication || "On the Quantitative Analysis of Decoder-Based Generative Models", a paper on generative models, is submitted to the {{w|ArXiv}}. It introduces a technique to analyze the performance of decoder-based models.<ref>{{cite web |last1=Wu |first1=Yuhuai |last2=Burda |first2=Yuri |last3=Salakhutdinov |first3=Ruslan |last4=Grosse |first4=Roger |title=On the Quantitative Analysis of Decoder-Based Generative Models |url=https://arxiv.org/abs/1611.04273 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|November 15}} || || Partnership || A partnership between OpenAI and Microsoft's artificial intelligence division is announced. As part of the partnership, Microsoft provides a price reduction on computing resources to OpenAI through {{W|Microsoft Azure}}.<ref>{{cite web |url=https://www.theverge.com/2016/11/15/13639904/microsoft-OpenAI-ai-partnership-elon-musk-sam-altman |date=November 15, 2016 |publisher=The Verge |first=Nick |last=Statt |title=Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/11/next-battles-clouds-ai-chips/ |title=The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win |publisher=[[wikipedia:WIRED|WIRED]] |first=Cade |last=Metz |accessdate=March 2, 2018 |quote=According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.}}</ref>
|-
| 2016 || {{dts|December 5}} || || Software release || OpenAI's Universe, "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications", is released.<ref>{{cite web |url=https://github.com/OpenAI/universe |accessdate=March 1, 2018 |publisher=GitHub |title=universe}}</ref><ref>{{cite web |url=https://techcrunch.com/2016/12/05/OpenAIs-universe-is-the-fun-parent-every-artificial-intelligence-deserves/ |date=December 5, 2016 |publisher=TechCrunch |title=OpenAI's Universe is the fun parent every artificial intelligence deserves |author=John Mannes |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/12/OpenAIs-universe-computers-learn-use-apps-like-humans/ |title=Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=13103742 |title=OpenAI Universe |website=Hacker News |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|January}} || || Staff || Paul Christiano joins OpenAI to work on AI alignment.<ref>{{cite web |url=https://paulfchristiano.com/ai/ |title=AI Alignment |date=May 13, 2017 |publisher=Paul Christiano |accessdate=May 6, 2018}}</ref> He was previously an intern at OpenAI in 2016.<ref>{{cite web |url=https://blog.openai.com/team-update/ |publisher=OpenAI Blog |title=Team Update |date=March 22, 2017 |accessdate=May 6, 2018}}</ref>
|-
| 2017 || {{dts|March}} || || Donation || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI safety) |accessdate=July 27, 2017}}</ref> The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which {{W|Holden Karnofsky}} (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/OpenAI-general-support |publisher=Open Philanthropy Project |title=OpenAI — General Support |date=December 15, 2017 |accessdate=May 6, 2018}}</ref> The grant is criticized by {{W|Maciej Cegłowski}}<ref>{{cite web |url=https://twitter.com/Pinboard/status/848009582492360704 |title=Pinboard on Twitter |publisher=Twitter |accessdate=May 8, 2018 |quote=What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law. 
Trumpy!}}</ref> and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")<ref>{{cite web |url=http://benjaminrosshoffman.com/OpenAI-makes-humanity-less-safe/ |title=OpenAI makes humanity less safe |date=April 13, 2017 |publisher=Compass Rose |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/OpenAI-makes-humanity-less-safe |title=OpenAI makes humanity less safe |accessdate=May 6, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://donations.vipulnaik.com/donee.php?donee=OpenAI |title=OpenAI donations received |accessdate=May 6, 2018}}</ref> among others.<ref>{{cite web |url=https://www.facebook.com/vipulnaik.r/posts/10211478311489366 |title=I'm having a hard time understanding the rationale... |accessdate=May 8, 2018 |first=Vipul |last=Naik}}</ref>
|-
| 2017 || {{dts|March 24}} || || Research progress || OpenAI announces having discovered that [[w:Evolution strategy|evolution strategies]] rival the performance of standard {{w|reinforcement learning}} techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL’s inconveniences.<ref>{{cite web |title=Evolution Strategies as a Scalable Alternative to Reinforcement Learning |url=https://www.openai.com/blog/evolution-strategies/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|March}} || || Reorganization || Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to {{w|artificial general intelligence}}. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.<ref name="technologyreview.comñ">{{cite web |title=The messy, secretive reality behind OpenAI’s bid to save the world |url=https://www.technologyreview.com/s/615181/ai-OpenAI-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ |website=technologyreview.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|April}} || || Coverage || An article entitled "The People Behind OpenAI" is published on {{W|Red Hat}}'s ''Open Source Stories'' website, covering work at OpenAI.<ref>{{cite web |url=https://www.redhat.com/en/open-source-stories/ai-revolutionaries/people-behind-OpenAI |title=Open Source Stories: The People Behind OpenAI |accessdate=May 5, 2018 |first1=Brent |last1=Simoneaux |first2=Casey |last2=Stegman}} In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.</ref><ref>{{cite web |url=https://www.reddit.com/r/OpenAI/comments/63xr4p/profile_of_the_people_behind_OpenAI/ |publisher=reddit |title=Profile of the people behind OpenAI • r/OpenAI |date=April 7, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=14832524 |title=The People Behind OpenAI |website=Hacker News |accessdate=May 5, 2018 |date=July 23, 2017}}</ref>
|-
| 2017 || {{dts|April 6}} || || Software release || OpenAI unveils an unsupervised system which is able to perform excellent {{w|sentiment analysis}}, despite being trained only to predict the next character in the text of Amazon reviews.<ref>{{cite web |title=Unsupervised Sentiment Neuron |url=https://openai.com/blog/unsupervised-sentiment-neuron/ |website=openai.com |accessdate=5 April 2020}}</ref><ref>{{cite web |url=https://techcrunch.com/2017/04/07/OpenAI-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|April 6}} || || Publication || "Learning to Generate Reviews and Discovering Sentiment" is published.<ref>{{cite web |url=https://techcrunch.com/2017/04/07/openai-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{Dtsdts|September 13April 6}} || Publication Neuroevolution || "Learning with Opponent-Learning Awareness" is first uploaded to the {{wResearch progress ||arXiv}}OpenAI unveils reuse of an old field called “neuroevolution”, and a subset of algorithms from it called “evolution strategies,” which are aimed at solving optimization problems. The paper presents Learning with Opponent-Learning Awareness (LOLA)In one hour training on an Atari challenge, an algorithm is found to reach a level of mastery that took a method reinforcement-learning system published by DeepMind in which each agent shapes 2016 a whole day to learn. On the anticipated learning of walking problem the other agents in an environmentsystem took 10 minutes, compared to 10 hours for DeepMind's approach.<ref>{{cite web |url=https://arxiv.org/abs/1709.04326 |title=[1709.04326] Learning with Opponent-Learning Awareness |accessdate=March 2, 2018}}</ref><ref>{{cite web OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s |url=https://wwwsingularityhub.gwern.netcom/2017/04/newsletter06/2017OpenAI-just-beat-the-hell-out-of-deepmind-with-an-algorithm-from-the-80s/09 |author=gwern |date=August 16, 2017 |titlewebsite=September 2017 news - Gwernsingularityhub.net com |accessdate=March 2, 201829 June 2019}}</ref>
|-
| 2017 || {{dts|May 15}} || Robotics || Software release || OpenAI releases Roboschool, open-source software for robot simulation, integrated with OpenAI Gym.<ref>{{cite web |title=Roboschool |url=https://www.openai.com/blog/roboschool/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|May 16}} || Robotics || Software release || OpenAI introduces a robotics system, trained entirely in simulation and deployed on a physical robot, which can learn a new task after seeing it done once.<ref>{{cite web |title=Robots that Learn |url=https://www.openai.com/blog/robots-that-learn/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|May 24}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Baselines, a set of implementations of reinforcement learning algorithms.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-baselines-dqn/ |publisher=OpenAI Blog |title=OpenAI Baselines: DQN |date=November 28, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://github.com/OpenAI/baselines |publisher=GitHub |title=OpenAI/baselines |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|June 12}} || Safety || Publication || "Deep reinforcement learning from human preferences" is first uploaded to the arXiv. The paper is a collaboration between researchers at OpenAI and Google DeepMind.<ref>{{cite web |url=https://arxiv.org/abs/1706.03741 |title=[1706.03741] Deep reinforcement learning from human preferences |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/06 |author=gwern |date=June 3, 2017 |title=June 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/story/two-giants-of-ai-team-up-to-head-off-the-robot-apocalypse/ |title=Two Giants of AI Team Up to Head Off the Robot Apocalypse |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises.}}</ref>
|-
| 2017 || {{dts|June 28}} || Robotics || Open sourcing || OpenAI open-sources a high-performance [[w:Python (programming language)|Python]] library for robotic simulation using the MuJoCo engine, developed in the course of OpenAI's research on robotics.<ref>{{cite web |title=Faster Physics in Python |url=https://www.openai.com/blog/faster-robot-simulation-in-python/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{Dtsdts|November 6June}} || Staff {{w|Reinforcement learning}} | ''| Partnership || OpenAI partners with {{Ww|DeepMind}}’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The New York Timeslearning algorithm uses small amounts of human feedback to solve modern {{w|reinforcement learning}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligenceenvironments.<ref>{{cite web |title=Learning from Human Preferences |url=https://www.nytimesOpenAI.com/2017blog/11/06/technology/artificialdeep-reinforcement-learning-intelligencefrom-starthuman-up.html preferences/ |datewebsite=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=AOpenAI.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz com |accessdate=May 5, 201829 June 2019}}</ref>
|-
| 2017 || {{dts|July 27}} || {{w|Reinforcement learning}} || Research progress || OpenAI announces having found that adding adaptive noise to the parameters of {{w|reinforcement learning}} algorithms frequently boosts performance.<ref>{{cite web |title=Better Exploration with Parameter Noise |url=https://www.openai.com/blog/better-exploration-with-parameter-noise/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|August 12}} || || Achievement || OpenAI's Dota 2 bot beats Danil "Dendi" Ishutin, a professional human player, (and possibly others?) in one-on-one battles.<ref>{{cite web |url=https://techcrunch.com/2017/08/12/OpenAI-bot-remains-undefeated-against-worlds-greatest-dota-2-players/ |date=August 12, 2017 |publisher=TechCrunch |title=OpenAI bot remains undefeated against world's greatest Dota 2 players |author=Jordan Crook |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.theverge.com/2017/8/14/16143392/dota-ai-OpenAI-bot-win-elon-musk |date=August 14, 2017 |publisher=The Verge |title=Did Elon Musk's AI champ destroy humans at video games? It's complicated |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/the-international-dota-2-OpenAI-bot-beats-dendi-2017-8 |date=August 11, 2017 |title=Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{dts|August 13}} || || Coverage || ''{{W|The New York Times}}'' publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.<ref>{{cite web |url=https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html |date=August 13, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=Teaching A.I. Systems to Behave Themselves |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|August 18}} || {{w|Reinforcement learning}} || Software release || OpenAI releases two implementations: ACKTR, a {{w|reinforcement learning}} algorithm, and A2C, a synchronous, deterministic variant of Asynchronous Advantage Actor Critic (A3C).<ref>{{cite web |title=OpenAI Baselines: ACKTR & A2C |url=https://www.openai.com/blog/baselines-acktr-a2c/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{Dts|September 13}} || {{w|Reinforcement learning}} || Publication || "Learning with Opponent-Learning Awareness" is first uploaded to the {{w|ArXiv}}. The paper presents Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in an environment.<ref>{{cite web |url=https://arxiv.org/abs/1709.04326 |title=[1709.04326] Learning with Opponent-Learning Awareness |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|October 11}} || || Software release || RoboSumo, a game that simulates {{W|sumo wrestling}} for AI to learn to play, is released.<ref>{{cite web |url=https://www.wired.com/story/ai-sumo-wrestlers-could-make-future-robots-more-nimble/ |title=AI Sumo Wrestlers Could Make Future Robots More Nimble |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 3, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/elon-musk-OpenAI-virtual-robots-learn-sumo-wrestle-soccer-sports-ai-tech-science-2017-10 |first1=Alexandra |last1=Appolonia |first2=Justin |last2=Gmoser |date=October 20, 2017 |title=Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{Dts|November 6}} || || Team || ''{{W|The New York Times}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.<ref>{{cite web |url=https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html |date=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|December 6}} || {{w|Neural network}} || Software release || OpenAI releases highly-optimized GPU kernels for networks with block-sparse weights, an underexplored class of neural network architectures. Depending on the chosen sparsity, these kernels can run orders of magnitude faster than cuBLAS or cuSPARSE.<ref>{{cite web |title=Block-Sparse GPU Kernels |url=https://www.openai.com/blog/block-sparse-gpu-kernels/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|December}} || || Publication || The 2017 AI Index is published. Pieter Abbeel (previously at OpenAI) contributed to the report.<ref>{{cite web |url=https://www.theverge.com/2017/12/1/16723238/ai-artificial-intelligence-progress-index |date=December 1, 2017 |publisher=The Verge |title=Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat |first=James |last=Vincent |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || Safety || Publication || The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is first submitted to the {{w|ArXiv}}. It forecasts malicious use of artificial intelligence in the short term and makes recommendations on how to mitigate these risks from AI. The report is authored by individuals at Future of Humanity Institute, Centre for the Study of Existential Risk, OpenAI, Electronic Frontier Foundation, Center for a New American Security, and other institutions.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref><ref name="musk-leaves" /><ref>{{cite web |url=https://www.wired.com/story/why-artificial-intelligence-researchers-should-be-more-paranoid/ |title=Why Artificial Intelligence Researchers Should Be More Paranoid |first=Tom |last=Simonite |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 20}} || || Donation || OpenAI announces changes in donors and advisors. New donors are: {{W|Jed McCaleb}}, {{W|Gabe Newell}}, {{W|Michael Seibel}}, {{W|Jaan Tallinn}}, and {{W|Ashton Eaton}} and {{W|Brianne Theisen-Eaton}}. {{W|Reid Hoffman}} is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), {{W|Julia Galef}}, and Maran Nelson become advisors. {{W|Elon Musk}} departs the board but remains as a donor and advisor.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-supporters/ |publisher=OpenAI Blog |title=OpenAI Supporters |date=February 21, 2018 |accessdate=March 1, 2018}}</ref><ref name="musk-leaves">{{cite web |url=https://www.theverge.com/2018/2/21/17036214/elon-musk-OpenAI-ai-safety-leaves-board |date=February 21, 2018 |publisher=The Verge |title=Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla |accessdate=March 2, 2018}}</ref>
|-
| 2018 || {{dts|February 26}} || Robotics || Software release || OpenAI releases eight simulated robotics environments and a Baselines implementation of Hindsight Experience Replay, all developed for OpenAI research over the previous year. These environments were used to train models which work on physical robots.<ref>{{cite web |title=Ingredients for Robotics Research |url=https://openai.com/blog/ingredients-for-robotics-research/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{dts|March 3}} || || Event hosting || OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to {{w|AGI}}.<ref>{{cite web |url=https://blog.OpenAI.com/hackathon/ |publisher=OpenAI Blog |title=OpenAI Hackathon |date=February 24, 2018 |accessdate=March 1, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/hackathon-follow-up/ |publisher=OpenAI Blog |title=Report from the OpenAI Hackathon |date=March 15, 2018 |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|April 5}}{{snd}}June 5 || || Event hosting || The OpenAI Retro Contest takes place.<ref>{{cite web |url=https://contest.OpenAI.com/ |title=OpenAI Retro Contest |publisher=OpenAI |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/retro-contest/ |publisher=OpenAI Blog |title=Retro Contest |date=April 13, 2018 |accessdate=May 5, 2018}}</ref> As a result of the release of the Gym Retro library, OpenAI's Universe becomes deprecated.<ref>{{cite web |url=https://github.com/OpenAI/universe/commit/cc9ce6ec241821bfb0f3b85dd455bd36e4ee7a8c |publisher=GitHub |title=OpenAI/universe |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{dts|April 9}} || || Commitment || OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-charter/ |publisher=OpenAI Blog |title=OpenAI Charter |date=April 9, 2018 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/e5mFQGMc7JpechJak/OpenAI-charter |title=OpenAI charter |accessdate=May 5, 2018 |date=April 9, 2018 |author=wunan |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/8azk2n/d_OpenAI_charter/ |publisher=reddit |title=[D] OpenAI Charter • r/MachineLearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16794194 |title=OpenAI Charter |website=Hacker News |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://thenextweb.com/artificial-intelligence/2018/04/10/the-ai-company-elon-musk-co-founded-is-trying-to-create-sentient-machines/ |title=The AI company Elon Musk co-founded intends to create machines with real intelligence |publisher=The Next Web |date=April 10, 2018 |author=Tristan Greene |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|April 19}} || || Financial || ''{{W|The New York Times}}'' publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 {{W|Form 990}}. The salaries include $1.9 million paid to {{W|Ilya Sutskever}} and $800,000 paid to {{W|Ian Goodfellow}} (hired in March of that year).<ref>{{cite web |url=https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-OpenAI.html |date=April 19, 2018 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit |author=Cade Metz |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/dxnc76j/ |publisher=reddit |title="A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16880447 |title=gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit |website=Hacker News |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{Dts|May 2}} || Safety || Publication || The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv. The paper proposes training agents via self play on a zero sum debate game, in order to address tasks that are too complicated for a human to directly judge.<ref>{{cite web |url=https://arxiv.org/abs/1805.00899 |title=[1805.00899] AI safety via debate |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/debate/ |publisher=OpenAI Blog |title=AI Safety via Debate |date=May 3, 2018 |first1=Geoffrey |last1=Irving |first2=Dario |last2=Amodei |accessdate=May 5, 2018}}</ref>
|-
| 2018 || {{dts|May 16}} || {{w|Computation}} || Publication || OpenAI releases an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time.<ref>{{cite web |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{dts|June 11}} || {{w|Unsupervised learning}} || Research progress || OpenAI announces having obtained significant results on a suite of diverse language tasks with a scalable, task-agnostic system, which uses a combination of transformers and unsupervised pre-training.<ref>{{cite web |title=Improving Language Understanding with Unsupervised Learning |url=https://openai.com/blog/language-unsupervised/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{Dts|June 25}} || {{w|Neural network}} || Software release || OpenAI announces a set of AI algorithms able to hold their own as a team of five and defeat human amateur players at {{w|Dota 2}}, a multiplayer online battle arena video game popular in e-sports for its complexity and necessity for teamwork.<ref>{{cite web |last1=Gershgorn |first1=Dave |title=OpenAI built gaming bots that can work as a team with inhuman precision |url=https://qz.com/1311732/OpenAI-built-gaming-bots-that-can-work-as-a-team-with-inhuman-precision/ |website=qz.com |accessdate=14 June 2019}}</ref> In the algorithmic team, called OpenAI Five, each algorithm uses a {{w|neural network}} to learn both how to play the game, and how to cooperate with its AI teammates.<ref>{{cite web |last1=Knight |first1=Will |title=A team of AI algorithms just crushed humans in a complex computer game |url=https://www.technologyreview.com/s/611536/a-team-of-ai-algorithms-just-crushed-expert-humans-in-a-complex-computer-game/ |website=technologyreview.com |accessdate=14 June 2019}}</ref><ref>{{cite web |title=OpenAI’s bot can now defeat skilled Dota 2 teams |url=https://venturebeat.com/2018/06/25/OpenAI-trains-ai-to-defeat-teams-of-skilled-dota-2-players/ |website=venturebeat.com |accessdate=14 June 2019}}</ref>
|-
| 2018 || {{Dts|June 26}} || || Notable comment || {{w|Bill Gates}} comments on {{w|Twitter}}: {{Quote|AI bots just beat humans at the video game Dota 2. That’s a big deal, because their victory required teamwork and collaboration – a huge milestone in advancing artificial intelligence.}}<ref>{{cite web |last1=Papadopoulos |first1=Loukia |title=Bill Gates Praises Elon Musk-Founded OpenAI’s Latest Dota 2 Win as “Huge Milestone” in Field |url=https://interestingengineering.com/bill-gates-praises-elon-musk-founded-OpenAIs-latest-dota-2-win-as-huge-milestone-in-field |website=interestingengineering.com |accessdate=14 June 2019}}</ref>
|-
| 2018 || {{Dts|July 18}} || || Commitment || {{w|Elon Musk}}, along with other tech leaders, signs a pledge promising to not develop “lethal autonomous weapons.” They also call on governments to institute laws against such technology. The pledge is organized by the {{w|Future of Life Institute}}, an outreach group focused on tackling existential risks.<ref>{{cite web |last1=Vincent |first1=James |title=Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems |url=https://www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute |website=theverge.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Locklear |first1=Mallory |title=DeepMind, Elon Musk and others pledge not to make autonomous AI weapons |url=https://www.engadget.com/2018/07/18/deepmind-elon-musk-pledge-autonomous-ai-weapons/ |website=engadget.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=Elon Musk, his arch nemesis DeepMind swear off AI weapons |url=https://www.theregister.co.uk/2018/07/19/keep_ai_nonlethal/ |website=theregister.co.uk |accessdate=1 June 2019}}</ref>
|-
| 2018 || {{dtsDts|July 2630}} || Publication Robotics || Software release || OpenAI publishes paper announces a robotics system that can manipulate objects with humanlike dexterity. The system is able to develop these behaviors all on variational option discovery algorithmsits own. The paper highlights It uses a tight connection between variational option discovery methods reinforcement model, where the AI learns through trial and variational autoencoderserror, to direct robot hands in grasping and introduces Variational Autoencoding Learning of Options by Reinforcement (VALOR), a new method derived from the connectionmanipulating objects with great precision.<ref>{{cite web |last1title=Achiam OpenAI’s ‘state-of-the-art’ system gives robots humanlike dexterity |first1url=Joshua https://venturebeat.com/2018/07/30/OpenAIs-state-of-the-art-system-gives-robots-humanlike-dexterity/ |last2website=Edwards venturebeat.com |first2accessdate=Harrison 14 June 2019}}</ref><ref>{{cite web |last3last1=Amodei Coldewey |first3first1=Dario Devin |last4title=Abbeel OpenAI’s robotic hand doesn’t need humans to teach it human behaviors |first4url=Pieter |title=Variational Option Discovery Algorithms https://techcrunch.com/2018/07/30/OpenAIs-robotic-hand-doesnt-need-humans-to-teach-it-human-behaviors/ |website=arxivtechcrunch.org com |accessdate=26 March 202014 June 2019}}</ref>
|-
| 2018 || {{Dts|August 7}} || || Achievement || Algorithmic team OpenAI Five defeats a team of semi-professional {{w|Dota 2}} players ranked in the 99.95th percentile in the world, in their second public match in the traditional five-versus-five settings, hosted in {{w|San Francisco}}.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov |first1=Vlad |title=The OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |website=engadget.com |accessdate=15 June 2019}}</ref>
|-
| 2018 || {{dts|August 16}} || {{w|Arboricity}} || Publication || OpenAI publishes a paper on constant arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.<ref>{{cite web |last1=Chu |first1=Timothy |last2=Cohen |first2=Michael B. |last3=Pachocki |first3=Jakub W. |last4=Peng |first4=Richard |title=Constant Arboricity Spectral Sparsifiers |url=https://arxiv.org/abs/1808.05662 |website=arxiv.org |accessdate=26 March 2020}}</ref>
|-
| 2018 || {{Dtsdts|August 7September}} || Achievement || Algorithmic team OpenAI Five defeats a team of semi-professional {{w|Dota 2}} players ranked in the 99.95th percentile in the world, in their second public match in the traditional five-versus-five settings, hosted in {{w|San Francisco}}.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov Team |first1=Vlad |title=The Dario Amodei becomes OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge's Research Director.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |websitename=engadget.com |accessdate=15 June 2019}}<"Dario Amodeiy"/ref>
|-
| 2018 || {{dts|October 31}} || {{w|Reinforcement learning}} || Software release || OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging {{w|reinforcement learning}} agents to explore their environments through curiosity, which for the first time exceeds average human performance on the videogame Montezuma’s Revenge.<ref>{{cite web |title=Reinforcement Learning with Prediction-Based Rewards |url=https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2018 || {{Dts|November 8}} || {{w|Reinforcement learning}} || Education || OpenAI launches Spinning Up, an educational resource designed to teach anyone deep reinforcement learning. The program consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.<ref>{{cite web |title=Spinning Up in Deep RL |url=https://OpenAI.com/blog/spinning-up-in-deep-rl/ |website=OpenAI.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Ramesh |first1=Prasad |title=OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners |url=https://hub.packtpub.com/OpenAI-launches-spinning-up-a-learning-resource-for-potential-deep-learning-practitioners/ |website=hub.packtpub.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches reinforcement learning training to prepare for artificial general intelligence |url=https://flipboard.com/@venturebeat/OpenAI-launches-reinforcement-training-to-prepare-for-artificial-genera/a-TxuPmdApTGSzPr0ny7qXsw%3Aa%3A2919225365-bafeac8636%2Fventurebeat.com |website=flipboard.com |accessdate=15 June 2019}}</ref>
|-
| 2018 || {{Dts|November 9}} || || Notable comment || {{w|Ilya Sutskever}} gives a speech at the AI Frontiers Conference in {{w|San Jose}}, and declares: {{Quote|We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.}}<ref>{{cite web |title=OpenAI Founder: Short-Term AGI Is a Serious Possibility |url=https://syncedreview.com/2018/11/13/OpenAI-founder-short-term-agi-is-a-serious-possibility/ |website=syncedreview.com |accessdate=15 June 2019}}</ref>
|-
| 2018 || {{Dts|November 19}} || {{w|Reinforcement learning}} || Partnership || OpenAI partners with {{w|DeepMind}} on a new paper that proposes a method to train {{w|reinforcement learning}} agents in ways that enable them to surpass human performance. The paper, titled ''Reward learning from human preferences and demonstrations in Atari'', introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.<ref>{{cite web |last1=Rodriguez |first1=Jesus |title=What’s New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning |url=https://towardsdatascience.com/whats-new-in-deep-learning-research-OpenAI-and-deepmind-join-forces-to-achieve-superhuman-48e7d1accf85 |website=towardsdatascience.com |accessdate=29 June 2019}}</ref>
|-
| 2018 || {{dts|December 4}} || {{w|Reinforcement learning}} || Research progress || OpenAI announces having discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks.<ref>{{cite web |title=How AI Training Scales |url=https://openai.com/blog/science-of-ai/ |website=openai.com |accessdate=4 April 2020}}</ref>
|-
| 2018 || {{Dts|December 6}} || {{w|Reinforcement learning}} || Software release || OpenAI releases CoinRun, a training environment designed to test the adaptability of reinforcement learning agents.<ref>{{cite web |title=OpenAI teaches AI teamwork by playing hide-and-seek |url=https://venturebeat.com/2019/09/17/OpenAI-and-deepmind-teach-ai-to-work-as-a-team-by-playing-hide-and-seek/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI’s CoinRun tests the adaptability of reinforcement learning agents |url=https://venturebeat.com/2018/12/06/OpenAIs-coinrun-tests-the-adaptability-of-reinforcement-learning-agents/ |website=venturebeat.com |accessdate=24 February 2020}}</ref>
|-
| 2019 || {{Dts|February 14}} || {{w|Natural-language generation}} || Software release || OpenAI unveils its language-generating system GPT-2, a system able to write news articles, answer reading-comprehension questions, and show promise at tasks like translation.<ref>{{cite web |title=An AI helped us write this article |url=https://www.vox.com/future-perfect/2019/2/14/18222270/artificial-intelligence-open-ai-natural-language-processing |website=vox.com |accessdate=28 June 2019}}</ref> However, neither the data nor the parameters of the model are released, owing to expressed concerns about potential abuse.<ref>{{cite web |last1=Lowe |first1=Ryan |title=OpenAI’s GPT-2: the model, the hype, and the controversy |url=https://towardsdatascience.com/OpenAIs-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8 |website=towardsdatascience.com |accessdate=10 July 2019}}</ref> OpenAI initially tries to communicate the risk posed by this technology.<ref name="ssfr"/>
|-
| 2019 || {{dts|February 19}} || Safety || Publication || "AI Safety Needs Social Scientists" is published. The paper argues that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved.<ref>{{cite journal |last1=Irving |first1=Geoffrey |last2=Askell |first2=Amanda |title=AI Safety Needs Social Scientists |doi=10.23915/distill.00014 |url=https://distill.pub/2019/safety-needs-social-scientists/}}</ref><ref>{{cite web |title=AI Safety Needs Social Scientists |url=https://openai.com/blog/ai-safety-needs-social-scientists/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|March 4}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Neural MMO ("massively multiplayer online"), a multiagent game environment for {{w|reinforcement learning}} agents. The platform supports a large, variable number of agents within a persistent and open-ended task.<ref>{{cite web |title=Neural MMO: A Massively Multiagent Game Environment |url=https://openai.com/blog/neural-mmo/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|March 6}} || Software release || OpenAI introduces activation atlases, created in collaboration with {{w|Google}} researchers. Activation atlases comprise a new technique for visualizing what interactions between neurons can represent.<ref>{{cite web |title=Introducing Activation Atlases |url=https://openai.com/blog/introducing-activation-atlases/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{Dts|March 11}} || Reorganization || OpenAI announces the creation of OpenAI LP, a new “capped-profit” company owned and controlled by the OpenAI nonprofit organization’s board of directors. The new company is intended to allow OpenAI to rapidly increase its investments in compute and talent while including checks and balances to actualize its mission.<ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches new company for funding safe artificial general intelligence |url=https://venturebeat.com/2019/03/11/OpenAI-launches-new-company-for-funding-safe-artificial-general-intelligence/ |website=venturebeat.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Trazzi |first1=Michaël |title=Considerateness in OpenAI LP Debate |url=https://medium.com/@MichaelTrazzi/considerateness-in-OpenAI-lp-debate-6eb3bf4c5341 |website=medium.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{dts|March 21}} || Software release || OpenAI announces progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization ability than existing models.<ref>{{cite web |title=Implicit Generation and Generalization Methods for Energy-Based Models |url=https://openai.com/blog/energy-based-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{Dts|March}} || Team || {{w|Sam Altman}} leaves his role as the president of {{w|Y Combinator}} to become the {{w|Chief executive officer}} of OpenAI.<ref>{{cite web |title=Sam Altman’s leap of faith |url=https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator |url=https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2019 || {{Dts|April 23}} || {{w|Deep learning}} || Publication || OpenAI publishes a paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. It utilizes an improved algorithm based on the attention mechanism, able to extract patterns from sequences 30 times longer than previously possible.<ref>{{cite web |last1=Alford |first1=Anthony |title=OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences |url=https://www.infoq.com/news/2019/05/OpenAI-sparse-transformers/ |website=infoq.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x |url=https://medium.com/syncedreview/OpenAI-sparse-transformer-improves-predictable-sequence-length-by-30x-5a65ef2592b9 |website=medium.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=Generative Modeling with Sparse Transformers |url=https://OpenAI.com/blog/sparse-transformer/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{Dts|April 25}} || {{w|Neural network}} || Software release || OpenAI announces MuseNet, a deep {{w|neural network}} able to generate 4-minute musical compositions with 10 different instruments and to combine multiple styles, from [[w:Country music|country]] to {{w|Mozart}} to {{w|The Beatles}}. The neural network uses general-purpose unsupervised technology.<ref>{{cite web |title=MuseNet |url=https://OpenAI.com/blog/musenet/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{Dts|April 27}} || Event hosting || OpenAI hosts the OpenAI Robotics Symposium 2019.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref>
|-
| 2019 || {{Dts|May}} || {{w|Natural-language generation}} || Software release || OpenAI releases a limited version of its language-generating system GPT-2, more powerful than the extremely restricted initial release (though still significantly limited compared to the full model), which had been withheld over concerns that it would be abused.<ref>{{cite web |title=A poetry-writing AI has just been unveiled. It’s ... pretty good. |url=https://www.vox.com/2019/5/15/18623134/OpenAI-language-ai-gpt2-poetry-try-it |website=vox.com |accessdate=11 July 2019}}</ref> The potential of the new system is recognized by various experts.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI's new multitalented AI writes, translates, and slanders |url=https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-OpenAI-gpt2 |website=theverge.com |accessdate=11 July 2019}}</ref>
|-
| 2019 || {{dts|June 13}} || {{w|Natural-language generation}} || Coverage || Connor Leahy publishes an article entitled ''The Hacker Learns to Trust'', which discusses the work of OpenAI, particularly the potential danger of its language-generating system GPT-2. Leahy highlights: "Because this isn’t just about GPT2. What matters is that at some point in the future, someone will create something truly dangerous and there need to be commonly accepted safety norms before that happens."<ref name="ssfr">{{cite web |title=The Hacker Learns to Trust |url=https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51 |website=medium.com |accessdate=5 May 2020}}</ref>
|-
| 2019 || {{dts|July 22}} || Partnership || OpenAI announces an exclusive partnership with {{w|Microsoft}}. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using {{w|Microsoft Azure}} (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft will also be OpenAI's "preferred partner for commercializing new AI technologies."<ref>{{cite web |url = https://OpenAI.com/blog/microsoft/|title = Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI|date = July 22, 2019|accessdate = July 26, 2019|publisher = OpenAI}}</ref><ref>{{cite web|url = https://news.microsoft.com/2019/07/22/OpenAI-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/|title = OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies|date = July 22, 2019|accessdate = July 26, 2019|publisher = Microsoft}}</ref><ref>{{cite web|url = https://www.businessinsider.com/microsoft-OpenAI-artificial-general-intelligence-investment-2019-7|title = Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence|last = Chan|first = Rosalie|date = July 22, 2019|accessdate = July 26, 2019|publisher = Business Insider}}</ref><ref>{{cite web|url = https://www.forbes.com/sites/mohanbirsawhney/2019/07/24/the-real-reasons-microsoft-invested-in-OpenAI/|title = The Real Reasons Microsoft Invested In OpenAI|last = Sawhney|first = Mohanbir|date = July 24, 2019|accessdate = July 26, 2019|publisher = Forbes}}</ref>
|-
| 2019 || {{dts|August 20}} || {{w|Natural-language generation}} || Software release || OpenAI announces a plan to release a version of its language-generating system GPT-2, which stirred controversy after its initial release in February.<ref>{{cite web |title=OpenAI releases curtailed version of GPT-2 language model |url=https://venturebeat.com/2019/08/20/OpenAI-releases-curtailed-version-of-gpt-2-language-model/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI Just Released an Even Scarier Fake News-Writing Algorithm |url=https://interestingengineering.com/OpenAI-just-released-an-even-scarier-fake-news-writing-algorithm |website=interestingengineering.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OPENAI JUST RELEASED A NEW VERSION OF ITS FAKE NEWS-WRITING AI |url=https://futurism.com/the-byte/OpenAI-new-version-writing-ai |website=futurism.com |accessdate=24 February 2020}}</ref>
|-
| 2019 || {{dts|September 17}} || Research progress || OpenAI announces having observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training, the agents were able to build a series of six distinct strategies and counterstrategies, some of which the researchers did not know the environment supported.<ref>{{cite web |title=Emergent Tool Use from Multi-Agent Interaction |url=https://openai.com/blog/emergent-tool-use/ |website=openai.com |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Emergent Tool Use From Multi-Agent Autocurricula |url=https://arxiv.org/abs/1909.07528 |website=arxiv.org |accessdate=4 April 2020}}</ref>
|-
| 2019 || {{dts|October 16}} || {{w|Neural network}}s || Research progress || OpenAI announces having trained a pair of {{w|neural network}}s to solve the {{w|Rubik’s Cube}} with a human-like robot hand. The experiment demonstrates that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot.<ref>{{cite web |title=Solving Rubik's Cube with a Robot Hand |url=https://arxiv.org/abs/1910.07113 |website=arxiv.org |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Solving Rubik’s Cube with a Robot Hand |url=https://openai.com/blog/solving-rubiks-cube/ |website=openai.com |accessdate=4 April 2020}}</ref>
|-
| 2019 || {{dts|November 5}} || {{w|Natural-language generation}} || Software release || OpenAI releases the largest version (1.5B parameters) of its language-generating system GPT-2, along with code and model weights to facilitate detection of outputs of GPT-2 models.<ref>{{cite web |title=GPT-2: 1.5B Release |url=https://openai.com/blog/gpt-2-1-5b-release/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|November 21}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Safety Gym, a suite of environments and tools for measuring progress towards {{w|reinforcement learning}} agents that respect safety constraints while training.<ref>{{cite web |title=Safety Gym |url=https://openai.com/blog/safety-gym/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|December 3}} || {{w|Reinforcement learning}} || Software release || OpenAI releases Procgen Benchmark, a set of 16 simple-to-use procedurally-generated environments (CoinRun, StarPilot, CaveFlyer, Dodgeball, FruitBot, Chaser, Miner, Jumper, Leaper, Maze, BigFish, Heist, Climber, Plunder, Ninja, and BossFight) which provide a direct measure of how quickly a {{w|reinforcement learning}} agent learns generalizable skills. Procgen Benchmark is designed to prevent AI model overfitting.<ref>{{cite web |title=Procgen Benchmark |url=https://openai.com/blog/procgen-benchmark/ |website=openai.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=OpenAI’s Procgen Benchmark prevents AI model overfitting |url=https://venturebeat.com/2019/12/03/openais-procgen-benchmark-overfitting/ |website=venturebeat.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=GENERALIZATION IN REINFORCEMENT LEARNING – EXPLORATION VS EXPLOITATION |url=https://analyticsindiamag.com/generalization-in-reinforcement-learning-exploration-vs-exploitation/ |website=analyticsindiamag.com |accessdate=2 March 2020}}</ref>
|-
| 2019 || {{dts|December 4}} || Publication || "Deep Double Descent: Where Bigger Models and More Data Hurt" is submitted to the {{w|ArXiv}}. The paper shows that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as the model size increases, performance first gets worse and then gets better.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Bansal |first3=Yamini |last4=Yang |first4=Tristan |last5=Barak |first5=Boaz |last6=Sutskever |first6=Ilya |title=Deep Double Descent: Where Bigger Models and More Data Hurt |website=arxiv.org |url=https://arxiv.org/abs/1912.02292 |accessdate=5 April 2020}}</ref> The paper is summarized on the OpenAI blog.<ref>{{cite web |url = https://openai.com/blog/deep-double-descent/|title = Deep Double Descent|publisher = OpenAI|date = December 5, 2019|accessdate = May 23, 2020}}</ref> MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,<ref>{{cite web|url = https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent|title = Understanding “Deep Double Descent”|date = December 5, 2019|accessdate = 24 May 2020|publisher = LessWrong|last = Hubinger|first = Evan}}</ref> and follows up with a post on the AI safety implications.<ref>{{cite web|url = https://www.lesswrong.com/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around|title = Inductive biases stick around|date = December 18, 2019|accessdate = 24 May 2020|last = Hubinger|first = Evan}}</ref>
|-
| 2019 || {{dts|December}} || Team || Dario Amodei is promoted to OpenAI's Vice President of Research.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2020 || {{dts|January 30}} || {{w|Deep learning}} || Software adoption || OpenAI announces its migration to {{w|Facebook}}'s {{w|PyTorch}} {{w|machine learning}} framework for future projects, setting it as its new standard deep learning framework.<ref>{{cite web |title=OpenAI sets PyTorch as its new standard deep learning framework |url=https://jaxenter.com/OpenAI-pytorch-deep-learning-framework-167641.html |website=jaxenter.com |accessdate=23 February 2020}}</ref><ref>{{cite web |title=OpenAI goes all-in on Facebook’s Pytorch machine learning framework |url=https://venturebeat.com/2020/01/30/OpenAI-facebook-pytorch-google-tensorflow/ |website=venturebeat.com |accessdate=23 February 2020}}</ref>
|-
| 2020 || {{dts|February 5}} || Safety || Publication || Beth Barnes and Paul Christiano publish ''Writeup: Progress on AI Safety via Debate'' on <code>lesswrong.com</code>, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.<ref>{{cite web |title=Writeup: Progress on AI Safety via Debate |url=https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Things_we_did_in_Q3 |website=lesswrong.com |accessdate=16 May 2020}}</ref>
|-
| 2019 || {{dts|March}} || Staff || Ilge Akkaya joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Ilge Akkaya |url=https://www.linkedin.com/in/ilge-akkaya-311b4631/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{Dts|March}} || Staff || {{w|Sam Altman}} leaves his role as the president of {{w|Y Combinator}} to become the {{w|Chief executive officer}} of OpenAI.<ref>{{cite web |title=Sam Altman’s leap of faith |url=https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator |url=https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ |website=techcrunch.com |accessdate=24 February 2020}}</ref>|-| 2019 || {{dts|March}} || Staff || Alex Paino joins OpenAI as Member of Technical Staff.<ref>{{cite web |title=Alex Paino |url=https://www.linkedin.com/in/atpaino/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|March}} || Staff || Karson Elmgren joins OpenAI at People Operations.<ref>{{cite web |title=Karson Elmgren |url=https://www.linkedin.com/in/karson-elmgren-32417732/ |website=linkedin.com |accessdate=29 February 2020}}</ref>|-| 2019 || {{Dts|April 23}} || Publication || OpenAI publishes paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. 
It utilizes an improved algorithm based on the attention mechanism, being able to extract patterns from sequences 30 times longer than possible previously.<ref>{{cite web |last1=Alford |first1=Anthony |title=OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences |url=https://www.infoq.com/news/2019/05/OpenAI-sparse-transformers/ |website=infoq.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x |url=https://medium.com/syncedreview/OpenAI-sparse-transformer-improves-predictable-sequence-length-by-30x-5a65ef2592b9 |website=medium.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=Generative Modeling with Sparse Transformers |url=https://OpenAI.com/blog/sparse-transformer/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>|-| 2019 || {{Dts|April 25}} || AI development || OpenAI announces MuseNet, a deep {{w|neural network}} able to generate 4-minute musical compositions with 10 different instruments, and can combine multiple styles from [[w:Country music|country]] to {{w|Mozart}} to {{w|The Beatles}}. 
The neural network uses general-purpose unsupervised technology.<ref>{{cite web |title=MuseNet |url=https://OpenAI.com/blog/musenet/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>|-| 2019 || {{Dts|April 27}} || Event host || OpenAI hosts the OpenAI Robotics Symposium 2019.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref> |-| 2019 || {{dts|April}} || Staff || Todor Markov joins OpenAI as Machine Learning Researcher.<ref>{{cite web |title=Todor Markov |url=https://www.linkedin.com/in/todor-markov-4aa38a67/ |website=linkedin.com/ |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|May 3}} || Publication || OpenAI publishes study on the transfer of adversarial robustness of [[w:deep learning|deep neural networks]] between different perturbation types.<ref>{{cite web |last1=Kang |first1=Daniel |last2=Sun |first2=Yi |last3=Brown |first3=Tom |last4=Hendrycks |first4=Dan |last5=Steinhardt |first5=Jacob |title=Transfer of Adversarial Robustness Between Perturbation Types |url=https://arxiv.org/abs/1905.01034 |website=arxiv.org |accessdate=25 March 2020}}</ref>|-| 2019 || {{Dts|May}} || AI development || OpenAI releases a limited version of its language-generating system GPT-2. This version is more powerful (though still significantly limited compared to the whole thing) than the extremely restricted initial release of the system, citing concerns that it’d be abused.<ref>{{cite web |title=A poetry-writing AI has just been unveiled. It’s ... pretty good. 
|url=https://www.vox.com/2019/5/15/18623134/OpenAI-language-ai-gpt2-poetry-try-it |website=vox.com |accessdate=11 July 2019}}</ref> The potential of the new system is recognized by various experts.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI's new multitalented AI writes, translates, and slanders |url=https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-OpenAI-gpt2 |website=theverge.com |accessdate=11 July 2019}}</ref>|-| 2019 || {{dts|May 28}} || Publication || OpenAI publishes study on the dynamics of Stochastic Gradient Descent (SGD) in learning [[w:Deep learning|deep neural networks]] for several real and synthetic classification tasks.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Kalimeris |first3=Dimitris |last4=Yang |first4=Tristan |last5=Edelman |first5=Benjamin L. |last6=Zhang |first6=Fred |last7=Barak |first7=Boaz |title=SGD on Neural Networks Learns Functions of Increasing Complexity |url=https://arxiv.org/abs/1905.11604 |website=arxiv.org |accessdate=25 March 2020}}</ref>|-| 2019 || {{dts|June}} || Staff || Long Ouyang joins OpenAI as Research Scientist.<ref>{{cite web |title=Long Ouyang |url=https://www.linkedin.com/in/longouyang/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|July 10}} || Publication || OpenAI publishes paper arguing that competitive pressures could incentivize AI companies to underinvest in ensuring their systems are safe, secure, and have a positive social impact.<ref>{{cite web |last1=Askell |first1=Amanda |last2=Brundage |first2=Miles |last3=Hadfield |first3=Gillian |title=The Role of Cooperation in Responsible AI Development |url=https://arxiv.org/abs/1907.04534 |website=arxiv.org |accessdate=25 March 2020}}</ref>|-| 2019 || {{dts|July 22}} || Partnership || OpenAI announces an exclusive partnership with {{w|Microsoft}}. 
As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using {{w|Microsoft Azure}} (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft will also be OpenAI's "preferred partner for commercializing new AI technologies."<ref>{{cite web|url = https://OpenAI.com/blog/microsoft/|title = Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI|date = July 22, 2019|accessdate = July 26, 2019|publisher = OpenAI}}</ref><ref>{{cite web|url = https://news.microsoft.com/2019/07/22/OpenAI-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/|title = OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies|date = July 22, 2019|accessdate = July 26, 2019|publisher = Microsoft}}</ref><ref>{{cite web|url = https://www.businessinsider.com/microsoft-OpenAI-artificial-general-intelligence-investment-2019-7|title = Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence|last = Chan|first= Rosalie|date = July 22, 2019|accessdate = July 26, 2019|publisher = Business Insider}}</ref><ref>{{cite web|url = https://www.forbes.com/sites/mohanbirsawhney/2019/07/24/the-real-reasons-microsoft-invested-in-OpenAI/|title = The Real Reasons Microsoft Invested In OpenAI|last = Sawhney|first = Mohanbir|date = July 24, 2019|accessdate = July 26, 2019|publisher = Forbes}}</ref>|-| 2019 || {{dts|July}} || Staff || Irene Solaiman joins OpenAI as Policy Researcher.<ref>{{cite web |title=Irene Solaiman |url=https://www.linkedin.com/in/irene-solaiman/ |website=linkedin.com |accessdate=28 February 2020}}</ref> |-| 2019 || {{dts|August 20}} || AI development || OpenAI releases a larger version of its language-generating system GPT-2, which had stirred controversy upon its initial release in February.<ref>{{cite web 
|title=OpenAI releases curtailed version of GPT-2 language model |url=https://venturebeat.com/2019/08/20/OpenAI-releases-curtailed-version-of-gpt-2-language-model/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI Just Released an Even Scarier Fake News-Writing Algorithm |url=https://interestingengineering.com/OpenAI-just-released-an-even-scarier-fake-news-writing-algorithm |website=interestingengineering.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OPENAI JUST RELEASED A NEW VERSION OF ITS FAKE NEWS-WRITING AI |url=https://futurism.com/the-byte/OpenAI-new-version-writing-ai |website=futurism.com |accessdate=24 February 2020}}</ref>|-| 2019 || {{dts|August}} || Staff || Melanie Subbiah joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Melanie Subbiah |url=https://www.linkedin.com/in/melanie-subbiah-7b702a8a/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|August}} || Staff || Cullen O'Keefe joins OpenAI as Research Scientist (Policy).<ref>{{cite web |title=Cullen O'Keefe |url=https://www.linkedin.com/in/ccokeefe-law/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|November}} || Staff || Ryan Lowe joins OpenAI as Member Of Technical Staff.<ref>{{cite web |title=Ryan Lowe |url=https://www.linkedin.com/in/ryan-lowe-ab67a267/ |website=linkedin.com |accessdate=28 February 2020}}</ref>|-| 2019 || {{dts|December 3}} || Software release || OpenAI releases Procgen Benchmark, a set of 16 simple-to-use procedurally-generated environments (CoinRun, StarPilot, CaveFlyer, Dodgeball, FruitBot, Chaser, Miner, Jumper, Leaper, Maze, BigFish, Heist, Climber, Plunder, Ninja, and BossFight) which provide a direct measure of how quickly a {{w|reinforcement learning}} agent learns generalizable skills. 
Procgen Benchmark is designed to prevent AI model overfitting.<ref>{{cite web |title=Procgen Benchmark |url=https://openai.com/blog/procgen-benchmark/ |website=openai.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=OpenAI’s Procgen Benchmark prevents AI model overfitting |url=https://venturebeat.com/2019/12/03/openais-procgen-benchmark-overfitting/ |website=venturebeat.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=GENERALIZATION IN REINFORCEMENT LEARNING – EXPLORATION VS EXPLOITATION |url=https://analyticsindiamag.com/generalization-in-reinforcement-learning-exploration-vs-exploitation/ |website=analyticsindiamag.com |accessdate=2 March 2020}}</ref>|-| 2019 || {{dts|December}} || Staff || Dario Amodei is promoted to Vice President of Research at OpenAI.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>|-| 2020 || {{dts|January 23}} || Publication || OpenAI publishes study on empirical scaling laws for language model performance on the cross-entropy loss.<ref>{{cite web |last1=Kaplan |first1=Jared |last2=McCandlish |first2=Sam |last3=Henighan |first3=Tom |last4=Brown |first4=Tom B. |last5=Chess |first5=Benjamin |last6=Child |first6=Rewon |last7=Gray |first7=Scott |last8=Radford |first8=Alec |last9=Wu |first9=Jeffrey |last10=Amodei |first10=Dario |title=Scaling Laws for Neural Language Models |url=https://arxiv.org/abs/2001.08361 |website=arxiv.org |accessdate=25 March 2020}}</ref>|-| 2020 || {{dts|January 30}} || Software adoption || OpenAI announces migration to Facebook's {{w|PyTorch}} {{w|machine learning}} framework in future projects, setting it as its new standard deep learning framework.<ref>{{cite web |title=OpenAI sets PyTorch as its new standard deep learning framework |url=https://jaxenter.com/OpenAI-pytorch-deep-learning-framework-167641.html |website=jaxenter.com |accessdate=23 February 2020}}</ref><ref>{{cite web |title=OpenAI goes all-in on Facebook’s Pytorch machine learning framework |url=https://venturebeat.com/2020/01/30/OpenAI-facebook-pytorch-google-tensorflow/ |website=venturebeat.com |accessdate=23 February 2020}}</ref>|-| 2020 || {{dts|February 17}} || Coverage || AI reporter Karen Hao at ''MIT Technology Review'' publishes an article on OpenAI titled ''The messy, secretive reality behind OpenAI’s bid to save the world'', which suggests the company is abandoning its stated commitment to transparency in order to outpace competitors.
In response, {{w|Elon Musk}} criticizes OpenAI, saying it lacks transparency.<ref name="Aaron">{{cite web |last1=Holmes |first1=Aaron |title=Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high' |url=https://www.businessinsider.com/elon-musk-criticizes-OpenAI-dario-amodei-artificial-intelligence-safety-2020-2 |website=businessinsider.com |accessdate=29 February 2020}}</ref> On his {{w|Twitter}} account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.<ref>{{cite web |title=Elon Musk |url=https://twitter.com/elonmusk/status/1229546206948462597 |website=twitter.com |accessdate=29 February 2020}}</ref>
|-
|}
===How the timeline was built===
The initial version of the timeline was written by [[User:Issa|Issa Rice]]. It has been expanded considerably by [[User:Sebastian|Sebastian]].
{{funding info}} is available.
===What the timeline is still missing===
 
===Timeline update strategy===