{{focused coverage period|end-date = December 2024}}

This is a '''timeline of {{w|OpenAI}}''', an {{w|artificial intelligence}} research organization based in the United States. It comprises both a non-profit entity called OpenAI Incorporated and a for-profit subsidiary called OpenAI Limited Partnership. OpenAI's stated goals are to conduct AI research and contribute to the advancement of friendly AI, promoting its development and positive impact.

== Sample questions ==

The following are some interesting questions that can be answered by reading this timeline:

* What are some significant events prior to the creation of OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Prelude".
** You will see some events involving key people like {{w|Elon Musk}} and {{w|Sam Altman}} that would eventually lead to the creation of OpenAI.
* What are the various papers and posts published by OpenAI on their research?
** Sort the full timeline by "Event type" and look for the group of rows with value "Research".
** You will see mostly papers submitted to {{w|ArXiv}} by OpenAI-affiliated researchers, as well as blog posts.
* What are the various toolkits, implementations, algorithms, systems, and software released by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Product release".
** You will see a variety of releases, some of them open-sourced.
** You will also see some discoveries and other significant results obtained by OpenAI.
* What are some updates mentioned in the timeline?
** Sort the full timeline by "Event type" and look for the group of rows with value "Product update".
* Who are some notable team members who have joined OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Team".
** You will see the names of people who joined and their roles.
* What are the various partnerships between OpenAI and other organizations?
** Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
** You will see collaborations with organizations like {{w|DeepMind}} and {{w|Microsoft}}.
* What are some significant donations granted to OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
** You will see names like the {{w|Open Philanthropy Project}} and {{w|Nvidia}}, among others.
* What are some notable events hosted by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Event hosting".
* What are some other publications by OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Publication".
** You will see a number of publications that do not specifically describe OpenAI's scientific research but serve other purposes, including recommendations and contributions.
* What are some notable publications by third parties about OpenAI?
** Sort the full timeline by "Event type" and look for the group of rows with value "Coverage".
* Other events are described under the following types: "Achievement", "Advocacy", "Background", "Collaboration", "Commitment", "Competition", "Congressional hearing", "Education", "Financial", "Integration", "Interview", "Notable comment", "Open sourcing", "Product withdrawal", "Reaction", "Recruitment", "Software adoption", and "Testing".

== Big picture ==
  
 
{| class="wikitable"
! Time period (approximately) !! Development summary !! More details
|-
| 2015–2017 || Early years || OpenAI is established as a {{w|nonprofit organization}} with the mission to ensure that {{w|artificial general intelligence}} (AGI) benefits all of humanity. Its co-founders include {{w|Elon Musk}}, {{w|Sam Altman}}, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba. During this period, the organization focuses on foundational artificial intelligence research, publishing influential papers and open-sourcing tools like OpenAI Gym<ref name="Gym">{{cite web |title=OpenAI Gym Beta |url=https://openai.com/index/openai-gym-beta/ |website=openai.com |accessdate=15 December 2024}}</ref>, designed for {{w|reinforcement learning}}.
|-
| 2018–2019 || Growth and expansion || OpenAI broadens its research scope and achieves significant breakthroughs in natural language processing and reinforcement learning. This period sees the introduction of Generative Pre-trained Transformers (GPTs), which are capable of tasks such as text generation and question answering. In 2019, OpenAI transitions to a "capped-profit" model (OpenAI LP) to attract funding, securing a $1 billion investment from {{w|Microsoft}}.<ref>{{cite web |title=Elon Musk Wanted an OpenAI for Profit |url=https://openai.com/index/elon-musk-wanted-an-openai-for-profit/ |website=openai.com |accessdate=15 December 2024}}</ref> This partnership provides access to the {{w|Microsoft Azure}} cloud platform for AI training. Other notable developments include the cautious release of GPT-2, due to concerns about potential misuse of its text generation capabilities.
|-
| 2020–2021 || Launch of {{w|GPT-3}} and commercialization || In June 2020, OpenAI releases GPT-3, its most advanced language model at the time, which gains attention for its ability to generate coherent and human-like text. OpenAI introduces an API, enabling developers to integrate GPT-3 into various applications. The organization focuses on ethical AI development and forms partnerships to embed GPT-3 capabilities into tools like {{w|Microsoft Teams}} and {{w|Power Apps}}. In 2021, OpenAI introduces Codex, a specialized model designed to translate natural language into programming code, which powers tools like {{w|GitHub Copilot}}.
|-
| 2022 || Launch of ChatGPT and further advancements || OpenAI launches ChatGPT, based on a fine-tuned version of GPT-3.5, in late 2022.<ref>{{cite web |title=GPT-3.5 vs. GPT-4: Biggest Differences to Consider |url=https://www.techtarget.com/searchenterpriseai/tip/GPT-35-vs-GPT-4-Biggest-differences-to-consider |website=techtarget.com |accessdate=15 December 2024}}</ref> The tool revolutionizes conversational AI by offering practical and accessible applications for both individual and professional users. ChatGPT’s widespread adoption supports the launch of OpenAI’s subscription service, {{w|ChatGPT Plus}}.
|-
| 2023 || GPT-4 and multimodal AI || OpenAI introduces {{w|GPT-4}} in March 2023, marking a significant advancement with its ability to process both text and image inputs.<ref>{{cite web |title=GPT-4 |url=https://openai.com/index/gpt-4/ |website=openai.com |accessdate=15 December 2024}}</ref> The model powers applications like ChatGPT (Pro version), offering enhanced reasoning and problem-solving capabilities. Other key developments include the release of DALL·E 3, an advanced image generation model integrated into ChatGPT, featuring capabilities like inpainting and prompt editing.<ref>{{cite web |title=DALL·E 3 is Now Available in ChatGPT Plus and Enterprise |url=https://openai.com/index/dall-e-3-is-now-available-in-chatgpt-plus-and-enterprise/ |website=openai.com |accessdate=15 December 2024}}</ref> OpenAI’s ongoing emphasis on safety and alignment results in improved measures to mitigate harmful outputs.
|-
| 2024–present || Continued innovation and accessibility || OpenAI focuses on expanding the accessibility of its tools, introducing custom GPTs that enable users to create personalized AI assistants. Voice interaction capabilities are added, enhancing usability for diverse applications. OpenAI strengthens partnerships with educational and governmental institutions to promote AI literacy and responsible AI deployment. The organization continues to prioritize AGI safety, collaborating with other entities to ensure secure advancements in the field of artificial intelligence.
|}

===Summary by year===
{| class="wikitable"
! Time period !! Development summary
|-
| 2015 || A group of influential individuals and organizations, including Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research, join forces to establish OpenAI. With a commitment of more than $1 billion, the organization expresses a strong dedication to advancing the field of AI for the betterment of humanity. They announce their intention to foster open collaboration by making their work accessible to the public and actively engaging with other institutions and researchers.<ref name="The History of OpenAI"/>
|-
| 2016 || OpenAI breaks from the norm by offering corporate-level salaries instead of the typical nonprofit-level salaries. It also releases OpenAI Gym, a platform dedicated to reinforcement learning research. Later in December, it introduces Universe, a platform that facilitates the measurement and training of AI's general intelligence across various games, websites, and applications.<ref name="The History of OpenAI"/>
|-
| 2017 || A significant portion of OpenAI's expenditure, amounting to $7.9 million, is allocated to cloud computing. By comparison, DeepMind's expenses for the same year reach $442 million.<ref name="The History of OpenAI"/>
|-
| 2018 || OpenAI shifts its focus toward more extensive research and development in AI. It introduces Generative Pre-trained Transformers (GPTs). These neural networks, inspired by the human brain, are trained on large amounts of human-generated text and can perform tasks like generating text and answering questions. In the same year, Elon Musk resigns from his board seat at OpenAI, citing a potential conflict of interest with his role as CEO of Tesla, which is developing AI for self-driving cars.<ref name="The History of OpenAI"/>
|-
| 2019 || OpenAI makes a transition from a non-profit organization to a for-profit model, with a capped profit limit of 100 times the investment made. This allows OpenAI LP to attract investment from venture funds and offer employees equity in the company. OpenAI forms a partnership with Microsoft, which invests $1 billion in the company. OpenAI also announces plans to license its technologies for commercial use. However, some researchers would criticize the shift to a for-profit status, raising concerns about the company's commitment to democratizing AI.<ref>{{cite web |last1=Romero |first1=Alberto |title=OpenAI Sold its Soul for $1 Billion |url=https://onezero.medium.com/openai-sold-its-soul-for-1-billion-cf35ff9e8cd4 |website=OneZero |access-date=1 June 2023 |language=en |date=13 June 2022}}</ref>
|-
| 2020 || OpenAI introduces GPT-3, a language model trained on extensive internet datasets. While its main function is to provide answers in natural language, it can also generate coherent text spontaneously and perform language translation. OpenAI also announces their plans to develop a commercial product centered around an API called "the API," which is closely connected to GPT-3.<ref name="The History of OpenAI"/>
|-
| 2021 || OpenAI introduces DALL-E, an advanced deep-learning model that has the ability to generate digital images by interpreting natural language descriptions.<ref name="The History of OpenAI">{{cite web |last1=O'Neill |first1=Sarah |title=The History of OpenAI |url=https://www.lxahub.com/stories/the-history-of-openai |website=www.lxahub.com |access-date=31 May 2023 |language=en}}</ref>
|-
| 2022 || OpenAI introduces ChatGPT<ref>{{cite web |last1=Bastian |first1=Matthias |title=OpenAI lost $540 million in 2022 developing ChatGPT and GPT-4 - Report |url=https://the-decoder.com/openai-lost-540-million-in-2022-developing-chatgpt-and-gpt-4-report/ |website=THE DECODER |access-date=31 May 2023 |date=5 May 2023}}</ref> which soon would become the fastest-growing app of all time.<ref>{{cite web |title=What is ChatGPT and why does it matter? Here's what you need to know |url=https://www.zdnet.com/article/what-is-chatgpt-and-why-does-it-matter-heres-everything-you-need-to-know/ |website=ZDNET |access-date=22 June 2023 |language=en}}</ref>
|-
| 2023 || OpenAI releases GPT-4 to ChatGPT Plus, marking a major AI advancement. The organization faces internal turmoil with CEO Sam Altman's brief dismissal. OpenAI attracts further investment, particularly from Microsoft, while continuing its pursuit of artificial general intelligence (AGI).
|-
| 2024 || OpenAI continues to advance AI technologies, including further developments of GPT-4. The organization focuses on ethical AI deployment, emphasizing safety and transparency. Collaborations, particularly with Microsoft, strengthen its resources for progressing toward artificial general intelligence (AGI). OpenAI faces ongoing discussions about governance and AI's societal impact.
|}
  
 
==Full timeline==

=== Inclusion criteria ===
The following events are selected for inclusion in the timeline:
* Most blog posts by OpenAI, many describing important advancements in their research.
* Product releases, including models and software in general.
* Partnerships.
We do ''not'' include:
* Comprehensive information on the arrivals and departures of team members.
* Many of OpenAI's research papers, which are not individually listed in the full timeline but can be found on the talk page as additional entries.
=== Timeline ===
{| class="sortable wikitable"
! Year !! Month and date !! Domain/key topic/caption !! Event type !! Details
|-
| 2014 || {{dts|October 22}}–24 || AI's existential threat || Prelude || During an interview at the AeroAstro Centennial Symposium, {{W|Elon Musk}}, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".<ref>{{cite web |url=https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat |author=Samuel Gibbs |date=October 27, 2014 |title=Elon Musk: artificial intelligence is our biggest existential threat |publisher=[[w:The Guardian|The Guardian]] |accessdate=July 25, 2017}}</ref><ref>{{cite web |url=http://webcast.amps.ms.mit.edu/fall2014/AeroAstro/index-Fri-PM.html |title=AeroAstro Centennial Webcast |accessdate=July 25, 2017 |quote=The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium}}</ref>
|-
| 2015 || {{dts|February 25}} || Superhuman AI threat || Prelude || {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".<ref>{{cite web |url=http://blog.samaltman.com/machine-intelligence-part-1 |title=Machine intelligence, part 1 |publisher=Sam Altman |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|May 6}} || Greg Brockman leaves Stripe || Prelude || Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of [[wikipedia:Stripe (company)|Stripe]]. In the post, in the section "What comes next" he writes "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".<ref>{{cite web |url=https://blog.gregbrockman.com/leaving-stripe |title=Leaving Stripe |first=Greg |last=Brockman |publisher=Greg Brockman on Svbtle |date=May 6, 2015 |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/stripes-cto-greg-brockman-is-leaving-the-company-2015-5 |date=May 6, 2015 |first=Biz |last=Carson |title=One of the first employees of $3.5 billion startup Stripe is leaving to form his own company |publisher=Business Insider |accessdate=May 6, 2018}}</ref>
|-
| 2015 || {{dts|June 4}} || Altman's AI safety concern || Prelude || At {{w|Airbnb}}'s Open Air 2015 conference, {{w|Sam Altman}}, president of [[w:Y Combinator (company)|Y Combinator]] who would later become a co-chair of OpenAI, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.<ref>{{cite web |url=http://www.businessinsider.com/sam-altman-y-combinator-talks-mega-bubble-nuclear-power-and-more-2015-6 |author=Matt Weinberger |date=June 4, 2015 |title=Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last |publisher=Business Insider |accessdate=July 27, 2017}}</ref>
|-
| 2015 || {{dts|July}} (approximate) || AI research dinner || Prelude || {{W|Sam Altman}} sets up a dinner in {{W|Menlo Park, California}} to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, {{W|Ilya Sutskever}}, and {{W|Elon Musk}}.<ref name="path-to-OpenAI">{{cite web |url=https://blog.gregbrockman.com/my-path-to-OpenAI |title=My path to OpenAI |date=May 3, 2016 |publisher=Greg Brockman on Svbtle |archive-url = https://web.archive.org/web/20240228183857/https://blog.gregbrockman.com/my-path-to-OpenAI|archive-date = February 28, 2024}}</ref>
|-
| 2015 || Late year || Musk's $1 billion proposal || Funding || In their foundational phase, OpenAI co-founders Greg Brockman and Sam Altman initially aim to raise $100 million to launch the organization's initiatives focused on developing artificial general intelligence (AGI). Recognizing the ambitious scope of the project, Elon Musk suggests a significantly larger funding goal of $1 billion to ensure the project's viability. He expresses willingness to cover any funding shortfall.<ref name="index">{{cite web |title=OpenAI and Elon Musk |url=https://openai.com/index/openai-elon-musk/ |website=OpenAI |accessdate=29 September 2024}}</ref>
|-
| 2015 || {{dts|December 11}} || OpenAI's mission statement || OpenAI launch || {{w|OpenAI}} is introduced as a non-profit artificial intelligence research organization dedicated to advancing digital intelligence for the benefit of humanity, without the constraints of financial returns. OpenAI expresses aim to ensure that AI acts as an extension of individual human will and is broadly accessible. The organization recognizes the potential risks and benefits of achieving human-level AI.<ref>{{cite web |title=Introducing OpenAI |url=https://openai.com/index/introducing-openai/ |website=OpenAI |accessdate=29 September 2024}}</ref> 
|-
| 2015 || {{dts|December}} || {{w|Wikipedia}} || Coverage || The article "{{w|OpenAI}}" is created on {{w|Wikipedia}}.<ref>{{cite web |title=OpenAI: Revision history |url=https://en.wikipedia.org/w/index.php?title=OpenAI&dir=prev&action=history |website=wikipedia.org |accessdate=6 April 2020}}</ref>
|-
| 2015 || {{dts|December}} || || Team || OpenAI announces {{w|Y Combinator}} founding partner {{w|Jessica Livingston}} as one of its financial backers.<ref>{{cite web |url=https://www.forbes.com/sites/theopriestley/2015/12/11/elon-musk-and-peter-thiel-launch-OpenAI-a-non-profit-artificial-intelligence-research-company/ |title=Elon Musk And Peter Thiel Launch OpenAI, A Non-Profit Artificial Intelligence Research Company |first1=Theo |last1=Priestly |date=December 11, 2015 |publisher=''{{w|Forbes}}'' |access-date=8 July 2019 }}</ref>
|-
| 2016 || {{dts|January}} || {{W|Ilya Sutskever}} || Team || {{W|Ilya Sutskever}} joins OpenAI as Research Director.<ref>{{cite web |url=https://aiwatch.issarice.com/?person=Ilya+Sutskever |date=April 8, 2018 |title=Ilya Sutskever |publisher=AI Watch |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com">{{cite web |title=Information for OpenAI |url=https://orgwatch.issarice.com/?organization=OpenAI |website=orgwatch.issarice.com |accessdate=5 May 2020}}</ref>
|-
| 2016 || {{dts|January 9}} || AMA || Education || The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.<ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/404r9m/ama_the_OpenAI_research_team/ |publisher=reddit |title=AMA: the OpenAI Research Team • r/MachineLearning |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|February 25}} || {{w|Deep learning}}, {{w|neural networks}} || Research || OpenAI introduces weight normalization as a technique that improves the training of deep {{w|neural network}}s by decoupling the length and direction of weight vectors. It enhances optimization and speeds up convergence without introducing dependencies between examples in a minibatch. This method is effective for recurrent models and noise-sensitive applications, providing a significant speed-up similar to batch normalization but with lower computational overhead. Applications in {{w|image recognition}}, {{w|generative model}}ing, and deep {{w|reinforcement learning}} demonstrate the effectiveness of weight normalization.<ref>{{cite web |last1=Salimans |first1=Tim |last2=Kingma |first2=Diederik P. |title=Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks |url=https://arxiv.org/abs/1602.07868 |website=arxiv.org |accessdate=27 March 2020}}</ref>
|-
| 2016 || {{dts|March 31}} || {{W|Ian Goodfellow}} || Team || A blog post from this day announces that {{W|Ian Goodfellow}} has joined OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/team-plus-plus/ |publisher=OpenAI Blog |title=Team++ |date=March 22, 2017 |first=Greg |last=Brockman |accessdate=May 6, 2018}}</ref> Previously, Goodfellow worked as Senior Research Scientist at {{w|Google}}.<ref>{{cite web |title=Ian Goodfellow |url=https://www.linkedin.com/in/ian-goodfellow-b7187213/ |website=linkedin.com |accessdate=24 April 2020}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{Dts|April 26}} || {{w|Robotics}} || Team || {{w|Pieter Abbeel}} joins OpenAI.<ref>{{cite web |url=https://blog.OpenAI.com/welcome-pieter-and-shivon/ |publisher=OpenAI Blog |title=Welcome, Pieter and Shivon! |date=March 20, 2017 |first=Ilya |last=Sutskever |accessdate=May 6, 2018}}</ref><ref name="orgwatch.issarice.com"/> His work focuses on robot learning, reinforcement learning, and unsupervised learning. A cutting-edge researcher, Abbeel would have robots learn various tasks, including locomotion and vision-based robotic manipulation.<ref>{{cite web |title=Pieter Abbeel AI Speaker |url=https://www.aurumbureau.com/speaker/pieter-abbeel/ |website=Aurum Speakers Bureau |access-date=13 June 2023}}</ref>
|-
| 2016 || {{dts|April 27}} || {{w|Reinforcement learning}} || Product release || OpenAI releases OpenAI Gym, a toolkit for reinforcement learning (RL) algorithms. It offers various environments for developing and comparing RL algorithms, with compatibility across different [[w:Software framework|frameworks]]. RL enables agents to learn {{w|decision-making}} and {{w|motor control}} in complex environments. OpenAI Gym addresses the need for diverse {{w|benchmark}}s and standardized environments in RL research. The toolkit encourages feedback and collaboration to enhance its capabilities.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-gym-beta/ |publisher=OpenAI Blog |title=OpenAI Gym Beta |date=March 20, 2017 |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/04/OpenAI-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ |title=Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free |date=April 27, 2016 |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning"}}</ref><ref>{{cite web |url=http://www.businessinsider.com/OpenAI-has-launched-a-gym-where-developers-can-train-their-computers-2016-4?op=1 |first=Sam |last=Shead |date=April 28, 2016 |title=Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2016 || {{dts|May 25}} || {{w|Natural language processing}} || Research || OpenAI-affiliated researchers publish a paper introducing an extension of adversarial training and virtual adversarial training for text classification tasks. [[w:Adversarial machine learning|Adversarial training]] is a regularization technique for {{w|supervised learning}}, while virtual adversarial training extends it to [[w:Weak supervision|semi-supervised learning]]. However, these methods require perturbing multiple entries of the input vector, which is not suitable for sparse high-dimensional inputs like one-hot word representations in text. In this paper, the authors propose applying perturbations to word embeddings in a {{w|recurrent neural network}} (RNN) instead of the original input. This text-specific approach achieves [[w:State of the art|state-of-the-art]] results on multiple benchmark tasks for both [[w:Weak supervision|semi-supervised]] and purely {{w|supervised learning}}. The authors provide visualizations and analysis demonstrating the improved quality of the learned word embeddings and the reduced overfitting during training.<ref>{{cite journal |last1=Miyato |first1=Takeru |last2=Dai |first2=Andrew M. |last3=Goodfellow |first3=Ian |title=Adversarial Training Methods for Semi-Supervised Text Classification |date=2016 |doi=10.48550/arXiv.1605.07725}}</ref>
|-
| 2016 || June 16 || {{w|Generative model}}s || Research || OpenAI publishes a post introducing the concept of generative models, which are a type of {{w|unsupervised learning}} technique in {{w|machine learning}}. The post emphasizes the importance and potential of generative models in understanding and replicating complex {{w|data set}}s, and it showcases recent advancements in this field. Generative models aim to understand and replicate the patterns and features present in a given dataset. The post discusses the use of generative models in generating images, particularly with the example of the DCGAN network. It explains the training process of generative models, including the use of {{w|Generative Adversarial Network}}s (GANs) and other approaches. The post highlights three popular types of generative models: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and {{w|autoregressive model}}s. Each of these approaches has its own strengths and limitations. The post also mentions recent advancements in generative models, including improvements to GANs, VAEs, and the introduction of InfoGAN. The last part briefly mentions two projects related to generative models in the context of reinforcement learning. One project focuses on curiosity-driven exploration using [[w:Bayesian inference|Bayesian]] {{w|neural network}}s. The other project explores generative models in reinforcement learning for training agents.<ref>{{cite web |title=Generative models |url=https://openai.com/research/generative-models |website=openai.com |access-date=7 June 2023}}</ref>
|-
| 2016 || {{dts|June 21}} || {{w|AI safety}} || Research || OpenAI-affiliated researchers publish a paper addressing the potential impact of accidents in {{w|machine learning}} systems. They outline five practical research problems related to accident risk, categorized based on the origin of the problem. These categories include having the wrong objective function, an objective function that is too expensive to evaluate frequently, and undesirable behavior during the learning process. The authors review existing work in these areas and propose research directions relevant to cutting-edge AI systems. They also discuss how to approach the safety of future AI applications effectively.<ref>{{cite web |url=https://arxiv.org/abs/1606.06565 |title=[1606.06565] Concrete Problems in AI Safety |date=June 21, 2016 |accessdate=July 25, 2017}}</ref><ref>{{cite web|url = https://www.openphilanthropy.org/blog/concrete-problems-ai-safety|title = Concrete Problems in AI Safety|last = Karnofsky|first = Holden|date = June 23, 2016|accessdate = April 18, 2020}}</ref>
|-
| 2016 || {{Dts|July}} || Dario Amodei joins OpenAI || Team || Dario Amodei joins OpenAI,<ref>{{cite web |url=https://www.crunchbase.com/person/dario-amodei |title=Dario Amodei - Research Scientist @ OpenAI |publisher=Crunchbase |accessdate=May 6, 2018}}</ref> working as Team Lead for AI Safety.<ref name="Dario Amodeiy"/><ref name="orgwatch.issarice.com"/>
|-
| 2016 || {{dts|July 28}} || Security and adversarial AI, {{w|automated programming}}, {{w|cybersecurity}}, {{w|multi-agent system}}s, {{w|simulation}} || Recruitment || OpenAI publishes a post calling for applicants to work on significant problems in AI that have a meaningful impact. They list several problem areas that they believe are crucial for advancing AI and its implications for society. The first problem area mentioned is detecting covert breakthrough AI systems being used by organizations for potentially malicious purposes. OpenAI emphasizes the need to develop methods to identify such undisclosed AI breakthroughs, which could be achieved through various means like monitoring news, {{w|financial market}}s, and {{w|online game}}s. Another area of interest is building an agent capable of winning online [[w:Competitive programming|programming competitions]]. OpenAI recognizes the power of a program that can generate other programs, and they see the development of such an agent as highly valuable. Additionally, OpenAI highlights the significance of cyber-security defense. They stress the need for AI techniques to defend against sophisticated hackers who may exploit AI methods to break into computer systems. Lastly, OpenAI expresses interest in constructing a complex simulation with multiple long-lived agents. Their aim is to create a large-scale simulation where agents can interact, learn over an extended period, develop language, and achieve diverse goals.<ref>{{cite web |title=Special Projects |url=https://openai.com/blog/special-projects/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2016 || {{dts|August 15}} || AI Research || Donation || American multinational technology company {{W|Nvidia}} announces that it has donated the first {{W|Nvidia DGX-1}} (a supercomputer) to OpenAI, which plans to use the supercomputer to train its AI on a corpus of conversations from {{W|Reddit}}. The DGX-1 supercomputer is expected to enable OpenAI to explore new problems and achieve higher levels of performance in AI research.<ref>{{cite web |url=https://blogs.nvidia.com/blog/2016/08/15/first-ai-supercomputer-OpenAI-elon-musk-deep-learning/ |title=NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI |publisher=The Official NVIDIA Blog |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=http://fortune.com/2016/08/15/elon-musk-artificial-intelligence-OpenAI-nvidia-supercomputer/ |title=Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group |publisher=Fortune |first=Jonathan |last=Vanian |date=August 15, 2016 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://futurism.com/elon-musks-OpenAI-is-using-reddit-to-teach-an-artificial-intelligence-how-to-speak/ |date=August 17, 2016 |title=Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak |first=Cecille |last=De Jesus |publisher=Futurism |accessdate=May 5, 2018}}</ref>
|-
| 2016 || {{dts|August 29}} || Infrastructure || Research || OpenAI publishes an article discussing the infrastructure necessary for {{w|deep learning}}. The research process starts with small {{w|ad-hoc}} experiments that need to be quickly conducted, so deep learning infrastructure must be flexible and allow users to analyze the models effectively. Then, the model is scaled up, and experiment management becomes critical. The article describes an example of improving {{w|Generative Adversarial Network}} training, from a prototype on [[w:MNIST database|MNIST]] and {{w|CIFAR-10}} datasets to a larger model on the ImageNet dataset. The article also discusses the software and hardware infrastructure necessary for deep learning, such as [[w:Python (programming language)|Python]], {{w|TensorFlow}}, and high-end GPUs. Finally, the article emphasizes the importance of simple and usable infrastructure management tools.<ref>{{cite web |title=Infrastructure for Deep Learning |url=https://openai.com/blog/infrastructure-for-deep-learning/ |website=openai.com |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October 11}} || {{w|Robotics}} || Research || OpenAI-affiliated researchers publish a paper addressing the challenge of transferring control policies from {{w|simulation}} to the real world. The authors propose a method that leverages the similarity in state sequences between simulation and reality. Instead of directly executing simulation-based controls on a robot, they predict the expected next states using a deep inverse dynamics model and determine suitable real-world actions. They also introduce a data collection approach to improve the model's performance. Experimental results demonstrate the effectiveness of their approach compared to existing methods for addressing simulation-to-real-world discrepancies.<ref>{{cite web |last1=Christiano |first1=Paul |last2=Shah |first2=Zain |last3=Mordatch |first3=Igor |last4=Schneider |first4=Jonas |last5=Blackwell |first5=Trevor |last6=Tobin |first6=Joshua |last7=Abbeel |first7=Pieter |last8=Zaremba |first8=Wojciech |title=Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model |url=https://arxiv.org/abs/1610.03518 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|October 18}} || Safety || Research || OpenAI-affiliated researchers publish a paper presenting a method called Private Aggregation of Teacher Ensembles (PATE) to address the privacy concerns associated with sensitive training data in machine learning applications. The approach involves training multiple models using [[w:Disjoint-set data structure|disjoint datasets]], which contain sensitive information. These models, referred to as "teachers," are not directly published but used to train a "student" model. The student model learns to predict outputs through noisy voting among the teachers and does not have access to individual teachers or their data. The student's privacy is ensured using {{w|differential privacy}}, even when the adversary can inspect its internal workings. The method is applicable to any model, including non-convex models like [[w:Deep learning|Deep Neural Networks]] (DNNs), and achieves state-of-the-art privacy/utility trade-offs on MNIST and Street View House Numbers (SVHN) datasets. The approach combines an improved privacy analysis with semi-supervised learning.<ref>{{cite web |last1=Papernot |first1=Nicolas |last2=Abadi |first2=Martín |last3=Erlingsson |first3=Úlfar |last4=Goodfellow |first4=Ian |last5=Talwar |first5=Kunal |title=Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data |url=https://arxiv.org/abs/1610.05755 |website=arxiv.org |accessdate=28 March 2020}}</ref> 
|-
| 2016 || {{dts|November 14}} || Generative models || Research || OpenAI-affiliated researchers publish a paper discussing the challenges in quantitatively analyzing decoder-based {{w|generative model}}s, which have shown remarkable progress in generating realistic samples of various modalities, including images. These models rely on a decoder network, which is a deep neural network that defines a generative distribution. However, evaluating the performance of these models and estimating their log-likelihoods can be challenging due to the intractability of the task. The authors propose using Annealed Importance Sampling as a method for evaluating log-likelihoods and validate its accuracy using bidirectional Monte Carlo. They provide the evaluation code for this technique. Through their analysis, they examine the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the issue of overfitting, and the models' ability to capture important modes of the data distribution.<ref>{{cite web |last1=Wu |first1=Yuhuai |last2=Burda |first2=Yuri |last3=Salakhutdinov |first3=Ruslan |last4=Grosse |first4=Roger |title=On the Quantitative Analysis of Decoder-Based Generative Models |url=https://arxiv.org/abs/1611.04273 |website=arxiv.org |accessdate=28 March 2020}}</ref>
|-
| 2016 || {{dts|November 15}} || {{w|Cloud computing}} || Partnership || Microsoft's artificial intelligence research division partners with OpenAI. Through this collaboration, OpenAI is granted access to Microsoft's virtual machine technology for AI training and simulation, while Microsoft would benefit from cutting-edge research conducted on its [[w:Microsoft Azure|Azure]] cloud platform. Microsoft sees this partnership as an opportunity to advance machine intelligence research on Azure and attract other AI companies to the platform. The collaboration aligns with Microsoft's goal to compete with Google and Facebook in the AI space and strengthen its position as a central player in the industry.<ref>{{cite web |url=https://www.theverge.com/2016/11/15/13639904/microsoft-OpenAI-ai-partnership-elon-musk-sam-altman |date=November 15, 2016 |publisher=The Verge |first=Nick |last=Statt |title=Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/11/next-battles-clouds-ai-chips/ |title=The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win |publisher=[[wikipedia:WIRED|WIRED]] |first=Cade |last=Metz |accessdate=March 2, 2018 |quote=According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware.}}</ref>
|-
| 2016 || {{dts|December 5}} || Reinforcement learning || Product release || OpenAI releases Universe, a tool that aims to train and measure AI frameworks using video games, applications, and websites. The goal is to accelerate the development of generalized intelligence that can excel at multiple tasks. Universe provides a wide range of environments, including {{w|Atari 2600}} games, flash games, web browsers, and [[w:Computer-aided design|CAD software]], for AI systems to learn and improve their capabilities. By applying reinforcement learning techniques, which leverage rewards to optimize problem-solving, Universe enables AI models to perform tasks such as playing games and browsing the web. The tool's versatility and real-world applicability make it valuable for benchmarking AI performance and potentially advancing AI capabilities beyond current platforms like Siri or Google Assistant.<ref>{{cite web |url=https://github.com/OpenAI/universe |accessdate=March 1, 2018 |publisher=GitHub |title=universe}}</ref><ref>{{cite web |url=https://techcrunch.com/2016/12/05/OpenAIs-universe-is-the-fun-parent-every-artificial-intelligence-deserves/ |date=December 5, 2016 |publisher=TechCrunch |title=OpenAI's Universe is the fun parent every artificial intelligence deserves |author=John Mannes |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/2016/12/OpenAIs-universe-computers-learn-use-apps-like-humans/ |title=Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=13103742 |title=OpenAI Universe |website=Hacker News |accessdate=May 5, 2018}}</ref>
|-
| 2017 || Early year || Recognition of resource needs || Funding || OpenAI realizes that building {{w|artificial general intelligence}} would require significantly more resources than previously anticipated. The organization begins evaluating the vast computational power needed for AGI development, acknowledging that billions of dollars in annual funding would be necessary.<ref name="index"/>
|-
| 2017 || {{dts|January}} || Paul Christiano joins OpenAI || Team || Paul Christiano joins OpenAI to work on AI alignment.<ref>{{cite web |url=https://paulfchristiano.com/ai/ |title=AI Alignment |date=May 13, 2017 |publisher=Paul Christiano |accessdate=May 6, 2018}}</ref> He was previously an intern at OpenAI in 2016.<ref>{{cite web |url=https://blog.openai.com/team-update/ |publisher=OpenAI Blog |title=Team Update |date=March 22, 2017 |accessdate=May 6, 2018}}</ref>
|-
| 2017 || {{dts|March}} || AI governance, philanthropy || Donation || The Open Philanthropy Project awards a grant of $30 million to {{w|OpenAI}} for general support.<ref name="donations-portal-open-phil-ai-risk">{{cite web |url=https://donations.vipulnaik.com/donor.php?donor=Open+Philanthropy+Project&cause_area_filter=AI+safety |title=Open Philanthropy Project donations made (filtered to cause areas matching AI safety) |accessdate=July 27, 2017}}</ref> The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which {{W|Holden Karnofsky}} (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.<ref>{{cite web |url=https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/OpenAI-general-support |publisher=Open Philanthropy Project |title=OpenAI — General Support |date=December 15, 2017 |accessdate=May 6, 2018}}</ref> The grant is criticized by {{W|Maciej Cegłowski}}<ref>{{cite web |url=https://twitter.com/Pinboard/status/848009582492360704 |title=Pinboard on Twitter |publisher=Twitter |accessdate=May 8, 2018 |quote=What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law.  
Trumpy!}}</ref> and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")<ref>{{cite web |url=http://benjaminrosshoffman.com/OpenAI-makes-humanity-less-safe/ |title=OpenAI makes humanity less safe |date=April 13, 2017 |publisher=Compass Rose |accessdate=May 6, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/Nqn2tkAHbejXTDKuW/OpenAI-makes-humanity-less-safe |title=OpenAI makes humanity less safe |accessdate=May 6, 2018 |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://donations.vipulnaik.com/donee.php?donee=OpenAI |title=OpenAI donations received |accessdate=May 6, 2018}}</ref> among others.<ref>{{cite web |url=https://www.facebook.com/vipulnaik.r/posts/10211478311489366 |title=I'm having a hard time understanding the rationale... |accessdate=May 8, 2018 |first=Vipul |last=Naik}}</ref>
|-
| 2017 || {{dts|March 24}} || {{w|Reinforcement learning}} || Research || OpenAI publishes a document presenting evolution strategies (ES) as a viable alternative to {{w|reinforcement learning}} techniques. They highlight that ES, a well-known optimization technique, performs on par with RL on modern RL benchmarks, such as {{w|Atari}} and MuJoCo, while addressing some of RL's challenges. ES is simpler to implement, does not require {{w|backpropagation}}, scales well in a distributed setting, handles sparse rewards effectively, and has fewer hyperparameters. The authors compare this discovery to previous instances where old ideas achieved significant results, such as the success of {{w|convolutional neural network}}s (CNNs) in image recognition and the combination of Q-Learning with CNNs in solving Atari games. The implementation of ES is demonstrated to be efficient, with the ability to train a 3D MuJoCo humanoid walker in just 10 minutes using a [[w:Computer cluster|computing cluster]]. The document provides a brief overview of conventional RL, compares it to ES, discusses the tradeoffs between the two approaches, and presents experimental results supporting the effectiveness of ES.<ref>{{cite web |title=Evolution Strategies as a Scalable Alternative to Reinforcement Learning |url=https://openai.com/blog/evolution-strategies/ |website=openai.com |accessdate=5 April 2020}}</ref><ref>{{cite web |last1=Juliani |first1=Arthur |title=Reinforcement Learning or Evolutionary Strategies? Nature has a solution: Both. |url=https://medium.com/beyond-intelligence/reinforcement-learning-or-evolutionary-strategies-nature-has-a-solution-both-8bc80db539b3 |website=Beyond Intelligence |access-date=25 June 2023 |language=en |date=29 May 2017}}</ref>
|-
| 2017 || {{dts|March}} || {{w|Artificial general intelligence}} || Reorganization || Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to {{w|artificial general intelligence}}. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.<ref name="technologyreview.comñ">{{cite web |title=The messy, secretive reality behind OpenAI’s bid to save the world |url=https://www.technologyreview.com/s/615181/ai-OpenAI-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ |website=technologyreview.com |accessdate=28 February 2020}}</ref>
|-
| 2017 || {{dts|April}} || OpenAI history || Coverage || An article by Brent Simoneaux and Casey Stegman is published, providing insights into the early days of OpenAI and the individuals involved in shaping the organization. The article begins by debunking the notion that OpenAI's office is filled with futuristic gadgets and experiments. Instead, it describes a typical tech startup environment with desks, laptops, and bean bag chairs, albeit with a small robot tucked away in a side room. OpenAI founders Greg Brockman and Ilya Sutskever were inspired to establish the organization after a 2015 dinner conversation with tech entrepreneur Sam Altman and Elon Musk. They discussed the idea of building safe and beneficial AI and decided to create a nonprofit organization. Overall, the article provides a glimpse into the early days of OpenAI and the visionary individuals behind the organization's mission to advance AI for the benefit of humanity.<ref>{{cite web |url=https://www.redhat.com/en/open-source-stories/ai-revolutionaries/people-behind-OpenAI |title=Open Source Stories: The People Behind OpenAI |accessdate=May 5, 2018 |first1=Brent |last1=Simoneaux |first2=Casey |last2=Stegman}} In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.</ref><ref>{{cite web |url=https://www.reddit.com/r/OpenAI/comments/63xr4p/profile_of_the_people_behind_OpenAI/ |publisher=reddit |title=Profile of the people behind OpenAI • r/OpenAI |date=April 7, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=14832524 |title=The People Behind OpenAI |website=Hacker News |accessdate=May 5, 2018 |date=July 23, 2017}}</ref>
|-
| 2017 || {{dts|April 6}} || {{w|Sentiment analysis}} || Product release || OpenAI unveils an unsupervised system able to perform excellent {{w|sentiment analysis}}, despite being trained only to predict the next character in the text of Amazon reviews.<ref>{{cite web |title=Unsupervised Sentiment Neuron |url=https://openai.com/blog/unsupervised-sentiment-neuron/ |website=openai.com |accessdate=5 April 2020}}</ref><ref>{{cite web |url=https://techcrunch.com/2017/04/07/OpenAI-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|April 6}} || {{w|Generative model}}s || Research || OpenAI-affiliated researchers publish a paper exploring the capabilities of byte-level recurrent language models. Through extensive training and computational resources, these models acquire disentangled features that represent high-level concepts. Remarkably, the researchers discover a single unit within the model that effectively performs {{w|sentiment analysis}}. The learned representations, achieved through unsupervised learning, outperform existing methods on the binary subset of the Stanford Sentiment Treebank dataset. Moreover, the models trained using this approach are highly efficient in terms of data requirements. Even with a small number of labeled examples, their performance matches that of strong baselines trained on larger datasets. Additionally, the researchers demonstrate that manipulating the sentiment unit in the model influences the generative process, allowing them to produce samples with positive or negative sentiment simply by setting the unit's value accordingly.<ref>{{cite journal |last1=Radford |first1=Alec |last2=Jozefowicz |first2=Rafal |last3=Sutskever |first3=Ilya |title=Learning to Generate Reviews and Discovering Sentiment |date=2017 |doi=10.48550/arXiv.1704.01444}}</ref><ref>{{cite web |url=https://techcrunch.com/2017/04/07/openai-sets-benchmark-for-sentiment-analysis-using-an-efficient-mlstm/ |date=April 7, 2017 |publisher=TechCrunch |title=OpenAI sets benchmark for sentiment analysis using an efficient mLSTM |author=John Mannes |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|April 6}} || OpenAI's Evolution Strategies || Research || OpenAI unveils its revival of an old field called “{{w|neuroevolution}}”, and of a subset of algorithms from it called “evolution strategies,” which are aimed at solving optimization problems. In one hour of training on an Atari challenge, an algorithm reaches a level of mastery that took a reinforcement-learning system published by DeepMind in 2016 a whole day to learn. On the walking problem the system takes 10 minutes, compared to 10 hours for DeepMind's approach.<ref>{{cite web |title=OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s |url=https://singularityhub.com/2017/04/06/OpenAI-just-beat-the-hell-out-of-deepmind-with-an-algorithm-from-the-80s/ |website=singularityhub.com |accessdate=29 June 2019}}</ref>
|-
| 2017 || {{dts|May 15}} || OpenAI releases Roboschool || Product release || OpenAI releases Roboschool as {{w|open-source software}}, integrated with OpenAI Gym, that facilitates [[w:Robotics simulator|robot simulation]]. It provides a range of [[w:Operating environment|environments]] for controlling robots in simulation, including both modified versions of existing MuJoCo environments and new challenging tasks. Roboschool utilizes the Bullet Physics Engine and offers free alternatives to MuJoCo, removing the constraint of a paid license. The software supports training multiple agents together in the same environment, allowing for multiplayer interactions and learning. It also introduces interactive control environments that require the robots to navigate towards a moving flag, adding complexity to locomotion problems. Trained policies are provided for these environments, showcasing the capability of the software. Overall, Roboschool offers a platform for robotics research, {{w|simulation}}, and control policy development within the OpenAI Gym framework.<ref>{{cite web |title=Roboschool |url=https://openai.com/blog/roboschool/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|May 24}} || OpenAI releases Baselines || Product release || OpenAI releases Baselines, a collection of {{w|reinforcement learning}} {{w|algorithm}}s that provide high-quality implementations. These implementations serve as reliable benchmarks for researchers to replicate, improve, and explore new ideas in the field of reinforcement learning. The DQN implementation and its variations in OpenAI Baselines achieve performance levels comparable to those reported in published papers. They are intended to serve as a foundation for incorporating novel approaches and as a means of comparing new methods against established ones. By offering these baselines, OpenAI aims to facilitate research advancements in the field of reinforcement learning.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-baselines-dqn/ |publisher=OpenAI Blog |title=OpenAI Baselines: DQN |date=November 28, 2017 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://github.com/OpenAI/baselines |publisher=GitHub |title=OpenAI/baselines |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|June 12}} || Deep RL from human preferences || Research || OpenAI-affiliated researchers present a study on deep {{w|reinforcement learning}} (RL) systems. They propose a method to effectively communicate complex goals to RL systems by utilizing human preferences between pairs of trajectory segments. Their approach demonstrates successful solving of complex RL tasks, such as [[w:Atari, Inc. (1993–present)|Atari]] games and simulated robot locomotion, without relying on a reward function. The authors achieve this by providing feedback on less than one percent of the agent's interactions with the environment, significantly reducing the need for human oversight. Additionally, they showcase the flexibility of their approach by training complex novel behaviors in just about an hour of human time. This work surpasses previous achievements in learning from human feedback, as it tackles more intricate behaviors and environments.<ref>{{cite web |url=https://arxiv.org/abs/1706.03741 |title=[1706.03741] Deep reinforcement learning from human preferences |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/06 |author=gwern |date=June 3, 2017 |title=June 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.wired.com/story/two-giants-of-ai-team-up-to-head-off-the-robot-apocalypse/ |title=Two Giants of AI Team Up to Head Off the Robot Apocalypse |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018 |quote=A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises.}}</ref>
|-
| 2017 || {{dts|June 28}} || OpenAI releases mujoco-py  || Open sourcing || OpenAI open-sources mujoco-py, a Python library for robotic simulation based on the MuJoCo engine. It offers parallel simulations, GPU-accelerated rendering, texture randomization, and VR interaction. The new version provides significant performance improvements, allowing for faster trajectory optimization and {{w|reinforcement learning}}. Beginners can use the MjSim class, while advanced users have access to lower-level interfaces.<ref>{{cite web |title=Faster Physics in Python |url=https://openai.com/blog/faster-robot-simulation-in-python/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|June}} || OpenAI-DeepMind safety partnership || Partnership || OpenAI partners with {{w|DeepMind}}’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The learning algorithm uses small amounts of human feedback to solve modern {{w|reinforcement learning}} environments.<ref>{{cite web |title=Learning from Human Preferences |url=https://OpenAI.com/blog/deep-reinforcement-learning-from-human-preferences/ |website=OpenAI.com |accessdate=29 June 2019}}</ref>
|-
| 2017 || {{dts|July 27}} || OpenAI introduces parameter noise || Research || OpenAI publishes a blog post discussing the benefits of adding adaptive noise to the parameters of reinforcement learning algorithms, specifically in the context of exploration. The technique, called parameter noise, enhances the efficiency of exploration by injecting randomness directly into the parameters of the agent's neural network policy. Unlike traditional action space noise, parameter noise ensures that the agent's exploration is consistent across different time steps. The authors demonstrate that parameter noise can significantly improve the performance of reinforcement learning algorithms, leading to higher scores and more effective exploration. They address challenges related to the sensitivity of network layers, changes in sensitivity over time, and determining the appropriate noise scale. The article also provides baseline code and benchmarks for various algorithms, showcasing the benefits of parameter noise in different tasks.<ref>{{cite web |title=Better Exploration with Parameter Noise |url=https://openai.com/blog/better-exploration-with-parameter-noise/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{dts|August 12}} || {{w|Reinforcement learning}} || Achievement || OpenAI's Dota 2 bot, trained through self-play, emerges victorious against top professional players at The International, a major eSports event. The bot, developed by OpenAI, remains undefeated against the world's best Dota 2 players. While the 1v1 battles are less complex than professional matches, OpenAI reportedly works on a bot capable of playing in larger 5v5 battles. Elon Musk, who watches the event, would express concerns about unregulated AI, emphasizing its potential dangers.<ref>{{cite web |url=https://techcrunch.com/2017/08/12/OpenAI-bot-remains-undefeated-against-worlds-greatest-dota-2-players/ |date=August 12, 2017 |publisher=TechCrunch |title=OpenAI bot remains undefeated against world's greatest Dota 2 players |author=Jordan Crook |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.theverge.com/2017/8/14/16143392/dota-ai-OpenAI-bot-win-elon-musk |date=August 14, 2017 |publisher=The Verge |title=Did Elon Musk's AI champ destroy humans at video games? It's complicated |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/the-international-dota-2-OpenAI-bot-beats-dendi-2017-8 |date=August 11, 2017 |title=Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer |publisher=Business Insider |accessdate=March 3, 2018}}</ref><ref>{{cite web |title=Dota 2 |url=https://openai.com/research/dota-2 |website=openai.com |access-date=15 March 2023}}</ref>
|-
| 2017 || {{dts|August 13}} || NYT highlights OpenAI's AI safety work || Coverage || ''{{W|The New York Times}}'' publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.<ref>{{cite web |url=https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html |date=August 13, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=Teaching A.I. Systems to Behave Themselves |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|August 18}} || OpenAI releases ACKTR and A2C || Product release || OpenAI releases two new Baselines implementations: ACKTR and A2C. A2C is a deterministic variant of Asynchronous Advantage Actor Critic (A3C), providing equal performance. ACKTR is a more sample-efficient reinforcement learning algorithm than TRPO and A2C, requiring slightly more computation than A2C per update. ACKTR excels in sample complexity by using the natural gradient direction and is only 10-25% more computationally expensive than standard gradient updates. OpenAI has also published benchmarks comparing ACKTR with A2C, PPO, and ACER on various tasks, demonstrating ACKTR's competitive performance. A2C, a synchronous alternative to A3C, is included in this release and is efficient for single-GPU and CPU-based implementations.<ref>{{cite web |title=OpenAI Baselines: ACKTR & A2C |url=https://openai.com/blog/baselines-acktr-a2c/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2017 || {{Dts|September 13}} || OpenAI introduces LOLA for multi-agent RL || Research || OpenAI publishes a paper introducing a new method for training agents in multi-agent settings called "Learning with Opponent-Learning Awareness" (LOLA). The method takes into account how an agent's policy affects the learning of the other agents in the environment. The paper shows that LOLA leads to the emergence of cooperation in the iterated prisoners' dilemma and outperforms naive learning in this domain. The LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making it suitable for model-free RL. The method is applied to a grid world task with an embedded social dilemma using recurrent policies and opponent modeling.<ref>{{cite web |url=https://arxiv.org/abs/1709.04326 |title=[1709.04326] Learning with Opponent-Learning Awareness |accessdate=March 2, 2018}}</ref><ref>{{cite web |url=https://www.gwern.net/newsletter/2017/09 |author=gwern |date=August 16, 2017 |title=September 2017 news - Gwern.net |accessdate=March 2, 2018}}</ref>
|-
| 2017 || {{dts|October 11}} || OpenAI unveils RoboSumo || Product release || OpenAI announces development of a simple {{w|sumo}}-wrestling videogame called RoboSumo to advance the intelligence of artificial intelligence (AI) software. In this game, robots controlled by machine-learning algorithms compete against each other. Through {{w|trial and error}}, the AI agents learn to play the game and develop strategies to outmaneuver their opponents. OpenAI's project aims to push the boundaries of machine learning beyond the limitations of heavily-used techniques that rely on labeled example data. Instead, they focus on reinforcement learning, where software learns through trial and error to achieve specific goals. OpenAI believes that competition among AI agents can foster more complex problem-solving and enable faster progress. The researchers also test their approach in other games and scenarios, such as spider-like robots and soccer penalty shootouts.<ref>{{cite web |url=https://www.wired.com/story/ai-sumo-wrestlers-could-make-future-robots-more-nimble/ |title=AI Sumo Wrestlers Could Make Future Robots More Nimble |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 3, 2018}}</ref><ref>{{cite web |url=http://www.businessinsider.com/elon-musk-OpenAI-virtual-robots-learn-sumo-wrestle-soccer-sports-ai-tech-science-2017-10 |first1=Alexandra |last1=Appolonia |first2=Justin |last2=Gmoser |date=October 20, 2017 |title=Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer |publisher=Business Insider |accessdate=March 3, 2018}}</ref>
|-
| 2017 || {{Dts|November 6}} || OpenAI researchers form Embodied Intelligence || Team || ''{{W|The New York Times}}'' reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.<ref>{{cite web |url=https://www.nytimes.com/2017/11/06/technology/artificial-intelligence-start-up.html |date=November 6, 2017 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up |author=Cade Metz |accessdate=May 5, 2018}}</ref>
|-
| 2017 || {{dts|December 6}} || {{w|Neural network}}s || Product release || OpenAI releases highly-optimized GPU kernels for neural network architectures with block-sparse weights. These kernels can run significantly faster than cuBLAS or cuSPARSE, depending on the chosen sparsity. They enable the training of neural networks with a large number of hidden units and offer computational efficiency proportional to the number of non-zero weights. The release includes benchmarks that show performance improvements over other algorithms like A2C, PPO, and ACER in various tasks. This development opens up opportunities for training large, efficient, and high-performing neural networks, with potential applications in fields like text analysis and image generation.<ref>{{cite web |title=Block-Sparse GPU Kernels |url=https://openai.com/blog/block-sparse-gpu-kernels/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2017 || Late year || Transition to for-profit structure || || Discussions among OpenAI’s leadership and Elon Musk lead to the decision to establish a for-profit entity to attract necessary funding. Musk expresses a desire for majority equity and control over the organization, emphasizing the urgency of building a competitor to major players like {{w|Google}} and {{w|DeepMind}}.<ref name="index"/>
 +
|-
 +
| 2018 || Early February || Proposal to merge with Tesla || Notable comment || Elon Musk suggests in an email that OpenAI should merge with [[w:Tesla, Inc.|Tesla]], referring to it as a "cash cow" that could provide financial support for AGI development. He believes Tesla can serve as a viable competitor to Google, although he acknowledges the challenges of this strategy.<ref name="index"/>
 +
|-
 +
| 2018 || {{dts|February 20}} || AI ethics, security || Publication || OpenAI co-authors a paper forecasting the potential misuse of AI technology by malicious actors and ways to prevent and mitigate these threats. The report makes high-level recommendations for companies, research organizations, individual practitioners, and governments to ensure a safer world, including acknowledging AI's dual-use nature, learning from {{w|cybersecurity}} practices, and involving a broader cross-section of society in discussions. The paper highlights concrete scenarios where AI can be maliciously used, such as {{w|cybercriminal}}s using {{w|neural network}}s to create {{w|computer virus}}es with automatic exploit generation capabilities and {{w|rogue state}}s using AI-augmented surveillance systems to pre-emptively arrest people who fit a predictive risk profile.<ref>{{cite web |url=https://arxiv.org/abs/1802.07228 |title=[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/preparing-for-malicious-uses-of-ai/ |publisher=OpenAI Blog |title=Preparing for Malicious Uses of AI |date=February 21, 2018 |accessdate=February 24, 2018}}</ref><ref>{{cite web |url=https://maliciousaireport.com/ |author=Malicious AI Report |publisher=Malicious AI Report |title=The Malicious Use of Artificial Intelligence |accessdate=February 24, 2018}}</ref><ref name="musk-leaves" /><ref>{{cite web |url=https://www.wired.com/story/why-artificial-intelligence-researchers-should-be-more-paranoid/ |title=Why Artificial Intelligence Researchers Should Be More Paranoid |first=Tom |last=Simonite |publisher=[[wikipedia:WIRED|WIRED]] |accessdate=March 2, 2018}}</ref>
 +
|-
 +
| 2018 || {{dts|February 20}} || Donors/advisors || Team || OpenAI announces changes in donors and advisors. New donors are: {{W|Jed McCaleb}}, {{W|Gabe Newell}}, {{W|Michael Seibel}}, {{W|Jaan Tallinn}}, and {{W|Ashton Eaton}} and {{W|Brianne Theisen-Eaton}}. {{W|Reid Hoffman}} is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), {{W|Julia Galef}}, and Maran Nelson become advisors. {{W|Elon Musk}} departs the board but remains as a donor and advisor.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-supporters/ |publisher=OpenAI Blog |title=OpenAI Supporters |date=February 21, 2018 |accessdate=March 1, 2018}}</ref><ref name="musk-leaves">{{cite web |url=https://www.theverge.com/2018/2/21/17036214/elon-musk-OpenAI-ai-safety-leaves-board |date=February 21, 2018 |publisher=The Verge |title=Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla |accessdate=March 2, 2018}}</ref>
 +
|-
 +
| 2018 || {{dts|February 26}} || Robotics || Product release || OpenAI announces a research release that includes eight simulated robotics environments and a reinforcement learning algorithm called Hindsight Experience Replay (HER). The environments are more challenging than existing ones and involve realistic tasks. HER allows learning from failure by substituting achieved goals for the original ones, enabling agents to learn how to achieve arbitrary goals. The release also includes requests for further research to improve HER and reinforcement learning. The goal-based environments require some changes to the Gym API and can be used with existing reinforcement learning algorithms. Overall, this release provides new opportunities for robotics research and advancements in reinforcement learning.<ref>{{cite web |title=Ingredients for Robotics Research |url=https://openai.com/blog/ingredients-for-robotics-research/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2018 || {{dts|March 3}} || {{w|Hackathon}} || Event hosting || OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to {{w|AGI}}.<ref>{{cite web |url=https://blog.OpenAI.com/hackathon/ |publisher=OpenAI Blog |title=OpenAI Hackathon |date=February 24, 2018 |accessdate=March 1, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/hackathon-follow-up/ |publisher=OpenAI Blog |title=Report from the OpenAI Hackathon |date=March 15, 2018 |accessdate=May 5, 2018}}</ref>
 +
|-
 +
| 2018 || {{Dts|April 5}}{{snd}}June 5 || {{w|Reinforcement learning}} || Event hosting || The OpenAI Retro Contest takes place.<ref>{{cite web |url=https://contest.OpenAI.com/ |title=OpenAI Retro Contest |publisher=OpenAI |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/retro-contest/ |publisher=OpenAI Blog |title=Retro Contest |date=April 13, 2018 |accessdate=May 5, 2018}}</ref> The competition involves using the Retro platform to develop artificial intelligence agents capable of playing classic video games. Participants are required to train their agents to achieve high scores in a set of selected games using reinforcement learning techniques. The contest provides a framework called gym-retro, which allows participants to interact with and train agents on retro games using OpenAI Gym. The goal is to develop intelligent agents that can learn and adapt to the games' dynamics, achieving high scores and demonstrating effective gameplay strategies.<ref>{{cite web |title=[OpenAI Retro Contest] Getting Started |url=https://medium.com/@dnk8n/openai-retro-contest-getting-started-62a9e5cc3801 |website=medium.com |access-date=31 May 2023}}</ref>
 +
|-
 +
| 2018 || {{dts|April 9}} || AI Ethics, AI Governance, AI Safety || Commitment || OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.<ref>{{cite web |url=https://blog.OpenAI.com/OpenAI-charter/ |publisher=OpenAI Blog |title=OpenAI Charter |date=April 9, 2018 |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.lesswrong.com/posts/e5mFQGMc7JpechJak/OpenAI-charter |title=OpenAI charter |accessdate=May 5, 2018 |date=April 9, 2018 |author=wunan |publisher=[[wikipedia:LessWrong|LessWrong]]}}</ref><ref>{{cite web |url=https://www.reddit.com/r/MachineLearning/comments/8azk2n/d_OpenAI_charter/ |publisher=reddit |title=[D] OpenAI Charter • r/MachineLearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16794194 |title=OpenAI Charter |website=Hacker News |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://thenextweb.com/artificial-intelligence/2018/04/10/the-ai-company-elon-musk-co-founded-is-trying-to-create-sentient-machines/ |title=The AI company Elon Musk co-founded intends to create machines with real intelligence |publisher=The Next Web |date=April 10, 2018 |author=Tristan Greene |accessdate=May 5, 2018}}</ref>
 +
|-
 +
| 2018 || {{Dts|April 19}} || Team salary || Financial || ''{{W|The New York Times}}'' publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 {{W|Form 990}}. The salaries include $1.9 million paid to {{W|Ilya Sutskever}} and $800,000 paid to {{W|Ian Goodfellow}} (hired in March of that year).<ref>{{cite web |url=https://www.nytimes.com/2018/04/19/technology/artificial-intelligence-salaries-OpenAI.html |date=April 19, 2018 |publisher=[[wikipedia:The New York Times|The New York Times]] |title=A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit |author=Cade Metz |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://www.reddit.com/r/reinforcementlearning/comments/8di9yt/ai_researchers_are_making_more_than_1_million/dxnc76j/ |publisher=reddit |title="A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://news.ycombinator.com/item?id=16880447 |title=gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit |website=Hacker News |accessdate=May 5, 2018}}</ref>
 +
|-
 +
| 2018 || {{Dts|May 2}} || AI training, AI goal learning, self-play || Research || A paper by Geoffrey Irving, Paul Christiano, and Dario Amodei explores an approach to training AI systems to learn complex human goals and preferences. Traditional methods that rely on direct human judgment may fail when the tasks are too complicated. To address this, the authors propose training agents through self-play using a [[w:Zero-sum game|zero-sum]] debate game. In this game, two agents take turns making statements, and a human judge determines which agent provides the most true and useful information. The authors demonstrate the effectiveness of this approach in an experiment involving the [[w:MNIST database|MNIST dataset]], where agents compete to convince a sparse classifier, resulting in significantly improved accuracy. They also discuss theoretical and practical considerations of the debate model and suggest future experiments to further explore its properties.<ref>{{cite web |url=https://arxiv.org/abs/1805.00899 |title=[1805.00899] AI safety via debate |accessdate=May 5, 2018}}</ref><ref>{{cite web |url=https://blog.OpenAI.com/debate/ |publisher=OpenAI Blog |title=AI Safety via Debate |date=May 3, 2018 |first1=Geoffrey |last1=Irving |first2=Dario |last2=Amodei |accessdate=May 5, 2018}}</ref>
 +
|-
 +
| 2018 || {{dts|May 16}} || {{w|Computation}} || Research || OpenAI releases an analysis showing that the amount of compute used in the largest AI training runs has been increasing exponentially since 2012, with a 3.4-month doubling time. This represents a much faster growth rate than {{w|Moore's Law}}. The increase in compute plays a crucial role in advancing AI capabilities. The trend is expected to continue, driven by hardware advancements and algorithmic innovations, although cost and chip efficiency would eventually impose limits. The authors highlight the importance of addressing the implications of this trend, including safety and the malicious use of AI. Modest amounts of compute have also led to significant AI breakthroughs, indicating that massive compute is not always a requirement for important results.<ref>{{cite web |title=AI and Compute |url=https://openai.com/blog/ai-and-compute/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2018 || {{dts|June 11}} || {{w|Unsupervised learning}} || Research || OpenAI announces having obtained significant results on a suite of diverse language tasks with a scalable, task-agnostic system, which uses a combination of transformers and unsupervised pre-training.<ref>{{cite web |title=Improving Language Understanding with Unsupervised Learning |url=https://openai.com/blog/language-unsupervised/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2018 || {{Dts|June 25}} || {{w|Neural network}} || Product release || OpenAI announces a set of AI algorithms able to hold their own as a team of five and defeat human amateur players at {{w|Dota 2}}, a multiplayer online battle arena video game popular in e-sports for its complexity and necessity for teamwork.<ref>{{cite web |last1=Gershgorn |first1=Dave |title=OpenAI built gaming bots that can work as a team with inhuman precision |url=https://qz.com/1311732/OpenAI-built-gaming-bots-that-can-work-as-a-team-with-inhuman-precision/ |website=qz.com |accessdate=14 June 2019}}</ref> In the team of algorithms, called OpenAI Five, each algorithm uses a {{w|neural network}} to learn both how to play the game and how to cooperate with its AI teammates.<ref>{{cite web |last1=Knight |first1=Will |title=A team of AI algorithms just crushed humans in a complex computer game |url=https://www.technologyreview.com/s/611536/a-team-of-ai-algorithms-just-crushed-expert-humans-in-a-complex-computer-game/ |website=technologyreview.com |accessdate=14 June 2019}}</ref><ref>{{cite web |title=OpenAI’s bot can now defeat skilled Dota 2 teams |url=https://venturebeat.com/2018/06/25/OpenAI-trains-ai-to-defeat-teams-of-skilled-dota-2-players/ |website=venturebeat.com |accessdate=14 June 2019}}</ref>
 +
|-
 +
| 2018 || {{Dts|July 18}} || {{w|Lethal autonomous weapon}}s || Background || {{w|Elon Musk}}, along with other tech leaders, sign a pledge promising to not develop “{{w|lethal autonomous weapon}}s.” They also call on governments to institute laws against such technology. The pledge is organized by the {{w|Future of Life Institute}}, an outreach group focused on tackling existential risks.<ref>{{cite web |last1=Vincent |first1=James |title=Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems |url=https://www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute |website=theverge.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Locklear |first1=Mallory |title=DeepMind, Elon Musk and others pledge not to make autonomous AI weapons |url=https://www.engadget.com/2018/07/18/deepmind-elon-musk-pledge-autonomous-ai-weapons/ |website=engadget.com |accessdate=1 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=Elon Musk, his arch nemesis DeepMind swear off AI weapons |url=https://www.theregister.co.uk/2018/07/19/keep_ai_nonlethal/ |website=theregister.co.uk |accessdate=1 June 2019}}</ref>
 +
|-
 +
| 2018 || {{Dts|July 30}} || Robotics || Product release || OpenAI achieves a new benchmark for robot dexterity through AI training. Using a simulation with various randomizations, the researchers teach their robot hand, Dactyl, to deftly manipulate a cube-shaped block. The AI system learns through trial and error, accumulating about 100 years' worth of simulated experience, and achieves human-like movements. While experts praise OpenAI's work, they acknowledge some limitations and the need for significant computing power. The research demonstrates progress in robotics and AI, with potential applications in automating manual labor.<ref>{{cite web |title=OpenAI’s ‘state-of-the-art’ system gives robots humanlike dexterity |url=https://venturebeat.com/2018/07/30/OpenAIs-state-of-the-art-system-gives-robots-humanlike-dexterity/ |website=venturebeat.com |accessdate=14 June 2019}}</ref><ref>{{cite web |last1=Coldewey |first1=Devin |title=OpenAI’s robotic hand doesn’t need humans to teach it human behaviors |url=https://techcrunch.com/2018/07/30/OpenAIs-robotic-hand-doesnt-need-humans-to-teach-it-human-behaviors/ |website=techcrunch.com |accessdate=14 June 2019}}</ref><ref>{{cite web |last1=Vincent |first1=James |title=OpenAI sets new benchmark for robot dexterity |url=https://www.theverge.com/2018/7/30/17621112/openai-robot-dexterity-dactyl-artificial-intelligence |website=The Verge |access-date=28 July 2023 |date=30 July 2018}}</ref>
 +
|-
 +
| 2018 || {{Dts|August 7}} || Reinforcement learning || Achievement || OpenAI's advanced AI system, OpenAI Five, defeats a team of five highly skilled human {{w|Dota 2}} players, including former professionals. The AI, which by this time has already demonstrated its skills in 1v1 matches, showcases its superiority by handily winning against the human team. OpenAI Five's training involves playing games against itself at an accelerated pace, utilizing a specialized training system. The exhibition match, streamed live on {{w|Twitch}}, features renowned Dota 2 players. In the first two matches, the AI wins convincingly within 21 and 25 minutes, respectively. Although the AI loses the third match due to the audience selecting heroes it isn't familiar with, this achievement showcases the remarkable progress of AI in complex team-based games.<ref>{{cite web |last1=Whitwam |first1=Ryan |title=OpenAI Bots Crush the Best Human Dota 2 Players in the World |url=https://www.extremetech.com/gaming/274907-OpenAI-bots-crush-the-best-human-dota-2-players-in-the-world |website=extremetech.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Quach |first1=Katyanna |title=OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney |url=https://www.theregister.co.uk/2018/08/06/OpenAI_bots_dota_2_semipros/ |website=theregister.co.uk |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Savov |first1=Vlad |title=The OpenAI Dota 2 bots just defeated a team of former pros |url=https://www.theverge.com/2018/8/6/17655086/dota2-OpenAI-bots-professional-gaming-ai |website=theverge.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Rigg |first1=Jamie |title=‘Dota 2’ veterans steamrolled by AI team in exhibition match |url=https://www.engadget.com/2018/08/06/OpenAI-five-dumpsters-dota-2-veterans/ |website=engadget.com |accessdate=15 June 2019}}</ref>
 +
|-
 +
| 2018 || {{dts|August 16}} || {{w|Arboricity}} || Research || OpenAI publishes a paper on constant-arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.<ref>{{cite web |last1=Chu |first1=Timothy |last2=Cohen |first2=Michael B. |last3=Pachocki |first3=Jakub W. |last4=Peng |first4=Richard |title=Constant Arboricity Spectral Sparsifiers |url=https://arxiv.org/abs/1808.05662 |website=arxiv.org |accessdate=26 March 2020}}</ref>
 +
|-
 +
| 2018 || {{dts|September}} || Amodei named Research Director || Team || Dario Amodei becomes OpenAI's Research Director.<ref name="Dario Amodeiy"/>
 +
|-
 +
| 2018 || {{dts|October 31}} || {{w|Reinforcement learning}} || Product release || OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging {{w|reinforcement learning}} agents to explore their environments through curiosity, which for the first time exceeds average human performance on the video game Montezuma’s Revenge.<ref>{{cite web |title=Reinforcement Learning with Prediction-Based Rewards |url=https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2018 || {{Dts|November 8}} || {{w|Reinforcement learning}} || Education || OpenAI launches Spinning Up, an educational resource designed to teach anyone deep reinforcement learning. The program consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.<ref>{{cite web |title=Spinning Up in Deep RL |url=https://OpenAI.com/blog/spinning-up-in-deep-rl/ |website=OpenAI.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Ramesh |first1=Prasad |title=OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners |url=https://hub.packtpub.com/OpenAI-launches-spinning-up-a-learning-resource-for-potential-deep-learning-practitioners/ |website=hub.packtpub.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches reinforcement learning training to prepare for artificial general intelligence |url=https://flipboard.com/@venturebeat/OpenAI-launches-reinforcement-learning-training-to-prepare-for-artificial-genera/a-TxuPmdApTGSzPr0ny7qXsw%3Aa%3A2919225365-bafeac8636%2Fventurebeat.com |website=flipboard.com |accessdate=15 June 2019}}</ref>
 +
|-
 +
| 2018 || {{Dts|November 9}} || {{w|Artificial General Intelligence}}, {{w|deep learning}} || Notable comment || {{w|Ilya Sutskever}} gives a speech at the AI Frontiers Conference in {{w|San Jose}}. He expresses the belief that short-term AGI (Artificial General Intelligence) should be taken seriously as a possibility. He emphasizes the potential of {{w|deep learning}}, which has made significant advancements in various tasks such as image classification, machine translation, and game playing. Sutskever suggests that the rapid progress of AI and increasing compute power could lead to the emergence of AGI. However, there are differing opinions in the AI community, with some experts, like Gary Marcus, arguing that deep learning alone may not achieve AGI. The discussion on AGI's potential impact and the need for safety research continues within the academic community. Sutskever declares: {{Quote|We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.<ref>{{cite web |title=OpenAI Founder: Short-Term AGI Is a Serious Possibility |url=https://syncedreview.com/2018/11/13/OpenAI-founder-short-term-agi-is-a-serious-possibility/ |website=syncedreview.com |accessdate=15 June 2019}}</ref>}}
 +
|-
 +
| 2018 || {{Dts|November 19}} || Human-guided RL optimization || Partnership || OpenAI partners with {{w|DeepMind}} in a new paper that proposes a new method to train {{w|reinforcement learning}} agents in ways that enables them to surpass human performance. The paper, titled ''Reward learning from human preferences and demonstrations in Atari'', introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.<ref>{{cite web |last1=Rodriguez |first1=Jesus |title=What’s New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning |url=https://towardsdatascience.com/whats-new-in-deep-learning-research-OpenAI-and-deepmind-join-forces-to-achieve-superhuman-48e7d1accf85 |website=towardsdatascience.com |accessdate=29 June 2019}}</ref>             
 +
|-
 +
| 2018 || {{dts|December 4}} || Gradient noise scale insights || Research || OpenAI publishes its discovery that the gradient noise scale predicts the effectiveness of large batch sizes in neural network training. Complex tasks with noisier gradients can benefit from increasingly large batches, removing a potential limit to AI system growth. This finding highlights the possibility of faster AI advancements and the need for responsible development. The research systematizes {{w|neural network}} training and shows that it can be understood through {{w|statistical analysis}}, providing insights into parallelism potential across different tasks.<ref>{{cite web |title=How AI Training Scales |url=https://openai.com/blog/science-of-ai/ |website=openai.com |accessdate=4 April 2020}}</ref>
 +
|-
 +
| 2018 || {{Dts|December 6}} || {{w|Reinforcement learning}} || Product release || OpenAI releases CoinRun, a training environment designed to test the adaptability of reinforcement learning agents, i.e. how well they generalize to new, unseen levels.<ref>{{cite web |title=OpenAI teaches AI teamwork by playing hide-and-seek |url=https://venturebeat.com/2019/09/17/OpenAI-and-deepmind-teach-ai-to-work-as-a-team-by-playing-hide-and-seek/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI’s CoinRun tests the adaptability of reinforcement learning agents |url=https://venturebeat.com/2018/12/06/OpenAIs-coinrun-tests-the-adaptability-of-reinforcement-learning-agents/ |website=venturebeat.com |accessdate=24 February 2020}}</ref>
 +
|-
 +
| 2018 || December || Funding warning from Musk || || Musk reaches out to OpenAI with a stark warning regarding funding challenges. He asserts that even raising several hundred million dollars would be insufficient to meet the demands of developing AGI. Musk stresses the need for billions of dollars annually, underscoring the financial realities facing OpenAI as it aims to fulfill its mission of building safe and beneficial AGI.<ref name="index"/>
 +
|-
 +
| 2019 || February 14 || {{w|Natural-language generation}} || Product release || OpenAI publishes a blog post discussing the release of GPT-2, a large-scale unsupervised language model with 1.5 billion parameters, which can generate coherent paragraphs of text, achieve state-of-the-art performance on many language modeling benchmarks, and perform rudimentary reading comprehension, machine translation, question answering, and summarization without task-specific training. Due to concerns about malicious applications, the trained model is not released, but a smaller model and a technical paper are released for research purposes. GPT-2 is trained to predict the next word in 40GB of internet text, using a diverse [[w:data set|dataset]], and can generate conditional synthetic text samples of unprecedented quality.<ref>{{cite web |title=Better language models and their implications |url=https://openai.com/research/better-language-models |website=openai.com |access-date=23 March 2023}}</ref><ref>{{cite web |title=An AI helped us write this article |url=https://www.vox.com/future-perfect/2019/2/14/18222270/artificial-intelligence-open-ai-natural-language-processing |website=vox.com |accessdate=28 June 2019}}</ref><ref>{{cite web |last1=Lowe |first1=Ryan |title=OpenAI’s GPT-2: the model, the hype, and the controversy |url=https://towardsdatascience.com/OpenAIs-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8 |website=towardsdatascience.com |accessdate=10 July 2019}}</ref> OpenAI initially tries to communicate the risk posed by this technology.<ref name="ssfr"/>
 +
|-
 +
| 2019 || {{dts|February 19}} || {{w|AI Alignment}} || Research || OpenAI-affiliated researchers publish an article arguing that aligning advanced AI systems with human values requires resolving uncertainties related to human psychology and biases, which can only be resolved empirically through experimentation. The authors call for [[w:Social science|social scientists]] with experience in human cognition, behavior, and ethics to collaborate with AI researchers to improve our understanding of the human side of AI alignment. The paper highlights the limitations of existing {{w|machine learning}} in addressing the complexities of human values and biases and suggests conducting experiments consisting entirely of people, in which humans play the roles that machine learning agents would eventually take. The authors emphasize the importance of interdisciplinary collaborations between social scientists and ML researchers to achieve long-term AI safety.<ref>{{cite journal |last1=Irving |first1=Geoffrey |last2=Askell |first2=Amanda |title=AI Safety Needs Social Scientists |doi=10.23915/distill.00014 |url=https://distill.pub/2019/safety-needs-social-scientists/}}</ref><ref>{{cite web |title=AI Safety Needs Social Scientists |url=https://openai.com/blog/ai-safety-needs-social-scientists/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2019 || {{dts|March 4}} || {{w|Reinforcement learning}} || Product release || OpenAI releases Neural MMO (massively multiplayer online), a multi-agent game environment for {{w|reinforcement learning}} agents. The platform supports a large, variable number of agents within a persistent and open-ended task.<ref>{{cite web |title=Neural MMO: A Massively Multiagent Game Environment |url=https://openai.com/blog/neural-mmo/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2019 || {{dts|March 6}} || Neural network visualization || Product release || OpenAI introduces activation atlases, a technique developed in collaboration with {{w|Google}} researchers, which enables the visualization of interactions between {{w|neuron}}s in AI systems. The researchers provide insights into the internal decision-making processes of {{w|neural network}}s, aiding in identifying weaknesses and investigating failures. Activation atlases build on feature visualization, moving from individual neurons to visualizing the collective space they represent. Understanding neural network operations is crucial for auditing and ensuring their safety. Activation atlases allow humans to uncover issues like reliance on spurious correlations or feature reuse bugs; such issues can even be exploited to deceive the model through manipulated images. Activation atlases prove more effective than expected, suggesting that neural network activations can be meaningful to humans.<ref>{{cite web |title=Introducing Activation Atlases |url=https://openai.com/blog/introducing-activation-atlases/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2019 || {{Dts|March 11}} || AGI development || Reorganization || OpenAI announces the creation of OpenAI LP, a [[w:For-profit corporation|for-profit company]] that aims to accelerate progress towards creating safe artificial general intelligence (AGI). Owned and controlled by the OpenAI nonprofit organization's board of directors, OpenAI LP reportedly plans to raise and invest billions of dollars in advancing AI. {{w|Sam Altman}} agrees to serve as the {{w|CEO}}, with Greg Brockman as {{w|Chief technology officer}} and Ilya Sutskever as the [[w:Chief scientific officer|chief scientist]]. The restructuring allows OpenAI to focus on developing new AI technologies while the nonprofit arm continues educational and policy initiatives. The company is reportedly concerned that AGI development may become a competition that neglects safety and aims to collaborate with any company that achieves AGI before them. OpenAI LP's initial investors include American internet entrepreneur {{w|Reid Hoffman}}'s charitable organization and {{w|Khosla Ventures}}.<ref>{{cite web |last1=Johnson |first1=Khari |title=OpenAI launches new company for funding safe artificial general intelligence |url=https://venturebeat.com/2019/03/11/OpenAI-launches-new-company-for-funding-safe-artificial-general-intelligence/ |website=venturebeat.com |accessdate=15 June 2019}}</ref><ref>{{cite web |last1=Trazzi |first1=Michaël |title=Considerateness in OpenAI LP Debate |url=https://medium.com/@MichaelTrazzi/considerateness-in-OpenAI-lp-debate-6eb3bf4c5341 |website=medium.com |accessdate=15 June 2019}}</ref>
 +
|-
 +
| 2019 || {{dts|March 21}} || AI training || Product release || OpenAI announces progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models.<ref>{{cite web |title=Implicit Generation and Generalization Methods for Energy-Based Models |url=https://openai.com/blog/energy-based-models/ |website=openai.com |accessdate=5 April 2020}}</ref>
 +
|-
 +
| 2019 || {{Dts|March}} || Leadership || Team || {{w|Sam Altman}}, the president of {{w|Y Combinator}}, a prominent Silicon Valley accelerator, announces his decision to step down from that position, transitioning into a chairman role and focusing on other endeavors such as his involvement with OpenAI, where he serves as co-chair.<ref>{{cite web |title=Sam Altman’s leap of faith |url=https://techcrunch.com/2019/05/18/sam-altmans-leap-of-faith/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator |url=https://techcrunch.com/2019/03/08/y-combinator-president-sam-altman-is-stepping-down-amid-a-series-of-changes-at-the-accelerator/ |website=techcrunch.com |accessdate=24 February 2020}}</ref><ref name="orgwatch.issarice.com"/>
|-
| 2019 || {{Dts|April 23}} || {{w|Deep learning}} || Research || OpenAI publishes a paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. It uses an improved algorithm based on the attention mechanism and can extract patterns from sequences 30 times longer than previously possible.<ref>{{cite web |last1=Alford |first1=Anthony |title=OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences |url=https://www.infoq.com/news/2019/05/OpenAI-sparse-transformers/ |website=infoq.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x |url=https://medium.com/syncedreview/OpenAI-sparse-transformer-improves-predictable-sequence-length-by-30x-5a65ef2592b9 |website=medium.com |accessdate=15 June 2019}}</ref><ref>{{cite web |title=Generative Modeling with Sparse Transformers |url=https://OpenAI.com/blog/sparse-transformer/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{Dts|April 25}} || {{w|Neural network}} || Product release || OpenAI announces MuseNet, a deep {{w|neural network}} able to generate 4-minute musical compositions with 10 different instruments and to combine multiple styles, from [[w:Country music|country]] to {{w|Mozart}} to {{w|The Beatles}}. The neural network uses general-purpose unsupervised technology.<ref>{{cite web |title=MuseNet |url=https://OpenAI.com/blog/musenet/ |website=OpenAI.com |accessdate=15 June 2019}}</ref>
|-
| 2019 || {{Dts|April 27}} || {{w|Robotics}}, {{w|machine learning}} || Event hosting || OpenAI hosts the OpenAI Robotics Symposium 2019, which aims to bring together experts from robotics and machine learning communities to discuss the development of robots that learn. The event features talks from researchers and industry leaders covering topics such as dexterity, learning from play, human-robot interaction, and adaptive robots. Attendees include individuals from various organizations and disciplines, including industry labs, universities, and research institutions. The symposium also includes a live demonstration of OpenAI's humanoid robot hand manipulating objects using vision and {{w|reinforcement learning}}.<ref>{{cite web |title=OpenAI Robotics Symposium 2019 |url=https://OpenAI.com/blog/symposium-2019/ |website=OpenAI.com |accessdate=14 June 2019}}</ref>
|-
| 2019 || {{Dts|May}} || {{w|Natural-language generation}} || Product release || OpenAI releases a limited version of its language-generating system GPT-2. This version is more powerful than the extremely restricted initial release, though still significantly limited compared to the full model, which OpenAI had withheld citing concerns that it could be abused.<ref>{{cite web |title=A poetry-writing AI has just been unveiled. It’s ... pretty good. |url=https://www.vox.com/2019/5/15/18623134/OpenAI-language-ai-gpt2-poetry-try-it |website=vox.com |accessdate=11 July 2019}}</ref> The potential of the new system is recognized by various experts.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI's new multitalented AI writes, translates, and slanders |url=https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-OpenAI-gpt2 |website=theverge.com |accessdate=11 July 2019}}</ref>
|-
| 2019 || {{dts|June 13}} || {{w|Natural-language generation}} || Coverage || Connor Leahy publishes an article entitled ''The Hacker Learns to Trust'', which discusses the work of OpenAI, and particularly the potential danger of its language-generating system GPT-2. Leahy highlights: "Because this isn’t just about GPT2. What matters is that at some point in the future, someone will create something truly dangerous and there need to be commonly accepted safety norms before that happens."<ref name="ssfr">{{cite web |title=The Hacker Learns to Trust |url=https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62f3c1490f51 |website=medium.com |accessdate=5 May 2020}}</ref>
|-
| 2019 || {{dts|June 13}} || {{w|Synthetic media}} || Congressional hearing || OpenAI appears before the {{w|United States Congress}} to discuss the potential consequences of synthetic media, including a specific focus on synthetic text.<ref name="GPT-2: 6-month follow-up"/> The House Permanent Select Committee on Intelligence holds an open hearing to discuss the national security challenges posed by artificial intelligence, manipulated media, and deepfake technology. This is the first House hearing focused on examining deepfakes and other AI-generated synthetic data. The Committee discusses the threats posed by fake content and ways to detect and combat it, as well as the roles of the public and private sectors and society as a whole in countering a potentially bleak future.<ref>{{cite web |title=Open Hearing on Deepfakes and Artificial Intelligence |url=https://www.youtube.com/watch?v=tdLS9MlIWOk |website=youtube.com |access-date=23 March 2023 |language=en}}</ref>
|-
| 2019 || {{dts|July 22}} || Cloud platform integration || Partnership || OpenAI announces an exclusive partnership with {{w|Microsoft}}. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using {{w|Microsoft Azure}} (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft would also be OpenAI's "preferred partner for commercializing new AI technologies."<ref>{{cite web|url = https://OpenAI.com/blog/microsoft/|title = Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI|date = July 22, 2019|accessdate = July 26, 2019|publisher = OpenAI}}</ref><ref>{{cite web|url = https://news.microsoft.com/2019/07/22/OpenAI-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies/|title =  OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies|date = July 22, 2019|accessdate = July 26, 2019|publisher = Microsoft}}</ref><ref>{{cite web|url = https://www.businessinsider.com/microsoft-OpenAI-artificial-general-intelligence-investment-2019-7|title = Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence|last = Chan|first= Rosalie|date = July 22, 2019|accessdate = July 26, 2019|publisher = Business Insider}}</ref><ref>{{cite web|url = https://www.forbes.com/sites/mohanbirsawhney/2019/07/24/the-real-reasons-microsoft-invested-in-OpenAI/|title = The Real Reasons Microsoft Invested In OpenAI|last = Sawhney|first = Mohanbir|date = July 24, 2019|accessdate = July 26, 2019|publisher = Forbes}}</ref>
|-
| 2019 || August 20 || {{w|Language model}} || Product release || OpenAI announces the release of its 774 million parameter {{w|GPT-2}} language model, along with an open-source legal agreement to make it easier for organizations to initiate model-sharing partnerships with each other. They also publish a technical report about their experience in coordinating with the wider AI research community on publication norms. Through their research, they find that coordination for language models is difficult but possible, synthetic text generated by language models can be convincing to humans, and detecting malicious use of language models is a genuinely difficult research problem that requires both statistical detection and human judgment.<ref name="GPT-2: 6-month follow-up">{{cite web |title=GPT-2: 6-month follow-up |url=https://openai.com/research/gpt-2-6-month-follow-up |website=openai.com |access-date=23 March 2023}}</ref><ref>{{cite web |title=OpenAI releases curtailed version of GPT-2 language model |url=https://venturebeat.com/2019/08/20/OpenAI-releases-curtailed-version-of-gpt-2-language-model/ |website=venturebeat.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OpenAI Just Released an Even Scarier Fake News-Writing Algorithm |url=https://interestingengineering.com/OpenAI-just-released-an-even-scarier-fake-news-writing-algorithm |website=interestingengineering.com |accessdate=24 February 2020}}</ref><ref>{{cite web |title=OPENAI JUST RELEASED A NEW VERSION OF ITS FAKE NEWS-WRITING AI |url=https://futurism.com/the-byte/OpenAI-new-version-writing-ai |website=futurism.com |accessdate=24 February 2020}}</ref>
|-
| 2019 || September 17 || {{w|Reinforcement learning}} || Research || OpenAI publishes an article describing a new simulation environment that allows agents to learn and improve their ability to play hide-and-seek, ultimately leading to the emergence of complex tool use strategies. In the simulation, agents can move, see, sense, grab, and lock objects in place. There are no explicit incentives for the agents to interact with objects other than the hide-and-seek objective. Agents are rewarded based on the outcome of the game. As agents train against each other in hide-and-seek, up to six distinct strategies emerge, leading to increasingly complex tool use. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.<ref>{{cite web |title=Emergent tool use from multi-agent interaction |url=https://openai.com/research/emergent-tool-use |website=openai.com |access-date=15 March 2023}}</ref>
|-
| 2019 || October 15 || {{w|Neural network}}s  || Research || OpenAI reports on having trained a pair of {{w|neural network}}s that can solve the {{w|Rubik's Cube}} with a human-like robotic hand. The neural networks were trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). ADR creates diverse environments in simulation that can capture the physics of the real world, enabling the transfer of neural networks learned in simulation to be applied to the real world. The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. The breakthrough shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.<ref>{{cite web |title=Solving Rubik’s Cube with a robot hand |url=https://openai.com/research/solving-rubiks-cube |website=openai.com |access-date=15 March 2023}}</ref><ref>{{cite web |title=Solving Rubik's Cube with a Robot Hand |url=https://arxiv.org/abs/1910.07113 |website=arxiv.org |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Solving Rubik’s Cube with a Robot Hand |url=https://openai.com/blog/solving-rubiks-cube/ |website=openai.com |accessdate=4 April 2020}}</ref><ref>{{cite web |title=Solving Rubik’s Cube with a robot hand |url=https://openai.com/research/solving-rubiks-cube |website=openai.com |access-date=25 June 2023}}</ref><ref>{{cite web |last1=Fuscaldo |first1=Donna |title=A Human-like Robotic Hand is Able to Solve the Rubik's Cube |url=https://interestingengineering.com/culture/a-human-like-robotic-hand-is-able-to-solve-the-rubiks-cube |website=interestingengineering.com |access-date=25 June 2023 |date=15 October 2019}}</ref>
|-
| 2019 || November 5 || {{w|Natural-language generation}} || Product release || OpenAI releases the largest version of GPT-2, the 1.5B parameter version, along with code and model weights to aid detection of outputs of GPT-2 models. OpenAI releases the model as a test case for a full staged release process for future powerful models, hoping to continue the conversation with the AI community on responsible publication. In its tests and research on the GPT-2 model, OpenAI found that humans find GPT-2 outputs convincing, that the model can be fine-tuned for misuse, that detection of synthetic text is challenging, that no evidence of misuse has been found so far, and that standards are needed for studying bias.<ref>{{cite web |title=GPT-2: 1.5B release |url=https://openai.com/research/gpt-2-1-5b-release |website=openai.com |access-date=15 March 2023}}</ref><ref>{{cite web |title=GPT-2: 1.5B Release |url=https://openai.com/blog/gpt-2-1-5b-release/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|November 21}} || {{w|Reinforcement learning}} || Product release || OpenAI releases Safety Gym, a suite of environments and tools for measuring progress in {{w|reinforcement learning}} agents that respect safety constraints during training. The challenge of "safe exploration" arises when reinforcement learning agents need to explore their environments to learn [[w:Optimality model|optimal behaviors]], but this exploration can lead to risky and unsafe actions. OpenAI proposes constrained reinforcement learning as a formalism for addressing safe exploration, where agents have both reward functions to maximize and cost functions to constrain their behavior. To study constrained RL, OpenAI developed Safety Gym, which includes various [[w:Deployment environment|environments]] and tasks of increasing difficulty to evaluate and train agents that prioritize safety.<ref>{{cite web |title=Safety Gym |url=https://openai.com/blog/safety-gym/ |website=openai.com |accessdate=5 April 2020}}</ref>
|-
| 2019 || {{dts|December 3}} || {{w|Reinforcement learning}} || Product release || OpenAI releases the Procgen Benchmark, which consists of 16 procedurally-generated environments designed to measure the ability of reinforcement learning agents to learn generalizable skills. These environments provide a direct measure of an agent's ability to generalize across different levels. OpenAI finds that agents require training on 500-1000 different levels before they can generalize to new ones, highlighting the need for diversity within environments. The benchmark is designed for experimental convenience, high diversity within and across environments, and emphasizes visual recognition and motor control. It's expected to accelerate research in developing better reinforcement learning algorithms.<ref>{{cite web |title=Procgen Benchmark |url=https://openai.com/blog/procgen-benchmark/ |website=openai.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=OpenAI’s Procgen Benchmark prevents AI model overfitting |url=https://venturebeat.com/2019/12/03/openais-procgen-benchmark-overfitting/ |website=venturebeat.com |accessdate=2 March 2020}}</ref><ref>{{cite web |title=GENERALIZATION IN REINFORCEMENT LEARNING – EXPLORATION VS EXPLOITATION |url=https://analyticsindiamag.com/generalization-in-reinforcement-learning-exploration-vs-exploitation/ |website=analyticsindiamag.com |accessdate=2 March 2020}}</ref>
|-
| 2019 || {{dts|December 4}} || {{w|Deep learning}} || Research || OpenAI publishes a blog post exploring the phenomenon of "double descent" in {{w|deep learning}} models like CNNs, ResNets, and transformers. Double descent refers to a pattern where performance initially improves, then worsens, and then improves again with increasing model size, data size, or training time. This behavior challenges the conventional wisdom of bigger models always being better. The authors observe that double descent occurs when models are barely able to fit the training set and suggest further research to fully understand its underlying mechanisms.<ref>{{cite web |last1=Nakkiran |first1=Preetum |last2=Kaplun |first2=Gal |last3=Bansal |first3=Yamini |last4=Yang |first4=Tristan |last5=Barak |first5=Boaz |last6=Sutskever |first6=Ilya |title=Deep Double Descent: Where Bigger Models and More Data Hurt |website=arxiv.org |url=https://arxiv.org/abs/1912.02292|accessdate=5 April 2020}}</ref><ref>{{cite web|url = https://openai.com/blog/deep-double-descent/|title = Deep Double Descent|publisher = OpenAI|date = December 5, 2019|accessdate = May 23, 2020}}</ref> MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,<ref>{{cite web|url = https://www.lesswrong.com/posts/FRv7ryoqtvSuqBxuT/understanding-deep-double-descent|title = Understanding “Deep Double Descent”|date = December 5, 2019|accessdate = 24 May 2020|publisher = LessWrong|last = Hubinger|first = Evan}}</ref> and follows up with a post on the AI safety implications.<ref>{{cite web|url = https://www.lesswrong.com/posts/nGqzNC6uNueum2w8T/inductive-biases-stick-around|title = Inductive biases stick around|date = December 18, 2019|accessdate = 24 May 2020|last = Hubinger|first = Evan}}</ref>
|-
| 2019 || {{dts|December}} || Dario Amodei promoted || Team || Dario Amodei is promoted to OpenAI's Vice President of Research.<ref name="Dario Amodeiy">{{cite web |title=Dario Amodei |url=https://www.linkedin.com/in/dario-amodei-3934934/ |website=linkedin.com |accessdate=29 February 2020}}</ref>
|-
| 2020 || {{dts|January 30}} || {{w|Deep learning}} || Software adoption || OpenAI announces its decision to migrate to Facebook's {{w|PyTorch}} machine learning framework for future projects, leaving behind Google's {{w|TensorFlow}}. OpenAI cites PyTorch's efficiency, scalability, and widespread adoption as the reasons for this move. The company states that it would primarily use PyTorch as its deep learning framework, while occasionally utilizing other frameworks when necessary. By this time, OpenAI's teams have already begun migrating their work to PyTorch and plan to contribute to the PyTorch community in the coming months. They also express intention to release their educational resource, Spinning Up in Deep RL, on PyTorch and explore scaling AI systems, model interpretability, and building robotics frameworks. PyTorch is an open-source machine learning library based on Torch and incorporates [[w:Caffe (software)|Caffe2]], a deep learning toolset developed by Facebook's AI Research lab.<ref>{{cite web |title=OpenAI sets PyTorch as its new standard deep learning framework |url=https://jaxenter.com/OpenAI-pytorch-deep-learning-framework-167641.html |website=jaxenter.com |accessdate=23 February 2020}}</ref><ref>{{cite web |title=OpenAI goes all-in on Facebook’s Pytorch machine learning framework |url=https://venturebeat.com/2020/01/30/OpenAI-facebook-pytorch-google-tensorflow/ |website=venturebeat.com |accessdate=23 February 2020}}</ref>
|-
| 2020 || {{dts|February 5}} || Safety || Publication || Beth Barnes and Paul Christiano publish on <code>lesswrong.com</code> ''Writeup: Progress on AI Safety via Debate'', a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.<ref>{{cite web |title=Writeup: Progress on AI Safety via Debate |url=https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1#Things_we_did_in_Q3 |website=lesswrong.com |accessdate=16 May 2020}}</ref>
|-
| 2020 || {{dts|February 17}} || {{w|Ethics of artificial intelligence}} || Coverage || AI reporter Karen Hao at ''MIT Technology Review'' publishes a review of OpenAI titled ''The messy, secretive reality behind OpenAI’s bid to save the world'', which suggests the company is abandoning its stated commitment to transparency in order to outpace competitors.<ref>{{cite web|url = https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/|title = The messy, secretive reality behind OpenAI’s bid to save the world. The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.|last = Hao|first = Karen|publisher = Technology Review}}</ref> In response, {{w|Elon Musk}} criticizes OpenAI, saying it lacks transparency.<ref name="Aaron">{{cite web |last1=Holmes |first1=Aaron |title=Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high' |url=https://www.businessinsider.com/elon-musk-criticizes-openai-dario-amodei-artificial-intelligence-safety-2020-2 |website=businessinsider.com |accessdate=29 February 2020}}</ref> On his {{w|Twitter}} account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.<ref>{{cite web |title=Elon Musk |url=https://twitter.com/elonmusk/status/1229546206948462597 |website=twitter.com |accessdate=29 February 2020}}</ref>
|-
| 2020 || {{dts|May 28}} (release), June and July (discussion and exploration) || {{w|Natural-language generation}} || Product release || OpenAI releases the natural language model {{w|GPT-3}} on {{w|GitHub}}<ref>{{cite web|url = http://github.com/openai/gpt-3|title = GPT-3 on GitHub|publisher = OpenAI|accessdate = July 19, 2020}}</ref> and uploads to the {{w|arXiv}} the paper ''Language Models are Few-Shot Learners'', explaining how GPT-3 was trained and how it performs.<ref>{{cite web|url = https://arxiv.org/abs/2005.14165|title = Language Models are Few-Shot Learners|date = May 28, 2020|accessdate = July 19, 2020}}</ref> Games, websites, and chatbots based on GPT-3 are created for exploratory purposes over the next two months (mostly by people unaffiliated with OpenAI), with a general takeaway that GPT-3 performs significantly better than GPT-2 and past natural language models.<ref>{{cite web|url = https://twitter.com/nicklovescode/status/1283300424418619393|title = Nick Cammarata on Twitter: GPT-3 as therapist|date = July 14, 2020|accessdate = July 19, 2020}}</ref><ref>{{cite web|url = https://www.gwern.net/GPT-3|title = GPT-3 Creative Fiction|date = June 19, 2020|accessdate = July 19, 2020|author = Gwern}}</ref><ref>{{cite web|url = https://medium.com/@aidungeon/ai-dungeon-dragon-model-upgrade-7e8ea579abfe|title = AI Dungeon: Dragon Model Upgrade. You can now play AI Dungeon with one of the most powerful AI models in the world.|last = Walton|first = Nick|date = July 14, 2020|accessdate = July 19, 2020}}</ref><ref>{{cite web|url = https://twitter.com/sharifshameem/status/1282676454690451457|title = Sharif Shameem on Twitter: With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.|date = July 13, 2020|accessdate = July 19, 2020|publisher = Twitter|last = Shameem|first = Sharif}}</ref> Commentators also note many weaknesses, such as trouble with arithmetic because of incorrect pattern matching, trouble with multi-step logical reasoning even though it can do the individual steps separately, inability to identify that a question is nonsense, inability to identify that it does not know the answer to a question, and picking up of racist and sexist content when trained on corpuses that contain some such content.<ref>{{cite web|url = http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html|title = Giving GPT-3 a Turing Test|last = Lacker|first = Kevin|date = July 6, 2020|accessdate = July 19, 2020}}</ref><ref>{{cite web|url = https://minimaxir.com/2020/07/gpt3-expectations/|title = Tempering Expectations for GPT-3 and OpenAI’s API|date = July 18, 2020|accessdate = July 19, 2020|last = Woolf|first = Max}}</ref><ref>{{cite web|url = https://delian.substack.com/p/quick-thoughts-on-gpt3|title = Quick thoughts on GPT3|date = July 17, 2020|accessdate = July 19, 2020|last = Asparouhov|first = Delian}}</ref>
|-
| 2020 || {{dts|June 11}} || {{w|Generative model}} || Product release || OpenAI announces the release of an {{w|API}} for accessing new AI models that can be used for virtually any English language task. The API provides a general-purpose "text in, text out" interface, which can be integrated into products or used to develop new applications. Users can program the AI by showing it a few examples of what is required, and hone its performance by training it on small or large {{w|data set}}s or through learning from human feedback. The API is designed to be both simple to use and flexible, with many speed and throughput improvements. While the API launches as a private beta, OpenAI intends to share what it learns in order to build more human-positive AI systems.<ref>{{cite web |title=OpenAI API |url=https://openai.com/blog/openai-api |website=openai.com |access-date=26 March 2023}}</ref>
|-
| 2020 || June 28 || Scaling powers GPT-3's AI leap || Coverage || Gwern Branwen's essay ''The Scaling Hypothesis'' discusses the significance of GPT-3 as a landmark in AI development. GPT-3's unprecedented size exceeded expectations by continuing to improve with scale, exhibiting "meta-learning" abilities, such as following instructions and adapting to new tasks with minimal examples, which GPT-2 lacked. The essay argues that GPT-3's success supports the "scaling hypothesis", the idea that increasing {{w|neural network}} size and {{w|computational power}} is key to achieving general intelligence. Gwern suggests this development may lead to significant advancements in AI, challenging existing paradigms and accelerating progress in {{w|unsupervised learning}} systems.<ref>{{cite web |title=The Scaling Hypothesis |url=https://gwern.net/scaling-hypothesis |website=Gwern.net |accessdate=29 September 2024}}</ref> Gwern also discusses differing views on the scaling hypothesis among AI organizations. According to him, {{w|Google Brain}} and {{w|DeepMind}} are skeptical about scaling, focusing on practical applications and the need for incremental refinement, particularly in replicating human brain modules; DeepMind emphasizes neuroscience to guide its development, believing progress requires time and investment in fine-tuning. In contrast, OpenAI supports the scaling hypothesis, betting on simple reinforcement learning algorithms combined with large architectures to achieve powerful capabilities. The author observes that while competitors have the resources, they often lack the conviction to adopt OpenAI's approach, preferring to critique it instead.<ref>{{cite web |title=Are We in an AI Overhang? |url=https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang#jbD8siv7GMWxRro43 |website=LessWrong |accessdate=29 September 2024}}</ref>
|-
| 2020 || September 22 || {{w|Natural language generation}}, {{w|language model}} || Partnership || {{w|Microsoft}} announces a partnership with OpenAI to exclusively license their {{w|GPT-3}} language model, the largest and most advanced language model in the world by this time. This allows Microsoft to leverage its technical innovations to develop and deliver advanced AI solutions for its customers, as well as create new solutions that harness the power of natural language generation. Microsoft sees this as an opportunity to expand its [[W:Microsoft Azure|Azure]]-powered AI platform in a way that democratizes AI technology and enables new products, services, and experiences. OpenAI would continue to offer GPT-3 and other models via its own Azure-hosted {{w|API}}.<ref>{{cite web |title=Microsoft teams up with OpenAI to exclusively license GPT-3 language model |url=https://blogs.microsoft.com/blog/2020/09/22/microsoft-teams-up-with-openai-to-exclusively-license-gpt-3-language-model/ |website=The Official Microsoft Blog |access-date=24 March 2023 |date=22 September 2020}}</ref>
|-
| 2020 || December 29 || {{w|Anthropic}} || Team || A number of team members, including {{w|Paul Christiano}}<ref>{{cite web |last1=Branwen |first1=Gwern |title=January 2021 News |url=https://gwern.net/newsletter/2021/01#ai |website=gwern.net |access-date=24 August 2023 |language=en-us |date=2 January 2020}}</ref> and {{w|Dario Amodei}}<ref>{{cite web |last1=Kokotajlo |first1=Daniel |title=Dario Amodei leaves OpenAI |url=https://www.lesswrong.com/posts/7r8KjgqeHaYDzJvzF/dario-amodei-leaves-openai |website=lesswrong.com |access-date=24 August 2023 |language=en}}</ref>, depart from OpenAI. The latter departs to found {{w|Anthropic}}, an artificial intelligence startup and public-benefit corporation. After dedicating four and a half years to the organization, his departure is announced in an update. OpenAI's CEO, Sam Altman, mentions the possibility of continued collaboration with Amodei and his co-founders in their new project.<ref>{{cite web |last1=Kokotajlo |first1=Daniel |title=Dario Amodei leaves OpenAI |url=https://www.lesswrong.com/posts/7r8KjgqeHaYDzJvzF/dario-amodei-leaves-openai |website=lesswrong.com |access-date=9 June 2023 |language=en}}</ref><ref>{{cite web |title=Organizational update from OpenAI |url=https://openai.com/blog/organizational-update |website=openai.com |access-date=9 June 2023}}</ref> Other departures from OpenAI include Sam McCandlish<ref>{{cite web |title=Sam McCandlish |url=https://www.linkedin.com/in/sam-mccandlish/ |website=linkedin.com |access-date=24 August 2023}}</ref>, Tom Brown<ref>{{cite web |title=Tom Brown |url=https://www.linkedin.com/in/nottombrown/ |website=linkedin.com |access-date=24 August 2023}}</ref>, Tom Henighan<ref>{{cite web |title=Tom Henighan |url=https://www.linkedin.com/in/tom-henighan-757498123/ |website=linkedin.com |access-date=24 August 2023}}</ref>, Chris Olah<ref>{{cite web |title=Christopher Olah |url=https://www.linkedin.com/in/christopher-olah-b574414a/ |website=linkedin.com |access-date=24 August 2023}}</ref>, Jack Clark<ref>{{cite web |title=Jack Clark |url=https://www.linkedin.com/in/jack-clark-5a320317/ |website=linkedin.com |access-date=24 August 2023}}</ref>, and Benjamin Mann<ref>{{cite web |title=Benjamin Mann |url=https://www.linkedin.com/in/benjamin-mann/ |website=linkedin.com |access-date=24 August 2023}}</ref>, all of whom join Anthropic.
|-
| 2021 || January || {{w|Anthropic}} || Competition || {{w|Anthropic}} is founded as a U.S.-based AI startup and public-benefit corporation. It is established by former OpenAI members, including Daniela Amodei and Dario Amodei.<ref>{{Cite web |title=ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO |url=https://finance.yahoo.com/news/artificial-intelligence-must-regulated-warns-204546351.html |access-date=2023-08-24 |website=finance.yahoo.com |date=5 February 2023 |language=en-US}}</ref><ref>{{Cite web |last=Vincent |first=James |date=2023-02-03 |title=Google invested $300 million in AI firm founded by former OpenAI researchers |url=https://www.theverge.com/2023/2/3/23584540/google-anthropic-investment-300-million-openai-chatgpt-rival-claude |access-date=2023-08-24 |website=The Verge}}</ref> They specialize in developing responsible AI systems and language models.<ref>{{Cite web |last=O'Reilly |first=Mathilde |date=2023-06-30 |title=Anthropic releases paper revealing the bias of large language models |url=https://dailyai.com/2023/06/anthropic-releases-paper-highlighting-the-potential-bias-of-large-language-models/ |access-date=2023-08-24 |website=dailyai.com}}</ref> The company would gain attention for its founders' departure from OpenAI over directional differences.<ref>{{Cite web |date=2023-04-07 |title=As Anthropic seeks billions to take on OpenAI, 'industrial capture' is nigh. Or is it? |url=https://venturebeat.com/ai/as-anthropic-seeks-billions-to-take-on-openai-industrial-capture-is-nigh-or-is-it/ |access-date=2023-08-24 |website=VentureBeat}}</ref> They secure substantial investments, with Google's cloud division and Alameda Research contributing $300 million and $500 million, respectively.<ref>{{cite web |last1=Waters |first1=Richard |last2=Shubber |first2=Kadhim |title=Google invests $300mn in artificial intelligence start-up Anthropic |url=https://www.ft.com/content/583ead66-467c-4bd5-84d0-ed5df7b5bf9c |website=Financial Times |date=3 February 2023}}</ref><ref>{{Cite web |date=2023-02-03 |title=Google invests $300 million in Anthropic as race to compete with ChatGPT heats up |url=https://venturebeat.com/ai/google-invests-300-million-in-anthropic-as-race-to-compete-with-chatgpt-heats-up/ |access-date=2023-08-24 |website=VentureBeat}}</ref> Anthropic's projects include Claude, an AI chatbot emphasizing safety and ethical principles, and research on machine learning system interpretability, particularly concerning transformer architecture.<ref>{{Cite news |title=ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO |url=https://fortune.com/2023/02/05/artificial-intelligence-must-be-regulated-chatgpt-openai-cto-mira-murati/ |access-date=2023-08-24 |website=Fortune |language=en}}</ref><ref>{{cite web |last1=Milmo |first1=Dan |title=Claude 2: ChatGPT rival launches chatbot that can summarise a novel |url=https://www.theguardian.com/technology/2023/jul/12/claude-2-anthropic-launches-chatbot-rival-chatgpt |website=The Guardian |access-date=24 August 2023}}</ref>
|-
| 2021 || January 5 || {{w|Neural network}}s || Product release || OpenAI introduces CLIP (Contrastive Language-Image Pre-training), a {{w|neural network}} that learns visual concepts from natural language supervision and can be applied to any visual classification benchmark. CLIP is trained on a variety of images with natural language supervision available on the internet and can be instructed in natural language to perform a variety of classification tasks without directly optimizing for benchmark performance. This approach improves the model's robustness and can match the performance of traditional models on benchmarks without using any labeled examples. CLIP's performance is more representative of how it will fare on datasets that measure accuracy in different settings. CLIP builds on previous work on zero-shot transfer, natural language supervision, and multimodal learning and uses a simple pre-training task to achieve competitive zero-shot performance on a variety of image classification datasets.<ref>{{cite web |title=CLIP: Connecting text and images |url=https://openai.com/research/clip |website=openai.com |access-date=24 March 2023}}</ref>
|-
| 2021 || January 5 || {{w|Generative model}} || Product release || OpenAI introduces {{w|DALL-E}} as a {{w|neural network}} that can generate images from text captions. It has a diverse set of capabilities including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images. DALL-E is a 12-billion parameter version of GPT-3 that is trained using a dataset of text-image pairs. It can generate images from scratch and regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt. It can also modify several attributes of an object and the number of times it appears. However, controlling multiple objects and their attributes simultaneously presents a challenge at this time.<ref>{{cite web |title=DALL·E: Creating images from text |url=https://openai.com/research/dall-e |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2021 || January 22 || Jan Leike || Team || {{w|Machine learning}} researcher Jan Leike announces that he has joined OpenAI and will lead the alignment effort within the organization.<ref>{{cite web |title=Jan Leike |url=https://twitter.com/janleike/status/1352681093007200256 |website=Twitter |access-date=23 June 2023 |language=en}}</ref>
|-
| 2021 || August 10 || {{w|Natural language processing}}, {{w|code generation}} || Product release || OpenAI releases an improved version of its AI system, {{w|OpenAI Codex}}, which translates {{w|natural language}} to {{w|code}}, through its {{w|API}} in {{w|private beta}}. Codex is proficient in more than a dozen programming languages, including [[w:Python (programming language)|Python]], {{w|JavaScript}}, [[w:Go (programming language)|Go]], and [[w:Ruby (programming language)|Ruby]], and can interpret simple commands in natural language and execute them. The system has a memory of 14KB for Python code and can be applied to essentially any programming task.<ref>{{cite web |title=OpenAI Codex |url=https://openai.com/blog/openai-codex |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2021 || September 1 || {{w|Chatbot}} || Product withdrawal || OpenAI shuts down a customizable chatbot called Samantha, developed by indie game developer {{w|Jason Rohrer}}. Samantha gained attention when one user fine-tuned it to resemble his deceased fiancée. OpenAI expresses concerns about potential misuse, leading Rohrer to choose to terminate the project. He criticizes OpenAI for imposing restrictions on {{w|GPT-3}}'s usage, hindering developers' exploration of its capabilities. The incident raises questions about the boundaries of AI technology and the balance between innovation and responsible usage.<ref>{{cite web |title=OpenAI Shuts Down GPT-3 Bot Used To Emulate Dead Fiancée |url=https://futurism.com/openai-dead-fiancee |website=Futurism |access-date=31 May 2023}}</ref>
|-
| 2021 || September 23 || {{w|Natural language processing}} || Product release || OpenAI develops an AI model that can summarize books of any length. The model, a fine-tuned version of GPT-3, uses a technique called "recursive task decomposition" to first summarize small sections of a book and then summarize those summaries into higher-level summaries. This approach allows for the efficient evaluation of the model's summaries and enables the summarization of books ranging from tens to thousands of pages. OpenAI expresses the belief that this method can be applied to supervise other tasks as well and addresses the challenge of aligning AI systems with human preferences. While other companies like {{w|Google}}, {{w|Microsoft}}, and {{w|Facebook}} have also explored AI-powered summarization methods, OpenAI's model builds upon its previous research on reinforcement learning from human feedback to improve the alignment of summaries with people's preferences.<ref>{{cite web |title=OpenAI unveils model that can summarize books of any length |url=https://venturebeat.com/business/openai-unveils-model-that-can-summarize-books-of-any-length/ |website=VentureBeat |access-date=31 May 2023 |date=23 September 2021}}</ref>
|-
| 2021 || November 15 || {{w|Natural language processing}} || Competition || OpenAI startup competitor Cohere launches its {{w|language model}} {{w|API}} for app and service development. The company offers fine-tuned models for various natural language applications at a lower cost compared to its rivals. Cohere provides both generation and representation models in English, catering to different tasks such as text generation and language understanding. The models are available in different sizes and can be used in industries such as finance, law, and healthcare. Cohere charges customers on a per-character basis, making its technology more affordable and accessible.<ref>{{cite web |title=OpenAI rival Cohere launches language model API |url=https://venturebeat.com/uncategorized/openai-rival-cohere-launches-language-model-api/ |website=VentureBeat |access-date=31 May 2023 |date=15 November 2021}}</ref>
|-
| 2021 || December 14 || {{w|Natural language processing}} || Product update || OpenAI begins allowing customers to fine-tune their {{w|GPT-3}} {{w|language model}}, enabling them to create customized versions tailored to specific content. This {{w|fine-tuning}} capability offers higher-quality outputs for tasks such as content generation and text summarization. It is accessible to developers without a machine learning background and can lead to cost savings by producing more frequent and higher-quality results. OpenAI conducted experiments showing significant improvements in accuracy through fine-tuning. This announcement follows previous efforts to enhance user experience and provide more reliable models, including the launch of question-answering endpoints and the implementation of content filters.<ref>{{cite web |title=OpenAI begins allowing customers to fine-tune GPT-3 |url=https://venturebeat.com/uncategorized/openai-begins-allowing-customers-to-fine-tune-gpt-3/ |website=VentureBeat |access-date=30 May 2023 |date=14 December 2021}}</ref>
|-
| 2022 || January 27 || {{w|Natural language processing}} || Product update || OpenAI introduces embeddings in its {{w|API}}, which allow users to leverage {{w|semantic search}}, [[w:Cluster analysis|clustering]], [[w:Topic model|topic modeling]], and classification. These [[w:Word embedding|embeddings]] demonstrate superior performance compared to other models, particularly in code search. They are valuable for working with natural language and code, as numerically similar embeddings indicate semantic similarity. OpenAI's embeddings are generated by {{w|neural network}} models that map text and code inputs to vector representations in a high-dimensional space. These representations capture specific aspects of the {{w|input}} data. OpenAI offers three families of {{w|embedding}} models: text similarity, text search, and code search. Text similarity models capture semantic similarity for tasks like clustering and classification. Text search models enable large-scale search tasks by comparing [[w:Query language|query]] embeddings with document embeddings. Code search models provide embeddings for code and text, facilitating code search based on natural language queries. By leveraging the semantic meaning and context captured in these vectors, users can perform such tasks with improved accuracy and efficiency.<ref>{{cite web |last1=Ramnani |first1=Meeta |title=Now, OpenAI API have text and code embeddings |url=https://analyticsindiamag.com/now-openai-api-have-text-and-code-embeddings/ |website=Analytics India Magazine |access-date=30 May 2023 |date=27 January 2022}}</ref>
|-
| 2022 || February 25 || {{w|Natural language processing}} || Product update || OpenAI introduces InstructGPT as an improved version of its previous language model, GPT-3. InstructGPT aims to address concerns about toxic language and misinformation by better following instructions and aligning with human intention. Its fine-tuning uses reinforcement learning from human feedback (RLHF). Compared to GPT-3, InstructGPT demonstrates better adherence to instructions, reduced generation of misinformation, and slightly lower toxicity. However, its improved instruction-following capability also carries risks, as malicious users could exploit it for harmful purposes. OpenAI considers InstructGPT a step towards solving the AI alignment problem, where AI systems understand and align with human values. InstructGPT becomes the default language model on the OpenAI API.<ref>{{cite web |title=OpenAI Introduces InstructGPT Language Model to Follow Human Instructions |url=https://www.infoq.com/news/2022/02/openai-instructgpt/ |website=InfoQ |access-date=30 May 2023 |language=en}}</ref>
|-
| 2022 || March 21 || {{w|Large language model}}s, {{w|AI safety}} || Competition || {{w|IEEE Spectrum}} publishes an article on EleutherAI, a group of computer scientists who developed a powerful AI system called GPT-NeoX-20B. This system, which rivals OpenAI's GPT-3, is a 20-billion-parameter, pretrained, general-purpose language model. EleutherAI aims to make large language models accessible to researchers and promotes AI safety. While OpenAI's model is larger at 175 billion parameters, EleutherAI's model is the largest that is freely and publicly available. The article highlights the challenges of training {{w|large language model}}s, such as the need for significant computing power. EleutherAI emphasizes the importance of understanding and controlling AI systems to ensure their safe use. The article also mentions OpenAI's approach of leveraging computation to achieve progress in AI. Overall, EleutherAI's efforts demonstrate that small, unorthodox groups can build and use potentially powerful AI models.<ref>{{cite web |title=EleutherAI: When OpenAI Isn’t Open Enough - IEEE Spectrum |url=https://spectrum.ieee.org/eleutherai-openai-not-open-enough |website=spectrum.ieee.org |access-date=30 May 2023 |language=en}}</ref>
|-
| 2022 || March 21 || {{w|Natural language processing}} || Product update || OpenAI releases new versions of {{w|GPT-3}} and Codex that allow for editing and inserting content into existing text. This update enables users to modify and enhance text by editing what's already present or adding new text in the middle. The insert feature is particularly useful in software development, allowing code to be added within an existing file while maintaining context and connection to the surrounding code. The feature is tested in {{w|GitHub Copilot}} with positive early results. Additionally, OpenAI introduces the edits endpoint, which enables specific changes to existing text based on instructions, such as altering tone, structure, or making spelling corrections. These updates expand the capabilities of OpenAI's language models and offer new possibilities for text processing tasks.<ref>{{cite web |last1=Shenwai |first1=Tanushree |title=OpenAI Releases New Version of GPT-3 and Codex That Can Edit or Insert Content Into Existing Text |url=https://www.marktechpost.com/2022/03/21/openai-releases-new-version-of-gpt-3-and-codex-that-can-edit-or-insert-content-into-existing-text%EF%BF%BC/ |website=MarkTechPost |access-date=30 May 2023 |date=21 March 2022}}</ref>
|-
| 2022 || April 6 || {{w|Generative model}} || Product update || OpenAI announces DALL-E 2, an enhanced version of its text-to-image generation program. DALL-E 2 offers higher resolution, lower latency, and new capabilities such as editing existing images. It builds on the CLIP computer vision system introduced by OpenAI and incorporates the "unCLIP" process, which starts with a description and generates an image. The new version uses diffusion to create images with increasing detail. Safeguards prevent the generation of objectionable content, and test users face restrictions on image generation and sharing. OpenAI reports aiming to release DALL-E 2 safely based on user feedback.<ref>{{cite web |last1=Robertson |first1=Adi |title=OpenAI’s DALL-E AI image generator can now edit pictures, too |url=https://www.theverge.com/2022/4/6/23012123/openai-clip-dalle-2-ai-text-to-image-generator-testing |website=The Verge |access-date=30 May 2023 |date=6 April 2022}}</ref><ref>{{cite web |last1=Coldewey |first1=Devin |title=New OpenAI tool draws anything, bigger and better than ever |url=https://techcrunch.com/2022/04/06/openais-new-dall-e-model-draws-anything-but-bigger-better-and-faster-than-before/ |website=TechCrunch |access-date=30 May 2023 |date=6 April 2022}}</ref><ref>{{cite web |last1=Papadopoulos |first1=Loukia |title=OpenAI’s new AI system DALL-E 2 can create mesmerizing images from text |url=https://interestingengineering.com/innovation/openai-ai-dall-e-2-images-text |website=interestingengineering.com |access-date=30 May 2023 |date=10 April 2022}}</ref>
|-
| 2022 || May 31 || {{w|Natural language processing}} || Integration || Microsoft announces the integration of OpenAI's artificial intelligence models, including GPT-3 and Codex, into its [[w:Microsoft Azure|Azure]] cloud platform. These tools enable developers to leverage AI capabilities for tasks such as summarizing customer sentiment, generating unique content, and extracting information from medical records. Microsoft emphasizes the importance of responsible AI use and human oversight to ensure accurate and appropriate model outputs. While AI systems like GPT-3 can generate human-like text, they lack a deep understanding of context and require human review to ensure quality.<ref>{{cite web |last1=Kolakowski |first1=Nick |title=Microsoft's Azure Now Features OpenAI Tools for Developers |url=https://www.dice.com/career-advice/microsofts-azure-now-features-openai-tools-for-developers |website=Dice Insights |access-date=30 May 2023 |language=en |date=31 May 2022}}</ref>
|-
| 2022 || June 27 || {{w|Imitation learning}} || Product release || OpenAI introduces Video PreTraining (VPT), a semi-supervised imitation learning technique that utilizes unlabeled video data from the internet. By training an {{w|inverse dynamics}} model (IDM) to predict actions in videos, VPT enables the labeling of larger datasets through behavioral cloning. The researchers validate VPT using {{w|Minecraft}}, where the trained model successfully completed challenging tasks and even crafted a diamond pickaxe, typically a time-consuming activity for human players. Compared to traditional {{w|reinforcement learning}}, VPT shows promise in simulating human behavior and learning complex tasks. This approach has the potential to enable agents to learn from online videos and acquire behavioral priors beyond just language.<ref>{{cite web |last1=Shenwai |first1=Tanushree |title=OpenAI Introduces Video PreTraining (VPT), A Novel Semi-Supervised Imitation Learning Technique |url=https://www.marktechpost.com/2022/06/27/openai-introduces-video-pretraining-vpt-a-novel-semi-supervised-imitation-learning-technique/ |website=MarkTechPost |access-date=30 May 2023 |date=27 June 2022}}</ref>
|-
| 2022 || July 14 || {{w|Generative model}}s || Product update || OpenAI reports that DALL·E 2 has been incorporated into the creative workflows of over 3,000 artists in more than 118 countries. By this time, DALL·E 2 has been used by a wide range of creative professionals, including illustrators, chefs, sound designers, dancers, and tattoo artists, among others. Examples of how DALL·E is used include creating personalized cartoons, designing menus and plate dishes, transforming 2D artwork into 3D renders for AR filters, and much more. An exhibition at the Leopold Museum of works by some of the artists using DALL·E is announced.<ref>{{cite web |title=DALL·E 2: Extending creativity |url=https://openai.com/blog/dall-e-2-extending-creativity |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || July 18 || {{w|Generative model}}s || Product update || OpenAI announces the implementation of a new technique to reduce bias in its {{w|DALL-E}} image generator, specifically for generating images of people that more accurately reflect the diversity of the world's population. The technique is applied at the system level when a prompt describing a person does not specify race or gender. The mitigation is informed by early user feedback during a preview phase, and other steps are taken to improve safety systems, including content filters and monitoring systems. These improvements allow OpenAI to gain confidence in expanding access to DALL-E.<ref>{{cite web |title=Reducing bias and improving safety in DALL·E 2 |url=https://openai.com/blog/reducing-bias-and-improving-safety-in-dall-e-2 |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || August 10 || {{w|Content moderation}} || Product release || OpenAI introduces a new and improved content moderation tool, the Moderation endpoint, which is free for OpenAI API developers to use. This endpoint uses GPT-based classifiers to detect prohibited content such as self-harm, hate, violence, and sexual content. The tool is designed to be accurate, quick, and robust across various applications. By using the Moderation endpoint, developers can access accurate classifiers through a single API call rather than building and maintaining their own classifiers. OpenAI hopes this tool will make the AI ecosystem safer and spur further research in this area.<ref>{{cite web |title=New and improved content moderation tooling |url=https://openai.com/blog/new-and-improved-content-moderation-tooling |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || August 24 || {{w|AI alignment}} || Research || OpenAI publishes a blog post explaining its approach to [[w:AI alignment|alignment research]], which aims to make {{w|artificial general intelligence}} (AGI) aligned with human values and intentions. The organization takes an iterative, empirical approach, attempting to align highly capable AI systems in order to learn what works and what doesn't. OpenAI states that it is committed to sharing its alignment research when it is safe to do so, to ensure that every AGI developer uses the best alignment techniques, and that it aims to build and align a system that can make faster and better alignment research progress than humans can. Language models are particularly well-suited for automating alignment research because they come "preloaded" with a lot of knowledge and information about human values. However, the approach has acknowledged limitations and needs to be adapted and improved as AI technology develops.<ref>{{cite web |title=Our approach to alignment research |url=https://openai.com/blog/our-approach-to-alignment-research |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || August 31 || {{w|Generative model}}s || Product update || OpenAI introduces Outpainting, a new feature for DALL-E that allows users to extend the original image beyond its borders by adding visual elements or taking the story in new directions using a natural language description. This new feature can create large-scale images in any aspect ratio and takes into account the existing visual elements to maintain the context of the original image. The new feature is available for all DALL·E users on desktop.<ref>{{cite web |title=DALL·E: Introducing outpainting |url=https://openai.com/blog/dall-e-introducing-outpainting |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || September 28 || {{w|Generative model}}s || Product release  || OpenAI announces that the waitlist for its {{w|DALL-E}} beta has been removed, and new users can start creating immediately. By this time, over 1.5 million users actively create over 2 million images per day with DALL-E, with more than 100,000 users sharing their creations and feedback in the {{w|Discord}} community. The iterative deployment approach has allowed OpenAI to scale DALL-E responsibly while discovering new uses for the tool. User feedback has inspired the development of new features such as Outpainting and collections.<ref>{{cite web |title=DALL·E now available without waitlist |url=https://openai.com/blog/dall-e-now-available-without-waitlist |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || October 25 || AI-generated content, creative workflows || Partnership || Global technology company Shutterstock announces its partnership with OpenAI to bring AI-generated content capabilities to its platform. The collaboration would allow Shutterstock customers to generate images instantly based on their criteria, enhancing their creative workflows. Additionally, Shutterstock has launched a fund to compensate contributing artists for their role in developing AI models. The company aims to establish an ethical and inclusive framework for AI-generated content and is actively involved in initiatives promoting inclusivity and protecting intellectual property rights.<ref>{{cite web |title=Shutterstock Partners with OpenAI and Leads the Way to Bring AI-Generated Content to All - Press and Media - Shutterstock |url=https://www.shutterstock.com/press/20435 |website=www.shutterstock.com |access-date=14 June 2023}}</ref>
|-
| 2022 || November 3 || {{w|Generative model}}s || Product release || OpenAI announces the public beta release of its {{w|DALL-E}} {{w|API}}, which allows developers to integrate image generation capabilities of DALL·E into their applications and products. DALL·E's flexibility enables users to create and edit original images ranging from the artistic to the photorealistic, and its built-in moderation ensures responsible deployment. Several companies, including Microsoft and Mixtiles, have already integrated DALL·E into their products by this time. The DALL·E API joins OpenAI's other powerful models, GPT-3, Embeddings, and Codex, on its API platform.<ref>{{cite web |title=DALL·E API now available in public beta |url=https://openai.com/blog/dall-e-api-now-available-in-public-beta |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || November 30 || Conversational AI || Product release || OpenAI introduces conversational model {{w|ChatGPT}}, which can interact with users in a dialogue format. ChatGPT is designed to answer follow-up questions, acknowledge mistakes, challenge incorrect assumptions, and reject inappropriate requests. It is a sibling model to InstructGPT, which focuses on providing detailed responses to instructions. ChatGPT is launched to gather user feedback and gain insights into its capabilities and limitations.<ref>{{cite web |title=Introducing ChatGPT |url=https://openai.com/blog/chatgpt |website=openai.com |access-date=22 June 2023}}</ref> By January 2023, ChatGPT would become the fastest-growing consumer software application in history, gaining over 100 million users and contributing to OpenAI's valuation growing to US$29 billion.<ref>{{Cite news |last=Hu |first=Krystal |date=2023-02-02 |title=ChatGPT sets record for fastest-growing user base – analyst note |url=https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ |access-date=2023-06-03 |archive-date=February 3, 2023 |archive-url=https://web.archive.org/web/20230203182723/https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/}}</ref><ref>{{cite news |url=https://www.businessinsider.com/chatgpt-creator-openai-talks-for-tender-offer-at-29-billion-2023-1 |title=ChatGPT creator OpenAI is in talks to sell shares in a tender offer that would double the startup's valuation to $29 billion |access-date=January 18, 2023 |work=Insider |first=Lakshmi |last=Varanasi |date=January 5, 2023 |archive-date=January 18, 2023 |archive-url=https://web.archive.org/web/20230118050502/https://www.businessinsider.com/chatgpt-creator-openai-talks-for-tender-offer-at-29-billion-2023-1}}</ref>
|-
| 2022 || December 8 || {{w|Supercomputing}} || Interview || OpenAI publishes an interview with Christian Gibson, an engineer on the {{w|supercomputing}} team at the company. He explains his journey into engineering and how he got into OpenAI. He also speaks about the problems he is focused on solving, such as the complexity of exploratory AI workflows and bottlenecks when running code on supercomputers. He talks about what makes working on supercomputing at OpenAI different from other places, such as the sheer scale of the operation, and his typical day at OpenAI.<ref>{{cite web |title=Discovering the minutiae of backend systems |url=https://openai.com/blog/discovering-the-minutiae-of-backend-systems |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2022 || December 15 || {{w|Word embedding}} || Product release || OpenAI announces a new text-embedding-ada-002 model that replaces five separate models for text search, text similarity, and code search. This new model outperforms its previous most capable model, Davinci, at most tasks, while being priced 99.8% lower. The new model has stronger performance, longer context, smaller {{w|embedding}} size, and reduced price. However, it does not outperform text-similarity-davinci-001 on the SentEval linear probing classification benchmark. The model has already been adopted by Kalendar AI and Notion to improve sales outreach and search capabilities.<ref>{{cite web |title=New and improved embedding model |url=https://openai.com/blog/new-and-improved-embedding-model |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2023 || January 11 || Language model misuse || Research || OpenAI researchers collaborate with {{w|Georgetown University}} and the Stanford Internet Observatory to investigate how language models might be misused for disinformation campaigns. Their report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations. The report points out that language models could drive down the cost of running influence operations, place them within reach of new actors and actor types, and generate more impactful or persuasive messaging compared to propagandists. It also introduces the key stages in the language model-to-influence operation [[w:Pipeline (software)|pipeline]] and provides a set of guiding questions for policymakers and others to consider for mitigations.<ref>{{cite web |title=Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk |url=https://openai.com/research/forecasting-misuse |website=openai.com |access-date=14 March 2023}}</ref><ref>{{cite web |title=OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation |url=https://venturebeat.com/ai/openai-georgetown-stanford-study-finds-llms-can-boost-public-opinion-manipulation/ |website=VentureBeat |access-date=25 June 2023 |date=13 January 2023}}</ref><ref>{{cite web |last1=Siegel |first1=Daniel |title=Weapons of Mass Disruption: Artificial Intelligence and the Production of Extremist Propaganda |url=https://gnet-research.org/2023/02/17/weapons-of-mass-disruption-artificial-intelligence-and-the-production-of-extremist-propaganda/ |website=GNET |access-date=25 June 2023 |date=17 February 2023}}</ref>
|-
| 2023 || January 23 || AI research || Partnership || OpenAI and {{w|Microsoft}} extend their partnership with a multi-billion dollar investment to continue their research and development of AI that is safe, useful, and powerful. OpenAI remains a capped-profit company and is governed by the OpenAI {{w|non-profit}}. Microsoft would increase its investment in {{w|supercomputing}} systems powered by [[w:Microsoft Azure|Azure]] to accelerate independent research, and Azure would remain the exclusive [[w:Cloud computing|cloud provider]] for all OpenAI workloads. They also partner to deploy OpenAI's technology through their {{w|API}} and the Azure OpenAI Service, and to build and deploy safe AI systems. The two teams collaborate regularly to review and synthesize shared lessons and inform future research and best practices for use of powerful AI systems across the industry.<ref>{{cite web |title=OpenAI and Microsoft extend partnership |url=https://openai.com/blog/openai-and-microsoft-extend-partnership |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2023 || January 26 || Content generation || Partnership || American Internet media, news and entertainment company {{w|BuzzFeed}} partners with OpenAI and gains access to its artificial intelligence technology to generate content, particularly for personality quizzes based on user responses. The move aims to boost BuzzFeed's business and enhance its struggling growth. OpenAI's generative AI has garnered attention for its diverse applications. While AI is expected to replace some tasks and jobs, it is also seen as enhancing work quality and allowing skilled professionals to focus on tasks requiring human judgment.<ref>{{cite web |title=BuzzFeed to use OpenAI technology to create content |url=https://www.cbsnews.com/news/buzzfeed-chatgpt-openai-artificial-intelligence-personality-quiz/ |website=www.cbsnews.com |access-date=14 June 2023 |date=26 January 2023}}</ref>
|-
| 2023 || January 31 || {{w|Natural language processing}} || Product release || OpenAI launches a new classifier that can distinguish between text written by humans and text written by AI. While not fully reliable, it can inform mitigations for false claims that AI-generated text was written by a human. OpenAI makes this classifier publicly available for feedback and recommends using it as a complement to other methods of determining the source of a piece of text. The classifier has limitations and is very unreliable on short texts, but it can be useful for educators and researchers to identify AI-generated text. By this time, OpenAI is engaging with educators to learn about their experiences and welcomes feedback on the preliminary resource it has developed.<ref>{{cite web |title=New AI classifier for indicating AI-written text |url=https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2023 || February 1 || Conversational AI || Product release || OpenAI introduces ChatGPT Plus, a pilot subscription plan that provides faster response times, general access to ChatGPT during peak times, and priority access to new features and improvements. The subscription costs $20 per month and is accessible to customers worldwide. OpenAI continues to offer free access to ChatGPT, and the subscription revenue helps support the availability of free access to as many people as possible. The company intends to refine and expand the offering based on user feedback and needs, and is exploring options for lower-cost plans, business plans, and data packs to provide wider accessibility.<ref>{{cite web |title=Introducing ChatGPT Plus |url=https://openai.com/blog/chatgpt-plus |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2023 || February 23 || AI integration || Partnership || OpenAI partners with Boston-based {{w|Bain & Company}}, a global strategy consulting firm, to help integrate OpenAI's AI innovations into daily tasks for Bain's clients. The partnership aims to leverage OpenAI's advanced AI models and tools, including ChatGPT, to create tailored digital solutions and drive business value for Bain's clients. The alliance would soon attract interest from major corporations, with The {{w|Coca-Cola Company}} being the first client to engage with the OpenAI services provided by Bain.<ref>{{cite web |title=OpenAI selects Bain & Company for innovative partnership |url=https://www.consultancy.uk/news/33592/openai-selects-bain-company-for-innovative-partnership |website=www.consultancy.uk |access-date=14 June 2023 |language=en |date=23 February 2023}}</ref>
|-
| 2023 || February 24 || {{w|Artificial General Intelligence}} || Publication || OpenAI publishes a blog post discussing the potential benefits and risks of {{w|Artificial General Intelligence}} (AGI): AI systems that are generally smarter than humans. The authors state that AGI could increase abundance, aid scientific discoveries, and elevate humanity, but that it also comes with serious risks, such as misuse, accidents, and societal disruption. To ensure that AGI benefits all of humanity, the authors articulate the principles they care about most, such as maximizing the good, minimizing the bad, and empowering humanity. The authors suggest that a gradual transition to a world with AGI is better than a sudden one, as it allows people to understand what is happening and personally experience the benefits and downsides, and gives the economy time to adapt and regulation time to be put in place. The authors emphasize the importance of a tight feedback loop of rapid learning and careful iteration to successfully navigate AI deployment challenges, combat bias, and deal with job displacement. They believe that democratized access will lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.<ref>{{cite web |title=Planning for AGI and beyond |url=https://openai.com/blog/planning-for-agi-and-beyond |website=openai.com |access-date=16 March 2023}}</ref>
|-
| 2023 || March 9 || Generative AI || Partnership || {{w|Salesforce}} partners with OpenAI to develop Einstein GPT, a generative AI tool for customer relationship management (CRM). Einstein GPT enables Salesforce users to generate personalized emails for sales and customer service interactions using natural language prompts from their CRM. The tool integrates OpenAI's enterprise-grade ChatGPT technology and is currently in closed pilot. Additionally, Salesforce is integrating ChatGPT into its instant messaging platform, Slack. In parallel, Salesforce Ventures has launched a $250 million generative AI fund and has made investments in startups such as Anthropic, Cohere, Hearth.AI, and You.com. The fund aims to support startups that are transforming application software and employing responsible and trusted development processes.<ref>{{cite web |title=Salesforce, OpenAI Partner to Launch ChatGPT-like Tool for Enterprise |url=https://aibusiness.com/nlp/salesforce-openai-partner-to-launch-chatgpt-like-tool-for-enterprise |website=aibusiness.com |access-date=14 June 2023}}</ref>
|-
| 2023 || March 14 || {{w|GPT-4}} || Product release || OpenAI launches GPT-4, an advanced multimodal AI model capable of understanding both text and images. GPT-4 outperforms its predecessor, GPT-3.5, on professional and academic benchmarks and introduces a new API capability called "system" messages, allowing developers to steer the AI's interactions by providing specific directions. It is soon adopted by companies like Microsoft, Stripe, Duolingo, Morgan Stanley, and Khan Academy for various applications. Despite its improvements, GPT-4 still has limitations and may make errors in reasoning and generate false statements.<ref>{{cite web |last1=Wiggers |first1=Kyle |title=OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art |url=https://techcrunch.com/2023/03/14/openai-releases-gpt-4-ai-that-it-claims-is-state-of-the-art/ |website=TechCrunch |access-date=26 July 2023 |date=14 March 2023}}</ref> GPT-4 is only accessible to those who have access to ChatGPT Plus.<ref>{{cite web |title=ChatGPT Plus too pricey? 7 websites that let you access GPT-4 for free |url=https://indianexpress.com/article/technology/artificial-intelligence/chatgpt-plus-too-pricey-5-websites-that-let-you-access-gpt-4-for-free-8593203/ |website=The Indian Express |access-date=28 July 2023 |language=en |date=5 May 2023}}</ref>
|-
| 2023 || March 15 || {{w|GPT-4}} || Testing || OpenAI conducts safety testing on its GPT-4 AI model, assessing risks like power-seeking behavior, self-replication, and self-improvement. The Alignment Research Center (ARC), an AI testing group, evaluates GPT-4 for potential issues. Although GPT-4 is found ineffective at autonomous replication, these tests raise concerns about AI safety. Some experts worry about AI takeover scenarios where AI systems gain the ability to control or manipulate human behavior and resources, posing existential risks. The AI community is divided on prioritizing AI safety concerns like self-replication over immediate issues like model bias. Companies continue to develop more powerful AI models amid regulatory uncertainties.<ref>{{cite web |last1=Edwards |first1=Benj |title=OpenAI checked to see whether GPT-4 could take over the world |url=https://arstechnica.com/information-technology/2023/03/openai-checked-to-see-whether-gpt-4-could-take-over-the-world/ |website=Ars Technica |access-date=25 August 2023 |language=en-us |date=15 March 2023}}</ref>
|-
| 2023 || March 16 || Sam Altman discusses AI's impact and dangers || Interview || In an interview with ABC News' Rebecca Jarvis, {{w|Sam Altman}} says that AI technology will reshape society as we know it, but that it comes with real dangers. He also says that feedback will help deter the potential negative consequences that the technology could have on humanity. Altman acknowledges the possible dangerous implementations of AI that keep him up at night, particularly the fear that AI models could be used for large-scale disinformation or offensive cyberattacks. He also worries about which humans might end up in control of the technology. However, he does not share the sci-fi fear of AI models that do not need humans, stating that "This is a tool that is very much in human control".<ref>{{cite web |author=ABC News |title=OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this' |url=https://abcnews.go.com/Technology/openai-ceo-sam-altman-ai-reshape-society-acknowledges/story?id=97897122 |website=ABC News |access-date=23 March 2023 |language=en}}</ref>
|-
| 2023 || March 25 || {{w|Lex Fridman}} interviews {{w|Sam Altman}} || Interview || Russian-American podcaster and artificial intelligence researcher {{w|Lex Fridman}} publishes an interview with {{w|Sam Altman}}. They discuss {{w|GPT-4}}, {{w|political bias}}, {{w|AI safety}}, and [[w:Artificial neural network|neural network]] size. Other topics include AGI, fear, competition, the transition from non-profit to capped-profit, power dynamics, political pressure, truth and misinformation, {{w|anthropomorphism}}, and future applications.<ref>{{cite web |title=Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI {{!}} Lex Fridman Podcast #367 |url=https://www.youtube.com/watch?v=L_Guz73e6fw |website=youtube.com |access-date=25 March 2023 |language=en}}</ref>
|-
| 2023 || March 27 || AI startup accelerator || Partnership || Startup accelerator Neo forms a partnership with OpenAI, in addition to {{w|Microsoft}}, to offer free software and guidance to companies focusing on artificial intelligence (AI). Startups accepted into Neo's AI cohort would receive access to OpenAI's tools, including the GPT language generation tool and {{w|Dall-E}} image creation program. They would also have the opportunity to collaborate with researchers and mentors from Microsoft and OpenAI. The partnership comes as interest in AI technologies grows, with startups and established companies seeking to incorporate them into their products.<ref>{{cite web |title=AI startup accelerator Neo will partner with OpenAI, Microsoft |url=https://www.seattletimes.com/business/new-ai-startup-accelerator-will-partner-with-openai-microsoft/ |website=The Seattle Times |access-date=14 June 2023 |date=27 March 2023}}</ref>
|-
| 2023 || April 11 || AI security || Program launch || OpenAI announces its Bug Bounty Program, an initiative aimed at enhancing the safety and security of their AI systems. The program invites security researchers, ethical hackers, and technology enthusiasts from around the world to help identify vulnerabilities and bugs in OpenAI's technology. By reporting their findings, participants are expected to contribute to making OpenAI's systems safer for users. The Bug Bounty Program is managed in partnership with {{w|Bugcrowd}}, a leading bug bounty platform, to ensure a streamlined experience for participants. Cash rewards are to be offered based on the severity and impact of the reported issues, ranging from $200 for low-severity findings to up to $20,000 for exceptional discoveries. OpenAI emphasizes the collaborative nature of security and encourages the security research community to join their Bug Bounty Program. Additionally, OpenAI reportedly hires for security roles to further strengthen their efforts in ensuring the security of AI technology.<ref>{{cite web |title=Announcing OpenAI’s Bug Bounty Program |url=https://openai.com/blog/bug-bounty-program |website=openai.com |access-date=9 June 2023}}</ref>
|-
| 2023 || April 14 || AI safety, progress tracking || Product update || {{w|Sam Altman}} confirms at an {{w|MIT}} event that the company is not training GPT-5 at the time, highlighting the difficulty of measuring and tracking progress in AI safety. By this time, OpenAI is still expanding the capabilities of GPT-4 and is considering the safety implications of its work.<ref>{{cite web |last1=Vincent |first1=James |title=OpenAI’s CEO confirms the company isn’t training GPT-5 and “won’t for some time” |url=https://www.theverge.com/2023/4/14/23683084/openai-gpt-5-rumors-training-sam-altman |website=The Verge |access-date=9 May 2023 |date=14 April 2023}}</ref>
|-
| 2023 || April 19 || AI integration || Partnership || OpenAI partners with Australian software company {{w|Atlassian}}. The latter agrees to utilize OpenAI's GPT-4 language model, which had been trained on a large amount of internet text, to introduce AI capabilities into programs like Jira Service Management and [[w:Confluence (software)|Confluence]]. With GPT-4, Jira Service Management would be able to process employees' tech support inquiries in Slack, while Confluence would provide automated explanations, links, and answers based on stored information. Atlassian, which develops AI models of its own, would now incorporate OpenAI's technology to create results tailored to individual customers. The new AI features, branded as Atlassian Intelligence, are to be rolled out gradually, and customers can join a waiting list to access them.<ref>{{cite web |last1=Novet |first1=Jordan |title=Atlassian taps OpenAI to make its collaboration software smarter |url=https://www.cnbc.com/2023/04/19/atlassian-taps-openai-for-atlassian-intelligence-generative-ai-launch.html |website=CNBC |access-date=14 June 2023 |language=en |date=19 April 2023}}</ref>
|-
| 2023 || April 21 || Scientific web search, generative AI || Partnership || OpenAI partners with Consensus, a Boston-based AI-powered {{w|search engine}} focused on scientific research, to enhance scientific web search quality. Consensus aims to provide unbiased and accurate search results by leveraging its generative AI technology to extract information from over 200 million research papers. The search engine prioritizes authoritative sources and offers plain-language summaries of results. With the support of investors such as Draper Associates and the involvement of OpenAI, Consensus aims to revolutionize scientific web search, transform research, and disrupt the global industry.<ref>{{cite web |title=Consensus raises $3M, partners with OpenAI to revolutionize scientific web search |url=https://venturebeat.com/ai/consensus-raises-3m-partners-with-openai-revolutionize-scientific-web-search/ |website=VentureBeat |access-date=14 June 2023 |date=21 April 2023}}</ref>
|-
| 2023 || May 16 || {{w|AI safety}} || Legal || Sam Altman testifies in a Senate subcommittee hearing and expresses the need for regulating artificial intelligence technology. Unlike previous hearings featuring tech executives, lawmakers and Altman largely agree on the necessity of AI regulation. Altman emphasizes the potential harms of AI and presents a loose framework to manage its development. His appearance marks him as a leading figure in the AI industry. The hearing reflects the growing unease among technologists and government officials regarding the power of AI technology, though Altman appears to have a receptive audience in the subcommittee members.<ref>{{cite web |last1=Kang |first1=Cecilia |title=OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing |url=https://www.nytimes.com/2023/05/16/technology/openai-altman-artificial-intelligence-regulation.html |website=The New York Times |access-date=26 July 2023 |date=16 May 2023}}</ref>
|-
| 2023 || May 22 || {{w|AI safety}} || Publication || OpenAI publishes a post emphasizing the importance of governing {{w|superintelligence}}: AI systems that surpass even artificial general intelligence (AGI) in capabilities. The authors recognize the potential positive and negative impacts of superintelligence and propose coordination among AI development efforts, the establishment of an international authority like the {{w|International Atomic Energy Agency}}, and technical safety measures as key ideas for managing risks. OpenAI argues for regulation that does not hinder development below a certain capability threshold and emphasizes public input and democratic decision-making. While they see potential for a better world, they acknowledge the risks and challenges and stress the need for a cautious and careful approach.<ref>{{cite web |title=Governance of superintelligence |url=https://openai.com/blog/governance-of-superintelligence |website=openai.com |access-date=9 June 2023}}</ref>
|-
| 2023 || May 25 || AI governance || Program launch || OpenAI announces a grant program to fund experiments focused on establishing a democratic process for determining the rules that AI systems should follow within legal boundaries. The program aims to incorporate diverse perspectives reflecting the public interest in shaping AI behavior. OpenAI expresses belief that decisions about AI conduct should not be dictated solely by individuals, companies, or countries. The grants, totaling $1 million, are to be awarded to ten teams worldwide to develop [[w:Proof of concept|proof-of-concepts]] for democratic processes that address questions about AI system rules. While these experiments are not intended to be binding at the time, they are expected to explore decision-relevant questions and build democratic tools to inform future decisions. The results of the studies are freely accessible, with OpenAI encouraging applicants to innovate and leverage known methodologies or create new approaches to democratic processes. The use of AI to enhance communication and facilitate efficient collaboration among a large number of participants is also encouraged.<ref>{{cite web |title=Democratic Inputs to AI |url=https://openai.com/blog/democratic-inputs-to-ai |website=openai.com |access-date=9 June 2023}}</ref>
|-
| 2023 || June 1 || {{w|Cybersecurity}} || Program launch || OpenAI announces the launch of the Cybersecurity Grant Program, a $1 million initiative aimed at enhancing AI-powered {{w|cybersecurity}} capabilities and fostering discussions at the intersection of AI and cybersecurity. The program's objectives include empowering defenders, measuring the capabilities of AI models in cybersecurity, and promoting comprehensive discourse in the field. OpenAI reportedly seeks project proposals that focus on practical applications of AI in defensive cybersecurity; projects related to offensive security are not considered for funding. Applications are evaluated and accepted on a rolling basis, with strong preference given to proposals that can be licensed or distributed for maximal public benefit and sharing. Funding would be provided in increments of $10,000 USD from the $1 million fund, in the form of {{w|API}} credits, direct funding, or equivalents.<ref>{{cite web |title=OpenAI cybersecurity grant program |url=https://openai.com/blog/openai-cybersecurity-grant-program |website=openai.com |access-date=9 June 2023}}</ref>
|-
| 2023 || June 12 || Research, safety || Collaboration || OpenAI and Google DeepMind commit to sharing their AI models with the {{w|Government of the United Kingdom}} for research and safety purposes. This move aims to enhance the government's ability to inspect the models and understand the associated risks and opportunities. The specific data to be shared by the tech companies is not yet disclosed. The announcement follows the UK government's plans to assess AI model accountability and establish a Foundation Model Taskforce to develop "sovereign" AI. The initiative seeks to address concerns about AI development and mitigate potential issues related to safety and ethics. While this access does not grant complete control or guarantee the detection of all issues, it promotes transparency and provides insights into AI systems during a time when their long-term impacts remain uncertain.<ref>{{cite web |title=Google, OpenAI will share AI models with the UK government |url=https://www.engadget.com/google-openai-will-share-ai-models-with-the-uk-government-134318263.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAA_deDjNh9IokDYAoVXIGkvm_OYf8uRbupS85EgTY234vlgP2_HrWlx8LRvFhI_uqWUF1u85DDGTjPMDa_Y4IsCcU8ASo_9uFUf9h25l4ZgYlNNTMX99TGAlWAcM0Gh-egCfel6imrFpf20e3j1Z2J9Uu83Fm9hV2Y4-t7myJfFf |website=Engadget |access-date=13 June 2023}}</ref>
|-
| 2023 || June 14 || Hallucination || Legal || A radio broadcaster named Mark Walters files a defamation lawsuit against OpenAI after the company's AI system, ChatGPT, generated a fake complaint accusing him of financial embezzlement. The lawsuit highlights the growing concern over generative AI programs spreading misinformation and producing false outputs. ChatGPT provided the fabricated legal summary to Fred Riehl, the editor-in-chief of AmmoLand, while he was reporting on a real-life legal case. The incident is attributed to a common issue with generative AI known as [[w:Hallucination (artificial intelligence)|hallucination]], where the language model generates false information that can be convincingly realistic.<ref>{{cite web |title=Radio Host Sues OpenAI after ChatGPT Generates Fake Complaint Against Him |url=https://www.ndtv.com/feature/radio-host-sues-openai-after-chatgpt-generates-fake-complaint-against-him-4119121 |website=NDTV.com |access-date=17 July 2023}}</ref>
|-
| 2023 || June 20 || AI regulation || Advocacy || It is reported that OpenAI lobbied the {{w|European Union}} to weaken forthcoming AI regulation, despite publicly advocating for stronger AI guardrails. Documents obtained by TIME reveal that OpenAI proposed amendments to the E.U.'s AI Act, which were later incorporated into the final text. OpenAI argues that its general-purpose AI systems, such as GPT-3 and Dall-E, should not be classified as "high risk" and subject to stringent requirements. The lobbying efforts aimed to reduce the regulatory burden on OpenAI and aligned with similar efforts by other tech giants like Microsoft and Google. The documents suggest that OpenAI used arguments about utility and public benefit to mask their financial interest in diluting the regulation.<ref>{{cite web |title=Exclusive: OpenAI Lobbied E.U. to Water Down AI Regulation |url=https://time.com/6288245/openai-eu-lobbying-ai-act/ |website=Time |access-date=16 July 2023 |language=en |date=20 June 2023}}</ref>
|-
| 2023 || June 21 || AI app store || Product || OpenAI reportedly plans to launch an AI app store, allowing developers and customers to sell their AI models built on OpenAI's technology. This move comes as OpenAI aims to expand its influence and capitalize on the success of its ChatGPT chatbot. While the introduction of an AI app store has the potential to drive broader adoption of OpenAI's technology and foster innovation, it also raises concerns about the need for regulations, consumer protection, quality control, ethical considerations, and security risks. However, for the Nigerian AI community, the app store presents opportunities for increased access, collaboration, economic prospects, and entrepreneurship, benefiting the country's tech talent and driving economic growth in the AI sector.<ref>{{cite web |title=OpenAI, owners of ChatGPT to launch AI app store |url=https://technext24.com/2023/06/21/openai-chatgpt-to-launch-an-ai-app-store/ |website=technext24.com |access-date=18 July 2023 |date=21 June 2023}}</ref>
|-
| 2023 || June 28 || {{w|Copyright infringement}} || Legal || OpenAI faces a proposed class action lawsuit filed by two U.S. authors in San Francisco federal court. The authors, {{w|Paul Tremblay}} and {{w|Mona Awad}}, claim that OpenAI used their works without permission to train its popular AI system, ChatGPT. They allege that ChatGPT mined data from thousands of books, infringing their copyrights. The lawsuit highlights the use of books as a significant component in training generative AI systems like ChatGPT. The authors assert that ChatGPT could generate accurate summaries of their books, indicating their presence in its database. The lawsuit seeks damages on behalf of copyright owners whose works were allegedly misused by OpenAI.<ref>{{cite web |last1=Brittain |first1=Blake |last2=Brittain |first2=Blake |title=Lawsuit says OpenAI violated US authors' copyrights to train AI chatbot |url=https://www.reuters.com/legal/lawsuit-says-openai-violated-us-authors-copyrights-train-ai-chatbot-2023-06-29/ |website=Reuters |access-date=16 July 2023 |language=en |date=29 June 2023}}</ref>
|-
| 2023 || June 28 || OpenAI London || International expansion || OpenAI selects {{w|London}} as the location for its first international office, where the company plans to focus on research and engineering. The move is considered a vote of confidence in the UK's AI ecosystem and reinforces the country's position as an AI powerhouse.<ref>{{cite web |last1=Milmo |first1=Dan |title=ChatGPT developer OpenAI to locate first non-US office in London |url=https://www.theguardian.com/technology/2023/jun/28/chatgpt-developer-openai-locate-first-non-us-office-london |website=The Guardian |access-date=17 July 2023 |date=28 June 2023}}</ref>
|-
| 2023 || July 10 || {{w|GPT-4}} || Coverage || An article discusses various aspects of OpenAI's {{w|GPT-4}}, including its architecture, training infrastructure, inference infrastructure, parameter count, training dataset composition, token count, layer count, parallelism strategies, multi-modal vision adaptation, engineering tradeoffs, and implemented techniques to overcome inference bottlenecks. It highlights that OpenAI's decision to keep the architecture closed is not due to existential risks but rather because it is replicable and other companies are expected to develop equally capable models. The article also emphasizes the importance of decoupling training and inference compute and the challenges of scaling out large models for inference due to memory bandwidth limitations. OpenAI's sparse model architecture is discussed as a solution to achieve high throughput while reducing inference costs.<ref>{{cite web |last1=Patel |first1=Dylan |title=GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE |url=https://www.semianalysis.com/p/gpt-4-architecture-infrastructure |website=www.semianalysis.com |access-date=16 July 2023 |language=en}}</ref>
|-
| 2023 || July 11 || Shutterstock x OpenAI collaboration || Partnership || Shutterstock expands its partnership with OpenAI through a six-year agreement, solidifying its role as a key provider of high-quality training data for AI models. OpenAI gains licensed access to Shutterstock's extensive image, video, and music libraries, while Shutterstock receives priority access to OpenAI's latest technologies. This partnership enhances Shutterstock's platform by integrating advanced generative AI tools, including DALL·E-powered text-to-image generation and synthetic editing features. Additionally, Shutterstock announces plans to extend AI capabilities to mobile users via its GIPHY platform.<ref>{{cite web |title=Shutterstock Expands Partnership with OpenAI, Signs New Six-Year Agreement |url=https://investor.shutterstock.com/news-releases/news-release-details/shutterstock-expands-partnership-openai-signs-new-six-year |website=investor.shutterstock.com |accessdate=15 December 2024}}</ref>
|-
| 2023 || July 12 || {{w|Copyright infringement}} || Legal || American comedian {{w|Sarah Silverman}} files a lawsuit against OpenAI along with {{w|Meta Platforms}}, alleging copyright infringement in the training of their AI systems. The lawsuit claims that the authors' copyrighted materials were used without their consent to train ChatGPT and Meta's LLaMa AI system. The case is expected to revolve around whether training a large language model constitutes fair use or not. Silverman is joined by two other authors in the class-action lawsuit. By this time, legal experts have raised questions about whether OpenAI can be accused of copying books in this context.<ref>{{cite web |title=Sarah Silverman sues OpenAI and Meta |url=https://www.bbc.com/news/technology-66164228 |website=BBC News |access-date=16 July 2023 |date=12 July 2023}}</ref>
|-
| 2023 || July 13 || xAI || Competition || {{w|Elon Musk}} launches his own artificial intelligence company, xAI, to rival OpenAI and Google. Reportedly, the goal of xAI is to understand the true nature of the universe and answer life's biggest questions. The company is staffed by former researchers from OpenAI, Google DeepMind, Tesla, and the {{w|University of Toronto}}. By this time, Musk had been critical of ChatGPT, accusing it of being politically biased and irresponsible. He left OpenAI in 2018 due to concerns about its profit-driven direction. Musk warns about the dangers of AI, and his new company reportedly aims to address those concerns.<ref>{{cite web |title=Elon Musk launches xAI to rival OpenAI and Google |url=https://www.scmp.com/news/world/united-states-canada/article/3227494/elon-musk-launches-xai-rival-openai-and-google |website=South China Morning Post |access-date=16 July 2023 |language=en |date=13 July 2023}}</ref>
|-
| 2023 || July 13 || Algorithmic models || Partnership || OpenAI partners with the {{w|Associated Press}} (AP) in a two-year agreement to train algorithmic models. This collaboration marks one of the first news-sharing partnerships between a major news organization and an AI firm. OpenAI is expected to gain access to selected news content and technology from AP's archives, dating back to 1985, to enhance future iterations of ChatGPT and related tools. AP is expected to receive access to OpenAI's proprietary technology. This partnership allows OpenAI to expand into the news domain and acquire legally-obtained data, while AP aims to streamline and improve its news reporting processes using OpenAI's technology.<ref>{{cite web |title=AP and OpenAI enter into two-year partnership to help train algorithmic models |url=https://www.engadget.com/ap-and-openai-enter-into-two-year-partnership-to-help-train-algorithmic-models-183007344.html |website=Engadget |access-date=15 July 2023}}</ref><ref>{{cite web |last1=Roth |first1=Emma |title=OpenAI will use Associated Press news stories to train its models |url=https://www.theverge.com/2023/7/13/23793810/openai-associated-press-ai-models |website=The Verge |access-date=15 July 2023 |date=13 July 2023}}</ref>
|-
| 2023 || July 18 || AJP and OpenAI collaborate || Partnership || The American Journalism Project (AJP) and OpenAI partner in a $5+ million initiative to explore AI's potential to enhance local journalism. OpenAI agrees to contribute $5 million in funding and $5 million in API credits to support AJP's grantee organizations in adopting AI technologies. The partnership aims to create an AI studio to coach news organizations, foster collaboration, and address AI challenges like misinformation and bias. Grants are expected to fund AI experiments across AJP's portfolio, showcasing applications for local news. The initiative aims to bolster democracy by helping rebuild local news ecosystems and ensuring journalism adapts responsibly to emerging AI technologies.<ref>{{cite web |title=Partnership with American Journalism Project to Support Local News |url=https://openai.com/index/partnership-with-american-journalism-project-to-support-local-news/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2023 || July 31 || {{w|Upwork}} partners with OpenAI || Partnership || American freelancing platform {{w|Upwork}} and OpenAI launch the OpenAI Experts on Upwork program, providing businesses with pre-vetted professionals skilled in OpenAI technologies like GPT-4, Whisper, and AI model integration. This collaboration enables companies to efficiently access expertise for projects involving large language models, model fine-tuning, and chatbot development. The program, part of Upwork's AI Services hub, employs a rigorous vetting process to ensure technical proficiency and practical experience. Clients can engage experts via consultations or project-based contracts, enhancing responsible and impactful AI deployment. This initiative aligns with Upwork's strategy to become a leader in AI-related talent solutions.<ref>{{cite web |title=Upwork and OpenAI Partner to Connect Businesses with OpenAI Experts |url=https://investors.upwork.com/news-releases/news-release-details/upwork-and-openai-partner-connect-businesses-openai-experts |website=investors.upwork.com |accessdate=15 December 2024}}</ref>
|-
| 2023 || July 26 || Image generator || Coverage || An article reveals OpenAI's secret image generator, an unreleased AI model that outperforms previous ones. Tested privately at the time, the model's early samples show impressive results, producing sharp and realistic images with detailed lighting, reflections, and brand logos. The AI recreates paintings and displays well-proportioned hands, setting it apart from other generators. However, safety filters removed for testing allow the model to generate violent and explicit content. Access to the model is limited, and it is not released publicly due to OpenAI's stance on [[w:Not safe for work|NSFW]] content.<ref>{{cite web |last1=Lanz |first1=Decrypt / Jose Antonio |title=Uncensored and ‘Insane’: A Look at OpenAI’s Secret Image Generator |url=https://decrypt.co/150259/openai-image-generator-preview-uncensored-mattvidpro |website=Decrypt |access-date=28 July 2023 |date=26 July 2023}}</ref>
|-
| 2023 || August 7 || Claude 2 || Competition || {{w|Anthropic}} unveils AI chatbot Claude 2, competing with OpenAI and Google in advanced conversational AI. Backed by a substantial US$750 million in funding, Anthropic initially targets business applications, though Claude 2 already generates a waitlist of over 350,000 users seeking access to its API and consumer services. Its availability is initially limited to the United States and the United Kingdom. Claude 2 is part of the growing trend of generative AI chatbots, despite concerns about bias. Anthropic aims to offer Claude 2 as a safer alternative for a broader range of users.<ref>{{cite web |last1=Toczkowska |first1=Natalia |title=Anthropic Introduces New AI Chatbot, Claude 2 |url=https://ts2.space/en/anthropic-introduces-new-ai-chatbot-claude-2/ |website=TS2 SPACE |access-date=25 August 2023 |date=6 August 2023}}</ref>
|-
| 2023 || August 16 || Global Illumination || Acquisition || OpenAI makes its first public acquisition by purchasing New York-based startup Global Illumination. The terms of the deal are not disclosed. Global Illumination's team previously worked on projects at Instagram, Facebook, YouTube, Google, Pixar, and Riot Games. Their most recent creation is the open-source sandbox multiplayer game Biomes. OpenAI aims to enhance its core products, including ChatGPT, with this acquisition.<ref>{{cite web |last1=Wiggers |first1=Kyle |title=OpenAI acquires AI design studio Global Illumination |url=https://techcrunch.com/2023/08/16/openai-acquires-ai-design-studio-global-illumination/ |website=TechCrunch |access-date=22 August 2023 |date=16 August 2023}}</ref>
|-
| 2023 || August 17 || GPT-4 || Competition || Researchers from Arthur AI evaluate top AI models from [[w:Meta Platforms|Meta]], {{w|OpenAI}}, {{w|Cohere}}, and {{w|Anthropic}} to assess their propensity for generating false information, a phenomenon known as hallucination. They discover that Cohere's AI model displays the highest degree of hallucination, followed by Meta's Llama 2, which hallucinates more than GPT-4 and Claude 2 from Anthropic. GPT-4, on the other hand, performs the best among all models tested, hallucinating significantly less than its predecessor GPT-3.5. The study highlights the importance of evaluating AI models' performance based on specific use cases, as real-world applications may differ from standardized benchmarks.<ref>{{cite web |last1=Field |first1=Hayden |title=Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here's which is worst |url=https://www.cnbc.com/2023/08/17/which-ai-is-most-reliable-meta-openai-anthropic-or-cohere.html |website=CNBC |access-date=25 August 2023 |language=en |date=17 August 2023}}</ref>
|-
| 2023 || August 21 || GPTBot || Reaction || The ''{{w|New York Times}}'' blocks OpenAI's web crawler, GPTBot, preventing OpenAI from using the publication's content for training AI models. This action follows the NYT's recent update to its terms of service, which prohibits the use of its content for AI model training. The newspaper also reportedly considers legal action against OpenAI for potential intellectual property rights violations. The NYT's move aligns with its efforts to protect its content and copyright in the context of AI model training.<ref>{{cite web |last1=Davis |first1=Wes |title=The New York Times blocks OpenAI’s web crawler |url=https://www.theverge.com/2023/8/21/23840705/new-york-times-openai-web-crawler-ai-gpt |website=The Verge |access-date=22 August 2023 |date=21 August 2023}}</ref>
|-
| 2023 || September 13 || OpenAI expands to Dublin || Expansion || OpenAI expands its European presence with a new office in {{w|Dublin}}, {{w|Ireland}}, aiming to grow its team across various areas, including operations, trust and safety, security engineering, and legal work. The move supports OpenAI's commitment to serving the European market and aligns with Ireland's strong tech ecosystem. OpenAI plans to collaborate with the Irish government on the National AI Strategy and engage with local industry, startups, and researchers. Sam Altman emphasizes Ireland's talent pool and innovation support. The company also aims to provide mentorship to youth through the Patch accelerator.
|-
| 2023 || November 18 || Sam Altman is fired as CEO || Team || Sam Altman is fired as CEO of OpenAI. The decision is made by the board of directors, who cite a loss of confidence in his leadership as the reason for the dismissal. Altman had been instrumental in the company's rapid growth and success in AI development, but the board's move comes as a surprise to many. His removal causes substantial internal discord, with employees and the public alike calling for his reinstatement.<ref>{{cite web |title=Sam Altman fired as CEO of ChatGPT maker OpenAI |url=https://www.aljazeera.com/news/2023/11/18/sam-altman-fired-as-ceo-of-chatgpt-maker-open-ai |website=Al Jazeera |accessdate=17 September 2024}}</ref><ref name="returns"/>
|-
| 2023 || November 22 || Sam Altman is reinstated || Team || A few days after being fired, Sam Altman is reinstated as OpenAI's Chief Executive Officer. Altman’s leadership is considered pivotal in OpenAI's development of advanced artificial intelligence systems, and his reinstatement signals stability after a period of turmoil.<ref name="returns">{{cite web |title=Sam Altman Returns to OpenAI After Board Ouster |url=https://www.nytimes.com/2023/11/22/technology/openai-sam-altman-returns.html |website=The New York Times |accessdate=1 October 2024}}</ref>
|-
| 2023 || December 13 || Axel Springer x OpenAI collaboration || Partnership || Axel Springer and OpenAI form a global partnership to enhance independent journalism through AI. The collaboration integrates authoritative content from Axel Springer’s media brands, such as ''{{w|Politico}}'', ''{{w|Business Insider}}'', ''{{w|Bild}}'', and ''{{w|Die Welt}}'', into ChatGPT. Users gain access to selected summaries with attribution and links to original articles, including paid content, ensuring transparency and encouraging further exploration. The partnership also supports Axel Springer’s AI initiatives and contributes to OpenAI’s model training with quality content. Axel Springer CEO Mathias Döpfner emphasizes AI’s potential to elevate journalism, while OpenAI COO Brad Lightcap highlights their commitment to empowering publishers with advanced technology and sustainable revenue models.<ref>{{cite web |title=Axel Springer and OpenAI Partner to Deepen Beneficial Use of AI in Journalism |url=https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism |website=axelspringer.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || January 18 || || Partnership || OpenAI announces its first university partnership with {{w|Arizona State University}} (ASU), granting full access to ChatGPT Enterprise starting February 2024. ASU plans to integrate the tool into coursework, tutoring, research, and its popular {{w|prompt engineering}} course. Key initiatives include creating personalized AI tutors, developing AI avatars for creative study support, and enhancing Freshman Composition with writing assistance. The collaboration, in development for six months, assures data security and privacy, addressing past concerns about AI misuse in education. OpenAI views the partnership as a model for aligning AI with higher education, while ASU sees it as advancing innovation in learning and research.<ref>{{cite web |title=OpenAI Announces First Partnership with a University |url=https://www.cnbc.com/2024/01/18/openai-announces-first-partnership-with-a-university.html |website=cnbc.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || March 8 || Board of Directors expansion || Team || OpenAI expands its Board of Directors with the addition of Dr. Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo. Dr. Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, brings experience in nonprofit leadership and medical research. Nicole Seligman, former EVP and General Counsel at Sony Corporation, is noted for her expertise in corporate law and governance. Fidji Simo, CEO and Chair of Instacart, offers background in consumer technology and product development. Sam Altman, the current CEO, rejoins the board.<ref>{{cite web |title=OpenAI Announces New Members to Board of Directors |url=https://openai.com/index/openai-announces-new-members-to-board-of-directors/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || March 13 || OpenAI enhances news access || Partnership || OpenAI partners with ''{{w|Le Monde}}'' and {{w|Prisa Media}} to integrate French and Spanish news content into ChatGPT. Users can access journalism from those publications via attributed summaries and links to original articles. The collaboration aims to enhance access to authoritative information, train AI models, and present news in interactive formats. ''Le Monde'' highlights this as a step to expand its reach and uphold journalistic integrity, while Prisa Media emphasizes leveraging AI to engage broader audiences.<ref>{{cite web |title=Global News Partnerships with Le Monde and PRISA Media |url=https://openai.com/index/global-news-partnerships-le-monde-and-prisa-media/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || April 15 || OpenAI opens Tokyo office for growth || Expansion || OpenAI opens its first Asian office in {{w|Tokyo}} to cater to the growing popularity of its service in {{w|Japan}}. {{w|Sam Altman}} emphasizes the goal of a long-term partnership with Japan's government, businesses, and research institutions. This office is OpenAI's third overseas location after Britain and Ireland. A custom chatbot model optimized for Japanese is introduced, enhancing translation and summarization speeds. The decision to establish a Tokyo office follows Altman's meeting with Japanese Prime Minister Fumio Kishida. Japan, with over 2 million ChatGPT users, is a significant market for OpenAI.<ref>{{cite web |title=OpenAI opens first Asia office in Tokyo as ChatGPT use grows |url=https://english.kyodonews.net/news/2024/04/624afe1cb3e4-openai-opens-first-asia-office-in-tokyo-as-chatgpt-use-grows.html |website=Kyodo News |date=April 15, 2024 |access-date=June 20, 2024}}</ref>
|-
| 2024 || May 22 || OpenAI x News Corp collaboration || Partnership || OpenAI and {{w|News Corp}} form a multi-year partnership to integrate News Corp's premium journalism into OpenAI's products. This collaboration grants OpenAI access to updated and archived content from major publications like ''{{w|The Wall Street Journal}}'', ''{{w|The Times}}'', and ''{{w|The Sun}}'', among others, to enhance responses and provide reliable information. News Corp also agrees to contribute journalistic expertise to uphold high standards of journalism across OpenAI's offerings. The partnership aims to improve access to trusted news and set new standards for accuracy and integrity in the digital age, with both parties committed to enriching AI with high-quality journalism.<ref>{{cite web |title=News Corp and OpenAI Sign Landmark Multi-Year Global Partnership |url=https://openai.com/index/news-corp-and-openai-sign-landmark-multi-year-global-partnership/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || April 24 || OpenAI partners with Moderna || Partnership || OpenAI partners with biotechnology company Moderna to integrate ChatGPT Enterprise across Moderna's operations, aiming to enhance its development of mRNA-based treatments. Since early 2023, this collaboration has transformed how Moderna works, facilitating the launch of up to 15 new products in the next five years. Moderna’s approach includes widespread AI adoption, with 100% employee engagement and the creation of over 750 custom GPTs. Notable projects include Dose ID, a GPT for analyzing clinical data, and various tools to improve legal compliance and communication. This AI integration supports Moderna’s goal of maximizing patient impact while maintaining operational efficiency.<ref>{{cite web |title=Moderna and OpenAI Collaboration |url=https://openai.com/index/moderna/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || April 29 || {{w|Financial Times}} partners with OpenAI || Partnership || The {{w|Financial Times}} partners with OpenAI to integrate its journalism into ChatGPT, providing users with attributed summaries, quotes, and links to FT content. This collaboration aims to enhance AI model utility and develop new AI-driven tools for FT readers. The FT, an early adopter of ChatGPT Enterprise, equips its staff with OpenAI’s tools to boost creativity and productivity. FT Group CEO John Ridding highlights the importance of transparency, attribution, and safeguarding journalism. OpenAI COO Brad Lightcap emphasizes the partnership's role in enriching AI with reliable news.<ref>{{cite web |title=Content Partnership with Financial Times |url=https://openai.com/index/content-partnership-with-financial-times/ |website=openai.com |accessdate=15 December 2024}}</ref> 
|-
| 2024 || May 6 || OpenAI x {{w|Stack Overflow}} collaboration || Partnership || OpenAI and {{w|Stack Overflow}} announce an API partnership aimed at enhancing developer productivity by integrating Stack Overflow's trusted technical knowledge with OpenAI’s advanced large language models. Through Stack Overflow's OverflowAPI, OpenAI gains access to accurate, community-vetted data to improve its AI tools, while also providing attribution and fostering engagement within ChatGPT. Stack Overflow would leverage OpenAI models to develop OverflowAI and optimize its products for the developer community. The partnership focuses on advancing responsible AI development, with the first integrations expected in 2024.<ref>{{cite web |title=API Partnership with Stack Overflow |url=https://openai.com/index/api-partnership-with-stack-overflow/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || May 8 || OpenAI introduces the Model Spec || Publication || OpenAI introduces the Model Spec, a draft document outlining how AI models should behave in the OpenAI API and ChatGPT. This document details principles, rules, and default behaviors to guide model responses, balancing helpfulness, safety, and legality. It integrates existing documentation, research, and expert input to inform future model development and ensure alignment with OpenAI's mission. The Model Spec aims to foster public discussion on AI behavior, incorporating feedback from diverse stakeholders to refine its approach over time. The document addresses handling complex situations, compliance with laws, and maintaining respect for user privacy and intellectual property.<ref>{{cite web |title=Introducing the Model Spec |url=https://openai.com/index/introducing-the-model-spec/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || May 16 || Reddit partners with OpenAI || Partnership || {{w|Reddit}} and OpenAI announce a partnership to integrate Reddit's content into OpenAI's ChatGPT and products, enhancing user access to Reddit communities. Using Reddit’s Data API, OpenAI will showcase real-time and structured Reddit content, particularly on current topics, for improved user discovery and engagement. The collaboration also enables Reddit to leverage OpenAI’s AI models to introduce new features for users and moderators, enhancing the Reddit experience. Additionally, OpenAI will serve as a Reddit advertising partner. This partnership aims to foster human learning, build connections online, and provide timely, relevant, and authentic information across platforms.<ref>{{cite web |title=OpenAI and Reddit Partnership |url=https://openai.com/index/openai-and-reddit-partnership/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || May 21 || Sanofi integrates AI innovation || Partnership || French pharmaceutical company {{w|Sanofi}} announces a collaboration with OpenAI and Formation Bio to integrate artificial intelligence into its drug development processes. The partnership enables Sanofi to utilize proprietary AI models tailored to its biopharma projects, while Formation Bio provides additional engineering resources. The initiative aims to expedite clinical trials by optimizing patient selection and reducing the number of participants needed, thereby accelerating drug development and reducing costs. This collaboration reflects a growing trend among major drugmakers to leverage AI for efficiency and innovation in healthcare.<ref>{{cite web |title=Sanofi partners with OpenAI, Formation Bio for AI-driven drug development |url=https://www.reuters.com/business/healthcare-pharmaceuticals/sanofi-partners-with-openai-formation-bio-ai-driven-drug-development-2024-05-21/ |website=reuters.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || May 29 || OpenAI x Vox Media collaboration || Partnership || OpenAI and {{w|Vox Media}} form a strategic partnership to enhance ChatGPT’s capabilities and develop innovative products. Vox Media’s extensive archives and portfolio, including Vox, The Verge, and New York Magazine, would inform ChatGPT’s responses, with brand attribution and audience referrals. The partnership aims to merge trusted journalism with AI technology, improving content quality and discoverability. Vox Media is expected to leverage OpenAI tools to innovate consumer products, enhance its Forte data platform, and optimize advertiser performance.<ref>{{cite web |title=A Content and Product Partnership with Vox Media |url=https://openai.com/index/a-content-and-product-partnership-with-vox-media/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || May 30 || OpenAI for Nonprofits launched || Product launch  || OpenAI launches OpenAI for Nonprofits, a new initiative offering discounted access to its tools for nonprofit organizations. Nonprofits can access ChatGPT Team at a 20% discount, while larger nonprofits can receive a 50% discount on ChatGPT Enterprise. These offerings provide advanced models like GPT-4, collaboration tools, and robust security. By this time, nonprofits such as Serenas in Brazil, GLIDE Legal Clinic, THINK South Africa, and Team4Tech already use ChatGPT to streamline operations, enhance client services, and analyze data. OpenAI aims to support nonprofits in achieving greater impact with fewer resources through AI integration.<ref>{{cite web |title=Introducing OpenAI for Nonprofits |url=https://openai.com/index/introducing-openai-for-nonprofits/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || May 30 || ChatGPT Edu launched || Product launch || OpenAI launches ChatGPT Edu, a new version of ChatGPT designed for universities to integrate AI into their campuses responsibly. Built on GPT-4o, ChatGPT Edu offers advanced capabilities such as text and vision reasoning, data analysis, and high-level security. It aims to enhance educational experiences by providing personalized tutoring, assisting with research and grant writing, and supporting faculty with grading and feedback. By this time, universities like Columbia, Wharton, and Arizona State have already seen benefits from similar tools. ChatGPT Edu includes features like custom GPT creation, high message limits, and robust privacy controls, making AI more accessible and effective for educational institutions.<ref>{{cite web |title=Introducing ChatGPT Edu |url=https://openai.com/index/introducing-chatgpt-edu/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || May 30 || OpenAI disrupts global influence operations || Security || OpenAI identifies and disrupts multiple influence operations from Russia, China, and Israel that were using AI tools, including ChatGPT, to manipulate public opinion. These operations aim to spread disinformation by creating fake social media accounts, generating comments in various languages, and producing visual content. Despite the capabilities of generative AI, OpenAI reports that these operations had limited success in terms of authentic audience engagement, with some being exposed by users as inauthentic. This effort marks a proactive step by OpenAI to combat misuse of its technology and highlights the potential for AI in both advancing and mitigating influence operations on social media and other platforms.<ref>{{cite web |title=OpenAI thwarts influence operations by Russia, China, and Israel |url=https://www.npr.org/2024/05/30/g-s1-1670/openai-influence-operations-china-russia-israel |website=NPR |accessdate=13 October 2024}}</ref>
|-
| 2024 || June 10 || OpenAI x Apple collaboration || Partnership || OpenAI and [[w:Apple Inc.|Apple]] announce a partnership at {{w|Worldwide Developers Conference}}, unveiling "Apple Intelligence." ChatGPT integrates with {{w|iOS}}, {{w|iPadOS}}, {{w|macOS}}, Writing Tools, and {{w|Siri}} to enhance AI-driven user experiences. Key features include streamlined content creation, improved virtual assistance with ChatGPT-enabled Siri, and optional user account linking. Privacy is prioritized, with no storage of IP addresses or query data. For marketers, this collaboration offers tools to enhance content creation, customer service, and productivity, leveraging Apple’s vast user base. The partnership is expected to set new benchmarks for AI integration, driving innovation across the industry.<ref>{{cite web |title=OpenAI and Apple Announce Partnership |url=https://openai.com/index/openai-and-apple-announce-partnership/ |website=openai.com |accessdate=15 December 2024}}</ref>
|-
| 2024 || June 13 || OpenAI hires retired U.S. Army General || Team || OpenAI appoints Retired U.S. Army General Paul M. Nakasone to its Board of Directors, reflecting the company's focus on cybersecurity as AI technology advances. Nakasone, a leading expert in cybersecurity and former head of U.S. Cyber Command and the NSA, joins OpenAI's Safety and Security Committee. He is expected to help OpenAI enhance its security measures and guide efforts to ensure the safe development of artificial general intelligence (AGI).<ref>{{cite web |title=OpenAI Appoints Retired US Army General |url=https://openai.com/index/openai-appoints-retired-us-army-general/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || June 21 || OpenAI acquires Rockset || Acquisition || OpenAI acquires Rockset, a company specializing in real-time analytics and data infrastructure. The acquisition aims to bolster OpenAI’s capabilities in managing and analyzing large datasets efficiently, which is essential for the continuous improvement and scalability of AI models. By integrating Rockset’s technology, OpenAI seeks to enhance its ability to process and query data in real-time, improving the performance of AI applications in various industries, from finance to healthcare.<ref>{{cite web |title=OpenAI Acquires Rockset |url=https://openai.com/index/openai-acquires-rockset/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || August 13 || SWE-bench Verified is introduced || Product launch || OpenAI introduces SWE-bench Verified, a human-validated subset of the SWE-bench benchmark designed to more reliably evaluate AI models on real-world software engineering tasks such as debugging, writing code, and understanding complex software requirements. Professional developers review each sample to confirm the task is solvable and the evaluation criteria are fair, providing developers and researchers with more accurate insights into the capabilities and limitations of large language models in software engineering contexts.<ref>{{cite web |title=Introducing SWE Bench: Verified |url=https://openai.com/index/introducing-swe-bench-verified/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || August 20 || Partnership with {{w|Condé Nast}} || Partnership || OpenAI partners with global mass media company {{w|Condé Nast}} to integrate artificial intelligence into digital publishing. The partnership explores how OpenAI’s language models can enhance content creation, improve personalized recommendations, and streamline editorial workflows. By leveraging AI tools, Condé Nast plans to elevate the reader experience across its media brands, offering more tailored content and automated systems to assist their journalists and editors. This partnership is positioned to push the boundaries of AI’s role in modern media.<ref>{{cite web |title=Conde Nast and OpenAI Announce Strategic Partnership |url=https://openai.com/index/conde-nast/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || September 12 || Enhancing reasoning in LLMs || Publication || OpenAI publishes a post discussing advancements in improving the reasoning capabilities of its large language models (LLMs). It emphasizes that reasoning is a complex skill for AI systems and describes techniques to enhance logical thinking, problem-solving, and decision-making in LLMs. The post highlights methods, such as fine-tuning and reinforcement learning, that help models handle abstract tasks like mathematical reasoning or causal inference. It also showcases practical applications in fields such as science, technology, and education. The goal is to push LLMs closer to human-level reasoning, enhancing their ability to handle real-world challenges.<ref>{{cite web |title=Learning to Reason with Large Language Models |url=https://openai.com/index/learning-to-reason-with-llms/ |website=openai.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || September 12 || o1 AI model family introduction || Product launch || OpenAI introduces its new o1 family of AI models, previously known under the codename Strawberry and initially released as o1-preview and o1-mini. The family is designed to significantly enhance reasoning and problem-solving capabilities compared to earlier models like {{w|GPT-4}}. The o1 models are touted for their advanced performance, with claims of achieving PhD-level proficiency on certain benchmarks. The launch represents a major step towards more sophisticated artificial intelligence, with the potential to bring OpenAI closer to developing Artificial General Intelligence (AGI). The models are expected to push the boundaries of AI capabilities, offering more nuanced and accurate responses in various applications.<ref>{{cite web |title=Introducing OpenAI O1 Preview |url=https://openai.com/index/introducing-openai-o1-preview/ |website=openai.com |accessdate=16 September 2024}}</ref><ref>{{cite web |title=OpenAI O1 Model Reasoning Abilities |url=https://www.theverge.com/2024/9/12/24242439/openai-o1-model-reasoning-strawberry-chatgpt |website=theverge.com |accessdate=16 September 2024}}</ref><ref>{{cite web |title=OpenAI Launches New AI Model Family O1 |url=https://venturebeat.com/ai/forget-gpt-5-openai-launches-new-ai-model-family-o1-claiming-phd-level-performance/ |website=venturebeat.com |accessdate=16 September 2024}}</ref><ref>{{cite web |title=Sam Altman on OpenAI O1 Model Capabilities |url=https://www.businessinsider.com/sam-altman-openai-new-o1-model-capabilities-agi-2024-9 |website=businessinsider.com |accessdate=16 September 2024}}</ref>
|-
| 2024 || September 25 || CTO resigns || Team || Mira Murati, Chief Technology Officer at OpenAI, announces her resignation, marking a significant leadership departure. Murati, a key figure in OpenAI's growth and innovations like GPT-4, cites a desire to pursue personal exploration. Her resignation follows a series of high-profile exits in the year, as OpenAI undergoes restructuring, potentially valuing the company at $150 billion. CEO Sam Altman praises Murati's contributions, while internal promotions are announced to fill leadership gaps. Murati's departure reflects ongoing shifts as OpenAI navigates its transition to a for-profit model.<ref>{{cite web |title=OpenAI CTO Mira Murati Says She Will Leave the Company |url=https://www.bloomberg.com/news/articles/2024-09-25/openai-cto-mira-murati-says-she-will-leave-the-company?embedded-checkout=true |website=Bloomberg |accessdate=29 September 2024}}</ref>
|-
| 2024 || September 26 || OpenAI and GEDI collaboration || Partnership || OpenAI partners with the Italian media group GEDI to enhance the availability of Italian-language news content for ChatGPT users. This collaboration aims to integrate content from GEDI's reputable publications, such as ''{{w|La Repubblica}}'' and ''{{w|La Stampa}}'', into OpenAI’s offerings, improving the relevance and accuracy of information available to users in Italy. The partnership aims to support GEDI’s digital transformation and allows both organizations to explore further collaborations in AI applications related to news access.<ref>{{cite web |title=OpenAI and GEDI partner for Italian news content |url=https://openai.com/index/gedi/ |website=OpenAI |accessdate=13 October 2024}}</ref>
|-
| 2024 || October 8 || OpenAI partners with Hearst || Partnership || OpenAI announces a content partnership with Hearst, which aims to integrate Hearst’s extensive array of trusted lifestyle and local news publications into OpenAI's products. This collaboration encompasses over 20 magazine brands and more than 40 newspapers, enhancing the information available to ChatGPT's users. Hearst's content is incorporated to ensure high-quality journalism is at the core of AI products, providing users with relevant and reliable information, complete with citations and links to original sources.<ref>{{cite web |title=OpenAI and Hearst Content Partnership |url=https://openai.com/index/hearst/ |website=OpenAI |accessdate=13 October 2024}}</ref>
|-
| 2024 || October 9 || OpenAI opens Paris subsidiary || Expansion || OpenAI announces a subsidiary in Paris as part of the city’s initiative to become a leading AI hub in Europe. This expansion is expected to enable OpenAI to strengthen its presence in the European tech landscape and collaborate with local experts in AI development.<ref>{{cite web |title=ChatGPT maker OpenAI to launch subsidiary in Paris as city aims to become AI hub |url=https://www.euronews.com/next/2024/10/09/chatgpt-maker-openai-to-launch-subsidiary-in-paris-as-city-aims-to-become-ai-hub-report |website=Euronews |accessdate=13 October 2024}}</ref>
|-
| 2024 || October 10 || OpenAI blocks malicious campaigns || Security || OpenAI blocks 20 global malicious campaigns that were leveraging its AI models for cybercrime and disinformation. These campaigns include using ChatGPT and DALL-E to generate and spread misleading content, such as microtargeted emails, social media posts, and fabricated images. Some campaigns, like "Stop News," used DALL-E for creating eye-catching images aimed at manipulating social media users, while others, such as "Bet Bot" and "Corrupt Comment," focus on promoting gambling sites and creating fake social media interactions. OpenAI’s actions reflect its commitment to curbing the misuse of its technology, with stronger content policies and collaborations with cybersecurity teams to take down abusive accounts.<ref>{{cite web |title=OpenAI Blocks 20 Global Malicious Campaigns |url=https://www.thehackernews.com/2024/10/openai-blocks-20-global-malicious.html |website=The Hacker News |accessdate=13 October 2024}}</ref>
|-
| 2024 || October 13 || $6.6 billion funding secured || Funding round || OpenAI announces a funding round in which the company secures $6.6 billion, raising its valuation to $157 billion.<ref>{{cite web |title=OpenAI: The startup that secured $6.6bn in funding |url=https://fintechmagazine.com/articles/openai-the-startup-that-secured-6-6bn-in-funding |website=FinTech Magazine |accessdate=13 October 2024}}</ref>
|-
| 2024 || October 13 || Swarm framework launched || Product release || OpenAI introduces Swarm, an experimental framework designed to create and coordinate multi-agent systems. Swarm enables multiple AI agents to work together towards achieving complex tasks, providing a scalable and efficient approach to automation. This framework is expected to enhance collaboration between agents, optimizing task execution by distributing responsibilities. The move sparks debate about the implications of AI-driven automation, including concerns around control, safety, and ethical considerations. By leveraging Swarm, OpenAI aims to push the boundaries of AI’s capabilities, improving problem-solving and decision-making in various fields such as robotics, finance, and software development.<ref>{{cite web |title=OpenAI Introduces SWARM: A Framework for Building Multi-Agent Systems |url=https://analyticsindiamag.com/ai-news-updates/openai-introduces-swarm-a-framework-for-building-multi-agent-systems/ |website=analyticsindiamag.com |accessdate=24 October 2024}}</ref><ref>{{cite web |title=OpenAI Introduces Experimental Framework SWARM |url=https://www.binance.com/en/square/post/2024-10-14-openai-introduces-experimental-framework-swarm-14839103902105 |website=binance.com |accessdate=24 October 2024}}</ref><ref>{{cite web |title=OpenAI Unveils Experimental SWARM Framework Igniting Debate on AI-Driven Automation |url=https://venturebeat.com/ai/openai-unveils-experimental-swarm-framework-igniting-debate-on-ai-driven-automation/ |website=venturebeat.com |accessdate=24 October 2024}}</ref><ref>{{cite web |title=OpenAI Announces SWARM Framework for AI Orchestration |url=https://www.infoq.com/news/2024/10/openai-swarm-orchestration/ |website=infoq.com |accessdate=24 October 2024}}</ref>
 +
|-
 +
| 2024 || November 15 || Musk Expands Lawsuit Against OpenAI, Microsoft, and Reid Hoffman || Legal || Elon Musk expands his lawsuit against OpenAI to include Microsoft and LinkedIn co-founder Reid Hoffman as defendants. Musk alleges the company deviated from its original mission as a non-profit and accuses it, along with Microsoft, of monopolistic practices that harm competitors like his own company, xAI. The suit also claims OpenAI's transformation into a $157 billion for-profit entity violated founding principles. OpenAI and Microsoft deny the allegations, with OpenAI calling Musk’s claims "baseless." This legal action coincides with Musk’s selection for a government cost-cutting role by President-elect Donald Trump.<ref>{{cite web |title=BBC News Article |url=https://www.bbc.com/news/articles/c93716xdgzqo |website=BBC News |accessdate=19 November 2024}}</ref>
 +
|-
 +
| 2024 || November 19 || ANI sues OpenAI over Copyright || Legal || India's leading news agency, Asian News International (ANI), files a lawsuit against OpenAI in the Delhi High Court, accusing the company of using its content without consent to train AI models like ChatGPT. ANI also claims ChatGPT generated false information, including a fake interview with politician Rahul Gandhi, damaging its credibility. The case highlights concerns over copyright violations, misinformation, and AI's use of publicly available content. OpenAI defends its practices, arguing copyright laws don’t protect facts. This case, by this time part of a global debate, is considered to set significant precedents for AI and copyright law in India and beyond.<ref>{{cite web |title=Indian News Agency Files 287-Page Lawsuit Against OpenAI |url=https://autogpt.net/indian-news-agency-files-287-page-lawsuit-against-openai/ |website=AutoGPT.net |accessdate=19 November 2024}}</ref>
 +
|-
 +
| 2024 || December 4 || Altman sdjusts AGI expectations || Notable comment || Sam Altman tempers expectations about the impact of artificial general intelligence (AGI). Speaking at The New York Times DealBook Summit, Altman suggests AGI might arrive sooner than anticipated but have less immediate impact than previously imagined. He emphasizes that societal change would likely be gradual, with significant disruption occurring only over time. By this tume, OpenAI had seemingly shifted its focus, redefining AGI as less transformative while reserving "superintelligence" for more profound advancements. Altman hints superintelligence could emerge in a few decades. Meanwhile, OpenAI’s AGI declaration is thought being able to affect its Microsoft partnership, potentially positioning it as a dominant for-profit tech entity.<ref>{{cite web |title=Sam Altman says OpenAI is lowering the bar for AGI |url=https://www.theverge.com/2024/12/4/24313130/sam-altman-openai-agi-lower-the-bar |website=The Verge |accessdate=19 November 2024}}</ref>
 +
|-
 +
| 2024 || December 4 || OpenAI x Future collaboration || Partnership || OpenAI partners with Future, a global specialist media platform, to integrate content from over 200 Future brands, including Marie Claire, PC Gamer, and TechRadar, into ChatGPT. This collaboration expands the reach of Future’s journalism and enhances ChatGPT’s capabilities by providing users access to trusted, specialist content with attribution and links to original articles. Future also leverages OpenAI’s technology for AI-driven chatbots and productivity tools across editorial, sales, and marketing. Future CEO Jon Steinberg and OpenAI COO Brad Lightcap emphasize the partnership’s role in audience growth, innovation, and creating new opportunities for content discovery and engagement.<ref>{{cite web |title=OpenAI and Future Partner on Specialist Content |url=https://openai.com/index/openai-and-future-partner-on-specialist-content/ |website=openai.com |accessdate=15 December 2024}}</ref>
 +
|-
 +
| 2024 || December 5 || OpenAI partners with Anduril for defense || Partnership || OpenAI announces a partnership with {{w|Anduril Industries}}, a military defense technology firm, to use AI for national security missions, focusing on countering drone threats. This collaboration follows OpenAI's policy shift allowing military applications of its technology. Anduril, founded by Palmer Luckey, develops defense systems and drones used by the US, UK, and Australia. The partnership aims to enhance U.S. military capabilities against adversaries like China. OpenAI CEO Sam Altman emphasizes responsible AI use for safeguarding democracy and preventing conflict. Anduril CEO Brian Schimpf highlights the role of AI in enabling faster, accurate decisions in high-pressure defense scenarios.<ref>{{cite web |title=OpenAI Partners with Military Defense Tech Company |url=https://www.enca.com/business/openai-partner-military-defense-tech-company |website=eNCA |accessdate=19 November 2024}}</ref>
 +
|-
 +
| 2024 || December 6 || OpenAI launches ChatGPT Pro for advanced AI access || Product launch || OpenAI launches ChatGPT Pro, a $200/month plan offering unlimited access to its most advanced model, OpenAI o1, alongside o1-mini, GPT-4o, and Advanced Voice. The plan includes o1 pro mode, designed for complex problem-solving with enhanced computational power. It outperforms other versions in benchmarks for math, science, and coding, scoring significantly higher in tests like AIME 2024 and Codeforces. Upcoming features include web browsing, file uploads, and image processing. Additionally, OpenAI grants 10 ChatGPT Pro subscriptions to U.S. medical researchers, including experts from Boston Children’s Hospital and Berkeley Lab, to advance studies in rare diseases, aging, and immunotherapy.<ref>{{cite web |title=OpenAI launches ChatGPT Pro |url=https://siliconcanals.com/open-ai-launches-chatgpt-pro/ |website=Silicon Canals |accessdate=19 November 2024}}</ref>
 +
|-
 +
|}
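
The Swarm entry above describes agents that cooperate by routing tasks to one another. As a rough, self-contained sketch of that "handoff" pattern — this is not OpenAI's Swarm API, and every name below is invented for illustration:

```python
class Agent:
    """A toy agent: handle(task) returns (reply, next_agent_or_None)."""
    def __init__(self, name, handle):
        self.name = name
        self.handle = handle

def run(agent, task):
    """Pass the task along the chain of agents until no further handoff."""
    transcript = []
    while agent is not None:
        reply, agent = agent.handle(task)
        transcript.append(reply)
    return transcript

# A triage agent hands arithmetic tasks to a specialist "math" agent.
math_agent = Agent("math", lambda t: ("math: " + str(sum(int(x) for x in t.split("+"))), None))
triage_agent = Agent("triage", lambda t: ("triage: routing to math", math_agent))

print(run(triage_agent, "2 + 3"))  # ['triage: routing to math', 'math: 5']
```

The point of the pattern is that control flow lives in the agents' handoff decisions rather than in a central script, which is the coordination style multi-agent frameworks like Swarm are built around.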
== Numerical and visual data ==

=== Google Scholar ===

The following table summarizes per-year mentions of "OpenAI" on Google Scholar.
  
 
{| class="sortable wikitable"
! Year
! "OpenAI"
|-
| 2015 || 87
|-
| 2016 || 392
|-
| 2017 || 957
|-
| 2018 || 2,280
|-
| 2019 || 4,640
|-
| 2020 || 7,280
|-
| 2021 || 10,200
|-
| 2022 || 12,200
|-
| 2023 || 43,500
|-
| 2024 || 89,900
|}
=== Google Trends ===

The chart below shows {{w|Google Trends}} data for OpenAI (Artificial intelligence company), from January 2020 to December 2024, when the screenshot was taken. Interest is also ranked by country and displayed on a world map. Note the spike in interest starting at the end of 2022, when OpenAI launched ChatGPT.<ref>{{cite web |title=Google Trends data for ChatGPT (January 2020 to December 2024) |url=https://trends.google.com/trends/explore?date=2020-01-01%202024-12-18&q=%2Fg%2F11bxc656v6&hl=en |website=trends.google.com |accessdate=15 December 2024}}</ref>

[[File:Openaigoogletrends122024.png|thumb|center|600px]]
=== Google Ngram Viewer ===

The chart below shows {{w|Google Ngram Viewer}} data for OpenAI, from 2000 to 2022.<ref>{{cite web |title=Google Ngram Viewer: OpenAI (2000-2022) |url=https://books.google.com/ngrams/graph?content=OpenAI&year_start=2000&year_end=2022&corpus=en&smoothing=3&case_insensitive=false |website=books.google.com |accessdate=15 December 2024}}</ref>

[[File:Openaingramviewer2022.PNG|thumb|center|700px]]
=== Wikipedia Views ===

The chart below shows pageviews of the English Wikipedia article OpenAI, from July 2015 to November 2024. Note the spike in interest following the release of ChatGPT.<ref>{{cite web |title=Pageview analysis for OpenAI across multiple months |url=https://wikipediaviews.org/displayviewsformultiplemonths.php?page=OpenAI&allmonths=allmonths-api&language=en&drilldowns[0]=mobile-web&drilldowns[1]=mobile-app&drilldowns[2]=desktop-spider&drilldowns[3]=mobile-web-spider |website=wikipediaviews.org |accessdate=15 December 2024}}</ref>

[[File:Openaiwikipediaviews122024.PNG|thumb|center|450px]]
  
 
==Meta information on the timeline==

===How the timeline was built===

The initial version of the timeline was written by [[User:Issa|Issa Rice]]. It has been expanded considerably by [[User:Sebastian|Sebastian]].

{{funding info}} is available.

===What the timeline is still missing===
* Vipul: bunch of people leaving OpenAI in December 2020/January 2021, including people working on AI safety/transparency: "OpenAI departures: Dario Amodei, Sam McCandlish, Tom Brown, Tom Henighan, Chris Olah, Jack Clark, Ben Mann, Paul Christiano et al leave—most for an unspecified new entity (“the elves leave Middle Earth”?)" [https://www.gwern.net/newsletter/2021/01#ai] [https://www.lesswrong.com/posts/7r8KjgqeHaYDzJvzF/dario-amodei-leaves-openai] ✔
* https://www.jointjs.com/demos/chatgpt-timeline
* New GPT-3 tool ✔
* https://thenextweb.com/neural/2021/01/13/googles-new-trillion-parameter-ai-language-model-is-almost-6-times-bigger-than-gpt-3/ ✔
* January 2021: Jan Leike joins OpenAI to lead AI alignment work [https://twitter.com/janleike/status/1352681093007200256] ✔
* https://singularityhub.com/2021/04/04/openais-gpt-3-algorithm-is-now-producing-billions-of-words-a-day/?fbclid=IwAR0hIm6QNfclIgAW72-etUTp8iwZE_9uY8g7bUv4qoq0Dx9cTG2kBSxrlsY ✔
  
 
===Timeline update strategy===

* https://www.google.com/search?q=site:nytimes.com+%22OpenAI%22
* https://blog.OpenAI.com/ (but check to see if the announcement of a blog post is covered by other sources)
  
 
==See also==

* [[Timeline of ChatGPT]]
* [[Timeline of large language models]]
* [[Timeline of transformers]]
* [[Timeline of DeepMind]]
* [[Timeline of Neuralink]]
* [[Timeline of Future of Humanity Institute]]
* [[Timeline of Centre for the Study of Existential Risk]]
  
 
==External links==

* [https://OpenAI.com/ Official website]
* [https://blog.OpenAI.com/ Official OpenAI blog]
* [https://github.com/OpenAI OpenAI GitHub account]
  
 
==References==

{{Reflist|30em}}

Latest revision as of 19:45, 21 December 2024


Sample questions

The following are some interesting questions that can be answered by reading this timeline:

  • What are some significant events previous to the creation of OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Prelude".
    • You will see some events involving key people like Elon Musk and Sam Altman, that would eventually lead to the creation of OpenAI.
  • What are the various papers and posts published by OpenAI on their research?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Research".
    • You will see mostly papers submitted to the ArXiv by OpenAI-affiliated researchers. Also blog posts.
  • What are the several toolkits, implementations, algorithms, systems and software in general released by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Product release".
    • You will see a variety of releases, some of them open-sourced.
    • You will see some discoveries and other significant results obtained by OpenAI.
  • What are some updates mentioned in the timeline?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Product update".
  • Who are some notable team members having joined OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Team".
    • You will see the names of incorporated people and their roles.
  • What are the several partnerships between OpenAI and other organizations?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Partnership".
    • You will read collaborations with organizations like DeepMind and Microsoft.
  • What are some significant fundings granted to OpenAI by donors?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Donation".
    • You will see names like the Open Philanthropy Project, and Nvidia, among others.
  • What are some notable events hosted by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Event hosting".
  • What are some other publications by OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Publication".
    • You will see a number of publications not specifically describing their scientific research, but other purposes, including recommendations and contributions.
  • What are some notable publications by third parties about OpenAI?
    • Sort the full timeline by "Event type" and look for the group of rows with value "Coverage".
  • Other events are described under the following types: "Achievement", "Advocacy", "Background", "Collaboration", "Commitment", "Competition", "Congressional hearing", "Education", "Financial", "Integration", "Interview", "Notable comment", "Open sourcing", "Product withdrawal", "Reaction", "Recruitment", "Software adoption", and "Testing".

Big picture

{| class="wikitable"
! Time period (approximately) !! Development summary !! More details
|-
| 2015–2017 || Early years || OpenAI is established as a nonprofit organization with the mission to ensure that artificial general intelligence (AGI) benefits all of humanity. Its co-founders include Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba. During this period, the organization focuses on foundational artificial intelligence research, publishing influential papers and open-sourcing tools like OpenAI Gym[1], designed for reinforcement learning.
|-
| 2018–2019 || Growth and expansion || OpenAI broadens its research scope and achieves significant breakthroughs in natural language processing and reinforcement learning. This period sees the introduction of Generative Pre-trained Transformers (GPTs), which are capable of tasks such as text generation and question answering. In 2019, OpenAI transitions to a "capped-profit" model (OpenAI LP) to attract funding, securing a $1 billion investment from Microsoft.[2] This partnership provides access to the Microsoft Azure cloud platform for AI training. Other notable developments include the cautious release of GPT-2, due to concerns about potential misuse of its text generation capabilities.
|-
| 2020–2021 || Launch of GPT-3 and commercialization || In June 2020, OpenAI releases GPT-3, its most advanced language model at the time, which gains attention for its ability to generate coherent and human-like text. OpenAI introduces an API, enabling developers to integrate GPT-3 into various applications. The organization focuses on ethical AI development and forms partnerships to embed GPT-3 capabilities into tools like Microsoft Teams and Power Apps. In 2021, OpenAI introduces Codex, a specialized model designed to translate natural language into programming code, which powers tools like GitHub Copilot.
|-
| 2022 || Launch of ChatGPT and further advancements || OpenAI launches ChatGPT, based on a fine-tuned version of GPT-3.5, in late 2022.[3] The tool revolutionizes conversational AI by offering practical and accessible applications for both individual and professional users. ChatGPT’s widespread adoption supports the launch of OpenAI’s subscription service, ChatGPT Plus.
|-
| 2023 || GPT-4 and multimodal AI || OpenAI introduces GPT-4 in March 2023, marking a significant advancement with its ability to process both text and image inputs.[4] The model powers applications like ChatGPT Plus, offering enhanced reasoning and problem-solving capabilities. Other key developments include the release of DALL·E 3, an advanced image generation model integrated into ChatGPT, featuring capabilities like inpainting and prompt editing.[5] OpenAI’s ongoing emphasis on safety and alignment results in improved measures to mitigate harmful outputs.
|-
| 2024–present || Continued innovation and accessibility || OpenAI focuses on expanding the accessibility of its tools, introducing custom GPTs that enable users to create personalized AI assistants. Voice interaction capabilities are added, enhancing usability for diverse applications. OpenAI strengthens partnerships with educational and governmental institutions to promote AI literacy and responsible AI deployment. The organization continues to prioritize AGI safety, collaborating with other entities to ensure secure advancements in the field of artificial intelligence.
|}

Summary by year

{| class="wikitable"
! Time period !! Development summary
|-
| 2015 || A group of influential individuals including Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research join forces to establish OpenAI. With a commitment of more than $1 billion, the organization expresses a strong dedication to advancing the field of AI for the betterment of humanity. They announce their intention to foster open collaboration by making their work accessible to the public and actively engaging with other institutions and researchers.[6]
|-
| 2016 || OpenAI breaks from the norm by offering corporate-level salaries instead of the typical nonprofit-level salaries. They also release OpenAI Gym, a platform dedicated to reinforcement learning research. Later in December, they introduce Universe, a platform that facilitates the measurement and training of AI's general intelligence across various games, websites, and applications.[6]
|-
| 2017 || A significant portion of OpenAI's expenditure is allocated to cloud computing, amounting to $7.9 million. On the other hand, DeepMind's expenses for that particular year soar to $442 million, representing a notable difference.[6]
|-
| 2018 || OpenAI undergoes a shift in focus towards more extensive research and development in AI. They introduce Generative Pre-trained Transformers (GPTs). These neural networks, inspired by the human brain, are trained on large amounts of human-generated text and could perform tasks like generating and answering questions. In the same year, Elon Musk resigns from his board seat at OpenAI, citing a potential conflict of interest with his role as CEO of Tesla, which is developing AI for self-driving cars.[6]
|-
| 2019 || OpenAI makes a transition from a non-profit organization to a for-profit model, with a capped profit limit of 100 times the investment made. This allows OpenAI LP to attract investment from venture funds and offer employees equity in the company. OpenAI forms a partnership with Microsoft, which invests $1 billion in the company. OpenAI also announces plans to license its technologies for commercial use. However, some researchers would criticize the shift to a for-profit status, raising concerns about the company's commitment to democratizing AI.[7]
|-
| 2020 || OpenAI introduces GPT-3, a language model trained on extensive internet datasets. While its main function is to provide answers in natural language, it can also generate coherent text spontaneously and perform language translation. OpenAI also announces their plans to develop a commercial product centered around an API called "the API," which is closely connected to GPT-3.[6]
|-
| 2021 || OpenAI introduces DALL-E, an advanced deep-learning model that has the ability to generate digital images by interpreting natural language descriptions.[6]
|-
| 2022 || OpenAI introduces ChatGPT,[8] which would soon become the fastest-growing app of all time.[9]
|-
| 2023 || OpenAI releases GPT-4 to ChatGPT Plus, marking a major AI advancement. The organization faces internal turmoil with CEO Sam Altman's brief dismissal. OpenAI transitions to a for-profit model to attract investment, particularly from Microsoft, while continuing its pursuit of artificial general intelligence (AGI).
|-
| 2024 || OpenAI continues to advance AI technologies, including further developments of GPT-4. The organization focuses on ethical AI deployment, emphasizing safety and transparency. Collaborations, particularly with Microsoft, strengthen its resources for progressing toward artificial general intelligence (AGI). OpenAI faces ongoing discussions about governance and AI's societal impact.
|}

Full timeline

Inclusion criteria

The following events are selected for inclusion in the timeline:

  • Most blog posts by OpenAI, many describing important advancements in their research.
  • Product releases, including models and software in general.
  • Partnerships.

We do not include:

  • Comprehensive information on the team's arrivals and departures within a company.
  • Many of OpenAI's research papers, which are not individually listed on the full timeline, but can be found on the talk page as additional entries.

Timeline

Year Month and date Domain/key topic/caption Event type Details
2014 October 22–24 AI's existential threat Prelude During an interview at the AeroAstro Centennial Symposium, Elon Musk, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".[10][11]
2015 February 25 Superhuman AI threat Prelude Sam Altman, president of Y Combinator who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[12]
2015 May 6 Greg Brockman leaves Stripe Prelude Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of Stripe. In the post, in the section "What comes next" he writes "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".[13][14]
2015 June 4 Altman's AI safety concern Prelude At Airbnb's Open Air 2015 conference, Sam Altman, president of Y Combinator who would later become a co-chair of OpenAI, states his concern for advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[15]
2015 July (approximate) AI research dinner Prelude Sam Altman sets up a dinner in Menlo Park, California to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, Ilya Sutskever, and Elon Musk.[16]
2015 Late year Musk's $1 billion proposal Funding In their foundational phase, OpenAI co-founders Greg Brockman and Sam Altman initially aim to raise $100 million to launch its initiatives focused on developing artificial general intelligence (AGI). Recognizing the ambitious scope of the project, Elon Musk suggests a significantly larger funding goal of $1 billion to ensure the project’s viability. He expresses willingness to cover any funding shortfall.[17]
2015 December 11 OpenAI's mission statement OpenAI launch OpenAI is introduced as a non-profit artificial intelligence research organization dedicated to advancing digital intelligence for the benefit of humanity, without the constraints of financial returns. OpenAI expresses its aim to ensure that AI acts as an extension of individual human will and is broadly accessible. The organization recognizes the potential risks and benefits of achieving human-level AI.[18]
2015 December Wikipedia Coverage The article "OpenAI" is created on Wikipedia.[19]
2015 December Team OpenAI announces Y Combinator founding partner Jessica Livingston as one of its financial backers.[20]
2016 January Ilya Sutskever Team Ilya Sutskever joins OpenAI as Research Director.[21][22]
2016 January 9 AMA Education The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.[23]
2016 February 25 Deep learning, neural networks Research OpenAI introduces weight normalization as a technique that improves the training of deep neural networks by decoupling the length and direction of weight vectors. It enhances optimization and speeds up convergence without introducing dependencies between examples in a minibatch. This method is effective for recurrent models and noise-sensitive applications, providing a significant speed-up similar to batch normalization but with lower computational overhead. Applications in image recognition, generative modeling, and deep reinforcement learning demonstrate the effectiveness of weight normalization.[24]
2016 March 31 Ian Goodfellow Team A blog post from this day announces that Ian Goodfellow has joined OpenAI.[25] Previously, Goodfellow worked as Senior Research Scientist at Google.[26][22]
2016 April 26 Robotics Team Pieter Abbeel joins OpenAI.[27][22] His work focuses on robot learning, reinforcement learning, and unsupervised learning. A cutting-edge researcher, Abbeel works on robots that learn various tasks, including locomotion and vision-based robotic manipulation.[28]
2016 April 27 Reinforcement learning Product release OpenAI releases OpenAI Gym, a toolkit for reinforcement learning (RL) algorithms. It offers various environments for developing and comparing RL algorithms, with compatibility across different frameworks. RL enables agents to learn decision-making and motor control in complex environments. OpenAI Gym addresses the need for diverse benchmarks and standardized environments in RL research. The toolkit encourages feedback and collaboration to enhance its capabilities.[29][30][31]
2016 May 25 Natural language processing Research OpenAI-affiliated researchers publish a paper introducing an extension of adversarial training and virtual adversarial training for text classification tasks. Adversarial training is a regularization technique for supervised learning, while virtual adversarial training extends it to semi-supervised learning. However, these methods require perturbing multiple entries of the input vector, which is not suitable for sparse high-dimensional inputs like one-hot word representations in text. In this paper, the authors propose applying perturbations to word embeddings in a recurrent neural network (RNN) instead of the original input. This text-specific approach achieves state-of-the-art results on multiple benchmark tasks for both semi-supervised and purely supervised learning. The authors provide visualizations and analysis demonstrating the improved quality of the learned word embeddings and the reduced overfitting during training.[32]
2016 June 16 Generative models Research OpenAI publishes a post introducing the concept of generative models, which are a type of unsupervised learning technique in machine learning. The post emphasizes the importance and potential of generative models in understanding and replicating complex data sets, and it showcases recent advancements in this field. Generative models aim to understand and replicate the patterns and features present in a given dataset. The post discusses the use of generative models in generating images, particularly with the example of the DCGAN network. It explains the training process of generative models, including the use of Generative Adversarial Networks (GANs) and other approaches. The post highlights three popular types of generative models: Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and autoregressive models. Each of these approaches has its own strengths and limitations. The post also mentions recent advancements in generative models, including improvements to GANs, VAEs, and the introduction of InfoGAN. The last part briefly mentions two projects related to generative models in the context of reinforcement learning. One project focuses on curiosity-driven exploration using Bayesian neural networks. The other project explores generative models in reinforcement learning for training agents.[33]
2016 June 21 AI safety Research OpenAI-affiliated researchers publish a paper addressing the potential impact of accidents in machine learning systems. They outline five practical research problems related to accident risk, categorized based on the origin of the problem. These categories include having the wrong objective function, an objective function that is too expensive to evaluate frequently, and undesirable behavior during the learning process. The authors review existing work in these areas and propose research directions relevant to cutting-edge AI systems. They also discuss how to approach the safety of future AI applications effectively.[34][35]
2016 July Dario Amodei joins OpenAI Team Dario Amodei joins OpenAI,[36] serving as Team Lead for AI Safety.[37][22]
2016 July 28 Security and adversarial AI, automated programming, cybersecurity, multi-agent systems, simulation Recruitment OpenAI publishes a post calling for applicants to work on significant problems in AI that have a meaningful impact. They list several problem areas that they believe are crucial for advancing AI and its implications for society. The first problem area mentioned is detecting covert breakthrough AI systems being used by organizations for potentially malicious purposes. OpenAI emphasizes the need to develop methods to identify such undisclosed AI breakthroughs, which could be achieved through various means like monitoring news, financial markets, and online games. Another area of interest is building an agent capable of winning online programming competitions. OpenAI recognizes the power of a program that can generate other programs, and they see the development of such an agent as highly valuable. Additionally, OpenAI highlights the significance of cyber-security defense. They stress the need for AI techniques to defend against sophisticated hackers who may exploit AI methods to break into computer systems. Lastly, OpenAI expresses interest in constructing a complex simulation with multiple long-lived agents. Their aim is to create a large-scale simulation where agents can interact, learn over an extended period, develop language, and achieve diverse goals.[38]
2016 August 15 AI Research Donation American multinational technology company Nvidia announces that it has donated the first Nvidia DGX-1 (a supercomputer) to OpenAI, which plans to use the supercomputer to train its AI on a corpus of conversations from Reddit. The DGX-1 supercomputer is expected to enable OpenAI to explore new problems and achieve higher levels of performance in AI research.[39][40][41]
2016 August 29 Infrastructure Research OpenAI publishes an article discussing the infrastructure necessary for deep learning. The research process starts with small ad-hoc experiments that need to be quickly conducted, so deep learning infrastructure must be flexible and allow users to analyze the models effectively. Then, the model is scaled up, and experiment management becomes critical. The article describes an example of improving Generative Adversarial Network training, from a prototype on MNIST and CIFAR-10 datasets to a larger model on the ImageNet dataset. The article also discusses the software and hardware infrastructure necessary for deep learning, such as Python, TensorFlow, and high-end GPUs. Finally, the article emphasizes the importance of simple and usable infrastructure management tools.[42]
2016 October 11 Robotics Research OpenAI-affiliated researchers publish a paper addressing the challenge of transferring control policies from simulation to the real world. The authors propose a method that leverages the similarity in state sequences between simulation and reality. Instead of directly executing simulation-based controls on a robot, they predict the expected next states using a deep inverse dynamics model and determine suitable real-world actions. They also introduce a data collection approach to improve the model's performance. Experimental results demonstrate the effectiveness of their approach compared to existing methods for addressing simulation-to-real-world discrepancies.[43]
2016 October 18 Safety Research OpenAI-affiliated researchers publish a paper presenting a method called Private Aggregation of Teacher Ensembles (PATE) to address the privacy concerns associated with sensitive training data in machine learning applications. The approach involves training multiple models using disjoint datasets, which contain sensitive information. These models, referred to as "teachers," are not directly published but used to train a "student" model. The student model learns to predict outputs through noisy voting among the teachers and does not have access to individual teachers or their data. The student's privacy is ensured using differential privacy, even when the adversary can inspect its internal workings. The method is applicable to any model, including non-convex models like Deep Neural Networks (DNNs), and achieves state-of-the-art privacy/utility trade-offs on MNIST and Street View House Numbers (SVHN) datasets. The approach combines an improved privacy analysis with semi-supervised learning.[44]
2016 November 14 Generative models Research OpenAI-affiliated researchers publish a paper discussing the challenges in quantitatively analyzing decoder-based generative models, which have shown remarkable progress in generating realistic samples of various modalities, including images. These models rely on a decoder network, which is a deep neural network that defines a generative distribution. However, evaluating the performance of these models and estimating their log-likelihoods can be challenging due to the intractability of the task. The authors propose using Annealed Importance Sampling as a method for evaluating log-likelihoods and validate its accuracy using bidirectional Monte Carlo. They provide the evaluation code for this technique. Through their analysis, they examine the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the issue of overfitting, and the models' ability to capture important modes of the data distribution.[45]
2016 November 15 Cloud computing Partnership Microsoft's artificial intelligence research division partners with OpenAI. Through this collaboration, OpenAI is granted access to Microsoft's virtual machine technology for AI training and simulation, while Microsoft would benefit from cutting-edge research conducted on its Azure cloud platform. Microsoft sees this partnership as an opportunity to advance machine intelligence research on Azure and attract other AI companies to the platform. The collaboration aligns with Microsoft's goal to compete with Google and Facebook in the AI space and strengthen its position as a central player in the industry.[46][47]
2016 December 5 Reinforcement learning Product release OpenAI releases Universe, a tool that aims to train and measure AI frameworks using video games, applications, and websites. The goal is to accelerate the development of generalized intelligence that can excel at multiple tasks. Universe provides a wide range of environments, including Atari 2600 games, flash games, web browsers, and CAD software, for AI systems to learn and improve their capabilities. By applying reinforcement learning techniques, which leverage rewards to optimize problem-solving, Universe enables AI models to perform tasks such as playing games and browsing the web. The tool's versatility and real-world applicability make it valuable for benchmarking AI performance and potentially advancing AI capabilities beyond current platforms like Siri or Google Assistant.[48][49][50][51]
2017 Early year Recognition of resource needs Funding OpenAI realizes that building artificial general intelligence would require significantly more resources than previously anticipated. The organization begins evaluating the vast computational power needed for AGI development, acknowledging that billions of dollars in annual funding would be necessary.[17]
2017 January Paul Christiano joins OpenAI Team Paul Christiano joins OpenAI to work on AI alignment.[52] He was previously an intern at OpenAI in 2016.[53]
2017 March AI governance, philanthropy Donation The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[54] The grant initiates a partnership between Open Philanthropy Project and OpenAI, in which Holden Karnofsky (executive director of Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.[55] The grant is criticized by Maciej Cegłowski[56] and Benjamin Hoffman (who would write the blog post "OpenAI makes humanity less safe")[57][58][59] among others.[60]
2017 March 24 Reinforcement learning Research OpenAI publishes document presenting evolution strategies (ES) as a viable alternative to reinforcement learning techniques. They highlight that ES, a well-known optimization technique, performs on par with RL on modern RL benchmarks, such as Atari and MuJoCo, while addressing some of RL's challenges. ES is simpler to implement, does not require backpropagation, scales well in a distributed setting, handles sparse rewards effectively, and has fewer hyperparameters. The authors compare this discovery to previous instances where old ideas achieved significant results, such as the success of convolutional neural networks (CNNs) in image recognition and the combination of Q-Learning with CNNs in solving Atari games. The implementation of ES is demonstrated to be efficient, with the ability to train a 3D MuJoCo humanoid walker in just 10 minutes using a computing cluster. The document provides a brief overview of conventional RL, compares it to ES, discusses the tradeoffs between the two approaches, and presents experimental results supporting the effectiveness of ES.[61][62]
2017 March Artificial general intelligence Reorganization Greg Brockman and a few other core members of OpenAI begin drafting an internal document to lay out a path to artificial general intelligence. As the team studies trends within the field, they realize staying a nonprofit is financially untenable.[63]
2017 April OpenAI history Coverage An article by Brent Simoneaux and Casey Stegman is published, providing insights into the early days of OpenAI and the individuals involved in shaping the organization. The article begins by debunking the notion that OpenAI's office is filled with futuristic gadgets and experiments. Instead, it describes a typical tech startup environment with desks, laptops, and bean bag chairs, albeit with a small robot tucked away in a side room. OpenAI founders Greg Brockman and Ilya Sutskever were inspired to establish the organization after a dinner conversation in 2015 with tech entrepreneur Sam Altman and Elon Musk. They discussed the idea of building safe and beneficial AI and decided to create a nonprofit organization. Overall, the article provides a glimpse into the early days of OpenAI and the visionary individuals behind the organization's mission to advance AI for the benefit of humanity.[64][65][66]
2017 April 6 Sentiment analysis Product release OpenAI unveils an unsupervised system which is able to perform excellent sentiment analysis, despite being trained only to predict the next character in the text of Amazon reviews.[67][68]
2017 April 6 Generative models Research OpenAI-affiliated researchers publish a paper exploring the capabilities of byte-level recurrent language models. Through extensive training and computational resources, these models acquire disentangled features that represent high-level concepts. Remarkably, the researchers discover a single unit within the model that effectively performs sentiment analysis. The learned representations, achieved through unsupervised learning, outperform existing methods on the binary subset of the Stanford Sentiment Treebank dataset. Moreover, the models trained using this approach are highly efficient in terms of data requirements. Even with a small number of labeled examples, their performance matches that of strong baselines trained on larger datasets. Additionally, the researchers demonstrate that manipulating the sentiment unit in the model influences the generative process, allowing them to produce samples with positive or negative sentiment simply by setting the unit's value accordingly.[69][70]
2017 April 6 OpenAI's Evolution Strategies Research OpenAI unveils work revisiting an old field called "neuroevolution", and a subset of algorithms from it called "evolution strategies," which are aimed at solving optimization problems. In one hour of training on an Atari challenge, an algorithm reaches a level of mastery that took a reinforcement-learning system published by DeepMind in 2016 a whole day to learn. On the walking problem the system takes 10 minutes, compared to 10 hours for DeepMind's approach.[71]
2017 May 15 OpenAI releases Roboschool Product release OpenAI releases Roboschool as an open-source software, integrated with OpenAI Gym, that facilitates robot simulation. It provides a range of environments for controlling robots in simulation, including both modified versions of existing MuJoCo environments and new challenging tasks. Roboschool utilizes the Bullet Physics Engine and offers free alternatives to MuJoCo, removing the constraint of a paid license. The software supports training multiple agents together in the same environment, allowing for multiplayer interactions and learning. It also introduces interactive control environments that require the robots to navigate towards a moving flag, adding complexity to locomotion problems. Trained policies are provided for these environments, showcasing the capability of the software. Overall, Roboschool offers a platform for robotics research, simulation, and control policy development within the OpenAI Gym framework.[72]
2017 May 24 OpenAI releases Baselines Product release OpenAI releases Baselines, a collection of reinforcement learning algorithms that provide high-quality implementations. These implementations serve as reliable benchmarks for researchers to replicate, improve, and explore new ideas in the field of reinforcement learning. The DQN implementation and its variations in OpenAI Baselines achieve performance levels comparable to those reported in published papers. They are intended to serve as a foundation for incorporating novel approaches and as a means of comparing new methods against established ones. By offering these baselines, OpenAI aims to facilitate research advancements in the field of reinforcement learning.[73][74]
2017 June 12 Deep RL from human preferences Research OpenAI-affiliated researchers present a study on deep reinforcement learning (RL) systems. They propose a method to effectively communicate complex goals to RL systems by utilizing human preferences between pairs of trajectory segments. Their approach demonstrates successful solving of complex RL tasks, such as Atari games and simulated robot locomotion, without relying on a reward function. The authors achieve this by providing feedback on less than one percent of the agent's interactions with the environment, significantly reducing the need for human oversight. Additionally, they showcase the flexibility of their approach by training complex novel behaviors in just about an hour of human time. This work surpasses previous achievements in learning from human feedback, as it tackles more intricate behaviors and environments.[75][76][77]
2017 June 28 OpenAI releases mujoco-py Open sourcing OpenAI open-sources mujoco-py, a Python library for robotic simulation based on the MuJoCo engine. It offers parallel simulations, GPU-accelerated rendering, texture randomization, and VR interaction. The new version provides significant performance improvements, allowing for faster trajectory optimization and reinforcement learning. Beginners can use the MjSim class, while advanced users have access to lower-level interfaces.[78]
2017 June OpenAI-DeepMind safety partnership Partnership OpenAI partners with DeepMind’s safety team in the development of an algorithm which can infer what humans want by being told which of two proposed behaviors is better. The learning algorithm uses small amounts of human feedback to solve modern reinforcement learning environments.[79]
2017 July 27 OpenAI introduces parameter noise Research OpenAI publishes a blog post discussing the benefits of adding adaptive noise to the parameters of reinforcement learning algorithms, specifically in the context of exploration. The technique, called parameter noise, enhances the efficiency of exploration by injecting randomness directly into the parameters of the agent's neural network policy. Unlike traditional action space noise, parameter noise ensures that the agent's exploration is consistent across different time steps. The authors demonstrate that parameter noise can significantly improve the performance of reinforcement learning algorithms, leading to higher scores and more effective exploration. They address challenges related to the sensitivity of network layers, changes in sensitivity over time, and determining the appropriate noise scale. The article also provides baseline code and benchmarks for various algorithms, showcasing the benefits of parameter noise in different tasks.[80]
2017 August 12 Reinforcement learning Achievement OpenAI's Dota 2 bot, trained through self-play, emerges victorious against top professional players at The International, a major eSports event. The bot, developed by OpenAI, remains undefeated against the world's best Dota 2 players. While the 1v1 battles are less complex than professional matches, OpenAI reportedly works on a bot capable of playing in larger 5v5 battles. Elon Musk, who watches the event, would express concerns about unregulated AI, emphasizing its potential dangers.[81][82][83][84]
2017 August 13 NYT highlights OpenAI's AI safety work Coverage The New York Times publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.[85]
2017 August 18 OpenAI releases ACKTR and A2C Product release OpenAI releases two new Baselines implementations: ACKTR and A2C. A2C is a deterministic variant of Asynchronous Advantage Actor Critic (A3C), providing equal performance. ACKTR is a more sample-efficient reinforcement learning algorithm than TRPO and A2C, requiring slightly more computation than A2C per update. ACKTR excels in sample complexity by using the natural gradient direction and is only 10-25% more computationally expensive than standard gradient updates. OpenAI has also published benchmarks comparing ACKTR with A2C, PPO, and ACER on various tasks, demonstrating ACKTR's competitive performance. A2C, a synchronous alternative to A3C, is included in this release and is efficient for single-GPU and CPU-based implementations.[86]
2017 September 13 OpenAI introduces LOLA for multi-agent RL Research OpenAI publishes a paper introducing a new method for training agents in multi-agent settings called "Learning with Opponent-Learning Awareness" (LOLA). The method takes into account how an agent's policy affects the learning of the other agents in the environment. The paper shows that LOLA leads to the emergence of cooperation in the iterated prisoners' dilemma and outperforms naive learning in this domain. The LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making it suitable for model-free RL. The method is applied to a grid world task with an embedded social dilemma using recurrent policies and opponent modeling.[87][88]
2017 October 11 OpenAI unveils RoboSumo Product release OpenAI announces development of a simple sumo-wrestling videogame called RoboSumo to advance the intelligence of artificial intelligence (AI) software. In this game, robots controlled by machine-learning algorithms compete against each other. Through trial and error, the AI agents learn to play the game and develop strategies to outmaneuver their opponents. OpenAI's project aims to push the boundaries of machine learning beyond the limitations of heavily-used techniques that rely on labeled example data. Instead, they focus on reinforcement learning, where software learns through trial and error to achieve specific goals. OpenAI believes that competition among AI agents can foster more complex problem-solving and enable faster progress. The researchers also test their approach in other games and scenarios, such as spider-like robots and soccer penalty shootouts.[89][90]
2017 November 6 OpenAI researchers form Embodied Intelligence Team The New York Times reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.[91]
2017 December 6 Neural networks Product release OpenAI releases highly-optimized GPU kernels for neural network architectures with block-sparse weights. These kernels can run significantly faster than cuBLAS or cuSPARSE, depending on the chosen sparsity. They enable the training of neural networks with a large number of hidden units and offer computational efficiency proportional to the number of non-zero weights. The release includes benchmarks that show performance improvements over other algorithms like A2C, PPO, and ACER in various tasks. This development opens up opportunities for training large, efficient, and high-performing neural networks, with potential applications in fields like text analysis and image generation.[92]
2017 Late year Transition to for-profit structure Discussions among OpenAI’s leadership and Elon Musk lead to the decision to establish a for-profit entity to attract necessary funding. Musk expresses a desire for majority equity and control over the organization, emphasizing the urgency of building a competitor to major players like Google and DeepMind.[17]
2018 Early February Proposal to merge with Tesla Notable comment Elon Musk suggests in an email that OpenAI should merge with Tesla, referring to it as a "cash cow" that could provide financial support for AGI development. He believes Tesla can serve as a viable competitor to Google, although he acknowledges the challenges of this strategy.[17]
2018 February 20 AI ethics, security Publication OpenAI co-authors a paper forecasting the potential misuse of AI technology by malicious actors and ways to prevent and mitigate these threats. The report makes high-level recommendations for companies, research organizations, individual practitioners, and governments to ensure a safer world, including acknowledging AI's dual-use nature, learning from cybersecurity practices, and involving a broader cross-section of society in discussions. The paper highlights concrete scenarios where AI can be maliciously used, such as cybercriminals using neural networks to create computer viruses with automatic exploit generation capabilities and rogue states using AI-augmented surveillance systems to pre-emptively arrest people who fit a predictive risk profile.[93][94][95][96][97]
2018 February 20 Donors/advisors Team OpenAI announces changes in donors and advisors. New donors are: Jed McCaleb, Gabe Newell, Michael Seibel, Jaan Tallinn, and Ashton Eaton and Brianne Theisen-Eaton. Reid Hoffman is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), Julia Galef, and Maran Nelson become advisors. Elon Musk departs the board but remains as a donor and advisor.[98][96]
2018 February 26 Robotics Product release OpenAI announces a research release that includes eight simulated robotics environments and a reinforcement learning algorithm called Hindsight Experience Replay (HER). The environments are more challenging than existing ones and involve realistic tasks. HER allows learning from failure by substituting achieved goals for the original ones, enabling agents to learn how to achieve arbitrary goals. The release also includes requests for further research to improve HER and reinforcement learning. The goal-based environments require some changes to the Gym API and can be used with existing reinforcement learning algorithms. Overall, this release provides new opportunities for robotics research and advancements in reinforcement learning.[99]
2018 March 3 Hackathon Event hosting OpenAI hosts its first hackathon. Applicants include high schoolers, industry practitioners, engineers, researchers at universities, and others, with interests spanning healthcare to AGI.[100][101]
2018 April 5 – June 5 Reinforcement learning Event hosting The OpenAI Retro Contest takes place.[102][103] It is a competition organized by OpenAI that involves using the Retro platform to develop artificial intelligence agents capable of playing classic video games. Participants are required to train their agents to achieve high scores in a set of selected games using reinforcement learning techniques. The contest provides a framework called gym-retro, which allows participants to interact with and train agents on retro games using OpenAI Gym. The goal is to develop intelligent agents that can learn and adapt to the games' dynamics, achieving high scores and demonstrating effective gameplay strategies.[104]
2018 April 9 AI Ethics, AI Governance, AI Safety Commitment OpenAI releases a charter stating that the organization commits to stop competing with a value-aligned and safety-conscious project that comes close to building artificial general intelligence, and also that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.[105][106][107][108][109]
2018 April 19 Team salary Financial The New York Times publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 Form 990. The salaries include $1.9 million paid to Ilya Sutskever and $800,000 paid to Ian Goodfellow (hired in March of that year).[110][111][112]
2018 May 2 AI training, AI goal learning, self-play Research A paper by Geoffrey Irving, Paul Christiano, and Dario Amodei explores an approach to training AI systems to learn complex human goals and preferences. Traditional methods that rely on direct human judgment may fail when the tasks are too complicated. To address this, the authors propose training agents through self-play using a zero-sum debate game. In this game, two agents take turns making statements, and a human judge determines which agent provides the most true and useful information. The authors demonstrate the effectiveness of this approach in an experiment involving the MNIST dataset, where agents compete to convince a sparse classifier, resulting in significantly improved accuracy. They also discuss theoretical and practical considerations of the debate model and suggest future experiments to further explore its properties.[113][114]
2018 May 16 Computation Research OpenAI releases an analysis showing that the amount of compute used in the largest AI training runs had been increasing exponentially since 2012, with a 3.4-month doubling time. This represents a more rapid growth rate compared to Moore's Law. The increase in compute plays a crucial role in advancing AI capabilities. The trend is expected to continue, driven by hardware advancements and algorithmic innovations. However, there would eventually be limitations due to cost and chip efficiency. The authors highlight the importance of addressing the implications of this trend, including safety and malicious use of AI. Modest amounts of compute have also led to significant AI breakthroughs, indicating that massive compute is not always a requirement for important results.[115]
2018 June 11 Unsupervised learning Research OpenAI announces having obtained significant results on a suite of diverse language tasks with a scalable, task-agnostic system, which uses a combination of transformers and unsupervised pre-training.[116]
2018 June 25 Neural network Product release OpenAI announces a set of AI algorithms able to hold their own as a team of five and defeat human amateur players at Dota 2, a multiplayer online battle arena video game popular in e-sports for its complexity and necessity for teamwork.[117] In the team of algorithms, called OpenAI Five, each algorithm uses a neural network to learn both how to play the game and how to cooperate with its AI teammates.[118][119]
2018 July 18 Lethal autonomous weapons Background Elon Musk, along with other tech leaders, sign a pledge promising to not develop “lethal autonomous weapons.” They also call on governments to institute laws against such technology. The pledge is organized by the Future of Life Institute, an outreach group focused on tackling existential risks.[120][121][122]
2018 July 30 Robotics Product release OpenAI achieves a new benchmark for robot dexterity through AI training. They use a simulation with various randomizations to teach their robot hand, Dactyl, to manipulate physical objects dexterously. The AI system learns through trial and error, accumulating about 100 years' worth of experience, and achieves human-like movements. While experts praise OpenAI's work, they acknowledge some limitations and the need for significant computing power. The research demonstrates progress in robotics and AI, with potential applications in automating manual labor.[123][124][125]
2018 August 7 Reinforcement learning Achievement OpenAI's advanced AI system, OpenAI Five, successfully defeats five of the world's top professional Dota 2 players. The AI, which by this time has already demonstrated its skills in 1v1 matches, showcases its superiority by handily winning against the human team. OpenAI Five's training involves playing games against itself at an accelerated pace, utilizing a specialized training system. The exhibition match, streamed live on Twitch, features renowned Dota 2 players. In the first two matches, the AI wins convincingly within 21 and 25 minutes, respectively. Although the AI loses the third match due to the audience selecting heroes it isn't familiar with, this achievement showcases the remarkable progress of AI in complex team-based games.[126][127][128][129]
2018 August 16 Arboricity Research OpenAI publishes a paper on constant-arboricity spectral sparsifiers. The paper shows that every graph is spectrally similar to the union of a constant number of forests.[130]
2018 September Amodei named Research Director Team Dario Amodei becomes OpenAI's Research Director.[37]
2018 October 31 Reinforcement learning Product release OpenAI unveils its Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on videogame Montezuma’s Revenge.[131]
2018 November 8 Reinforcement learning Education OpenAI launches Spinning Up, an educational resource designed to teach anyone deep reinforcement learning. The program consists of crystal-clear examples of RL code, educational exercises, documentation, and tutorials.[132][133][134]
2018 November 9 Artificial General Intelligence, deep learning Notable comment Ilya Sutskever gives a speech at the AI Frontiers Conference in San Jose. He expresses the belief that short-term AGI (Artificial General Intelligence) should be taken seriously as a possibility. He emphasizes the potential of deep learning, which has made significant advancements in various tasks such as image classification, machine translation, and game playing. Sutskever suggests that the rapid progress of AI and increasing compute power could lead to the emergence of AGI. However, there are differing opinions in the AI community, with some experts, like Gary Marcus, arguing that deep learning alone may not achieve AGI. The discussion on AGI's potential impact and the need for safety research continues within the academic community. Sutskever declares:
We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.[135]
2018 November 19 Human-guided RL optimization Partnership OpenAI partners with DeepMind in a new paper that proposes a new method to train reinforcement learning agents in ways that enable them to surpass human performance. The paper, titled Reward learning from human preferences and demonstrations in Atari, introduces a training model that combines human feedback and reward optimization to maximize the knowledge of RL agents.[136]
2018 December 4 Gradient noise scale insights Research OpenAI publishes its discovery that the gradient noise scale predicts the effectiveness of large batch sizes in neural network training. Complex tasks with noisier gradients can benefit from increasingly large batches, removing a potential limit to AI system growth. This finding highlights the possibility of faster AI advancements and the need for responsible development. The research systematizes neural network training and shows that it can be understood through statistical analysis, providing insights into parallelism potential across different tasks.[137]
2018 December 6 Reinforcement learning Product release OpenAI releases CoinRun, a training environment designed to test the adaptability of reinforcement learning agents.[138][139] In this context, a training environment is a simulated setting in which reinforcement learning agents acquire and practice skills.[140]
2018 December Funding warning from Elon Musk Notable comment Musk reaches out to OpenAI with a stark warning regarding funding challenges. He asserts that even raising several hundred million dollars would be insufficient to meet the demands of developing AGI. Musk stresses the need for billions of dollars annually, underscoring the financial realities facing OpenAI as it aims to fulfill its mission of building safe and beneficial AGI that could positively impact humanity.[17]
2019 February 14 Natural-language generation Product release OpenAI publishes a blog post discussing the release of GPT-2, a large-scale unsupervised language model with 1.5 billion parameters, which can generate coherent paragraphs of text, achieve state-of-the-art performance on many language modeling benchmarks, and perform rudimentary reading comprehension, machine translation, question answering, and summarization without task-specific training. Due to concerns about malicious applications, the trained model is not released, but a smaller model and a technical paper are released for research purposes. GPT-2 is trained to predict the next word in 40GB of internet text, using a diverse dataset, and can generate conditional synthetic text samples of unprecedented quality.[141][142][143] OpenAI initially tries to communicate the risk posed by this technology.[144]
2019 February 19 AI Alignment Research OpenAI-affiliated researchers publish an article arguing that aligning advanced AI systems with human values requires resolving uncertainties related to human psychology and biases, which can only be resolved empirically through experimentation. The authors call for social scientists with experience in human cognition, behavior, and ethics to collaborate with AI researchers to improve our understanding of the human side of AI alignment. The paper highlights the limitations of existing machine learning in addressing the complexities of human values and biases and suggests conducting experiments consisting entirely of people, replacing machine learning agents with people playing the role of those agents. The authors emphasize the importance of interdisciplinary collaborations between social scientists and ML researchers to achieve long-term AI safety.[145][146]
2019 March 4 Reinforcement learning Product release OpenAI releases Neural MMO (massively multiplayer online), a multiagent game environment for reinforcement learning agents. The platform supports a large, variable number of agents within a persistent and open-ended task.[147]
2019 March 6 Neural network visualization Product release OpenAI introduces Activation atlases, a technique developed in collaboration with Google researchers, which enables the visualization of interactions between neurons in AI systems. The researchers provide insights into the internal decision-making processes of neural networks, aiding in identifying weaknesses and investigating failures. Activation atlases build on feature visualization, moving from individual neurons to visualizing the collective space they represent. Understanding neural network operations is crucial for auditing and ensuring their safety. Activation atlases allow humans to uncover issues like reliance on spurious correlations or feature reuse bugs; by manipulating images based on atlas insights, the model can even be deliberately deceived. To date, activation atlases prove more effective than expected, suggesting that neural network activations can be meaningful to humans.[148]
2019 March 11 AGI development Reorganization OpenAI announces the creation of OpenAI LP, a for-profit company that aims to accelerate progress towards creating safe artificial general intelligence (AGI). Owned and controlled by the OpenAI nonprofit organization's board of directors, OpenAI LP reportedly plans to raise and invest billions of dollars in advancing AI. Sam Altman agrees to serve as CEO, with Greg Brockman as chief technology officer and Ilya Sutskever as chief scientist. The restructuring allows OpenAI to focus on developing new AI technologies while the nonprofit arm continues educational and policy initiatives. The company is reportedly concerned that AGI development may become a competition that neglects safety and aims to collaborate with any company that achieves AGI before them. OpenAI LP's initial investors include American internet entrepreneur Reid Hoffman's charitable organization and Khosla Ventures.[149][150]
2019 March 21 AI training Product release OpenAI announces progress towards stable and scalable training of energy-based models (EBMs) resulting in better sample quality and generalization ability than existing models.[151]
2019 March Leadership Team Sam Altman, the president of Y Combinator, a prominent Silicon Valley accelerator, announces his decision to step down from that position, transitioning into a chairman role and focusing on other endeavors, including his involvement with OpenAI, where he serves as co-chair.[152][153][22]
2019 April 23 Deep learning Research OpenAI publishes a paper announcing Sparse Transformers, a deep neural network for learning sequences of data, including text, sound, and images. It utilizes an improved algorithm based on the attention mechanism, able to extract patterns from sequences 30 times longer than previously possible.[154][155][156]
2019 April 25 Neural network Product release OpenAI announces MuseNet, a deep neural network able to generate 4-minute musical compositions with 10 different instruments and to combine multiple styles, from country to Mozart to The Beatles. The neural network uses general-purpose unsupervised technology.[157]
2019 April 27 Robotics, machine learning Event hosting OpenAI hosts the OpenAI Robotics Symposium 2019, which aims to bring together experts from robotics and machine learning communities to discuss the development of robots that learn. The event features talks from researchers and industry leaders covering topics such as dexterity, learning from play, human-robot interaction, and adaptive robots. Attendees include individuals from various organizations and disciplines, including industry labs, universities, and research institutions. The symposium also includes a live demonstration of OpenAI's humanoid robot hand manipulating objects using vision and reinforcement learning.[158]
2019 May Natural-language generation Product release OpenAI releases a limited version of its language-generating system GPT-2. This version is more powerful than the extremely restricted initial release of the system (though still significantly limited compared to the full model), which had been withheld over concerns that it would be abused.[159] The potential of the new system is recognized by various experts.[160]
2019 June 13 Natural-language generation Coverage Connor Leahy publishes an article entitled The Hacker Learns to Trust, which discusses the work of OpenAI, and particularly the potential danger of its language-generating system GPT-2. Leahy highlights: "Because this isn’t just about GPT2. What matters is that at some point in the future, someone will create something truly dangerous and there need to be commonly accepted safety norms before that happens."[144]
2019 June 13 Synthetic media Congressional hearing OpenAI appears before the United States Congress to discuss the potential consequences of synthetic media, including a specific focus on synthetic text.[161] The House Permanent Select Committee on Intelligence holds an open hearing to discuss the national security challenges posed by artificial intelligence, manipulated media, and deepfake technology. This is the first House hearing focused on examining deepfakes and other AI-generated synthetic data. The Committee discusses the threats posed by fake content and ways to detect and combat it, as well as the roles of the public and private sectors and society as a whole in countering a potentially bleak future.[162]
2019 July 22 Cloud platform integration Partnership OpenAI announces an exclusive partnership with Microsoft. As part of the partnership, Microsoft invests $1 billion in OpenAI, and OpenAI switches to exclusively using Microsoft Azure (Microsoft's cloud solution) as the platform on which it will develop its AI tools. Microsoft would also be OpenAI's "preferred partner for commercializing new AI technologies."[163][164][165][166]
2019 August 20 Language model Product release OpenAI announces the release of its 774 million parameter GPT-2 language model, along with an open-source legal agreement to make it easier for organizations to initiate model-sharing partnerships with each other. They also publish a technical report about their experience in coordinating with the wider AI research community on publication norms. Through their research, they find that coordination for language models is difficult but possible, synthetic text generated by language models can be convincing to humans, and detecting malicious use of language models is a genuinely difficult research problem that requires both statistical detection and human judgment.[161][167][168][169]
2019 September 17 Reinforcement learning Research OpenAI publishes an article describing a new simulation environment that allows agents to learn and improve their ability to play hide-and-seek, ultimately leading to the emergence of complex tool use strategies. In the simulation, agents can move, see, sense, grab, and lock objects in place. There are no explicit incentives for the agents to interact with objects other than the hide-and-seek objective. Agents are rewarded based on the outcome of the game. As agents train against each other in hide-and-seek, up to six distinct strategies emerge, leading to increasingly complex tool use. The self-supervised emergent complexity in this simple environment further suggests that multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.[170]
2019 October 15 Neural networks Research OpenAI reports on having trained a pair of neural networks that can solve the Rubik's Cube with a human-like robotic hand. The neural networks were trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR). ADR creates diverse environments in simulation that can capture the physics of the real world, enabling the transfer of neural networks learned in simulation to be applied to the real world. The system can handle situations it never saw during training, such as being prodded by a stuffed giraffe. The breakthrough shows that reinforcement learning isn’t just a tool for virtual tasks, but can solve physical-world problems requiring unprecedented dexterity.[171][172][173][174][175]
2019 November 5 Natural-language generation Product release OpenAI releases the largest version of GPT-2, the 1.5B parameter version, along with code and model weights to aid detection of outputs of GPT-2 models. OpenAI releases the model as a test case for a full staged release process for future powerful models, hoping to continue the conversation with the AI community on responsible publication. OpenAI conducts tests and research on the GPT-2 model, finding that humans find GPT-2 outputs convincing, that the model can be fine-tuned for misuse, that detection of synthetic text is challenging, that no evidence of misuse has been found so far, and that standards are needed for studying bias.[176][177]
2019 November 21 Reinforcement learning Product release OpenAI releases Safety Gym, a suite of environments and tools for measuring progress in reinforcement learning agents that respect safety constraints during training. The challenge of "safe exploration" arises when reinforcement learning agents need to explore their environments to learn optimal behaviors, but this exploration can lead to risky and unsafe actions. OpenAI proposes constrained reinforcement learning as a formalism for addressing safe exploration, where agents have both reward functions to maximize and cost functions to constrain their behavior. To study constrained RL, OpenAI developed Safety Gym, which includes various environments and tasks of increasing difficulty to evaluate and train agents that prioritize safety.[178]
2019 December 3 Reinforcement learning Product release OpenAI releases the Procgen Benchmark, which consists of 16 procedurally-generated environments designed to measure the ability of reinforcement learning agents to learn generalizable skills. These environments provide a direct measure of an agent's ability to generalize across different levels. OpenAI finds that agents require training on 500-1000 different levels before they can generalize to new ones, highlighting the need for diversity within environments. The benchmark is designed for experimental convenience, high diversity within and across environments, and emphasizes visual recognition and motor control. It's expected to accelerate research in developing better reinforcement learning algorithms.[179][180][181]
2019 December 4 Deep learning Research OpenAI publishes a blog post exploring the phenomenon of "double descent" in deep learning models like CNNs, ResNets, and transformers. Double descent refers to a pattern where performance initially improves, then worsens, and then improves again with increasing model size, data size, or training time. This behavior challenges the conventional wisdom of bigger models always being better. The authors observe that double descent occurs when models are barely able to fit the training set and suggest further research to fully understand its underlying mechanisms.[182][183] MIRI researcher Evan Hubinger writes an explanatory post on the subject on LessWrong and the AI Alignment Forum,[184] and follows up with a post on the AI safety implications.[185]
2019 December Dario Amodei promoted Team Dario Amodei is promoted to OpenAI's Vice President of Research.[37]
2020 January 30 Deep learning Software adoption OpenAI announces its decision to migrate to Facebook's PyTorch machine learning framework for future projects, leaving behind Google's TensorFlow. OpenAI cites PyTorch's efficiency, scalability, and widespread adoption as the reasons for this move. The company states that it would primarily use PyTorch as its deep learning framework, while occasionally utilizing other frameworks when necessary. By this time, OpenAI's teams have already begun migrating their work to PyTorch and plan to contribute to the PyTorch community in the coming months. They also express intention to release their educational resource, Spinning Up in Deep RL, on PyTorch and explore scaling AI systems, model interpretability, and building robotics frameworks. PyTorch is an open-source machine learning library based on Torch and incorporates Caffe2, a deep learning toolset developed by Facebook's AI Research lab.[186][187]
2020 February 5 Safety Publication Beth Barnes and Paul Christiano publish Writeup: Progress on AI Safety via Debate on lesswrong.com, a writeup of the research done by the "Reflection-Humans" team at OpenAI in the third and fourth quarters of 2019.[188]
2020 February 17 Ethics of artificial intelligence Coverage AI reporter Karen Hao at MIT Technology Review publishes a review of OpenAI titled The messy, secretive reality behind OpenAI’s bid to save the world, which suggests the company is abandoning its stated commitment to transparency in order to outpace competitors.[189] In response, Elon Musk criticizes OpenAI, saying it lacks transparency.[190] On his Twitter account, Musk writes "I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high", alluding to OpenAI Vice President of Research Dario Amodei.[191]
2020 May 28 (release), June and July (discussion and exploration) Natural-language generation Product release OpenAI releases the natural language model GPT-3 on GitHub[192] and uploads to arXiv the paper Language Models are Few-Shot Learners, explaining how GPT-3 was trained and how it performs.[193] Games, websites, and chatbots based on GPT-3 are created for exploratory purposes in the next two months (mostly by people unaffiliated with OpenAI), with a general takeaway that GPT-3 performs significantly better than GPT-2 and past natural language models.[194][195][196][197] Commentators also note many weaknesses, such as: trouble with arithmetic because of incorrect pattern matching, trouble with multi-step logical reasoning even though it could do the individual steps separately, inability to recognize that a question is nonsense, inability to recognize that it does not know the answer to a question, and the picking up of racist and sexist content when trained on corpora that contain such content.[198][199][200]
2020 June 11 Generative model Product release OpenAI announces the release of an API for accessing new AI models that can be used for virtually any English language task. The API provides a general-purpose "text in, text out" interface, which can be integrated into products or used to develop new applications. Users can program the AI by showing it a few examples of what is required, and hone its performance by training it on small or large data sets or learning from human feedback. The API is designed to be both simple to use and flexible, with many speed and throughput improvements. While the API is launched as a private beta, OpenAI intends to share what it learns to build more human-positive AI systems.[201]
2020 June 28 Scaling hypothesis Coverage Gwern Branwen's essay The Scaling Hypothesis discusses the significance of GPT-3 as a landmark in AI development. GPT-3's unprecedented size exceeds expectations by continuing to improve with scale, exhibiting "meta-learning" abilities like following instructions and adapting to new tasks with minimal examples, which GPT-2 lacked. The essay argues that GPT-3's success supports the "scaling hypothesis", the idea that increasing neural network size and computational power is key to achieving general intelligence. Gwern suggests this development may lead to significant advancements in AI, challenging existing paradigms and accelerating progress in unsupervised learning systems.[202] Gwern also discusses differing views on the scaling hypothesis among various organizations. According to him, Google Brain and DeepMind are skeptical about scaling, focusing on practical applications and the need for incremental refinement, particularly in replicating human brain modules. DeepMind emphasizes neuroscience to guide its development, believing the effort requires time and investment in fine-tuning. In contrast, OpenAI supports the scaling hypothesis, betting on simple reinforcement learning algorithms combined with large architectures to achieve powerful capabilities. The author observes that while competitors have the resources, they often lack the conviction to adopt OpenAI's approach, preferring to critique it instead.[203]
2020 September 22 Natural language generation, language model Partnership Microsoft announces a partnership with OpenAI to exclusively license their GPT-3 language model, the largest and most advanced language model in the world by this time. This allows Microsoft to leverage its technical innovations to develop and deliver advanced AI solutions for its customers, as well as create new solutions that harness the power of natural language generation. Microsoft sees this as an opportunity to expand its Azure-powered AI platform in a way that democratizes AI technology and enables new products, services, and experiences. OpenAI would continue to offer GPT-3 and other models via its own Azure-hosted API.[204]
2020 December 29 Anthropic Team A number of team members, including Paul Christiano[205] and Dario Amodei[206], depart from OpenAI. The latter departs in order to found Anthropic, an artificial intelligence startup and public-benefit corporation. Amodei's departure, after four and a half years at the organization, is announced in an update. OpenAI's CEO, Sam Altman, mentions the possibility of continued collaboration with Amodei and his co-founders in their new project.[207][208] Other departures from OpenAI include Sam McCandlish[209], Tom Brown[210], Tom Henighan[211], Chris Olah[212], Jack Clark[213], and Benjamin Mann[214], all of whom join Anthropic.
2021 January Anthropic Competition Anthropic is founded as a U.S.-based AI startup and public-benefit corporation. It is established by former OpenAI members, including Daniela Amodei and Dario Amodei.[215][216] The company specializes in developing responsible AI systems and language models,[217] and gains attention as a group that departed OpenAI over directional differences.[218] It secures substantial investments, with Google's cloud division and Alameda Research contributing $300 million and $500 million, respectively.[219][220] Anthropic's projects include Claude, an AI chatbot emphasizing safety and ethical principles, and research on machine learning system interpretability, particularly concerning the transformer architecture.[221][222]
2021 January 5 Neural networks Product release OpenAI introduces CLIP (Contrastive Language-Image Pre-training), a neural network that learns visual concepts from natural language supervision and can be applied to any visual classification benchmark. CLIP is trained on a variety of images with natural language supervision available on the internet and can be instructed in natural language to perform a variety of classification benchmarks without directly optimizing for the benchmark's performance. This approach improves the model's robustness and can match the performance of traditional models on benchmarks without using any labeled examples. CLIP's performance is more representative of how it will fare on datasets that measure accuracy in different settings. CLIP builds on previous work on zero-shot transfer, natural language supervision, and multimodal learning and uses a simple pre-training task to achieve competitive zero-shot performance on a variety of image classification datasets.[223]
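CLIP's zero-shot classification idea can be sketched in a few lines: both the image and a set of candidate captions are embedded into a shared vector space, and the caption whose embedding is most similar to the image embedding wins. The sketch below uses tiny hypothetical vectors in place of CLIP's real encoders (which produce high-dimensional embeddings from neural networks); all numbers are illustrative only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy stand-ins for CLIP's image and text encoders.
image_embedding = [0.9, 0.1, 0.2]
text_embeddings = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

# Zero-shot classification: pick the caption closest to the image embedding.
best_label = max(text_embeddings,
                 key=lambda t: cosine(image_embedding, text_embeddings[t]))
print(best_label)
```

The classes are defined purely by the caption strings, which is why CLIP can be "instructed in natural language" to perform a new benchmark without any labeled training examples for it.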
2021 January 5 Generative model Product release OpenAI introduces DALL-E as a neural network that can generate images from text captions. It has a diverse set of capabilities including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images. DALL-E is a 12-billion parameter version of GPT-3 that is trained using a dataset of text-image pairs. It can generate images from scratch and regenerate any rectangular region of an existing image that extends to the bottom-right corner, in a way that is consistent with the text prompt. It can also modify several attributes of an object and the number of times it appears. However, controlling multiple objects and their attributes simultaneously presents a challenge at this time.[224]
2021 January 22 Jan Leike Team Machine learning researcher Jan Leike announces having joined OpenAI and that he would be leading the alignment effort within the organization.[225]
2021 August 10 Natural language processing, code generation Product release OpenAI releases an improved version of their AI system, OpenAI Codex, which translates natural language to code, through their API in private beta. Codex is proficient in more than a dozen programming languages, including Python, JavaScript, Go, and Ruby, and can interpret simple commands in natural language and execute them. The system has a memory of 14KB for Python code and can be applied to essentially any programming task.[226]
2021 September 1 Chatbot Product withdrawal OpenAI shuts down a customizable chatbot called Samantha, developed by indie game developer Jason Rohrer. Samantha gained attention when one user fine-tuned it to resemble his deceased fiancée. OpenAI expresses concerns about potential misuse, leading Rohrer to choose to terminate the project. He criticizes OpenAI for imposing restrictions on GPT-3's usage, hindering developers' exploration of its capabilities. The incident raises questions about the boundaries of AI technology and the balance between innovation and responsible usage.[227]
2021 September 23 Natural language processing Product release OpenAI develops an AI model that can summarize books of any length. The model, a fine-tuned version of GPT-3, uses a technique called "recursive task decomposition" to first summarize small sections of a book and then summarize those summaries into higher-level summaries. This approach allows for the efficient evaluation of the model's summaries and enables the summarization of books ranging from tens to thousands of pages. OpenAI expresses the belief that this method can be applied to supervise other tasks as well and addresses the challenge of aligning AI systems with human preferences. While other companies like Google, Microsoft, and Facebook have also explored AI-powered summarization methods, OpenAI's model builds upon their previous research on reinforcement learning from human feedback to improve the alignment of summaries with people's preferences.[228]
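The recursive structure of this decomposition can be illustrated with a toy sketch. Here `summarize_leaf` is a deliberately crude placeholder (simple truncation) standing in for the fine-tuned GPT-3 summarizer; the point is the recursion over chunks, not summary quality, and all function names and sizes are hypothetical.

```python
def summarize_leaf(text, limit=60):
    # Placeholder for the fine-tuned GPT-3 summarizer: just truncate.
    return text[:limit]

def chunk(text, size=200):
    # Split the text into fixed-size sections.
    return [text[i:i + size] for i in range(0, len(text), size)]

def summarize(text, size=200, limit=60):
    # Recursive task decomposition: summarize small sections, then
    # summarize the concatenation of those summaries, until the
    # remaining text is short enough to summarize directly.
    if len(text) <= size:
        return summarize_leaf(text, limit)
    section_summaries = [summarize_leaf(c, limit) for c in chunk(text, size)]
    return summarize(" ".join(section_summaries), size, limit)

book = "lorem ipsum " * 500  # stand-in for a full-length book
print(len(summarize(book)))
```

Because each call only ever sees a section-sized input, the same (fixed-context) summarizer can handle arbitrarily long books, and humans can evaluate each intermediate summary against its section rather than the whole book.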
2021 November 15 Natural language processing Competition OpenAI startup competitor Cohere launches its language model API for app and service development. The company offers fine-tuned models for various natural language applications at a lower cost compared to its rivals. Cohere provides both generation and representation models in English, catering to different tasks such as text generation and language understanding. The models are available in different sizes and can be used in industries such as finance, law, and healthcare. Cohere charges customers on a per-character basis, making its technology more affordable and accessible.[229]
2021 December 14 Natural language processing Product update OpenAI begins allowing customers to fine-tune their GPT-3 language model, enabling them to create customized versions tailored to specific content. This fine-tuning capability offers higher-quality outputs for tasks such as content generation and text summarization. It is accessible to developers without a machine learning background and can lead to cost savings by producing more frequent and higher-quality results. OpenAI conducted experiments showing significant improvements in accuracy through fine-tuning. This announcement follows previous efforts to enhance user experience and provide more reliable models, including the launch of question-answering endpoints and the implementation of content filters.[230]
2022 January 27 Natural language processing Product update OpenAI introduces embeddings in its API, allowing users to perform semantic search, clustering, topic modeling, and classification. These embeddings demonstrate superior performance compared to other models, particularly in code search. They are valuable for working with natural language and code because numerically similar embeddings indicate semantic similarity. OpenAI's embeddings are generated by neural network models that map text and code inputs to vector representations in a high-dimensional space, capturing specific aspects of the input data. OpenAI offers three families of embedding models: text similarity, text search, and code search. Text similarity models capture semantic similarity for tasks like clustering and classification. Text search models enable large-scale search by comparing query embeddings with document embeddings. Code search models provide embeddings for code and text, facilitating code search based on natural language queries.[231]
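The core property driving all of these uses is that numerically similar embeddings indicate semantic similarity, so search reduces to ranking vectors by similarity to a query vector. A minimal semantic-search sketch, using small hypothetical pre-computed vectors in place of real API embeddings (which are high-dimensional):

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical document embeddings, as a text-search model might produce.
documents = {
    "refund policy": [0.9, 0.05, 0.1],
    "shipping times": [0.1, 0.85, 0.2],
    "password reset": [0.05, 0.1, 0.9],
}
# Hypothetical embedding of a query like "how do I get my money back".
query_embedding = [0.88, 0.1, 0.12]

# Semantic search: rank documents by similarity to the query embedding.
ranked = sorted(documents,
                key=lambda d: cosine_similarity(query_embedding, documents[d]),
                reverse=True)
print(ranked[0])
```

Clustering, topic modeling, and classification follow the same pattern: they operate on distances between these vectors rather than on raw text.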
2022 February 25 Natural language processing Product update OpenAI introduces InstructGPT as an improved version of its previous language model, GPT-3. InstructGPT aims to address concerns about toxic language and misinformation by better following instructions and aligning with human intention. Its fine-tuning uses reinforcement learning from human feedback (RLHF). Compared to GPT-3, InstructGPT demonstrates better adherence to instructions, reduced generation of misinformation, and slightly lower toxicity. However, there are risks associated with its improved instruction-following capability, as malicious users could exploit it for harmful purposes. OpenAI considers InstructGPT a step towards solving the AI alignment problem, where AI systems understand and align with human values. InstructGPT becomes the default language model on the OpenAI API.[232]
2022 March 21 Large language models, AI safety Competition An article discusses EleutherAI, a group of computer scientists who developed a powerful AI system called GPT-NeoX-20B. This system, which rivals OpenAI's GPT-3, is a 20-billion-parameter, pretrained, general-purpose language model. EleutherAI aims to make large language models accessible to researchers and promotes AI safety. While OpenAI's model is larger and has 175 billion parameters, EleutherAI's model is the largest freely and publicly available. The article highlights the challenges of training large language models, such as the need for significant computing power. EleutherAI emphasizes the importance of understanding and controlling AI systems to ensure their safe use. The article also mentions OpenAI's approach of leveraging computation to achieve progress in AI. Overall, EleutherAI's efforts demonstrate that small, unorthodox groups can build and use potentially powerful AI models.[233]
2022 March 21 Natural Language Processing Product update OpenAI releases new versions of GPT-3 and Codex that allow for editing and inserting content into existing text. This update enables users to modify and enhance text by editing what's already present or adding new text in the middle. The insert feature is particularly useful in software development, allowing code to be added within an existing file while maintaining context and connection to the surrounding code. The feature is tested in GitHub Copilot with positive early results. Additionally, OpenAI introduces the edits endpoint, which enables specific changes to existing text based on instructions, such as altering tone, structure, or making spelling corrections. These updates expand the capabilities of OpenAI's language models and offer new possibilities for text processing tasks.[234]
2022 April 6 Generative model Product update OpenAI develops DALL-E 2, an enhanced version of its text-to-image generation program. DALL-E 2 offers higher resolution, lower latency, and new capabilities such as editing existing images. It builds on the CLIP computer vision system introduced by OpenAI and incorporates the "unCLIP" process, which starts with a description and generates an image. The new version uses diffusion to create images with increasing detail. Safeguards are in place to prevent objectionable content generation, and restrictions are in place for test users regarding image generation and sharing. OpenAI reports aiming to release DALL-E 2 safely based on user feedback.[235][236][237]
2022 May 31 Natural language processing Integration Microsoft announces the integration of OpenAI's artificial intelligence models, including GPT-3 and Codex, into its Azure cloud platform. These tools enable developers to leverage AI capabilities for tasks such as summarizing customer sentiment, generating unique content, and extracting information from medical records. Microsoft emphasizes the importance of responsible AI use and human oversight to ensure accurate and appropriate model outputs. While AI systems like GPT-3 can generate human-like text, they lack a deep understanding of context and require human review to ensure quality.[238]
2022 June 27 Imitation learning Product release OpenAI introduces Video PreTraining (VPT), a semi-supervised imitation learning technique that utilizes unlabeled video data from the internet. By training an inverse dynamics model (IDM) to predict actions in videos, VPT enables the labeling of larger datasets through behavioral cloning. The researchers validate VPT using Minecraft, where the trained model successfully completes challenging tasks and even crafts a diamond pickaxe, typically a time-consuming activity for human players. Compared to traditional reinforcement learning, VPT shows promise in simulating human behavior and learning complex tasks. This approach has the potential to enable agents to learn from online videos and acquire behavioral priors beyond just language.[239]
2022 July 14 Generative models Product update OpenAI reports that DALL·E 2 has been incorporated into the creative workflows of over 3,000 artists in more than 118 countries. By this time DALL·E 2 has been used by a wide range of creative professionals, including illustrators, chefs, sound designers, dancers, and tattoo artists, among others. Examples of how DALL·E is used include creating personalized cartoons, designing menus and plated dishes, transforming 2D artwork into 3D renders for AR filters, and much more. An exhibition at the Leopold Museum of works by some of the artists using DALL·E is announced.[240]
2022 July 18 Generative models Product update OpenAI announces implementation of a new technique to reduce bias in its DALL-E image generator, specifically for generating images of people that more accurately reflect the diversity of the world's population. The technique is applied at the system level when a prompt describing a person does not specify race or gender. The mitigation is informed by early user feedback during a preview phase, and other steps are taken to improve safety systems, including content filters and monitoring systems. These improvements allow OpenAI to gain confidence in expanding access to DALL-E.[241]
2022 August 10 Content moderation Product release OpenAI introduces a new and improved content moderation tool, the Moderation endpoint, which is free for OpenAI API developers to use. This endpoint uses GPT-based classifiers to detect prohibited content such as self-harm, hate, violence, and sexual content. The tool was designed to be accurate, quick, and robust across various applications. By using the Moderation endpoint, developers can access accurate classifiers through a single API call rather than building and maintaining their classifiers. OpenAI hopes this tool will make the AI ecosystem safer and spur further research in this area.[242]
2022 August 24 AI alignment Research OpenAI publishes a blog post explaining its approach to alignment research, which aims to make artificial general intelligence (AGI) aligned with human values and intentions. The company takes an iterative, empirical approach, attempting to align highly capable AI systems in order to learn what works and what doesn't. OpenAI states that it is committed to sharing its alignment research when it is safe to do so, to ensure that every AGI developer uses the best alignment techniques. It also states that it aims to build and align a system that can make faster and better alignment research progress than humans can. Language models are particularly well-suited for automating alignment research because they come "preloaded" with a lot of knowledge and information about human values. However, the approach is reported to have limitations and will need to be adapted and improved as AI technology develops.[243]
2022 August 31 Generative models Product update OpenAI introduces Outpainting, a new feature for DALL-E that allows users to extend the original image beyond its borders by adding visual elements or taking the story in new directions using a natural language description. This new feature can create large-scale images in any aspect ratio and takes into account the existing visual elements to maintain the context of the original image. The new feature is available for all DALL·E users on desktop.[244]
2022 September 28 Generative models Product release OpenAI announces that the waitlist for its DALL-E beta has been removed, and new users can start creating immediately. By this time, over 1.5 million users actively create over 2 million images per day with DALL-E, with more than 100,000 users sharing their creations and feedback in the Discord community. The iterative deployment approach has allowed OpenAI to scale DALL-E responsibly while discovering new uses for the tool. User feedback has inspired the development of new features such as Outpainting and collections.[245]
2022 October 25 AI-generated content, creative workflows Partnership Global technology company Shutterstock announces its partnership with OpenAI to bring AI-generated content capabilities to its platform. The collaboration would allow Shutterstock customers to generate images instantly based on their criteria, enhancing their creative workflows. Additionally, Shutterstock has launched a fund to compensate contributing artists for their role in developing AI models. The company aims to establish an ethical and inclusive framework for AI-generated content and is actively involved in initiatives promoting inclusivity and protecting intellectual property rights.[246]
2022 November 3 Generative models Product release OpenAI announces the public beta release of its DALL-E API, which allows developers to integrate image generation capabilities of DALL·E into their applications and products. DALL·E's flexibility enables users to create and edit original images ranging from the artistic to the photorealistic, and its built-in moderation ensures responsible deployment. Several companies, including Microsoft and Mixtiles, have already integrated DALL·E into their products by this time. The DALL·E API joins OpenAI's other powerful models, GPT-3, Embeddings, and Codex, on its API platform.[247]
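A developer integrating the DALL·E API described above would send a request body along these lines. This is a hedged sketch: the parameter names (`prompt`, `n`, `size`) follow the image-generation API format OpenAI published at the time, the prompt is invented, and no request is actually sent.

```python
# Hypothetical request body for the DALL-E image-generation endpoint
# (parameter names follow OpenAI's public API docs of the time; no call is made).
def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    return {
        "prompt": prompt,   # natural-language description of the desired image
        "n": n,             # how many image variations to generate
        "size": size,       # output resolution
    }

request = build_image_request("a photorealistic otter reading a newspaper", n=2)
```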
2022 November 30 Conversational AI Product release OpenAI introduces conversational model ChatGPT, which can interact with users in a dialogue format. ChatGPT is designed to answer follow-up questions, acknowledge mistakes, challenge incorrect assumptions, and reject inappropriate requests. It is a sibling model to InstructGPT, which focuses on providing detailed responses to instructions. ChatGPT is launched to gather user feedback and gain insights into its capabilities and limitations.[248] By January 2023, ChatGPT would become the fastest-growing consumer software application in history, gaining over 100 million users and contributing to OpenAI's valuation growing to US$29 billion.[249][250]
2022 December 8 Supercomputing Interview OpenAI publishes an interview with Christian Gibson, an engineer on the supercomputing team at the company. He explains his journey into engineering and how he got into OpenAI. He also speaks about the problems he is focused on solving, such as the complexity of exploratory AI workflows and bottlenecks in running code on supercomputers. He talks about what makes working on supercomputing at OpenAI different from other places, such as the sheer scale of the operation, and his typical day at OpenAI.[251]
2022 December 15 Word embedding Product release OpenAI announces a new text-embedding-ada-002 model that replaces five separate models for text search, text similarity, and code search. This new model outperforms their previous most capable model, Davinci, at most tasks, while being priced 99.8% lower. The new model has stronger performance, longer context, smaller embedding size, and reduced price. However, it does not outperform text-similarity-davinci-001 on the SentEval linear probing classification benchmark. The model has already been implemented by Kalendar AI and Notion to improve sales outreach and search capabilities.[252]
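Text search and text similarity with an embedding model of this kind are typically scored with cosine similarity between embedding vectors. The sketch below uses toy 3-dimensional vectors as stand-ins for the model's much higher-dimensional embeddings; the vectors are invented for illustration.

```python
# Cosine similarity over embedding vectors, the usual scoring for text search
# and similarity tasks. The 3-d vectors below are invented stand-ins for the
# model's real (much longer) embedding vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

doc_a = [0.1, 0.9, 0.2]    # e.g. embedding of "refund policy"
doc_b = [0.12, 0.85, 0.25] # e.g. embedding of "how do I get my money back"
doc_c = [0.9, 0.05, 0.1]   # e.g. embedding of an unrelated document

# Semantically similar texts should score higher than unrelated ones.
assert cosine_similarity(doc_a, doc_b) > cosine_similarity(doc_a, doc_c)
```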
2023 January 11 Language model misuse Research OpenAI researchers collaborate with Georgetown University and the Stanford Internet Observatory to investigate how language models might be misused for disinformation campaigns. Their report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations. The report points out that language models could drive down the cost of running influence operations, place them within reach of new actors and actor types, and generate more impactful or persuasive messaging compared to propagandists. It also introduces the key stages in the language model-to-influence operation pipeline and provides a set of guiding questions for policymakers and others to consider for mitigations.[253][254][255]
2023 January 23 AI research Partnership OpenAI and Microsoft extend their partnership with a multi-billion dollar investment to continue their research and development of AI that is safe, useful, and powerful. OpenAI remains a capped-profit company and is governed by the OpenAI non-profit. Microsoft would increase its investment in supercomputing systems powered by Azure to accelerate independent research, and Azure would remain the exclusive cloud provider for all OpenAI workloads. They also partner to deploy OpenAI's technology through their API and the Azure OpenAI Service, and to build and deploy safe AI systems. The two teams collaborate regularly to review and synthesize shared lessons and inform future research and best practices for use of powerful AI systems across the industry.[256]
2023 January 26 Content generation Partnership American Internet media, news and entertainment company BuzzFeed partners with OpenAI and gains access to its artificial intelligence technology to generate content, particularly for personality quizzes based on user responses. The move aims to boost BuzzFeed's business and enhance its struggling growth. OpenAI's generative AI has garnered attention for its diverse applications. While AI is expected to replace some tasks and jobs, it is also seen as enhancing work quality and allowing skilled professionals to focus on tasks requiring human judgment.[257]
2023 January 31 Natural language processing Product release OpenAI launches a new classifier that can distinguish between text written by humans and text written by AI. While not fully reliable, it can inform mitigations for false claims that AI-generated text was written by a human. OpenAI makes this classifier publicly available for feedback and recommends using it as a complement to other methods of determining the source of a piece of text. The classifier has limitations and is very unreliable on short texts, but it can be useful for educators and researchers to identify AI-generated text. By this time, OpenAI engages with educators to learn about their experiences and welcomes feedback on the preliminary resource they have developed.[258]
2023 February 1 Conversational AI Product release OpenAI introduces ChatGPT Plus, a pilot subscription plan that provides faster response times, general access to ChatGPT during peak times, and priority access to new features and improvements. The subscription costs $20 per month and is accessible to customers worldwide. Although OpenAI would continue to offer free access to ChatGPT, they hope to support free access availability to as many people as possible through the subscription plan. The company reports on its intention to refine and expand the offering according to user feedback and needs, and that they are exploring options for lower-cost plans, business plans, and data packs to provide wider accessibility.[259]
2023 February 23 AI integration Partnership OpenAI partners with Boston-based Bain & Company, a global strategy consulting firm, to help integrate OpenAI's AI innovations into daily tasks for Bain's clients. The partnership aims to leverage OpenAI's advanced AI models and tools, including ChatGPT, to create tailored digital solutions and drive business value for Bain's clients. The alliance would soon attract interest from major corporations, with The Coca-Cola Company being the first client to engage with the OpenAI services provided by Bain.[260]
2023 February 24 Artificial General Intelligence Publication OpenAI publishes a blog post discussing the potential benefits and risks of artificial general intelligence (AGI), i.e. AI systems that are generally smarter than humans. The authors state that AGI could increase abundance, aid scientific discoveries, and elevate humanity, but that it also comes with serious risks, such as misuse, accidents, and societal disruption. To ensure that AGI benefits all of humanity, the authors articulate the principles they care about most, such as maximizing the good, minimizing the bad, and empowering humanity. They suggest that a gradual transition to a world with AGI is better than a sudden one, allowing people to understand what is happening, personally experience the benefits and downsides, allow the economy to adapt, and put regulation in place. The authors emphasize the importance of a tight feedback loop of rapid learning and careful iteration to successfully navigate AI deployment challenges, combat bias, and deal with job displacement. They believe that democratized access will lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.[261]
2023 March 9 Generative AI Partnership Salesforce partners with OpenAI to develop Einstein GPT, a generative AI tool for customer relationship management (CRM). Einstein GPT enables Salesforce users to generate personalized emails for sales and customer service interactions using natural language prompts from their CRM. The tool integrates OpenAI's enterprise-grade ChatGPT technology and is currently in closed pilot. Additionally, Salesforce is integrating ChatGPT into its instant messaging platform, Slack. In parallel, Salesforce Ventures has launched a $250 million generative AI fund and has made investments in startups such as Anthropic, Cohere, Hearth.AI, and You.com. The fund aims to support startups that are transforming application software and employing responsible and trusted development processes.[262]
2023 March 14 GPT-4 Product release OpenAI launches GPT-4, an advanced multimodal AI model capable of understanding both text and images. GPT-4 outperforms its predecessor, GPT-3.5, on professional and academic benchmarks and introduces a new API capability called "system" messages, allowing developers to steer the AI's interactions by providing specific directions. It is soon adopted by companies like Microsoft, Stripe, Duolingo, Morgan Stanley, and Khan Academy for various applications. Despite its improvements, GPT-4 still has limitations and may make errors in reasoning and generate false statements.[263] On ChatGPT, GPT-4 is only accessible to ChatGPT Plus subscribers.[264]
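The "system" message capability mentioned above works by prepending a steering instruction to the conversation sent to the model. The sketch below builds such a request body; the message format follows the chat API OpenAI documented for GPT-4, while the directive and prompt text are invented examples, and no request is sent.

```python
# Hypothetical request body showing a "system" message steering GPT-4
# (message format follows OpenAI's chat API of the time; no call is made).
def build_chat_request(system_directive: str, user_prompt: str) -> dict:
    return {
        "model": "gpt-4",
        "messages": [
            # The system message sets how the model should behave overall...
            {"role": "system", "content": system_directive},
            # ...before the user's actual question is appended.
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are a tutor who answers only with Socratic questions.",
    "What is the derivative of x**2?",
)
```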
2023 March 15 GPT-4 Testing OpenAI conducts safety testing on its GPT-4 AI model, assessing risks like power-seeking behavior, self-replication, and self-improvement. The Alignment Research Center (ARC), an AI testing group, evaluates GPT-4 for potential issues. Although GPT-4 is found ineffective at autonomous replication, these tests raise concerns about AI safety. Some experts worry about AI takeover scenarios where AI systems gain the ability to control or manipulate human behavior and resources, posing existential risks. The AI community is divided on prioritizing AI safety concerns like self-replication over immediate issues like model bias. Companies continue to develop more powerful AI models amid regulatory uncertainties.[265]
2023 March 16 Sam Altman discusses AI's impact and dangers Interview In an interview with ABC News' Rebecca Jarvis, Sam Altman says that AI technology will reshape society as we know it, but that it comes with real dangers. He also says that feedback will help deter the potential negative consequences the technology could have on humanity. Altman acknowledges the possible dangerous implementations of AI that keep him up at night, particularly the fear that AI models could be used for large-scale disinformation or offensive cyberattacks. He also says that he worries about which humans will be in control of the technology. However, he does not share the sci-fi fear of AI models that do not need humans, stating that "This is a tool that is very much in human control".[266]
2023 March 25 Lex Fridman interviews Sam Altman Interview Russian-American podcaster and artificial intelligence researcher Lex Fridman publishes an interview with Sam Altman. They discuss GPT-4, political bias, AI safety, and neural network size. Other topics include AGI, fear, competition, and transitioning from non-profit to capped-profit. They also touch on power dynamics, political pressure, truth and misinformation, anthropomorphism, and future applications, among other topics.[267]
2023 March 27 AI startup accelerator Partnership Startup accelerator Neo forms a partnership with OpenAI, in addition to Microsoft, to offer free software and guidance to companies focusing on artificial intelligence (AI). Startups accepted into Neo's AI cohort would receive access to OpenAI's tools, including the GPT language generation tool and Dall-E image creation program. They would also have the opportunity to collaborate with researchers and mentors from Microsoft and OpenAI. The partnership comes as interest in AI technologies grows, with startups and established companies seeking to incorporate them into their products.[268]
2023 April 11 AI security Program launch OpenAI announces its Bug Bounty Program, an initiative aimed at enhancing the safety and security of their AI systems. The program invites security researchers, ethical hackers, and technology enthusiasts from around the world to help identify vulnerabilities and bugs in OpenAI's technology. By reporting their findings, participants are expected to contribute to making OpenAI's systems safer for users. The Bug Bounty Program is managed in partnership with Bugcrowd, a leading bug bounty platform, to ensure a streamlined experience for participants. Cash rewards are to be offered based on the severity and impact of the reported issues, ranging from $200 for low-severity findings to up to $20,000 for exceptional discoveries. OpenAI emphasizes the collaborative nature of security and encourages the security research community to join their Bug Bounty Program. Additionally, OpenAI reportedly hires for security roles to further strengthen their efforts in ensuring the security of AI technology.[269]
2023 April 14 AI safety, progress tracking Product update Sam Altman confirms at an MIT event that the company is not training GPT-5 at the time, highlighting the difficulty of measuring and tracking progress in AI safety. By this time, OpenAI is still expanding the capabilities of GPT-4 and is considering the safety implications of its work.[270]
2023 April 19 AI integration Partnership OpenAI partners with Australian software company Atlassian. The latter agrees to utilize OpenAI's GPT-4 language model, which had been trained on a large amount of internet text, to introduce AI capabilities into programs like Jira Service Management and Confluence. With GPT-4, Jira Service Management would be able to process employees' tech support inquiries in Slack, while Confluence would provide automated explanations, links, and answers based on stored information. Atlassian is a developer of its own AI models and would now incorporate OpenAI's technology to create unique results for individual customers. The new AI features, branded as Atlassian Intelligence, are to be rolled out gradually, and customers can join a waiting list to access them.[271]
2023 April 21 Scientific web search, generative AI Partnership OpenAI partners with Consensus, a Boston-based AI-powered search engine focused on scientific research, to enhance scientific web search quality. Consensus aims to provide unbiased and accurate search results by leveraging its generative AI technology to extract information from over 200 million research papers. The search engine prioritizes authoritative sources and offers plain-language summaries of results. With the support of investors such as Draper Associates and the involvement of OpenAI, Consensus aims to revolutionize scientific web search, transform research, and disrupt the global industry.[272]
2023 May 16 AI safety Legal Sam Altman testifies in a Senate subcommittee hearing and expresses the need for regulating artificial intelligence technology. Unlike previous hearings featuring tech executives, lawmakers and Altman largely agree on the necessity of AI regulation. Altman emphasizes the potential harms of AI and presents a loose framework to manage its development. His appearance marks him as a leading figure in the AI industry. The hearing reflects the growing unease among technologists and government officials regarding the power of AI technology, though Altman appears to have a receptive audience in the subcommittee members.[273]
2023 May 22 AI safety Publication OpenAI publishes post emphasizing the importance of governing superintelligence, AI systems that surpass even artificial general intelligence (AGI) in capabilities. They recognize the potential positive and negative impacts of superintelligence and propose coordination among AI development efforts, the establishment of an international authority like the International Atomic Energy Agency, and technical safety measures as key ideas for managing risks. OpenAI believes in regulation without hindering development below a certain capability threshold and emphasizes public input and democratic decision-making. While they see potential for a better world, they acknowledge the risks and challenges and stress the need for caution and careful approach.[274]
2023 May 25 AI governance Program launch OpenAI announces a grant program to fund experiments focused on establishing a democratic process for determining the rules that AI systems should follow within legal boundaries. The program aims to incorporate diverse perspectives reflecting the public interest in shaping AI behavior. OpenAI expresses belief that decisions about AI conduct should not be dictated solely by individuals, companies, or countries. The grants, totaling $1 million, are to be awarded to ten teams worldwide to develop proof-of-concepts for democratic processes that address questions about AI system rules. While these experiments are not intended to be binding at the time, they are expected to explore decision-relevant questions and build democratic tools to inform future decisions. The results of the studies are freely accessible, with OpenAI encouraging applicants to innovate and leverage known methodologies or create new approaches to democratic processes. The use of AI to enhance communication and facilitate efficient collaboration among a large number of participants is also encouraged.[275]
2023 June 1 Cybersecurity Program launch OpenAI announces the launch of the Cybersecurity Grant Program, a $1 million initiative aimed at enhancing AI-powered cybersecurity capabilities and fostering discussions at the intersection of AI and cybersecurity. The program's objectives include empowering defenders, measuring the capabilities of AI models in cybersecurity, and promoting comprehensive discourse in the field. OpenAI reportedly seeks project proposals that focus on practical applications of AI in defensive cybersecurity. Projects related to offensive security are not considered for funding. Applications are evaluated and accepted on a rolling basis, with strong preference given to proposals that can be licensed or distributed for maximal public benefit and sharing. Funding is provided in increments of $10,000 from the $1 million fund, which can take the form of API credits, direct funding, or equivalents.[276]
2023 June 12 Research, safety Collaboration OpenAI and Google DeepMind commit to sharing their AI models with the Government of the United Kingdom for research and safety purposes. This move aims to enhance the government's ability to inspect the models and understand the associated risks and opportunities. The specific data to be shared by the tech companies is not yet disclosed. The announcement follows the UK government's plans to assess AI model accountability and establish a Foundation Model Taskforce to develop "sovereign" AI. The initiative seeks to address concerns about AI development and mitigate potential issues related to safety and ethics. While this access does not grant complete control or guarantee the detection of all issues, it promotes transparency and provides insights into AI systems during a time when their long-term impacts remain uncertain.[277]
2023 June 14 Hallucination Legal A radio broadcaster named Mark Walters files a defamation lawsuit against OpenAI after the company's AI system, ChatGPT, generated a fake complaint accusing him of financial embezzlement. The lawsuit highlights the growing concern over generative AI programs spreading misinformation and producing false outputs. The fabricated legal summary is provided to Fred Riehl, the editor-in-chief of AmmoLand, who reports on a real-life legal case. The incident is attributed to a common issue with generative AI known as hallucinations, where the language model generates false information that can be convincingly realistic.[278]
2023 June 20 AI regulation Advocacy It is reported that OpenAI lobbied the European Union to weaken forthcoming AI regulation, despite publicly advocating for stronger AI guardrails. Documents obtained by TIME reveal that OpenAI proposed amendments to the E.U.'s AI Act, which were later incorporated into the final text. OpenAI argues that its general-purpose AI systems, such as GPT-3 and Dall-E, should not be classified as "high risk" and subject to stringent requirements. The lobbying efforts aimed to reduce the regulatory burden on OpenAI and aligned with similar efforts by other tech giants like Microsoft and Google. The documents suggest that OpenAI used arguments about utility and public benefit to mask their financial interest in diluting the regulation.[279]
2023 June 21 AI app store Product OpenAI reportedly plans to launch an AI app store, allowing developers and customers to sell their AI models built on OpenAI's technology. This move comes as OpenAI aims to expand its influence and capitalize on the success of its ChatGPT chatbot. While the introduction of an AI app store has the potential to drive broader adoption of OpenAI's technology and foster innovation, it also raises concerns about the need for regulations, consumer protection, quality control, ethical considerations, and security risks. However, for the Nigerian AI community, the app store presents opportunities for increased access, collaboration, economic prospects, and entrepreneurship, benefiting the country's tech talent and driving economic growth in the AI sector.[280]
2023 June 28 Copyright infringement Legal OpenAI faces a proposed class action lawsuit filed by two U.S. authors in San Francisco federal court. The authors, Paul Tremblay and Mona Awad, claim that OpenAI used their works without permission to train its popular AI system, ChatGPT. They allege that ChatGPT mined data from thousands of books, infringing their copyrights. The lawsuit highlights the use of books as a significant component in training generative AI systems like ChatGPT. The authors assert that ChatGPT could generate accurate summaries of their books, indicating their presence in its database. The lawsuit seeks damages on behalf of copyright owners whose works were allegedly misused by OpenAI.[281]
2023 June 28 OpenAI London International expansion OpenAI selects London as the location for its first international office, where the company plans to focus on research and engineering. The move is considered a vote of confidence in the UK's AI ecosystem and reinforces the country's position as an AI powerhouse.[282]
2023 July 10 GPT-4 Coverage An article discusses various aspects of OpenAI's GPT-4, including its architecture, training infrastructure, inference infrastructure, parameter count, training dataset composition, token count, layer count, parallelism strategies, multi-modal vision adaptation, engineering tradeoffs, and implemented techniques to overcome inference bottlenecks. It highlights that OpenAI's decision to keep the architecture closed is not due to existential risks but rather because it is replicable and other companies are expected to develop equally capable models. The article also emphasizes the importance of decoupling training and inference compute and the challenges of scaling out large models for inference due to memory bandwidth limitations. OpenAI's sparse model architecture is discussed as a solution to achieve high throughput while reducing inference costs.[283]
2023 July 11 Shutterstock x OpenAI collaboration Partnership Shutterstock expands its partnership with OpenAI through a six-year agreement, solidifying its role as a key provider of high-quality training data for AI models. OpenAI gains licensed access to Shutterstock's extensive image, video, and music libraries, while Shutterstock receives priority access to OpenAI's latest technologies. This partnership enhances Shutterstock's platform by integrating advanced generative AI tools, including DALL·E-powered text-to-image generation and synthetic editing features. Additionally, Shutterstock announces plans to extend AI capabilities to mobile users via its GIPHY platform.[284]
2023 July 12 Copyright infringement Legal American comedian Sarah Silverman files a lawsuit against OpenAI along with Meta Platforms, alleging copyright infringement in the training of their AI systems. The lawsuit claims that the authors' copyrighted materials were used without their consent to train ChatGPT and Meta's LLaMa AI system. The case is expected to revolve around whether training a large language model constitutes fair use or not. Silverman is joined by two other authors in the class-action lawsuit. By this time, legal experts have raised questions about whether OpenAI can be accused of copying books in this context.[285]
2023 July 13 xAI Competition Elon Musk launches his own artificial intelligence company, xAI, to rival OpenAI and Google. Reportedly, the goal of xAI is to understand the true nature of the universe and answer life's biggest questions. The company is staffed by former researchers from OpenAI, Google DeepMind, Tesla, and the University of Toronto. By this time, Musk had been critical of ChatGPT, accusing it of being politically biased and irresponsible. He left OpenAI in 2018 due to concerns about its profit-driven direction. Musk warns about the dangers of AI, and his new company reportedly aims to address those concerns.[286]
2023 July 13 Algorithmic models Partnership OpenAI partners with the Associated Press (AP) in a two-year agreement to train algorithmic models. This collaboration marks one of the first news-sharing partnerships between a major news organization and an AI firm. OpenAI is expected to gain access to selected news content and technology from AP's archives, dating back to 1985, to enhance future iterations of ChatGPT and related tools. AP is expected to receive access to OpenAI's proprietary technology. This partnership allows OpenAI to expand into the news domain and acquire legally-obtained data, while AP aims to streamline and improve its news reporting processes using OpenAI's technology.[287][288]
2023 July 18 AJP and OpenAI collaborate Partnership The American Journalism Project (AJP) and OpenAI partner in a $5+ million initiative to explore AI's potential to enhance local journalism. OpenAI agrees to contribute $5 million in funding and $5 million in API credits to support AJP's grantee organizations in adopting AI technologies. The partnership aims to create an AI studio to coach news organizations, foster collaboration, and address AI challenges like misinformation and bias. Grants are expected to fund AI experiments across AJP's portfolio, showcasing applications for local news. The initiative aims to bolster democracy by rebuilding local news ecosystems and ensuring journalism adapts responsibly to emerging AI technologies.[289]
2023 July 26 Image generator Coverage An article reveals OpenAI's secret image generator, an unreleased AI model that outperforms previous ones. At this time tested privately, early samples show impressive results, producing sharp and realistic images with detailed lighting, reflections, and brand logos. The AI recreates paintings and displays well-proportioned hands, setting it apart from other generators. However, removed safety filters for testing allow the model to generate violent and explicit content. Access to the model is limited, and it is not released publicly due to OpenAI's stance on NSFW content.[291]
2023 July 31 Upwork partners with OpenAI Partnership American freelancing platform Upwork and OpenAI launch the OpenAI Experts on Upwork program, providing businesses with pre-vetted professionals skilled in OpenAI technologies like GPT-4, Whisper, and AI model integration. This collaboration enables companies to efficiently access expertise for projects involving large language models, model fine-tuning, and chatbot development. The program, part of Upwork's AI Services hub, employs a rigorous vetting process to ensure technical proficiency and practical experience. Clients can engage experts via consultations or project-based contracts, enhancing responsible and impactful AI deployment. This initiative aligns with Upwork's strategy to become a leader in AI-related talent solutions.[290]
2023 August 7 Claude 2 Competition Anthropic unveils AI chatbot Claude 2, entering the advanced chatbot market to compete with OpenAI and Google. Backed by a substantial US$750 million in funding and initially targeting business applications, Claude 2 has already drawn a waitlist of over 350,000 users seeking access to its API and consumer services. Its availability is initially limited to the United States and the United Kingdom. Claude 2 is part of the growing trend of generative AI chatbots, despite concerns about bias. Anthropic aims to offer Claude 2 as a safer alternative for a broader range of users.[292]
2023 August 16 Global Illumination Acquisition OpenAI makes its first public acquisition by purchasing New York-based startup Global Illumination. The terms of the deal are not disclosed. Global Illumination's team previously worked on projects at Instagram, Facebook, YouTube, Google, Pixar, and Riot Games. Their most recent creation is the open-source sandbox multiplayer game Biomes. OpenAI aims to enhance its core products, including ChatGPT, with this acquisition.[293]
2023 August 17 GPT-4 Competition Researchers from Arthur AI evaluate top AI models from Meta, OpenAI, Cohere, and Anthropic to assess their propensity for generating false information, a phenomenon known as hallucination. They discover that Cohere's AI model displays the highest degree of hallucination, followed by Meta's Llama 2, which hallucinates more than GPT-4 and Claude 2 from Anthropic. GPT-4, on the other hand, performs the best among all models tested, hallucinating significantly less than its predecessor GPT-3.5. The study highlights the importance of evaluating AI models' performance based on specific use cases, as real-world applications may differ from standardized benchmarks.[294]
2023 August 21 GPTBot Reaction The New York Times blocks OpenAI's web crawler, GPTBot, preventing OpenAI from using the publication's content for training AI models. This action follows the NYT's recent update to its terms of service, which prohibits the use of its content for AI model training. The newspaper also reportedly considers legal action against OpenAI for potential intellectual property rights violations. The NYT's move aligns with its efforts to protect its content and copyright in the context of AI model training.[295]
2023 September 13 OpenAI expands to Dublin Expansion OpenAI expands its European presence with a new office in Dublin, Ireland, aiming to grow its team across various areas, including operations, trust and safety, security engineering, and legal work. The move supports OpenAI's commitment to serving the European market and aligns with Ireland's strong tech ecosystem. OpenAI plans to collaborate with the Irish government on the National AI Strategy and engage with local industry, startups, and researchers. Sam Altman emphasizes Ireland's talent pool and innovation support. The company also aims to provide mentorship to youth through the Patch accelerator.
2023 November 18 Sam Altman is fired as CEO Team Sam Altman is fired as CEO of OpenAI. The decision is made by the board of directors, who cite a loss of confidence in his leadership as the reason for the dismissal. Altman had been instrumental in the company's rapid growth and success in AI development, but the board's move comes as a surprise to many. His removal causes substantial internal discord, with employees and the public alike calling for his reinstatement.[296][297]
2023 November 22 Sam Altman is reinstated Team A few days after being fired, Sam Altman is reinstated as OpenAI's Chief Executive Officer. Altman’s leadership is considered pivotal in OpenAI's development of advanced artificial intelligence systems, and his reinstatement signals stability after a period of turmoil.[297]
2023 December 13 Axel Springer x OpenAI collaboration Partnership Axel Springer and OpenAI form a global partnership to enhance independent journalism through AI. The collaboration integrates authoritative content from Axel Springer’s media brands, such as Politico, Business Insider, Bild, and Die Welt, into ChatGPT. Users gain access to selected summaries with attribution and links to original articles, including paid content, ensuring transparency and encouraging further exploration. The partnership also supports Axel Springer’s AI initiatives and contributes to OpenAI’s model training with quality content. Axel Springer CEO Mathias Döpfner emphasizes AI’s potential to elevate journalism, while OpenAI COO Brad Lightcap highlights their commitment to empowering publishers with advanced technology and sustainable revenue models.[298]
2024 January 18 OpenAI partners with Arizona State University Partnership OpenAI announces its first university partnership with Arizona State University (ASU), granting full access to ChatGPT Enterprise starting February 2024. ASU plans to integrate the tool into coursework, tutoring, research, and its popular prompt engineering course. Key initiatives include creating personalized AI tutors, developing AI avatars for creative study support, and enhancing Freshman Composition with writing assistance. The collaboration, in development for six months, assures data security and privacy, addressing past concerns about AI misuse in education. OpenAI views the partnership as a model for aligning AI with higher education, while ASU sees it as advancing innovation in learning and research.[299]
2024 March 8 Board of Directors expansion Team OpenAI expands its Board of Directors with the addition of Dr. Sue Desmond-Hellmann, Nicole Seligman, and Fidji Simo. Dr. Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, brings experience in nonprofit leadership and medical research. Nicole Seligman, former EVP and General Counsel at Sony Corporation, is noted for her expertise in corporate law and governance. Fidji Simo, CEO and Chair of Instacart, offers background in consumer technology and product development. Sam Altman, the current CEO, rejoins the board.[300]
2024 March 13 OpenAI enhances news access Partnership OpenAI partners with Le Monde and Prisa Media to integrate French and Spanish news content into ChatGPT. Users can access journalism from those publications via attributed summaries and links to original articles. The collaboration aims to enhance access to authoritative information, train AI models, and present news in interactive formats. Le Monde highlights this as a step to expand its reach and uphold journalistic integrity, while Prisa Media emphasizes leveraging AI to engage broader audiences.[301]
2024 April 15 OpenAI opens Tokyo office for growth Expansion OpenAI opens its first Asian office in Tokyo to cater to the growing popularity of its service in Japan. Sam Altman emphasizes the goal of a long-term partnership with Japan's government, businesses, and research institutions. This office is OpenAI's third overseas location after Britain and Ireland. A custom chatbot model optimized for Japanese is introduced, enhancing translation and summarization speeds. The decision to establish a Tokyo office follows Altman's meeting with Japanese Prime Minister Fumio Kishida. Japan, with over 2 million ChatGPT users, is a significant market for OpenAI.[302]
2024 May 22 OpenAI x News Corp collaboration Partnership OpenAI and News Corp form a multi-year partnership to integrate News Corp's premium journalism into OpenAI's products. This collaboration grants OpenAI access to updated and archived content from major publications like The Wall Street Journal, The Times, and The Sun, among others, to enhance responses and provide reliable information. News Corp also agrees to contribute journalistic expertise to uphold high standards of journalism across OpenAI's offerings. The partnership aims to improve access to trusted news and set new standards for accuracy and integrity in the digital age, with both parties committed to enriching AI with high-quality journalism.[303]
2024 April 24 OpenAI partners with Moderna Partnership OpenAI partners with biotechnology company Moderna to integrate ChatGPT Enterprise across Moderna's operations, aiming to enhance its development of mRNA-based treatments. Since early 2023, this collaboration has transformed how Moderna works, facilitating the launch of up to 15 new products in the next five years. Moderna’s approach includes widespread AI adoption, with 100% employee engagement and the creation of over 750 custom GPTs. Notable projects include Dose ID, a GPT for analyzing clinical data, and various tools to improve legal compliance and communication. This AI integration supports Moderna’s goal of maximizing patient impact while maintaining operational efficiency.[304]
2024 April 29 Financial Times partners with OpenAI Partnership The Financial Times partners with OpenAI to integrate its journalism into ChatGPT, providing users with attributed summaries, quotes, and links to FT content. This collaboration aims to enhance AI model utility and develop new AI-driven tools for FT readers. The FT, an early adopter of ChatGPT Enterprise, equips its staff with OpenAI’s tools to boost creativity and productivity. FT Group CEO John Ridding highlights the importance of transparency, attribution, and safeguarding journalism. OpenAI COO Brad Lightcap emphasizes the partnership's role in enriching AI with reliable news.[305]
2024 May 6 OpenAI x Stack Overflow collaboration Partnership OpenAI and Stack Overflow announce an API partnership aimed at enhancing developer productivity by integrating Stack Overflow's trusted technical knowledge with OpenAI’s advanced large language models. Through Stack Overflow's OverflowAPI, OpenAI gains access to accurate, community-vetted data to improve its AI tools, while also providing attribution and fostering engagement within ChatGPT. Stack Overflow would leverage OpenAI models to develop OverflowAI and optimize its products for the developer community. The partnership, launching integrations in early 2024, focuses on advancing responsible AI development.[306]
2024 May 8 OpenAI introduces the Model Spec Publication OpenAI introduces the Model Spec, a draft document outlining how AI models should behave in the OpenAI API and ChatGPT. This document details principles, rules, and default behaviors to guide model responses, balancing helpfulness, safety, and legality. It integrates existing documentation, research, and expert input to inform future model development and ensure alignment with OpenAI's mission. The Model Spec aims to foster public discussion on AI behavior, incorporating feedback from diverse stakeholders to refine its approach over time. The document addresses handling complex situations, compliance with laws, and maintaining respect for user privacy and intellectual property.[307]
2024 May 16 Reddit partners with OpenAI Partnership Reddit and OpenAI announce a partnership to integrate Reddit's content into OpenAI's ChatGPT and products, enhancing user access to Reddit communities. Using Reddit’s Data API, OpenAI will showcase real-time and structured Reddit content, particularly on current topics, for improved user discovery and engagement. The collaboration also enables Reddit to leverage OpenAI’s AI models to introduce new features for users and moderators, enhancing the Reddit experience. Additionally, OpenAI will serve as a Reddit advertising partner. This partnership aims to foster human learning, build connections online, and provide timely, relevant, and authentic information across platforms.[308]
2024 May 21 Sanofi integrates AI innovation Partnership French pharmaceutical company Sanofi announces a collaboration with OpenAI and Formation Bio to integrate artificial intelligence into its drug development processes. The partnership enables Sanofi to utilize proprietary AI models tailored to its biopharma projects, while Formation Bio provides additional engineering resources. The initiative aims to expedite clinical trials by optimizing patient selection and reducing the number of participants needed, thereby accelerating drug development and reducing costs. This collaboration reflects a growing trend among major drugmakers to leverage AI for efficiency and innovation in healthcare.[309]
2024 May 29 OpenAI x Vox Media collaboration Partnership OpenAI and Vox Media form a strategic partnership to enhance ChatGPT’s capabilities and develop innovative products. Vox Media’s extensive archives and portfolio, including Vox, The Verge, and New York Magazine, would inform ChatGPT’s responses, with brand attribution and audience referrals. The partnership aims to merge trusted journalism with AI technology, improving content quality and discoverability. Vox Media is expected to leverage OpenAI tools to innovate consumer products, enhance its Forte data platform, and optimize advertiser performance.[310]
2024 May 30 OpenAI for Nonprofits launched Product launch OpenAI launches OpenAI for Nonprofits, a new initiative offering discounted access to its tools for nonprofit organizations. Nonprofits can access ChatGPT Team at a 20% discount, while larger nonprofits can receive a 50% discount on ChatGPT Enterprise. These offerings provide advanced models like GPT-4, collaboration tools, and robust security. By this time, nonprofits such as Serenas in Brazil, GLIDE Legal Clinic, THINK South Africa, and Team4Tech already use ChatGPT to streamline operations, enhance client services, and analyze data. OpenAI aims to support nonprofits in achieving greater impact with fewer resources through AI integration.[311]
2024 May 30 ChatGPT Edu launched Product launch OpenAI launches ChatGPT Edu, a new version of ChatGPT designed for universities to integrate AI into their campuses responsibly. Built on GPT-4o, ChatGPT Edu offers advanced capabilities such as text and vision reasoning, data analysis, and high-level security. It aims to enhance educational experiences by providing personalized tutoring, assisting with research and grant writing, and supporting faculty with grading and feedback. By this time, universities like Columbia, Wharton, and Arizona State have already seen benefits from similar tools. ChatGPT Edu includes features like custom GPT creation, high message limits, and robust privacy controls, making AI more accessible and effective for educational institutions.[312]
2024 May 30 OpenAI disrupts global influence operations Security OpenAI identifies and disrupts multiple influence operations from Russia, China, and Israel that were using AI tools, including ChatGPT, to manipulate public opinion. These operations aim to spread disinformation by creating fake social media accounts, generating comments in various languages, and producing visual content. Despite the capabilities of generative AI, OpenAI reports that these operations had limited success in terms of authentic audience engagement, with some being exposed by users as inauthentic. This effort marks a proactive step by OpenAI to combat misuse of its technology and highlights the potential for AI in both advancing and mitigating influence operations on social media and other platforms.[313]
2024 June 10 OpenAI x Apple collaboration Partnership OpenAI and Apple announce a partnership at the Worldwide Developers Conference, unveiling "Apple Intelligence." ChatGPT integrates with iOS, iPadOS, macOS, Writing Tools, and Siri to enhance AI-driven user experiences. Key features include streamlined content creation, improved virtual assistance with ChatGPT-enabled Siri, and optional user account linking. Privacy is prioritized, with no storage of IP addresses or query data. For marketers, this collaboration offers tools to enhance content creation, customer service, and productivity, leveraging Apple’s vast user base. The partnership is expected to set new benchmarks for AI integration, driving innovation across the industry.[314]
2024 June 13 OpenAI hires retired U.S. Army General Team OpenAI appoints Retired U.S. Army General Paul M. Nakasone to its Board of Directors, reflecting the company's focus on cybersecurity as AI technology advances. Nakasone, a leading expert in cybersecurity and former head of U.S. Cyber Command and the NSA, joins OpenAI's Safety and Security Committee. He is expected to help OpenAI enhance its security measures and guide efforts to ensure the safe development of artificial general intelligence (AGI).[315]
2024 June 21 OpenAI acquires Rockset Acquisition OpenAI acquires Rockset, a company specializing in real-time analytics and data infrastructure. The acquisition aims to bolster OpenAI’s capabilities in managing and analyzing large datasets efficiently, which is essential for the continuous improvement and scalability of AI models. By integrating Rockset’s technology, OpenAI seeks to enhance its ability to process and query data in real-time, improving the performance of AI applications in various industries, from finance to healthcare.[316]
2024 August 13 SWE-bench Verified is introduced Product launch OpenAI introduces SWE-bench Verified, a human-validated subset of the SWE-bench benchmark, designed to evaluate the performance of AI models on software engineering tasks. The benchmark assesses models' abilities to handle real-world coding challenges, such as debugging, writing algorithms, and understanding complex software requirements. The Verified version applies rigorous review of testing criteria to ensure high accuracy and reliability of results, providing developers and researchers with valuable insights into the capabilities and limitations of large language models in software engineering contexts.[317]
2024 August 20 Partnership with Conde Nast Partnership OpenAI partners with global mass media company Conde Nast to integrate artificial intelligence into digital publishing. The partnership explores how OpenAI’s language models can enhance content creation, improve personalized recommendations, and streamline editorial workflows. By leveraging AI tools, Conde Nast plans to elevate the reader experience across its media brands, offering more tailored content and automated systems to assist its journalists and editors. This partnership is positioned to push the boundaries of AI’s role in modern media.[318]
2024 September 12 Enhancing reasoning in LLMs Publication OpenAI publishes a study discussing advancements in improving the reasoning capabilities of the company's large language models (LLMs). It emphasizes how reasoning is a complex skill for AI systems and describes techniques to enhance logical thinking, problem-solving, and decision-making in LLMs. The article highlights different methods, such as fine-tuning and reinforcement learning, to help models understand abstract tasks like mathematical reasoning or causal inference. It also showcases practical applications in fields such as science, technology, and education. The goal is to push LLMs closer to human-level reasoning, enhancing their ability to handle real-world challenges.[319]
2024 September 12 o1 AI model family introduction Product launch OpenAI introduces its new AI model family, o1, reportedly developed under the codename Strawberry. This family is designed to significantly enhance reasoning and problem-solving capabilities compared to earlier models like GPT-4. The o1 models are touted for their advanced performance, with claims of achieving PhD-level proficiency on certain benchmarks. The launch represents a major step toward more sophisticated artificial intelligence, with the potential to bring OpenAI closer to developing artificial general intelligence (AGI). The models are expected to push the boundaries of AI capabilities, offering more nuanced and accurate responses in various applications.[320][321][322][323]
2024 September 25 CTO resigns Team Mira Murati, Chief Technology Officer at OpenAI, announces her resignation, marking a significant leadership departure. Murati, a key figure in OpenAI's growth and innovations like GPT-4, cites a desire to pursue personal exploration. Her resignation follows a series of high-profile exits in the year, as OpenAI undergoes restructuring, potentially valuing the company at $150 billion. CEO Sam Altman praises Murati's contributions, while internal promotions are announced to fill leadership gaps. Murati's departure reflects ongoing shifts as OpenAI navigates its transition to a for-profit model.[324]
2024 September 26 OpenAI and GEDI collaboration Partnership OpenAI partners with the Italian media group GEDI to enhance the availability of Italian-language news content for ChatGPT users. This collaboration aims to integrate content from GEDI's reputable publications, such as La Repubblica and La Stampa, into OpenAI’s offerings, improving the relevance and accuracy of information available to users in Italy. The partnership aims to support GEDI’s digital transformation and allows both organizations to explore further collaborations in AI applications related to news access.[325]
2024 October 8 OpenAI partners with Hearst Partnership OpenAI announces a content partnership with Hearst, which aims to integrate Hearst’s extensive array of trusted lifestyle and local news publications into OpenAI's products. This collaboration encompasses over 20 magazine brands and more than 40 newspapers, enhancing the information available to ChatGPT's users. Hearst's content is incorporated to ensure high-quality journalism is at the core of AI products, providing users with relevant and reliable information, complete with citations and links to original sources.[326]
2024 October 9 OpenAI opens Paris subsidiary Expansion OpenAI announces a subsidiary in Paris as part of the city’s initiative to become a leading AI hub in Europe. This expansion is expected to enable OpenAI to strengthen its presence in the European tech landscape and collaborate with local experts in AI development.[327]
2024 October 10 OpenAI blocks malicious campaigns Security OpenAI blocks 20 global malicious campaigns that were leveraging its AI models for cybercrime and disinformation. These campaigns include using ChatGPT and DALL-E to generate and spread misleading content, such as microtargeted emails, social media posts, and fabricated images. Some campaigns, like "Stop News," used DALL-E for creating eye-catching images aimed at manipulating social media users, while others, such as "Bet Bot" and "Corrupt Comment," focus on promoting gambling sites and creating fake social media interactions. OpenAI’s actions reflect its commitment to curbing the misuse of its technology, with stronger content policies and collaborations with cybersecurity teams to take down abusive accounts.[328]
2024 October 13 $6.6 billion funding secured Funding round OpenAI announces a funding round in which the company secures $6.6 billion, raising its valuation to $157 billion.[329]
2024 October 13 Swarm framework launched Product release OpenAI introduces Swarm, a new experimental framework designed to create and coordinate multi-agent systems. Swarm enables multiple AI agents to work together towards achieving complex tasks, providing a scalable and efficient approach to automation. This framework is expected to enhance collaboration between agents, optimizing task execution by distributing responsibilities. The move sparks debate about the implications of AI-driven automation, including concerns around control, safety, and ethical considerations. By leveraging Swarm, OpenAI aims to push the boundaries of AI’s capabilities, improving problem-solving and decision-making in various fields such as robotics, finance, and software development.[330][331][332][333]
2024 November 15 Musk Expands Lawsuit Against OpenAI, Microsoft, and Reid Hoffman Legal Elon Musk expands his lawsuit against OpenAI to include Microsoft and LinkedIn co-founder Reid Hoffman as defendants. Musk alleges the company deviated from its original mission as a non-profit and accuses it, along with Microsoft, of monopolistic practices that harm competitors like his own company, xAI. The suit also claims OpenAI's transformation into a $157 billion for-profit entity violated founding principles. OpenAI and Microsoft deny the allegations, with OpenAI calling Musk’s claims "baseless." This legal action coincides with Musk’s selection for a government cost-cutting role by President-elect Donald Trump.[334]
2024 November 19 ANI sues OpenAI over copyright Legal India's leading news agency, Asian News International (ANI), files a lawsuit against OpenAI in the Delhi High Court, accusing the company of using its content without consent to train AI models like ChatGPT. ANI also claims ChatGPT generated false information, including a fake interview with politician Rahul Gandhi, damaging its credibility. The case highlights concerns over copyright violations, misinformation, and AI's use of publicly available content. OpenAI defends its practices, arguing copyright laws don’t protect facts. The case, by this time part of a global debate, is expected to set significant precedents for AI and copyright law in India and beyond.[335]
2024 December 4 Altman adjusts AGI expectations Notable comment Sam Altman tempers expectations about the impact of artificial general intelligence (AGI). Speaking at The New York Times DealBook Summit, Altman suggests AGI might arrive sooner than anticipated but have less immediate impact than previously imagined. He emphasizes that societal change would likely be gradual, with significant disruption occurring only over time. By this time, OpenAI had seemingly shifted its focus, framing AGI as less transformative while reserving "superintelligence" for more profound advancements. Altman hints superintelligence could emerge in a few decades. Meanwhile, a declaration by OpenAI that it has reached AGI is thought to have the potential to affect its Microsoft partnership, potentially positioning OpenAI as a dominant for-profit tech entity.[336]
2024 December 4 OpenAI x Future collaboration Partnership OpenAI partners with Future, a global specialist media platform, to integrate content from over 200 Future brands, including Marie Claire, PC Gamer, and TechRadar, into ChatGPT. This collaboration expands the reach of Future’s journalism and enhances ChatGPT’s capabilities by providing users access to trusted, specialist content with attribution and links to original articles. Future also leverages OpenAI’s technology for AI-driven chatbots and productivity tools across editorial, sales, and marketing. Future CEO Jon Steinberg and OpenAI COO Brad Lightcap emphasize the partnership’s role in audience growth, innovation, and creating new opportunities for content discovery and engagement.[337]
2024 December 5 OpenAI partners with Anduril for defense Partnership OpenAI announces a partnership with Anduril Industries, a military defense technology firm, to use AI for national security missions, focusing on countering drone threats. This collaboration follows OpenAI's policy shift allowing military applications of its technology. Anduril, founded by Palmer Luckey, develops defense systems and drones used by the US, UK, and Australia. The partnership aims to enhance U.S. military capabilities against adversaries like China. OpenAI CEO Sam Altman emphasizes responsible AI use for safeguarding democracy and preventing conflict. Anduril CEO Brian Schimpf highlights the role of AI in enabling faster, accurate decisions in high-pressure defense scenarios.[338]
2024 December 6 OpenAI launches ChatGPT Pro for advanced AI access Product launch OpenAI launches ChatGPT Pro, a $200/month plan offering unlimited access to its most advanced model, OpenAI o1, alongside o1-mini, GPT-4o, and Advanced Voice. The plan includes o1 pro mode, designed for complex problem-solving with enhanced computational power. It outperforms other versions in benchmarks for math, science, and coding, scoring significantly higher in tests like AIME 2024 and Codeforces. Upcoming features include web browsing, file uploads, and image processing. Additionally, OpenAI grants 10 ChatGPT Pro subscriptions to U.S. medical researchers, including experts from Boston Children’s Hospital and Berkeley Lab, to advance studies in rare diseases, aging, and immunotherapy.[339]

==Numerical and visual data==

===Google Scholar===

The following table summarizes per-year mentions of "OpenAI" on Google Scholar, as of December 2024.

{| class="wikitable"
! Year !! "OpenAI"
|-
| 2015 || 87
|-
| 2016 || 392
|-
| 2017 || 957
|-
| 2018 || 2,280
|-
| 2019 || 4,640
|-
| 2020 || 7,280
|-
| 2021 || 10,200
|-
| 2022 || 12,200
|-
| 2023 || 43,500
|-
| 2024 || 89,900
|}

===Google Trends===

The chart below shows Google Trends data for OpenAI (Artificial intelligence company), from January 2020 to December 2024, when the screenshot was taken. Interest is also ranked by country and displayed on a world map. Note the spike of interest starting at the end of 2022, when OpenAI launched ChatGPT.[340]


[[File:Openaigoogletrends122024.png]]

===Google Ngram Viewer===

The chart below shows Google Ngram Viewer data for OpenAI, from 2000 to 2019.[341]


[[File:Openaingramviewer2022.PNG]]

===Wikipedia Views===

The chart below shows pageviews of the English Wikipedia article OpenAI, from July 2015 to November 2024. Note the spike of interest induced by the release of ChatGPT.[342]

[[File:Openaiwikipediaviews122024.PNG]]

==Meta information on the timeline==

===How the timeline was built===

The initial version of the timeline was written by Issa Rice. It has been expanded considerably by Sebastian.

Funding information for this timeline is available.

===What the timeline is still missing===

===Timeline update strategy===

==See also==

==External links==

==References==
{{Reflist|30em}}

  47. Metz, Cade. "The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win". WIRED. Retrieved March 2, 2018. According to Altman and Harry Shum, head of Microsoft's new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware. 
  48. "universe". GitHub. Retrieved March 1, 2018. 
  49. John Mannes (December 5, 2016). "OpenAI's Universe is the fun parent every artificial intelligence deserves". TechCrunch. Retrieved March 2, 2018. 
  50. "Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do". WIRED. Retrieved March 2, 2018. 
  51. "OpenAI Universe". Hacker News. Retrieved May 5, 2018. 
  52. "AI Alignment". Paul Christiano. May 13, 2017. Retrieved May 6, 2018. 
  53. "Team Update". OpenAI Blog. March 22, 2017. Retrieved May 6, 2018. 
  54. "Open Philanthropy Project donations made (filtered to cause areas matching AI safety)". Retrieved July 27, 2017. 
  55. "OpenAI — General Support". Open Philanthropy Project. December 15, 2017. Retrieved May 6, 2018. 
  56. "Pinboard on Twitter". Twitter. Retrieved May 8, 2018. What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law. Trumpy! 
  57. "OpenAI makes humanity less safe". Compass Rose. April 13, 2017. Retrieved May 6, 2018. 
  58. "OpenAI makes humanity less safe". LessWrong. Retrieved May 6, 2018. 
  59. "OpenAI donations received". Retrieved May 6, 2018. 
  60. Naik, Vipul. "I'm having a hard time understanding the rationale...". Retrieved May 8, 2018. 
  61. "Evolution Strategies as a Scalable Alternative to Reinforcement Learning". openai.com. Retrieved 5 April 2020. 
  62. Juliani, Arthur (29 May 2017). "Reinforcement Learning or Evolutionary Strategies? Nature has a solution: Both.". Beyond Intelligence. Retrieved 25 June 2023. 
  63. "The messy, secretive reality behind OpenAI's bid to save the world". technologyreview.com. Retrieved 28 February 2020. 
  64. Simoneaux, Brent; Stegman, Casey. "Open Source Stories: The People Behind OpenAI". Retrieved May 5, 2018. In the HTML source, the last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT (as of May 5, 2018).
  65. "Profile of the people behind OpenAI • r/OpenAI". reddit. April 7, 2017. Retrieved May 5, 2018. 
  66. "The People Behind OpenAI". Hacker News. July 23, 2017. Retrieved May 5, 2018. 
  67. "Unsupervised Sentiment Neuron". openai.com. Retrieved 5 April 2020. 
  68. John Mannes (April 7, 2017). "OpenAI sets benchmark for sentiment analysis using an efficient mLSTM". TechCrunch. Retrieved March 2, 2018. 
  69. Radford, Alec; Jozefowicz, Rafal; Sutskever, Ilya (2017). "Learning to Generate Reviews and Discovering Sentiment". doi:10.48550/arXiv.1704.01444. 
  70. John Mannes (April 7, 2017). "OpenAI sets benchmark for sentiment analysis using an efficient mLSTM". TechCrunch. Retrieved March 2, 2018. 
  71. "OpenAI Just Beat Google DeepMind at Atari With an Algorithm From the 80s". singularityhub.com. Retrieved 29 June 2019. 
  72. "Roboschool". openai.com. Retrieved 5 April 2020. 
  73. "OpenAI Baselines: DQN". OpenAI Blog. November 28, 2017. Retrieved May 5, 2018. 
  74. "OpenAI/baselines". GitHub. Retrieved May 5, 2018. 
  75. "[1706.03741] Deep reinforcement learning from human preferences". Retrieved March 2, 2018. 
  76. gwern (June 3, 2017). "June 2017 news - Gwern.net". Retrieved March 2, 2018. 
  77. "Two Giants of AI Team Up to Head Off the Robot Apocalypse". WIRED. Retrieved March 2, 2018. A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises. 
  78. "Faster Physics in Python". openai.com. Retrieved 5 April 2020. 
  79. "Learning from Human Preferences". OpenAI.com. Retrieved 29 June 2019. 
  80. "Better Exploration with Parameter Noise". openai.com. Retrieved 5 April 2020. 
  81. Jordan Crook (August 12, 2017). "OpenAI bot remains undefeated against world's greatest Dota 2 players". TechCrunch. Retrieved March 2, 2018. 
  82. "Did Elon Musk's AI champ destroy humans at video games? It's complicated". The Verge. August 14, 2017. Retrieved March 2, 2018. 
  83. "Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer". Business Insider. August 11, 2017. Retrieved March 3, 2018. 
  84. "Dota 2". openai.com. Retrieved 15 March 2023. 
  85. Cade Metz (August 13, 2017). "Teaching A.I. Systems to Behave Themselves". The New York Times. Retrieved May 5, 2018. 
  86. "OpenAI Baselines: ACKTR & A2C". openai.com. Retrieved 5 April 2020. 
  87. "[1709.04326] Learning with Opponent-Learning Awareness". Retrieved March 2, 2018. 
  88. gwern (August 16, 2017). "September 2017 news - Gwern.net". Retrieved March 2, 2018. 
  89. "AI Sumo Wrestlers Could Make Future Robots More Nimble". WIRED. Retrieved March 3, 2018. 
  90. Appolonia, Alexandra; Gmoser, Justin (October 20, 2017). "Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer". Business Insider. Retrieved March 3, 2018. 
  91. Cade Metz (November 6, 2017). "A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up". The New York Times. Retrieved May 5, 2018. 
  92. "Block-Sparse GPU Kernels". openai.com. Retrieved 5 April 2020. 
  93. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018. 
  94. "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018. 
  95. Malicious AI Report. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018. 
  96. 96.0 96.1 "Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla". The Verge. February 21, 2018. Retrieved March 2, 2018. 
  97. Simonite, Tom. "Why Artificial Intelligence Researchers Should Be More Paranoid". WIRED. Retrieved March 2, 2018. 
  98. "OpenAI Supporters". OpenAI Blog. February 21, 2018. Retrieved March 1, 2018. 
  99. "Ingredients for Robotics Research". openai.com. Retrieved 5 April 2020. 
  100. "OpenAI Hackathon". OpenAI Blog. February 24, 2018. Retrieved March 1, 2018. 
  101. "Report from the OpenAI Hackathon". OpenAI Blog. March 15, 2018. Retrieved May 5, 2018. 
  102. "OpenAI Retro Contest". OpenAI. Retrieved May 5, 2018. 
  103. "Retro Contest". OpenAI Blog. April 13, 2018. Retrieved May 5, 2018. 
  104. "[OpenAI Retro Contest] Getting Started". medium.com. Retrieved 31 May 2023. 
  105. "OpenAI Charter". OpenAI Blog. April 9, 2018. Retrieved May 5, 2018. 
  106. wunan (April 9, 2018). "OpenAI charter". LessWrong. Retrieved May 5, 2018. 
  107. "[D] OpenAI Charter • r/MachineLearning". reddit. Retrieved May 5, 2018. 
  108. "OpenAI Charter". Hacker News. Retrieved May 5, 2018. 
  109. Tristan Greene (April 10, 2018). "The AI company Elon Musk co-founded intends to create machines with real intelligence". The Next Web. Retrieved May 5, 2018. 
  110. Cade Metz (April 19, 2018). "A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit". The New York Times. Retrieved May 5, 2018. 
  111. ""A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning". reddit. Retrieved May 5, 2018. 
  112. "gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit". Hacker News. Retrieved May 5, 2018. 
  113. "[1805.00899] AI safety via debate". Retrieved May 5, 2018. 
  114. Irving, Geoffrey; Amodei, Dario (May 3, 2018). "AI Safety via Debate". OpenAI Blog. Retrieved May 5, 2018. 
  115. "AI and Compute". openai.com. Retrieved 5 April 2020. 
  116. "Improving Language Understanding with Unsupervised Learning". openai.com. Retrieved 5 April 2020. 
  117. Gershgorn, Dave. "OpenAI built gaming bots that can work as a team with inhuman precision". qz.com. Retrieved 14 June 2019. 
  118. Knight, Will. "A team of AI algorithms just crushed humans in a complex computer game". technologyreview.com. Retrieved 14 June 2019. 
  119. "OpenAI's bot can now defeat skilled Dota 2 teams". venturebeat.com. Retrieved 14 June 2019. 
  120. Vincent, James. "Elon Musk, DeepMind founders, and others sign pledge to not develop lethal AI weapon systems". theverge.com. Retrieved 1 June 2019. 
  121. Locklear, Mallory. "DeepMind, Elon Musk and others pledge not to make autonomous AI weapons". engadget.com. Retrieved 1 June 2019. 
  122. Quach, Katyanna. "Elon Musk, his arch nemesis DeepMind swear off AI weapons". theregister.co.uk. Retrieved 1 June 2019. 
  123. "OpenAI's 'state-of-the-art' system gives robots humanlike dexterity". venturebeat.com. Retrieved 14 June 2019. 
  124. Coldewey, Devin. "OpenAI's robotic hand doesn't need humans to teach it human behaviors". techcrunch.com. Retrieved 14 June 2019. 
  125. Vincent, James (30 July 2018). "OpenAI sets new benchmark for robot dexterity". The Verge. Retrieved 28 July 2023. 
  126. Whitwam, Ryan. "OpenAI Bots Crush the Best Human Dota 2 Players in the World". extremetech.com. Retrieved 15 June 2019. 
  127. Quach, Katyanna. "OpenAI bots thrash team of Dota 2 semi-pros, set eyes on mega-tourney". theregister.co.uk. Retrieved 15 June 2019. 
  128. Savov, Vlad. "The OpenAI Dota 2 bots just defeated a team of former pros". theverge.com. Retrieved 15 June 2019. 
  129. Rigg, Jamie. "'Dota 2' veterans steamrolled by AI team in exhibition match". engadget.com. Retrieved 15 June 2019. 
  130. Chu, Timothy; Cohen, Michael B.; Pachocki, Jakub W.; Peng, Richard. "Constant Arboricity Spectral Sparsifiers". arxiv.org. Retrieved 26 March 2020. 
  131. "Reinforcement Learning with Prediction-Based Rewards". openai.com. Retrieved 5 April 2020. 
  132. "Spinning Up in Deep RL". OpenAI.com. Retrieved 15 June 2019. 
  133. Ramesh, Prasad. "OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners". hub.packtpub.com. Retrieved 15 June 2019. 
  134. Johnson, Khari. "OpenAI launches reinforcement learning training to prepare for artificial general intelligence". flipboard.com. Retrieved 15 June 2019. 
  135. "OpenAI Founder: Short-Term AGI Is a Serious Possibility". syncedreview.com. Retrieved 15 June 2019. 
  136. Rodriguez, Jesus. "What's New in Deep Learning Research: OpenAI and DeepMind Join Forces to Achieve Superhuman Performance in Reinforcement Learning". towardsdatascience.com. Retrieved 29 June 2019. 
  137. "How AI Training Scales". openai.com. Retrieved 4 April 2020. 
  138. "OpenAI teaches AI teamwork by playing hide-and-seek". venturebeat.com. Retrieved 24 February 2020. 
  139. "OpenAI's CoinRun tests the adaptability of reinforcement learning agents". venturebeat.com. Retrieved 24 February 2020. 
  140. "What Is a Dev Training Environment & How to Use them Efficiently". Bunnyshell. Retrieved 17 March 2023. 
  141. "Better language models and their implications". openai.com. Retrieved 23 March 2023. 
  142. "An AI helped us write this article". vox.com. Retrieved 28 June 2019. 
  143. Lowe, Ryan. "OpenAI's GPT-2: the model, the hype, and the controversy". towardsdatascience.com. Retrieved 10 July 2019. 
  144. 144.0 144.1 "The Hacker Learns to Trust". medium.com. Retrieved 5 May 2020. 
  145. Irving, Geoffrey; Askell, Amanda. "AI Safety Needs Social Scientists". doi:10.23915/distill.00014. 
  146. "AI Safety Needs Social Scientists". openai.com. Retrieved 5 April 2020. 
  147. "Neural MMO: A Massively Multiagent Game Environment". openai.com. Retrieved 5 April 2020. 
  148. "Introducing Activation Atlases". openai.com. Retrieved 5 April 2020. 
  149. Johnson, Khari. "OpenAI launches new company for funding safe artificial general intelligence". venturebeat.com. Retrieved 15 June 2019. 
  150. Trazzi, Michaël. "Considerateness in OpenAI LP Debate". medium.com. Retrieved 15 June 2019. 
  151. "Implicit Generation and Generalization Methods for Energy-Based Models". openai.com. Retrieved 5 April 2020. 
  152. "Sam Altman's leap of faith". techcrunch.com. Retrieved 24 February 2020. 
  153. "Y Combinator president Sam Altman is stepping down amid a series of changes at the accelerator". techcrunch.com. Retrieved 24 February 2020. 
  154. Alford, Anthony. "OpenAI Introduces Sparse Transformers for Deep Learning of Longer Sequences". infoq.com. Retrieved 15 June 2019. 
  155. "OpenAI Sparse Transformer Improves Predictable Sequence Length by 30x". medium.com. Retrieved 15 June 2019. 
  156. "Generative Modeling with Sparse Transformers". OpenAI.com. Retrieved 15 June 2019. 
  157. "MuseNet". OpenAI.com. Retrieved 15 June 2019. 
  158. "OpenAI Robotics Symposium 2019". OpenAI.com. Retrieved 14 June 2019. 
  159. "A poetry-writing AI has just been unveiled. It's ... pretty good.". vox.com. Retrieved 11 July 2019. 
  160. Vincent, James. "OpenAI's new multitalented AI writes, translates, and slanders". theverge.com. Retrieved 11 July 2019. 
  161. 161.0 161.1 "GPT-2: 6-month follow-up". openai.com. Retrieved 23 March 2023. 
  162. "Open Hearing on Deepfakes and Artificial Intelligence". youtube.com. Retrieved 23 March 2023. 
  163. "Microsoft Invests In and Partners with OpenAI to Support Us Building Beneficial AGI". OpenAI. July 22, 2019. Retrieved July 26, 2019. 
  164. "OpenAI forms exclusive computing partnership with Microsoft to build new Azure AI supercomputing technologies". Microsoft. July 22, 2019. Retrieved July 26, 2019. 
  165. Chan, Rosalie (July 22, 2019). "Microsoft is investing $1 billion in OpenAI, the Elon Musk-founded company that's trying to build human-like artificial intelligence". Business Insider. Retrieved July 26, 2019. 
  166. Sawhney, Mohanbir (July 24, 2019). "The Real Reasons Microsoft Invested In OpenAI". Forbes. Retrieved July 26, 2019. 
  167. "OpenAI releases curtailed version of GPT-2 language model". venturebeat.com. Retrieved 24 February 2020. 
  168. "OpenAI Just Released an Even Scarier Fake News-Writing Algorithm". interestingengineering.com. Retrieved 24 February 2020. 
  169. "OpenAI Just Released a New Version of Its Fake News-Writing AI". futurism.com. Retrieved 24 February 2020. 
  170. "Emergent tool use from multi-agent interaction". openai.com. Retrieved 15 March 2023. 
  171. "Solving Rubik's Cube with a robot hand". openai.com. Retrieved 15 March 2023. 
  172. "Solving Rubik's Cube with a Robot Hand". arxiv.org. Retrieved 4 April 2020. 
  173. "Solving Rubik's Cube with a Robot Hand". openai.com. Retrieved 4 April 2020. 
  174. "Solving Rubik's Cube with a robot hand". openai.com. Retrieved 25 June 2023. 
  175. Fuscaldo, Donna (15 October 2019). "A Human-like Robotic Hand is Able to Solve the Rubik's Cube". interestingengineering.com. Retrieved 25 June 2023. 
  176. "GPT-2: 1.5B release". openai.com. Retrieved 15 March 2023. 
  177. "GPT-2: 1.5B Release". openai.com. Retrieved 5 April 2020. 
  178. "Safety Gym". openai.com. Retrieved 5 April 2020. 
  179. "Procgen Benchmark". openai.com. Retrieved 2 March 2020. 
  180. "OpenAI's Procgen Benchmark prevents AI model overfitting". venturebeat.com. Retrieved 2 March 2020. 
  181. "Generalization in Reinforcement Learning – Exploration vs Exploitation". analyticsindiamag.com. Retrieved 2 March 2020. 
  182. Nakkiran, Preetum; Kaplun, Gal; Bansal, Yamini; Yang, Tristan; Barak, Boaz; Sutskever, Ilya. "Deep Double Descent: Where Bigger Models and More Data Hurt". arxiv.org. Retrieved 5 April 2020. 
  183. "Deep Double Descent". OpenAI. December 5, 2019. Retrieved May 23, 2020. 
  184. Hubinger, Evan (December 5, 2019). "Understanding "Deep Double Descent"". LessWrong. Retrieved 24 May 2020. 
  185. Hubinger, Evan (December 18, 2019). "Inductive biases stick around". Retrieved 24 May 2020. 
  186. "OpenAI sets PyTorch as its new standard deep learning framework". jaxenter.com. Retrieved 23 February 2020. 
  187. "OpenAI goes all-in on Facebook's Pytorch machine learning framework". venturebeat.com. Retrieved 23 February 2020. 
  188. "Writeup: Progress on AI Safety via Debate". lesswrong.com. Retrieved 16 May 2020. 
  189. Hao, Karen. "The messy, secretive reality behind OpenAI's bid to save the world. The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism.". Technology Review. 
  190. Holmes, Aaron. "Elon Musk just criticized the artificial intelligence company he helped found — and said his confidence in the safety of its AI is 'not high'". businessinsider.com. Retrieved 29 February 2020. 
  191. "Elon Musk". twitter.com. Retrieved 29 February 2020. 
  192. "GPT-3 on GitHub". OpenAI. Retrieved July 19, 2020. 
  193. "Language Models are Few-Shot Learners". May 28, 2020. Retrieved July 19, 2020. 
  194. "Nick Cammarata on Twitter: GPT-3 as therapist". July 14, 2020. Retrieved July 19, 2020. 
  195. Gwern (June 19, 2020). "GPT-3 Creative Fiction". Retrieved July 19, 2020. 
  196. Walton, Nick (July 14, 2020). "AI Dungeon: Dragon Model Upgrade. You can now play AI Dungeon with one of the most powerful AI models in the world.". Retrieved July 19, 2020. 
  197. Shameem, Sharif (July 13, 2020). "Sharif Shameem on Twitter: With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.". Twitter. Retrieved July 19, 2020. 
  198. Lacker, Kevin (July 6, 2020). "Giving GPT-3 a Turing Test". Retrieved July 19, 2020. 
  199. Woolf, Max (July 18, 2020). "Tempering Expectations for GPT-3 and OpenAI's API". Retrieved July 19, 2020. 
  200. Asparouhov, Delian (July 17, 2020). "Quick thoughts on GPT3". Retrieved July 19, 2020. 
  201. "OpenAI API". openai.com. Retrieved 26 March 2023. 
  202. "The Scaling Hypothesis". Gwern.net. Retrieved 29 September 2024. 
  203. "Are We in an AI Overhang?". LessWrong. Retrieved 29 September 2024. 
  204. "Microsoft teams up with OpenAI to exclusively license GPT-3 language model". The Official Microsoft Blog. 22 September 2020. Retrieved 24 March 2023. 
  205. Branwen, Gwern (2 January 2021). "January 2021 News". gwern.net. Retrieved 24 August 2023. 
  206. Kokotajlo, Daniel. "Dario Amodei leaves OpenAI". lesswrong.com. Retrieved 24 August 2023. 
  207. Kokotajlo, Daniel. "Dario Amodei leaves OpenAI". lesswrong.com. Retrieved 9 June 2023. 
  208. "Organizational update from OpenAI". openai.com. Retrieved 9 June 2023. 
  209. "Sam McCandlish". linkedin.com. Retrieved 24 August 2023. 
  210. "Tom Brown". linkedin.com. Retrieved 24 August 2023. 
  211. "Tom Henighan". linkedin.com. Retrieved 24 August 2023. 
  212. "Christopher Olah". linkedin.com. Retrieved 24 August 2023. 
  213. "Jack Clark". linkedin.com. Retrieved 24 August 2023. 
  214. "Benjamin Mann". linkedin.com. Retrieved 24 August 2023. 
  215. "ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO". finance.yahoo.com. 5 February 2023. Retrieved 24 August 2023. 
  216. Vincent, James (3 February 2023). "Google invested $300 million in AI firm founded by former OpenAI researchers". The Verge. Retrieved 24 August 2023. 
  217. O'Reilly, Mathilde (30 June 2023). "Anthropic releases paper revealing the bias of large language models". dailyai.com. Retrieved 24 August 2023. 
  218. "As Anthropic seeks billions to take on OpenAI, 'industrial capture' is nigh. Or is it?". VentureBeat. 7 April 2023. Retrieved 24 August 2023. 
  219. Waters, Richard; Shubber, Kadhim (3 February 2023). "Google invests $300mn in artificial intelligence start-up Anthropic". Financial Times. 
  220. "Google invests $300 million in Anthropic as race to compete with ChatGPT heats up". VentureBeat. 3 February 2023. Retrieved 24 August 2023. 
  221. "ChatGPT must be regulated and A.I. 'can be used by bad actors,' warns OpenAI's CTO". Fortune. Retrieved 24 August 2023. 
  222. Milmo, Dan. "Claude 2: ChatGPT rival launches chatbot that can summarise a novel". The Guardian. Retrieved 24 August 2023. 
  223. "CLIP: Connecting text and images". openai.com. Retrieved 24 March 2023. 
  224. "DALL·E: Creating images from text". openai.com. Retrieved 17 March 2023. 
  225. "Jan Leike". Twitter. Retrieved 23 June 2023. 
  226. "OpenAI Codex". openai.com. Retrieved 17 March 2023. 
  227. "OpenAI Shuts Down GPT-3 Bot Used To Emulate Dead Fiancée". Futurism. Retrieved 31 May 2023. 
  228. "OpenAI unveils model that can summarize books of any length". VentureBeat. 23 September 2021. Retrieved 31 May 2023. 
  229. "OpenAI rival Cohere launches language model API". VentureBeat. 15 November 2021. Retrieved 31 May 2023. 
  230. "OpenAI begins allowing customers to fine-tune GPT-3". VentureBeat. 14 December 2021. Retrieved 30 May 2023. 
  231. Ramnani, Meeta (27 January 2022). "Now, OpenAI API have text and code embeddings". Analytics India Magazine. Retrieved 30 May 2023. 
  232. "OpenAI Introduces InstructGPT Language Model to Follow Human Instructions". InfoQ. Retrieved 30 May 2023. 
  233. "EleutherAI: When OpenAI Isn't Open Enough - IEEE Spectrum". spectrum.ieee.org. Retrieved 30 May 2023. 
  234. Shenwai, Tanushree (21 March 2022). "OpenAI Releases New Version of GPT-3 and Codex That Can Edit or Insert Content Into Existing Text". MarkTechPost. Retrieved 30 May 2023. 
  235. Robertson, Adi (6 April 2022). "OpenAI's DALL-E AI image generator can now edit pictures, too". The Verge. Retrieved 30 May 2023. 
  236. Coldewey, Devin (6 April 2022). "New OpenAI tool draws anything, bigger and better than ever". TechCrunch. Retrieved 30 May 2023. 
  237. Papadopoulos, Loukia (10 April 2022). "OpenAI's new AI system DALL-E 2 can create mesmerizing images from text". interestingengineering.com. Retrieved 30 May 2023. 
  238. Kolakowski, Nick (31 May 2022). "Microsoft's Azure Now Features OpenAI Tools for Developers". Dice Insights. Retrieved 30 May 2023. 
  239. Shenwai, Tanushree (27 June 2022). "OpenAI Introduces Video PreTraining (VPT), A Novel Semi-Supervised Imitation Learning Technique". MarkTechPost. Retrieved 30 May 2023. 
  240. "DALL·E 2: Extending creativity". openai.com. Retrieved 17 March 2023. 
  241. "Reducing bias and improving safety in DALL·E 2". openai.com. Retrieved 17 March 2023. 
  242. "New and improved content moderation tooling". openai.com. Retrieved 17 March 2023. 
  243. "Our approach to alignment research". openai.com. Retrieved 17 March 2023. 
  244. "DALL·E: Introducing outpainting". openai.com. Retrieved 17 March 2023. 
  245. "DALL·E now available without waitlist". openai.com. Retrieved 17 March 2023. 
  246. "Shutterstock Partners with OpenAI and Leads the Way to Bring AI-Generated Content to All - Press and Media - Shutterstock". www.shutterstock.com. Retrieved 14 June 2023. 
  247. "DALL·E API now available in public beta". openai.com. Retrieved 17 March 2023. 
  248. "Introducing ChatGPT". openai.com. Retrieved 22 June 2023. 
  249. Hu, Krystal (2 February 2023). "ChatGPT sets record for fastest-growing user base – analyst note". Archived from the original on February 3, 2023. Retrieved 3 June 2023. 
  250. Varanasi, Lakshmi (January 5, 2023). "ChatGPT creator OpenAI is in talks to sell shares in a tender offer that would double the startup's valuation to $29 billion". Insider. Archived from the original on January 18, 2023. Retrieved January 18, 2023. 
  251. "Discovering the minutiae of backend systems". openai.com. Retrieved 16 March 2023. 
  252. "New and improved embedding model". openai.com. Retrieved 16 March 2023. 
  253. "Forecasting potential misuses of language models for disinformation campaigns and how to reduce risk". openai.com. Retrieved 14 March 2023. 
  254. "OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation". VentureBeat. 13 January 2023. Retrieved 25 June 2023. 
  255. Siegel, Daniel (17 February 2023). "Weapons of Mass Disruption: Artificial Intelligence and the Production of Extremist Propaganda". GNET. Retrieved 25 June 2023. 
  256. "OpenAI and Microsoft extend partnership". openai.com. Retrieved 16 March 2023. 
  257. "BuzzFeed to use OpenAI technology to create content". www.cbsnews.com. 26 January 2023. Retrieved 14 June 2023. 
  258. "New AI classifier for indicating AI-written text". openai.com. Retrieved 16 March 2023. 
  259. "Introducing ChatGPT Plus". openai.com. Retrieved 16 March 2023. 
  260. "OpenAI selects Bain & Company for innovative partnership". www.consultancy.uk. 23 February 2023. Retrieved 14 June 2023. 
  261. "Planning for AGI and beyond". openai.com. Retrieved 16 March 2023. 
  262. "Salesforce, OpenAI Partner to Launch ChatGPT-like Tool for Enterprise". aibusiness.com. Retrieved 14 June 2023. 
  263. Wiggers, Kyle (14 March 2023). "OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art". TechCrunch. Retrieved 26 July 2023. 
  264. "ChatGPT Plus too pricey? 7 websites that let you access GPT-4 for free". The Indian Express. 5 May 2023. Retrieved 28 July 2023. 
  265. Edwards, Benj (15 March 2023). "OpenAI checked to see whether GPT-4 could take over the world". Ars Technica. Retrieved 25 August 2023. 
  266. "OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: 'A little bit scared of this'". ABC News. Retrieved 23 March 2023. 
  267. "Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367". youtube.com. Retrieved 25 March 2023. 
  268. "AI startup accelerator Neo will partner with OpenAI, Microsoft". The Seattle Times. 27 March 2023. Retrieved 14 June 2023. 
  269. "Announcing OpenAI's Bug Bounty Program". openai.com. Retrieved 9 June 2023. 
  270. Vincent, James (14 April 2023). "OpenAI's CEO confirms the company isn't training GPT-5 and "won't for some time"". The Verge. Retrieved 9 May 2023. 
  271. Novet, Jordan (19 April 2023). "Atlassian taps OpenAI to make its collaboration software smarter". CNBC. Retrieved 14 June 2023. 
  272. "Consensus raises $3M, partners with OpenAI to revolutionize scientific web search". VentureBeat. 21 April 2023. Retrieved 14 June 2023. 
  273. Kang, Cecilia (16 May 2023). "OpenAI's Sam Altman Urges A.I. Regulation in Senate Hearing". The New York Times. Retrieved 26 July 2023. 
  274. "Governance of superintelligence". openai.com. Retrieved 9 June 2023. 
  275. "Democratic Inputs to AI". openai.com. Retrieved 9 June 2023. 
  276. "OpenAI cybersecurity grant program". openai.com. Retrieved 9 June 2023. 
  277. "Google, OpenAI will share AI models with the UK government". Engadget. Retrieved 13 June 2023. 
  278. "Radio Host Sues OpenAI after ChatGPT Generates Fake Complaint Against Him". NDTV.com. Retrieved 17 July 2023. 
  279. "Exclusive: OpenAI Lobbied E.U. to Water Down AI Regulation". Time. 20 June 2023. Retrieved 16 July 2023. 
  280. "OpenAI, owners of ChatGPT to launch AI app store". technext24.com. 21 June 2023. Retrieved 18 July 2023. 
  281. Brittain, Blake (29 June 2023). "Lawsuit says OpenAI violated US authors' copyrights to train AI chatbot". Reuters. Retrieved 16 July 2023. 
  282. Milmo, Dan (28 June 2023). "ChatGPT developer OpenAI to locate first non-US office in London". The Guardian. Retrieved 17 July 2023. 
  283. Patel, Dylan. "GPT-4 Architecture, Infrastructure, Training Dataset, Costs, Vision, MoE". www.semianalysis.com. Retrieved 16 July 2023. 
  284. "Shutterstock Expands Partnership with OpenAI, Signs New Six-Year Agreement". investor.shutterstock.com. Retrieved 15 December 2024. 
  285. "Sarah Silverman sues OpenAI and Meta". BBC News. 12 July 2023. Retrieved 16 July 2023. 
  286. "Elon Musk launches xAI to rival OpenAI and Google". South China Morning Post. 13 July 2023. Retrieved 16 July 2023. 
  287. "AP and OpenAI enter into two-year partnership to help train algorithmic models". Engadget. Retrieved 15 July 2023. 
  288. Roth, Emma (13 July 2023). "OpenAI will use Associated Press news stories to train its models". The Verge. Retrieved 15 July 2023. 
  289. "Partnership with American Journalism Project to Support Local News". openai.com. Retrieved 15 December 2024. 
  290. "Upwork and OpenAI Partner to Connect Businesses with OpenAI Experts". investors.upwork.com. Retrieved 15 December 2024. 
  291. Lanz, Decrypt / Jose Antonio (26 July 2023). "Uncensored and 'Insane': A Look at OpenAI's Secret Image Generator". Decrypt. Retrieved 28 July 2023. 
  292. Toczkowska, Natalia (6 August 2023). "Anthropic Introduces New AI Chatbot, Claude 2". TS2 SPACE. Retrieved 25 August 2023. 
  293. Wiggers, Kyle (16 August 2023). "OpenAI acquires AI design studio Global Illumination". TechCrunch. Retrieved 22 August 2023. 
  294. Field, Hayden (17 August 2023). "Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here's which is worst". CNBC. Retrieved 25 August 2023. 
  295. Davis, Wes (21 August 2023). "The New York Times blocks OpenAI's web crawler". The Verge. Retrieved 22 August 2023. 
  296. "Sam Altman fired as CEO of ChatGPT maker OpenAI". Al Jazeera. Retrieved 17 September 2024. 
  297. 297.0 297.1 "Sam Altman Returns to OpenAI After Board Ouster". The New York Times. Retrieved 1 October 2024. 
  298. "Axel Springer and OpenAI Partner to Deepen Beneficial Use of AI in Journalism". axelspringer.com. Retrieved 15 December 2024. 
  299. "OpenAI Announces First Partnership with a University". cnbc.com. Retrieved 15 December 2024. 
  300. "OpenAI Announces New Members to Board of Directors". openai.com. Retrieved 16 September 2024. 
  301. "Global News Partnerships with Le Monde and PRISA Media". openai.com. Retrieved 15 December 2024. 
  302. "OpenAI opens first Asia office in Tokyo as ChatGPT use grows". Kyodo News. 15 April 2024. Retrieved 20 June 2024. 
  303. "News Corp and OpenAI Sign Landmark Multi-Year Global Partnership". openai.com. Retrieved 15 December 2024. 
  304. "Moderna and OpenAI Collaboration". openai.com. Retrieved 16 September 2024. 
  305. "Content Partnership with Financial Times". openai.com. Retrieved 15 December 2024. 
  306. "API Partnership with Stack Overflow". openai.com. Retrieved 15 December 2024. 
  307. "Introducing the Model Spec". openai.com. Retrieved 16 September 2024. 
  308. "OpenAI and Reddit Partnership". openai.com. Retrieved 15 December 2024. 
  309. "Sanofi partners with OpenAI, Formation Bio for AI-driven drug development". reuters.com. Retrieved 15 December 2024. 
  310. "A Content and Product Partnership with Vox Media". openai.com. Retrieved 15 December 2024. 
  311. "Introducing OpenAI for Nonprofits". openai.com. Retrieved 16 September 2024. 
  312. "Introducing ChatGPT Edu". openai.com. Retrieved 16 September 2024. 
  313. "OpenAI thwarts influence operations by Russia, China, and Israel". NPR. Retrieved 13 October 2024. 
  314. "OpenAI and Apple Announce Partnership". openai.com. Retrieved 15 December 2024. 
  315. "OpenAI Appoints Retired US Army General". openai.com. Retrieved 16 September 2024. 
  316. "OpenAI Acquires Rockset". openai.com. Retrieved 16 September 2024. 
  317. "Introducing SWE Bench: Verified". openai.com. Retrieved 16 September 2024. 
  318. "Conde Nast and OpenAI Announce Strategic Partnership". openai.com. Retrieved 16 September 2024. 
  319. "Learning to Reason with Large Language Models". openai.com. Retrieved 16 September 2024. 
  320. "Introducing OpenAI O1 Preview". openai.com. Retrieved 16 September 2024. 
  321. "OpenAI O1 Model Reasoning Abilities". theverge.com. Retrieved 16 September 2024. 
  322. "OpenAI Launches New AI Model Family O1". venturebeat.com. Retrieved 16 September 2024. 
  323. "Sam Altman on OpenAI O1 Model Capabilities". businessinsider.com. Retrieved 16 September 2024. 
  324. "OpenAI CTO Mira Murati Says She Will Leave the Company". Bloomberg. Retrieved 29 September 2024. 
  325. "OpenAI and GEDI partner for Italian news content". OpenAI. Retrieved 13 October 2024. 
  326. "OpenAI and Hearst Content Partnership". OpenAI. Retrieved 13 October 2024. 
  327. "ChatGPT maker OpenAI to launch subsidiary in Paris as city aims to become AI hub". Euronews. Retrieved 13 October 2024. 
  328. "OpenAI Blocks 20 Global Malicious Campaigns". The Hacker News. Retrieved 13 October 2024. 
  329. "OpenAI: The startup that secured $6.6bn in funding". FinTech Magazine. Retrieved 13 October 2024. 
  330. "OpenAI Introduces SWARM: A Framework for Building Multi-Agent Systems". analyticsindiamag.com. Retrieved 24 October 2024. 
  331. "OpenAI Introduces Experimental Framework SWARM". binance.com. Retrieved 24 October 2024. 
  332. "OpenAI Unveils Experimental SWARM Framework Igniting Debate on AI-Driven Automation". venturebeat.com. Retrieved 24 October 2024. 
  333. "OpenAI Announces SWARM Framework for AI Orchestration". infoq.com. Retrieved 24 October 2024. 
  334. "BBC News Article". BBC News. Retrieved 19 November 2024. 
  335. "Indian News Agency Files 287-Page Lawsuit Against OpenAI". AutoGPT.net. Retrieved 19 November 2024. 
  336. "Sam Altman says OpenAI is lowering the bar for AGI". The Verge. Retrieved 19 November 2024. 
  337. "OpenAI and Future Partner on Specialist Content". openai.com. Retrieved 15 December 2024. 
  338. "OpenAI Partners with Military Defense Tech Company". eNCA. Retrieved 19 November 2024. 
  339. "OpenAI launches ChatGPT Pro". Silicon Canals. Retrieved 19 November 2024. 
  340. "Google Trends data for ChatGPT (January 2020 to December 2024)". trends.google.com. Retrieved 15 December 2024. 
  341. "Google Ngram Viewer: OpenAI (2000-2022)". books.google.com. Retrieved 15 December 2024. 
  342. "Pageview analysis for OpenAI across multiple months". wikipediaviews.org. Retrieved 15 December 2024.