Timeline of OpenAI

From Timelines

This is a timeline of OpenAI. OpenAI is a non-profit artificial intelligence research company that works on advancing AI capabilities while emphasizing safety.

Big picture

Time period | Development summary | More details
2014–2015 | Background | Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies, about the dangers of superhuman machine intelligence, is published. Soon after the book's publication, Elon Musk and Sam Altman, the two people who would become co-chairs and initial donors of OpenAI, publicly state their concerns about superhuman machine intelligence.
2015–present | Establishment | OpenAI is founded and begins producing research.

Full timeline

Year | Month and date | Event type | Details
2014 | October 22–24 | Background | During an interview at the AeroAstro Centennial Symposium, Elon Musk, who would later become co-chair of OpenAI, calls artificial intelligence humanity's "biggest existential threat".[1][2]
2015 | February 25 | Background | Sam Altman, president of Y Combinator, who would later become a co-chair of OpenAI, publishes a blog post in which he writes that the development of superhuman AI is "probably the greatest threat to the continued existence of humanity".[3]
2015 | May 6 | Background | Greg Brockman, who would become CTO of OpenAI, announces in a blog post that he is leaving his role as CTO of Stripe. In the post's "What comes next" section he writes: "I haven't decided exactly what I'll be building (feel free to ping if you want to chat)".[4][5]
2015 | June | Background | Sam Altman and Greg Brockman have a conversation about next steps for Brockman.[6]
2015 | June 4 | Background | At Airbnb's Open Air 2015 conference, Sam Altman states his concern about advanced artificial intelligence and shares that he recently invested in a company doing AI safety research.[7]
2015 | July (approximate) | Background | Sam Altman sets up a dinner in Menlo Park, California, to talk about starting an organization to do AI research. Attendees include Greg Brockman, Dario Amodei, Chris Olah, Paul Christiano, Ilya Sutskever, and Elon Musk.[6]
2015 | December 11 | | OpenAI is announced to the public. (News coverage from the period suggests that OpenAI began operating sometime after this date.)[8][9][10]
2016 | January | Staff | Ilya Sutskever joins OpenAI as Research Director.[11]
2016 | January 9 | | The OpenAI research team does an AMA ("ask me anything") on r/MachineLearning, the subreddit dedicated to machine learning.[12]
2016 | March 31 | Staff | A blog post from this day announces that Ian Goodfellow has joined OpenAI.[13]
2016 | April 26 | Staff | A blog post from this day announces that Pieter Abbeel has joined OpenAI.[14]
2016 | April 27 | Software | The public beta of OpenAI Gym, an open-source toolkit that provides environments for testing AI agents, is released.[15][16][17]
2016 | June 21 | Publication | "Concrete Problems in AI Safety" is submitted to the arXiv.[18]
2016 | July | Staff | Dario Amodei joins OpenAI.[19]
2016 | July 8 | Publication | "Adversarial Examples in the Physical World" is published. One of the authors is Ian Goodfellow, who is at OpenAI at the time.[20]
2016 | August 15 | | The technology company Nvidia announces that it has donated the first Nvidia DGX-1 (a supercomputer) to OpenAI. OpenAI plans to use the supercomputer to train its AI on a corpus of conversations from Reddit.[21][22][23]
2016 | November 15 | | A partnership between OpenAI and Microsoft's artificial intelligence division is announced. As part of the partnership, Microsoft provides discounted computing resources to OpenAI through Microsoft Azure.[24][25]
2016 | December 5 | Software | OpenAI's Universe, "a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications", is released.[26][27][28][29]
2017 | January | Staff | Paul Christiano joins OpenAI to work on AI alignment.[30] He was previously an intern at OpenAI in 2016.[31]
2017 | March | Financial | The Open Philanthropy Project awards a grant of $30 million to OpenAI for general support.[32] The grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (executive director of the Open Philanthropy Project) joins OpenAI's board of directors to oversee OpenAI's safety and governance work.[33] The grant is criticized by Maciej Cegłowski[34] and Benjamin Hoffman (author of the blog post "OpenAI makes humanity less safe"),[35][36][37] among others.[38]
2017 | April | | An article entitled "The People Behind OpenAI" is published on Red Hat's Open Source Stories website, covering work at OpenAI.[39][40][41]
2017 | April 6 | Publication | "Learning to Generate Reviews and Discovering Sentiment" is published.[42]
2017 | May 24 | Software | OpenAI releases Baselines, a set of implementations of reinforcement learning algorithms.[43][44]
2017 | June 12 | Publication | "Deep reinforcement learning from human preferences" is first uploaded to the arXiv. The paper is a collaboration between researchers at OpenAI and Google DeepMind.[45][46][47]
2017 | August 12 | | OpenAI's Dota 2 bot beats Danil "Dendi" Ishutin, a professional human player, and possibly other professionals, in one-on-one battles.[48][49][50]
2017 | August 13 | | The New York Times publishes a story covering the AI safety work (by Dario Amodei, Geoffrey Irving, and Paul Christiano) at OpenAI.[51]
2017 | September 13 | Publication | "Learning with Opponent-Learning Awareness" is first uploaded to the arXiv.[52][53]
2017 | October 11 | Software | RoboSumo, a simulated sumo-wrestling game in which AI agents learn to compete, is released.[54][55]
2017 | November 6 | Staff | The New York Times reports that Pieter Abbeel (a researcher at OpenAI) and three other researchers from Berkeley and OpenAI have left to start their own company called Embodied Intelligence.[56]
2017 | December | Publication | The 2017 AI Index is published. OpenAI contributed to the report.[57]
2018 | February 20 | Publication | The report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" is published. The report forecasts malicious uses of artificial intelligence in the short term and makes recommendations on how to mitigate these risks. It is authored by individuals at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and other institutions.[58][59][60][61][62]
2018 | February 20 | | OpenAI announces changes in donors and advisors. New donors are Jed McCaleb, Gabe Newell, Michael Seibel, Jaan Tallinn, and Ashton Eaton and Brianne Theisen-Eaton. Reid Hoffman is "significantly increasing his contribution". Pieter Abbeel (previously at OpenAI), Julia Galef, and Maran Nelson become advisors. Elon Musk departs the board but remains a donor and advisor.[63][61]
2018 | March 3 | | OpenAI hosts its first hackathon.[64][65]
2018 | April 5 – June 5 | | The OpenAI Retro Contest takes place.[66][67] With the release of the Gym Retro library, OpenAI's Universe becomes deprecated.[68]
2018 | April 9 | | OpenAI releases a charter. The charter states, in part, that OpenAI commits to stop competing with any value-aligned, safety-conscious project that comes close to building artificial general intelligence, and that OpenAI expects to reduce its traditional publishing in the future due to safety concerns.[69][70][71][72][73]
2018 | April 19 | Financial | The New York Times publishes a story detailing the salaries of researchers at OpenAI, using information from OpenAI's 2016 Form 990. The salaries include $1.9 million paid to Ilya Sutskever and $800,000 paid to Ian Goodfellow (hired in March of that year).[74][75][76]
2018 | May 2 | Publication | The paper "AI safety via debate" by Geoffrey Irving, Paul Christiano, and Dario Amodei is uploaded to the arXiv.[77][78]
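The OpenAI Gym toolkit noted in the April 27, 2016 entry popularized a simple reset/step interface for reinforcement-learning environments. As a rough illustrative sketch (a toy environment written for this timeline, not code from the actual Gym library), that interface style looks like:

```python
import random

class CoinFlipEnv:
    """Toy environment in the style of the Gym interface:
    reset() returns an initial observation, and step(action) returns
    a tuple (observation, reward, done, info). This is an illustrative
    sketch only; the real Gym library provides many richer environments."""

    def __init__(self, episode_length=10):
        self.episode_length = episode_length
        self.t = 0
        self.state = 0

    def reset(self):
        # Start a new episode with a fresh coin flip as the observation.
        self.t = 0
        self.state = random.randint(0, 1)
        return self.state

    def step(self, action):
        # Reward 1 for matching the current coin face, 0 otherwise.
        reward = 1 if action == self.state else 0
        self.t += 1
        done = self.t >= self.episode_length
        self.state = random.randint(0, 1)  # next observation
        return self.state, reward, done, {}

# A random agent interacting with the environment for one episode.
env = CoinFlipEnv()
obs = env.reset()
total_reward = 0
done = False
while not done:
    action = random.randint(0, 1)
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Because agents only see this small reset/step surface, the same agent code can be pointed at any environment that implements it, which is what made a shared toolkit of environments useful as a benchmark.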

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Issa Rice.

Funding information for this timeline is available.

What the timeline is still missing

Timeline update strategy

See also

External links


  1. Samuel Gibbs (October 27, 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Retrieved July 25, 2017. 
  2. "AeroAstro Centennial Webcast". Retrieved July 25, 2017. The high point of the MIT Aeronautics and Astronautics Department's 2014 Centennial celebration is the October 22-24 Centennial Symposium 
  3. "Machine intelligence, part 1". Sam Altman. Retrieved July 27, 2017. 
  4. Brockman, Greg (May 6, 2015). "Leaving Stripe". Greg Brockman on Svbtle. Retrieved May 6, 2018. 
  5. Carson, Biz (May 6, 2015). "One of the first employees of $3.5 billion startup Stripe is leaving to form his own company". Business Insider. Retrieved May 6, 2018. 
  6. "My path to OpenAI". Greg Brockman on Svbtle. May 3, 2016. Retrieved May 8, 2018. 
  7. Matt Weinberger (June 4, 2015). "Head of Silicon Valley's most important startup farm says we're in a 'mega bubble' that won't last". Business Insider. Retrieved July 27, 2017. 
  8. John Markoff (December 11, 2015). "Artificial-Intelligence Research Center Is Founded by Silicon Valley Investors". The New York Times. Retrieved July 26, 2017. The organization, to be named OpenAI, will be established as a nonprofit, and will be based in San Francisco. 
  9. "Introducing OpenAI". OpenAI Blog. December 11, 2015. Retrieved July 26, 2017. 
  10. Drew Olanoff (December 11, 2015). "Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman". TechCrunch. Retrieved March 2, 2018. 
  11. "Ilya Sutskever". AI Watch. April 8, 2018. Retrieved May 6, 2018. 
  12. "AMA: the OpenAI Research Team • r/MachineLearning". reddit. Retrieved May 5, 2018. 
  13. Brockman, Greg (March 22, 2017). "Team++". OpenAI Blog. Retrieved May 6, 2018. 
  14. Sutskever, Ilya (March 20, 2017). "Welcome, Pieter and Shivon!". OpenAI Blog. Retrieved May 6, 2018. 
  15. "OpenAI Gym Beta". OpenAI Blog. March 20, 2017. Retrieved March 2, 2018. 
  16. "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". WIRED. April 27, 2016. Retrieved March 2, 2018. This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called "reinforcement learning" 
  17. Shead, Sam (April 28, 2016). "Elon Musk's $1 billion AI company launches a 'gym' where developers train their computers". Business Insider. Retrieved March 3, 2018. 
  18. "[1606.06565] Concrete Problems in AI Safety". June 21, 2016. Retrieved July 25, 2017. 
  19. "Dario Amodei - Research Scientist @ OpenAI". Crunchbase. Retrieved May 6, 2018. 
  20. Metz, Cade (July 29, 2016). "How To Fool AI Into Seeing Something That Isn't There". WIRED. Retrieved March 3, 2018. 
  21. "NVIDIA Brings DGX-1 AI Supercomputer in a Box to OpenAI". The Official NVIDIA Blog. August 15, 2016. Retrieved May 5, 2018. 
  22. Vanian, Jonathan (August 15, 2016). "Nvidia Just Gave A Supercomputer to Elon Musk-backed Artificial Intelligence Group". Fortune. Retrieved May 5, 2018. 
  23. De Jesus, Cecille (August 17, 2016). "Elon Musk's OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak". Futurism. Retrieved May 5, 2018. 
  24. Statt, Nick (November 15, 2016). "Microsoft is partnering with Elon Musk's OpenAI to protect humanity's best interests". The Verge. Retrieved March 2, 2018. 
  25. Metz, Cade. "The Next Big Front in the Battle of the Clouds Is AI Chips. And Microsoft Just Scored a Win". WIRED. Retrieved March 2, 2018. According to Altman and Harry Shum, head of Microsoft new AI and research group, OpenAI's use of Azure is part of a larger partnership between the two companies. In the future, Altman and Shum tell WIRED, the two companies may also collaborate on research. "We're exploring a couple of specific projects," Altman says. "I'm assuming something will happen there." That too will require some serious hardware. 
  26. "universe". GitHub. Retrieved March 1, 2018. 
  27. John Mannes (December 5, 2016). "OpenAI's Universe is the fun parent every artificial intelligence deserves". TechCrunch. Retrieved March 2, 2018. 
  28. "Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do". WIRED. Retrieved March 2, 2018. 
  29. "OpenAI Universe". Hacker News. Retrieved May 5, 2018. 
  30. "AI Alignment". Paul Christiano. May 13, 2017. Retrieved May 6, 2018. 
  31. "Team Update". OpenAI Blog. March 22, 2017. Retrieved May 6, 2018. 
  32. "Open Philanthropy Project donations made (filtered to cause areas matching AI safety)". Retrieved July 27, 2017. 
  33. "OpenAI — General Support". Open Philanthropy Project. December 15, 2017. Retrieved May 6, 2018. 
  34. "Pinboard on Twitter". Twitter. Retrieved May 8, 2018. What the actual fuck… “Open Philanthropy” dude gives a $30M grant to his roommate / future brother-in-law. Trumpy! 
  35. "OpenAI makes humanity less safe". Compass Rose. April 13, 2017. Retrieved May 6, 2018. 
  36. "OpenAI makes humanity less safe". LessWrong. Retrieved May 6, 2018. 
  37. "OpenAI donations received". Retrieved May 6, 2018. 
  38. Naik, Vipul. "I'm having a hard time understanding the rationale...". Retrieved May 8, 2018. 
  39. Simoneaux, Brent; Stegman, Casey. "Open Source Stories: The People Behind OpenAI". Retrieved May 5, 2018.  In the HTML source, last-publish-date is shown as Tue, 25 Apr 2017 04:00:00 GMT as of 2018-05-05.
  40. "Profile of the people behind OpenAI • r/OpenAI". reddit. April 7, 2017. Retrieved May 5, 2018. 
  41. "The People Behind OpenAI". Hacker News. July 23, 2017. Retrieved May 5, 2018. 
  42. John Mannes (April 7, 2017). "OpenAI sets benchmark for sentiment analysis using an efficient mLSTM". TechCrunch. Retrieved March 2, 2018. 
  43. "OpenAI Baselines: DQN". OpenAI Blog. November 28, 2017. Retrieved May 5, 2018. 
  44. "openai/baselines". GitHub. Retrieved May 5, 2018. 
  45. "[1706.03741] Deep reinforcement learning from human preferences". Retrieved March 2, 2018. 
  46. gwern (June 3, 2017). "June 2017 news - Gwern.net". Retrieved March 2, 2018. 
  47. "Two Giants of AI Team Up to Head Off the Robot Apocalypse". WIRED. Retrieved March 2, 2018. A new paper from the two organizations on a machine learning system that uses pointers from humans to learn a new task, rather than figuring out its own—potentially unpredictable—approach, follows through on that. Amodei says the project shows it's possible to do practical work right now on making machine learning systems less able to produce nasty surprises. 
  48. Jordan Crook (August 12, 2017). "OpenAI bot remains undefeated against world's greatest Dota 2 players". TechCrunch. Retrieved March 2, 2018. 
  49. "Did Elon Musk's AI champ destroy humans at video games? It's complicated". The Verge. August 14, 2017. Retrieved March 2, 2018. 
  50. "Elon Musk's $1 billion AI startup made a surprise appearance at a $24 million video game tournament — and crushed a pro gamer". Business Insider. August 11, 2017. Retrieved March 3, 2018. 
  51. Cade Metz (August 13, 2017). "Teaching A.I. Systems to Behave Themselves". The New York Times. Retrieved May 5, 2018. 
  52. "[1709.04326] Learning with Opponent-Learning Awareness". Retrieved March 2, 2018. 
  53. gwern (August 16, 2017). "September 2017 news - Gwern.net". Retrieved March 2, 2018. 
  54. "AI Sumo Wrestlers Could Make Future Robots More Nimble". WIRED. Retrieved March 3, 2018. 
  55. Appolonia, Alexandra; Gmoser, Justin (October 20, 2017). "Elon Musk's artificial intelligence company created virtual robots that can sumo wrestle and play soccer". Business Insider. Retrieved March 3, 2018. 
  56. Cade Metz (November 6, 2017). "A.I. Researchers Leave Elon Musk Lab to Begin Robotics Start-Up". The New York Times. Retrieved May 5, 2018. 
  57. Vincent, James (December 1, 2017). "Artificial intelligence isn't as clever as we think, but that doesn't stop it being a threat". The Verge. Retrieved March 2, 2018. 
  58. "[1802.07228] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation". Retrieved February 24, 2018. 
  59. "Preparing for Malicious Uses of AI". OpenAI Blog. February 21, 2018. Retrieved February 24, 2018. 
  60. "The Malicious Use of Artificial Intelligence". Malicious AI Report. Retrieved February 24, 2018. 
  61. "Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla". The Verge. February 21, 2018. Retrieved March 2, 2018. 
  62. Simonite, Tom. "Why Artificial Intelligence Researchers Should Be More Paranoid". WIRED. Retrieved March 2, 2018. 
  63. "OpenAI Supporters". OpenAI Blog. February 21, 2018. Retrieved March 1, 2018. 
  64. "OpenAI Hackathon". OpenAI Blog. February 24, 2018. Retrieved March 1, 2018. 
  65. "Report from the OpenAI Hackathon". OpenAI Blog. March 15, 2018. Retrieved May 5, 2018. 
  66. "OpenAI Retro Contest". OpenAI. Retrieved May 5, 2018. 
  67. "Retro Contest". OpenAI Blog. April 13, 2018. Retrieved May 5, 2018. 
  68. "openai/universe". GitHub. Retrieved May 5, 2018. 
  69. "OpenAI Charter". OpenAI Blog. April 9, 2018. Retrieved May 5, 2018. 
  70. wunan (April 9, 2018). "OpenAI charter". LessWrong. Retrieved May 5, 2018. 
  71. "[D] OpenAI Charter • r/MachineLearning". reddit. Retrieved May 5, 2018. 
  72. "OpenAI Charter". Hacker News. Retrieved May 5, 2018. 
  73. Tristan Greene (April 10, 2018). "The AI company Elon Musk co-founded intends to create machines with real intelligence". The Next Web. Retrieved May 5, 2018. 
  74. Cade Metz (April 19, 2018). "A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit". The New York Times. Retrieved May 5, 2018. 
  75. ""A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit [OpenAI]" • r/reinforcementlearning". reddit. Retrieved May 5, 2018. 
  76. "gwern comments on A.I. Researchers Are Making More Than $1M, Even at a Nonprofit". Hacker News. Retrieved May 5, 2018. 
  77. "[1805.00899] AI safety via debate". Retrieved May 5, 2018. 
  78. Irving, Geoffrey; Amodei, Dario (May 3, 2018). "AI Safety via Debate". OpenAI Blog. Retrieved May 5, 2018.