Timeline of transformers

Revision as of 21:26, 6 March 2023

This is a timeline of transformers, the neural network architecture that underpins modern large language models and chatbots.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
2017 | June | | Google researchers publish "Attention Is All You Need", first describing the transformer architecture that would turbocharge the power of chatbots.
2018 | June 11 | | OpenAI releases a paper entitled "Improving Language Understanding by Generative Pre-Training", in which it introduces the Generative Pre-trained Transformer (GPT).[1]
2019 | February 14 | | OpenAI releases Generative Pre-trained Transformer 2 (GPT-2).
2020 | June 11 | | OpenAI releases Generative Pre-trained Transformer 3 (GPT-3) in beta.
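The core operation of the transformer architecture introduced in the 2017 entry above is scaled dot-product attention. The following is a minimal illustrative sketch (not the authors' reference implementation); the function name and toy dimensions are chosen here for clarity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V,
    the operation at the heart of the transformer (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted average of the values

# Toy self-attention: 3 tokens, embedding size 4
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4)
```

In self-attention, as above, queries, keys, and values all come from the same token embeddings, which lets every position attend to every other position in a single step.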

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by FIXME.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived from the original (PDF) on 26 January 2021. Retrieved 23 January 2021.