Timeline of transformers


This is a timeline of transformers, the deep learning architecture behind models such as GPT and ChatGPT.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
2017 | June | | Google researchers first describe the transformer architecture, in the paper "Attention Is All You Need"; the architecture would go on to underpin modern chatbots and large language models.
2018 | June 11 | | OpenAI releases a paper entitled "Improving Language Understanding by Generative Pre-Training", which introduces the Generative Pre-trained Transformer (GPT).[1]
2019 | February 14 | | OpenAI releases Generative Pre-trained Transformer 2 (GPT-2).
2020 | June 11 | | OpenAI releases Generative Pre-trained Transformer 3 (GPT-3) in beta.
2022 | June 14 | | Google releases all Switch Transformer models in T5X/JAX, including the 1.6-trillion-parameter Switch-C and the 395-billion-parameter Switch-XXL models.[2] The Switch Transformer replaces the single feed-forward network (FFN) layer of a standard transformer block with a layer containing multiple FFNs, known as experts, and routes each token to one of them.[3] (A minimal sketch of this switching layer appears after this table.)
2023 | February 15 | | A paper presents a pilot study that evaluates the cognitive abilities of two recently released generative transformer models, ChatGPT and DALL-E 2, in decision-making and spatial reasoning. The study constructs input prompts following neutral a priori guidelines and finds that DALL-E 2 can generate at least one correct image for each spatial reasoning prompt, but that most of the images generated are incorrect. Similarly, ChatGPT demonstrates some level of rational decision-making, but many of its decisions violate at least one of the axioms of the classical Von Neumann-Morgenstern utility theorem. ChatGPT's outputs tend to be unpredictable: it can make irrational decisions on simpler problems while drawing correct conclusions for more complex bet structures. The paper discusses the challenges of scaling up such cognitive evaluations for generative models and of conducting them with a closed set of answer keys.[4]
2023 | February 18 | | A paper evaluates the performance of Generative Pre-trained Transformer (GPT) models for machine translation, covering aspects such as the quality of different GPT models, the effect of prompting strategies, robustness to domain shifts, and document-level translation. The experiments cover eighteen translation directions involving high- and low-resource languages, as well as non-English-centric translations. The results show that GPT models achieve competitive translation quality for high-resource languages but have limited capabilities for low-resource languages. Hybrid approaches, which combine GPT models with other translation systems, can further enhance translation quality. The paper provides insights for researchers and practitioners on the potential and limitations of GPT models for translation.[5]
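
The switching feed-forward layer described in the 2022 Switch Transformer entry above can be illustrated with a short sketch. The following is a minimal, illustrative PyTorch example, not Google's T5X/JAX implementation (which adds expert capacity limits, a load-balancing loss, and expert parallelism); the class name SwitchFFN and the sizes used are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchFFN(nn.Module):
    """Illustrative switch layer: several expert FFNs replace the single FFN
    of a standard transformer block, and a router sends each token to
    exactly one expert (top-1 routing)."""

    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # token -> expert logits
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, d_model); treat it as a flat list of tokens.
        tokens = x.reshape(-1, x.shape[-1])
        probs = F.softmax(self.router(tokens), dim=-1)  # routing probabilities
        gate, expert_idx = probs.max(dim=-1)            # pick the top-1 expert per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Scale by the gate probability so the routing decision stays differentiable.
                out[mask] = gate[mask].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)

# Example usage: 8 experts, but each token only activates one of them.
layer = SwitchFFN(d_model=512, d_ff=2048, num_experts=8)
y = layer(torch.randn(2, 16, 512))

Because each token activates only one expert, adding experts grows the parameter count without a matching growth in the computation per token; in the released models above, the experts are additionally sharded across many devices, which is how the totals reach hundreds of billions or trillions of parameters.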

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by FIXME.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived from the original (PDF) on 26 January 2021. Retrieved 23 January 2021. 
  2. "https://twitter.com/LiamFedus/status/1536791574612303872". Twitter. Retrieved 8 March 2023.  External link in |title= (help)
  3. Davis, Jonathan (2 May 2021). "Understanding Google's Switch Transformer". Medium. Retrieved 8 March 2023. 
  4. Tang, Zhisheng; Kejriwal, Mayank (15 February 2023). "A Pilot Evaluation of ChatGPT and DALL-E 2 on Decision Making and Spatial Reasoning". arXiv:2302.09068 [cs]. doi:10.48550/arXiv.2302.09068. Retrieved 7 March 2023. 
  5. Hendy, Amr; Abdelrehim, Mohamed; Sharaf, Amr; Raunak, Vikas; Gabr, Mohamed; Matsushita, Hitokazu; Kim, Young Jin; Afify, Mohamed; Awadalla, Hany Hassan (17 February 2023). "How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation". arXiv:2302.09210 [cs]. doi:10.48550/arXiv.2302.09210.