Talk:Timeline of large language models
Sample questions
The following are some interesting questions that can be answered by reading this timeline:
Concepts without articles on Wikipedia
Year | Month and date | Model name | Number of parameters | Event type | Details
---|---|---|---|---|---
2022 | March 29 | Large-scale transformer language model | | | A paper investigates the optimal model size and number of training tokens for a transformer language model under a given compute budget. The researchers find that current large language models are significantly undertrained, and that model size and the number of training tokens should be scaled equally for compute-optimal training. They test this hypothesis by training a predicted compute-optimal model, Chinchilla, which uses the same compute budget as Gopher but has 70B parameters and 4× more data. Chinchilla outperforms Gopher, GPT-3, Jurassic-1, and Megatron-Turing NLG on a range of downstream evaluation tasks and reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, more than a 7% improvement over Gopher (a compute-allocation sketch follows the table).[1]
2020 | May 28 | Large-scale language model | | | A paper discusses few-shot learning with language models: whereas the prevailing approach pre-trains a model on a large corpus of text and then fine-tunes it for each specific task, the authors demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, with tasks specified purely through text interaction and no gradient updates. They train GPT-3, a language model with 175 billion parameters, and test its performance in the few-shot setting (a prompt-construction sketch follows the table). GPT-3 achieves strong performance on many NLP tasks, including translation, question answering, and cloze tasks, as well as tasks that require on-the-fly reasoning or domain adaptation. However, the authors also identify some datasets where GPT-3's few-shot learning struggles, as well as methodological issues related to training on large web corpora. The paper also discusses the broader societal impacts of this finding and of GPT-3 in general.[2]
2020 | July | Neural text generation model | | | A paper discusses the limitations of neural text generation models in open-ended tasks like language modeling and story generation, which stem from the standard likelihood training and approximate decoding objectives. The authors analyze these limitations specifically for abstractive document summarization and find that such models tend to hallucinate content that is unfaithful to the input document. The paper presents a human evaluation of several neural abstractive summarization systems, highlighting the substantial amount of hallucinated content in all model-generated summaries. However, the authors also show that pretrained models generate more faithful and factual summaries, as evaluated by humans. They propose that textual entailment measures may be a better evaluation metric for faithfulness than standard metrics, leading to better training and decoding criteria (an entailment-scoring sketch follows the table).[3]
2022 | April 12 | Reinforcement learning-based language model | | | A paper describes a method for training language models to act as helpful and harmless assistants using reinforcement learning from human feedback. The authors demonstrate that this alignment training improves performance on almost all natural language processing evaluations and is compatible with training for specialized skills such as Python coding and summarization. They explore an iterated online mode of training and investigate the robustness of the approach, identifying a roughly linear relationship between the RL reward and the square root of the Kullback–Leibler divergence between the policy and its initialization (a KL-tracking sketch follows the table). The authors also perform peripheral analyses and provide samples from their models using prompts from recent related work.[4]
2022 | June 2 | | | | OpenAI publishes a blog post on the development of best practices for organizations developing or deploying large language models. The principles include prohibiting misuse of language models, mitigating unintentional harm by evaluating models, minimizing sources of bias, and collaborating with stakeholders. These practices are meant to mitigate the risks of language models and realize their full potential to augment human capabilities. The authors express hope that other organizations will adopt the principles and advance public discussion on language model development and deployment; the support from other organizations reflects growing public concern over the safety of LLMs.[5]
2022 | September | | | Competition | Nvidia announces the launch of its BioNeMo LLM service to help researchers build new artificial intelligence models for biology.[6]
2023 | March 14 | Medical language model | | | Google shares health AI updates, including progress on Med-PaLM 2, its expert-level medical large language model (LLM) research, which demonstrated consistently expert-level performance on medical exam questions, scoring 85%. The company has partnered with Jacaranda Health and Chang Gung Memorial Hospital to build AI models that can help simplify acquiring and interpreting ultrasound images, identifying important information like gestational age in expectant mothers and supporting early detection of breast cancer. It also partners with Mayo Clinic to extend the reach of its AI models, with the goal of helping more patients receive radiotherapy treatment sooner. Additionally, Google works with partners on the ground to bring its research on AI-powered chest X-ray screening for tuberculosis (TB) into the care setting.[7]
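The Chinchilla result in the March 29, 2022 row reduces to a simple allocation rule: training compute is roughly C ≈ 6·N·D FLOPs for N parameters and D training tokens, and scaling both equally gives N_opt ∝ √C and D_opt ∝ √C. Below is a minimal sketch of that rule, anchored to the paper's headline Chinchilla figures (70B parameters, roughly 1.4T tokens); the anchor constants are back-derived for illustration and are not the paper's fitted coefficients.

```python
import math

def compute_optimal(C_flops: float) -> tuple[float, float]:
    """Split a compute budget C ~= 6*N*D equally in scale between
    parameters N and training tokens D, so both grow as sqrt(C).

    Anchored so that a Chinchilla-sized budget (~5.9e23 FLOPs) maps to
    about 70e9 parameters and 1.4e12 tokens; these anchors are
    illustrative assumptions, not the paper's fitted values.
    """
    C_ref, N_ref, D_ref = 5.9e23, 70e9, 1.4e12  # Chinchilla anchor point
    scale = math.sqrt(C_flops / C_ref)          # both N and D scale as sqrt(C)
    return N_ref * scale, D_ref * scale

N, D = compute_optimal(5.9e23)
print(f"N ≈ {N:.2e} params, D ≈ {D:.2e} tokens")  # ~7.00e+10, ~1.40e+12
print(f"tokens per parameter ≈ {D / N:.0f}")      # ~20, the familiar rule of thumb
```

Under this rule, a model trained like GPT-3 (175B parameters on ~300B tokens) is undertrained: for the same budget, a smaller model on more tokens scores better downstream.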
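Few-shot learning in the GPT-3 sense (May 28, 2020 row) means conditioning the model on a handful of solved demonstrations in the prompt rather than updating any weights. A minimal sketch of how such a prompt is assembled; the Q:/A: demonstration format is a common convention assumed here, not something the paper prescribes, though the English–French pairs echo the paper's illustrative translation examples.

```python
def few_shot_prompt(instruction: str,
                    examples: list[tuple[str, str]],
                    query: str) -> str:
    """Build a few-shot prompt: a task description, K solved demonstrations,
    then the unsolved query. The model is expected to complete the final
    'A:' line; no gradient updates are involved."""
    lines = [instruction, ""]
    for q, a in examples:
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines += [f"Q: {query}", "A:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea otter", "loutre de mer")],
    "peppermint",
)
print(prompt)  # send this string to any completion-style LLM endpoint
```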
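The summarization row (July 2020) motivates scoring faithfulness with textual entailment: a summary is faithful only if the source document entails it. A hedged sketch using an off-the-shelf MNLI classifier from Hugging Face transformers; the choice of roberta-large-mnli, the truncation of long documents, and the use of the raw entailment probability as the score are illustrative assumptions, not the paper's exact protocol.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# roberta-large-mnli classifies a premise/hypothesis pair as
# (contradiction, neutral, entailment).
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def entailment_score(document: str, summary: str) -> float:
    """Probability that the document (premise) entails the summary
    (hypothesis). Low scores flag likely hallucinated content."""
    inputs = tokenizer(document, summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    return probs[0, 2].item()  # index 2 = entailment in this model's label order

doc = "The company reported revenue of $3.2 billion in Q4, up 8% year over year."
print(entailment_score(doc, "Q4 revenue rose 8% to $3.2 billion."))    # high
print(entailment_score(doc, "The company laid off 10% of its staff."))  # low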
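The RLHF row (April 12, 2022) reports that the learned reward grows roughly linearly in the square root of D_KL(policy ‖ initialization), so doubling the KL "budget" buys only about √2 more reward. A minimal sketch of the KL term being tracked, with random logits standing in for real model outputs; summing per-token KLs over the sequence is a standard approximation assumed here.

```python
import torch
import torch.nn.functional as F

def policy_init_kl(policy_logits: torch.Tensor,
                   init_logits: torch.Tensor) -> torch.Tensor:
    """KL(policy || init) summed over a sequence.
    Both tensors have shape (seq_len, vocab_size): per-token next-token
    logits from the tuned policy and from its frozen initialization."""
    logp = F.log_softmax(policy_logits, dim=-1)
    logq = F.log_softmax(init_logits, dim=-1)
    return (logp.exp() * (logp - logq)).sum()

# Toy illustration with random logits (stand-ins for real model outputs).
torch.manual_seed(0)
policy = torch.randn(16, 50257)                 # hypothetical 16-token sequence, GPT-2-size vocab
init = policy + 0.1 * torch.randn_like(policy)  # policy has drifted slightly from its init
kl = policy_init_kl(policy, init).item()
print(f"KL ≈ {kl:.2f} nats; sqrt(KL) ≈ {kl ** 0.5:.2f}")  # reward tracks the square root
```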
1. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Buchatskaya, Elena; Cai, Trevor; Rutherford, Eliza; Casas, Diego de Las; Hendricks, Lisa Anne; Welbl, Johannes; Clark, Aidan; Hennigan, Tom; Noland, Eric; Millican, Katie; Driessche, George van den; Damoc, Bogdan; Guy, Aurelia; Osindero, Simon; Simonyan, Karen; Elsen, Erich; Rae, Jack W.; Vinyals, Oriol; Sifre, Laurent (2022). "Training Compute-Optimal Large Language Models". doi:10.48550/arXiv.2203.15556.
2. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (2020). "Language Models are Few-Shot Learners". doi:10.48550/arXiv.2005.14165.
3. Maynez, Joshua; Narayan, Shashi; Bohnet, Bernd; McDonald, Ryan (July 2020). "On Faithfulness and Factuality in Abstractive Summarization". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics: 1906–1919. doi:10.18653/v1/2020.acl-main.173.
4. Bai, Yuntao; Jones, Andy; Ndousse, Kamal; Askell, Amanda; Chen, Anna; DasSarma, Nova; Drain, Dawn; Fort, Stanislav; Ganguli, Deep; Henighan, Tom; Joseph, Nicholas; Kadavath, Saurav; Kernion, Jackson; Conerly, Tom; El-Showk, Sheer; Elhage, Nelson; Hatfield-Dodds, Zac; Hernandez, Danny; Hume, Tristan; Johnston, Scott; Kravec, Shauna; Lovitt, Liane; Nanda, Neel; Olsson, Catherine; Amodei, Dario; Brown, Tom; Clark, Jack; McCandlish, Sam; Olah, Chris; Mann, Ben; Kaplan, Jared (2022). "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback". doi:10.48550/arXiv.2204.05862.
5. "Best practices for deploying language models". openai.com. Retrieved 17 March 2023.
6. "Nvidia boosts generative AI for biology with BioNeMo". VentureBeat. 12 January 2023. Retrieved 11 March 2023.
7. "Our latest health AI research updates". Google. 14 March 2023. Retrieved 21 March 2023.