Difference between revisions of "Talk:Timeline of large language models"

* {{w|BioNeMo}}

{| class="sortable wikitable"
! Year !! Month and date !! Model name !! Number of parameters !! Event type !! Details
|-
| 2020 || May 28 || Large-scale language model || || || A paper discusses the use of language models in few-shot learning, where a model pre-trained on a large corpus of text is given only a few examples of a task in its prompt, with no task-specific fine-tuning. The authors demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance. They trained GPT-3, a language model with 175 billion parameters, and tested its performance in the few-shot setting. GPT-3 achieved strong performance on many NLP tasks, including translation, question-answering, and cloze tasks, as well as tasks that require on-the-fly reasoning or domain adaptation. However, the authors also identify some datasets where GPT-3's few-shot learning struggles, as well as methodological issues related to training on large web corpora. The paper also discusses the broader societal impacts of this finding and of GPT-3 in general.<ref>{{cite journal |last1=Brown |first1=Tom B. |last2=Mann |first2=Benjamin |last3=Ryder |first3=Nick |last4=Subbiah |first4=Melanie |last5=Kaplan |first5=Jared |last6=Dhariwal |first6=Prafulla |last7=Neelakantan |first7=Arvind |last8=Shyam |first8=Pranav |last9=Sastry |first9=Girish |last10=Askell |first10=Amanda |last11=Agarwal |first11=Sandhini |last12=Herbert-Voss |first12=Ariel |last13=Krueger |first13=Gretchen |last14=Henighan |first14=Tom |last15=Child |first15=Rewon |last16=Ramesh |first16=Aditya |last17=Ziegler |first17=Daniel M. |last18=Wu |first18=Jeffrey |last19=Winter |first19=Clemens |last20=Hesse |first20=Christopher |last21=Chen |first21=Mark |last22=Sigler |first22=Eric |last23=Litwin |first23=Mateusz |last24=Gray |first24=Scott |last25=Chess |first25=Benjamin |last26=Clark |first26=Jack |last27=Berner |first27=Christopher |last28=McCandlish |first28=Sam |last29=Radford |first29=Alec |last30=Sutskever |first30=Ilya |last31=Amodei |first31=Dario |title=Language Models are Few-Shot Learners |date=2020 |doi=10.48550/arXiv.2005.14165}}</ref>
 
|-
| 2022 || April 12 || Reinforcement learning-based language model || || || A paper describes a method for training language models to act as helpful and harmless assistants using {{w|reinforcement learning}} from human feedback. The authors demonstrate that this alignment training improves performance on almost all natural language processing evaluations and is compatible with training for specialized skills such as python coding and summarization. They explore an iterated online mode of training and investigate the robustness of the approach, identifying a linear relationship between the RL reward and the square root of the {{w|Kullback–Leibler divergence}} between the policy and its initialization. The authors also perform peripheral analyses and provide samples from their models using prompts from recent related work.<ref>{{cite journal |last1=Bai |first1=Yuntao |last2=Jones |first2=Andy |last3=Ndousse |first3=Kamal |last4=Askell |first4=Amanda |last5=Chen |first5=Anna |last6=DasSarma |first6=Nova |last7=Drain |first7=Dawn |last8=Fort |first8=Stanislav |last9=Ganguli |first9=Deep |last10=Henighan |first10=Tom |last11=Joseph |first11=Nicholas |last12=Kadavath |first12=Saurav |last13=Kernion |first13=Jackson |last14=Conerly |first14=Tom |last15=El-Showk |first15=Sheer |last16=Elhage |first16=Nelson |last17=Hatfield-Dodds |first17=Zac |last18=Hernandez |first18=Danny |last19=Hume |first19=Tristan |last20=Johnston |first20=Scott |last21=Kravec |first21=Shauna |last22=Lovitt |first22=Liane |last23=Nanda |first23=Neel |last24=Olsson |first24=Catherine |last25=Amodei |first25=Dario |last26=Brown |first26=Tom |last27=Clark |first27=Jack |last28=McCandlish |first28=Sam |last29=Olah |first29=Chris |last30=Mann |first30=Ben |last31=Kaplan |first31=Jared |title=Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback |date=2022 |doi=10.48550/arXiv.2204.05862}}</ref>
 

Revision as of 07:12, 26 June 2023

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

Concepts without articles on Wikipedia

* BioNeMo

{| class="sortable wikitable"
! Year !! Month and date !! Model name !! Number of parameters !! Event type !! Details
|-
| 2020 || May 28 || Large-scale language model || || || A paper discusses the use of language models in few-shot learning, where a model pre-trained on a large corpus of text is given only a few examples of a task in its prompt, with no task-specific fine-tuning (see the illustrative prompting sketch below the table). The authors demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance. They trained GPT-3, a language model with 175 billion parameters, and tested its performance in the few-shot setting. GPT-3 achieved strong performance on many NLP tasks, including translation, question-answering, and cloze tasks, as well as tasks that require on-the-fly reasoning or domain adaptation. However, the authors also identify some datasets where GPT-3's few-shot learning struggles, as well as methodological issues related to training on large web corpora. The paper also discusses the broader societal impacts of this finding and of GPT-3 in general.[1]
|-
| 2022 || April 12 || Reinforcement learning-based language model || || || A paper describes a method for training language models to act as helpful and harmless assistants using reinforcement learning from human feedback. The authors demonstrate that this alignment training improves performance on almost all natural language processing evaluations and is compatible with training for specialized skills such as python coding and summarization. They explore an iterated online mode of training and investigate the robustness of the approach, identifying a linear relationship between the RL reward and the square root of the Kullback–Leibler divergence between the policy and its initialization (an illustrative sketch of this relationship follows the table). The authors also perform peripheral analyses and provide samples from their models using prompts from recent related work.[2]
|-
| 2022 || June 2 || || || || OpenAI publishes a blog post on the development of best practices for organizations developing or deploying large language models. The principles include prohibiting misuse of language models, mitigating unintentional harm by evaluating models, minimizing sources of bias, and collaborating with stakeholders. These practices are meant to mitigate the risks of language models and help them reach their full potential to augment human capabilities. The authors express hope that other organizations will adopt these principles and advance public discussion on language model development and deployment. The support from other organizations reflects growing societal concern over the safety of LLMs.[3]
|-
| 2022 || September || || || Competition || Nvidia announces the launch of its BioNeMo LLM service to help researchers build new artificial intelligence models for biology.[4]
|-
| 2023 || March 14 || Medical language model || || || Google shares health AI updates, including progress on its Med-PaLM 2 research, an expert-level medical large language model (LLM) that demonstrated consistently expert-level performance on medical exam questions, scoring 85%. The company has partnered with Jacaranda Health and Chang Gung Memorial Hospital to build AI models that help simplify acquiring and interpreting ultrasound images, identifying important information such as gestational age in expecting mothers and early signs of breast cancer. Google is also partnering with Mayo Clinic to extend the reach of its AI model, with the goal of helping more patients receive radiotherapy treatment sooner. Additionally, Google works with partners on the ground to bring its research on AI-powered chest X-ray screening for tuberculosis (TB) into the care setting.[5]
|}
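The few-shot setting described in the 2020 entry specifies a task entirely through the prompt: a short task description plus a handful of worked examples, with no gradient updates or fine-tuning. The following is a minimal illustrative sketch of how such a prompt is assembled; the helper function and translation examples are assumptions for illustration and are not code from the cited paper.

<syntaxhighlight lang="python">
# Minimal sketch of few-shot prompting (illustrative; not from the cited paper).
# The task is specified purely in the prompt via a few examples; the model's
# weights are never updated for the task.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model is expected to continue from here
    return "\n".join(lines)

examples = [
    ("cheese", "fromage"),
    ("sea otter", "loutre de mer"),
]
prompt = build_few_shot_prompt("Translate English to French.", examples, "peppermint")
print(prompt)
# The resulting string would be sent to whatever text-completion endpoint is
# available (a hypothetical complete(prompt) call); the completion is the prediction.
</syntaxhighlight>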
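The 2022 April 12 entry notes an approximately linear relationship between the RL reward and the square root of the Kullback–Leibler divergence between the trained policy and its initialization. The sketch below only illustrates the shape of that relationship; the toy discrete distributions, slope, and intercept are assumed values, not figures from the paper.

<syntaxhighlight lang="python">
# Illustrative sketch (not the paper's code): the cited result is that reward
# grows roughly linearly in sqrt(D_KL(policy || initial policy)) during RLHF.
import math

def kl_divergence(p, q):
    """KL divergence D_KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def predicted_reward(policy, init_policy, slope=1.0, intercept=0.0):
    """Hypothetical linear model: reward ~ intercept + slope * sqrt(KL(policy || init))."""
    return intercept + slope * math.sqrt(kl_divergence(policy, init_policy))

init = [0.25, 0.25, 0.25, 0.25]    # initial policy over four actions
policy = [0.40, 0.30, 0.20, 0.10]  # policy after some RLHF updates
print(predicted_reward(policy, init))  # grows as the policy drifts from its initialization
</syntaxhighlight>

Under this reading, the KL divergence measures how far the fine-tuned policy has drifted from its initialization, and the observed reward scales with the square root of that drift; the slope and intercept would be fit empirically per training run.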
  1. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (2020). "Language Models are Few-Shot Learners". doi:10.48550/arXiv.2005.14165. 
  2. Bai, Yuntao; Jones, Andy; Ndousse, Kamal; Askell, Amanda; Chen, Anna; DasSarma, Nova; Drain, Dawn; Fort, Stanislav; Ganguli, Deep; Henighan, Tom; Joseph, Nicholas; Kadavath, Saurav; Kernion, Jackson; Conerly, Tom; El-Showk, Sheer; Elhage, Nelson; Hatfield-Dodds, Zac; Hernandez, Danny; Hume, Tristan; Johnston, Scott; Kravec, Shauna; Lovitt, Liane; Nanda, Neel; Olsson, Catherine; Amodei, Dario; Brown, Tom; Clark, Jack; McCandlish, Sam; Olah, Chris; Mann, Ben; Kaplan, Jared (2022). "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback". doi:10.48550/arXiv.2204.05862. 
  3. "Best practices for deploying language models". openai.com. Retrieved 17 March 2023. 
  4. "Nvidia boosts generative AI for biology with BioNeMo". VentureBeat. 12 January 2023. Retrieved 11 March 2023. 
  5. "Our latest health AI research updates". Google. 14 March 2023. Retrieved 21 March 2023.