Timeline of large language models

From Timelines
Revision as of 11:14, 7 March 2023

This is a timeline of large language models.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

Big picture

Time period | Development summary | More details

Full timeline

Year | Month and date | Event type | Details
2021 | May | | Google announces its chatbot LaMDA but does not release it publicly.
2022 | April | | OpenAI reveals DALL-E 2.
2023 | February 24 | Study | A paper proposes LLM-Augmenter, a system that improves large language models with external knowledge and automated feedback. It adds plug-and-play modules to a black-box LLM that ground responses in external knowledge and iteratively revise them using feedback generated by utility functions. Validated on task-oriented dialog and open-domain question answering, the system significantly reduces hallucinations without sacrificing fluency or informativeness. The source code and models are publicly available.[1]
2023 | March 1 | Study | A paper introduces a method for training language models such as ChatGPT to understand concepts precisely, using succinct representations based on category theory. The representations provide concept-wise invariance properties and support a new learning algorithm that can accurately learn complex concepts or fix misconceptions. The approach also yields a hierarchical decomposition of the representations, which can be manually verified by examining each part individually.[2]
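The LLM-Augmenter entry above describes a retrieve–generate–critique loop: ground a response in retrieved evidence, score it with a utility function, and feed the critique back to the model until the response is judged well-grounded. A minimal sketch of that loop, with hypothetical `llm`, `retrieve_evidence`, and `utility_score` callables standing in for the paper's actual components (not the authors' implementation):

```python
def generate_with_feedback(llm, retrieve_evidence, utility_score, query,
                           max_iters=3, threshold=0.8):
    """Hedged sketch of an LLM-Augmenter-style loop.

    llm(prompt) -> str: black-box language model.
    retrieve_evidence(query) -> str: external knowledge lookup.
    utility_score(response, evidence) -> (float, str): score and critique.
    All three are hypothetical stand-ins for the paper's modules.
    """
    evidence = retrieve_evidence(query)              # ground in external knowledge
    prompt = f"Evidence: {evidence}\nQuestion: {query}"
    response = llm(prompt)
    for _ in range(max_iters):
        score, critique = utility_score(response, evidence)
        if score >= threshold:                       # judged sufficiently grounded
            break
        # feed the automated critique back into the model and revise
        prompt = (f"Evidence: {evidence}\nQuestion: {query}\n"
                  f"Previous answer: {response}\n"
                  f"Feedback: {critique}\nRevise:")
        response = llm(prompt)
    return response
```

The key design point the paper highlights is that the LLM itself stays a black box: grounding and feedback are bolted on as external modules rather than requiring fine-tuning.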

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by FIXME.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. Peng, Baolin; Galley, Michel; He, Pengcheng; Cheng, Hao; Xie, Yujia; Hu, Yu; Huang, Qiuyuan; Liden, Lars; Yu, Zhou; Chen, Weizhu; Gao, Jianfeng (1 March 2023). "Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback". arXiv:2302.12813 [cs]. doi:10.48550/arXiv.2302.12813. 
  2. Yuan, Yang (2023). "Succinct Representations for Concepts". arXiv:2303.00446 [cs]. doi:10.48550/arXiv.2303.00446.