Talk:Timeline of large language models

== Extended Timeline ==

These events were removed from the main timeline.
{| class="sortable wikitable"
 
{| class="sortable wikitable"
 
! Year !! Month and date !! Model name !! Number of parameters !! Event type !! Details
 
! Year !! Month and date !! Model name !! Number of parameters !! Event type !! Details
 
|-
| 2018 || April 1 || Marian || || Early development || A paper introduces Marian, a highly efficient Neural Machine Translation (NMT) framework written entirely in C++. The framework includes an integrated automatic differentiation engine based on dynamic computation graphs. The authors discuss the design of the encoder-decoder framework and demonstrate that Marian, as a research-friendly toolkit, achieves fast training and translation speeds, making it a valuable tool for NMT research and development.<ref>{{cite journal |last1=Junczys-Dowmunt |first1=Marcin |last2=Grundkiewicz |first2=Roman |last3=Dwojak |first3=Tomasz |last4=Hoang |first4=Hieu |last5=Heafield |first5=Kenneth |last6=Neckermann |first6=Tom |last7=Seide |first7=Frank |last8=Germann |first8=Ulrich |last9=Aji |first9=Alham Fikri |last10=Bogoychev |first10=Nikolay |last11=Martins |first11=André F. T. |last12=Birch |first12=Alexandra |title=Marian: Fast Neural Machine Translation in C++ |date=2018 |doi=10.48550/arXiv.1804.00344}}</ref> Encoder-decoder NMT models of the kind implemented in Marian are important precursors of today's large language models.
|-
| 2022 || June 2 || || || || {{w|OpenAI}} publishes a blog post on the development of best practices for organizations developing or deploying large language models. The principles include prohibiting misuse of language models, mitigating unintentional harm by evaluating models, minimizing sources of bias, and collaborating with stakeholders. These practices are meant to mitigate the risks of language models and realize their full potential to augment human capabilities. The authors express hope that other organizations will adopt these principles and advance public discussion on language model development and deployment. Endorsement by other organizations reflects growing societal concern over the safety of LLMs.<ref>{{cite web |title=Best practices for deploying language models |url=https://openai.com/blog/best-practices-for-deploying-language-models |website=openai.com |access-date=17 March 2023}}</ref>
|-
| 2022 || September || || || Competition || {{w|Nvidia}} announces the launch of its {{w|BioNeMo}} LLM service to help researchers build new artificial intelligence models for biology.<ref>{{cite web |title=Nvidia boosts generative AI for biology with BioNeMo |url=https://venturebeat.com/ai/nvidia-boosts-generative-ai-for-biology-with-bionemo/ |website=VentureBeat |access-date=11 March 2023 |date=12 January 2023}}</ref>
|-
| 2023 || February 9 || || || || A paper presents a collaborative design framework that combines interactive evolution and LLMs to simulate the human design process. The framework uses interactive evolution to exploit user feedback and LLMs for the complex creative task of recombining and varying ideas. The process begins with a brief and a set of candidate designs, generated by a language model or proposed by users. Users provide feedback to an interactive {{w|genetic algorithm}} that selects, recombines, and mutates the most promising designs. The framework was evaluated on three game design tasks with human designers collaborating remotely.<ref>{{cite journal |last1=Lanzi |first1=Pier Luca |last2=Loiacono |first2=Daniele |title=ChatGPT and Other Large Language Models as Evolutionary Engines for Online Interactive Collaborative Game Design |journal=arXiv:2303.02155 [cs] |date=9 February 2023 |doi=10.48550/arXiv.2303.02155 |url=https://arxiv.org/abs/2303.02155}}</ref>
|-
| 2023 || February 14 || || || Research || A paper presents a framework called ChatCAD, which integrates LLMs with {{w|computer-aided diagnosis}} (CAD) networks for medical images. ChatCAD uses LLMs to enhance the output of multiple CAD networks by summarizing and reorganizing the information presented in natural language text format. This approach merges the strengths of LLMs' medical domain knowledge and logical reasoning with the vision understanding capability of existing medical-image CAD models. The goal is to create a system that is more user-friendly and understandable for patients than conventional CAD systems. The paper suggests that LLMs can also be used to improve the performance of vision-based medical-image CAD models in the future.<ref>{{cite journal |last1=Wang |first1=Sheng |last2=Zhao |first2=Zihao |last3=Ouyang |first3=Xi |last4=Wang |first4=Qian |last5=Shen |first5=Dinggang |title=ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using Large Language Models |date=2023 |doi=10.48550/arXiv.2302.07257}}</ref>
|-
| 2023 || February 17 || || || Research || A paper surveys the state of the art of hybrid language model architectures and strategies for complex question-answering (QA, CQA, CPS). While very large language models are good at leveraging public data on standard problems, they may require specific architectures, knowledge, skills, tasks, methods, sensitive data, performance, human approval, and versatile feedback to tackle more specific complex questions or problems. The paper identifies the key elements used with LLMs to solve complex questions or problems and discusses challenges associated with complex QA. The paper also reviews current solutions and promising strategies, using elements such as hybrid LLM architectures, human-in-the-loop reinforcement learning, prompting adaptation, neuro-symbolic and structured knowledge grounding, {{w|program synthesis}}, and others.<ref>{{cite journal |last1=Daull |first1=Xavier |last2=Bellot |first2=Patrice |last3=Bruno |first3=Emmanuel |last4=Martin |first4=Vincent |last5=Murisasco |first5=Elisabeth |title=Complex QA and language models hybrid architectures, Survey |journal=arXiv:2302.09051 [cs] |date=17 February 2023 |doi=10.48550/arXiv.2302.09051 |url=https://arxiv.org/abs/2302.09051}}</ref>
|-
| 2023 || February 24 || || || Research || A paper proposes a system called LLM-Augmenter that improves large language models by using external knowledge and automated feedback. The system adds plug-and-play modules to a black-box LLM to ground responses in external knowledge and iteratively improve responses using feedback generated by utility functions. The system is validated on task-oriented dialog and open-domain question answering, showing a significant reduction in hallucinations without sacrificing fluency and informativeness. The source code and models are publicly available.<ref>{{cite journal |last1=Peng |first1=Baolin |last2=Galley |first2=Michel |last3=He |first3=Pengcheng |last4=Cheng |first4=Hao |last5=Xie |first5=Yujia |last6=Hu |first6=Yu |last7=Huang |first7=Qiuyuan |last8=Liden |first8=Lars |last9=Yu |first9=Zhou |last10=Chen |first10=Weizhu |last11=Gao |first11=Jianfeng |title=Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback |journal=arXiv:2302.12813 [cs] |date=1 March 2023 |doi=10.48550/arXiv.2302.12813 |url=https://arxiv.org/abs/2302.12813}}</ref> (A minimal sketch of this feedback loop appears below the table.)
|-
| 2023 || February 27 || || || || A paper proposes a framework that simplifies reward design in {{w|reinforcement learning}} (RL) by using natural language as a proxy for the reward function. The framework prompts a large language model, such as GPT-3, to evaluate the agent's behavior against the desired behavior described in the prompt and outputs a corresponding reward signal. The RL agent uses this reward to update its behavior. The approach is evaluated in three tasks, and the results demonstrate that RL agents trained with the framework are well-aligned with the user's objectives and outperform RL agents trained with reward functions learned via {{w|supervised learning}}.<ref>{{cite journal |last1=Kwon |first1=Minae |last2=Xie |first2=Sang Michael |last3=Bullard |first3=Kalesha |last4=Sadigh |first4=Dorsa |title=Reward Design with Language Models |journal=arXiv:2303.00001 [cs] |date=27 February 2023 |doi=10.48550/arXiv.2303.00001 |url=https://arxiv.org/abs/2303.00001}}</ref> (A sketch of this reward scheme appears below the table.)
|-
| 2023 || February 27 || || || Research || A paper discusses the use of open source code to train large language models (LLMs) and the potential security, privacy, and licensing implications of this practice. LLMs for code are commonly trained on large unsanitized corpora of source code scraped from the internet, leading to the memorization and verbatim emission of content by the models. The paper argues that the use of {{w|copyleft}} code to train LLMs is a legal and ethical dilemma, and provides actionable recommendations to address this issue. Overall, the paper highlights the importance of considering the implications of using [[w:Open-source software|open source code]] in training LLMs.<ref>{{cite journal |last1=Al-Kaswan |first1=Ali |last2=Izadi |first2=Maliheh |title=The (ab)use of Open Source Code to Train Large Language Models |journal=arXiv:2302.13681 [cs] |date=28 February 2023 |doi=10.48550/arXiv.2302.13681 |url=https://arxiv.org/abs/2302.13681}}</ref>
|-
| 2023 || February 28 || || || || GEMBA (GPT Estimation Metric Based Assessment) is presented as a GPT-based metric for evaluating translation quality both with and without a reference translation. The authors evaluate four prompt variants in two modes and investigate seven versions of GPT models, including ChatGPT. Their method achieves state-of-the-art accuracy in both modes compared to human labels and provides insight into the usefulness of pre-trained, generative large language models for translation quality assessment.<ref>{{cite journal |last1=Kocmi |first1=Tom |last2=Federmann |first2=Christian |title=Large Language Models Are State-of-the-Art Evaluators of Translation Quality |journal=arXiv:2302.14520 [cs] |date=28 February 2023 |doi=10.48550/arXiv.2302.14520 |url=https://arxiv.org/abs/2302.14520}}</ref><ref>{{cite web |title=Large Language Models Are State-of-the-Art Evaluators of Translation Quality |url=https://www.arxiv-vanity.com/papers/2302.14520/ |website=arxiv-vanity.com |access-date=16 May 2023}}</ref> (A sketch of the scoring prompt appears below the table.)
|-
| 2023 || February 28 || || || || A paper discusses In-Context Instruction Learning (ICIL), a new approach to instruction learning for LLMs that significantly improves zero-shot task generalization performance. ICIL uses a single fixed prompt that concatenates cross-task demonstrations to evaluate all tasks, and it is complementary to instruction-based fine-tuning. The authors demonstrate that ICIL improves the performance of both pretrained and instruction-fine-tuned models, including the most powerful instruction-fine-tuned baseline (text-davinci-003), by 9.3%.<ref>{{cite journal |last1=Ye |first1=Seonghyeon |last2=Hwang |first2=Hyeonbin |last3=Yang |first3=Sohee |last4=Yun |first4=Hyeongu |last5=Kim |first5=Yireun |last6=Seo |first6=Minjoon |title=In-Context Instruction Learning |journal=arXiv:2302.14691 [cs] |date=28 February 2023 |doi=10.48550/arXiv.2302.14691 |url=https://arxiv.org/abs/2302.14691}}</ref> (A sketch of the fixed demonstration prompt appears below the table.)
|-
| 2023 || March 1 || || || Research || A paper introduces a method to train language models like ChatGPT to understand concepts precisely using succinct representations based on {{w|category theory}}. The representations provide concept-wise invariance properties and a new learning algorithm that can accurately learn complex concepts or fix misconceptions. The approach also allows for the generation of a hierarchical decomposition of the representations, which can be manually verified by examining each part individually.<ref>{{cite journal |last1=Yuan |first1=Yang |title=Succinct Representations for Concepts |date=2023 |doi=10.48550/arXiv.2303.00446}}</ref>
|-
| 2023 || March 3 || Two-stage framework<ref>{{cite web |title=Prophet |url=https://github.com/MILVLG/prophet |website=github.com |publisher=Vision and Language Group@ MIL |access-date=16 May 2023 |date=16 May 2023}}</ref> || || Research || A paper proposes a framework called Prophet that uses answer heuristics to prompt LLMs for knowledge-based visual question answering (VQA). Previous methods used LLMs to acquire necessary knowledge for answering, but these methods did not fully activate the capacity of LLMs due to insufficient input information. Prophet trains a vanilla VQA model on a knowledge-based VQA dataset without external knowledge and extracts two types of answer heuristics: answer candidates and answer-aware examples. These answer heuristics are encoded into prompts to enhance the capacity of LLMs. Prophet outperforms existing state-of-the-art methods on two challenging knowledge-based VQA datasets, OK-VQA and A-OKVQA, delivering 61.1% and 55.7% accuracies on their testing sets, respectively.<ref>{{cite journal |last1=Shao |first1=Zhenwei |last2=Yu |first2=Zhou |last3=Wang |first3=Meng |last4=Yu |first4=Jun |title=Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering |journal=arXiv:2303.01903 [cs] |date=3 March 2023 |doi=10.48550/arXiv.2303.01903 |url=https://arxiv.org/abs/2303.01903}}</ref> (A sketch of the answer-heuristic prompt appears below the table.)
|-
| 2023 || March 7 || SynthIE || || || A paper presents SynthIE, a novel approach that leverages LLMs for synthetic data generation even for tasks that LLMs cannot solve directly. It works by prompting the LLM to generate plausible input text for a given structured output, exploiting this task asymmetry to create high-quality, large-scale data. The methodology is demonstrated on closed information extraction, where ground-truth data is scarce. SynthIE produces a dataset of 1.8 million data points that surpasses existing datasets in quality according to human evaluation. The resulting SynthIE models, fine-tuned on this data, outperform comparable models by significant margins, achieving a 57-point improvement in micro F1 and a 79-point improvement in macro F1. All associated resources are publicly available.<ref>{{cite journal |last1=Josifoski |first1=Martin |last2=Sakota |first2=Marija |last3=Peyrard |first3=Maxime |last4=West |first4=Robert |title=Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction |journal=arXiv:2303.04132 [cs] |date=7 March 2023 |doi=10.48550/arXiv.2303.04132 |url=https://arxiv.org/abs/2303.04132}}</ref> (A sketch of the inverted generation step appears below the table.)
|-
| 2023 || March 14 || || || || {{w|Google}} shares health AI updates, including progress on Med-PaLM 2, its expert-level medical large language model (LLM) research, which demonstrates consistently expert-level performance on medical exam questions, scoring 85%. The company has partnered with Jacaranda Health and Chang Gung Memorial Hospital to build AI models that help simplify acquiring and interpreting ultrasound images, identifying important information such as gestational age in expectant mothers and early signs of breast cancer. Google also partners with the Mayo Clinic to extend the reach of its AI models, with the goal of helping more patients receive radiotherapy treatment sooner. Additionally, Google works with partners on the ground to bring its research on AI-powered chest X-ray screening for tuberculosis (TB) into the care setting.<ref>{{cite web |title=Our latest health AI research updates |website=Google |date=14 March 2023 |access-date=21 March 2023}}</ref>
|}
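
== Illustrative sketches ==

The sketches below illustrate, in minimal form, some of the prompting and feedback schemes described in the table. They are reconstructions for illustration only: the function names, prompt wordings, and the placeholder <code>llm()</code> calls are assumptions, not the papers' actual code.

'''LLM-Augmenter (February 24 entry).''' A black-box LLM is prompted with retrieved evidence; a utility function scores how well the response is grounded, and automated natural-language feedback is appended to the prompt for another round. The retriever, utility, and feedback functions here are toy placeholders.

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    """Placeholder for a call to a black-box LLM API."""
    return "(model response)"

def retrieve_evidence(query: str) -> list[str]:
    """Placeholder knowledge retriever (e.g. web or database search)."""
    return ["(evidence snippet)"]

def utility_score(response: str, evidence: list[str]) -> float:
    """Placeholder utility function, e.g. overlap between response and evidence."""
    return 0.0

def feedback(response: str, evidence: list[str]) -> str:
    """Placeholder automated feedback generator."""
    return "The response is not grounded in the evidence; revise it."

def grounded_answer(question: str, max_rounds: int = 3, threshold: float = 0.5) -> str:
    """Iteratively revise an LLM response until it is grounded in evidence."""
    evidence = retrieve_evidence(question)
    prompt = "Evidence:\n" + "\n".join(evidence) + f"\n\nQuestion: {question}\nAnswer:"
    response = llm(prompt)
    for _ in range(max_rounds):
        if utility_score(response, evidence) >= threshold:
            break  # sufficiently grounded; stop revising
        # Otherwise, feed the automated feedback back into the prompt and retry.
        prompt += f"\n\nFeedback: {feedback(response, evidence)}\nRevised answer:"
        response = llm(prompt)
    return response
</syntaxhighlight>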
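'''Reward design with language models (February 27 entry).''' The LLM stands in for a hand-engineered reward function: it is shown the user's objective and a textual summary of the agent's behavior and asked whether they match, and the answer is mapped to a scalar reward. This assumes the environment can describe episodes as text; the names are illustrative.

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    """Placeholder for a GPT-3-style completion API."""
    return "yes"

def llm_reward(objective: str, episode_summary: str) -> float:
    """Map the LLM's yes/no judgment of the agent's behavior to a binary reward."""
    prompt = (
        f"Desired behavior: {objective}\n"
        f"Observed behavior: {episode_summary}\n"
        "Does the observed behavior match the desired behavior? Answer yes or no."
    )
    answer = llm(prompt).strip().lower()
    return 1.0 if answer.startswith("yes") else 0.0

# In an ordinary RL loop this signal replaces the hand-written reward:
#   reward = llm_reward("split the money fairly", describe(trajectory))
#   agent.update(trajectory, reward)
</syntaxhighlight>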
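'''GEMBA (first February 28 entry).''' One translation is scored per prompt on a scale from 0 to 100, with the human reference optionally included. The prompt below paraphrases the paper's direct-assessment idea; the exact wording and the <code>llm</code> call are assumptions.

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    """Placeholder for a GPT-style completion API."""
    return "87"

def gemba_score(src: str, hyp: str, src_lang: str, tgt_lang: str,
                ref: str | None = None) -> float:
    """Score one translation on a continuous 0-100 scale, with or without a reference."""
    prompt = (
        f"Score the following translation from {src_lang} to {tgt_lang} "
        "on a continuous scale from 0 to 100, where 0 means no meaning "
        "preserved and 100 means a perfect translation.\n"
        f'{src_lang} source: "{src}"\n'
    )
    if ref is not None:  # reference-based mode; omit for quality estimation
        prompt += f'{tgt_lang} human reference: "{ref}"\n'
    prompt += f'{tgt_lang} translation: "{hyp}"\nScore:'
    return float(llm(prompt).strip())

# Quality-estimation mode (no reference):
score = gemba_score("Der Hund bellt.", "The dog barks.", "German", "English")
</syntaxhighlight>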
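'''In-Context Instruction Learning (second February 28 entry).''' A single fixed prompt, built by concatenating demonstrations drawn from ''other'' tasks, is prepended to every evaluation instance, so no per-task examples are needed. The demonstrations below are invented stand-ins for the paper's curated set.

<syntaxhighlight lang="python">
# A fixed set of cross-task demonstrations, reused verbatim for every target task.
FIXED_DEMOS = [
    {"instruction": "Classify the sentiment of the sentence as positive or negative.",
     "input": "I loved this film.",
     "output": "positive"},
    {"instruction": "Translate the sentence into French.",
     "input": "Good morning.",
     "output": "Bonjour."},
]

def icil_prompt(task_instruction: str, task_input: str) -> str:
    """Concatenate the fixed demonstrations in front of the target instance."""
    parts = [
        f"Instruction: {d['instruction']}\nInput: {d['input']}\nOutput: {d['output']}"
        for d in FIXED_DEMOS
    ]
    parts.append(f"Instruction: {task_instruction}\nInput: {task_input}\nOutput:")
    return "\n\n".join(parts)

print(icil_prompt("Answer the question.", "What is the capital of France?"))
</syntaxhighlight>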
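'''Prophet (March 3 entry).''' A vanilla VQA model, trained without external knowledge, supplies two answer heuristics for each question: ranked answer candidates with confidences, and answer-aware in-context examples. Both are encoded into the LLM prompt. The field names and prompt layout are assumptions; a caption stands in for the image content.

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    """Placeholder for a GPT-3-style completion API."""
    return "(answer)"

def format_candidates(candidates: list[tuple[str, float]]) -> str:
    """Render (answer, confidence) pairs from the VQA model as prompt text."""
    return ", ".join(f"{answer} ({conf:.2f})" for answer, conf in candidates)

def prophet_prompt(caption: str, question: str,
                   candidates: list[tuple[str, float]],
                   examples: list[dict]) -> str:
    """Encode the VQA model's answer heuristics into the LLM prompt."""
    parts = []
    for ex in examples:  # answer-aware examples selected by the VQA model
        parts.append(f"Context: {ex['caption']}\nQuestion: {ex['question']}\n"
                     f"Candidates: {format_candidates(ex['candidates'])}\n"
                     f"Answer: {ex['answer']}")
    parts.append(f"Context: {caption}\nQuestion: {question}\n"
                 f"Candidates: {format_candidates(candidates)}\nAnswer:")
    return "\n\n".join(parts)

# Candidates come from the trained VQA model, e.g. [("umbrella", 0.92), ("parasol", 0.05)].
answer = llm(prophet_prompt("a woman on a beach", "What is she holding?",
                            [("umbrella", 0.92), ("parasol", 0.05)], []))
</syntaxhighlight>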
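'''SynthIE (March 7 entry).''' The generation direction is inverted: instead of extracting triplets from text (the hard target task), triplet sets are sampled and the LLM is prompted to write text that expresses them, yielding (text, triplets) training pairs. The prompt wording is an assumption.

<syntaxhighlight lang="python">
def llm(prompt: str) -> str:
    """Placeholder for a large LLM used only at data-generation time."""
    return "(generated sentence expressing the facts)"

def synthesize_example(triplets: list[tuple[str, str, str]]) -> dict:
    """Generate one synthetic closed-IE training pair from a structured output."""
    facts = "\n".join(f"({s}; {r}; {o})" for s, r, o in triplets)
    prompt = ("Write a short, fluent text that expresses exactly the following "
              f"facts and nothing else:\n{facts}\nText:")
    return {"text": llm(prompt), "triplets": triplets}

# Repeating this over many sampled triplet sets yields a large synthetic
# dataset on which a smaller extraction model can then be fine-tuned.
pair = synthesize_example([("Marie Curie", "award received", "Nobel Prize in Physics")])
</syntaxhighlight>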