
AI說書 - 從0開始 - 21

I want to share a little of "the technology stack behind LLMs, built from the ground up" every day, keeping each article within a three-minute read, so that no one feels too much pressure and everyone can still grow a little each day.


In AI說書 - 從0開始 - 20, we explained the Supervised and Unsupervised view of GPT models. Continuing to quote from the book Transformers for Natural Language Processing and Computer Vision, Denis Rothman, 2024, here are other common misconceptions:


  • Discriminative and Generative

Since the arrival of ChatGPT, the term “Generative AI” has often been used to describe GPT-like models. First, we must note that GPT models were not the first to produce Generative AI; recurrent neural networks were also generative.

Also, if we follow the science, saying that GPT-like models perform Generative AI tasks is not entirely true. In the previous Supervised and Unsupervised point, we mentioned that a “generative” model could perform the supervised task of inferring an output such as “true” or “false” when the input is a statement or a question. This is not a Generative AI task! It is a discriminative AI task.
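
To make that concrete, here is a minimal sketch of my own (not code from the book) of using a GPT-like autoregressive model discriminatively: instead of letting it write freely, we only compare how likely it considers the candidate labels "true" and "false". The gpt2 checkpoint and the prompt wording are illustrative assumptions.

```python
# Sketch only: a GPT-like model used for a discriminative task by scoring
# two candidate labels instead of generating free text.
# The "gpt2" checkpoint and the prompt wording are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Statement: The Eiffel Tower is in Paris. True or false? Answer:"
prompt_len = tokenizer(prompt, return_tensors="pt")["input_ids"].shape[1]

def label_logprob(label: str) -> float:
    # Return the log-probability the model assigns to the label tokens
    # when they follow the prompt.
    ids = tokenizer(prompt + " " + label, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    label_ids = ids[0, prompt_len:]
    return log_probs[prompt_len - 1:].gather(1, label_ids.unsqueeze(1)).sum().item()

# Pick the label the model finds more likely: a discriminative use of the model.
print(max(["true", "false"], key=label_logprob))
```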

Another situation arises with summarizing. Part of summarizing is discriminative: finding keywords, topics, and names. Another part can be generative, when the system infers new tokens to summarize a text.
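
As a small aside (my own sketch, not from the book), an abstractive summarization pipeline shows both sides at once: the summary keeps the key topics of the source while also inferring tokens that never appear in it. The checkpoint name below is an illustrative assumption.

```python
# Sketch only: abstractive summarization keeps the key topics (the
# discriminative side) while also inferring new tokens that are not in the
# source text (the generative side). The checkpoint is an illustrative choice.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
text = ("Transformers replace recurrence with attention, which lets them model "
        "long-range dependencies in text and scale to very large training sets, "
        "powering modern chat assistants, search, and translation systems.")
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```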

Finally, the full power of the autoregressive nature (generating a token and adding it to the input) of language modeling (adding a new token at the end of a sequence) is attained when an LLM invents a story; that is generative AI.
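
Here is a minimal sketch of that autoregressive loop (my own illustration; greedy decoding and the gpt2 checkpoint are simplifying assumptions): the model predicts one token, we append it to the input, and repeat.

```python
# Sketch only: the autoregressive loop of language modeling.
# Greedy decoding and the "gpt2" checkpoint are simplifying assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Once upon a time", return_tensors="pt")["input_ids"]

for _ in range(30):                               # generate up to 30 new tokens
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()              # most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # add it to the input
    if next_id.item() == tokenizer.eos_token_id:  # stop if the model ends the story
        break

print(tokenizer.decode(input_ids[0]))
```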

A GPT-like model can be discriminative, generative, or both, depending on the task to perform.


  • Task-Specific and General-Purpose Models

Task-specific models are often opposed to general-purpose models such as GPT-like LLMs. Task-specific models are trained to perform very well on specific tasks, and this is accurate to some extent. However, LLMs trained as general-purpose systems can perform surprisingly well on specific tasks, as shown by their results on various professional exams (law, medical). Also, transformer models can now perform image recognition and generation.
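
To illustrate the contrast, the snippet below (my own sketch, not from the book) compares a task-specific model fine-tuned only for sentiment classification with a general-purpose GPT-like model steered toward the same task by a prompt. Both checkpoint names are illustrative assumptions, and a small model like gpt2 will answer far less reliably than a modern LLM.

```python
# Sketch only: task-specific vs. general-purpose models.
# Both checkpoints are illustrative assumptions, not recommendations.
from transformers import pipeline

review = "The plot was predictable but the acting was superb."

# Task-specific: a classifier fine-tuned for exactly one task (sentiment).
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier(review))

# General-purpose: a GPT-like model steered toward the same task with a prompt.
generator = pipeline("text-generation", model="gpt2")
prompt = f"Review: {review}\nSentiment (positive or negative):"
print(generator(prompt, max_new_tokens=3)[0]["generated_text"])
```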


  • Interaction and Automation

A key point in analyzing the relationship between humans and LLMs is automation. To what degree should we automate our tasks? Some tasks cannot be automated by LLMs. ChatGPT and other assistants cannot make life-and-death, moral, ethical, and business decisions. Asking a system to do that will endanger an organization.

Automation needs to be thoroughly analyzed before implementing LLMs.

