2024-06-20 | Reading time: about 26 minutes

AI說書 - 從0開始 - 36

I want to share a little every day about the technology stack underlying LLMs, built up from the bottom, and keep each article within a three-minute read, so that no one feels pressured yet everyone can grow a little each day.


In AI說書 - 從0開始 - 0 through AI說書 - 從0開始 - 35, we completed the walkthrough of Chapter 1 of the book Transformers for Natural Language Processing and Computer Vision, Denis Rothman, 2024.


The chapter's references are listed below:

  • Bommasani et al., 2021, On the Opportunities and Risks of Foundation Models: https://arxiv.org/abs/2108.07258
  • Rishi Bommasani, Dilara Soylu, Thomas I. Liao, Kathleen A. Creel, and Percy Liang, 2023, Ecosystem Graphs: The Social Footprint of Foundation Models: https://arxiv.org/abs/2303.15772
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin, 2017, Attention is All You Need: https://arxiv.org/abs/1706.03762
  • Chen et al., 2021, Evaluating Large Language Models Trained on Code: https://arxiv.org/abs/2107.03374
  • Microsoft AI: https://innovation.microsoft.com/en-us/ai-at-scale
  • OpenAI: https://openai.com/
  • Google AI: https://ai.google/
  • Google Trax: https://github.com/google/trax
  • AllenNLP: https://allennlp.org/
  • Hugging Face: https://huggingface.co/
  • Google Cloud TPU: https://cloud.google.com/tpu/docs/intro-to-tpu


Additional reading is listed below:

  • Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, 2023, GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models: https://arxiv.org/abs/2303.10130
  • Jussi Heikkilä, Julius Rissanen, and Timo Ali-Vehmas, 2023, Coopetition, standardization, and general purpose technologies: A framework and an application: https://www.sciencedirect.com/science/article/pii/S0308596122001902
  • NVIDIA blog on Foundation Models: https://blogs.nvidia.com/blog/2023/03/13/what-are-foundation-models/
  • On Markov chains: https://mathshistory.st-andrews.ac.uk/Biographies/Markov/