2024-09-18 | Reading time: about 22 minutes

AI說書 - 從0開始 - 181 | Downloading the Pretraining Dataset and Preparing Dependencies

I want to share a little of "LLM technology, built up from the bottom layers" every day, keeping each article under three minutes of reading time, so there's no pressure and you still grow a bit each day.


Taking stock of the materials currently on hand:


The dataset used for this pretraining run is Kaggle's Customer Support on Twitter dataset; see https://www.kaggle.com/datasets/thoughtvector/customer-support-on-twitter


The process requires a Kaggle account. The setup steps are as follows:

from google.colab import drive 
drive.mount('/content/drive')

import os
import json

try:
    import kaggle
except ImportError:
    # Install the Kaggle client if it is not already available in Colab
    !pip install kaggle
    import kaggle

# Load the Kaggle API credentials saved in Google Drive
with open(os.path.expanduser("drive/MyDrive/files/kaggle.json"), "r") as f:
    kaggle_credentials = json.load(f)

kaggle_username = kaggle_credentials["username"]
kaggle_key = kaggle_credentials["key"]
os.environ["KAGGLE_USERNAME"] = kaggle_username
os.environ["KAGGLE_KEY"] = kaggle_key
kaggle.api.authenticate()

!kaggle datasets download -d thoughtvector/customer-support-on-twitter
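The credential-loading step above can be sanity-checked in isolation. This sketch writes a dummy kaggle.json to a temporary directory (the username and key are made-up placeholders, not real credentials, and the Drive path from the notebook is replaced by a temp path) and then runs the same parse-and-export logic that `kaggle.api.authenticate()` relies on:

```python
import json
import os
import tempfile

# Dummy credentials in the same shape Kaggle issues (placeholders, not real)
creds = {"username": "demo_user", "key": "demo_key_123"}

with tempfile.TemporaryDirectory() as tmp:
    cred_path = os.path.join(tmp, "kaggle.json")
    with open(cred_path, "w") as f:
        json.dump(creds, f)

    # Same logic as the notebook: read the file, then export the
    # environment variables the kaggle package looks for.
    with open(cred_path, "r") as f:
        kaggle_credentials = json.load(f)
    os.environ["KAGGLE_USERNAME"] = kaggle_credentials["username"]
    os.environ["KAGGLE_KEY"] = kaggle_credentials["key"]

print(os.environ["KAGGLE_USERNAME"])  # → demo_user
```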


Then unzip the downloaded archive:

import zipfile

with zipfile.ZipFile('/content/customer-support-on-twitter.zip', 'r') as zip_ref:
    zip_ref.extractall('/content/')
print("File Unzipped!")
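It can be useful to inspect an archive's contents before extracting. This sketch builds a tiny stand-in zip in a temp directory (the file name and CSV columns are illustrative, not taken from the real download) and applies the same extract step as above:

```python
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    # Create a stand-in archive (the real one is customer-support-on-twitter.zip)
    zip_path = os.path.join(tmp, "demo.zip")
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.writestr("twcs/sample.csv", "tweet_id,text\n1,hello\n")

    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        names = zip_ref.namelist()   # inspect contents before extracting
        zip_ref.extractall(tmp)

    # Confirm the file landed where extractall put it
    extracted = os.path.exists(os.path.join(tmp, "twcs", "sample.csv"))

print(names, extracted)  # → ['twcs/sample.csv'] True
```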


Next, install the required packages:

!pip install accelerate==0.29.3
!pip install transformers==4.40.1
!pip install datasets==2.16.0  # Hugging Face datasets for data loading and preprocessing
from accelerate import Accelerator
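With `datasets` installed, the extracted CSV can be loaded via `datasets.load_dataset("csv", data_files=...)`. As a dependency-free preview of what that CSV parsing does, here is the same step with the standard library's csv module; the column names and rows below are illustrative stand-ins, not the dataset's actual schema:

```python
import csv
import io

# Tiny stand-in for the extracted support-tweet CSV (illustrative columns)
sample = "tweet_id,text\n1,my order is late\n2,thanks for the help\n"

rows = list(csv.DictReader(io.StringIO(sample)))
print(len(rows), rows[0]["text"])  # → 2 my order is late
```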

