I want to share one small piece of "LLM, built up from the bottom of the stack" each day, keeping every article under three minutes to read, so there's no pressure, but you still grow a little every day.
A quick inventory of the material I have on hand so far:
Today we're going to train our very own Tokenizer. Let's first illustrate the goal: suppose I have a sentence "... the tokenizer ...". After passing through the Tokenizer it becomes:
There's an odd-looking symbol "Ġ" in the output, which stands for a space. Next, each of those tokens is mapped to its own integer ID, as shown below:
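To make the Ġ convention concrete, here is a minimal pure-Python sketch of the idea (a hypothetical simplification, not the actual HuggingFace implementation): GPT-2-style byte-level BPE remaps the space byte to the printable character "Ġ" so that a leading space becomes part of the token itself.

```python
def byte_level_pretokenize(text):
    """Toy illustration of the 'Ġ' space-marker convention.

    Real byte-level BPE remaps all 256 byte values to printable
    characters; here we only show the space -> 'Ġ' part.
    """
    words = []
    for i, word in enumerate(text.split(" ")):
        if word == "":
            continue
        # Every word except the very first one carried a leading space,
        # which the byte-level alphabet renders as "Ġ".
        words.append(("Ġ" + word) if i > 0 else word)
    return words

print(byte_level_pretokenize("the tokenizer works"))
# ['the', 'Ġtokenizer', 'Ġworks']
```

This is why tokens in the picture above show up as `Ġthe`, `Ġtokenizer`, and so on: the space is not thrown away, it travels with the following word.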
With the idea covered, let's implement it:
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

# Collect every .txt file under the current directory as training data.
paths = [str(x) for x in Path(".").glob("**/*.txt")]

file_contents = []
for path in paths:
    try:
        with open(path, "r", encoding="utf-8", errors="replace") as file:
            file_contents.append(file.read())
    except Exception as e:
        print(f"Error reading {path}: {e}")

text = "\n".join(file_contents)

# Train a byte-level BPE tokenizer on the collected text.
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    [text],
    vocab_size=52000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
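Once training finishes, you can check what the tokenizer actually does. Below is a self-contained sketch: since your `.txt` corpus isn't available here, it retrains on a tiny in-memory corpus (an assumption for illustration; the `corpus` list and the small `vocab_size=300` are made up) and then encodes a sentence.

```python
from tokenizers import ByteLevelBPETokenizer

# Tiny stand-in corpus so the snippet runs on its own; in the article's
# setup you would use the tokenizer trained on your .txt files instead.
corpus = ["the tokenizer turns text into subword tokens"] * 10

tok = ByteLevelBPETokenizer()
tok.train_from_iterator(
    corpus,
    vocab_size=300,  # byte-level alphabet is 256 symbols, so this allows only a few merges
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)

enc = tok.encode("the tokenizer")
print(enc.tokens)  # tokens carry the "Ġ" space marker; exact splits depend on training
print(enc.ids)     # the integer IDs the model will actually see
```

Decoding the IDs (`tok.decode(enc.ids)`) gives back the original string, and `tok.save("my-tokenizer.json")` persists the vocabulary and merge rules for later reuse.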