I want to share a little of "the technology LLMs are built on, from the bottom of the stack up" every day, and keep each article within a three-minute read, so nobody feels pressured yet everyone still grows a little each day.
- Installing the modules required for training: AI說書 - 從0開始 - 135
- Loading the dataset: AI說書 - 從0開始 - 136
- A first look at the dataset: AI說書 - 從0開始 - 137
- Data preprocessing and tokenization: AI說書 - 從0開始 - 138
- Data padding and the training/validation split: AI說書 - 從0開始 - 139
- Data Loader configuration: AI說書 - 從0開始 - 140
- A first look at the BERT model: AI說書 - 從0開始 - 141
- Loading the BERT model: AI說書 - 從0開始 - 142
- Grouping the Optimizer parameters by decay rate: AI說書 - 從0開始 - 143
- How to inspect the parameters of a specific BERT layer: AI說書 - 從0開始 - 144
- Inspecting the Optimizer decay-rate groups: AI說書 - 從0開始 - 145
- Configuring the Optimizer and the training evaluation function: AI說書 - 從0開始 - 146
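The training loop below reuses the objects built up in installments 135 through 146: device, the BERT classification model, the optimizer over the decay/no-decay parameter groups, and the two DataLoaders. As a quick reminder, a minimal sketch of that setup could look like the following; the checkpoint name, num_labels, batch size, learning rate, and tensor variable names here are illustrative assumptions, not necessarily the exact values used in the earlier articles:

# Rough sketch of the setup assumed by the training program (see installments 135-146).
# All concrete values below (checkpoint, num_labels, batch_size, lr) are assumptions.
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import BertForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# BERT encoder with a sequence-classification head, moved to the target device
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels = 2)
model.to(device)

# DataLoaders built from already-tokenized and padded tensors (installments 138-140);
# train_inputs, train_masks, train_labels, etc. are hypothetical variable names
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_dataloader = DataLoader(train_data, sampler = RandomSampler(train_data), batch_size = 32)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_dataloader = DataLoader(validation_data, sampler = SequentialSampler(validation_data), batch_size = 32)

# Optimizer over the decay / no-decay parameter groups prepared in installments 143-146
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr = 2e-5)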
With that setup in place, we can now write the training program:
# These imports may already be in place from the earlier installments of this series
import torch
from tqdm import trange
from transformers import get_linear_schedule_with_warmup

epochs = 4

# Total number of optimizer updates: batches per epoch times the number of epochs
total_steps = len(train_dataloader) * epochs

# Learning rate decays linearly to zero over total_steps, with no warmup
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = total_steps)

# Record the loss of every training step so it can be plotted later
train_loss_set = []

for _ in trange(epochs, desc = "Epoch"):
    # Set our model to training mode (as opposed to evaluation mode)
    model.train()

    # Running totals for this epoch
    tr_loss = 0
    nb_tr_examples = 0
    nb_tr_steps = 0

    for step, batch in enumerate(train_dataloader):
        # Move the batch to the target device and unpack it
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch

        # Clear gradients accumulated from the previous step
        optimizer.zero_grad()

        # Forward pass; supplying labels makes the model also return the loss
        outputs = model(b_input_ids, token_type_ids = None, attention_mask = b_input_mask, labels = b_labels)
        loss = outputs['loss']
        train_loss_set.append(loss.item())

        # Backward pass, parameter update, and learning-rate update
        loss.backward()
        optimizer.step()
        scheduler.step()

        tr_loss += loss.item()
        nb_tr_examples += b_input_ids.size(0)
        nb_tr_steps += 1

    print("Train loss: {}".format(tr_loss / nb_tr_steps))

    # Validate on the held-out set at the end of each epoch
    model.eval()
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0

    for batch in validation_dataloader:
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch

        # No gradients are needed during validation
        with torch.no_grad():
            logits = model(b_input_ids, token_type_ids = None, attention_mask = b_input_mask)

        # Move predictions and labels to the CPU and accumulate per-batch accuracy
        logits = logits['logits'].detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
        eval_accuracy += tmp_eval_accuracy
        nb_eval_steps += 1

    print("Validation Accuracy: {}".format(eval_accuracy / nb_eval_steps))