This article covers a daily-bar (day-K) data downloader for the Japanese stock market, targeting common stocks listed on the Tokyo Stock Exchange (TSE). The code was generated by AI; I handled testing and integration. The design carries over the "everything in a single Cell" philosophy from the China article, and adds multi-round pre-screening, batched downloads, per-symbol rescue, and resume support.
📦 File location: /content/drive/MyDrive/各國股票檔案/jp-share/dayK/
📝 Log file: /Log/日股日K資料下載器/download_jp_20251024_105429.txt
💾 Parameter snapshot: /jp-share/lists/jp_state.json
2️⃣ Feature Highlights and Module Design
This Japanese TSE module carries over the series' core design philosophy, tuned for the characteristics of the Tokyo Stock Exchange. Most importantly:
📦 Every step runs in a single Cell — no staged execution, simple to deploy, stable to resume.
The module's main features:
- ✅ List fetching: prefer the tokyo-stock-exchange package; on failure, automatically fall back to the JPX English-page Excel sheet
- 🔍 Pre-screening: three-state classification (ok / retry / bad) + two retry rounds + per-symbol rescue
- ⏯️ Resumable runs: a manifest records per-symbol status so interrupted runs can continue
- 📦 Batched downloads: 60 symbols per batch, preferring Yahoo Finance's period mode
- 🔨 Per-symbol rescue: automatic fallback to single-symbol downloads when a batch fails
- 🧪 Data validation: sampled files are checked for data within the last 60 days, with a validation report
- 💾 Run snapshot: jp_state.json is saved automatically for later comparison and debugging
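The three-state idea can be shown in a minimal stand-alone sketch, simplified from the script's quick_symbol_ok_tri: here the network probe is injected as a callable so the classification rule can be demonstrated (and tested) without hitting Yahoo Finance, and only rate-limit errors map to "retry".

```python
def classify(probe):
    """Classify one symbol as 'ok', 'retry', or 'bad'.

    `probe` is any zero-argument callable returning a (possibly empty)
    sequence of rows, or raising an exception. Rate-limit errors map to
    'retry' so the symbol gets another chance in a later round; an empty
    result means the symbol has no data and is 'bad'.
    """
    try:
        rows = probe()
    except Exception as e:
        msg = str(e).lower()
        if any(k in msg for k in ("too many requests", "429", "rate limit")):
            return "retry"
        return "bad"
    return "ok" if rows else "bad"
```

In the real script the "retry" bucket goes through two more rounds (with 90 s and 180 s pauses) before the final per-symbol rescue.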
3️⃣ List Fetching and Sources
The Japanese list-fetching logic has three layers of insurance:
- Prefer the tokyo-stock-exchange package
  - The package ships a current TSE list (CSV format)
  - The script installs and imports the package automatically
- If the package fails, fetch the JPX English-page Excel sheet
  - URL: the official JPX list (English)
  - Columns (Local Code, Name) are detected automatically
- If both fail, use a tiny built-in default list
  - e.g. TOYOTA (7203), SONY (6758), SOFTBANK (9984)
The list is saved as jp_list_all.csv in (code, name) format, and non-4-digit codes (REITs, ETFs, bonds, etc.) are excluded automatically.
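The 4-digit filter at the end of the list step can be sketched with pandas. The column name "code" is illustrative here; the real script detects the code column from whichever source succeeded:

```python
import pandas as pd

def keep_common_stocks(df, code_col="code"):
    """Keep only rows whose code contains a 4-digit TSE common-stock code."""
    out = df.copy()
    # Pull out the first run of four digits, as the script does, then
    # drop rows without one and de-duplicate on the code.
    out[code_col] = out[code_col].astype(str).str.extract(r"(\d{4})")[0]
    out = out.dropna(subset=[code_col]).drop_duplicates(subset=[code_col])
    # Enforce that the extracted value really is exactly 4 digits.
    return out[out[code_col].str.fullmatch(r"\d{4}")]
```

This mirrors why REITs, ETFs, and bonds fall out of the list: their identifiers do not reduce to a clean 4-digit code.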
4️⃣ File Paths and Naming
The script automatically creates the following folder structure, consistent with the other markets (reconstructed here from the path constants in the script below, as the original screenshot is unavailable):

/content/drive/MyDrive/各國股票檔案/
├── jp-share/
│   ├── dayK/    ← one CSV per stock
│   └── lists/   ← list, pre-screen results, manifest, jp_state.json
└── Log/
    └── 日股日K資料下載器/    ← run logs

Each stock is saved as <code>.T.csv, for example:
- 7203.T.csv → TOYOTA
- 6758.T.csv → SONY
- 9984.T.csv → SOFTBANK
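The code→ticker→filename mapping is tiny; this sketch mirrors the script's to_symbol helper (zero-padding guards against codes that were read back from CSV as integers), while to_filename is an illustrative helper — the script builds the filename inline:

```python
def to_symbol(code4):
    """Map a TSE code to its Yahoo Finance ticker, e.g. '7203' -> '7203.T'."""
    return f"{int(code4):04d}.T"

def to_filename(code4):
    """Map a TSE code to its CSV filename, e.g. '7203' -> '7203.T.csv'."""
    return f"{to_symbol(code4)}.csv"
```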
5️⃣ Execution Flow Overview (one Cell)
- Mount Google Drive (Colab environment)
- Show the disclaimer (research and teaching use)
- Fetch the list (package → Excel → default)
- Pre-screen codes (three-state classification + rescue)
- Create or load the manifest status file
- Download daily bars in batches (60 per batch, 34 batches in this run)
- Per-symbol rescue download (when a batch fails)
- Save CSVs and update statuses
- Sample-validate data quality (is there data within the last 60 days?)
- Save the run-parameter snapshot (jp_state.json)
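The "34 batches" figure is simply ceiling division of the pre-screened symbol count by BATCH_SIZE (60), matching the script's total_batches line; 34 batches therefore implies between 1,981 and 2,040 surviving symbols:

```python
def num_batches(n_symbols, batch_size=60):
    """Ceiling division, as in the script's total_batches calculation."""
    return (n_symbols + batch_size - 1) // batch_size
```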
6️⃣ Suggested Talking Points (for serialized posts)
- Purpose of the tokyo-stock-exchange package and its fallback mechanism
- Parsing the JPX Excel sheet (column detection, cleaning)
- Three-state pre-screening and the two retry rounds
- Batch downloads and the per-symbol rescue strategy
- File naming and storage format
- Sample validation and data-quality checks
- Run-parameter snapshots and log output
7️⃣ Closing and Preview
This Japanese module carries over the core ideas of "stable, resumable, verifiable," adding multi-round pre-screening and list fallbacks tailored to the TSE. It is well suited to building a historical database of Japanese equities, or as a teaching example for financial data processing.
# -*- coding: utf-8 -*-
# 🚀 get_jp_stocks_tse_reliable_resume_final.py
# (2025-10) JP: slower pacing + multi-round retries + per-symbol rescue + resumable checkpoints (final path fix)
#
# Features:
# - Disclaimer (research/teaching use)
# - List source priority: tokyo_stock_exchange package CSV → JPX English-page Excel → tiny default (last resort)
# - Pre-screening: three states (ok / retry / bad) → two retry rounds → per-symbol probe for still-throttled codes
# - Batch downloads: yfinance in small batches with pauses, period mode preferred
# - Resume / checkpoint: list, prefilter_ok, and manifest are saved → reruns continue automatically
# - Storage format: <code>.T.csv with columns (date, open, high, low, close, volume)
# - On completion: simple sample check that the last 60 days contain data
import os, io, re, time, random, logging, warnings, sys, subprocess, json
import pandas as pd
import yfinance as yf
from pathlib import Path
from tqdm import tqdm
# ====== Silence noisy loggers ======
for lg in ["yfinance", "urllib3", "requests"]:
    logging.getLogger(lg).setLevel(logging.CRITICAL)
    logging.getLogger(lg).propagate = False
warnings.filterwarnings("ignore")
# ========== Parameters and paths (adjusted to the jp-share structure) ==========
MARKET_CODE = "jp-share"             # folder name (corrected back to jp-share)
DATA_SUBDIR = "dayK"                 # daily-bar subfolder
PROJECT_NAME = "日股日K資料下載器"     # project name (used for the Log folder)
# ====== Colab Drive or local ======
try:
    from google.colab import drive
    print("🔗 Mounting Google Drive...")
    drive.mount('/content/drive', force_remount=False)
    print("✅ Drive mounted")
    BASE_DIR = '/content/drive/MyDrive/各國股票檔案'
except Exception:
    print("⚠️ Not a Colab environment; using ./data")
    BASE_DIR = os.path.abspath("./data")
# Derived paths
BASE_MARKET_DIR = f"{BASE_DIR}/{MARKET_CODE}"
DATA_DIR = f'{BASE_MARKET_DIR}/{DATA_SUBDIR}'   # per-symbol CSV files
LIST_DIR = f'{BASE_MARKET_DIR}/lists'           # lists and checkpoints (lists also moved under jp-share)
LOG_PARENT_DIR = f"{BASE_DIR}/Log"              # shared Log parent, as in the US version
LOG_DIR = f'{LOG_PARENT_DIR}/{PROJECT_NAME}'    # run logs
os.makedirs(DATA_DIR, exist_ok=True)
os.makedirs(LOG_DIR, exist_ok=True)
os.makedirs(LIST_DIR, exist_ok=True)
ts_tag = pd.Timestamp.now().strftime("%Y%m%d_%H%M%S")
LOG_FILE = f'{LOG_DIR}/download_jp_{ts_tag}.txt'
# ====== Disclaimer ======
print(f"""
【Disclaimer】({ts_tag})
1) This program is for research and teaching only and is not investment advice. Use at your own risk.
2) Lists and quotes come from third parties (JPX public sheets / the tokyo-stock-exchange package / Yahoo Finance) and may be incomplete or wrong due to delays, delistings/suspensions, or API throttling.
3) Only TSE-listed common-stock codes (generally 4 digits) are attempted. For board or code changes, the official source prevails.
4) Results are for reference only; never use them as the sole basis for investment decisions.
""")
def log(msg: str):
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(f"{pd.Timestamp.now()}: {msg}\n")
    print(msg)
# ====== Tunable parameters ======
START_DATE = "2000-01-01"
END_DATE = "2099-12-31"          # far-future end date so new data is never cut off
BATCH_SIZE = 60
PAUSE_SEC = 8.0
RETRY_SLEEP_SEC = [90, 180]      # waits between the two pre-screen retry rounds
MAX_SINGLE_PROBE = 999999        # max per-symbol rescue probes (300–500 is also reasonable)
SAMPLE_LIMIT = None              # for testing (None = no limit)
# Checkpoint / resume flags (set for resume mode)
FORCE_REFRESH_LIST = False       # False: reuse an existing list
FORCE_REFILTER = False           # False: reuse existing pre-screen results
FORCE_REBUILD_MANIFEST = False   # False: load the manifest and resume automatically
# Checkpoint files
LIST_CSV = Path(LIST_DIR) / "jp_list_all.csv"
PREF_OK_CSV = Path(LIST_DIR) / "jp_prefilter_ok.csv"
MANIFEST_CSV = Path(LIST_DIR) / "jp_manifest.csv"   # status file (for resume)
STATE_JSON = Path(LIST_DIR) / "jp_state.json"       # records run parameters
# ====== yfinance: prefer period mode, fall back to start/end last ======
def safe_history(symbol: str, start: str, end: str, interval="1d", max_retries=6, base_delay=1.0):
    periods = ["max", "10y", "5y", "2y", "1y"]
    for i in range(max_retries):
        try:
            tk = yf.Ticker(symbol)
            if i < len(periods):
                p = periods[i]
                df = tk.history(period=p, interval=interval, auto_adjust=False)
            else:
                df = tk.history(start=start, end=end, interval=interval, auto_adjust=False)
            if df is not None and not df.empty:
                return df
            time.sleep(base_delay + 0.5*i + random.uniform(0, 0.7))
        except Exception:
            time.sleep(base_delay + 0.5*i + random.uniform(0, 1.0))
    return None
def standardize_df(df: pd.DataFrame) -> pd.DataFrame:
    if df is None or df.empty:
        return pd.DataFrame()
    df = df.reset_index()
    if 'Date' not in df.columns:
        first_col = df.columns[0]
        if str(first_col).lower().startswith("date"):
            df.rename(columns={first_col: 'Date'}, inplace=True)
        else:
            return pd.DataFrame()
    df['date'] = pd.to_datetime(df['Date'], errors='coerce', utc=True)
    for _ in range(2):
        try:
            df['date'] = df['date'].dt.tz_convert(None)
        except Exception:
            try:
                df['date'] = df['date'].dt.tz_localize(None)
            except Exception:
                pass
    df = df.rename(columns={'Open':'open','High':'high','Low':'low','Close':'close','Volume':'volume'})
    req = ['date','open','high','low','close','volume']
    if not all(c in df.columns for c in req):
        return pd.DataFrame()
    df = df.dropna(subset=['date'])
    for c in ['open','high','low','close','volume']:
        df[c] = pd.to_numeric(df[c], errors='coerce')
    df = df.dropna(subset=['open','high','low','close','volume'])
    df = df[df['volume'] > 0]
    df = df[(df['date'] >= pd.to_datetime(START_DATE)) & (df['date'] <= pd.to_datetime(END_DATE))]
    df = df.sort_values('date').reset_index(drop=True)
    return df[req]
# ====== Fetch the TSE list (package or JPX Excel) ======
def try_import_tokyo_stock_exchange():
    try:
        import tokyo_stock_exchange as _tsepkg
        return _tsepkg
    except Exception:
        try:
            print("📦 Installing tokyo_stock_exchange ...")
            subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", "tokyo-stock-exchange"])
            import tokyo_stock_exchange as _tsepkg
            return _tsepkg
        except Exception as e:
            print(f"⚠️ Failed to install/load tokyo_stock_exchange: {e}")
            return None
def get_tse_list_fresh():
    """
    Fetch a fresh list; returns [(code4, name), ...].
    """
    # 1) Package
    pkg = try_import_tokyo_stock_exchange()
    if pkg is not None:
        try:
            from tokyo_stock_exchange import tse
            csv_file_path = getattr(tse, "csv_file_path", None)
            file_date = getattr(tse, "get_file_date", lambda: None)()
            print(f"✅ Using the package's built-in list: {csv_file_path} (dated {file_date})")
            df = pd.read_csv(csv_file_path)
            code_col = next((c for c in df.columns if str(c).strip().lower() in ("code","銘柄コード","コード","local code","local_code","ticker")), None)
            name_col = next((c for c in df.columns if "name" in str(c).lower() or "銘柄名" in str(c) or "company" in str(c).lower()), None)
            if not code_col: code_col = df.columns[0]
            if not name_col: name_col = code_col
            df[code_col] = df[code_col].astype(str).str.extract(r"(\d{4})")[0]
            df = df.dropna(subset=[code_col]).drop_duplicates(subset=[code_col])
            rows = list(zip(df[code_col].astype(str).tolist(), df[name_col].astype(str).tolist()))
            rows = [(c, n) for c, n in rows if re.fullmatch(r"\d{4}", c)]
            if len(rows) >= 500:
                return rows
            print("⚠️ The package CSV yielded too few codes; trying the JPX Excel sheet")
        except Exception as e:
            print(f"⚠️ tokyo_stock_exchange fallback failed: {e}")
    # 2) JPX Excel (English page)
    try:
        import requests
        xls_url = "https://www.jpx.co.jp/english/markets/statistics-equities/misc/tvdivq0000001vg2-att/data_e.xls"
        r = requests.get(xls_url, timeout=60)
        r.raise_for_status()
        bio = io.BytesIO(r.content)
        df_raw = pd.read_excel(bio, header=None)
        hdr_idx = None
        for i in range(min(20, len(df_raw))):
            row = " ".join(df_raw.iloc[i].astype(str).tolist()).lower()
            if ("local" in row and "code" in row) and ("name" in row):
                hdr_idx = i; break
        if hdr_idx is None:
            raise RuntimeError(f"Could not identify JPX columns: {df_raw.head(3).to_dict(orient='records')}")
        cols = df_raw.iloc[hdr_idx].tolist()
        df = df_raw.iloc[hdr_idx+1:].copy()
        df.columns = cols
        df = df.dropna(how="all")
        code_col = None
        for cand in df.columns:
            s = str(cand).lower()
            if "local" in s and "code" in s: code_col = cand; break
            if s.strip() in ("code", "local code", "local_code"): code_col = cand; break
        name_col = None
        for cand in df.columns:
            s = str(cand).lower()
            if "name" in s: name_col = cand; break
        if not code_col: raise RuntimeError("Local Code column not found")
        if not name_col: name_col = code_col
        df[code_col] = df[code_col].astype(str).str.extract(r"(\d{4})")[0]
        df = df.dropna(subset=[code_col]).drop_duplicates(subset=[code_col])
        rows = list(zip(df[code_col].astype(str).tolist(), df[name_col].astype(str).tolist()))
        rows = [(c, n) for c, n in rows if re.fullmatch(r"\d{4}", c)]
        print(f"✅ JPX Excel list: {len(rows)} symbols")
        return rows
    except Exception as e:
        print(f"⚠️ JPX Excel failed: {e}")
    print("⚠️ JP list insufficient; using the tiny default")
    return [("7203","TOYOTA"), ("6758","SONY"), ("9984","SOFTBANK")]
def get_tse_list():
    """Prefer reading LIST_CSV; refresh only when needed."""
    if (not FORCE_REFRESH_LIST) and LIST_CSV.exists():
        df = pd.read_csv(LIST_CSV)
        rows = list(zip(df["code"].astype(str), df["name"].astype(str)))
        print(f"📄 Using the existing list: {LIST_CSV} ({len(rows)} symbols)")
        return rows
    rows = get_tse_list_fresh()
    pd.DataFrame(rows, columns=["code","name"]).to_csv(LIST_CSV, index=False)
    print(f"💾 List saved: {LIST_CSV} ({len(rows)} symbols)")
    return rows
# ====== Three-state pre-screening (multi-round retries + per-symbol rescue) ======
def quick_symbol_ok_tri(symbol: str) -> str:
    try:
        tk = yf.Ticker(symbol)
        try:
            df = tk.history(period="5d", interval="1d", auto_adjust=False)
        except Exception as e:
            if any(k in str(e).lower() for k in ["too many requests","429","rate limit"]):
                return "retry"
            df = None
        if df is not None and not df.empty:
            return "ok"
        for per, itv in [("1y","1mo"),("5y","3mo")]:
            try:
                df2 = tk.history(period=per, interval=itv, auto_adjust=False)
                if df2 is not None and not df2.empty:
                    return "ok"
            except Exception as e2:
                if any(k in str(e2).lower() for k in ["too many requests","429","rate limit"]):
                    return "retry"
        return "bad"
    except Exception:
        return "retry"
def prefilter_tri(rows):
    def tri_check(code):
        return quick_symbol_ok_tri(f"{code}.T")
    ok, retry, bad = [], [], []
    for code, name in tqdm(rows, desc="JP pre-screen (round 1)", unit="symbol"):
        s = tri_check(code)
        (ok if s=="ok" else retry if s=="retry" else bad).append((code, name))
    for round_idx, slp in enumerate(RETRY_SLEEP_SEC, start=2):
        if not retry: break
        log(f"⏳ Round {round_idx} retries needed: {len(retry)} symbols; pausing {slp}s before retrying")
        time.sleep(slp)
        new_retry = []
        for code, name in tqdm(retry, desc=f"JP pre-screen (round {round_idx})", unit="symbol"):
            s = tri_check(code)
            if s == "ok": ok.append((code, name))
            elif s == "retry": new_retry.append((code, name))
            else: bad.append((code, name))
        retry = new_retry
    still_ok, still_bad = [], []
    if retry:
        log(f"🔨 Trying direct single-symbol probes for the {len(retry)} still-throttled symbols…")
        for idx, (code, name) in enumerate(tqdm(retry, desc="JP single-symbol probe", unit="symbol")):
            if idx >= MAX_SINGLE_PROBE: break
            sym = f"{code}.T"
            df = None
            try:
                df = yf.Ticker(sym).history(period="1y", interval="1mo", auto_adjust=False)
            except Exception:
                pass
            if df is not None and not df.empty:
                still_ok.append((code, name))
            else:
                still_bad.append((code, name))
            time.sleep(0.15 + random.uniform(0, 0.2))
    ok += still_ok
    bad += still_bad
    log(f"✅ Pre-screen result: ok={len(ok)}, bad={len(bad)} (rescued {len(still_ok)} via single probes)")
    return ok
def get_prefilter_ok(rows_all):
    """Prefer reading PREF_OK_CSV; redo the pre-screen only when needed."""
    if (not FORCE_REFILTER) and PREF_OK_CSV.exists():
        df = pd.read_csv(PREF_OK_CSV)
        rows = list(zip(df["code"].astype(str), df["name"].astype(str)))
        print(f"📄 Using existing pre-screen results: {PREF_OK_CSV} ({len(rows)} symbols)")
        return rows
    ok_rows = prefilter_tri(rows_all)
    pd.DataFrame(ok_rows, columns=["code","name"]).to_csv(PREF_OK_CSV, index=False)
    print(f"💾 Pre-screen results saved: {PREF_OK_CSV} ({len(ok_rows)} symbols)")
    return ok_rows
# ====== Manifest: per-symbol status file (for resume) ======
def build_manifest(ok_rows):
    """Create or load the manifest. Columns: code,name,status,last_error,last_try"""
    if (not FORCE_REBUILD_MANIFEST) and MANIFEST_CSV.exists():
        mf = pd.read_csv(MANIFEST_CSV)
        need_cols = {"code","name","status","last_error","last_try"}
        if need_cols.issubset(set(mf.columns)):
            print(f"📄 Loaded existing manifest: {MANIFEST_CSV} ({len(mf)} rows)")
            return mf
        else:
            print("⚠️ Old manifest is missing columns; rebuilding")
    # Build a fresh one
    mf = pd.DataFrame(ok_rows, columns=["code","name"])
    mf["status"] = "pending"   # pending / done / failed / skipped
    mf["last_error"] = ""
    mf["last_try"] = ""
    # Mark symbols whose files already exist as done
    have = {f.split(".")[0] for f in os.listdir(DATA_DIR) if f.endswith(".T.csv")}
    mf.loc[mf["code"].isin(have), ["status","last_error"]] = ["done",""]
    mf.to_csv(MANIFEST_CSV, index=False)
    print(f"💾 New manifest: {MANIFEST_CSV} ({len(mf)} rows, {len(have)} already marked done)")
    return mf
def save_manifest(mf):
    mf.to_csv(MANIFEST_CSV, index=False)
# ====== Batch downloading and saving ======
def to_symbol(code4: str) -> str:
    return f"{int(code4):04d}.T"
def download_batch(codes):
    syms = [to_symbol(c) for c in codes]
    df = None
    try:
        df = yf.download(syms, period="10y", interval="1d", group_by="ticker", auto_adjust=False, threads=False)
    except Exception as e:
        log(f"[download] Batch failed ({len(syms)}): {e} → falling back to 5y")
        time.sleep(PAUSE_SEC + random.uniform(0, 1.5))
        try:
            df = yf.download(syms, period="5y", interval="1d", group_by="ticker", auto_adjust=False, threads=False)
        except Exception as e2:
            log(f"[download] 5y also failed; skipping this batch: {e2}")
            return None
    return df
def write_one_from_multi(df_multi, sym):
    try:
        sub = df_multi[sym].copy() if isinstance(df_multi.columns, pd.MultiIndex) else df_multi.copy()
        if sub is None or sub.empty:
            return False
        sub = sub.rename(columns={"Open":"open","High":"high","Low":"low","Close":"close","Volume":"volume"}).reset_index()
        sub["date"] = pd.to_datetime(sub["Date"], errors="coerce")
        sub = sub.dropna(subset=["date","open","high","low","close","volume"])
        sub = sub[["date","open","high","low","close","volume"]]
        if not len(sub):
            return False
        code4 = sym.replace(".T","")
        out = os.path.join(DATA_DIR, f"{code4}.T.csv")
        sub.to_csv(out, index=False)
        return True
    except Exception:
        return False
def resume_download_loop(mf):
    # If a file already exists on disk, mark that symbol done directly
    have = {f.split(".")[0] for f in os.listdir(DATA_DIR) if f.endswith(".T.csv")}
    mf.loc[mf["code"].isin(have), ["status","last_error","last_try"]] = ["done","","auto-detected"]
    save_manifest(mf)
    # Download list: pending/failed/skipped entries that have no file yet
    need = mf[mf["status"].isin(["pending","failed","skipped"]) & (~mf["code"].isin(have))]["code"].tolist()
    if not need:
        log("✅ Nothing to download: manifest complete or files already present")
        return
    total_batches = (len(need) + BATCH_SIZE - 1) // BATCH_SIZE
    for bi in range(0, len(need), BATCH_SIZE):
        batch_codes = need[bi:bi+BATCH_SIZE]
        tqdm.write(f"[batch {bi//BATCH_SIZE+1}/{total_batches}] downloading {len(batch_codes)} symbols…")
        df = download_batch(batch_codes)
        if df is None:
            # Whole batch failed → per-symbol fallback
            for c in batch_codes:
                sym = to_symbol(c)
                ok = False
                try:
                    d1 = safe_history(sym, START_DATE, END_DATE, "1d")
                    d1 = standardize_df(d1)
                    if d1 is not None and not d1.empty:
                        out = os.path.join(DATA_DIR, f"{c}.T.csv")
                        d1.to_csv(out, index=False)
                        ok = True
                except Exception as e:
                    mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["failed", str(e), "single-fallback"]
                if ok:
                    mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["done","", "single-fallback"]
            save_manifest(mf)
            time.sleep(PAUSE_SEC + random.uniform(0, 1.5))
            continue
        # Batch succeeded: write out the symbols that have data in it
        for c in batch_codes:
            sym = to_symbol(c)
            ok = write_one_from_multi(df, sym)
            if ok:
                mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["done", "", "batch"]
            else:
                # Try a per-symbol rescue
                try:
                    d1 = safe_history(sym, START_DATE, END_DATE, "1d")
                    d1 = standardize_df(d1)
                    if d1 is not None and not d1.empty:
                        out = os.path.join(DATA_DIR, f"{c}.T.csv")
                        d1.to_csv(out, index=False)
                        mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["done", "", "single-after-batch"]
                    else:
                        mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["failed", "empty_df", "single-after-batch"]
                except Exception as e:
                    mf.loc[mf["code"]==c, ["status","last_error","last_try"]] = ["failed", str(e), "single-after-batch"]
        save_manifest(mf)
        time.sleep(PAUSE_SEC + random.uniform(0, 1.5))
# ====== Simple validation ======
def quick_validate_samples(out_dir, sample_k=20):
    import glob
    files = glob.glob(os.path.join(out_dir, "*.csv"))
    if not files:
        return {"files": 0, "ok": 0, "bad": 0, "notes": "no files"}
    pick = random.sample(files, min(sample_k, len(files)))
    ok, bad = 0, 0
    for p in pick:
        try:
            d = pd.read_csv(p)
            if not set(["date","open","high","low","close","volume"]).issubset(d.columns):
                bad += 1; continue
            d["date"] = pd.to_datetime(d["date"], errors="coerce")
            d = d.dropna(subset=["date"])
            d = d.sort_values("date")
            if d.empty:
                bad += 1; continue
            # pd.Timestamp.now() is tz-naive, matching the parsed dates
            if (pd.Timestamp.now() - d["date"].iloc[-1]) > pd.Timedelta(days=60):
                bad += 1; continue
            if (d["volume"] > 0).mean() == 0:
                bad += 1; continue
            ok += 1
        except Exception:
            bad += 1
    return {"files": len(files), "ok": ok, "bad": bad, "notes": f"sample={len(pick)}"}
# ====== Main flow ======
def main():
    print("📁 Directories:")
    print(f"   BASE_DIR = {BASE_DIR}")
    print(f"   LIST_DIR = {LIST_DIR}")
    print(f"   {MARKET_CODE}/{DATA_SUBDIR} = {DATA_DIR}")
    print(f"   logs = {LOG_DIR}")
    print("\n🚀 Starting the Japan TSE download (with resume support)")
    # 1) List (reused or refreshed)
    rows_all = get_tse_list()
    if SAMPLE_LIMIT:
        rows_all = rows_all[:SAMPLE_LIMIT]
    log(f"🧾 Codes read: {len(rows_all)}")
    # 2) Pre-screen (reused or redone)
    ok_rows = get_prefilter_ok(rows_all)
    # 3) Create/load the manifest (pending/done/failed/skipped)
    mf = build_manifest(ok_rows)
    # 4) Resume: only fill in what is unfinished
    resume_download_loop(mf)
    # 5) Summary
    mf = pd.read_csv(MANIFEST_CSV)
    tot = len(mf)
    done = int((mf["status"]=="done").sum())
    failed = int((mf["status"]=="failed").sum())
    pending = int((mf["status"]=="pending").sum())
    skipped = int((mf["status"]=="skipped").sum())
    log(f"📊 Status summary: total={tot}, done={done}, failed={failed}, pending={pending}, skipped={skipped}")
    have = len([f for f in os.listdir(DATA_DIR) if f.endswith(".T.csv")])
    log(f"🎉 Finished\n   ✅ Data directory: {DATA_DIR}\n   📝 Log: {LOG_FILE}\n   📦 Files written: {have}")
    # 6) Sample validation
    chk = quick_validate_samples(DATA_DIR, sample_k=20)
    log(f"🔎 Sample validation: files={chk['files']}, ok={chk['ok']}, bad={chk['bad']} ({chk['notes']})")
    # 7) Save run parameters (for later comparison)
    with open(STATE_JSON, "w", encoding="utf-8") as f:
        json.dump({
            "ts": ts_tag,
            "start_date": START_DATE,
            "end_date": END_DATE,
            "batch_size": BATCH_SIZE,
            "pause_sec": PAUSE_SEC,
            "retry_sleep_sec": RETRY_SLEEP_SEC,
            "sample_limit": SAMPLE_LIMIT
        }, f, ensure_ascii=False, indent=2)
    print(f"💾 Parameter snapshot: {STATE_JSON}")
if __name__ == "__main__":
    main()
Screenshot: run in progress

Screenshot: completed run
Paste the code above into a Colab cell and run it. By default it creates the folders on Google Drive to hold the daily-bar files.

If copying the code into Colab introduces stray whitespace, the run fails with an error like:
File "<tokenize>", line 205 IndentationError: unindent does not match any outer indentation level
Select the offending whitespace, use find-and-replace to replace all occurrences with a normal space, and run again.
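A common culprit is invisible space characters picked up when copying from a web page. As an alternative to manual find-and-replace, a small hedged sketch of the same normalization (Colab's replace dialog works just as well):

```python
def normalize_pasted_code(text: str) -> str:
    """Replace invisible characters that commonly break pasted Python code."""
    # U+00A0 (no-break space) and U+3000 (ideographic space) look like
    # ordinary spaces but are different characters, so Python's tokenizer
    # reports IndentationError / tokenize errors on them.
    return text.replace("\u00a0", " ").replace("\u3000", " ")
```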

-------------------------------------------------------------------------------------------------
🧑‍🔬 AUTHOR'S STATUS AND INTENT — The author of this report is an independent, amateur data researcher and NOT a professional quantitative analyst or a licensed financial advisor. This work was completed in the author's personal free time for statistical research purposes.
📊 DATA SOURCE LIMITATION — All historical price data is sourced from free public providers (e.g., Yahoo Finance). While the author uses the V4.0 QA System to check for and minimize obvious errors, the author offers NO WARRANTY of 100% accuracy; data integrity is constrained by the free sources.
🚫 NO INVESTMENT ADVICE — The content, charts, and AI analysis here are for statistical research and educational inspiration only. They do NOT constitute personalized financial advice, investment recommendations, or a solicitation to buy or sell securities; all analysis merely describes historical statistical patterns.
⚠️ RISK & LIABILITY — Stock market investing involves significant risk. The reader must exercise their own judgment and bears all investment risk. The author (and the platform) assumes NO LIABILITY for any investment decisions made, or financial losses incurred, based on the information provided herein.











