Flux.1 is a free, open-source text-to-image model.
Speaking of text-to-image, the big recent breakthrough is that text inside AI-generated images is finally controllable; it used to come out as a jumble of strange, garbled characters.
The virtual machine used for this walkthrough

RAM: 16GB
GPU: RTX 4060 Ti 16GB
OS: Ubuntu 22.04 LTS
Driver: 555.42.02
CUDA Version: 12.1
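To check the driver and CUDA versions on your own machine, nvidia-smi reports both in its header:
nvidia-smi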
Installing ComfyUI
- git clone https://github.com/comfyanonymous/ComfyUI
- Put SD checkpoint files into models/checkpoints
- Put VAE files into models/vae (see the sketch after this list)
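For reference, a minimal sketch of these steps; the checkpoint and VAE paths below are placeholders for whatever files you already have:
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
cp /path/to/your_sd_checkpoint.safetensors models/checkpoints/
cp /path/to/your_vae.safetensors models/vae/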
Installing packages in a Conda virtual environment
(base) sung@gpu:~/ComfyUI$ conda create -n flux
Channels:
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done
(base) sung@gpu:~/ComfyUI$ conda activate flux
(flux) sung@gpu:~/ComfyUI$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
(flux) sung@gpu:~/ComfyUI$ pip install -r requirements.txt
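Note that conda create -n flux as run above creates an empty environment; if pip is not found inside it, creating the environment with an explicit Python interpreter (for example conda create -n flux python=3.10) is a common fix. Either way, it is worth confirming that PyTorch can actually see the GPU before moving on; a quick check, assuming the flux environment is active:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"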
Chinese localization for ComfyUI
- cd ComfyUI/custom_nodes
- git clone https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Translation
Downloading the Flux model
- Official FP16 version, requires 24GB VRAM: black-forest-labs/FLUX.1-dev at main (huggingface.co)
- FP8 version used here, requires 12GB VRAM: Kijai/flux-fp8 at main (huggingface.co)
(flux) sung@gpu:~$ cd ComfyUI/models/unet
(flux) sung@gpu:~/ComfyUI/models/unet$ wget https://huggingface.co/Kijai/flux-fp8/resolve/main/flux1-dev-fp8.safetensors
Downloading the CLIP models (text encoders)
comfyanonymous/flux_text_encoders at main (huggingface.co)
- FP16 version, requires 32GB VRAM
- FP8 version, which is what we use here
(flux) sung@gpu:~/ComfyUI/models/clip$ wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors
(flux) sung@gpu:~/ComfyUI/models/clip$ wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
Downloading the VAE model
(flux) sung@gpu:~$ cd ComfyUI/models/vae
(flux) sung@gpu:~/ComfyUI/models/vae$ wget https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
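At this point the downloaded files should sit roughly in the following locations (layout shown for reference):
ComfyUI/models/unet/flux1-dev-fp8.safetensors
ComfyUI/models/clip/t5xxl_fp8_e4m3fn.safetensors
ComfyUI/models/clip/clip_l.safetensors
ComfyUI/models/vae/ae.safetensors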
Starting ComfyUI
(flux) sung@gpu:~/ComfyUI$ python3 main.py --listen
Total VRAM 15985 MB, total RAM 15952 MB
pytorch version: 2.2.2+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Using pytorch cross attention
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: /home/sung/ComfyUI/web
Import times for custom nodes:
0.0 seconds: /home/sung/ComfyUI/custom_nodes/websocket_image_save.py
Starting server
To see the GUI go to: http://0.0.0.0:8188
Running the sample workflow
A sample workflow JSON can be downloaded from "Flux.1 ComfyUI Guide, workflow and example – ComfyUI-WIKI".
Here we use the "Flux under 12GB VRAM" workflow.

- After downloading it, load it in ComfyUI (Load)
- Adjust other parameters if needed
- Click Queue Prompt to generate the image (a command-line alternative is sketched after this list)
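If you prefer to queue a generation from the command line instead of the GUI, ComfyUI also exposes an HTTP endpoint at /prompt. A minimal sketch, assuming workflow_api.json is the workflow exported in API format and the server is running on the same machine:
curl -X POST http://127.0.0.1:8188/prompt \
     -H "Content-Type: application/json" \
     -d "{\"prompt\": $(cat workflow_api.json)}"
The server responds with a prompt_id and runs the job in the background; the generated image is written to the output folder as usual.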
Problems encountered
The model dies partway through loading?
This is usually because there is not enough VRAM to run the model, or the requested image is too large and pushes VRAM usage over the limit.
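One thing worth trying in that case (a sketch, not an exhaustive fix) is restarting ComfyUI with its low-VRAM option, and/or generating at a smaller resolution:
python3 main.py --listen --lowvram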
How long does the first load take?

What can be adjusted if the resolution is not high enough?


Increasing the number of Steps can make the image clearer.
