Updated 2024/08/20 · Reading time: about 6 minutes

Flux.1: A Free, Open-Source Text-to-Image Model, with a Usage Guide on a 4060 Ti 16G

Flux.1 is a free, open-source text-to-image model.
Speaking of text-to-image, the recent breakthrough is that text inside AI-generated images is finally controllable; models used to produce a jumble of strange, illegible lettering.

The virtual machine used in this walkthrough


RAM: 16GB

GPU: 4060Ti 16G

OS: Ubuntu 22.04 LTS

Driver: 555.42.02

CUDA Version: 12.1


Installing ComfyUI

  1. git clone https://github.com/comfyanonymous/ComfyUI
  2. Put SD checkpoint files into models/checkpoints
  3. Put VAE files into models/vae
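The steps above can be sketched as a short shell session (the paths are ComfyUI's default layout; which checkpoint and VAE files you copy in is up to you):

```shell
# Fetch ComfyUI and prepare the default model folders.
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
# Checkpoint files (e.g. SD models) live under models/checkpoints,
# VAE files under models/vae; the folders ship with the repo,
# but -p makes this safe to re-run either way.
mkdir -p models/checkpoints models/vae
```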

Installing the packages in a Conda virtual environment

(base) sung@gpu:~/ComfyUI$ conda create -n flux
Channels:
- defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done

(base) sung@gpu:~/ComfyUI$ conda activate flux
(flux) sung@gpu:~/ComfyUI$ pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
(flux) sung@gpu:~/ComfyUI$ pip install -r requirements.txt
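As a quick sanity check (not part of the original log), you can confirm the freshly installed PyTorch actually sees the CUDA device before going further:

```shell
# Should print True on a working cu121 install with a visible GPU.
python3 -c "import torch; print(torch.cuda.is_available())"
# And the device name, e.g. the RTX 4060 Ti used here.
python3 -c "import torch; print(torch.cuda.get_device_name(0))"
```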

Adding a Chinese translation to the ComfyUI interface

  1. cd ComfyUI/custom_nodes
  2. git clone https://github.com/AIGODLIKE/AIGODLIKE-ComfyUI-Translation

Downloading the Flux model

(flux) sung@gpu:~$ cd ComfyUI/models/unet
(flux) sung@gpu:~/ComfyUI/models/unet$ wget https://huggingface.co/Kijai/flux-fp8/resolve/main/flux1-dev-fp8.safetensors

Downloading the CLIP models (text encoders)

comfyanonymous/flux_text_encoders at main (huggingface.co)

  • The FP16 version requires 32G of VRAM
  • We use the FP8 version here

(flux) sung@gpu:~/ComfyUI/models/clip$ wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors
(flux) sung@gpu:~/ComfyUI/models/clip$ wget https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors

Downloading the VAE model

(flux) sung@gpu:~$ cd ComfyUI/models/vae
(flux) sung@gpu:~/ComfyUI/models/vae$ wget https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
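At this point all four files the guide downloads should be in place. A quick check from the ComfyUI root (filenames taken from the wget commands above):

```shell
# Lists the UNet, the two text encoders, and the VAE with their sizes;
# a missing file shows up as an ls error.
ls -lh models/unet/flux1-dev-fp8.safetensors \
       models/clip/t5xxl_fp8_e4m3fn.safetensors \
       models/clip/clip_l.safetensors \
       models/vae/ae.safetensors
```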

Starting ComfyUI

(flux) sung@gpu:~/ComfyUI$ python3 main.py --listen
Total VRAM 15985 MB, total RAM 15952 MB
pytorch version: 2.2.2+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
Using pytorch cross attention
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: /home/sung/ComfyUI/web

Import times for custom nodes:
0.0 seconds: /home/sung/ComfyUI/custom_nodes/websocket_image_save.py

Starting server

To see the GUI go to: http://0.0.0.0:8188
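Because --listen binds the server to 0.0.0.0, the GUI is also reachable from other machines on the network. A simple reachability check (192.168.1.100 is a placeholder for the VM's actual address):

```shell
# Expect HTTP 200 when the ComfyUI web UI is up and reachable.
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.100:8188/
```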

Running a sample workflow

Sample workflow JSON files can be downloaded from Flux.1 ComfyUI Guide, workflow and example – ComfyUI-WIKI, including the "Using Flux under 12GB VRAM" workflow.

  1. After downloading, load (Load) the workflow JSON in ComfyUI
  2. Adjust other parameters if needed
  3. Click Queue Prompt to generate the image
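The Queue Prompt step can also be scripted, since ComfyUI exposes an HTTP API: export the workflow with "Save (API Format)" and POST it to the /prompt endpoint. The filename workflow_api.json below is a hypothetical example:

```shell
# POST the API-format workflow to the running server; ComfyUI
# responds with a JSON body containing the queued prompt_id.
curl -s -X POST http://127.0.0.1:8188/prompt \
     -H "Content-Type: application/json" \
     -d "{\"prompt\": $(cat workflow_api.json)}"
```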

Problems encountered

The model dies halfway through loading?

Usually the VRAM is not enough to run the model, or the generated image is too large, pushing VRAM usage past the limit.
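Two things that can help diagnose and work around this on a 16 GB card (both flags are standard ComfyUI CLI options; the 2-second refresh interval is just an example):

```shell
# Watch real VRAM usage while a generation runs, refreshing every 2 s.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 2
# Or restart ComfyUI in its low-VRAM mode, which offloads more
# aggressively to system RAM at some cost in speed.
python3 main.py --listen --lowvram
```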

How long does the first load take?

What can I adjust if the resolution is not good enough?

Increasing the Steps value can make the image clearer.

References

Flux Examples | ComfyUI_examples (comfyanonymous.github.io)

Flux.1 ComfyUI Guide, workflow and example – ComfyUI-WIKI
