Instructions to use mingyi456/Z-Image-Turbo-Art-DF11-ComfyUI with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusion Single File
How to use mingyi456/Z-Image-Turbo-Art-DF11-ComfyUI with Diffusion Single File:
# No code snippets available yet for this library. # To use this model, check the repository files and the library's documentation. # Want to help? PRs adding snippets are welcome at: # https://github.com/huggingface/huggingface.js
- Notebooks
- Google Colab
- Kaggle
For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11
Feel free to request for other models for compression as well, although models whose architecture I am unfamiliar with might be slightly tricky for me.
How to Use
ComfyUI
Install my own fork of the DF11 ComfyUI custom node: https://github.com/mingyi456/ComfyUI-DFloat11-Extended. After installing the DF11 custom node, simply replace the "Load Diffusion Model" node of an existing workflow with the "Load Diffusion Model" node. If you run into any issues, feel free to leave a comment.
diffusers
Refer to this model instead.
Compression Details
This is the pattern_dict for compressing Z-Image-based models in ComfyUI:
pattern_dict_comfyui = {
r"noise_refiner\.\d+": (
"attention.qkv",
"attention.out",
"feed_forward.w1",
"feed_forward.w2",
"feed_forward.w3",
"adaLN_modulation.0"
),
r"context_refiner\.\d+": (
"attention.qkv",
"attention.out",
"feed_forward.w1",
"feed_forward.w2",
"feed_forward.w3",
),
r"layers\.\d+": (
"attention.qkv",
"attention.out",
"feed_forward.w1",
"feed_forward.w2",
"feed_forward.w3",
"adaLN_modulation.0"
)
}
- Downloads last month
- 33