Z-Image-Turbo-Fun-Controlnet-Union-2.0
Model Features
- This ControlNet is attached to 15 layer blocks and 2 refiner layer blocks.
- The model was trained from scratch for 70,000 steps on a dataset of 1 million high-quality images covering both general and human-centric content. Training was performed at 1328 resolution using BFloat16 precision, with a batch size of 64, a learning rate of 2e-5, and a text dropout ratio of 0.10.
- It supports multiple control conditions, including Canny, HED, Depth, Pose, and MLSD, and can be used like a standard ControlNet (a preprocessing sketch follows this list).
- We found that, at a given control strength, the number of inference steps noticeably affects the realism and clarity of the results. For strength and step combinations, see Scale Test Results.
- You can adjust control_context_scale for stronger control and better detail preservation. For better stability, we highly recommend using a detailed prompt. The optimal range for control_context_scale is from 0.65 to 0.90.
- Note on Steps: As you increase the control strength (higher control_context_scale values), it's recommended to appropriately increase the number of inference steps to achieve better results and maintain generation quality. This is likely because the control model has not been distilled.
- Inpainting mode is also supported.
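As an illustration of preparing a control image for the Canny condition, here is a minimal sketch using OpenCV. The thresholds and file names are illustrative assumptions; the preprocessors actually used by the VideoX-Fun pipeline may differ.

```python
# Hedged sketch: build a Canny control image with OpenCV.
# Thresholds and paths are illustrative, not the official training settings.
import cv2
import numpy as np
from PIL import Image

source = cv2.imread("input.jpg")                     # BGR uint8 image
gray = cv2.cvtColor(source, cv2.COLOR_BGR2GRAY)      # Canny expects a single channel
edges = cv2.Canny(gray, 100, 200)                    # lower/upper hysteresis thresholds
control = np.stack([edges] * 3, axis=-1)             # replicate to 3 channels
Image.fromarray(control).save("canny_control.png")   # use this as the control image
```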
TODO
- Train on better data.
Results
Condition / output image pairs are shown for: Pose + Inpaint, Pose (three examples), Canny, HED, and Depth. (Images omitted here.)
Inference
Go to the VideoX-Fun repository for more details.
Please clone the VideoX-Fun repository and create the required directories:
```bash
# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git
# Enter VideoX-Fun's directory
cd VideoX-Fun
# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model
```
Then download the weights into models/Diffusion_Transformer and models/Personalized_Model.
```
📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 Z-Image-Turbo/
└── 📂 Personalized_Model/
    └── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors
```
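A minimal sketch of fetching the weights into this layout with `huggingface_hub` is shown below. The repo IDs are placeholders, not real repository names; substitute the Hugging Face repositories that actually host Z-Image-Turbo and this ControlNet.

```python
# Hedged sketch: download weights into the directory layout above.
# The repo IDs below are placeholders; replace them with the real repos.
from huggingface_hub import snapshot_download, hf_hub_download

snapshot_download(
    repo_id="<base-model-repo>",  # placeholder: repo hosting Z-Image-Turbo
    local_dir="models/Diffusion_Transformer/Z-Image-Turbo",
)
hf_hub_download(
    repo_id="<controlnet-repo>",  # placeholder: repo hosting this ControlNet
    filename="Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors",
    local_dir="models/Personalized_Model",
)
```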
Then run examples/z_image_fun/predict_t2i_control_2.0.py (controlled generation) or examples/z_image_fun/predict_i2i_inpaint_2.0.py (inpainting).
Scale Test Results
The table below shows the generation results under different combinations of Diffusion steps and Control Scale strength:
Parameter Description:
- Diffusion Steps: number of iteration steps for the diffusion model (9, 10, 20, 30, 40)
- Control Scale: control strength coefficient (0.65 to 1.0)
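A minimal sketch of enumerating this grid is shown below. The `run_inference` helper is a placeholder so the loop is runnable as-is; hook it up to the actual VideoX-Fun predict script, and note that the sampled Control Scale values are assumed points within the stated range, not necessarily those used in the test.

```python
# Hedged sketch: sweep the (steps, control scale) grid from the scale test.
from itertools import product

def run_inference(num_inference_steps: int, control_context_scale: float) -> None:
    # Placeholder: call into examples/z_image_fun/predict_t2i_control_2.0.py here.
    print(f"steps={num_inference_steps}, control_context_scale={control_context_scale}")

steps_grid = [9, 10, 20, 30, 40]             # values listed in the parameter description
scale_grid = [0.65, 0.70, 0.80, 0.90, 1.00]  # assumed sample points within 0.65-1.0

for steps, scale in product(steps_grid, scale_grid):
    run_inference(steps, scale)
```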