Instructions to use Tongyi-MAI/Z-Image-Turbo with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use Tongyi-MAI/Z-Image-Turbo with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference
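The snippet above hard-codes `device_map="cuda"`. As a small hedged sketch (the helper name `pick_device` is my own, not from the model card), you can select the best available backend at runtime instead of editing the string by hand:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string: cuda, mps, or cpu."""
    if torch.cuda.is_available():
        return "cuda"
    # torch.backends.mps exists on recent PyTorch builds; guard for older ones.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

You could then pass `device_map=pick_device()` to `from_pretrained`, so the same script runs on an NVIDIA GPU, an Apple Silicon Mac, or a CPU-only machine.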
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
Has anyone run this model on a mobile device?
#38
by supershui - opened
As a beginner, I want to run this model on my phone. Is there any guidance?
Neither stable-diffusion.cpp nor MNN currently offers good accelerated inference support for this model. Unless you know ONNX well enough to do the conversion yourself, you will have to wait a while before an acceleration framework that supports this model becomes available.
It would be slow as hell, yes?