SpaceMonkey8-cloud committed on
Commit 2b78234 · 1 Parent(s): 332c46a

Initial commit: Animated Loop Generator

Files changed (4)
  1. .gitignore +16 -0
  2. README.md +70 -7
  3. app.py +321 -0
  4. requirements.txt +12 -0
.gitignore ADDED
@@ -0,0 +1,14 @@
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ *.egg-info/
+ dist/
+ build/
+ *.mp4
+ *.avi
+ *.mov
+ flagged/
+ gradio_cached_examples/
+ .DS_Store
README.md CHANGED
@@ -1,14 +1,77 @@
  ---
- title: MONK3YSPAC333
- emoji: 🐢
- colorFrom: pink
- colorTo: gray
  sdk: gradio
- sdk_version: 5.49.1
  app_file: app.py
  pinned: false
  license: apache-2.0
- short_description: VIDEO GENERATOR
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Animated Loop Generator
+ emoji: 🎬
+ colorFrom: blue
+ colorTo: purple
  sdk: gradio
+ sdk_version: 4.13.0
  app_file: app.py
  pinned: false
  license: apache-2.0
  ---

+ # 🎬 Animated Loop Generator
+
+ Generate stunning animated video loops from static images using **Stable Video Diffusion**!
+
+ ## 🚀 Features
+
+ - 📸 **Image to Video**: Transform any image into an animated video
+ - 🔄 **Perfect Loops**: Create seamless, repeating animations
+ - ⚙️ **Customizable**: Control motion strength, frame count, and FPS
+ - 🎨 **High Quality**: Powered by Stability AI's SVD model
+ - 💨 **Fast**: Optimized for quick generation
+
+ ## 🎯 How to Use
+
+ 1. **Upload an Image**: Choose any image (landscape, portrait, abstract)
+ 2. **Adjust Parameters**:
+    - **Frames**: 14-25 (more = longer video)
+    - **Motion Strength**: 1-255 (higher = more movement)
+    - **FPS**: 4-30 (video frame rate)
+    - **Perfect Loop**: Enable for seamless looping
+ 3. **Generate**: Click the button and wait ~30-60 seconds
+ 4. **Download**: Save your animated loop!
+
+ ## 🎨 Best Results
+
+ - Use images with clear subjects and potential for motion
+ - Landscapes and nature scenes work great
+ - Start with the default settings (25 frames, motion strength 127)
+ - Enable "Perfect Loop" for repeating animations (see the sketch below)
+
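+ Under the hood, "Perfect Loop" simply plays the generated frames forward and then backward again (a boomerang), so the clip ends exactly where it started. A minimal sketch of the idea, assuming `frames` is the list of PIL frames returned by the pipeline (the helper name is illustrative):
+
+ ```python
+ def make_boomerang(frames):
+     # Append the frames in reverse, skipping the last and first frame
+     # so they are not duplicated at the seams.
+     return frames + frames[-2:0:-1]
+ ```
+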
+ ## 🔧 Technical Details
+
+ - **Model**: Stable Video Diffusion XT (stabilityai/stable-video-diffusion-img2vid-xt)
+ - **Resolution**: 1024x576 (optimal)
+ - **Generation Time**: ~30-60 seconds (depends on hardware)
+ - **Hardware**: GPU T4 or better recommended
+
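+ For reference, the generation step boils down to a single Stable Video Diffusion call through 🤗 Diffusers. A rough standalone sketch (file names and parameter values are illustrative):
+
+ ```python
+ import torch
+ from diffusers import StableVideoDiffusionPipeline
+ from diffusers.utils import load_image, export_to_video
+
+ pipe = StableVideoDiffusionPipeline.from_pretrained(
+     "stabilityai/stable-video-diffusion-img2vid-xt",
+     torch_dtype=torch.float16, variant="fp16",
+ )
+ pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable on a T4
+
+ image = load_image("input.jpg").resize((1024, 576))  # the model's optimal size
+ frames = pipe(image, num_frames=25, motion_bucket_id=127,
+               fps=6, decode_chunk_size=4).frames[0]
+ export_to_video(frames, "loop.mp4", fps=6)
+ ```
+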
+ ## 💡 Tips
+
+ - First generation is slower (model loading)
+ - Subsequent generations are faster
+ - Higher inference steps = better quality (but slower)
+ - Motion strength 127 is a good starting point
+
+ ## 📚 Examples
+
+ Try these types of images:
+ - 🌄 Landscapes with clouds or water
+ - 🎨 Abstract art with flowing patterns
+ - 🌸 Nature close-ups with flowers or leaves
+ - 🏙️ Cityscapes with lights
+
+ ## 🤝 Credits
+
+ - **Model**: Stability AI - Stable Video Diffusion
+ - **Framework**: HuggingFace Diffusers
+ - **UI**: Gradio
+
+ ## 📄 License
+
+ Apache 2.0
+
+ ---
+
+ **Made with ❤️ for the AI community**
app.py ADDED
@@ -0,0 +1,321 @@
+ """
+ Animated Loop Generator - HuggingFace Space
+ Generates animated video loops from images using Stable Video Diffusion
+ """
+
+ import gradio as gr
+ import torch
+ from diffusers import StableVideoDiffusionPipeline
+ from diffusers.utils import export_to_video
+ from PIL import Image
+ import numpy as np
+ import os
+ import tempfile
+
+ # Configuration
+ DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
+ MODEL_ID = "stabilityai/stable-video-diffusion-img2vid-xt"
+
+ # Global pipeline, loaded lazily on first use
+ print(f"🔧 Loading pipeline on {DEVICE}...")
+ pipe = None
+
+ def load_pipeline():
+     """Load the SVD pipeline once and cache it in the global `pipe`."""
+     global pipe
+
+     if pipe is None:
+         try:
+             pipe = StableVideoDiffusionPipeline.from_pretrained(
+                 MODEL_ID,
+                 torch_dtype=torch.float16 if DEVICE == "cuda" else torch.float32,
+                 variant="fp16" if DEVICE == "cuda" else None,
+             )
+
+             if DEVICE == "cuda":
+                 # CPU offload manages device placement itself,
+                 # so no separate .to("cuda") call on this path.
+                 pipe.enable_model_cpu_offload()
+                 # VAE slicing is only enabled if this pipeline exposes it.
+                 if hasattr(pipe, "enable_vae_slicing"):
+                     pipe.enable_vae_slicing()
+             else:
+                 pipe.to(DEVICE)
+
+             print("✅ Pipeline loaded successfully!")
+         except Exception as e:
+             print(f"❌ Error loading pipeline: {e}")
+             raise
+
+     return pipe
+
+
+ def preprocess_image(image):
+     """Preprocess the input image for SVD."""
+     if image is None:
+         raise ValueError("No image provided")
+
+     # Convert to PIL if needed
+     if isinstance(image, np.ndarray):
+         image = Image.fromarray(image)
+
+     # Convert to RGB
+     if image.mode != "RGB":
+         image = image.convert("RGB")
+
+     # Resize to the model's optimal resolution
+     image = image.resize((1024, 576), Image.LANCZOS)
+
+     return image
+
+
+ def create_loop_video(frames):
+     """
+     Create a perfect loop by appending the frames in reverse,
+     skipping the first and last frame so they are not duplicated.
+     """
+     loop_frames = frames + frames[-2:0:-1]
+     return loop_frames
+
+
+ def generate_animated_loop(
+     image,
+     num_frames=25,
+     motion_strength=127,
+     fps=6,
+     make_loop=True,
+     num_inference_steps=25,
+     progress=gr.Progress()
+ ):
+     """
+     Generate an animated video loop from an image.
+
+     Args:
+         image: Input image (PIL or numpy array)
+         num_frames: Number of frames (14-25)
+         motion_strength: Motion intensity (1-255)
+         fps: Frames per second
+         make_loop: If True, create a perfect loop
+         num_inference_steps: Inference steps (quality)
+         progress: Gradio progress tracker
+     """
+
+     try:
+         progress(0, desc="🔧 Initializing...")
+
+         # Gradio sliders may deliver floats; the pipeline expects ints
+         num_frames = int(num_frames)
+         motion_strength = int(motion_strength)
+         fps = int(fps)
+         num_inference_steps = int(num_inference_steps)
+
+         # Load the pipeline
+         pipeline = load_pipeline()
+
+         progress(0.1, desc="🖼️ Processing image...")
+
+         # Preprocess the image
+         processed_image = preprocess_image(image)
+
+         progress(0.2, desc="🎬 Generating video frames...")
+
+         # Generate the video
+         with torch.no_grad():
+             output = pipeline(
+                 processed_image,
+                 height=576,
+                 width=1024,
+                 num_frames=num_frames,
+                 motion_bucket_id=motion_strength,
+                 fps=fps,
+                 decode_chunk_size=4,
+                 num_inference_steps=num_inference_steps,
+             )
+
+         frames = output.frames[0]
+
+         progress(0.8, desc="🔄 Creating loop...")
+
+         # Create the loop if requested
+         if make_loop:
+             frames = create_loop_video(frames)
+
+         progress(0.9, desc="💾 Saving video...")
+
+         # Save the video
+         output_path = tempfile.NamedTemporaryFile(
+             suffix=".mp4",
+             delete=False
+         ).name
+
+         export_to_video(frames, output_path, fps=fps)
+
+         progress(1.0, desc="✅ Complete!")
+
+         # Summary shown next to the video
+         info = f"""
+ ✅ **Video generated successfully!**
+
+ 📊 **Details:**
+ - Total frames: {len(frames)}
+ - FPS: {fps}
+ - Duration: ~{len(frames)/fps:.1f} seconds
+ - Loop: {'Yes' if make_loop else 'No'}
+ - Motion strength: {motion_strength}
+ """
+
+         return output_path, info
+
+     except Exception as e:
+         error_msg = f"❌ **Error:** {str(e)}"
+         print(f"Error in generation: {e}")
+         return None, error_msg
+
+
+ def create_demo():
+     """Build the Gradio interface."""
+
+     with gr.Blocks(
+         title="🎬 Animated Loop Generator",
+         theme=gr.themes.Soft()
+     ) as demo:
+
+         gr.Markdown("""
+ # 🎬 Animated Loop Generator
+
+ Generate animated video loops from images using **Stable Video Diffusion**
+
+ ### How to use:
+ 1. 📤 Upload an image
+ 2. ⚙️ Adjust the parameters
+ 3. 🎬 Click "Generate Loop"
+ 4. 📥 Download your video!
+ """)
+
+         with gr.Row():
+             # Left column - inputs
+             with gr.Column(scale=1):
+                 image_input = gr.Image(
+                     label="📸 Input Image",
+                     type="pil",
+                     sources=["upload", "webcam", "clipboard"]
+                 )
+
+                 with gr.Accordion("⚙️ Advanced Settings", open=True):
+                     num_frames = gr.Slider(
+                         minimum=14,
+                         maximum=25,
+                         value=25,
+                         step=1,
+                         label="🎞️ Number of Frames",
+                         info="More frames = longer video"
+                     )
+
+                     motion_strength = gr.Slider(
+                         minimum=1,
+                         maximum=255,
+                         value=127,
+                         step=1,
+                         label="💨 Motion Strength",
+                         info="Higher = more movement"
+                     )
+
+                     fps = gr.Slider(
+                         minimum=4,
+                         maximum=30,
+                         value=6,
+                         step=1,
+                         label="🎥 FPS (Frames per Second)",
+                         info="Video frame rate"
+                     )
+
+                     make_loop = gr.Checkbox(
+                         value=True,
+                         label="🔄 Create Perfect Loop",
+                         info="Creates a seamless, repeating loop"
+                     )
+
+                     num_inference_steps = gr.Slider(
+                         minimum=10,
+                         maximum=50,
+                         value=25,
+                         step=5,
+                         label="🎨 Quality (Inference Steps)",
+                         info="More steps = better quality (but slower)"
+                     )
+
+                 generate_btn = gr.Button(
+                     "🎬 Generate Animated Loop",
+                     variant="primary",
+                     size="lg"
+                 )
+
+             # Right column - output
+             with gr.Column(scale=1):
+                 video_output = gr.Video(
+                     label="🎬 Generated Loop",
+                     autoplay=True,
+                     loop=True
+                 )
+
+                 info_output = gr.Markdown(
+                     label="ℹ️ Info",
+                     value="Upload an image and generate your loop!"
+                 )
+
+         # Examples (the images are expected to exist in an examples/ folder)
+         gr.Markdown("### 🎨 Examples")
+         gr.Examples(
+             examples=[
+                 ["examples/landscape.jpg", 25, 127, 6, True],
+                 ["examples/portrait.jpg", 20, 80, 6, True],
+                 ["examples/abstract.jpg", 25, 180, 8, True],
+             ],
+             inputs=[image_input, num_frames, motion_strength, fps, make_loop],
+             outputs=[video_output, info_output],
+             fn=generate_animated_loop,
+             cache_examples=False,
+         )
+
+         # Event handler
+         generate_btn.click(
+             fn=generate_animated_loop,
+             inputs=[
+                 image_input,
+                 num_frames,
+                 motion_strength,
+                 fps,
+                 make_loop,
+                 num_inference_steps
+             ],
+             outputs=[video_output, info_output],
+         )
+
+         # Info footer
+         gr.Markdown("""
+ ---
+ ### 📚 Tips for Better Results:
+
+ - 🖼️ **Ideal images**: landscapes, nature, subjects with clear potential for motion
+ - 💨 **Motion strength**: start at 127, increase for more movement
+ - 🎞️ **Frames**: 25 for longer loops, 14-18 for quick loops
+ - 🔄 **Loop**: enable for perfectly cyclic animations
+
+ ### 🔧 Powered by:
+ - **Model**: Stability AI - Stable Video Diffusion XT
+ - **Framework**: Diffusers + Gradio
+ - **Hardware**: GPU T4 (upgrade for more speed)
+
+ ### 💡 Notes:
+ - The first generation is slower (model loading)
+ - Subsequent generations are faster
+ - Best quality with 1024x576 images
+ """)
+
+     return demo
+
+
+ # Load the pipeline at startup
+ try:
+     load_pipeline()
+ except Exception as e:
+     print(f"⚠️ Pipeline will be loaded on first generation: {e}")
+
+ # Launch the app
+ if __name__ == "__main__":
+     demo = create_demo()
+     demo.queue(max_size=20)
+     demo.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=False
+     )
requirements.txt ADDED
@@ -0,0 +1,12 @@
+ diffusers==0.25.1
+ transformers==4.36.2
+ accelerate==0.25.0
+ torch==2.1.2
+ torchvision==0.16.2
+ gradio==4.13.0
+ pillow==10.2.0
+ numpy==1.26.3
+ imageio==2.33.1
+ imageio-ffmpeg==0.4.9
+ safetensors==0.4.1
+ huggingface-hub==0.20.3