gg-hf-g
Activity Feed

danielhanchen
posted an update about 14 hours ago
Unsloth is now one of the top 10 most followed organizations on Hugging Face. 🤗🦥

Thanks so much for all the support!
Our HF page:
unsloth
mlabonne
posted an update 2 days ago
Big update to llm-datasets, my curated list of datasets and tools for post-training LLMs.

> Added many new datasets
> New "thinking" column
> Refreshed recommended tools

Thanks to everyone who told me they used it for their research at ICLR, you motivated this update!
danielhanchen
posted an update 29 days ago
A new way to use Unsloth.

Coming soon...
danielhanchen
posted an update about 1 month ago
You don't need to set LLM parameters anymore! 🚀

llama.cpp now uses only the context length and compute your local setup needs. Unsloth also auto-applies the correct settings for each model.

Try in Unsloth Studio - now with precompiled llama.cpp binaries.

GitHub: https://github.com/unslothai/unsloth
danielhanchen
posted an update about 1 month ago
Introducing Unsloth Studio ✨
A new open-source web UI to train and run LLMs.

• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF

GitHub: https://github.com/unslothai/unsloth
Blog and Guide: https://unsloth.ai/docs/new/studio

Available now on Hugging Face, NVIDIA, Docker and Colab.
danielhanchen
posted an update about 2 months ago
We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:

• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work

Blog: https://unsloth.ai/blog/rl-environments
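GRPO's core idea is easy to sketch: instead of a learned value baseline, each sampled completion's reward is normalized against the other completions in its own sampling group. A minimal illustration (the function name and the epsilon are my own, not from the blog):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each reward against its group.

    advantage_i = (r_i - mean(group)) / (std(group) + eps)
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    eps = 1e-8  # avoid division by zero when all rewards are equal
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions sampled for one prompt, scored by a verifiable reward
# (e.g. 1.0 if the answer checks out, 0.0 otherwise):
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))
```

With verifiable rewards (RLVR), the reward function here would be a programmatic check, which is what makes the group baseline cheap to compute.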
alvarobartt
posted an update about 2 months ago
Learn how to deploy Microsoft Research VibeVoice ASR on Microsoft Azure Foundry with Hugging Face to generate rich audio transcriptions with Who, When, and What! 💥

> 🕒 60-minute single-pass processing, no chunking or stitching
> 👤 Customized hotwords to guide recognition on domain-specific content
> 📝 Rich transcription: joint ASR + diarization + timestamping in one pass
> 🌍 50+ languages with automatic detection and code-switching support
> 🤗 Deployed on Microsoft Foundry via an OpenAI-compatible Chat Completions API

https://huggingface.co/docs/microsoft-azure/foundry/examples/deploy-vibevoice-asr
danielhanchen
posted an update 2 months ago
100,000+ models trained with Unsloth have now been open-sourced on 🤗 Hugging Face! 🦥

Here are the most popular ones you can run locally:
1. TeichAI - GLM-4.7-Flash distilled from Claude 4.5 Opus (high)
2. Zed - Qwen Coder 7B fine-tuned for stronger coding
3. DavidAU - Llama-3.3-8B distilled from Claude 4.5 Opus (high)
4. huihui - gpt-oss made "abliterated"

Links to models:
1. TeichAI: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
2. Zed: zed-industries/zeta
3. DavidAU: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
4. huihui: huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

See all the 100K latest models fine-tuned with Unsloth here: https://huggingface.co/models?other=u
danielhanchen
posted an update 3 months ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
alvarobartt
posted an update 3 months ago
💥 hf-mem v0.4.1 now also estimates KV cache memory requirements for any context length and batch size with the --experimental flag!

uvx hf-mem --model-id ... --experimental will automatically pull the required information from the Hugging Face Hub to include the KV cache estimation, when applicable.

💡 Alternatively, you can also set the --max-model-len, --batch-size and --kv-cache-dtype arguments (à la vLLM) manually if preferred.
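The estimate behind a KV-cache calculator like this is easy to reproduce by hand. A back-of-envelope sketch (my own function name and example shapes, not hf-mem's actual implementation):

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch_size, dtype_bytes=2):
    """Estimate KV cache size: keys and values (hence the factor 2)
    stored for every layer, KV head, and token position."""
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * dtype_bytes)

# Llama-3.1-8B-like shapes: 32 layers, 8 KV heads (GQA), head_dim 128,
# an 8192-token context, batch size 1, fp16/bf16 cache (2 bytes/value):
size = kv_cache_bytes(32, 8, 128, 8192, 1, 2)
print(f"{size / 2**30:.2f} GiB")  # 1.00 GiB
```

Note how grouped-query attention (8 KV heads instead of 32 query heads) cuts the cache to a quarter of what a full multi-head cache would need.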
danielhanchen
posted an update 3 months ago
You can now run Kimi K2.5 locally! 🔥

We shrank the 1T model to 240GB (-60%) via Dynamic 1-bit.
Get >40 tok/s on 242GB of RAM/VRAM, or use 622GB for near-full precision.

GGUF: unsloth/Kimi-K2.5-GGUF

Guide: https://unsloth.ai/docs/models/kimi-k2.5
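The "Dynamic 1-bit" label checks out with back-of-envelope arithmetic: 240GB spread over roughly 1T parameters is about 1.9 bits per parameter on average, which is what you'd expect when some layers stay at higher precision while others are pushed toward 1 bit. A sketch, assuming a round 1e12 parameters and decimal gigabytes (my numbers, not Unsloth's exact figures):

```python
params = 1.0e12          # ~1T parameters (assumed round number)
quantized_gb = 240       # size of the Dynamic 1-bit GGUF from the post

# Average bits per parameter after quantization:
avg_bits = quantized_gb * 1e9 * 8 / params
print(f"{avg_bits:.2f} bits/param")  # 1.92 bits/param

# For comparison, a bf16 checkpoint (2 bytes/param) would need:
bf16_gb = params * 2 / 1e9
print(f"{bf16_gb:.0f} GB at bf16")  # 2000 GB at bf16
```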