Ready to try some AI models? The Hugging Face Hub hosts many open-source models you can use right now — no setup required.
What makes this special? You get access to the same models that power many AI applications, completely free. Whether you’re a curious beginner or an experienced developer, you can experiment with text generation, image creation, translation, and more.
New to AI models? Think of them as specialized tools: some write text, some create images, some translate.
Let’s try a text model (e.g., openai/gpt-oss-120b) in the Hugging Face Playground:
Now try an image editing Space: Qwen Image Edit.
For audio, try a text-to-speech model.
Beyond trying models, you can build your own tools and applications.
🌤️ Widgets - Try models directly in their model cards for a quick overview of what each model can do.
🖥️ The Playground - Try models directly on the website. No coding required.
🔌 API Calls - Integrate models via simple HTTP requests.
🐍 Python Libraries - Download and run models locally for full control.
Start simple! Most people begin with browser widgets to understand what models can do, then move to APIs or Python as their projects grow more complex.
Every model on the Hub has an interactive widget that lets you test it instantly. No account needed, no setup - just type and see what happens!
How to Use Widgets:
What You Can Try:
Widget not working? Some models are popular and may be busy. Try again in a few minutes, or look for similar models that aren’t as crowded.
The Playground offers more controls and settings to understand and compare models.
What Makes Playgrounds Special:
Pro Tips:
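One control you will typically find in a playground is *temperature* (the exact set of settings varies by playground; temperature is assumed here as a representative example). It rescales the model's token scores before sampling: low values make the output more deterministic, high values make it more varied. A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature, then normalize to probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens

sharp = softmax_with_temperature(logits, 0.5)  # low temperature
flat = softmax_with_temperature(logits, 2.0)   # high temperature
print(max(sharp) > max(flat))  # low temperature yields a peakier distribution
```

Playing with this slider while comparing outputs is a good way to build intuition before you move on to APIs, where the same parameter appears as a request option.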
Ready to integrate AI into your application? APIs let you add AI capabilities with a few lines of code.
How APIs Work:
Simple API Example (Python):
```python
from huggingface_hub import InferenceClient

client = InferenceClient()
response = client.text_generation(
    "Write a short story about a robot:",
    model="microsoft/DialoGPT-large"
)
print(response)
```

API Benefits:
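Under the hood, the client is issuing a single HTTP request, which is why any language with an HTTP library can use the API. As a rough sketch (the endpoint layout shown is the classic `api-inference.huggingface.co` route, and the token is a placeholder — check the current Inference API docs for the exact URL to use), the same call looks like this:

```python
import json

# Sketch of the HTTP request the Python client makes for you.
MODEL_ID = "microsoft/DialoGPT-large"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"

headers = {"Authorization": "Bearer hf_xxx"}  # placeholder: use your own token
payload = {"inputs": "Write a short story about a robot:"}

# To actually send it (needs network access and a valid token):
# import requests
# print(requests.post(API_URL, headers=headers, json=payload).json())

print(API_URL)
print(json.dumps(payload))
```

The request body is plain JSON, so the same pattern works from JavaScript, curl, or any backend language.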
Go deeper: We’re moving quickly through these topics, but at the end of this chapter we’ll dive deeper into building AI applications with APIs.
Run models on your computer without sending data to remote servers.
Why Use Local Apps:
To get started:
High-performance C/C++ library optimized for CPU inference. Based on the GGUF format for efficient model loading.
Best for: CPU-based inference, maximum performance
```shell
# Install with brew (Mac/Linux)
brew install llama.cpp

# Run a model directly from the Hub
llama-cli -hf bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0
```

Desktop application with an intuitive graphical interface for downloading and running local LLMs.
Best for: Beginners who want a GUI
Open-source chat application that can run offline with document chat capabilities.
Best for: Privacy-focused users
With so many models on the Hub, here are some tips for finding the right one:
Browse by Task:
Check the Stats:
Filter by Your Needs:
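Once you have a shortlist, the "check the stats" advice comes down to comparing numbers like downloads and likes across candidates. A minimal sketch with hypothetical model names and made-up figures:

```python
# Hypothetical candidates with made-up stats, for illustration only.
candidates = [
    {"id": "model-a", "downloads": 120_000, "likes": 340},
    {"id": "model-b", "downloads": 2_500_000, "likes": 1_200},
    {"id": "model-c", "downloads": 48_000, "likes": 95},
]

# Rank by downloads, a rough proxy for how battle-tested a model is.
ranked = sorted(candidates, key=lambda m: m["downloads"], reverse=True)
for m in ranked:
    print(m["id"], m["downloads"], m["likes"])
```

The same idea works manually: open each model page, note the numbers, and favor models with an active community unless a smaller one fits your task better.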
Ready to put this knowledge into practice? Pick one approach and try it this week:
The world of open-source AI is waiting for you to explore, experiment, and create. Every expert started as a beginner - your journey starts with trying your first model!
Ready to start building? Visit huggingface.co/models and begin your AI development journey.