Ready to contribute to the AI community? Sharing your work on Hugging Face helps others learn, earns you useful feedback, and builds your reputation in the AI community. Whether you’ve created a cool demo, fine-tuned a model, or collected a useful dataset, the community wants to see what you’ve built! 🌟 Plus, it’s easier than you might think.
There are three main ways to share your AI work on Hugging Face:
🤖 Models - Share trained or fine-tuned models that others can use directly or build upon
📊 Datasets - Upload datasets you’ve collected, cleaned, or created for others to use in their projects
🎮 Spaces - Create interactive demos that let people try your AI applications in their browser
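Under the hood, each of these is a repository type on the Hub. As a rough sketch (the repo ids below are placeholders, and creating repositories requires authentication, so those calls are shown commented out), `huggingface_hub` can create all three:

```python
from huggingface_hub import create_repo  # import only; no network call yet

# Creating repos requires authentication; the repo ids are placeholders.
# create_repo("your-username/my-model")                         # model (default)
# create_repo("your-username/my-dataset", repo_type="dataset")  # dataset
# create_repo("your-username/my-demo", repo_type="space", space_sdk="gradio")

repo_types = ["model", "dataset", "space"]
print(repo_types)
```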
Spaces are the easiest and most visible way to share your AI work: interactive demos make great portfolio pieces and are a quick way to gather feedback. Here’s how to create one:
1. Choose Your Framework
2. Create Your Space (start with a simple `index.html` containing `<h1>Hello World</h1>`)
3. Make It Great
You can share small datasets too — they don’t need to be large LLM training corpora.
Uploading Datasets:
Documentation tips:
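As a minimal sketch of a dataset upload, you can build a tiny CSV locally and push it with `huggingface_hub`. The rows, file name, and repo id below are hypothetical, and the upload call is commented out because it needs authentication (`hf auth login`) and a repo you own:

```python
import pandas as pd
from pathlib import Path

# Hypothetical tiny dataset -- small datasets are worth sharing too
rows = [
    {"text": "Great product!", "label": "positive"},
    {"text": "Terrible service.", "label": "negative"},
    {"text": "Okay overall.", "label": "neutral"},
]
df = pd.DataFrame(rows)

path = Path("tiny_sentiment.csv")
df.to_csv(path, index=False)
print(len(df), path.exists())

# Uploading requires authentication; "your-username/tiny-sentiment" is a placeholder.
# from huggingface_hub import HfApi
# HfApi().upload_file(
#     path_or_fileobj=str(path),
#     path_in_repo="tiny_sentiment.csv",
#     repo_id="your-username/tiny-sentiment",
#     repo_type="dataset",
# )
```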
The easiest way to get started is to duplicate a community space. This is handy if you want to use your own credits, do some private testing, or just customize the space to your liking.
Give your duplicate a name (e.g. `my-duplicate-space`). For example, why not duplicate this transcription space from Nvidia and use it to transcribe your own audio files?
Let’s put your knowledge into practice! We’ll create a simple Gradio Space that visualizes data from a Hugging Face dataset. This exercise will teach you the basics of Spaces while creating something useful.
A data explorer that lets users:
Name your Space (e.g. `my-data-explorer`), then replace the contents of `app.py` with this code:
```python
import gradio as gr
import pandas as pd

# Sample dataset - in a real app, you'd load from Hugging Face datasets
data = {
    'Country': ['USA', 'China', 'India', 'Germany', 'Japan', 'Brazil'],
    'Population': [331, 1439, 1380, 83, 125, 213],
    'GDP': [21.43, 14.34, 3.17, 3.84, 4.94, 1.61],
    'Area': [9.8, 9.6, 3.3, 0.36, 0.38, 8.5]
}
df = pd.DataFrame(data)

def explore_data(country, show_chart):
    """Filter and display data based on user selection"""
    if country == "All":
        filtered_df = df
    else:
        filtered_df = df[df['Country'] == country]

    # Create a simple text summary
    summary = f"Showing data for {len(filtered_df)} country(ies)"

    if show_chart and len(filtered_df) > 1:
        # gr.BarPlot expects a DataFrame, so pass the filtered data directly
        return filtered_df, summary, filtered_df
    return filtered_df, summary, None

# Create the Gradio interface
with gr.Blocks(title="Data Explorer") as demo:
    gr.Markdown("# 📊 Country Data Explorer")
    gr.Markdown("Explore population and economic data for different countries!")

    with gr.Row():
        with gr.Column():
            country_dropdown = gr.Dropdown(
                choices=["All"] + df['Country'].tolist(),
                value="All",
                label="Select Country",
                info="Choose a country to explore its data"
            )
            show_chart = gr.Checkbox(
                label="Show Population Chart",
                value=True,
                info="Display a chart when viewing multiple countries"
            )
            explore_btn = gr.Button("Explore Data", variant="primary")
        with gr.Column():
            summary_text = gr.Textbox(
                label="Summary",
                interactive=False
            )

    with gr.Row():
        data_table = gr.Dataframe(
            value=df,
            label="Country Data",
            interactive=False
        )

    with gr.Row():
        chart_plot = gr.BarPlot(
            x="Country",
            y="Population",
            title="Population by Country (Millions)",
            label="Population Chart",
            visible=True
        )

    # Set up the interaction
    explore_btn.click(
        fn=explore_data,
        inputs=[country_dropdown, show_chart],
        outputs=[data_table, summary_text, chart_plot]
    )

    # Also trigger on dropdown change
    country_dropdown.change(
        fn=explore_data,
        inputs=[country_dropdown, show_chart],
        outputs=[data_table, summary_text, chart_plot]
    )

# Launch the app
if __name__ == "__main__":
    demo.launch()
```

Create a `requirements.txt` file with:
```
gradio
pandas
```

And a `README.md` for your Space:

```markdown
# Country Data Explorer

A simple Gradio app that lets you explore country statistics including population, GDP, and area data.

## Features

- Filter data by country
- View data in table format
- Visualize population with charts

Built as part of the Hugging Face 101 course!
```

Once your basic Space is working, try these improvements:
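One easy improvement is deriving new columns with pandas before displaying them. A sketch (the GDP-per-capita column is an addition, not part of the original app; Population is in millions and GDP in trillions of USD, matching the sample data):

```python
import pandas as pd

df = pd.DataFrame({
    "Country": ["USA", "China", "India"],
    "Population": [331, 1439, 1380],  # millions
    "GDP": [21.43, 14.34, 3.17],      # trillions of USD
})

# GDP per capita in thousands of USD:
# (GDP * 1e12 dollars) / (Population * 1e6 people) / 1e3
df["GDP_per_capita"] = df["GDP"] * 1e12 / (df["Population"] * 1e6) / 1e3

print(df["GDP_per_capita"].round(1).tolist())  # → [64.7, 10.0, 2.3]
```

The derived column then just needs to be included in the `gr.Dataframe` shown to users.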
Promote Your Space:
Connect With Others:
Measure Success:
Let’s build something practical: an AI Meeting Notes app that transcribes audio files and generates summaries with action items. This exercise shows you how to combine multiple AI models into one powerful application.
An AI-powered meeting assistant that:
This demonstrates using multiple providers within a single application.
Tech Stack: Gradio (UI) + Inference Providers (AI)
Authenticate with Hugging Face using the CLI:
```shell
pip install huggingface_hub
hf auth login
```
When prompted, paste your Hugging Face token. Generate one from your settings.
Create a simple web interface using Gradio:
```python
import gradio as gr
from huggingface_hub import InferenceClient

def process_meeting_audio(audio_file):
    """Process uploaded audio file and return transcript + summary"""
    if audio_file is None:
        return "Please upload an audio file.", ""
    # We'll implement the AI logic next
    return "Transcript will appear here...", "Summary will appear here..."

# Create the Gradio interface
app = gr.Interface(
    fn=process_meeting_audio,
    inputs=gr.Audio(label="Upload Meeting Audio", type="filepath"),
    outputs=[
        gr.Textbox(label="Transcript", lines=10),
        gr.Textbox(label="Summary & Action Items", lines=8)
    ],
    title="🎤 AI Meeting Notes",
    description="Upload an audio file to get an instant transcript and summary with action items."
)

if __name__ == "__main__":
    app.launch()
```

This uses Gradio’s `gr.Audio` component for upload or microphone input, with two outputs: transcript and summary.
Implement transcription using OpenAI’s whisper-large-v3 (or -v3-turbo) for speech recognition.
We’ll use the auto provider to automatically select the first available provider for the model. You can define your own priority list of providers in the Inference Providers page.
```python
def transcribe_audio(audio_file_path):
    """Transcribe audio using an Inference Provider"""
    client = InferenceClient(provider="auto")
    # Pass the file path directly - the client handles file reading
    transcript = client.automatic_speech_recognition(
        audio=audio_file_path,
        model="openai/whisper-large-v3"
    )
    return transcript.text
```

Next, use a widely available language model like `deepseek-ai/DeepSeek-V3-0324` via an Inference Provider. Use `provider="auto"` to select an available provider.
We will define a custom prompt to ensure the output is formatted as a summary with action items and decisions made:
```python
def generate_summary(transcript):
    """Generate summary using an Inference Provider"""
    client = InferenceClient(provider="auto")

    prompt = f"""
    Analyze this meeting transcript and provide:
    1. A concise summary of key points
    2. Action items with responsible parties
    3. Important decisions made

    Transcript: {transcript}

    Format with clear sections:
    ## Summary
    ## Action Items
    ## Decisions Made
    """

    response = client.chat_completion(
        model="deepseek-ai/DeepSeek-V3-0324",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1000
    )
    return response.choices[0].message.content
```

To deploy, we’ll need to create an `app.py` file and upload it to Hugging Face Spaces.
```python
import gradio as gr
from huggingface_hub import InferenceClient

def transcribe_audio(audio_file_path):
    """Transcribe audio using an Inference Provider"""
    client = InferenceClient(provider="auto")
    # Pass the file path directly - the client handles file reading
    transcript = client.automatic_speech_recognition(
        audio=audio_file_path, model="openai/whisper-large-v3"
    )
    return transcript.text

def generate_summary(transcript):
    """Generate summary using an Inference Provider"""
    client = InferenceClient(provider="auto")
    prompt = f"""
    Analyze this meeting transcript and provide:
    1. A concise summary of key points
    2. Action items with responsible parties
    3. Important decisions made

    Transcript: {transcript}

    Format with clear sections:
    ## Summary
    ## Action Items
    ## Decisions Made
    """
    response = client.chat_completion(
        model="deepseek-ai/DeepSeek-V3-0324",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1000,
    )
    return response.choices[0].message.content

def process_meeting_audio(audio_file):
    """Main processing function"""
    if audio_file is None:
        return "Please upload an audio file.", ""
    try:
        # Step 1: Transcribe
        transcript = transcribe_audio(audio_file)
        # Step 2: Summarize
        summary = generate_summary(transcript)
        return transcript, summary
    except Exception as e:
        return f"Error processing audio: {str(e)}", ""

# Create Gradio interface
app = gr.Interface(
    fn=process_meeting_audio,
    inputs=gr.Audio(label="Upload Meeting Audio", type="filepath"),
    outputs=[
        gr.Textbox(label="Transcript", lines=10),
        gr.Textbox(label="Summary & Action Items", lines=8),
    ],
    title="🎤 AI Meeting Notes",
    description="Upload audio to get instant transcripts and summaries.",
)

if __name__ == "__main__":
    app.launch()
```

Our app will run on port 7860 and look like this:

To deploy, we’ll need to create a new Space and upload our files.
1. Upload `app.py`
2. Add your `HF_TOKEN` as a secret (get it from your settings)
3. Visit your Space at `https://huggingface.co/spaces/your-username/your-space-name`

Note: While we used CLI authentication locally, Spaces requires the token as a secret for the deployment environment.
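Inside the Space, the secret shows up as an environment variable. `huggingface_hub` generally picks up `HF_TOKEN` on its own, but as a sketch you can also read it and pass it explicitly (the explicit `token` argument is optional):

```python
import os

# Secrets added in the Space settings appear as environment variables.
token = os.environ.get("HF_TOKEN")  # None when the secret isn't set

# from huggingface_hub import InferenceClient
# client = InferenceClient(provider="auto", token=token)

print(token is None or isinstance(token, str))
```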
If you want to explore more providers, you can check out the Inference Providers page. Or here are some ideas for next steps: