text
stringlengths 0
2k
| heading1
stringlengths 4
79
| source_page_url
stringclasses 182
values | source_page_title
stringclasses 182
values |
|---|---|---|---|
r.Dataframe(row_count=5, row_limits=None, column_count=3, column_limits=None)
```
Or with min/max constraints:
```python
Rows between 3 and 10, columns between 2 and 5
df = gr.Dataframe(row_count=5, row_limits=(3, 10), column_count=3, column_limits=(2, 5))
```
**Migration examples:**
- `row_count=(5, "fixed")` → `row_count=5, row_limits=(5, 5)`
- `row_count=(5, "dynamic")` → `row_count=5, row_limits=None`
- `row_count=5` → `row_count=5, row_limits=None` (same behavior)
- `col_count=(3, "fixed")` → `column_count=3, column_limits=(3, 3)`
- `col_count=(3, "dynamic")` → `column_count=3, column_limits=None`
- `col_count=3` → `column_count=3, column_limits=None` (same behavior)
`allow_tags=True` is now the default for `gr.Chatbot`
Due to the rise in LLMs returning HTML, markdown tags, and custom tags (such as `<thinking>` tags), the default value of `allow_tags` in `gr.Chatbot` has changed from `False` to `True` in Gradio 6.
**In Gradio 5.x:**
- `allow_tags=False` was the default
- All HTML and custom tags were sanitized/removed from chatbot messages (unless explicitly allowed)
**In Gradio 6.x:**
- `allow_tags=True` is the default
- All custom tags (non-standard HTML tags) are preserved in chatbot messages
- Standard HTML tags are still sanitized for security unless `sanitize_html=False`
**Before (Gradio 5.x):**
```python
import gradio as gr
chatbot = gr.Chatbot()
```
This would remove all tags from messages, including custom tags like `<thinking>`.
**After (Gradio 6.x):**
```python
import gradio as gr
chatbot = gr.Chatbot()
```
This will now preserve custom tags like `<thinking>` in the messages.
**To maintain the old behavior:**
If you want to continue removing all tags from chatbot messages (the old default behavior), explicitly set `allow_tags=False`:
```python
import gradio as gr
chatbot = gr.Chatbot(allow_tags=False)
```
**Note:** You can also specify a list of specific tags to allow:
```python
chatbot = gr.Chatbot(allow_tags=["thinking",
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
e`:
```python
import gradio as gr
chatbot = gr.Chatbot(allow_tags=False)
```
**Note:** You can also specify a list of specific tags to allow:
```python
chatbot = gr.Chatbot(allow_tags=["thinking", "tool_call"])
```
This will only preserve `<thinking>` and `<tool_call>` tags while removing all other custom tags.
Other removed component parameters
Several component parameters have been removed in Gradio 6.0. These parameters were previously deprecated and have now been fully removed.
`gr.Chatbot` removed parameters
**`bubble_full_width`** - This parameter has been removed as it no longer has any effect.
**`resizeable`** - This parameter (with the typo) has been removed. Use `resizable` instead.
**Before (Gradio 5.x):**
```python
chatbot = gr.Chatbot(resizeable=True)
```
**After (Gradio 6.x):**
```python
chatbot = gr.Chatbot(resizable=True)
```
**`show_copy_button`, `show_copy_all_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
chatbot = gr.Chatbot(show_copy_button=True, show_copy_all_button=True, show_share_button=True)
```
**After (Gradio 6.x):**
```python
chatbot = gr.Chatbot(buttons=["copy", "copy_all", "share"])
```
`gr.Audio` / `WaveformOptions` removed parameters
**`show_controls`** - This parameter in `WaveformOptions` has been removed. Use `show_recording_waveform` instead.
**Before (Gradio 5.x):**
```python
audio = gr.Audio(
waveform_options=gr.WaveformOptions(show_controls=False)
)
```
**After (Gradio 6.x):**
```python
audio = gr.Audio(
waveform_options=gr.WaveformOptions(show_recording_waveform=False)
)
```
**`min_length` and `max_length`** - These parameters have been removed. Use validators instead.
**Before (Gradio 5.x):**
```python
audio = gr.Audio(min_length=1, max_length=10)
```
**After (Gradio 6.x):**
```python
audio = gr.Audio(
validator=lambda audio: gr.validators.is_audio_correct_length(audio, min_length=1
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
*
```python
audio = gr.Audio(min_length=1, max_length=10)
```
**After (Gradio 6.x):**
```python
audio = gr.Audio(
validator=lambda audio: gr.validators.is_audio_correct_length(audio, min_length=1, max_length=10)
)
```
**`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
audio = gr.Audio(show_download_button=True, show_share_button=True)
```
**After (Gradio 6.x):**
```python
audio = gr.Audio(buttons=["download", "share"])
```
**Note:** For components where `show_share_button` had a default of `None` (which would show the button on Spaces), you can use `buttons=["share"]` to always show it, or omit it from the list to hide it.
`gr.Image` removed parameters
**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
**Before (Gradio 5.x):**
```python
image = gr.Image(mirror_webcam=True)
```
**After (Gradio 6.x):**
```python
image = gr.Image(webcam_options=gr.WebcamOptions(mirror=True))
```
**`webcam_constraints`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
**Before (Gradio 5.x):**
```python
image = gr.Image(webcam_constraints={"facingMode": "user"})
```
**After (Gradio 6.x):**
```python
image = gr.Image(webcam_options=gr.WebcamOptions(constraints={"facingMode": "user"}))
```
**`show_download_button`, `show_share_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
image = gr.Image(show_download_button=True, show_share_button=True, show_fullscreen_button=True)
```
**After (Gradio 6.x):**
```python
image = gr.Image(buttons=["download", "share", "fullscreen"])
```
`gr.Video` removed parameters
**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
**Before (Gradio 5.x):**
```python
video = gr.Video(m
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
`gr.Video` removed parameters
**`mirror_webcam`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
**Before (Gradio 5.x):**
```python
video = gr.Video(mirror_webcam=True)
```
**After (Gradio 6.x):**
```python
video = gr.Video(webcam_options=gr.WebcamOptions(mirror=True))
```
**`webcam_constraints`** - This parameter has been removed. Use `webcam_options` with `gr.WebcamOptions` instead.
**Before (Gradio 5.x):**
```python
video = gr.Video(webcam_constraints={"facingMode": "user"})
```
**After (Gradio 6.x):**
```python
video = gr.Video(webcam_options=gr.WebcamOptions(constraints={"facingMode": "user"}))
```
**`min_length` and `max_length`** - These parameters have been removed. Use validators instead.
**Before (Gradio 5.x):**
```python
video = gr.Video(min_length=1, max_length=10)
```
**After (Gradio 6.x):**
```python
video = gr.Video(
validator=lambda video: gr.validators.is_video_correct_length(video, min_length=1, max_length=10)
)
```
**`show_download_button`, `show_share_button`** - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
video = gr.Video(show_download_button=True, show_share_button=True)
```
**After (Gradio 6.x):**
```python
video = gr.Video(buttons=["download", "share"])
```
`gr.ImageEditor` removed parameters
**`crop_size`** - This parameter has been removed. Use `canvas_size` instead.
**Before (Gradio 5.x):**
```python
editor = gr.ImageEditor(crop_size=(512, 512))
```
**After (Gradio 6.x):**
```python
editor = gr.ImageEditor(canvas_size=(512, 512))
```
Removed components
**`gr.LogoutButton`** - This component has been removed. Use `gr.LoginButton` instead, which handles both login and logout processes.
**Before (Gradio 5.x):**
```python
logout_btn = gr.LogoutButton()
```
**After (Gradio 6.x):**
```python
login_btn = gr.LoginButton()
```
Native plot components removed parameters
The following parameters have
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
5.x):**
```python
logout_btn = gr.LogoutButton()
```
**After (Gradio 6.x):**
```python
login_btn = gr.LoginButton()
```
Native plot components removed parameters
The following parameters have been removed from `gr.LinePlot`, `gr.BarPlot`, and `gr.ScatterPlot`:
- `overlay_point` - This parameter has been removed.
- `width` - This parameter has been removed. Use CSS styling or container width instead.
- `stroke_dash` - This parameter has been removed.
- `interactive` - This parameter has been removed.
- `show_actions_button` - This parameter has been removed.
- `color_legend_title` - This parameter has been removed. Use `color_title` instead.
- `show_fullscreen_button`, `show_export_button` - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
plot = gr.LinePlot(
value=data,
x="date",
y="downloads",
overlay_point=True,
width=900,
show_fullscreen_button=True,
show_export_button=True
)
```
**After (Gradio 6.x):**
```python
plot = gr.LinePlot(
value=data,
x="date",
y="downloads",
buttons=["fullscreen", "export"]
)
```
**Note:** For `color_legend_title`, use `color_title` instead:
**Before (Gradio 5.x):**
```python
plot = gr.ScatterPlot(color_legend_title="Category")
```
**After (Gradio 6.x):**
```python
plot = gr.ScatterPlot(color_title="Category")
```
`gr.Textbox` removed parameters
**`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
text = gr.Textbox(show_copy_button=True)
```
**After (Gradio 6.x):**
```python
text = gr.Textbox(buttons=["copy"])
```
`gr.Markdown` removed parameters
**`show_copy_button`** - This parameter has been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
markdown = gr.Markdown(show_copy_button=True)
```
**After (Gradio 6.x):**
```python
markdown = gr.Markdown(buttons=["copy"])
```
`gr.Dataframe` remove
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
stead.
**Before (Gradio 5.x):**
```python
markdown = gr.Markdown(show_copy_button=True)
```
**After (Gradio 6.x):**
```python
markdown = gr.Markdown(buttons=["copy"])
```
`gr.Dataframe` removed parameters
**`show_copy_button`, `show_fullscreen_button`** - These parameters have been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
df = gr.Dataframe(show_copy_button=True, show_fullscreen_button=True)
```
**After (Gradio 6.x):**
```python
df = gr.Dataframe(buttons=["copy", "fullscreen"])
```
`gr.Slider` removed parameters
**`show_reset_button`** - This parameter has been removed. Use the `buttons` parameter instead.
**Before (Gradio 5.x):**
```python
slider = gr.Slider(show_reset_button=True)
```
**After (Gradio 6.x):**
```python
slider = gr.Slider(buttons=["reset"])
```
|
Component-level Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
`gradio sketch` command removed
The `gradio sketch` command-line tool has been deprecated and completely removed in Gradio 6. This tool was used to create Gradio apps through a visual interface.
**In Gradio 5.x:**
- You could run `gradio sketch` to launch an interactive GUI for building Gradio apps
- The tool would generate Python code visually
**In Gradio 6.x:**
- The `gradio sketch` command has been removed
- Running `gradio sketch` will raise a `DeprecationWarning`
|
CLI Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
`hf_token` parameter renamed to `token` in `Client`
The `hf_token` parameter in the `Client` class has been renamed to `token` for consistency and simplicity.
**Before (Gradio 5.x):**
```python
from gradio_client import Client
client = Client("abidlabs/my-private-space", hf_token="hf_...")
```
**After (Gradio 6.x):**
```python
from gradio_client import Client
client = Client("abidlabs/my-private-space", token="hf_...")
```
`deploy_discord` method deprecated
The `deploy_discord` method in the `Client` class has been deprecated and will be removed in Gradio 6.0. This method was used to deploy Gradio apps as Discord bots.
**Before (Gradio 5.x):**
```python
from gradio_client import Client
client = Client("username/space-name")
client.deploy_discord(discord_bot_token="...")
```
**After (Gradio 6.x):**
The `deploy_discord` method is no longer available. Please see the [documentation on creating a Discord bot with Gradio](https://www.gradio.app/guides/creating-a-discord-bot-from-a-gradio-app) for alternative approaches.
`AppError` now subclasses `Exception` instead of `ValueError`
The `AppError` exception class in the Python client now subclasses `Exception` directly instead of `ValueError`. This is a breaking change if you have code that specifically catches `ValueError` to handle `AppError` instances.
**Before (Gradio 5.x):**
```python
from gradio_client import Client
from gradio_client.exceptions import AppError
try:
client = Client("username/space-name")
result = client.predict("/predict", inputs)
except ValueError as e:
This would catch AppError in Gradio 5.x
print(f"Error: {e}")
```
**After (Gradio 6.x):**
```python
from gradio_client import Client
from gradio_client.exceptions import AppError
try:
client = Client("username/space-name")
result = client.predict("/predict", inputs)
except AppError as e:
Explicitly catch AppError
print(f"App error: {e}")
except ValueError as e:
This will no lon
|
Python Client Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
"username/space-name")
result = client.predict("/predict", inputs)
except AppError as e:
Explicitly catch AppError
print(f"App error: {e}")
except ValueError as e:
This will no longer catch AppError
print(f"Value error: {e}")
```
|
Python Client Changes
|
https://gradio.app/guides/gradio-6-migration-guide
|
Other Tutorials - Gradio 6 Migration Guide Guide
|
Encoder functions to send audio as base64-encoded data and images as base64-encoded JPEG.
```python
import base64
import numpy as np
from io import BytesIO
from PIL import Image
def encode_audio(data: np.ndarray) -> dict:
"""Encode audio data (int16 mono) for Gemini."""
return {
"mime_type": "audio/pcm",
"data": base64.b64encode(data.tobytes()).decode("UTF-8"),
}
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
```
|
1) Encoders for audio and images
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
This handler:
- Opens a Gemini Live session on startup
- Receives streaming audio from Gemini and yields it back to the client
- Sends microphone audio as it arrives
- Sends a video frame at most once per second (to avoid flooding the API)
- Optionally sends an uploaded image (`gr.Image`) alongside the webcam frame
```python
import asyncio
import os
import time
import numpy as np
import websockets
from dotenv import load_dotenv
from google import genai
from fastrtc import AsyncAudioVideoStreamHandler, wait_for_item, WebRTCError
load_dotenv()
class GeminiHandler(AsyncAudioVideoStreamHandler):
def __init__(self) -> None:
super().__init__(
"mono",
output_sample_rate=24000,
input_sample_rate=16000,
)
self.audio_queue = asyncio.Queue()
self.video_queue = asyncio.Queue()
self.session = None
self.last_frame_time = 0.0
self.quit = asyncio.Event()
def copy(self) -> "GeminiHandler":
return GeminiHandler()
async def start_up(self):
await self.wait_for_args()
api_key = self.latest_args[3]
hf_token = self.latest_args[4]
if hf_token is None or hf_token == "":
raise WebRTCError("HF Token is required")
os.environ["HF_TOKEN"] = hf_token
client = genai.Client(
api_key=api_key, http_options={"api_version": "v1alpha"}
)
config = {"response_modalities": ["AUDIO"], "system_instruction": "You are an art critic that will critique the artwork passed in as an image to the user. Critique the artwork in a funny and lighthearted way. Be concise and to the point. Be friendly and engaging. Be helpful and informative. Be funny and lighthearted."}
async with client.aio.live.connect(
model="gemini-2.0-flash-exp",
config=config,
) as session:
self.session = session
while not self.quit.is_set():
turn = self.session.receiv
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
model="gemini-2.0-flash-exp",
config=config,
) as session:
self.session = session
while not self.quit.is_set():
turn = self.session.receive()
try:
async for response in turn:
if data := response.data:
audio = np.frombuffer(data, dtype=np.int16).reshape(1, -1)
self.audio_queue.put_nowait(audio)
except websockets.exceptions.ConnectionClosedOK:
print("connection closed")
break
Video: receive and (optionally) send frames to Gemini
async def video_receive(self, frame: np.ndarray):
self.video_queue.put_nowait(frame)
if self.session and (time.time() - self.last_frame_time > 1.0):
self.last_frame_time = time.time()
await self.session.send(input=encode_image(frame))
If there is an uploaded image passed alongside the WebRTC component,
it will be available in latest_args[2]
if self.latest_args[2] is not None:
await self.session.send(input=encode_image(self.latest_args[2]))
async def video_emit(self) -> np.ndarray:
frame = await wait_for_item(self.video_queue, 0.01)
if frame is not None:
return frame
Fallback while waiting for first frame
return np.zeros((100, 100, 3), dtype=np.uint8)
Audio: forward microphone audio to Gemini
async def receive(self, frame: tuple[int, np.ndarray]) -> None:
_, array = frame
array = array.squeeze() (num_samples,)
audio_message = encode_audio(array)
if self.session:
await self.session.send(input=audio_message)
Audio: emit Gemini’s audio back to the client
async def emit(self):
array = await wait_for_item(self.audio_queue, 0.01)
if array is not None:
return (self.output_sam
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
Audio: emit Gemini’s audio back to the client
async def emit(self):
array = await wait_for_item(self.audio_queue, 0.01)
if array is not None:
return (self.output_sample_rate, array)
return array
async def shutdown(self) -> None:
if self.session:
self.quit.set()
await self.session.close()
self.quit.clear()
```
|
2) Implement the Gemini audio-video handler
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
We’ll add an optional `gr.Image` input alongside the `WebRTC` component. The handler will access this in `self.latest_args[1]` when sending frames to Gemini.
```python
import gradio as gr
from fastrtc import Stream, WebRTC, get_hf_turn_credentials
stream = Stream(
handler=GeminiHandler(),
modality="audio-video",
mode="send-receive",
server_rtc_configuration=get_hf_turn_credentials(ttl=600*10000),
rtc_configuration=get_hf_turn_credentials(),
additional_inputs=[
gr.Markdown(
"🎨 Art Critic\n\n"
"Provide an image of your artwork or hold it up to the webcam, and Gemini will critique it for you."
"To get a Gemini API key, please visit the [Gemini API Key](https://aistudio.google.com/apikey) page."
"To get an HF Token, please visit the [HF Token](https://huggingface.co/settings/tokens) page."
),
gr.Image(label="Artwork", value="mona_lisa.jpg", type="numpy", sources=["upload", "clipboard"]),
gr.Textbox(label="Gemini API Key", type="password"),
gr.Textbox(label="HF Token", type="password"),
],
ui_args={
"icon": "https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
"pulse_color": "rgb(255, 255, 255)",
"icon_button_color": "rgb(255, 255, 255)",
"title": "Gemini Audio Video Chat",
},
time_limit=90,
concurrency_limit=5,
)
if __name__ == "__main__":
stream.ui.launch()
```
References
- Gemini Audio Video Chat reference code: [Hugging Face Space](https://huggingface.co/spaces/gradio/gemini-audio-video/blob/main/app.py)
- FastRTC docs: `https://fastrtc.org`
- Audio + video user guide: `https://fastrtc.org/userguide/audio-video/`
- Gradio component integration: `https://fastrtc.org/userguide/gradio/`
- Cookbook (live demos + code): `https://fastrtc.org/cookbook/`
|
3) Setup Stream and Gradio UI
|
https://gradio.app/guides/create-immersive-demo
|
Other Tutorials - Create Immersive Demo Guide
|
Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):
1\. Get the URL of the Google Sheets that you want to use. To do this, simply go to the Google Sheets, click on the "Share" button in the top-right corner, and then click on the "Get shareable link" button. This will give you a URL that looks something like this:
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/editgid=0
```
2\. Now, let's modify this URL and then use it to read the data from the Google Sheets into a Pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):
```python
import pandas as pd
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/editgid=0"
csv_url = URL.replace('/editgid=', '/export?format=csv&gid=')
def get_data():
return pd.read_csv(csv_url)
```
3\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("📈 Real-Time Line Plot")
with gr.Row():
with gr.Column():
gr.DataFrame(get_data, every=gr.Timer(5))
with gr.Column():
gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
demo.queue().launch() Run the demo with queuing enabled
```
And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
|
Public Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets.
Authentication
To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up google cloud credentials](https://developers.google.com/workspace/guides/create-credentials):
1\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)
2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.
4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
5\. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. **Note down the email of the service account**
6\. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:
```json
{
"type": "service_account",
"project_id": "your project",
"private_key_id": "your private key id",
"private_key": "private key",
"client_email": "email",
"client_id": "client id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
}
```
Querying
Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:
1\. Click on the "Share" button in the top-right corner of the Google Sheet. Share the Google Sheets with the email address of the service from Step 5 of authentication subsection (this step is important!). Then click on the "Get shareable link" button. This will give you a URL that looks something like this:
```html
https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/editgid=0
```
2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python by running in the terminal: `pip install gspread`
3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):
```python
import gspread
import pandas as pd
Authenticate with Google and get the sheet
URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/editgid=0'
gc = gspread.service_account("path/to/key.json")
sh = gc.open_by_url(URL)
worksheet = sh.sheet1
def get_data():
values = worksheet.get_all_values()
df = pd.DataFrame(values[1:], columns=values[0])
return df
```
4\. The data query is a function, which means that it's easy to display it real-time using the `gr.DataFrame` component, or plot it real-time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio cod
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
. To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("📈 Real-Time Line Plot")
with gr.Row():
with gr.Column():
gr.DataFrame(get_data, every=gr.Timer(5))
with gr.Column():
gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
demo.queue().launch() Run the demo with queuing enabled
```
You now have a Dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
|
Private Google Sheets
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.
|
Conclusion
|
https://gradio.app/guides/creating-a-realtime-dashboard-from-google-sheets
|
Other Tutorials - Creating A Realtime Dashboard From Google Sheets Guide
|
Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `launch()` method of `Blocks` or `Interface`. For example:
```python
with gr.Blocks() as demo:
... your code here
demo.launch(theme=gr.themes.Soft())
...
```
<div class="wrapper">
<iframe
src="https://gradio-theme-soft.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:
* `gr.themes.Base()` - the `"base"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.
* `gr.themes.Default()` - the `"default"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.
* `gr.themes.Origin()` - the `"origin"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.
* `gr.themes.Citrus()` - the `"citrus"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.
* `gr.themes.Monochrome()` - the `"monochrome"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper.
* `gr.themes.Soft()` - the `"soft"` theme uses a purple primary color and white secondary color. It also increases the border radius around buttons and form elements and highlights labels.
* `gr.themes.Glass()` - the `"glass"` theme has a blue primary color and a transclucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.
* `gr.themes.Ocean()` - the `"ocean"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.
Each of these themes set values
|
Introduction
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
the `"ocean"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.
Each of these themes set values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach.
|
Introduction
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:
```python
import gradio as gr
gr.themes.builder()
```
$demo_theme_builder
You can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.
As you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.
In the rest of the guide, we will cover building themes programmatically.
|
Using the Theme Builder
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
Although each theme has hundreds of CSS variables, the values for most of these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app.
Core Colors
The first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.
The 3 color constructor arguments are:
- `primary_hue`: This is the color that draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.
- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.
- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.
You could modify these values using their string shortcuts, such as
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Default(primary_hue="red", secondary_hue="pink"))
```
or you could use the `Color` objects directly, like this:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink))
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Predefined colors are:
- `slate`
- `gray`
- `zinc`
- `neutral`
- `stone`
- `red`
- `orange`
- `amber`
- `yellow`
- `lime`
- `green`
- `emerald`
- `teal`
- `cyan`
- `sky`
- `blue`
- `indigo`
- `violet`
- `purple`
- `fuchsia`
- `pink`
- `rose`
You could also create your own custom `Color` objects and pass them in.
|
Extending Themes via the Constructor
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
Core Sizing
The next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.
- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.
- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.
- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.
You could modify these values using their string shortcuts, such as
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Default(spacing_size="sm", radius_size="none"))
```
or you could use the `Size` objects directly, like this:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none))
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
The predefined size objects are:
- `radius_none`
- `radius_sm`
- `radius_md`
- `radius_lg`
- `spacing_sm`
- `spacing_md`
- `spacing_lg`
- `text_sm`
- `text_md`
- `text_lg`
You could also create your own custom `Size` objects and pass them in.
Core Fonts
The final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.
- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Sans")`.
- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont("IBM Plex Mono")`.
You could modify these values as follows:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=gr.themes.Default(font=[gr.themes.GoogleFont("Inconsolata"), "Arial", "sans-serif"]))
```
<div class="wrapper">
<iframe
src="https://gradio-theme-extended-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:
```python
theme = gr.themes.Default(primary_hue="blue").set(
    loader_color="#FF0000",
    slider_color="#FF0000",
)
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=theme)
```
In the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_hue` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.
Your IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized.
CSS Variable Naming Conventions
CSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:
1. The target element, such as `button`, `slider`, or `block`.
2. The target element type or sub-element, such as `button_primary`, or `block_label`.
3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.
4. Any relevant state, such as `button_primary_background_fill_hover`.
5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.
Of course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`.
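As a rough illustration of this convention, here is a small helper (ours, not part of Gradio) that splits a variable name into its underscore-separated parts and detects the dark-mode suffix:

```python
# Illustrative only -- not part of Gradio. Splits a theme CSS variable
# name into its parts and checks for the "_dark" suffix.
def describe_variable(name: str) -> dict:
    parts = name.split("_")
    dark = parts[-1] == "dark"
    if dark:
        parts = parts[:-1]
    return {"parts": parts, "dark_mode": dark}

print(describe_variable("button_primary_background_fill_hover_dark"))
# {'parts': ['button', 'primary', 'background', 'fill', 'hover'], 'dark_mode': True}
```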
CSS Variable Organization
Though there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify.
|
Extending Themes via `.set()`
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
Referencing Core Variables
To reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:
```python
theme = gr.themes.Default(primary_hue="blue").set(
    button_primary_background_fill="*primary_200",
    button_primary_background_fill_hover="*primary_300",
)
```
In the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.
Similarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. For example:
```python
theme = gr.themes.Default(radius_size="md").set(
    button_primary_border_radius="*radius_xl",
)
```
In the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range.
Referencing Other Variables
Variables can also reference each other. For example, look at the example below:
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="#FF0000",
    button_primary_border="#FF0000",
)
```
Having to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_hover="*button_primary_background_fill",
    button_primary_border="*button_primary_background_fill",
)
```
Now, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.
This is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.
Note that dark mode variables automatically reference each other. For example:
```python
theme = gr.themes.Default().set(
    button_primary_background_fill="#FF0000",
    button_primary_background_fill_dark="#AAAAAA",
    button_primary_border="*button_primary_background_fill",
    button_primary_border_dark="*button_primary_background_fill_dark",
)
```
`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode variables always draw from the dark version of the variable they reference.
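To make the reference semantics concrete, here is a toy resolver (ours, not Gradio's implementation) that follows `*`-prefixed values until it reaches a literal:

```python
# Toy resolver, for illustration only: follow "*" references between
# theme variables until a literal value is reached.
def resolve(variables: dict, name: str) -> str:
    value = variables[name]
    while isinstance(value, str) and value.startswith("*"):
        value = variables[value[1:]]  # strip "*" and look up the target
    return value

theme_vars = {
    "button_primary_background_fill": "#FF0000",
    "button_primary_background_fill_hover": "*button_primary_background_fill",
    "button_primary_border": "*button_primary_background_fill",
}
print(resolve(theme_vars, "button_primary_border"))  # #FF0000
```

Changing `button_primary_background_fill` in `theme_vars` would change what both referencing variables resolve to, mirroring the behavior described above.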
Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.
Our new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's create a dummy theme called Seafoam, and a simple app that uses it.
$code_theme_new_step_1
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-1.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
The Base theme is very barebones, and uses `gr.themes.Blue` as its primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the default core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.
We'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.
$code_theme_new_step_2
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-2.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
See how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.
Let's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.
$code_theme_new_step_3
<div class="wrapper">
<iframe
src="https://gradio-theme-new-step-3.hf.space?__theme=light"
frameborder="0"
></iframe>
</div>
Look how fun our theme looks now! With just a few variable changes, our theme looks completely different.
|
Creating a Full Theme
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
You may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel.
Once you have created a theme, you can upload it to the HuggingFace Hub to let others view it, use it, and build off of it!
Uploading a Theme
There are two ways to upload a theme, via the theme class instance or the command line. We will cover both of them with the previously created `seafoam` theme.
- Via the class instance
Each theme instance has a method called `push_to_hub` that we can use to upload a theme to the HuggingFace Hub.
```python
seafoam.push_to_hub(
    repo_name="seafoam",
    version="0.0.1",
    token="<token>",
)
```
- Via the command line
First save the theme to disk
```python
seafoam.dump(filename="seafoam.json")
```
Then use the `upload_theme` command:
```bash
upload_theme \
  "seafoam.json" \
  "seafoam" \
  --version "0.0.1" \
  --token "<token>"
```
In order to upload a theme, you must have a HuggingFace account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)
as the `token` argument. However, if you log in via the [HuggingFace command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`),
you can omit the `token` argument.
The `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.
That way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying
about changing how previously created apps look. The `version` argument is optional. If omitted, the next patch version is automatically applied.
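The "next patch version" behavior can be sketched as follows (a hypothetical helper of ours, not Gradio's code):

```python
# Hypothetical helper illustrating the automatic patch bump described above:
# given the latest published semantic version, return the next patch version.
def next_patch(version: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    return f"{major}.{minor}.{patch + 1}"

print(next_patch("0.0.1"))  # 0.0.2
```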
Theme Previews
By calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [HuggingFace space](https://huggingface.co/docs/hub/spaces-overview).
For example, the theme preview for the calm seafoam theme is here: [calm seafoam preview](https://huggingface.co/spaces/shivalikasingh/calm_seafoam).
|
Sharing Themes
|
https://gradio.app/guides/theming-guide
|
Other Tutorials - Theming Guide Guide
|
<div class="wrapper">
<iframe
src="https://shivalikasingh-calm-seafoam.hf.space/?__theme=light"
frameborder="0"
></iframe>
</div>
Discovering Themes
The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all the public gradio themes. After publishing your theme,
it will automatically show up in the theme gallery after a couple of minutes.
You can sort the themes by the number of likes on the space and from most to least recently created, as well as toggle themes between light and dark mode.
<div class="wrapper">
<iframe
src="https://gradio-theme-gallery.static.hf.space"
frameborder="0"
></iframe>
</div>
Downloading
To use a theme from the hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:
```python
my_theme = gr.Theme.from_hub("gradio/seafoam")
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme=my_theme)
```
You can also pass the theme string directly to the `launch()` method of `Blocks` or `Interface` (e.g. `demo.launch(theme="gradio/seafoam")`).
You can pin your app to an upstream theme version by using semantic versioning expressions.
For example, the following would ensure the theme we load from the `seafoam` repo was between versions `0.0.1` and `0.1.0`:
```python
with gr.Blocks() as demo:
    ...  # your code here

demo.launch(theme="gradio/seafoam@>=0.0.1,<0.1.0")
```
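The pinned theme string packs the repo name and the version constraints together. A rough sketch of how such a string decomposes (our illustration, not Gradio's actual loader):

```python
# Illustration only: split "user/repo@<semver expression>" into its parts.
def parse_theme_spec(spec: str):
    repo, _, expr = spec.partition("@")
    constraints = expr.split(",") if expr else []
    return repo, constraints

print(parse_theme_spec("gradio/seafoam@>=0.0.1,<0.1.0"))
# ('gradio/seafoam', ['>=0.0.1', '<0.1.0'])
```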
Enjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the hub!
If you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!
<style>
.wrapper {
position: relative;
padding-bottom: 56.25%;
padding-top: 25px;
height: 0;
}
.wrapper iframe {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
}
</style>
Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome, which prevents data scientists from focusing on what matters, such as data analysis and model building. Data scientists can end up spending hours building a dashboard that takes in a dataframe and returns plots, or returns a prediction or a plot of clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started).
|
Introduction
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
We will take a look at how we can create a simple UI that predicts failures based on product information.
```python
import gradio as gr
import pandas as pd
import joblib
import datasets
inputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(4, "dynamic"), label="Input Data", interactive=True)]
outputs = [gr.Dataframe(row_count=(2, "dynamic"), col_count=(1, "fixed"), label="Predictions", headers=["Failures"])]
model = joblib.load("model.pkl")

# We will provide our dataframe as an example input
df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()

def infer(input_dataframe):
    return pd.DataFrame(model.predict(input_dataframe))

gr.Interface(fn=infer, inputs=inputs, outputs=outputs, examples=[[df.head(2)]]).launch()
```
Let's break down the code above.
- `fn`: the inference function that takes an input dataframe and returns predictions.
- `inputs`: the component we take our input with. We define our input as a dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe of that shape. When `row_count` is set to `dynamic`, the number of rows is not fixed, so you don't have to fit your dataset to a pre-defined component shape.
- `outputs`: the dataframe component that stores the outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we set `row_count` to 2 and `col_count` to 1 above. `headers` is a list of the column names for the dataframe.
- `examples`: you can either pass the input by dragging and dropping a CSV file, or through a pandas DataFrame passed via examples, whose headers will be automatically picked up by the interface.
We will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.
<gradio-app space="gradio/tabular-playground"></gradio-app>
|
Let's Create a Simple Interface!
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
```python
import gradio as gr
import pandas as pd
import datasets
import seaborn as sns
import matplotlib.pyplot as plt
df = datasets.load_dataset("merve/supersoaker-failures")
df = df["train"].to_pandas()
df.dropna(axis=0, inplace=True)
def plot(df):
    plt.scatter(df.measurement_13, df.measurement_15, c=df.loading, alpha=0.5)
    plt.savefig("scatter.png")
    plt.clf()  # start a fresh figure so the plots don't overlap
    df['failure'].value_counts().plot(kind='bar')
    plt.savefig("bar.png")
    plt.clf()
    sns.heatmap(df.select_dtypes(include="number").corr())
    plt.savefig("corr.png")
    plots = ["corr.png", "scatter.png", "bar.png"]
    return plots
inputs = [gr.Dataframe(label="Supersoaker Production Data")]
outputs = [gr.Gallery(label="Profiling Dashboard", columns=(1,3))]
gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], title="Supersoaker Failures Analysis Dashboard").launch()
```
<gradio-app space="gradio/gradio-analysis-dashboard-minimal"></gradio-app>
We will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.
- `fn`: The function that will create plots based on data.
- `inputs`: We use the same `Dataframe` component we used above.
- `outputs`: The `Gallery` component is used to keep our visualizations.
- `examples`: We will have the dataset itself as the example.
`skops` is a library built on top of `huggingface_hub` and `sklearn`. With the recent `gradio` integration of `skops`, you can build tabular data interfaces with one line of code!
```python
import gradio as gr
# title and description are optional
title = "Supersoaker Defective Product Prediction"
description = "This model predicts Supersoaker production line failures. Drag and drop any slice from dataset or edit values as you wish in below dataframe component."
gr.load("huggingface/scikit-learn/tabular-playground", title=title, description=description).launch()
```
<gradio-app space="gradio/gradio-skops-integration"></gradio-app>
`sklearn` models pushed to the Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names and the task being solved (either `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes the column names and the example input to build it. You can [refer to the skops documentation on hosting models on the Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to the Hub using `skops`.
|
Easily load tabular data interfaces with one line of code using skops
|
https://gradio.app/guides/using-gradio-for-tabular-workflows
|
Other Tutorials - Using Gradio For Tabular Workflows Guide
|
Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.
Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.
Let's get started!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained image classification model, so you should also have `torch` installed.
|
Introduction
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.
```python
import torch
model = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()
```
Because we will be using the model for inference, we have called the `.eval()` method.
|
Step 1 — Setting up the Image Classification Model
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).
In the case of our pretrained model, it will look like this:
```python
import requests
from PIL import Image
from torchvision import transforms
# Download human-readable labels for ImageNet.
response = requests.get("https://git.io/JJkYN")
labels = response.text.split("\n")
def predict(inp):
    inp = transforms.ToTensor()(inp).unsqueeze(0)
    with torch.no_grad():
        prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)
    confidences = {labels[i]: float(prediction[i]) for i in range(1000)}
    return confidences
```
Let's break this down. The function takes one parameter:
- `inp`: the input image as a `PIL` image
Then, the function converts the image to a PyTorch `tensor`, passes it through the model, and returns:
- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities
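The confidences-dict pattern itself is framework-agnostic. Here is a minimal sketch without `torch`, using made-up scores and labels for illustration:

```python
import math

# Softmax over raw scores, keyed by label -- the dict shape that the
# prediction function above returns for gr.Label to display.
def to_confidences(scores, labels):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {label: e / total for label, e in zip(labels, exps)}

confidences = to_confidences([2.0, 1.0, 0.1], ["lion", "cheetah", "tabby"])
print(max(confidences, key=confidences.get))  # lion
```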
|
Step 2 — Defining a `predict` function
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Now that we have our predictive function set up, we can create a Gradio Interface around it.
In this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type="pil")` which creates the component and handles the preprocessing to convert that to a `PIL` image.
The output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 classes by constructing it as `Label(num_top_classes=3)`.
Finally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:
```python
import gradio as gr
gr.Interface(fn=predict,
             inputs=gr.Image(type="pil"),
             outputs=gr.Label(num_top_classes=3),
             examples=["lion.jpg", "cheetah.jpg"]).launch()
```
This produces the following interface, which you can try right here in your browser (try uploading your own examples!):
<gradio-app space="gradio/pytorch-image-classifier"></gradio-app>
---
And you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!
|
Step 3 — Creating a Gradio Interface
|
https://gradio.app/guides/image-classification-in-pytorch
|
Other Tutorials - Image Classification In Pytorch Guide
|
Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or "token") into different categories, such as names of people or names of locations, or different parts of speech.
For example, given the sentence:
> Does Chicago have any Pakistani restaurants?
A named-entity recognition algorithm may identify:
- "Chicago" as a **location**
- "Pakistani" as an **ethnicity**
and so on.
Using `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.
Here is an example of a demo that you'll be able to build:
$demo_ner_pipeline
This tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!
Prerequisites
Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own, while in this tutorial, we will use one from the `transformers` library.
Approach 1: List of Entity Dictionaries
Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a "start" index, and an "end" index. This is, for example, how NER models in the `transformers` library operate:
```py
from transformers import pipeline
ner_pipeline = pipeline("ner")
ner_pipeline("Does Chicago have any Pakistani restaurants")
```
Output:
```bash
[{'entity': 'I-LOC',
  'score': 0.9988978,
  'index': 2,
  'word': 'Chicago',
  'start': 5,
  'end': 12},
 {'entity': 'I-MISC',
  'score': 0.9958592,
  'index': 5,
  'word': 'Pakistani',
  'start': 22,
  'end': 31}]
```
|
Introduction
|
https://gradio.app/guides/named-entity-recognition
|
Other Tutorials - Named Entity Recognition Guide
|
If you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass this **list of entities**, along with the **original text**, to the component together as a dictionary, with the keys being `"entities"` and `"text"` respectively.
Here is a complete example:
$code_ner_pipeline
$demo_ner_pipeline
Approach 2: List of Tuples
An alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.
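If your model already produces the Approach 1 dictionaries, you can convert them to this tuple format with a few lines of plain Python (a sketch of ours, not part of Gradio):

```python
# Convert raw text plus entity dicts (with "start"/"end" character offsets)
# into the list of (text, label) tuples that HighlightedText accepts.
def to_tuples(text, entities):
    tuples, cursor = [], 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        if ent["start"] > cursor:
            tuples.append((text[cursor:ent["start"]], None))  # unlabeled span
        tuples.append((text[ent["start"]:ent["end"]], ent["entity"]))
        cursor = ent["end"]
    if cursor < len(text):
        tuples.append((text[cursor:], None))
    return tuples

text = "Does Chicago have any Pakistani restaurants"
entities = [
    {"entity": "I-LOC", "start": 5, "end": 12},
    {"entity": "I-MISC", "start": 22, "end": 31},
]
print(to_tuples(text, entities))
```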
In some cases, this can be easier than the first approach. Here is a demo showing this approach using Spacy's parts-of-speech tagger:
$code_text_analysis
$demo_text_analysis
---
And you're done! That's all you need to know to build a web-based GUI for your NER model.
Fun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`.
This guide explains how you can use Gradio to plot geographical data on a map using the `gradio.Plot` component. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly. Plotly is what we will be working with in this guide. Plotly allows developers to easily create all sorts of maps with their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples.
|
Introduction
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
We will be using the New York City Airbnb dataset, which is hosted on Kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data we will plot Airbnb locations on a map output and allow filtering based on price and location. Below is the demo that we will be building. ⚡️
$demo_map_airbnb
|
Overview
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
Let's start by loading the Airbnb NYC data from the Hugging Face Hub.
```python
from datasets import load_dataset
dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
df = dataset.to_pandas()
def filter_map(min_price, max_price, boroughs):
new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
(df['price'] > min_price) & (df['price'] < max_price)]
names = new_df["name"].tolist()
prices = new_df["price"].tolist()
text_list = [(names[i], prices[i]) for i in range(0, len(names))]
```
In the code above, we first load the CSV data into a pandas dataframe. Next, we define the function that will serve as the prediction function for the gradio app. This function accepts the minimum and maximum price range as well as the list of boroughs to filter the resulting map. We use the passed-in values (`min_price`, `max_price`, and the list of `boroughs`) to filter the dataframe and create `new_df`. Finally, we create `text_list` from the names and prices of each Airbnb to use as labels on the map.
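The same filtering logic can be sketched on a tiny hand-made dataframe (the listings below are hypothetical, not drawn from the real dataset):

```python
import pandas as pd

# a tiny stand-in for the Airbnb dataframe
df = pd.DataFrame({
    "name": ["Cozy loft", "Sunny studio", "Penthouse"],
    "price": [120, 95, 900],
    "neighbourhood_group": ["Brooklyn", "Queens", "Manhattan"],
})

def filter_listings(min_price, max_price, boroughs):
    # keep rows whose borough is selected and whose price is in range
    new_df = df[(df["neighbourhood_group"].isin(boroughs)) &
                (df["price"] > min_price) & (df["price"] < max_price)]
    # pair each name with its price for use as map labels
    return list(zip(new_df["name"], new_df["price"]))

text_list = filter_listings(100, 500, ["Brooklyn", "Queens"])
```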
|
Step 1 - Loading CSV data 💾
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
Plotly makes it easy to work with maps. Let's take a look below how we can create a map figure.
```python
import plotly.graph_objects as go
fig = go.Figure(go.Scattermapbox(
customdata=text_list,
lat=new_df['latitude'].tolist(),
lon=new_df['longitude'].tolist(),
mode='markers',
marker=go.scattermapbox.Marker(
size=6
),
hoverinfo="text",
hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
))
fig.update_layout(
mapbox_style="open-street-map",
hovermode='closest',
mapbox=dict(
bearing=0,
center=go.layout.mapbox.Center(
lat=40.67,
lon=-73.90
),
pitch=0,
zoom=9
),
)
```
Above, we create a scatter plot on Mapbox by passing in our lists of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices so that additional info appears on every marker we hover over. Next, we use `update_layout` to specify other map settings such as zoom and centering.
More info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly.
|
Step 2 - Map Figure 🌐
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users of our app to specify price ranges and borough locations. We will then use the `gr.Plot` component as an output for our Plotly + Mapbox map we created earlier.
```python
with gr.Blocks() as demo:
with gr.Column():
with gr.Row():
min_price = gr.Number(value=250, label="Minimum Price")
max_price = gr.Number(value=1000, label="Maximum Price")
boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
btn = gr.Button(value="Update Filter")
map = gr.Plot()
demo.load(filter_map, [min_price, max_price, boroughs], map)
btn.click(filter_map, [min_price, max_price, boroughs], map)
```
We lay out these components using `gr.Column` and `gr.Row`, and we also add event triggers for when the demo first loads and when our "Update Filter" button is clicked, so that the map updates with the new filters.
This is what the full demo code looks like:
$code_map_airbnb
|
Step 3 - Gradio App ⚡️
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
If you run the code above, your app will start running locally.
You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
But what if you want a permanent deployment solution?
Let's deploy our Gradio app to the free Hugging Face Spaces platform.
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
|
Step 4 - Deployment 🤗
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
And you're all done! That's all the code you need to build a map demo.
Here's a link to the [map demo](https://huggingface.co/spaces/gradio/map_airbnb) and its [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) on Hugging Face Spaces.
|
Conclusion 🎉
|
https://gradio.app/guides/plot-component-for-maps
|
Other Tutorials - Plot Component For Maps Guide
|
A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts.
Using virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others.
|
Virtual Environments
|
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
|
Other Tutorials - Installing Gradio In A Virtual Environment Guide
|
To install Gradio on a Windows system in a virtual environment, follow these steps:
1. **Install Python**: Ensure you have Python 3.10 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt.
2. **Create a Virtual Environment**:
Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command:
```bash
python -m venv gradio-env
```
This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation.
3. **Activate the Virtual Environment**:
To activate the virtual environment, run:
```bash
.\gradio-env\Scripts\activate
```
Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step.
4. **Install Gradio**:
Now, you can install Gradio using pip:
```bash
pip install gradio
```
5. **Verification**:
To verify the installation, run `python` and then type:
```python
import gradio as gr
print(gr.__version__)
```
This will display the installed version of Gradio.
|
Installing Gradio on Windows
|
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
|
Other Tutorials - Installing Gradio In A Virtual Environment Guide
|
The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.
1. **Install Python**:
Python usually comes pre-installed on MacOS and most Linux distributions. You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps).
Ensure you have Python 3.10 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/).
2. **Create a Virtual Environment**:
Open Terminal and navigate to your project directory. Then create a virtual environment using:
```bash
python -m venv gradio-env
```
Note: you can choose a different name than `gradio-env` for your virtual environment in this step.
3. **Activate the Virtual Environment**:
To activate the virtual environment on MacOS/Linux, use:
```bash
source gradio-env/bin/activate
```
4. **Install Gradio**:
With the virtual environment activated, install Gradio using pip:
```bash
pip install gradio
```
5. **Verification**:
To verify the installation, run `python` and then type:
```python
import gradio as gr
print(gr.__version__)
```
This will display the installed version of Gradio.
By following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects.
|
Installing Gradio on MacOS/Linux
|
https://gradio.app/guides/installing-gradio-in-a-virtual-environment
|
Other Tutorials - Installing Gradio In A Virtual Environment Guide
|
Let’s start with a simple example of integrating a C++ program into a Gradio app. Suppose we have the following C++ program that adds two numbers:
```cpp
// add.cpp
#include <iostream>

int main() {
    double a, b;
    std::cin >> a >> b;
    std::cout << a + b << std::endl;
    return 0;
}
```
This program reads two numbers from standard input, adds them, and outputs the result.
We can build a Gradio interface around this C++ program using Python's `subprocess` module. Here’s the corresponding Python code:
```python
import gradio as gr
import subprocess
def add_numbers(a, b):
process = subprocess.Popen(
['./add'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
output, error = process.communicate(input=f"{a} {b}\n".encode())
if error:
return f"Error: {error.decode()}"
return float(output.decode().strip())
demo = gr.Interface(
fn=add_numbers,
inputs=[gr.Number(label="Number 1"), gr.Number(label="Number 2")],
outputs=gr.Textbox(label="Result")
)
demo.launch()
```
Here, `subprocess.Popen` is used to execute the compiled C++ program (`add`), pass the input values, and capture the output. You can compile the C++ program by running:
```bash
g++ -o add add.cpp
```
This example shows how easy it is to call C++ from Python using `subprocess` and build a Gradio interface around it.
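The same stdin/stdout round-trip can be sketched without the compiled binary by substituting the Python interpreter for `./add` (purely a stand-in for illustration); `subprocess.run` is a higher-level alternative to `Popen` + `communicate` that also surfaces non-zero exit codes:

```python
import subprocess
import sys

def add_numbers(a, b):
    # stand-in for ['./add']: a one-liner that reads two numbers and prints the sum
    proc = subprocess.run(
        [sys.executable, "-c", "a, b = map(float, input().split()); print(a + b)"],
        input=f"{a} {b}\n",
        capture_output=True,
        text=True,
        timeout=10,
    )
    if proc.returncode != 0:
        return f"Error: {proc.stderr}"
    return float(proc.stdout.strip())
```

Swapping the command list back to `['./add']` reproduces the original example.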
|
Using Gradio with C++
|
https://gradio.app/guides/using-gradio-in-other-programming-languages
|
Other Tutorials - Using Gradio In Other Programming Languages Guide
|
Now, let’s move to another example: calling a Rust program to apply a sepia filter to an image. The Rust code could look something like this:
```rust
// sepia.rs
extern crate image;
use image::{GenericImageView, ImageBuffer, Rgba};
fn sepia_filter(input: &str, output: &str) {
let img = image::open(input).unwrap();
let (width, height) = img.dimensions();
let mut img_buf = ImageBuffer::new(width, height);
for (x, y, pixel) in img.pixels() {
let (r, g, b, a) = (pixel[0] as f32, pixel[1] as f32, pixel[2] as f32, pixel[3]);
let tr = (0.393 * r + 0.769 * g + 0.189 * b).min(255.0);
let tg = (0.349 * r + 0.686 * g + 0.168 * b).min(255.0);
let tb = (0.272 * r + 0.534 * g + 0.131 * b).min(255.0);
img_buf.put_pixel(x, y, Rgba([tr as u8, tg as u8, tb as u8, a]));
}
img_buf.save(output).unwrap();
}
fn main() {
let args: Vec<String> = std::env::args().collect();
if args.len() != 3 {
eprintln!("Usage: sepia <input_file> <output_file>");
return;
}
sepia_filter(&args[1], &args[2]);
}
```
This Rust program applies a sepia filter to an image. It takes two command-line arguments: the input image path and the output image path. You can compile this program using:
```bash
cargo build --release
```
Now, we can call this Rust program from Python and use Gradio to build the interface:
```python
import gradio as gr
import subprocess
def apply_sepia(input_path):
output_path = "output.png"
process = subprocess.Popen(
['./target/release/sepia', input_path, output_path],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
process.wait()
return output_path
demo = gr.Interface(
fn=apply_sepia,
inputs=gr.Image(type="filepath", label="Input Image"),
outputs=gr.Image(label="Sepia Image")
)
demo.launch()
```
Here, when a user uploads an image and clicks submit, Gradio calls the Rust binary (`sepia`) to process the image, and returns the sepia-filtered output to Gradio.
|
Using Gradio with Rust
|
https://gradio.app/guides/using-gradio-in-other-programming-languages
|
Other Tutorials - Using Gradio In Other Programming Languages Guide
|
Here, when a user uploads an image and clicks submit, Gradio calls the Rust binary (`sepia`) to process the image, and returns the sepia-filtered output to Gradio.
This setup showcases how you can integrate performance-critical or specialized code written in Rust into a Gradio interface.
|
Using Gradio with Rust
|
https://gradio.app/guides/using-gradio-in-other-programming-languages
|
Other Tutorials - Using Gradio In Other Programming Languages Guide
|
Integrating Gradio with R is particularly straightforward thanks to the `reticulate` package, which allows you to run Python code directly in R. Let’s walk through an example of using Gradio in R.
**Installation**
First, you need to install the `reticulate` package in R:
```r
install.packages("reticulate")
```
Once installed, you can use the package to run Gradio directly from within an R script.
```r
library(reticulate)
py_install("gradio", pip = TRUE)
gr <- import("gradio") # the equivalent of Python's "import gradio as gr"
```
**Building a Gradio Application**
With gradio installed and imported, we now have access to gradio's app-building methods. Let's build a simple app for an R function that returns a greeting.
```r
greeting <- \(name) paste("Hello", name)
app <- gr$Interface(
fn = greeting,
inputs = gr$Text(label = "Name"),
outputs = gr$Text(label = "Greeting"),
  title = "Hello! &#128515; &#128075;"
)
app$launch(server_name = "localhost",
server_port = as.integer(3000))
```
Credit to [@IfeanyiIdiaye](https://github.com/Ifeanyi55) for contributing this section. You can see more examples [here](https://github.com/Ifeanyi55/Gradio-in-R/tree/main/Code), including using Gradio Blocks to build a machine learning application in R.
|
Using Gradio with R (via `reticulate`)
|
https://gradio.app/guides/using-gradio-in-other-programming-languages
|
Other Tutorials - Using Gradio In Other Programming Languages Guide
|
When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these "hard" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust.
Gradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss more about how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`.
|
Introduction
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.
There are [four parameters](https://gradio.app/docs/interfaceinitialization) in `gradio.Interface` that control how flagging works. We will go over them in greater detail.
- `flagging_mode`: this parameter can be set to either `"manual"` (default), `"auto"`, or `"never"`.
- `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.
- `auto`: users will not see a button to flag, but every sample will be flagged automatically.
- `never`: users will not see a button to flag, and no sample will be flagged.
- `flagging_options`: this parameter can be either `None` (default) or a list of strings.
- If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.
- If a list of strings are provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `["Incorrect", "Ambiguous"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. This only applies if `flagging_mode` is `"manual"`.
- The chosen option is then logged along with the input and output.
- `flagging_dir`: this parameter takes a string.
- It represents what to name the directory where flagged data is stored.
- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class
- Using this parameter allows you to write custom code that gets run when the flag button is clicked
|
The **Flag** button in `gradio.Interface`
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
flagged data is stored.
- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class
- Using this parameter allows you to write custom code that gets run when the flag button is clicked
- By default, this is set to an instance of `gr.JSONLogger`
|
The **Flag** button in `gradio.Interface`
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
Within the directory provided by the `flagging_dir` argument, a JSON file will log the flagged data.
Here's an example: The code below creates the calculator interface embedded below it:
```python
import gradio as gr
def calculator(num1, operation, num2):
if operation == "add":
return num1 + num2
elif operation == "subtract":
return num1 - num2
elif operation == "multiply":
return num1 * num2
elif operation == "divide":
return num1 / num2
iface = gr.Interface(
calculator,
["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
"number",
flagging_mode="manual"
)
iface.launch()
```
<gradio-app space="gradio/calculator-flagging-basic"></gradio-app>
When you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.
```directory
+-- flagged/
| +-- logs.csv
```
_flagged/logs.csv_
```csv
num1,operation,num2,Output,timestamp
5,add,7,12,2022-01-31 11:40:51.093412
6,subtract,1.5,4.5,2022-01-31 03:25:32.023542
```
If the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well. For example, an `image`-input to `image`-output interface will create the following structure.
```directory
+-- flagged/
| +-- logs.csv
| +-- image/
| | +-- 0.png
| | +-- 1.png
| +-- Output/
| | +-- 0.png
| | +-- 1.png
```
_flagged/logs.csv_
```csv
im,Output,timestamp
im/0.png,Output/0.png,2022-02-04 19:49:58.026963
im/1.png,Output/1.png,2022-02-02 10:40:51.093412
```
If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.
If we go back to the calculator example, the following code will create the interface embedded below it.
|
What happens to flagged data?
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
` argument of Interface. Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.
If we go back to the calculator example, the following code will create the interface embedded below it.
```python
iface = gr.Interface(
calculator,
["number", gr.Radio(["add", "subtract", "multiply", "divide"]), "number"],
"number",
flagging_mode="manual",
flagging_options=["wrong sign", "off by one", "other"]
)
iface.launch()
```
<gradio-app space="gradio/calculator-flagging-options"></gradio-app>
When users click the flag button, the csv file will now include a column indicating the selected option.
_flagged/logs.csv_
```csv
num1,operation,num2,Output,flag,timestamp
5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412
6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512
```
|
What happens to flagged data?
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
What about if you are using `gradio.Blocks`? On one hand, you have even more flexibility
with Blocks -- you can write whatever Python code you want to run when a button is clicked,
and assign that using the built-in events in Blocks.
At the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code.
This requires two steps:
1. You have to run your callback's `.setup()` somewhere in the code prior to the
first time you flag data
2. When the flagging button is clicked, then you trigger the callback's `.flag()` method,
making sure to collect the arguments correctly and disabling the typical preprocessing.
Here is an example with an image sepia filter Blocks demo that lets you flag
data using the default `CSVLogger`:
$code_blocks_flag
$demo_blocks_flag
|
Flagging with Blocks
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `flagging_mode="auto"` (when all of the data submitted through the demo is being flagged).
That's all! Happy building :)
|
Privacy
|
https://gradio.app/guides/using-flagging
|
Other Tutorials - Using Flagging Guide
|
In this Guide, we'll walk you through:
- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces
- How to setup a Gradio demo for EfficientNet-Lite4
- How to contribute your own Gradio demos for the ONNX organization on Hugging Face
Here's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model.
|
Introduction
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.
The [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture.
|
What is the ONNX Model Zoo?
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
Gradio
Gradio lets users demo their machine learning models as a web app, all in Python code. Gradio wraps a Python function into a user interface, and the demos can be launched inside Jupyter notebooks or Colab notebooks, embedded in your own website, or hosted on Hugging Face Spaces for free.
Get started [here](https://gradio.app/getting_started)
Hugging Face Spaces
Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to GitHub repos. There are over 2,000 Spaces currently on Hugging Face. Learn more about Spaces [here](https://huggingface.co/spaces/launch).
Hugging Face Models
Hugging Face Model Hub also supports ONNX models and ONNX models can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)
|
What are Hugging Face Spaces & Gradio?
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try any ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all in the cloud without downloading anything locally. Note that there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime) and [MXNet](https://github.com/apache/incubator-mxnet).
|
How did Hugging Face help the ONNX Model Zoo?
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo model on Hugging Face possible.
ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/).
|
What is the role of ONNX Runtime?
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4)
Here we walk through setting up an example demo for EfficientNet-Lite4 using Gradio.
First, we import our dependencies, then download and load the efficientnet-lite4 model from the ONNX Model Zoo and load the labels from the labels_map.txt file. We then set up our preprocessing functions, load the model for inference, and set up the inference function. Finally, the inference function is wrapped into a gradio interface for a user to interact with. See the full code below.
```python
import numpy as np
import math
import matplotlib.pyplot as plt
import cv2
import json
import gradio as gr
from huggingface_hub import hf_hub_download
from onnx import hub
import onnxruntime as ort
# loads ONNX model from ONNX Model Zoo
model = hub.load("efficientnet-lite4")
# loads the labels text file
labels = json.load(open("labels_map.txt", "r"))

# sets image file dimensions to 224x224 by resizing and cropping image from center
def pre_process_edgetpu(img, dims):
    output_height, output_width, _ = dims
    img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)
    img = center_crop(img, output_height, output_width)
    img = np.asarray(img, dtype='float32')
    # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]
    img -= [127.0, 127.0, 127.0]
    img /= [128.0, 128.0, 128.0]
    return img

# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
```
|
Setting up a Gradio Demo for EfficientNet-Lite4
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
```python
# resizes the image with a proportional scale
def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):
    height, width, _ = img.shape
    new_height = int(100. * out_height / scale)
    new_width = int(100. * out_width / scale)
    if height > width:
        w = new_width
        h = int(new_height * height / width)
    else:
        h = new_height
        w = int(new_width * width / height)
    img = cv2.resize(img, (w, h), interpolation=inter_pol)
    return img

# crops the image around the center based on given height and width
def center_crop(img, out_height, out_width):
    height, width, _ = img.shape
    left = int((width - out_width) / 2)
    right = int((width + out_width) / 2)
    top = int((height - out_height) / 2)
    bottom = int((height + out_height) / 2)
    img = img[top:bottom, left:right]
    return img

# hub.load returns an ONNX ModelProto, so serialize it for ONNX Runtime
sess = ort.InferenceSession(model.SerializeToString())

def inference(img):
    img = cv2.imread(img)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = pre_process_edgetpu(img, (224, 224, 3))
    img_batch = np.expand_dims(img, axis=0)
    results = sess.run(["Softmax:0"], {"images:0": img_batch})[0]
    result = reversed(results[0].argsort()[-5:])
    resultdic = {}
    for r in result:
        resultdic[labels[str(r)]] = float(results[0][r])
    return resultdic
title = "EfficientNet-Lite4"
description = "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite model. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU."
examples = [['catonnx.jpg']]
gr.Interface(inference, gr.Image(type="filepath"), "label", title=title, description=description, examples=examples).launch()
```
|
Setting up a Gradio Demo for EfficientNet-Lite4
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
- Add model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Create an account on Hugging Face [here](https://huggingface.co/join).
- To see the list of models left to add to the ONNX organization, refer to the table in the [Models list](https://github.com/onnx/models)
- Add Gradio Demo under your username, see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up Gradio Demo on Hugging Face.
- Request to join ONNX Organization [here](https://huggingface.co/onnx).
- Once approved, transfer the model from your username to the ONNX organization
- Add a badge for the model in the model table; see examples in the [Models list](https://github.com/onnx/models)
|
How to contribute Gradio demos on HF spaces using ONNX models
|
https://gradio.app/guides/Gradio-and-ONNX-on-Hugging-Face
|
Other Tutorials - Gradio And Onnx On Hugging Face Guide
|
This guide explains how you can run background tasks from your gradio app.
Background tasks are operations that you'd like to perform outside the request-response
lifecycle of your app either once or on a periodic schedule.
Examples of background tasks include periodically synchronizing data to an external database or
sending a report of model predictions via email.
|
Introduction
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
We will be creating a simple "Google-forms-style" application to gather feedback from users of the gradio library.
We will use a local sqlite database to store our data, but we will periodically synchronize the state of the database
with a [HuggingFace Dataset](https://huggingface.co/datasets) so that our user reviews are always backed up.
The synchronization will happen in a background task running every 60 seconds.
At the end of the demo, you'll have a fully working application like this one:
<gradio-app space="freddyaboulton/gradio-google-forms"> </gradio-app>
|
Overview
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
Our application will store the name of the reviewer, their rating of gradio on a scale of 1 to 5, as well as
any comments they want to share about the library. Let's write some code that creates a database table to
store this data. We'll also write some functions to insert a review into that table and fetch the latest 10 reviews.
We're going to use the `sqlite3` library to connect to our sqlite database but gradio will work with any library.
The code will look like this:
```python
import sqlite3

import pandas as pd

DB_FILE = "./reviews.db"

db = sqlite3.connect(DB_FILE)

# Create table if it doesn't already exist
try:
    db.execute("SELECT * FROM reviews").fetchall()
    db.close()
except sqlite3.OperationalError:
    db.execute(
        '''
        CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
                              created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
                              name TEXT, review INTEGER, comments TEXT)
        ''')
    db.commit()
    db.close()

def get_latest_reviews(db: sqlite3.Connection):
    reviews = db.execute("SELECT * FROM reviews ORDER BY id DESC LIMIT 10").fetchall()
    total_reviews = db.execute("SELECT COUNT(id) FROM reviews").fetchone()[0]
    reviews = pd.DataFrame(reviews, columns=["id", "date_created", "name", "review", "comments"])
    return reviews, total_reviews

def add_review(name: str, review: int, comments: str):
    db = sqlite3.connect(DB_FILE)
    cursor = db.cursor()
    cursor.execute("INSERT INTO reviews(name, review, comments) VALUES(?,?,?)", [name, review, comments])
    db.commit()
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```
Let's also write a function to load the latest reviews when the gradio application loads:
```python
def load_data():
    db = sqlite3.connect(DB_FILE)
    reviews, total_reviews = get_latest_reviews(db)
    db.close()
    return reviews, total_reviews
```
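Before wiring these helpers into a UI, it can help to sanity-check the SQL in isolation. The sketch below runs the same statements against a throwaway database file; the pandas wrapping is omitted here for brevity, and the `"Ada"` review is just illustrative data:

```python
# Minimal sanity check for the SQL used above, against a throwaway database.
import os
import sqlite3
import tempfile

db_file = os.path.join(tempfile.mkdtemp(), "reviews.db")
db = sqlite3.connect(db_file)
db.execute(
    '''CREATE TABLE reviews (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
           created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL,
           name TEXT, review INTEGER, comments TEXT)'''
)
db.commit()

# Same INSERT that add_review() performs
db.execute(
    "INSERT INTO reviews(name, review, comments) VALUES(?,?,?)",
    ["Ada", 5, "Great library!"],
)
db.commit()

# Same queries that get_latest_reviews() performs
rows = db.execute("SELECT * FROM reviews ORDER BY id DESC LIMIT 10").fetchall()
total = db.execute("SELECT COUNT(id) FROM reviews").fetchone()[0]
db.close()
```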
|
Step 1 - Write your database logic 💾
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
Now that we have our database logic defined, we can use gradio to create a dynamic web page to ask our users for feedback!
```python
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        with gr.Column():
            name = gr.Textbox(label="Name", placeholder="What is your name?")
            review = gr.Radio(label="How satisfied are you with using gradio?", choices=[1, 2, 3, 4, 5])
            comments = gr.Textbox(label="Comments", lines=10, placeholder="Do you have any feedback on gradio?")
            submit = gr.Button(value="Submit Feedback")
        with gr.Column():
            data = gr.Dataframe(label="Most recently created 10 rows")
            count = gr.Number(label="Total number of reviews")
    submit.click(add_review, [name, review, comments], [data, count])
    demo.load(load_data, None, [data, count])
```
|
Step 2 - Create a gradio app ⚡
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
We could call `demo.launch()` after step 2 and have a fully functioning application. However,
our data would be stored locally on our machine. If the sqlite file were accidentally deleted, we'd lose all of our reviews!
Let's back up our data to a dataset on the HuggingFace hub.
Create a dataset [here](https://huggingface.co/datasets) before proceeding.
Now at the **top** of our script, we'll use the [huggingface hub client library](https://huggingface.co/docs/huggingface_hub/index)
to connect to our dataset and pull the latest backup.
```python
import os
import shutil

import huggingface_hub

TOKEN = os.environ.get('HUB_TOKEN')

repo = huggingface_hub.Repository(
    local_dir="data",
    repo_type="dataset",
    clone_from="<name-of-your-dataset>",
    use_auth_token=TOKEN
)
repo.git_pull()

shutil.copyfile("./data/reviews.db", DB_FILE)
```
Note that you'll have to get an access token from the "Settings" tab of your Hugging Face account for the above code to work.
In the script, the token is securely accessed via an environment variable.

Now we will create a background task to sync our local database to the dataset hub every 60 seconds.
We will use [APScheduler](https://apscheduler.readthedocs.io/en/3.x/) (Advanced Python Scheduler) to handle the scheduling.
However, this is not the only task-scheduling library available. Feel free to use whatever you are comfortable with.
The function to back up our data will look like this:
```python
import datetime
import shutil
import sqlite3

import pandas as pd
from apscheduler.schedulers.background import BackgroundScheduler

def backup_db():
    shutil.copyfile(DB_FILE, "./data/reviews.db")
    db = sqlite3.connect(DB_FILE)
    reviews = db.execute("SELECT * FROM reviews").fetchall()
    pd.DataFrame(reviews).to_csv("./data/reviews.csv", index=False)
    print("updating db")
    repo.push_to_hub(blocking=False, commit_message=f"Updating data at {datetime.datetime.now()}")

scheduler = BackgroundScheduler()
scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```
|
Step 3 - Synchronize with HuggingFace Datasets 🤗
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
    print("updating db")
    repo.push_to_hub(blocking=False, commit_message=f"Updating data at {datetime.datetime.now()}")

scheduler = BackgroundScheduler()
scheduler.add_job(func=backup_db, trigger="interval", seconds=60)
scheduler.start()
```
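If you'd rather not pull in a dependency, the same interval behavior can be approximated with the standard library. This is an illustrative sketch of the pattern only; the `IntervalTask` class is hypothetical and not part of the guide's code or of APScheduler:

```python
import threading
import time

class IntervalTask:
    """Run `func` every `seconds` seconds on a background thread,
    roughly like an interval-triggered scheduler job."""

    def __init__(self, func, seconds):
        self.func = func
        self.seconds = seconds
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait doubles as an interruptible sleep: it returns True
        # (ending the loop) as soon as stop() sets the event.
        while not self._stop.wait(self.seconds):
            self.func()

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

# Demonstration with a short interval instead of 60 seconds
calls = []
task = IntervalTask(lambda: calls.append(1), seconds=0.01)
task.start()
time.sleep(0.1)
task.stop()
```

For production use, a maintained scheduler library still saves you from edge cases (missed runs, overlapping jobs) that this sketch ignores.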
|
Step 3 - Synchronize with HuggingFace Datasets 🤗
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
You can use the HuggingFace [Spaces](https://huggingface.co/spaces) platform to deploy this application for free ✨
If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
You will have to add the `HUB_TOKEN` environment variable as a secret in your Space's settings.
|
Step 4 (Bonus) - Deployment to HuggingFace Spaces
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
Congratulations! You know how to run background tasks from your gradio app on a schedule ⏲️.
Check out the application running on Spaces [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms).
The complete code is [here](https://huggingface.co/spaces/freddyaboulton/gradio-google-forms/blob/main/app.py).
|
Conclusion
|
https://gradio.app/guides/running-background-tasks
|
Other Tutorials - Running Background Tasks Guide
|
**[OpenAPI](https://www.openapis.org/)** is a widely adopted standard for describing RESTful APIs in a machine-readable format, typically as a JSON file.
You can create a Gradio UI from an OpenAPI Spec **in 1 line of Python**, instantly generating an interactive web interface for any API, making it accessible for demos, testing, or sharing with non-developers, without writing custom frontend code.
|
Introduction
|
https://gradio.app/guides/from-openapi-spec
|
Other Tutorials - From Openapi Spec Guide
|
Gradio now provides a convenient function, `gr.load_openapi`, that can automatically generate a Gradio app from an OpenAPI v3 specification. This function parses the spec, creates UI components for each endpoint and parameter, and lets you interact with the API directly from your browser.
Here's a minimal example:
```python
import gradio as gr
demo = gr.load_openapi(
openapi_spec="https://petstore3.swagger.io/api/v3/openapi.json",
base_url="https://petstore3.swagger.io/api/v3",
paths=["/pet.*"],
methods=["get", "post"],
)
demo.launch()
```
**Parameters:**
- **openapi_spec**: URL, file path, or Python dictionary containing the OpenAPI v3 spec (JSON format only).
- **base_url**: The base URL for the API endpoints (e.g., `https://api.example.com/v1`).
- **paths** (optional): List of endpoint path patterns (supports regex) to include. If not set, all paths are included.
- **methods** (optional): List of HTTP methods (e.g., `["get", "post"]`) to include. If not set, all methods are included.
The generated app will display a sidebar with available endpoints and create interactive forms for each operation, letting you make API calls and view responses in real time.
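To get a feel for what such a function has to do, the sketch below walks the `paths` object of a tiny hand-written spec fragment and applies `paths`/`methods`-style filters. It illustrates the idea only and is not Gradio's actual implementation; the `list_operations` helper and the sample spec are hypothetical:

```python
import re

# A tiny hand-written OpenAPI v3 fragment, for illustration only
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/pet/{petId}": {
            "get": {"summary": "Find pet by ID"},
            "delete": {"summary": "Delete a pet"},
        },
        "/store/order": {"post": {"summary": "Place an order"}},
    },
}

def list_operations(spec, paths=None, methods=None):
    """Collect (method, path, summary) triples, filtered roughly the way
    gr.load_openapi's `paths` and `methods` arguments filter endpoints."""
    ops = []
    for path, item in spec.get("paths", {}).items():
        if paths and not any(re.fullmatch(pattern, path) for pattern in paths):
            continue
        for method, op in item.items():
            if methods and method not in methods:
                continue
            ops.append((method.upper(), path, op.get("summary", "")))
    return ops

# Only GET operations under /pet... survive the filters
pet_gets = list_operations(spec, paths=["/pet.*"], methods=["get"])
```

From a list like this, one UI form per operation can then be generated, with one input component per parameter.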
|
How it works
|
https://gradio.app/guides/from-openapi-spec
|
Other Tutorials - From Openapi Spec Guide
|
Once your Gradio app is running, you can share the URL with others so they can try out the API through a friendly web interface—no code required. For even more power, you can launch the app as an MCP (Model Context Protocol) server using [Gradio's MCP integration](https://www.gradio.app/guides/building-mcp-server-with-gradio), enabling programmatic access and orchestration of your API via the MCP ecosystem. This makes it easy to build, share, and automate API workflows with minimal effort.
|
Next steps
|
https://gradio.app/guides/from-openapi-spec
|
Other Tutorials - From Openapi Spec Guide
|
Let's go through a simple example to understand how to containerize a Gradio app using Docker.
Step 1: Create Your Gradio App
First, we need a simple Gradio app. Let's create a Python file named `app.py` with the following content:
```python
import gradio as gr
def greet(name):
return f"Hello {name}!"
iface = gr.Interface(fn=greet, inputs="text", outputs="text").launch()
```
This app creates a simple interface that greets the user by name.
Step 2: Create a Dockerfile
Next, we'll create a Dockerfile to specify how our app should be built and run in a Docker container. Create a file named `Dockerfile` in the same directory as your app with the following content:
```dockerfile
FROM python:3.10-slim
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir gradio
EXPOSE 7860
ENV GRADIO_SERVER_NAME="0.0.0.0"
CMD ["python", "app.py"]
```
This Dockerfile performs the following steps:
- Starts from a Python 3.10 slim image.
- Sets the working directory and copies the app into the container.
- Installs Gradio (you should install all other requirements as well).
- Exposes port 7860 (Gradio's default port).
- Sets the `GRADIO_SERVER_NAME` environment variable to ensure Gradio listens on all network interfaces.
- Specifies the command to run the app.
Step 3: Build and Run Your Docker Container
With the Dockerfile in place, you can build and run your container:
```bash
docker build -t gradio-app .
docker run -p 7860:7860 gradio-app
```
Your Gradio app should now be accessible at `http://localhost:7860`.
|
How to Dockerize a Gradio App
|
https://gradio.app/guides/deploying-gradio-with-docker
|
Other Tutorials - Deploying Gradio With Docker Guide
|
When running Gradio applications in Docker, there are a few important things to keep in mind:
Running the Gradio app on `"0.0.0.0"` and exposing port 7860
In the Docker environment, setting `GRADIO_SERVER_NAME="0.0.0.0"` as an environment variable (or directly in your Gradio app's `launch()` function) is crucial for allowing connections from outside the container. And the `EXPOSE 7860` directive in the Dockerfile tells Docker to expose Gradio's default port on the container to enable external access to the Gradio app.
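Gradio reads `GRADIO_SERVER_NAME` itself, so no extra code is needed; this sketch only illustrates the precedence you get (the `resolve_server_name` helper is hypothetical, for illustration):

```python
import os

def resolve_server_name(default="127.0.0.1"):
    # Mirrors Gradio's fallback: bind to GRADIO_SERVER_NAME when it is set
    # (the Dockerfile's ENV line sets it to "0.0.0.0"), otherwise localhost.
    return os.environ.get("GRADIO_SERVER_NAME", default)

# Outside Docker, with the variable unset, this resolves to "127.0.0.1";
# inside the container built above it resolves to "0.0.0.0".
```

Equivalently, you can pass `server_name="0.0.0.0"` directly to `launch()` instead of relying on the environment variable.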
Enable Stickiness for Multiple Replicas
When deploying Gradio apps with multiple replicas, such as on AWS ECS, it's important to enable stickiness with `sessionAffinity: ClientIP`. This ensures that all requests from the same user are routed to the same instance. This is important because Gradio's communication protocol requires multiple separate connections from the frontend to the backend in order for events to be processed correctly. (If you use Terraform, you'll want to add a [stickiness block](https://registry.terraform.io/providers/hashicorp/aws/3.14.1/docs/resources/lb_target_group#stickiness) into your target group definition.)
Deploying Behind a Proxy
If you're deploying your Gradio app behind a proxy, like Nginx, it's essential to configure the proxy correctly. Gradio provides a [Guide that walks through the necessary steps](https://www.gradio.app/guides/running-gradio-on-your-web-server-with-nginx). This setup ensures your app is accessible and performs well in production environments.
|
Important Considerations
|
https://gradio.app/guides/deploying-gradio-with-docker
|
Other Tutorials - Deploying Gradio With Docker Guide
|
When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.
To make it faster and more convenient to write your code, we've made it easier to "reload" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, or so on) or generally running your Python code from the terminal. We've also developed an analogous "magic command" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).
This short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster.
|
Why Hot Reloading?
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:
```python
import gradio as gr
with gr.Blocks() as demo:
gr.Markdown("Greetings from Gradio!")
inp = gr.Textbox(placeholder="What is your name?")
out = gr.Textbox()
inp.change(fn=lambda x: f"Welcome, {x}!",
inputs=inp,
outputs=out)
if __name__ == "__main__":
demo.launch()
```
The problem is that anytime that you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.
Instead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:
In the terminal, run `gradio run.py`. That's it!
Now, after you run this command, you'll see something like this:
```bash
Watching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'
Running on local URL: http://127.0.0.1:7860
```
The important part here is the line that says `Watching...`. What's happening is that Gradio will observe the directory where the `run.py` file lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically 🥳
Tip: the `gradio` command does not detect the parameters passed to the `launch()` methods because the `launch()` method is never called in reload mode. For example, setting `auth`, or `show_error` in `launch()` will not be reflected in the app.
There is one important thing to keep in mind when using the reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass in the name of your demo as the 2nd parameter in your code. So if your `run.py` file looked like this:
```python
import gradio as gr

with gr.Blocks() as my_demo:
    gr.Markdown("Greetings from Gradio!")
    inp = gr.Textbox(placeholder="What is your name?")
    out = gr.Textbox()
    inp.change(fn=lambda x: f"Welcome, {x}!",
               inputs=inp,
               outputs=out)

if __name__ == "__main__":
    my_demo.launch()
```
|
Python IDE Reload 🔥
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
You will need to pass in the name of your demo as the 2nd parameter in your code. So if your `run.py` file looked like this:
```python
import gradio as gr

with gr.Blocks() as my_demo:
    gr.Markdown("Greetings from Gradio!")
    inp = gr.Textbox(placeholder="What is your name?")
    out = gr.Textbox()
    inp.change(fn=lambda x: f"Welcome, {x}!",
               inputs=inp,
               outputs=out)

if __name__ == "__main__":
    my_demo.launch()
```
Then you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.
By default, Gradio uses UTF-8 encoding for scripts. **For reload mode**, if you are using an encoding other than UTF-8 (such as cp1252), make sure you do the following:
1. Add an encoding declaration to your Python script, for example: `# -*- coding: cp1252 -*-`
2. Confirm that your code editor has recognized that encoding format.
3. Run like this: `gradio run.py --encoding cp1252`
🔥 If your application accepts command line arguments, you can pass them in as well. Here's an example:
```python
import argparse

import gradio as gr

parser = argparse.ArgumentParser()
parser.add_argument("--name", type=str, default="User")
args, unknown = parser.parse_known_args()

with gr.Blocks() as demo:
    gr.Markdown(f"Greetings {args.name}!")
    inp = gr.Textbox()
    out = gr.Textbox()
    inp.change(fn=lambda x: x, inputs=inp, outputs=out)

if __name__ == "__main__":
    demo.launch()
```
Which you could run like this: `gradio run.py --name Gretel`
As a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code. Meaning that this can be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) ✅
|
Python IDE Reload 🔥
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
By default, reload mode will re-run your entire script for every change you make.
But there are some cases where this is not desirable.
For example, loading a machine learning model should probably only happen once to save time. There are also some Python libraries that use C or Rust extensions that throw errors when they are reloaded, like `numpy` and `tiktoken`.
In these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` codeblock. Here's an example of how you can use it to only load a transformers model once during the development process.
Tip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. Simply run the file with `python` instead of `gradio`.
```python
import gradio as gr

if gr.NO_RELOAD:
    from transformers import pipeline
    pipe = pipeline("text-classification", model="cardiffnlp/twitter-roberta-base-sentiment-latest")

demo = gr.Interface(lambda s: {d["label"]: d["score"] for d in pipe(s)}, gr.Textbox(), gr.Label())

if __name__ == "__main__":
    demo.launch()
```
|
Controlling the Reload 🎛️
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
You can also enable Gradio's **Vibe Mode**, which provides an in-browser chat that can be used to write or edit your Gradio app using natural language. To enable this, simply use the `--vibe` flag with Gradio, e.g. `gradio --vibe app.py`.
Vibe Mode lets you describe commands using natural language and have an LLM write or edit the code in your Gradio app. The LLM is powered by Hugging Face's [Inference Providers](https://huggingface.co/docs/inference-providers/en/index), so you must be logged into Hugging Face locally to use this.
Note: When Vibe Mode is enabled, anyone who can access the Gradio endpoint can modify files and run arbitrary code on the host machine. Use only for local development.
|
Vibe Mode
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
What about if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We got something for you too!
We've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:
`%load_ext gradio`
Then, in the cell that you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:
```py
%%blocks
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("Greetings from Gradio!")
    inp = gr.Textbox()
    out = gr.Textbox()
    inp.change(fn=lambda x: x, inputs=inp, outputs=out)
```
Notice that:
- You do not need to launch your demo — Gradio does that for you automatically!
- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.
Here's what it looks like in a jupyter notebook:

🪄 This works in colab notebooks too! [Here's a colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!
Tip: You may have to use `%%blocks --share` in Colab to get the demo to appear in the cell.
The Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.
---
|
Jupyter Notebook Magic 🔮
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
Now that you know how to develop quickly using Gradio, start building your own!
If you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) 🤗
|
Next Steps
|
https://gradio.app/guides/developing-faster-with-reload-mode
|
Other Tutorials - Developing Faster With Reload Mode Guide
|
Gradio features [blocks](https://www.gradio.app/docs/blocks) to easily lay out applications. To use this feature, you need to stack or nest layout components and create a hierarchy with them. This isn't difficult to implement and maintain for small projects, but as a project grows more complex, this component hierarchy becomes difficult to maintain and reuse.
In this guide, we are going to explore how we can wrap the layout classes to create more maintainable and easy-to-read applications without sacrificing flexibility.
|
Introduction
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
We are going to follow the implementation from this Hugging Face Space example:
<gradio-app
space="gradio/wrapping-layouts">
</gradio-app>
|
Example
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
The wrapping utility has two important classes. The first one is the ```LayoutBase``` class and the other one is the ```Application``` class.
We are going to look at the ```render``` and ```attach_event``` functions of them for brevity. You can look at the full implementation from [the example code](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).
So let's start with the ```LayoutBase``` class.
LayoutBase Class
1. Render Function
Let's look at the ```render``` function in the ```LayoutBase``` class:
```python
# other LayoutBase implementations

def render(self) -> None:
    with self.main_layout:
        for renderable in self.renderables:
            renderable.render()
    self.main_layout.render()
```
This is a little confusing at first but if you consider the default implementation you can understand it easily.
Let's look at an example:
In the default implementation, this is what we're doing:
```python
with Row():
left_textbox = Textbox(value="left_textbox")
right_textbox = Textbox(value="right_textbox")
```
Now, pay attention to the Textbox variables. These variables' ```render``` parameter is true by default. So as we use the ```with``` syntax and create these variables, they are calling the ```render``` function under the ```with``` syntax.
We know the render function is called in the constructor with the implementation from the ```gradio.blocks.Block``` class:
```python
class Block:
    # constructor parameters are omitted for brevity
    def __init__(self, ...):
        # other assignments omitted
        if render:
            self.render()
```
So our implementation looks like this:
```python
# self.main_layout is a Row()
with self.main_layout:
    left_textbox.render()
    right_textbox.render()
```
What this means is that by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.
So now let's consider two nested ```with```s to see how the outer one works.
|
Implementation
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
What this means is that by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.
So now let's consider two nested ```with```s to see how the outer one works. For this, let's expand our example with the ```Tab``` component:
```python
with Tab():
with Row():
first_textbox = Textbox(value="first_textbox")
second_textbox = Textbox(value="second_textbox")
```
Pay attention to the Row and Tab components this time. We have created the Textbox variables above and added them to Row with the ```with``` syntax. Now we need to add the Row component to the Tab component. You can see that the Row component is created with default parameters, so its render parameter is true; that's why the render function is going to be executed under the Tab component's ```with``` syntax.
To mimic this implementation, we need to call the ```render``` function of the ```main_layout``` variable after the ```with``` syntax of the ```main_layout``` variable.
So the implementation looks like this:
```python
with tab_main_layout:
    with row_main_layout:
        first_textbox.render()
        second_textbox.render()
    row_main_layout.render()
tab_main_layout.render()
```
The default implementation and our implementation are the same, but we are using the render function ourselves. So it requires a little work.
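The ordering above is the whole trick, and it can be seen in a gradio-free toy version that replaces Gradio blocks with plain context managers. The `ToyBlock`/`ToyLayout` classes below are hypothetical and exist only to make the render order visible:

```python
render_log = []

class ToyBlock:
    def __init__(self, name):
        self.name = name

    def render(self):
        render_log.append(self.name)

class ToyLayout(ToyBlock):
    def __init__(self, name):
        super().__init__(name)
        self.children = []

    # inert context manager standing in for Gradio's `with` nesting
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def add(self, child):
        self.children.append(child)
        return child

    def render_all(self):
        # mirrors LayoutBase.render(): children first, then the layout itself
        with self:
            for child in self.children:
                child.render()
        self.render()

tab = ToyLayout("tab")
row = ToyLayout("row")
row.add(ToyBlock("first_textbox"))
row.add(ToyBlock("second_textbox"))

# the nested case from the guide: render the Row's contents inside the
# Tab's context, then render the Tab itself
with tab:
    row.render_all()
tab.render()
```

Inspecting `render_log` afterwards shows the same order the guide derives by hand: the textboxes, then the Row, then the Tab.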
Now, let's take a look at the ```attach_event``` function.
2. Attach Event Function
The function is left unimplemented because the behavior is specific to each class, so each class has to implement its own `attach_event` function.
```python
# other LayoutBase implementations

def attach_event(self, block_dict: Dict[str, Block]) -> None:
    raise NotImplementedError
```
Check out the ```block_dict``` variable in the ```Application``` class's ```attach_event``` function.
Application Class
1. Render Function
```python
# other Application implementations

def _render(self):
    with self.app:
        for child in self.children:
            child.render()
    self.app.render()
```
|
Implementation
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
Check out the ```block_dict``` variable in the ```Application``` class's ```attach_event``` function.
Application Class
1. Render Function
```python
# other Application implementations

def _render(self):
    with self.app:
        for child in self.children:
            child.render()
    self.app.render()
```
From the explanation of the ```LayoutBase``` class's ```render``` function, we can understand the ```child.render``` part.
So let's look at the bottom part: why are we calling the ```app``` variable's ```render``` function? It's important to call this function because, if we look at the implementation in the ```gradio.blocks.Blocks``` class, we can see that it adds the components and event functions into the root component. To put it another way, it is creating and structuring the gradio application.
2. Attach Event Function
Let's see how we can attach events to components:
```python
# other Application implementations

def _attach_event(self):
    block_dict: Dict[str, Block] = {}
    for child in self.children:
        block_dict.update(child.global_children_dict)
    with self.app:
        for child in self.children:
            try:
                child.attach_event(block_dict=block_dict)
            except NotImplementedError:
                print(f"{child.name}'s attach_event is not implemented")
```
You can see from the example code why the ```global_children_dict``` attribute is used in the ```LayoutBase``` class. With it, all the components in the application are gathered into one dictionary, so each component can access all the other components by name.
The ```with``` syntax is used here again to attach events to components. If we look at the ```__exit__``` function in the ```gradio.blocks.Blocks``` class, we can see that it is calling the ```attach_load_events``` function, which is used for setting event triggers to components. So we have to use the ```with``` syntax to trigger the ```__exit__``` function.
|
Implementation
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
In the ```gradio.blocks.Blocks``` class, we can see that the ```__exit__``` function calls ```attach_load_events```, which is used for setting event triggers to components. So we have to use the ```with``` syntax to trigger the ```__exit__``` function.
Of course, we can call ```attach_load_events``` without using the ```with``` syntax, but the function needs a ```Context.root_block```, and it is set in the ```__enter__``` function. So we used the ```with``` syntax here rather than calling the function ourselves.
|
Implementation
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|
In this guide, we saw
- How we can wrap the layouts
- How components are rendered
- How we can structure our application with wrapped layout classes
Because the classes used in this guide are used for demonstration purposes, they may still not be totally optimized or modular. But that would make the guide much longer!
I hope this guide helps you gain another view of the layout classes and gives you an idea about how you can use them for your needs. See the full implementation of our example [here](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).
|
Conclusion
|
https://gradio.app/guides/wrapping-layouts
|
Other Tutorials - Wrapping Layouts Guide
|