Commit 1080b26
Parent(s): fca2cc1
Add comprehensive Gradio Space for ASR/STT benchmark results
- Created interactive Gradio app with 5 tabs:
  - Overview with test sample and limitations
  - Interactive visualizations for WER and speed metrics
  - Q&A section with all research findings
  - Hardware and setup details
  - About section with project motivation
- Added all benchmark charts and results
- Included reference transcription text
- Set up Git LFS for audio files (*.wav, *.mp3, *.flac)
- Created interactive Plotly visualizations for WER analysis
- Comprehensive documentation of findings and methodology
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
- .gitattributes +6 -0
- .gitignore +25 -0
- app.py +413 -0
- audio/README.md +7 -0
- requirements.txt +3 -0
- results/accuracy_speed_tradeoff.png +3 -0
- results/engine_comparison.png +3 -0
- results/speed_by_size.png +3 -0
- results/summary_stats.txt +14 -0
- results/variants_comparison.png +3 -0
- results/wer_by_size.png +3 -0
- text/reference.txt +13 -0
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.wav filter=lfs diff=lfs merge=lfs -text
+*.mp3 filter=lfs diff=lfs merge=lfs -text
+*.flac filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
.gitignore
ADDED
@@ -0,0 +1,25 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

# Virtual environments
.venv/
venv/
ENV/
env/

# IDEs
.vscode/
.idea/
*.swp
*.swo

# OS
.DS_Store
Thumbs.db

# Gradio
flagged/
app.py
ADDED
@@ -0,0 +1,413 @@
import gradio as gr
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px
from pathlib import Path

# Read reference text
reference_text_path = Path("text/reference.txt")
if reference_text_path.exists():
    with open(reference_text_path, "r") as f:
        reference_text = f.read()
else:
    reference_text = "Reference text not available"

# Check if audio file exists
audio_path = Path("audio/001.wav")
audio_exists = audio_path.exists()

# Prepare WER data for visualizations
wer_data = {
    "Model": ["tiny", "base", "small", "medium", "large-v3-turbo"],
    "WER (%)": [15.05, 9.95, 11.17, 6.07, 7.04],
    "Speed (s)": [2.73, 5.01, 5.14, 19.42, 33.08],
    "Model Size": ["39M", "74M", "244M", "769M", "809M"]
}
df_wer = pd.DataFrame(wer_data)

# Engine comparison data
engine_data = {
    "Engine": ["faster-whisper", "openai-whisper", "distil-whisper"],
    "WER (%)": [9.95, 9.95, 21.6],
    "Speed (s)": [4.87, 6.51, 38.49]
}
df_engine = pd.DataFrame(engine_data)

# Create interactive WER by model visualization
fig_wer = go.Figure()
fig_wer.add_trace(go.Bar(
    x=df_wer["Model"],
    y=df_wer["WER (%)"],
    text=df_wer["WER (%)"].round(2),
    textposition='auto',
    marker_color=['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4', '#FFEAA7'],
    hovertemplate='<b>%{x}</b><br>WER: %{y:.2f}%<br>Size: %{customdata}<extra></extra>',
    customdata=df_wer["Model Size"]
))
fig_wer.update_layout(
    title="Word Error Rate by Model Size",
    xaxis_title="Model",
    yaxis_title="WER (%)",
    template="plotly_white",
    height=400
)

# Create speed vs accuracy scatter plot
fig_scatter = go.Figure()
fig_scatter.add_trace(go.Scatter(
    x=df_wer["Speed (s)"],
    y=df_wer["WER (%)"],
    mode='markers+text',
    marker=dict(size=15, color=['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4', '#FFEAA7']),
    text=df_wer["Model"],
    textposition="top center",
    hovertemplate='<b>%{text}</b><br>Speed: %{x:.2f}s<br>WER: %{y:.2f}%<extra></extra>'
))
fig_scatter.update_layout(
    title="Speed vs Accuracy Tradeoff",
    xaxis_title="Inference Time (seconds)",
    yaxis_title="WER (%)",
    template="plotly_white",
    height=400
)

# Create engine comparison visualization
fig_engine = go.Figure()
fig_engine.add_trace(go.Bar(
    x=df_engine["Engine"],
    y=df_engine["WER (%)"],
    name="WER (%)",
    marker_color='#4ECDC4',
    text=df_engine["WER (%)"].round(2),
    textposition='auto'
))
fig_engine.update_layout(
    title="WER by Engine (Base Model)",
    xaxis_title="Engine",
    yaxis_title="WER (%)",
    template="plotly_white",
    height=400
)

# Custom CSS
custom_css = """
.gradio-container {
    font-family: 'Inter', sans-serif;
}
.limitation-box {
    background-color: #FFF3CD;
    border-left: 4px solid #FFC107;
    padding: 15px;
    margin: 10px 0;
}
.question-box {
    background-color: #E3F2FD;
    border-left: 4px solid #2196F3;
    padding: 15px;
    margin: 15px 0;
}
"""

# Build the interface
with gr.Blocks(css=custom_css, theme=gr.themes.Soft()) as demo:
    gr.Markdown(
        """
        # Local ASR/STT Benchmark Evaluation
        ### A Single Sample Evaluation on Local Hardware

        Testing different Whisper model sizes to find the optimal balance between accuracy and speed for daily transcription workflow.
        """
    )

    with gr.Tabs():
        # Tab 1: Overview & Test Sample
        with gr.Tab("📊 Overview"):
            gr.Markdown(
                """
                ## About This Evaluation

                This was a "back of the envelope" style experiment to determine which Whisper model size works best
                for daily transcription on local hardware, focusing on the tradeoff between accuracy (WER) and inference speed.
                """
            )

            gr.Markdown("### 🎯 Test Sample")

            if audio_exists:
                gr.Audio(
                    value=str(audio_path),
                    label="Test Audio (001.wav)",
                    type="filepath"
                )
            else:
                gr.Markdown("**Note:** Audio file will be added soon.")

            gr.Markdown("### 📝 Reference Text (Ground Truth)")
            gr.Textbox(
                value=reference_text,
                label="Reference Transcription",
                lines=10,
                max_lines=15,
                interactive=False
            )

            gr.Markdown(
                """
                ### ⚠️ Important Limitations

                - **Quick experiment**: Not a definitive scientific evaluation
                - **Hardware specific**: AMD GPU with ROCm (not ideal for STT), using CPU inference
                - **Single sample**: Results based on one audio clip
                - **Variable conditions**: ASR accuracy depends on mic quality, background noise, speaking style
                - **Personal use case**: Optimized for one user's voice and workflow
                """
            )

        # Tab 2: Results & Visualizations
        with gr.Tab("📈 Results"):
            gr.Markdown("## Key Findings")

            with gr.Row():
                with gr.Column():
                    gr.Markdown(
                        """
                        ### Best Accuracy
                        **medium** model
                        - 6.07% WER
                        - 19.42s inference

                        ### Fastest
                        **tiny** model
                        - 15.05% WER
                        - 2.73s inference

                        ### Recommended for Daily Use
                        **base** model (faster-whisper)
                        - 9.95% WER
                        - ~5s inference
                        - Good balance
                        """
                    )

                with gr.Column():
                    gr.Markdown(
                        """
                        ### Key Takeaways

                        1. **Biggest jump**: tiny → base (15% → 10% WER)
                        2. **Diminishing returns**: After base, accuracy gains are smaller
                        3. **faster-whisper**: Same accuracy as OpenAI, 1.2x faster
                        4. **distil-whisper**: Unexpectedly slower AND less accurate on this sample
                        """
                    )

            gr.Markdown("## Interactive Visualizations")

            with gr.Row():
                gr.Plot(fig_wer, label="WER by Model Size")

            with gr.Row():
                gr.Plot(fig_scatter, label="Speed vs Accuracy")

            with gr.Row():
                gr.Plot(fig_engine, label="Engine Comparison")

            gr.Markdown("## Original Charts from Benchmark")

            with gr.Row():
                with gr.Column():
                    gr.Image("results/wer_by_size.png", label="WER by Size")
                with gr.Column():
                    gr.Image("results/speed_by_size.png", label="Speed by Size")

            with gr.Row():
                with gr.Column():
                    gr.Image("results/accuracy_speed_tradeoff.png", label="Accuracy vs Speed")
                with gr.Column():
                    gr.Image("results/engine_comparison.png", label="Engine Comparison")

            with gr.Row():
                gr.Image("results/variants_comparison.png", label="All Variants Tested")

        # Tab 3: Q&A
        with gr.Tab("❓ Questions & Answers"):
            gr.Markdown(
                """
                # Research Questions & Findings

                ## Q1: How much does model size actually matter for accuracy?

                **Answer:** On my hardware, diminishing returns set in around **medium**.

                The biggest accuracy jump was from tiny (15.05% WER) → base (9.95% WER). After that, improvements are smaller:
                - tiny → base: 5.1% improvement
                - base → medium: 3.88% improvement
                - medium → large-v3-turbo: Actually worse (1% regression)

                The "sweet spot" depends on your use case:
                - **Live transcription**: Even small lags matter → base or small
                - **Batch processing**: Can afford slower → medium or large

                ---

                ## Q2: Is faster-whisper really as good as OpenAI Whisper?

                **Answer:** Yes! On this test, identical accuracy with better speed.

                Testing the base model:
                - **faster-whisper**: 9.95% WER in 5.01s
                - **openai-whisper**: 9.95% WER in 6.17s

                faster-whisper was ~1.2x faster with no accuracy loss. Clear winner for my use case.

                ---

                ## Q3: What's the speed vs. accuracy tradeoff?

                **Answer:** For daily transcription of my own voice, base or small hits the sweet spot.

                - **tiny**: 2.73s but 15% WER is too rough
                - **base**: 5s with 10% WER - acceptable for daily use
                - **small**: Similar to base, slightly slower
                - **medium**: 6% WER but 7x slower than tiny
                - **large-v3-turbo**: 33s for 7% WER - overkill for casual use

                ---

                ## Q4: Which model should I use for my daily STT workflow?

                **My personal answer:** base model with faster-whisper

                **Why it works for me:**
                - ~10% WER is acceptable for dictation (I can quickly fix errors)
                - 5 seconds per clip is fast enough
                - 140MB model size is manageable
                - Good balance for daily workflow

                **When I'd use something else:**
                - **tiny**: Quick tests or very long recordings where speed matters most
                - **medium/large**: Publishing or professional work needing better accuracy

                ---

                ## Bonus Finding: distil-whisper

                I tested distil-whisper expecting it to be faster, but on my sample:
                - **distil-whisper**: 21.6% WER in 38.49s ✗

                Both slower AND less accurate than the standard models. Unexpected, but that's the data.
                """
            )

        # Tab 4: Hardware & Setup
        with gr.Tab("💻 Hardware & Setup"):
            gr.Markdown(
                """
                ## Test Environment

                ### Hardware
                - **GPU**: AMD Radeon RX 7700 XT (ROCm available but using CPU inference)
                - **CPU**: Intel Core i7-12700F (12 cores, 20 threads)
                - **RAM**: 64 GB
                - **OS**: Ubuntu 25.04

                ### Why CPU Inference?
                - AMD GPU with ROCm isn't ideal for STT workloads
                - CPU inference provided more consistent results
                - Your performance will differ based on your hardware

                ### Models Tested

                **Whisper model sizes:**
                - tiny (39M params)
                - base (74M params)
                - small (244M params)
                - medium (769M params)
                - large-v3-turbo (809M params)

                **Engines compared:**
                - OpenAI Whisper (original implementation)
                - faster-whisper (optimized CTranslate2)
                - distil-whisper (distilled variant)

                ### Metrics
                - **WER (Word Error Rate)**: Lower is better - percentage of words transcribed incorrectly
                - **Inference Time**: How long it takes to transcribe the audio sample

                ## Running Your Own Tests

                Want to benchmark on your own voice and hardware?

                1. Clone the repository: [github.com/danielrosehill/Local-ASR-STT-Benchmark](https://github.com/danielrosehill/Local-ASR-STT-Benchmark)
                2. Set up the conda environment (see `setup.md`)
                3. Record your own audio and create reference transcriptions
                4. Run the benchmark scripts
                5. Generate visualizations

                Your results will likely differ based on:
                - Your hardware (GPU/CPU)
                - Your voice characteristics
                - Your microphone quality
                - Background noise conditions
                - Speaking style and pace
                """
            )

        # Tab 5: About
        with gr.Tab("ℹ️ About"):
            gr.Markdown(
                """
                ## About This Project

                ### Motivation

                I was tired of guessing which Whisper model size to use for speech-to-text. There are plenty of
                benchmarks out there, but they're often:
                - Run on different hardware than mine
                - Tested on different voice characteristics
                - Using different microphones and conditions

                So I decided to run my own evaluation on my actual setup with my actual voice.

                ### Why This Matters

                If you're doing hours of transcription per day (like I am), optimizing your STT setup is worth it:
                - Faster models = less waiting
                - More accurate models = less editing
                - Finding the sweet spot = better workflow

                ### Next Steps

                For a more robust evaluation, I'd want to:
                - Test on multiple audio samples
                - Include different speaking styles (casual, technical, professional)
                - Test on different microphones
                - Evaluate punctuation and capitalization accuracy
                - Compare ASR (Automatic Speech Recognition) vs traditional STT
                - Test GPU inference on NVIDIA hardware

                ### Repository

                Full benchmark code and results:
                [github.com/danielrosehill/Local-ASR-STT-Benchmark](https://github.com/danielrosehill/Local-ASR-STT-Benchmark)

                ### License

                MIT License - Feel free to use and adapt for your own benchmarks!

                ---

                *Built with Gradio • Whisper models by OpenAI • Hosted on Hugging Face Spaces*
                """
            )

    gr.Markdown(
        """
        ---
        ### 📧 Questions or feedback?
        Visit the [GitHub repository](https://github.com/danielrosehill/Local-ASR-STT-Benchmark) to open an issue or contribute.
        """
    )

if __name__ == "__main__":
    demo.launch()
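Note that the WER and timing figures above are hard-coded into app.py rather than recomputed at runtime. As a rough guide to how numbers like these are usually obtained, the sketch below scores one model against text/reference.txt. It is an assumption, not the project's benchmark script (which lives in the linked repository and is not part of this commit): the jiwer and openai-whisper packages, the "wer_sketch.py" name, and the simple normalization step are all illustrative choices.

# wer_sketch.py - illustrative only; the repo's actual benchmark scripts may differ.
import string
from pathlib import Path

import jiwer    # pip install jiwer
import whisper  # pip install openai-whisper


def normalize(text: str) -> str:
    # Lowercase and strip punctuation so case/punctuation differences don't count as word errors.
    stripped = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(stripped.lower().split())


reference = Path("text/reference.txt").read_text()
model = whisper.load_model("base")  # swap in tiny/small/medium/large-v3-turbo to fill out the table
hypothesis = model.transcribe("audio/001.wav")["text"]

wer = jiwer.wer(normalize(reference), normalize(hypothesis))
print(f"base model WER: {wer * 100:.2f}%")

Whether such a script reproduces the exact percentages depends on the normalization the original benchmark used, which this commit does not show.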
audio/README.md
ADDED
@@ -0,0 +1,7 @@
# Audio Files

Place the test audio file `001.wav` here.

This is the audio sample that was used for the benchmark evaluation. It will be displayed in the Gradio app for users to listen to and compare with the transcription results.

Note: Audio files are tracked with Git LFS (configured in `.gitattributes`).
requirements.txt
ADDED
@@ -0,0 +1,3 @@
gradio>=5.0.0
pandas>=2.0.0
plotly>=5.0.0
results/accuracy_speed_tradeoff.png
ADDED
Git LFS Details

results/engine_comparison.png
ADDED
Git LFS Details

results/speed_by_size.png
ADDED
Git LFS Details
results/summary_stats.txt
ADDED
@@ -0,0 +1,14 @@

=== Model Size Comparison ===
Best accuracy: medium (WER: 6.07%)
Fastest: tiny (2.73s)
Best balance: tiny

=== Engine Comparison ===
faster-whisper: WER=9.95%, Time=5.01s
openai-whisper: WER=9.95%, Time=6.17s

=== Variant Comparison ===
faster-whisper: WER=9.95%, Time=4.87s
openai-whisper: WER=9.95%, Time=6.51s
distil-whisper: WER=21.60%, Time=38.49s
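For the engine timings listed in summary_stats.txt, a minimal comparison loop might look like the sketch below. This is an assumption rather than the project's actual script: the function names, the timing approach, and the faster-whisper settings (WhisperModel on CPU with int8 compute) are illustrative, since the real benchmark code lives in the linked repository rather than in this Space.

# engine_timing_sketch.py - illustrative comparison of two engines on one clip.
import time


def run_openai_whisper(audio: str, size: str = "base"):
    import whisper
    model = whisper.load_model(size)
    start = time.perf_counter()
    text = model.transcribe(audio)["text"]
    return text, time.perf_counter() - start


def run_faster_whisper(audio: str, size: str = "base"):
    from faster_whisper import WhisperModel
    model = WhisperModel(size, device="cpu", compute_type="int8")
    start = time.perf_counter()
    segments, _info = model.transcribe(audio)
    # Transcription runs lazily as the segment generator is consumed.
    text = " ".join(segment.text for segment in segments)
    return text, time.perf_counter() - start


if __name__ == "__main__":
    for name, runner in [("openai-whisper", run_openai_whisper),
                         ("faster-whisper", run_faster_whisper)]:
        _, seconds = runner("audio/001.wav")
        print(f"{name}: {seconds:.2f}s")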
results/variants_comparison.png
ADDED
Git LFS Details

results/wer_by_size.png
ADDED
Git LFS Details
text/reference.txt
ADDED
@@ -0,0 +1,13 @@
So I've been working on this really interesting project lately, and honestly, it's been a bit of a rollercoaster. You know how sometimes you start something thinking it'll be quick and easy, but then you end up going down this rabbit hole? Yeah, that's basically what happened here.

The thing is, I wanted to test out different speech-to-text systems to see which one actually works best for my setup. I mean, there are so many options out there - Whisper, Vosk, you name it. But I figured the only way to really know is to just test them myself, right?

What's really cool is that I can run this stuff locally on my machine. No need to send my audio to some cloud service. Privacy matters, you know? Plus, with the GPU I've got, the processing should be pretty fast. At least that's the theory. We'll see how it actually performs when I get everything set up and running.

Here's the funny part though - I spent like three hours yesterday just trying to get the drivers working properly. Classic tech project experience, am I right? You'd think installing GPU drivers would be straightforward in 2025, but nope. Had to dig through forum posts from five years ago, try different kernel versions, the whole nine yards. But eventually I got it sorted out, and man, does it feel good when things finally click into place.

Oh, and another thing - I've been using speech-to-text myself for typing, which is kind of meta when you think about it. Testing speech recognition by creating test data through speech recognition. It's like inception or something. Sometimes the transcription messes up words, especially technical terms or names, but overall it's gotten so much better than it used to be.

The plan is to record myself reading different types of content - technical documentation, casual conversation like this, maybe some news articles, stories, that kind of thing. That way I can see if certain models handle specific styles better than others. Maybe one's great at technical jargon but struggles with natural speech patterns, or vice versa. Should be interesting to find out.

I'm also curious about punctuation and formatting. Like, does it know when to put commas? Does it capitalize proper nouns correctly? These little details matter way more than you'd think when you're actually using these systems for real work. Nobody wants to spend hours fixing transcription errors just to save a few minutes of typing.