Whisper-Tiny Dutch - Full Synthetic Data (Unfiltered)
This model is a fine-tuned version of openai/whisper-tiny for Dutch automatic speech recognition (ASR). It was trained on Common Voice 17.0 Dutch combined with all synthetic speech data without quality filtering, representing the maximum data augmentation approach.
Introduction
Purpose
This model uses all available synthetic data without WAVe quality filtering to evaluate the impact of maximum data augmentation. While it achieves the best in-domain Test WER (24.93%), it requires significantly more training steps than filtered approaches, demonstrating the quality-vs-quantity tradeoff in synthetic data augmentation.
How the Data Was Created
The training data combines real speech from Common Voice 17.0 with the complete synthetic dataset:
- Transcript Generation: We used GPT-4o-mini to generate Dutch transcripts that match the word count distribution observed in Common Voice, ensuring realistic utterance lengths and diverse linguistic content.
- Speech Synthesis: Each transcript was converted to audio using OpenAI's TTS-1 model with 9 different voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer), producing 34,898 synthetic samples.
- No Quality Filtering: Unlike other models in this series, no WAVe filtering was applied. All 34,898 synthetic samples were used, including those with potential synthesis defects.
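A minimal sketch of this two-stage pipeline is shown below. Only the model names and voice variants come from the description above; the prompt wording, output file layout, and voice cycling are illustrative assumptions.

from openai import OpenAI

client = OpenAI()
voices = ["alloy", "ash", "coral", "echo", "fable", "nova", "onyx", "sage", "shimmer"]

# 1) Generate a Dutch transcript with GPT-4o-mini (prompt text is illustrative)
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Write one natural Dutch sentence of roughly 10 words."}],
)
transcript = completion.choices[0].message.content.strip()

# 2) Synthesize the transcript with TTS-1, cycling through the voice variants
for i, voice in enumerate(voices):
    speech = client.audio.speech.create(model="tts-1", voice=voice, input=transcript)
    with open(f"synthetic_{i}_{voice}.mp3", "wb") as f:  # hypothetical file layout
        f.write(speech.read())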
How the Model Was Created
The model was fine-tuned from openai/whisper-tiny using the Hugging Face Transformers library:
- Mixed Training: Combined 34,952 real speech samples from Common Voice 17.0 Dutch with all 34,898 synthetic samples (69,850 total).
- Optimization: Trained for 5 epochs with a learning rate of 5e-5, global batch size of 256, and BF16 precision on an NVIDIA H200 GPU.
- Checkpoint Selection: The best checkpoint was selected based on validation loss, occurring at step 800 with a validation loss of 0.3271.
This approach achieves the best in-domain performance (24.93% Test WER) but requires roughly 53% more training steps than the high-quality filtered approach (1,365 vs 890).
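A minimal sketch of how the two sources could be combined with the datasets library is shown below. The column selection, the "sentence" transcript column for the synthetic set, and the shuffle seed are assumptions; feature extraction and the data collator are omitted.

from datasets import load_dataset, concatenate_datasets, Audio

# Real Dutch speech from Common Voice 17.0 (gated dataset; a HF token may be required)
real = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="train")
real = real.select_columns(["audio", "sentence"])

# Unfiltered synthetic speech; "sentence" as the transcript column is an assumption
synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")
synthetic = synthetic.select_columns(["audio", "sentence"])

# Whisper-Tiny expects 16 kHz input; resample both sources
real = real.cast_column("audio", Audio(sampling_rate=16_000))
synthetic = synthetic.cast_column("audio", Audio(sampling_rate=16_000))

# Combine into one training set (34,952 real + 34,898 synthetic = 69,850 samples)
train = concatenate_datasets([real, synthetic]).shuffle(seed=42)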
Model Details
| Property | Value |
|---|---|
| Base Model | openai/whisper-tiny |
| Language | Dutch (nl) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 39M |
| Training Data | Common Voice 17.0 + All Synthetic (Unfiltered) |
| Total Training Samples | 69,850 |
| Sampling Rate | 16kHz |
Evaluation Results
This Model (whisper-tiny-cv-fully-synthetic-nl)
| Metric | Value |
|---|---|
| Validation Loss | 0.3207 |
| Validation WER | 19.61% |
| Test WER (Common Voice) | 24.93% |
| Test WER (MLS) | 43.12% |
| Best Checkpoint | Step 800 |
| Max Training Steps | 1,365 |
Comparison with Other Training Configurations
| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---|---|---|---|---|---|
| Common Voice Only | 680 | 0.3382 | 19.77% | 26.00% | 44.85% |
| High-Quality Filtered + CV | 890 | 0.3323 | 19.59% | 25.51% | 43.76% |
| Mid-High Quality Filtered + CV | 1,270 | 0.3292 | 19.36% | 25.05% | 43.11% |
| All Synthetic + CV (Unfiltered) | 1,365 | 0.3207 | 19.61% | 24.93% | 43.12% |
Key Performance Highlights
- Best in-domain Test WER (24.93%) among all Whisper-Tiny Dutch configurations
- Lowest validation loss (0.3207) indicating strong model fit
- 4.1% relative improvement on Common Voice test set vs baseline (24.93% vs 26.00%)
- 3.9% relative improvement on MLS benchmark vs baseline (43.12% vs 44.85%)
- Tradeoff: Requires 1,365 steps vs 890 for high-quality filtered (53% more compute)
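For reference, WER figures of this kind can be computed with the evaluate library. The sketch below scores a small subset of the Common Voice test split and applies no text normalization, so it will not exactly reproduce the numbers above.

import evaluate
from datasets import load_dataset, Audio
from transformers import pipeline

wer_metric = evaluate.load("wer")
asr = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-cv-fully-synthetic-nl",
    device="cuda"
)

# Common Voice 17.0 Dutch test split, resampled to the model's 16 kHz input rate
test = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in test.select(range(100)):  # small subset for illustration
    predictions.append(asr(sample["audio"]["array"])["text"])
    references.append(sample["sentence"])

print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")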
Training Data
Dataset Composition
| Source | Samples | Description |
|---|---|---|
| Common Voice 17.0 Dutch | 34,952 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript NL (all) | 34,898 | Complete TTS audio without filtering |
| Total | 69,850 | |
Synthetic Data Generation Pipeline
The synthetic dataset (yuriyvnv/synthetic_transcript_nl) was generated using:
- Transcript Generation: GPT-4o-mini, matching Common Voice word count distribution
- Speech Synthesis: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
- No Filtering: All samples used regardless of quality
Quality Distribution (For Reference)
While this model uses all data, WAVe quality assessment shows the distribution:
| Quality Level | Samples | Percentage | Used in This Model |
|---|---|---|---|
| High (q ≥ 0.8) | 10,555 | 30.2% | ✓ |
| Medium (0.5 ≤ q < 0.8) | 19,627 | 56.2% | ✓ |
| Low (q < 0.5) | 4,716 | 13.5% | ✓ |
| Total | 34,898 | 100% | All used |
Note: 13.5% of the synthetic data (4,716 samples) would be filtered out by WAVe, but was included in this model's training.
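For comparison, applying a WAVe-style quality threshold to the synthetic set would look roughly like the sketch below. The `quality` column name is an assumption; this model skips the filtering step entirely.

from datasets import load_dataset

synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")

# Hypothetical per-sample quality score column; thresholds follow the table above
high_quality = synthetic.filter(lambda ex: ex["quality"] >= 0.8)  # ~10,555 samples
mid_high     = synthetic.filter(lambda ex: ex["quality"] >= 0.5)  # ~30,182 samples
unfiltered   = synthetic                                          # 34,898 samples (this model)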
Training Procedure
Hyperparameters
| Parameter | Value |
|---|---|
| Learning Rate | 5e-5 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
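A hedged sketch of this configuration with Seq2SeqTrainingArguments is shown below. The per-device batch size / gradient accumulation split of the 256 global batch, the output directory, and the save settings are assumptions; preprocessing and the trainer itself are omitted.

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-cv-fully-synthetic-nl",
    learning_rate=5e-5,
    warmup_steps=200,
    num_train_epochs=5,
    per_device_train_batch_size=64,   # assumption: 64 x 4 accumulation = 256 global batch
    gradient_accumulation_steps=4,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)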
Training Infrastructure
- GPU: NVIDIA H200 (141 GB VRAM)
- Operating System: Ubuntu 22.04
- Framework: Hugging Face Transformers
Training Curve
Step 100: val_loss = 0.5266
Step 250: val_loss = 0.4054
Step 500: val_loss = 0.3454
Step 800: val_loss = 0.3271 ← Best checkpoint
Step 1000: val_loss = 0.3311
Step 1200: val_loss = 0.3331
Step 1350: val_loss = 0.3331
Usage
Transcription Pipeline
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-cv-fully-synthetic-nl",
    device="cuda"
)

# Transcribe a Dutch audio file (decoding and resampling are handled by the pipeline)
result = transcriber("path/to/dutch_audio.wav")
print(result["text"])
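For recordings longer than Whisper's 30-second window, the pipeline can transcribe in chunks. `chunk_length_s` and `return_timestamps` are standard pipeline arguments; the values below are illustrative.

# Chunked long-form transcription
result = transcriber(
    "path/to/long_dutch_audio.wav",
    chunk_length_s=30,
    return_timestamps=True,
)
print(result["text"])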
Direct Model Usage
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

# Load the processor (feature extractor + tokenizer) and the model
processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-tiny-cv-fully-synthetic-nl")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-tiny-cv-fully-synthetic-nl")
model.to("cuda")

# Load audio at the 16 kHz sampling rate the model expects
audio, sr = librosa.load("path/to/dutch_audio.wav", sr=16000)

# Convert the waveform to log-Mel input features, then generate and decode
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
Specifying Language
To force Dutch transcription rather than relying on automatic language detection, set the language and task on the generation config:
model.generation_config.language = "nl"
model.generation_config.task = "transcribe"
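In recent Transformers versions, the same override can also be passed per call to generate():

# Equivalent per-call override
predicted_ids = model.generate(input_features, language="nl", task="transcribe")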
When to Use This Model
This model is ideal when:
- Maximum in-domain accuracy is required: Best Test WER (24.93%) on Common Voice
- Compute budget is not a constraint: Requires most training steps (1,365)
- Quality filtering is not available: Uses raw synthetic data
Consider filtered alternatives for better efficiency:
- whisper-tiny-high-mixed-nl: 35% fewer steps, slight accuracy tradeoff
- whisper-tiny-mixed-nl: 7% fewer steps, best cross-domain generalization
Quality vs Quantity Analysis
This model demonstrates the tradeoff between data quantity and quality:
| Approach | Synthetic Samples | Training Steps | Test WER (CV) | Efficiency |
|---|---|---|---|---|
| High-Quality (q≥0.8) | 10,555 | 890 | 25.51% | Best |
| Mid-High (q≥0.5) | 30,182 | 1,270 | 25.05% | Good |
| Unfiltered (this model) | 34,898 | 1,365 | 24.93% | Lowest |
Key insight: The unfiltered approach improves Test WER by only 0.12 percentage points over mid-high filtering while requiring 7.5% more training steps. For most applications, filtered approaches offer better compute efficiency.
Limitations
- Model capacity: Whisper-Tiny (39M params) has limited representational power
- Training efficiency: Requires most compute among all configurations
- Noisy training signal: Includes low-quality synthetic samples (13.5% with q < 0.5)
- Domain specificity: Optimized for general Dutch; may underperform on technical domains
- Dialect coverage: Performance may vary across Dutch regional variants
Citation
@article{perezhohin2024enhancing,
title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
journal={IEEE Access},
year={2024},
publisher={IEEE}
}
References
- Base Model: openai/whisper-tiny
- Training Data (Real): mozilla-foundation/common_voice_17_0
- Training Data (Synthetic): yuriyvnv/synthetic_transcript_nl
- Whisper Paper: Robust Speech Recognition via Large-Scale Weak Supervision
- IEEE Access Paper: Enhancing ASR with Semantic Audio Filtering
License
Apache 2.0