Whisper-Large-v3 Portuguese - Mid-High Quality Filtered Synthetic Data

This model is a fine-tuned version of openai/whisper-large-v3 for Portuguese automatic speech recognition (ASR). It was trained on Common Voice 17.0 Portuguese combined with WAVe-filtered synthetic speech data using a balanced quality threshold (q ≥ 0.5), including both high-quality and medium-quality samples.

Purpose

This model demonstrates the optimal balance between data quality and quantity for Portuguese ASR. By retaining 87.3% of synthetic samples (high + medium quality), this model achieves:

  • 29.3% WER improvement over the CV-only baseline (8.33% vs 11.78%)
  • 32.9% better cross-domain generalization on MLS (10.27% vs 15.31%) - best among all configurations
  • Best validation loss (0.1040) among all Portuguese Large-v3 variants
  • Balanced training cost with 805 max steps (87% more than the CV-only baseline's 430)
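The relative improvements quoted above follow directly from the absolute WER figures; a quick arithmetic check (not part of the model card's pipeline):

```python
def rel_improvement(baseline_wer, wer):
    """Relative WER reduction versus the baseline, in percent."""
    return round(100 * (baseline_wer - wer) / baseline_wer, 1)

print(rel_improvement(11.78, 8.33))   # 29.3 (Common Voice, vs CV-only baseline)
print(rel_improvement(15.31, 10.27))  # 32.9 (MLS, vs CV-only baseline)
```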

The model is part of a comprehensive study on WAVe (Word-Aligned Verification) filtering for Portuguese ASR, demonstrating that mid-high quality filtering provides the best overall performance, particularly for cross-domain tasks.

Model Details

| Property | Value |
|---|---|
| Base Model | openai/whisper-large-v3 |
| Language | Portuguese (pt) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 1550M |
| Training Data | Common Voice 17.0 + Mid-High Quality Synthetic (q ≥ 0.5) |
| Total Training Samples | 41,047 |
| Sampling Rate | 16 kHz |

Evaluation Results

This Model (whisper-large-v3-mixed-pt)

| Metric | Value |
|---|---|
| Validation Loss | 0.1040 |
| Validation WER | 7.73% |
| Test WER (Common Voice) | 8.33% |
| Test WER (MLS) | 10.27% |
| Best Checkpoint | Step 300 |
| Max Training Steps | 805 |

Comparison with Other Training Configurations (Whisper-Large-v3 Portuguese)

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---|---|---|---|---|---|
| Common Voice Only | 430 | 0.1260 | 11.38% | 11.78% | 15.31% |
| High-Quality (q ≥ 0.8) + CV | 575 | 0.1045 | 7.33% | 7.94% | 12.41% |
| Mid-High (q ≥ 0.5) + CV | 805 | 0.1040 | 7.73% | 8.33% | 10.27% |
| All Synthetic + CV | 860 | 0.1050 | 7.57% | 8.33% | 13.43% |

Key Performance Highlights

  • Best cross-domain performance: Lowest MLS WER (10.27%) among all Portuguese configurations
  • Best validation loss (0.1040) - optimal model convergence
  • Strong in-domain: 8.33% Test WER on Common Voice (29.3% improvement vs baseline)
  • Balanced dataset: 87.3% of synthetic data included (19,181 samples)
  • Training efficiency: 6% fewer steps than unfiltered while maintaining quality control

Training Data

Dataset Composition

| Source | Samples | Description |
|---|---|---|
| Common Voice 17.0 Portuguese | 21,866 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript PT (q ≥ 0.5) | 19,181 | WAVe-filtered TTS audio (high + medium quality) |
| Total | 41,047 | |

Synthetic Data Generation Pipeline

The synthetic dataset (yuriyvnv/synthetic_transcript_pt) was generated using:

  1. Transcript Generation: GPT-4o-mini, matching Common Voice word count distribution
  2. Speech Synthesis: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
  3. Quality Filtering: WAVe model with balanced threshold q ≥ 0.5

WAVe Quality Distribution (Portuguese Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|---|---|---|---|
| High (q ≥ 0.8) | 7,312 | 33.3% | ✓ |
| Medium (0.5 ≤ q < 0.8) | 11,869 | 54.0% | ✓ |
| Low (q < 0.5) | 2,787 | 12.7% | ✗ |

This threshold retains 87.3% of the synthetic dataset (high + medium quality), filtering only the lowest-quality samples while preserving volume for robust cross-domain training.
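The retention figure can be verified from the bucket counts in the table; the dictionary below just restates those numbers (it is not actual WAVe output):

```python
# Sample counts per WAVe quality bucket, copied from the table above.
quality_buckets = {
    "high":   {"count": 7312,  "min_q": 0.8},
    "medium": {"count": 11869, "min_q": 0.5},
    "low":    {"count": 2787,  "min_q": 0.0},
}

THRESHOLD = 0.5  # the balanced threshold used for this model

retained = sum(b["count"] for b in quality_buckets.values() if b["min_q"] >= THRESHOLD)
total = sum(b["count"] for b in quality_buckets.values())

print(retained)                           # 19181 synthetic samples kept
print(round(100 * retained / total, 1))   # 87.3 (percent retained)
```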

Training Procedure

Hyperparameters

| Parameter | Value |
|---|---|
| Learning Rate | 5e-6 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
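As a sanity check, these hyperparameters reproduce the reported 805 max steps, assuming the usual accounting of max steps = ceil(samples / global batch) × epochs:

```python
import math

samples, global_batch, epochs = 41047, 256, 5

steps_per_epoch = math.ceil(samples / global_batch)
print(steps_per_epoch)           # 161 optimizer steps per epoch
print(steps_per_epoch * epochs)  # 805 max training steps
```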

Training Infrastructure

  • GPU: NVIDIA H200 (140GB VRAM)
  • Operating System: Ubuntu 22.04
  • Framework: Hugging Face Transformers

Usage

Transcription Pipeline

```python
from transformers import pipeline

transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-large-v3-mixed-pt",
    device="cuda",
)

result = transcriber("path/to/portuguese_audio.wav")
print(result["text"])
```

Direct Model Usage

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-large-v3-mixed-pt")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-large-v3-mixed-pt")
model.to("cuda")

# Whisper expects 16 kHz mono audio
audio, sr = librosa.load("path/to/portuguese_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)
```

Specifying Language

```python
# Pin the language and task so Whisper skips automatic language detection
model.generation_config.language = "pt"
model.generation_config.task = "transcribe"
```

Methodology

This model leverages WAVe (Word-Aligned Verification), a word-level quality assessment method for filtering synthetic speech data. Unlike sentence-level filtering approaches, WAVe:

  • Aligns each word to its corresponding audio frames using multi-head attention
  • Assigns per-word confidence scores via a GLU-based scorer
  • Detects localized synthesis errors (mispronunciations, omitted words, prosodic anomalies)
  • Achieves 6.5% improvement over sentence-level filtering methods

The balanced threshold (q ≥ 0.5) retains 87.3% of synthetic samples, striking an optimal balance between data volume and quality for robust cross-domain generalization.
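The WAVe model itself is not released with this card, so the following NumPy sketch only illustrates the general shape of the idea described above (attention-pooled audio per word, a gated scorer, per-word confidences averaged into an utterance score q); dimensions, weights, and embeddings are random stand-ins, not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d = 16
frames = rng.normal(size=(40, d))  # audio frame embeddings (T x d), random stand-ins
words = rng.normal(size=(6, d))    # word embeddings (N x d), random stand-ins

# Attention alignment: each word attends over all audio frames
attn = softmax(words @ frames.T / np.sqrt(d), axis=-1)  # (N x T) alignment weights
word_audio = attn @ frames                              # attention-pooled audio per word

# GLU-style scorer: linear path modulated by a sigmoid gate, squashed to (0, 1)
W, V = rng.normal(size=(d, 1)), rng.normal(size=(d, 1))
word_scores = sigmoid((word_audio @ W) * sigmoid(word_audio @ V))  # per-word confidence

q = float(word_scores.mean())  # utterance-level quality score
print(word_scores.shape)       # (6, 1): one confidence per word
print(0.0 < q < 1.0)           # True
```

A low score on any single word (e.g. a mispronunciation) drags q down, which is what lets a threshold like q ≥ 0.5 catch localized synthesis errors that sentence-level filters miss.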

When to Use This Model

This model is ideal when:

  • Best cross-domain robustness required: Achieves 10.27% MLS WER (best among all Portuguese configurations)
  • Balanced performance needed: Strong on both in-domain (8.33%) and cross-domain (10.27%) benchmarks
  • Optimal training efficiency: Best validation loss with reasonable compute budget
  • Volume + quality: Includes 87.3% of synthetic data while filtering lowest-quality samples

Consider the other variants if your priorities differ: the high-quality (q ≥ 0.8) configuration is strongest in-domain, while the CV-only baseline is the cheapest to train.

Quality vs Quantity Analysis

This model represents the optimal balance point for Whisper-Large-v3 Portuguese:

| Approach | Synthetic Samples | Training Steps | Test WER (CV) | Test WER (MLS) | Best For |
|---|---|---|---|---|---|
| CV Only | 0 | 430 | 11.78% | 15.31% | Speed |
| High-Quality (q ≥ 0.8) | 7,312 | 575 | 7.94% | 12.41% | In-domain |
| Mid-High (q ≥ 0.5) | 19,181 | 805 | 8.33% | 10.27% | Cross-domain |
| Unfiltered | 21,968 | 860 | 8.33% | 13.43% | Volume |

Key insight: The mid-high threshold achieves the best cross-domain generalization (10.27% MLS WER), outperforming even the unfiltered approach by 23.5% while requiring 6% fewer training steps.
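Both figures in this insight follow from the table (a quick check):

```python
# MLS WER reduction of mid-high filtering relative to the unfiltered run
print(round(100 * (13.43 - 10.27) / 13.43, 1))  # 23.5

# Training-step savings relative to the unfiltered run
print(round(100 * (860 - 805) / 860, 1))        # 6.4 (quoted as ~6% fewer steps)
```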

Limitations

  • Domain specificity: Optimized for general Portuguese; may underperform on technical domains
  • Acoustic conditions: Trained on clean speech; noise robustness not guaranteed
  • Dialect coverage: Performance may vary across Portuguese regional variants (European vs Brazilian)

Citation

This model is part of research on WAVe (Word-Aligned Verification) for synthetic speech quality assessment. While the WAVe methodology paper is currently under review, please cite our previous work that motivated this research:

@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}

License

Apache 2.0
