---
library_name: transformers
datasets:
- CoRal-project/coral
language:
- da
base_model:
- openai/whisper-tiny
pipeline_tag: automatic-speech-recognition
---
A small hobby project trained in a Kaggle notebook on their free P100 GPUs. I was curious whether whisper-tiny could be trained to perform decently when specialized for a single language, in this case Danish. The TL;DR is that the results are not great :)
The model can be loaded with the `transformers` pipeline:

```python
from transformers import pipeline

# Load the fine-tuned Danish Whisper model as an ASR pipeline.
pipe = pipeline(
    "automatic-speech-recognition",
    model="rasgaard/whisper-tiny.da",
)
```
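Whisper's feature extractor expects 16 kHz mono audio. When `ffmpeg` is available, the pipeline can usually decode and resample files for you, but if you already have a waveform as a NumPy array at another sample rate, you need to resample it first. Below is a minimal linear-interpolation sketch in plain NumPy (in practice you would use `librosa` or `torchaudio`); the 44.1 kHz silent clip stands in for a real recording.

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Naive linear-interpolation resampling; a sketch, not production code."""
    n_out = int(round(len(audio) * target_sr / orig_sr))
    x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(x_new, x_old, audio).astype(np.float32)

# One second of 44.1 kHz audio (silence as a stand-in for a real recording).
clip = np.zeros(44_100, dtype=np.float32)
resampled = resample_to_16k(clip, orig_sr=44_100)

# The pipeline accepts a dict with "array" and "sampling_rate" keys:
audio = {"array": resampled, "sampling_rate": 16_000}
# result = pipe(audio)  # e.g. {"text": "..."}
```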