# Workshop-on-Asian-Translation-2025-FinetunedModels
These are fine-tuned models for the T2T (text-to-text) task of the Workshop on Asian Translation (WAT) 2025. Each uses a model from the NLLB-200 family (600M, 1.3B, or 3.3B parameters) as the base, fine-tuned on the WAT data plus 100k Samanantar pairs.
WAT2025 results (BLEU and RIBES on the Challenge and Evaluation sets):

| Model | Params | Challenge BLEU | Challenge RIBES | Evaluation BLEU | Evaluation RIBES |
|---|---|---|---|---|---|
| — | 3.3B | 56.90 | 0.870254 | 45.10 | 0.831282 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-bengali | 3.3B | 50.10 | 0.830882 | 49.50 | 0.804158 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-malayalam | 3.3B | 44.20 | 0.775824 | 43.20 | 0.708217 |
| OdiaGenAI/facebook-nllb-200-3.3B-finetuned-odia | 3.3B | 56.40 | 0.916177 | 62.90 | 0.903659 |
| DebasishDhal99/facebook-nllb-200-1.3B-finetuned-hindi | 1.3B | 55.50 | 0.867866 | 44.70 | 0.828884 |
| DebasishDhal99/facebook-nllb-200-1.3B-finetuned-odia | 1.3B | 53.70 | 0.909711 | 60.10 | 0.896546 |
| DebasishDhal99/facebook-nllb-200-distilled-600M-finetuned-odia | 0.6B | 50.00 | 0.902548 | 54.60 | 0.884445 |
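Since these checkpoints keep the NLLB-200 architecture and tokenizer, they can be loaded like any NLLB-200 model with Hugging Face `transformers`. A minimal sketch (the repo id is one of the checkpoints listed above; the FLORES-200 language codes `eng_Latn`/`ory_Orya` are the standard NLLB codes, and the generation settings are illustrative, not the settings used for the WAT2025 submission):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# One of the fine-tuned checkpoints listed above (English -> Odia).
MODEL_ID = "DebasishDhal99/facebook-nllb-200-distilled-600M-finetuned-odia"

# FLORES-200 language codes used by NLLB-200.
SRC_LANG = "eng_Latn"  # English, Latin script
TGT_LANG = "ory_Orya"  # Odia, Odia script

def translate(text: str, max_new_tokens: int = 128) -> str:
    """Translate one English sentence to Odia with the fine-tuned model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, src_lang=SRC_LANG)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt")
    # Force the decoder to start with the target-language token,
    # as NLLB-200 requires for choosing the output language.
    generated = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(TGT_LANG),
        max_new_tokens=max_new_tokens,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# Example (downloads the model weights on first run):
# print(translate("The weather is pleasant today."))
```

For the other checkpoints, swap `MODEL_ID` and set `TGT_LANG` to the matching FLORES-200 code (e.g. `ben_Beng` for Bengali, `mal_Mlym` for Malayalam, `hin_Deva` for Hindi).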