UMCU/DutchMedicalTextV3
Llama-3.2-1B-Instruct with domain-adaptive pretraining (DAPT), also called continued pre-training (CPT), on a generic Dutch medical corpus.
Trained on the Dutch medical corpus with a batch size of 256, a maximum sequence length of 1024, and a linear-cosine schedule with 100 cycles per 250M steps, LRmax = 1e-4, 100K warmup steps, and AdamW as the optimizer.
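For reference, the hyperparameters above can be gathered into a plain config sketch (the key names are illustrative, not taken from the actual training script):

```python
# Hedged sketch of the reported hyperparameters; key names are illustrative
train_config = {
    "batch_size": 256,
    "max_seq_length": 1024,
    "lr_schedule": "linear-cosine",  # 100 cycles per 250M steps
    "learning_rate_max": 1e-4,
    "warmup_steps": 100_000,
    "optimizer": "AdamW",
}
print(train_config["learning_rate_max"])  # → 0.0001
```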
Currently at a perplexity of 5.5; the model could benefit from further training.
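Perplexity is the exponential of the mean per-token cross-entropy (negative log-likelihood). A minimal sketch of that relationship, with illustrative values rather than the actual evaluation numbers:

```python
import math

def perplexity(token_nlls):
    # Perplexity = exp of the mean negative log-likelihood per token (in nats)
    return math.exp(sum(token_nlls) / len(token_nlls))

# Illustrative: a mean NLL of ~1.7048 nats corresponds to a perplexity of ~5.5
nlls = [1.7048, 1.7048, 1.7048]
print(round(perplexity(nlls), 1))  # → 5.5
```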
Planned: on-premise continuous pre-training on Dutch clinical texts.
To use for text generation:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model in half precision
tokenizer = AutoTokenizer.from_pretrained("UMCU/MedLlama.nl")
model = AutoModelForCausalLM.from_pretrained("UMCU/MedLlama.nl", torch_dtype=torch.float16)

# Generate a short continuation; the prompt and sampling settings are illustrative
prompt = "De patiënt presenteerde zich met koorts en"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
If you use this model, please cite:
@misc{vanes2026languagecorporadutchmedical,
  title={Language corpora for the Dutch medical domain},
  author={B. van Es},
  year={2026},
  eprint={2604.25374},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2604.25374},
}
Base model
meta-llama/Llama-3.2-1B-Instruct