Llama-3.2-1B-Instruct with domain-adaptive pretraining (DAPT), also called continued pre-training (CPT), on a generic Dutch medical corpus.

Trained on the Dutch medical corpus with a batch size of 256, a maximum sequence length of 1024, and a linear-cosine schedule with 100 cycles per 250M steps, LRmax = 1e-4, 100K warmup steps, and AdamW as the optimizer.
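The schedule described above can be sketched as follows. This is an illustrative reconstruction, not the actual training code: it assumes "linear-cosine" means a linear warmup to LRmax followed by cosine decay with hard restarts, with the 100 cycles spread evenly over the 250M-step budget (the `transformers` scheduler `get_cosine_with_hard_restarts_schedule_with_warmup` implements a similar shape).

```python
import math

# Hyperparameters as stated in the model card.
LR_MAX = 1e-4
WARMUP_STEPS = 100_000
TOTAL_STEPS = 250_000_000
NUM_CYCLES = 100  # assumption: cycles spread evenly over the post-warmup steps

def learning_rate(step: int) -> float:
    """Linear warmup to LR_MAX, then cosine decay with hard restarts."""
    if step < WARMUP_STEPS:
        # Linear warmup from 0 to LR_MAX.
        return LR_MAX * step / WARMUP_STEPS
    # Fraction of the post-warmup budget completed, in [0, 1).
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    # Position within the current cosine cycle; each cycle restarts at LR_MAX.
    cycle_pos = (progress * NUM_CYCLES) % 1.0
    return LR_MAX * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))
```

Halfway through warmup the rate is LRmax/2, and at each restart it jumps back to LRmax before decaying toward zero within the cycle.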

Currently at 5.5 perplexity; the model could still benefit from further training.
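For reference, perplexity is the exponential of the mean per-token cross-entropy loss (in nats), so the reported 5.5 corresponds to a loss of about 1.70. A minimal sketch of the conversion:

```python
import math

def perplexity(mean_ce_loss: float) -> float:
    """Perplexity = exp(mean cross-entropy loss in nats)."""
    return math.exp(mean_ce_loss)

# A perplexity of 5.5 implies a per-token loss of ln(5.5) ≈ 1.70 nats.
implied_loss = math.log(5.5)
```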

Planned: on-premise continuous pre-training on Dutch clinical texts.

To use for text generation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("UMCU/MedLlama.nl")
model = AutoModelForCausalLM.from_pretrained("UMCU/MedLlama.nl", torch_dtype=torch.float16)

# Generate a continuation for a (hypothetical) Dutch medical prompt
inputs = tokenizer("De patiënt heeft last van", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If you use this model, please cite:

@misc{vanes2026languagecorporadutchmedical,
      title={Language corpora for the Dutch medical domain}, 
      author={B. van Es},
      year={2026},
      eprint={2604.25374},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.25374}, 
}
