Fine-tuning
Fine-tuning continues training a large pretrained model on a smaller dataset specific to a task or domain. For example, fine-tuning on a dataset of coding examples helps the model get better at coding. Fine-tuning is identical to pretraining except you don’t start with random weights. It also requires far less compute, data, and time.
The tutorial below walks through fine-tuning a large language model with Trainer.
Log in to your Hugging Face account with your user token to push your fine-tuned model to the Hub.
from huggingface_hub import login
login()
Tokenization
Load a dataset and tokenize the text column the model trains on (horoscope in the dataset below).
The tokenizer creates the model inputs, input_ids and attention_mask. The model’s forward method only accepts these inputs, so set remove_columns to drop columns like horoscope after tokenization.
- Set truncation=True and a max_length to truncate longer sequences to a specified maximum length.
- Use the train_test_split method to create a test split for evaluating the model.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling
model_name = "Qwen/Qwen3-0.6B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
dataset = load_dataset("karthiksagarn/astro_horoscope", split="train")
def tokenize(batch):
    return tokenizer(
        batch["horoscope"],
        truncation=True,
        max_length=512,
    )
dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)
dataset = dataset.train_test_split(test_size=0.1)
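Optionally, inspect a processed sample to confirm the result. This is a quick check rather than part of the training flow, and the exact keys and lengths depend on the tokenizer:
# Each sample now contains only the model inputs created by the tokenizer.
print(dataset["train"][0].keys())
# Sequences were truncated to at most 512 tokens.
print(len(dataset["train"][0]["input_ids"]))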
A data collator assembles dataset samples into batches for the model to process. DataCollatorForLanguageModeling dynamically pads each batch to the longest sequence in that batch rather than padding every sequence in the dataset to the same length. This saves compute and memory by avoiding computation on unnecessary padding tokens.
- Set mlm=False to avoid randomly masking tokens.
data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
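To see the dynamic padding in action, call the collator on a couple of tokenized samples of different lengths. A minimal sketch; the exact shapes depend on the tokenizer:
# Two samples of different lengths are padded to the longest one in this batch.
samples = [tokenizer("The stars align in your favor today."), tokenizer("Expect surprises.")]
batch = data_collator(samples)
print(batch["input_ids"].shape)
# Padded positions in the labels are set to -100 so the loss ignores them.
print(batch["labels"])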
Loading a model
Load a pretrained checkpoint to fine-tune (see the Loading models guide for more details about loading models).
- Set dtype="auto" to load the weights in their saved dtype. Without it, PyTorch loads weights in torch.float32, which doubles memory usage if the weights are originally torch.bfloat16.
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
model_name = "Qwen/Qwen3-0.6B"
model = AutoModelForCausalLM.from_pretrained(model_name, dtype="auto")
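As an optional sanity check, confirm the dtype the weights were loaded in and the resulting memory footprint. A small sketch, assuming a recent Transformers version:
# Typically torch.bfloat16 for this checkpoint when dtype="auto" is used.
print(model.dtype)
# Approximate memory used by the parameters and buffers, in GB.
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")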
Training configuration
TrainingArguments provides all the options for customizing a training run. Only the most common arguments are covered here. Everything else has reasonable defaults or is only relevant to specific scenarios like distributed training. See the TrainingArguments API docs for a complete list of arguments.
- num_train_epochs and per_device_train_batch_size control training duration and batch size.
- learning_rate sets the initial learning rate for the optimizer.
- gradient_accumulation_steps accumulates gradients over several forward passes before each optimizer step (an effective batch size of 2 × 8 = 16 per device below), and gradient_checkpointing recomputes activations during the backward pass to reduce memory usage.
training_args = TrainingArguments(
    output_dir="qwen3-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    gradient_checkpointing=True,
    bf16=True,
    learning_rate=2e-5,
    logging_steps=10,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
Training
Create a Trainer instance with all the necessary components, then call train() to begin.
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,
    data_collator=data_collator,
)
trainer.train()
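Because load_best_model_at_end=True restores the best checkpoint, you can optionally run one more evaluation on the test split before uploading:
# Evaluate the best checkpoint; returns metrics such as eval_loss.
metrics = trainer.evaluate()
print(metrics)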
trainer.push_to_hub()
push_to_hub() uploads the fine-tuned weights, generation config, tokenizer, and model config to the Hub.
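Once the upload finishes, you can load the fine-tuned model back from the Hub and try it out. A minimal sketch, assuming your Hub username is your-username and the repository name matches output_dir above:
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/qwen3-finetuned"  # replace with your actual username
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, dtype="auto")

inputs = tokenizer("Today's horoscope for Aries:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))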
Next steps
- Read the Subclassing Trainer methods guide to learn how to subclass Trainer methods to support new and custom functionalities.
- Read the Callbacks guide to learn how to hook into training events for logging, early stopping, and other custom behavior.
- Read the Data collators guide to learn how to customize how samples are assembled into batches.
- Browse transformers/examples/pytorch, notebooks, or the Resources > Task Recipes section for additional training examples on different text, audio, vision, and multimodal tasks.