# Aitana-2B-S-base-1.0
Aitana-2B-S-base-1.0 is a generative language model from the Aitana family, developed by the Language and Information Systems Group (GPLSI) at the University of Alicante. The model is based on BSC-LT/salamandra-2b and has been continually pre-trained on multilingual data (Valencian, Spanish, and English) to improve its representation of Valencian and Catalan.
## Table of Contents

- Model Description
- Training Data
- Intended Uses
- How to Use
- GGUF for LM Studio
- Evaluation
- Additional Information
## Model Description
| Property | Value |
|---|---|
| Base Model | BSC-LT/salamandra-2b |
| Architecture | Transformer decoder-only |
| Parameters | ~2.25B |
| Languages | Valencian, Spanish, English |
| License | Apache 2.0 |
Aitana-2B-S-base-1.0 extends the multilingual Salamandra foundation with additional training on domain-specific Valencian, Spanish, and English data. The training emphasizes administrative, legal, and tourism domains.
## Training Data
This model was trained on the following ALIA datasets:
| Dataset ID | Name | Language | Source |
|---|---|---|---|
| dc8 | dogv_va_2025 | Valencian | gplsi/alia_dogv |
| dc9 | dogv_es_2025 | Spanish | gplsi/alia_dogv |
| dc10 | corts_es_va_2025 | Spanish/Valencian | gplsi/alia_les_corts |
| dc11 | amic_va_2025 | Valencian | gplsi/alia_amic |
| dc12 | boua_va_2025 | Valencian | gplsi/alia_boua |
| dc13 | boua_es_2025 | Spanish | gplsi/alia_boua |
| dc14 | tourism_va_2025 | Valencian | gplsi/alia_tourism |
| dc15 | tourism_es_2025 | Spanish | gplsi/alia_tourism |
| dc16 | tourism_en_2025 | English | gplsi/alia_tourism |
### Data Sources
- DOGV (Diari Oficial de la Generalitat Valenciana): Official communications of the Valencian Community including laws and public sector communications
- Les Corts Valencianes: Transcripts from the Valencian Parliament plenary sessions and committee meetings
- AMIC: Valencian language corpus
- BOUA (Butlletí Oficial de la Universitat d'Alacant): Official University of Alicante documents including grants, regulations, and resolutions
- Tourism: Multilingual tourism domain content
## Intended Uses
This model can be used for:
- Text generation in Valencian, Spanish, and English
- Fine-tuning for specific downstream tasks
- Domain adaptation for administrative, legal, or tourism applications
Note: Due to the formal register of training data (administrative and legal domains), generated text tends toward formal language.
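As a sketch of how fine-tuning or continued-pretraining data is typically prepared for a causal language model like this one, the snippet below packs tokenized documents into fixed-length blocks. A toy whitespace tokenizer stands in for the model's real tokenizer, and the block size and function name are illustrative, not part of the Aitana training code:

```python
# Illustrative sketch: packing tokenized documents into fixed-length blocks
# for causal-LM fine-tuning. A toy whitespace split stands in for the
# model's real tokenizer; block_size=8 is an arbitrary example value.

def pack_into_blocks(documents, block_size=8):
    """Concatenate token streams and cut them into equal-length blocks,
    dropping the trailing remainder (the usual causal-LM convention)."""
    tokens = []
    for doc in documents:
        tokens.extend(doc.split())  # stand-in for tokenizer(doc)["input_ids"]
    n_blocks = len(tokens) // block_size
    return [tokens[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

docs = [
    "Les Corts Valencianes han aprovat la nova llei",       # 8 tokens
    "El turismo en la Comunidad Valenciana crece cada año",  # 9 tokens
]
blocks = pack_into_blocks(docs, block_size=8)
print(len(blocks))   # 17 tokens total -> 2 full blocks of 8
print(blocks[0])     # first 8 tokens of the concatenated stream
```

With a real tokenizer the same pattern applies, only with input IDs instead of word strings.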
## How to Use

### Transformers

```python
import torch
from transformers import pipeline, AutoTokenizer

model_id = "gplsi/Aitana-2B-S-base-1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Valencian example
text = "Les corts valencianes han pres la decisió de"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])

# Spanish example
text = "El turismo en la Comunidad Valenciana"
result = generator(text, do_sample=True, top_k=10, max_new_tokens=100)
print(result[0]['generated_text'])
```
## GGUF for LM Studio

This repository includes GGUF quantized versions for use with LM Studio, Ollama, and other llama.cpp-based tools.

| File | Quantization | Size | Quality |
|---|---|---|---|
| Aitana-s2b-c0dc17-Q4_K_M.gguf | Q4_K_M | ~1.3 GB | Good balance |
| Aitana-s2b-c0dc17-f16.gguf | F16 | ~4.5 GB | Full precision |
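For Ollama, a minimal Modelfile pointing at a downloaded quantized file might look like the sketch below. The local file path, model name, and temperature value are illustrative; only the `FROM` line is required:

```
# Modelfile (illustrative sketch) -- path assumes the GGUF file sits
# next to this Modelfile
FROM ./Aitana-s2b-c0dc17-Q4_K_M.gguf
PARAMETER temperature 0.7
```

The model can then be registered and run with `ollama create aitana-2b -f Modelfile` followed by `ollama run aitana-2b`.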
### Using with llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gplsi/Aitana-2B-S-base-1.0",
    filename="Aitana-s2b-c0dc17-Q4_K_M.gguf",
)
output = llm("Les corts valencianes han decidit", max_tokens=100)
print(output["choices"][0]["text"])
```
## Evaluation

The following tables report results on benchmarks from lm-evaluation-harness, compared against the base model used for continued pre-training. All results were obtained from the pre-trained model directly; no instruction tuning or fine-tuning of any kind was performed.

### Normalized score per language
| Language | Salamandra 2B | Aitana-2B-S-base-1.0 |
|---|---|---|
| Spanish | 0.150 | 0.163 |
| Catalan | 0.224 | 0.220 |
| English | 0.168 | 0.161 |
| Valencian | 0.603 | 0.608 |
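To read the table at a glance, the per-language deltas can be computed directly; this is just a quick arithmetic check on the scores above:

```python
# Per-language normalized-score deltas between the base model and
# Aitana-2B-S-base-1.0, taken from the table above.
salamandra = {"Spanish": 0.150, "Catalan": 0.224, "English": 0.168, "Valencian": 0.603}
aitana = {"Spanish": 0.163, "Catalan": 0.220, "English": 0.161, "Valencian": 0.608}

deltas = {lang: round(aitana[lang] - salamandra[lang], 3) for lang in salamandra}
print(deltas)
# {'Spanish': 0.013, 'Catalan': -0.004, 'English': -0.007, 'Valencian': 0.005}
```

The gains concentrate in Spanish and Valencian, with small trade-offs in Catalan and English, consistent with the continued-pretraining data mix.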
### Valencian

#### Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| XNLI | va | Natural Language Inference | acc | 0.475 | 0.474 |
#### Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | va | Reading Comprehension | bleu | 6.32 | 6.61 |
| Phrases ca-va | va-ca | Translation - Adaptation | bleu | 79.82 | 81.57 |
| Phrases va-ca | va-ca | Translation - Adaptation | bleu | 78.05 | 75.68 |
| Phrases va-es | va-es | Translation | bleu | 76.04 | 76.31 |
| Phrases es-va | es-va | Translation | bleu | 58.86 | 62.86 |
### Catalan

#### Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele Cat_latn | ca | Reading Comprehension | acc | 0.231 | 0.257 |
| COPA | ca | Commonsense Reasoning | acc | 0.700 | 0.690 |
| XStoryCloze | ca | Commonsense Reasoning | acc | 0.655 | 0.655 |
| OpenBookQA | ca | Question Answering | acc | 0.294 | 0.300 |
| PAWS | ca | Paraphrasing | acc | 0.556 | 0.566 |
| PiQA | ca | Question Answering | acc | 0.643 | 0.641 |
| SiQA | ca | Question Answering | acc | 0.434 | 0.425 |
| ARC Easy | ca | Question Answering | acc | 0.551 | 0.553 |
| ARC Challenge | ca | Question Answering | acc | 0.290 | 0.282 |
| XNLI | ca | Natural Language Inference | acc | 0.473 | 0.469 |
| Teca | ca | Natural Language Inference | acc | 0.465 | 0.430 |
| WNLI | ca | Natural Language Inference | acc | 0.577 | 0.577 |
| Catcola | ca | Linguistic Acceptability | acc | 0.543 | 0.596 |
| Catcola | ca | Linguistic Acceptability | mcc | 0.046 | -0.002 |
| Catalanqa | ca | Question Answering | F1 | 0.668 | 0.643 |
| Mgsm direct | ca | Math | exact match | 0.024 | 0.024 |
| Catalanqa | ca | Question Answering | exact match | 0.437 | 0.405 |
| Xquad | ca | Question Answering | exact match | 0.371 | 0.344 |
| Xquad | ca | Question Answering | F1 | 0.579 | 0.568 |
#### Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Cabreu abstractive | ca | Summarization | bleu | 5.78 | 6.52 |
| Cabreu extractive | ca | Summarization | bleu | 42.89 | 41.61 |
| Cabreu extreme | ca | Summarization | bleu | 3.29 | 3.01 |
### Spanish

#### Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Belebele | es | Reading Comprehension | acc | 0.228 | 0.263 |
| PAWS | es | Paraphrasing | acc | 0.561 | 0.553 |
| XNLI | es | Natural Language Inference | acc | 0.439 | 0.422 |
| WNLI | es | Natural Language Inference | acc | 0.563 | 0.563 |
| XStoryCloze | es | Commonsense Reasoning | acc | 0.653 | 0.655 |
| Escola | es | Linguistic Acceptability | acc | 0.593 | 0.618 |
| Escola | es | Linguistic Acceptability | mcc | 0.031 | -0.020 |
| OpenbookQA | es | Question Answering | acc | 0.308 | 0.316 |
| MGSM Direct | es | Math | exact match | 0.020 | 0.032 |
| XQUAD | es | Question Answering | exact match | 0.377 | 0.341 |
| XQUAD | es | Question Answering | F1 | 0.584 | 0.559 |
#### Generation Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Cocoteros | es | Reading Comprehension | bleu | 8.46 | 7.043 |
| XLSum | es | Summarization | bleu | 0.801 | 1.622 |
### English

#### Classification Benchmarks
| Dataset | Lang. | Task | Metric | Salamandra-2B | Aitana-2B-S-base-1.0 |
|---|---|---|---|---|---|
| Arc Challenge | en | Question Answering | acc | 0.370 | 0.360 |
| Arc Easy | en | Question Answering | acc | 0.722 | 0.712 |
| Belebele | en | Reading Comprehension | acc | 0.216 | 0.252 |
| PAWS | en | Paraphrasing | acc | 0.561 | 0.574 |
| XNLI | en | Natural Language Inference | acc | 0.462 | 0.452 |
| XStoryCloze | en | Commonsense Reasoning | acc | 0.711 | 0.713 |
| OpenBookQA | en | Question Answering | acc | 0.300 | 0.270 |
| PiQA | en | Question Answering | acc | 0.737 | 0.742 |
| Social iqa | en | Question Answering | acc | 0.454 | 0.450 |
| WNLI | en | Natural Language Inference | acc | 0.465 | 0.380 |
| MGSM Direct | en | Math | exact match | 0.064 | 0.06 |
| TriviaQA | en | Question Answering | exact match | 0.376 | 0.352 |
## Additional Information

### Author
The model has been developed by the Language and Information Systems Group (GPLSI) and the Centro de Inteligencia Digital (CENID), both part of the University of Alicante (UA), as part of their ongoing research in Natural Language Processing (NLP).
### Part of the Aitana Family
This model is part of the Aitana model family, which includes:
- gplsi/Aitana-2B-S - Valencian-focused base model
- gplsi/Aitana-TA-2B-S - Translation model (Spanish ↔ Valencian)
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
### Acknowledgments
We would like to express our gratitude to all individuals and institutions that have contributed to the development of this work.
Special thanks to:
- Language Technologies Laboratory at Barcelona Supercomputing Center
- Centro Vasco de Tecnología de la Lengua (HiTZ)
- Centro Singular de Investigación en Tecnologías Inteligentes (CiTIUS)
- Sistemas Inteligentes de Acceso a la Información (SINAI)
- Instituto Universitario de Investigación Informática (IUII)
- Leonardo HPC System
- European supercomputing ecosystem (EUROHPC)
We also acknowledge the financial, technical, and scientific support of the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project Desarrollo de Modelos ALIA, whose contribution has been essential to the completion of this research.
### License

This model is distributed under the Apache License 2.0.

#### Disclaimer

This model is intended for general purposes and is made available under the permissive Apache License 2.0. Be aware that the model may exhibit biases or produce undesirable outputs. Users deploying systems based on this model are responsible for mitigating these risks and for complying with applicable AI regulations.
### Reference

```bibtex
@misc{gplsi-aitana-2B-S-base-1.0,
  author = {Estevanell-Valladares, Ernesto L. and Yáñez-Romero, Fabio and Sepúlveda-Torres, Robiert and Consuegra-Ayala, Juan Pablo and Galeano, Santiago and Miró Maestre, María and Martínez-Murillo, Iván and Grande, Eduardo and Canal-Esteve, Miquel and Bonora, Mar and Gutierrez, Yoan and Abreu Salas, José Ignacio and Lloret, Elena and Montoyo, Andrés and Muñoz-Guillena and Palomar, Manuel},
  title = {Aitana 2B base: Continually pre-trained on Valencian},
  year = {2025},
  institution = {Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA)},
  howpublished = {\url{https://huggingface.co/gplsi/Aitana-2B-S-base-1.0}},
  note = {Accessed: 2025-12-12}
}
```
Copyright © 2025 Language and Information Systems Group (GPLSI) and Centro de Inteligencia Digital (CENID), University of Alicante (UA). Distributed under the Apache License 2.0.