# AfriCorpus v1
AfriCorpus-v1 is the first public release of LocaleNLP's audited, deduplicated, and quality-filtered African-language corpus. Built to power the AfriLION LLM project, it directly addresses the tokenizer-fertility problem (excessive fragmentation of African-language words into subword pieces) that causes current LLMs to underperform on African languages.
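Fertility here means the average number of subword tokens a tokenizer produces per whitespace word. A quick way to measure it, using an off-the-shelf English-centric tokenizer as a stand-in (the model choice and Wolof sample below are illustrative, not part of this release):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # English-centric reference tokenizer

def fertility(text: str) -> float:
    """Average subword tokens per whitespace word; higher = more fragmentation."""
    return len(tok.tokenize(text)) / len(text.split())

print(fertility("Nanga def, baal ma."))  # Wolof scores far above typical English
```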
## Key Statistics
| Language | Code | Script | CC-100 Source | Status |
|---|---|---|---|---|
| Wolof | wo | Latin | CC-100 | Audited |
| Swahili | sw | Latin | CC-100 | Audited |
| Hausa | ha | Latin + Ajami | CC-100 | Audited |
| Yoruba | yo | Latin | CC-100 | Audited |
| Amharic | am | Ge'ez (Ethiopic) | CC-100 | Audited |
| Tigrinya | ti | Ge'ez (Ethiopic) | CC-100 | In Progress |
| Somali | so | Latin | CC-100 | In Progress |
| Igbo | ig | Latin | CC-100 | In Progress |
| Zulu | zu | Latin | CC-100 | In Progress |
## Quality Assurance Pipeline
Every document in this corpus has passed through a 7-stage pipeline:

1. **Download** → CC-100 `.txt.xz` source files from StatMT.
2. **Language-ID Filter** → `langdetect` with confidence threshold > 0.90.
3. **Text Cleaning** → URL removal, HTML stripping, control-character normalization.
4. **Deduplication** → MinHash LSH (threshold 0.85, 128 permutations), including cross-lingual dedup; see the sketch after this list.
5. **Length Filter** → only sentences with 20–2048 whitespace tokens are kept.
6. **JSONL Sharding** → 100k lines per shard for streaming compatibility.
7. **Upload** → published here with provenance metadata on every record.
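As a concrete illustration of the deduplication stage, here is a minimal sketch using the `datasketch` library with the parameters above (threshold 0.85, 128 permutations); the sample documents and whitespace shingling are illustrative, not the production script:

```python
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Hash a document's whitespace tokens into a MinHash signature."""
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

docs = ["Nanga def, baal ma.", "Nanga def , baal ma .", "Jërëjëf waay!"]

lsh = MinHashLSH(threshold=0.85, num_perm=128)
kept = []
for i, doc in enumerate(docs):
    sig = minhash(doc)
    if not lsh.query(sig):        # no near-duplicate indexed yet
        lsh.insert(f"doc-{i}", sig)
        kept.append(doc)
```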
## Critical Design Decisions
### Ge'ez Script Handling
Amharic and Tigrinya use the Ge'ez (Ethiopic) script, a syllabary of roughly 500 characters in which every consonant-vowel combination is its own glyph, a character inventory that dwarfs any Latin alphabet. Training a tokenizer on this corpus requires `character_coverage=0.9999` in SentencePiece (see the sketch below). Do not lower this value, or your tokenizer will emit byte-fallback tokens such as `<0xE1><0x88><0xA0>` instead of actual Ge'ez glyphs, silently corrupting Amharic model training.
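A minimal SentencePiece invocation honoring this requirement might look like the following; the file names and vocab size are placeholders, not the project's actual configuration:

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="amharic_tigrinya.txt",   # hypothetical corpus file
    model_prefix="geez_tokenizer",  # hypothetical output prefix
    vocab_size=32000,
    model_type="bpe",
    character_coverage=0.9999,      # required: keep the full Ge'ez syllabary
    byte_fallback=True,             # truly rare characters still degrade gracefully
)
```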
### Equal Upsampling
Wolof has ~40 MB of CC-100 data; Swahili has ~6.6 GB. A proportionally weighted tokenizer devotes most of its vocab budget to Swahili, leaving Wolof with ~200 tokens that fragment every word into 5–6 pieces. Our tokenizer training script upsamples Wolof 150x to achieve equal representation; a sketch follows below.
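A minimal sketch of that upsampling step, assuming one plain-text file per language (the file names, and the factors for languages other than Wolof's 150x, are illustrative):

```python
# Repeat each language's corpus before tokenizer training so that
# low-resource languages get equal representation in the vocab.
UPSAMPLE_FACTOR = {"wo": 150, "sw": 1}  # Wolof 150x, Swahili as-is

with open("tokenizer_train.txt", "w", encoding="utf-8") as out:
    for lang, factor in UPSAMPLE_FACTOR.items():
        for _ in range(factor):
            with open(f"{lang}.txt", encoding="utf-8") as src:
                for line in src:
                    out.write(line)
```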
### Lang ID Tokens
Every document is prepended with a language ID token (`[WO]`, `[SW]`, `[HA]`, `[AM]`, etc.) during tokenizer training. This lets the model condition on language at inference time, which is critical for code-switching and per-language perplexity measurement; a sketch follows below.
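One way this could be wired up is with SentencePiece's `user_defined_symbols` option, which keeps each ID token atomic; the file names and vocab size here are assumptions, not the release's actual script:

```python
import sentencepiece as spm

LANG_TOKENS = ["[WO]", "[SW]", "[HA]", "[YO]", "[AM]"]  # subset shown

# Registering the tokens as user-defined symbols guarantees the
# tokenizer never splits them into sub-pieces.
spm.SentencePieceTrainer.train(
    input="tokenizer_train.txt",
    model_prefix="afrilion_tokenizer",
    vocab_size=32000,
    user_defined_symbols=LANG_TOKENS,
)

def tag(text: str, lang: str) -> str:
    """Prepend the language ID token: tag('Nanga def.', 'wo') -> '[WO] Nanga def.'"""
    return f"[{lang.upper()}] {text}"
```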
## Usage

```python
from datasets import load_dataset

# Load a specific language
ds = load_dataset("LocaleNLP/AfriCorpus-v1", split="wo")
print(ds[0])
# {'text': 'Nanga def, baal ma.', 'lang': 'wo', 'lang_name': 'Wolof',
#  'token_count': 5, 'source': 'cc100'}

# Load all languages
ds_all = load_dataset("LocaleNLP/AfriCorpus-v1")
```
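Since the shards are JSONL (stage 6 of the pipeline above), the corpus can also be consumed lazily via the standard `datasets` streaming mode, which avoids downloading a full split up front:

```python
# Stream records without materializing the full split on disk
ds_stream = load_dataset("LocaleNLP/AfriCorpus-v1", split="wo", streaming=True)
for record in ds_stream.take(3):
    print(record["text"])
```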
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{africorpus_v1_2026,
  title   = {AfriCorpus v1: Audited African Language Corpus for LLM Training},
  author  = {Jagne, Alieu and LocaleNLP Team},
  year    = {2026},
  url     = {https://huggingface.co/datasets/LocaleNLP/AfriCorpus-v1},
  license = {cc-by-4.0}
}
```
## Related Resources
- GitHub: LocaleNLP/afrilion
- Model: LocaleNLP/afrilion-base
- Community: Masakhane