Open Wikipedia (Markdown)

Every Wikipedia article converted to clean Markdown, organized by language and updated from the latest Wikimedia dumps

What is it?

This dataset contains every article from every language edition of Wikipedia, converted from raw MediaWiki markup into clean, readable Markdown. Headings, bold, italic, code blocks, and internal links are all preserved as proper Markdown syntax, while templates, infoboxes, references, tables, categories, and other noise are stripped away.

The dataset currently contains 65.9M articles across 367 languages, sourced from the official Wikimedia database dumps. Each language's full XML export is streamed, parsed, and converted article by article. The results are stored as sharded Apache Parquet files with Zstandard compression, organized by language. Each language gets its own directory under data/, and each shard holds up to 500,000 articles.

This is the Markdown variant of the Open Wikipedia collection. If you need plain text with all formatting removed, or the original MediaWiki source markup, see the companion datasets listed under Related datasets below.

What is being released?

The dataset is organized as one directory per language, with sharded Parquet files inside each:

data/
  en/en-00000.parquet      English, shard 0
     en-00001.parquet      English, shard 1
     ...
  de/de-00000.parquet      German
  fr/fr-00000.parquet      French
  es/es-00000.parquet      Spanish
  ja/ja-00000.parquet      Japanese
  ...
  la/la-00000.parquet      Latin

Each Parquet file contains up to 500,000 rows. Languages with fewer articles than that fit in a single shard. Larger languages like English, German, and French are split across multiple shards. All files use Zstandard compression.
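
To sanity-check a shard locally, pyarrow can report its row count, schema, and compression codec without loading the rows. A minimal sketch, assuming one English shard has already been downloaded to the path shown in the layout above:

import pyarrow.parquet as pq

# Inspect a downloaded shard without reading its rows (path assumed from the layout above)
pf = pq.ParquetFile("data/en/en-00000.parquet")
print(pf.metadata.num_rows)                              # at most 500,000 rows per shard
print(pf.schema_arrow)                                   # column names and types
print(pf.metadata.row_group(0).column(0).compression)    # expected: ZSTD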

How to download and use this dataset

Using DuckDB

DuckDB can read Parquet files directly from Hugging Face without downloading anything first. This is the fastest way to explore the data.

-- Count articles per language
SELECT lang, COUNT(*) as articles
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
GROUP BY lang
ORDER BY articles DESC;
-- Search for articles about a topic across all languages
SELECT title, lang, length, url
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
WHERE markdown ILIKE '%machine learning%'
ORDER BY length DESC
LIMIT 20;
-- Article length distribution for English Wikipedia
SELECT
    percentile_disc(0.25) WITHIN GROUP (ORDER BY length) AS p25,
    percentile_disc(0.50) WITHIN GROUP (ORDER BY length) AS p50,
    percentile_disc(0.75) WITHIN GROUP (ORDER BY length) AS p75,
    percentile_disc(0.90) WITHIN GROUP (ORDER BY length) AS p90,
    percentile_disc(0.99) WITHIN GROUP (ORDER BY length) AS p99,
    AVG(length)::INT AS avg_length,
    MAX(length) AS max_length
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/en/*.parquet');
-- Find the longest articles in each language
SELECT lang, title, length, url
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY lang ORDER BY length DESC) AS rn
    FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
)
WHERE rn = 1
ORDER BY length DESC
LIMIT 20;
-- How many articles contain code blocks?
SELECT lang, COUNT(*) AS articles_with_code, COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS pct
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
WHERE markdown LIKE '%```%'
GROUP BY lang
ORDER BY articles_with_code DESC
LIMIT 15;
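
The same queries can also be run from Python with the duckdb package; recent DuckDB versions resolve hf:// paths through the httpfs extension (older versions may need it installed explicitly). A minimal sketch reusing the first query above:

import duckdb

# Count articles per language, executed from Python; results come back as a pandas DataFrame
counts = duckdb.sql("""
    SELECT lang, COUNT(*) AS articles
    FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
    GROUP BY lang
    ORDER BY articles DESC
""").df()
print(counts.head(10))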

Using datasets

from datasets import load_dataset

# Load English Wikipedia
ds = load_dataset("open-index/open-wikipedia-markdown", "en")
print(ds["train"][0]["title"])
print(ds["train"][0]["markdown"][:500])

# Stream the full dataset without downloading everything first
ds = load_dataset("open-index/open-wikipedia-markdown", "en", split="train", streaming=True)
for item in ds:
    print(item["title"], item["length"])

# Load a specific language
ds = load_dataset("open-index/open-wikipedia-markdown", "de")
print(f"German articles: {len(ds['train']):,}")

Using huggingface_hub

from huggingface_hub import snapshot_download

# Download only English
snapshot_download(
    "open-index/open-wikipedia-markdown",
    repo_type="dataset",
    local_dir="./wiki-md/",
    allow_patterns="data/en/*",
)

For faster downloads, install hf_transfer (pip install 'huggingface_hub[hf_transfer]') and set HF_HUB_ENABLE_HF_TRANSFER=1.
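
To fetch a single shard rather than a whole language, hf_hub_download takes one file path at a time. A sketch, with the shard name taken from the directory layout above:

from huggingface_hub import hf_hub_download

# Download one English shard into the local cache and print its path
path = hf_hub_download(
    "open-index/open-wikipedia-markdown",
    filename="data/en/en-00000.parquet",
    repo_type="dataset",
)
print(path)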

Using the CLI

# Download a single language
huggingface-cli download open-index/open-wikipedia-markdown \
    --include "data/la/*" \
    --repo-type dataset --local-dir ./wiki-md/

Using Polars

import polars as pl

df = pl.read_parquet("data/en/*.parquet")
print(f"English articles: {len(df):,}")
print(f"Total Markdown text: {df['length'].sum() / 1e9:.1f} GB")
print(df.select("title", "length").describe())
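
For the larger languages, a lazy scan keeps memory use down by pushing the filter and column selection into the Parquet reader. A minimal sketch over the same local files:

import polars as pl

# Lazily scan all English shards and keep only titles of long articles
long_articles = (
    pl.scan_parquet("data/en/*.parquet")
    .filter(pl.col("length") > 100_000)
    .select("title", "length", "url")
    .sort("length", descending=True)
    .collect()
)
print(long_articles.head(10))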

Dataset statistics

You can query the per-language statistics directly from the stats.csv file included in the dataset:

SELECT * FROM read_csv_auto('hf://datasets/open-index/open-wikipedia-markdown/stats.csv')
ORDER BY articles DESC;

The stats.csv file tracks each committed language with the following columns:

Column Description
lang ISO 639 language code
lang_name Human-readable language name
articles Number of articles in this language
md_shards Number of Markdown Parquet shards
text_shards Number of plain text Parquet shards
wikitext_shards Number of wikitext Parquet shards
md_bytes, text_bytes, wikitext_bytes Parquet file sizes per variant
dump_bytes Original Wikimedia dump size
dur_s Processing duration in seconds
committed_at ISO 8601 timestamp of when this language was committed
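
The same file can be read from Python; with huggingface_hub installed, pandas resolves hf:// paths through fsspec. A minimal sketch using the columns listed above:

import pandas as pd

# Requires huggingface_hub so that the hf:// path resolves via fsspec
stats = pd.read_csv("hf://datasets/open-index/open-wikipedia-markdown/stats.csv")
top = stats.sort_values("articles", ascending=False)
print(top[["lang", "lang_name", "articles", "md_shards"]].head(10))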

Languages

Language Code Articles Shards
Bulgarian bg 308.4K 1
ckb ckb 80.8K 1
Estonian et 253.7K 1
fon fon 3.8K 1
frp frp 5.4K 1
hyw hyw 13.7K 1
kl kl 1 1
Simple English simple 261.5K 1
Belarusian be 261.6K 1
pam pam 9.8K 1
rmy rmy 613 1
stq stq 3.8K 1
French fr 2.7M 6
ilo ilo 15.3K 1
oc oc 85.7K 1
tt tt 613.9K 2
tum tum 12.9K 1
Egyptian Arabic arz 1.6M 4
ga ga 58.6K 1
gor gor 15.4K 1
jbo jbo 1.3K 1
kab kab 5.5K 1
li li 15.0K 1
nap nap 5.9K 1
ps ps 22.4K 1
ami ami 1.8K 1
Croatian hr 210.2K 1
kge kge 1.8K 1
lez lez 4.3K 1
roa_rup roa_rup 1.2K 1
srn srn 1.0K 1
tdd tdd 541 1
ts ts 1.0K 1
kaa kaa 9.8K 1
to to 1.6K 1
Ukrainian uk 1.4M 3
ang ang 4.3K 1
ay ay 4.7K 1
fj fj 1.1K 1
iu iu 510 1
ti ti 418 1
tok tok 3.1K 1
br br 77.5K 1
fiu_vro fiu_vro 6.5K 1
pwn pwn 444 1
Swedish sv 2.6M 6
testcommons testcommons 1 1
wa wa 12.1K 1
awa awa 3.8K 1
glk glk 48.5K 1
Greek el 267.5K 1
jam jam 1.5K 1
arc arc 1.6K 1
ff ff 27.2K 1
mrj mrj 8.5K 1
se se 5.2K 1
szl szl 55.7K 1
ik ik 338 1
io io 57.0K 1
Dutch nl 2.2M 5
Sinhala si 27.2K 1
zea zea 6.8K 1
zgh zgh 12.1K 1
alt alt 1.1K 1
South Azerbaijani azb 244.4K 1
bbc bbc 1.3K 1
kv kv 5.7K 1
pdc pdc 1.7K 1
tly tly 6.8K 1
tn tn 4.8K 1
Vietnamese vi 1.3M 3
ch ch 466 1
Welsh cy 283.6K 1
Gujarati gu 30.9K 1
mdf mdf 7.7K 1
mni mni 10.5K 1
Waray war 1.2M 3
dga dga 4.0K 1
om om 2.1K 1
tg tg 115.7K 1
thankyou thankyou 3 1
krc krc 2.1K 1
lb lb 62.9K 1
mh mh 7 1
qu qu 23.8K 1
ty ty 275 1
tyv tyv 4.2K 1
kai kai 671 1
nds_nl nds_nl 7.1K 1
nia nia 1.8K 1
Portuguese pt 1.1M 3
new new 72.6K 1
Thai th 181.2K 1
an an 62.1K 1
ig ig 53.3K 1
mi mi 7.7K 1
ady ady 720 1
German de 3.1M 7
ng ng 17 1
pap pap 5.2K 1
anp anp 3.1K 1
bar bar 24.7K 1
gn gn 5.7K 1
kus kus 1.6K 1
nrm nrm 4.5K 1
pnb pnb 66.6K 1
pnt pnt 578 1
rw rw 11.2K 1
gcr gcr 2.4K 1
gd gd 15.0K 1
gpe gpe 5.1K 1
Italian it 1.9M 4
kaj kaj 986 1
lbe lbe 919 1
Macedonian mk 160.0K 1
xal xal 1.3K 1
Catalan ca 787.8K 2
Kannada kn 35.1K 1
Lithuanian lt 218.1K 1
map_bms map_bms 5.5K 1
nov nov 1.6K 1
tcy tcy 3.7K 1
tig tig 359 1
xmf xmf 19.6K 1
ban ban 36.1K 1
cu cu 1.4K 1
gan gan 3.8K 1
lij lij 10.5K 1
scn scn 23.3K 1
vec vec 66.8K 1
Yiddish yi 15.1K 1
za za 1.7K 1
diq diq 36.8K 1
Finnish fi 613.5K 2
gom gom 4.3K 1
hak hak 8.7K 1
Indonesian id 749.6K 2
is is 57.6K 1
ln ln 4.3K 1
os os 19.1K 1
Czech cs 588.5K 2
dv dv 4.4K 1
ext ext 4.2K 1
ny ny 930 1
rn rn 566 1
Sanskrit sa 12.8K 1
Tagalog tl 48.3K 1
vote vote 7 1
avk avk 27.6K 1
Spanish es 2.0M 5
guc guc 893 1
lld lld 147.9K 1
ltg ltg 1.0K 1
rsk rsk 1.2K 1
Turkish tr 631.1K 2
ve ve 964 1
Arabic ar 1.3M 3
kg kg 1.3K 1
Khmer km 13.4K 1
ks ks 9.3K 1
lfn lfn 4.9K 1
lg lg 5.3K 1
nah nah 3.7K 1
sn sn 11.1K 1
co co 7.7K 1
bpy bpy 25.2K 1
cho cho 6 1
guw guw 1.8K 1
mus mus 2 1
fy fy 57.0K 1
ii ii 2 1
kcg kcg 1.7K 1
knc knc 2.2K 1
Swahili sw 107.0K 1
syl syl 955 1
English en 7.1M 15
Japanese ja 1.5M 3
ksh ksh 2.7K 1
Malay ms 383.5K 1
tet tet 1.3K 1
bjn bjn 11.4K 1
bug bug 10.6K 1
cv cv 57.6K 1
kj kj 3 1
Punjabi pa 59.7K 1
fur fur 4.7K 1
haw haw 2.3K 1
Armenian hy 325.4K 1
dag dag 14.3K 1
Georgian ka 192.6K 1
Marathi mr 101.3K 1
nup nup 1.3K 1
ppl ppl 872 1
ab ab 6.2K 1
as as 24.2K 1
pag pag 2.4K 1
rue rue 8.6K 1
am am 14.0K 1
ast ast 132.3K 1
ki ki 1.3K 1
Afrikaans af 126.9K 1
ak ak 1 1
roa_tara roa_tara 1.8K 1
sco sco 32.2K 1
Urdu ur 250.7K 1
bcl bcl 20.0K 1
Bosnian bs 96.7K 1
dz dz 1.3K 1
gag gag 2.2K 1
iba iba 2.4K 1
Latvian lv 139.5K 1
nr nr 551 1
Serbo-Croatian sh 445.0K 1
bi bi 778 1
hsb hsb 14.2K 1
Korean ko 738.9K 2
Minangkabau min 226.5K 1
pih pih 1 1
Albanian sq 105.7K 1
be_x_old be_x_old 90.0K 1
chy chy 469 1
got got 1.1K 1
gv gv 6.6K 1
ha ha 96.0K 1
kbd kbd 1.7K 1
nso nso 8.3K 1
rm rm 3.1K 1
ann ann 454 1
igl igl 1.2K 1
mad mad 6.6K 1
mg mg 99.5K 1
Malayalam ml 88.7K 1
Burmese my 98.2K 1
so so 11.0K 1
szy szy 4.6K 1
Azerbaijani az 206.9K 1
bh bh 9.0K 1
bm bm 594 1
Hungarian hu 563.3K 2
mai mai 15.0K 1
mnw mnw 3.8K 1
tay tay 2.8K 1
tpi tpi 666 1
trv trv 1.9K 1
ug ug 10.0K 1
zu zu 5.0K 1
ky ky 76.5K 1
Mongolian mn 30.2K 1
mwl mwl 4.4K 1
smn smn 6.3K 1
zh_classical zh_classical 13.1K 1
zh_yue zh_yue 123.5K 1
cbk_zam cbk_zam 3.1K 1
Danish da 309.6K 1
frr frr 19.6K 1
ku ku 87.4K 1
pcd pcd 5.8K 1
shn shn 15.0K 1
sm sm 1.0K 1
ten ten 534 1
eml eml 2.1K 1
Basque eu 466.4K 1
Hebrew he 375.0K 1
Nepali ne 30.1K 1
Slovenian sl 190.7K 1
udm udm 5.5K 1
vep vep 7.0K 1
av av 3.8K 1
ba ba 63.9K 1
ho ho 2 1
lmo lmo 75.5K 1
nqo nqo 1.6K 1
Romanian ro 510.8K 2
wo wo 1.3K 1
zh_min_nan zh_min_nan 418.0K 1
ace ace 11.7K 1
btm btm 1.2K 1
Esperanto eo 382.5K 1
ie ie 13.2K 1
Norwegian no 672.7K 2
skr skr 24.3K 1
bo bo 14.8K 1
Javanese jv 64.7K 1
lad lad 3.9K 1
myv myv 7.5K 1
pfl pfl 2.9K 1
sd sd 21.3K 1
shi shi 10.8K 1
ary ary 13.6K 1
Bengali bn 186.9K 1
gur gur 1.6K 1
mhr mhr 11.0K 1
sat sat 15.6K 1
Tamil ta 183.9K 1
csb csb 5.5K 1
din din 324 1
hif hif 9.6K 1
nds nds 84.7K 1
pms pms 70.8K 1
Serbian sr 708.1K 2
bat_smg bat_smg 17.1K 1
Cebuano ceb 6.1M 13
fat fat 2.1K 1
Hindi hi 171.6K 1
koi koi 3.3K 1
kw kw 5.5K 1
nv nv 22.4K 1
Polish pl 1.6M 4
cr cr 1 1
ee ee 1.1K 1
fo fo 12.3K 1
mt mt 7.7K 1
Russian ru 2.1M 5
sah sah 17.3K 1
yo yo 32.0K 1
kbp kbp 2.0K 1
sc sc 7.2K 1
Lao lo 6.0K 1
olo olo 4.6K 1
pi pi 123 1
rki rki 1.7K 1
vo vo 47.3K 1
wikifunctions wikifunctions 23.9K 1
atj atj 2.0K 1
Kazakh kk 246.0K 1
Latin la 139.4K 1
pcm pcm 1.7K 1
als als 31.6K 1
cdo cdo 8.8K 1
mos mos 1.6K 1
ss ss 1.3K 1
Telugu te 122.3K 1
chr chr 690 1
Sundanese su 60.6K 1
tw tw 5.0K 1
bew bew 3.4K 1
ht ht 66.1K 1
ia ia 27.0K 1
inh inh 2.2K 1
mzn mzn 64.1K 1
Odia or 20.8K 1
sg sg 213 1
Chinese zh 1.5M 4
bdr bdr 425 1
Chechen ce 846.6K 2
Galician gl 229.5K 1
nn nn 174.8K 1
Uzbek uz 310.5K 1
xh xh 2.5K 1
blk blk 3.3K 1
bxr bxr 2.8K 1
crh crh 26.4K 1
dtp dtp 2.0K 1
st st 1.8K 1
vls vls 8.1K 1
wuu wuu 47.6K 1
dsb dsb 3.2K 1
testwikidata testwikidata 131.0K 1
tk tk 6.8K 1
dty dty 3.9K 1
Persian fa 1.1M 3
Slovak sk 248.7K 1
aa aa 0 0
hz hz 0 0
kr kr 0 0
lrc lrc 0 0
na na 0 0

Schema

Every Parquet file shares the same schema:

Column Type Description
id int64 Wikipedia page ID, unique within each language edition
title string Article title as it appears on Wikipedia
markdown string Full article body converted from wikitext to Markdown
url string Direct URL to the Wikipedia article
lang string ISO 639 language code (e.g. en, de, fr, ja)
length int32 Markdown body length in bytes
timestamp string Last revision timestamp in ISO 8601 format

Example instance

Here is an example row from the English partition, showing a converted article:

{
  "id": 12,
  "title": "Anarchism",
  "markdown": "# Anarchism\n\n**Anarchism** is a political philosophy and movement that is against all forms of authority...",
  "url": "https://en.wikipedia.org/wiki/Anarchism",
  "lang": "en",
  "length": 87453,
  "timestamp": "2025-12-15T08:22:01Z"
}

The markdown field contains the full article text with Markdown formatting. Internal wiki links are converted to full Wikipedia URLs, so [[United States|US]] becomes [US](https://en.wikipedia.org/wiki/United_States).

Wikitext to Markdown conversion

The conversion handles the most common MediaWiki syntax elements and maps them to their Markdown equivalents:

MediaWiki syntax Markdown output
== Heading == ## Heading
=== Subheading === ### Subheading
'''bold''' **bold**
''italic'' *italic*
[[Page|Text]] [Text](https://lang.wikipedia.org/wiki/Page)
[https://example.com text] [text](https://example.com)
<syntaxhighlight lang="python">code</syntaxhighlight> ```python\ncode\n```
<code>x</code> `x`
<pre>block</pre> ```\nblock\n```
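
The mappings above can be approximated with a handful of regular expressions. This is a simplified sketch for illustration, not the dataset's actual conversion code; it covers headings, bold, italic, and internal links for a single language:

import re

def wikitext_to_markdown(text: str, lang: str = "en") -> str:
    # == Heading == -> ## Heading (the number of '=' signs becomes the number of '#' signs)
    text = re.sub(
        r"^(={2,6})\s*(.*?)\s*\1\s*$",
        lambda m: "#" * len(m.group(1)) + " " + m.group(2),
        text,
        flags=re.MULTILINE,
    )
    # '''bold''' -> **bold** (must run before the italic rule)
    text = re.sub(r"'''(.+?)'''", r"**\1**", text)
    # ''italic'' -> *italic*
    text = re.sub(r"''(.+?)''", r"*\1*", text)
    # [[Page|Text]] -> [Text](https://<lang>.wikipedia.org/wiki/Page)
    def link(m: re.Match) -> str:
        page, _, label = m.group(1).partition("|")
        url = "https://" + lang + ".wikipedia.org/wiki/" + page.replace(" ", "_")
        return "[" + (label or page) + "](" + url + ")"
    text = re.sub(r"\[\[([^\[\]]+)\]\]", link, text)
    return text

print(wikitext_to_markdown("== History ==\n'''Anarchism''' reached the [[United States|US]]."))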

What gets stripped

The following elements are removed during conversion to produce clean, readable text:

Element Handling
{{templates}} Removed entirely, including Infobox, Navbox, Taxobox, and all other templates
{{Infobox ...}} Removed, including nested template parameters
{| ... |} tables Removed entirely, including table markup and cell contents
<ref> citations Removed, including named references
[[File:]] / [[Image:]] Removed, including thumbnails and captions
[[Category:]] Removed
<!-- comments --> Removed
Interwiki links Removed
Magic words (__NOTOC__, __FORCETOC__, and similar directives) Removed

The goal is to preserve the article's readable content and structure while removing everything that only makes sense in the context of the MediaWiki rendering engine.
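
As a rough illustration of the stripping rules (again a simplified sketch, not the production pipeline), references, HTML comments, file and category links, and magic words can be removed with patterns like these:

import re

def strip_noise(text: str) -> str:
    # <!-- comments --> removed, including multi-line comments
    text = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
    # <ref>...</ref> and self-closing <ref name="..." /> removed
    text = re.sub(r"<ref[^>/]*/>|<ref[^>]*>.*?</ref>", "", text, flags=re.DOTALL | re.IGNORECASE)
    # [[File:...]] / [[Image:...]] removed (nested links inside captions are not handled here)
    text = re.sub(r"\[\[(?:File|Image):[^\[\]]*\]\]", "", text, flags=re.IGNORECASE)
    # [[Category:...]] removed
    text = re.sub(r"\[\[Category:[^\[\]]*\]\]", "", text, flags=re.IGNORECASE)
    # Magic words such as __NOTOC__ and __FORCETOC__ removed
    text = re.sub(r"__[A-Z]+__", "", text)
    return text

print(strip_noise('Intro.<ref name="a">Smith 2020</ref> __NOTOC__ [[Category:Philosophy]]'))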

How it works

The pipeline processes Wikipedia language editions through the following steps:

  1. Download. The latest {lang}wiki-latest-pages-articles.xml.bz2 dump is streamed from dumps.wikimedia.org. Downloads support HTTP range resumption, so interrupted transfers pick up where they left off.

  2. Parse. A streaming XML parser processes the bzip2-compressed dump without extracting it to disk. Only namespace-0 pages (articles) are kept. Redirects, talk pages, user pages, and all other namespaces are skipped.

  3. Convert. Each article's wikitext is converted to Markdown through a series of regex-based transformations. Templates are stripped with up to 5 nesting passes to handle deeply nested constructs (a simplified sketch of this step appears after the list). Internal wiki links are resolved to full Wikipedia URLs for the appropriate language edition.

  4. Filter. Articles shorter than 100 bytes after conversion are excluded. This removes stubs, disambiguation pages, and other pages with minimal content.

  5. Shard. Articles are written to Zstandard-compressed Parquet files, approximately 500,000 rows per shard. Multiple languages are processed in parallel using a worker pool.

  6. Publish. Each language's shards are committed to this Hugging Face repository as they complete. Commit messages include article counts, shard counts, and file sizes for auditability.
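
Steps 3 and 4 can be illustrated with a short sketch: the bounded template removal repeatedly deletes innermost {{...}} pairs, and the filter is a simple byte-length check. This is an approximation under those assumptions, not the pipeline's actual code:

import re

MAX_TEMPLATE_PASSES = 5   # mirrors the "up to 5 nesting passes" described in step 3
MIN_ARTICLE_BYTES = 100   # articles shorter than this after conversion are dropped (step 4)

def strip_templates(text: str) -> str:
    # Each pass removes templates that contain no further braces,
    # peeling deeply nested constructs away from the inside out.
    for _ in range(MAX_TEMPLATE_PASSES):
        stripped = re.sub(r"\{\{[^{}]*\}\}", "", text)
        if stripped == text:
            break
        text = stripped
    return text

def keep_article(markdown: str) -> bool:
    return len(markdown.encode("utf-8")) >= MIN_ARTICLE_BYTES

wikitext = "{{Infobox person|name={{PAGENAME}}}}Short intro about the subject."
print(strip_templates(wikitext))   # -> "Short intro about the subject."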

Considerations

Why Markdown instead of plain text?

Plain text is sufficient for many NLP tasks, but it loses document structure. Markdown preserves headings, bold, italic, code blocks, and links, which makes it better suited for:

  • Language model training where the model should understand document structure
  • Retrieval-augmented generation (RAG), where chunking by heading sections produces more coherent results (see the sketch below)
  • Knowledge graph construction where preserved links encode relationships between concepts

If you do not need formatting, the plain text variant is smaller and simpler.
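
As an illustration of the heading-based chunking mentioned above, the markdown field can be split on heading lines so that each chunk keeps its section title. A minimal sketch; chunk boundaries and sizes are a design choice, not part of the dataset:

import re

def chunk_by_headings(markdown: str) -> list[dict]:
    # Split a Markdown article into one chunk per heading, keeping the section title
    chunks, title, lines = [], "Introduction", []
    for line in markdown.splitlines():
        if re.match(r"^#{1,6} ", line):
            if lines:
                chunks.append({"title": title, "text": "\n".join(lines).strip()})
            title, lines = line.lstrip("#").strip(), []
        else:
            lines.append(line)
    if lines:
        chunks.append({"title": title, "text": "\n".join(lines).strip()})
    return chunks

article = "# Anarchism\n\nIntro text.\n\n## History\n\nEarly history text."
for chunk in chunk_by_headings(article):
    print(chunk["title"], "->", len(chunk["text"]), "chars")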

Known limitations

  • Conversion is regex-based, not a full parser. Some complex wikitext constructs (deeply nested tables inside templates, parser functions, Lua module output) may not convert perfectly. The vast majority of articles convert cleanly, but edge cases exist.
  • Templates are stripped, not expanded. Infoboxes, navigation boxes, and other templates are removed entirely rather than expanded to their rendered output. This means some structured data that appears in rendered Wikipedia pages is not present in this dataset.
  • One snapshot in time. This dataset represents a single snapshot of each language's dump. It does not track edit history or article revisions.
  • Dump availability varies. Not all language editions have their dumps available at all times. Languages whose dumps fail to download are skipped and will be included in future updates.

Related datasets

  • Plain text variant: the same articles as pure plain text with all formatting removed. Smaller files, better for embeddings and classification.
  • Wikitext variant: the same articles in the original MediaWiki wikitext markup. Use this if you need templates, tables, references, and other source elements.

Thanks

The content in this dataset was written by millions of Wikipedia editors worldwide and is hosted by the Wikimedia Foundation. The raw data comes from the Wikimedia database dumps, which the Foundation makes freely available for download.

Wikipedia is one of humanity's greatest collaborative achievements. All credit for the content goes to the volunteer editors who write, review, and maintain it.

This dataset is an independent conversion and is not affiliated with or endorsed by the Wikimedia Foundation.

Licensing

Wikipedia content is released under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). This dataset inherits that license. If you redistribute or build upon this data, you must give appropriate credit and share your contributions under the same license.

Citation

@dataset{open_wikipedia_markdown,
  title     = {Open Wikipedia (Markdown)},
  author    = {Open Index},
  year      = {2026},
  url       = {https://huggingface.co/datasets/open-index/open-wikipedia-markdown},
  license   = {CC BY-SA 4.0},
  publisher = {Hugging Face}
}

Last updated: 2026-04-24
