Hugging Face
openeurollm/tokenizer-128k
Maintained by OpenEuroLLM
Tags: tokenizer · sentencepiece · bpe · multilingual · european-languages · 38 languages
License: apache-2.0
Files and versions (branch: main; total size 2.39 MB)
1 contributor (timpal0l) · History: 3 commits
Latest commit a89eaa2 (verified, about 1 month ago): Add model card with description, usage, and evaluation results
.gitattributes            1.52 kB    initial commit · about 1 month ago
README.md                 4.3 kB     Add model card with description, usage, and evaluation results · about 1 month ago
added_tokens.json         22 Bytes   Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB) · about 1 month ago
special_tokens_map.json   16.6 kB    Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB) · about 1 month ago
tokenizer.model           2.35 MB    Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB) · about 1 month ago
tokenizer_config.json     22.5 kB    Upload 128k vocab SentencePiece BPE tokenizer (trained on 175GB) · about 1 month ago
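The files above (tokenizer.model, tokenizer_config.json, special_tokens_map.json, added_tokens.json) are the standard layout for a slow SentencePiece tokenizer on the Hub. A minimal loading sketch with the Hugging Face transformers library follows; this is an assumed usage pattern, not taken from the repo's README, and it needs the transformers and sentencepiece packages plus network access to fetch the repo:

```python
from transformers import AutoTokenizer

# Download and load the 128k-vocab SentencePiece BPE tokenizer.
# Repo id taken from this page; since the repo ships tokenizer.model
# (a SentencePiece file) rather than a fast tokenizer.json, the
# `sentencepiece` package is required.
tok = AutoTokenizer.from_pretrained("openeurollm/tokenizer-128k")

# Example sentence is illustrative only.
text = "OpenEuroLLM covers many European languages."
ids = tok.encode(text, add_special_tokens=False)
print(len(ids))
print(tok.decode(ids))
```

SentencePiece tokenizers are lossless on plain text, so decoding the ids should reproduce the input sentence.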