In a Training Loop 🔄

Stefan Schweter PRO

stefan-it

AI & ML interests

Flair Library 💕, NER & PoS Tagging, LM Pretraining (mostly encoder-only & encoder-decoder), Historical Language Models, German Language Models, Bavarian NLP 🥨

Recent Activity

reacted to martinsu's post with 🔥 27 minutes ago
I wasted days on a GPU node chasing a bug that shouldn't exist. I was fine-tuning TildeOPEN-30B and the outputs were... weird. Token ID 179 (<0x00>) kept appearing between almost every token pair. It took me a while to figure out what was going on: I had used the fast tokenizer for fine-tuning, but the model had been trained with the slow one. Silent failure. Long story short, TGI uses (forces) the fast tokenizer, no questions asked, and you get the worst kind of bug: a silent one. If the model was trained with the slow tokenizer, it's a silent disaster.

I got curious and wrote a quick script to check how common this is, and ran it on 6,014 LLM models on the HF Hub overnight. Roughly 10% of HF model downloads have mismatched tokenizers. Not all mismatches are catastrophic, but some are brutal, like chat-template markers inflating from 1 token to 3, silently wrecking context windows and causing the model to act weird. This wasn't rigorous research, but the drift is real. And the worst part? 968 models (out of those with 500+ downloads) ship both fast and slow tokenizers, yet the two still produce different outputs. No missing files, no errors, just silent degradation.

TGI defaults to the fast tokenizer, as does AutoTokenizer.from_pretrained(). If a fast tokenizer doesn't exist, it auto-generates one. If your model was trained with the slow tokenizer, you get silent degradation: the output looks fine, the model just performs worse, sometimes much worse, and you'd never know. If the model was trained with the fast tokenizer, it's fine, but how do you know?

The root cause? Either model authors run the HF conversion and upload both variants without verifying them, or users run TGI, which always forces (converts to) the fast tokenizer.

The result of this fight with tokenizers is https://huggingface.co/martinsu/tildeopen-30b-mu-instruct. It's based on TildeOPEN-30B (a solid EU HPC multilingual base). Nothing fancy, just a proper instruction fine-tune where I didn't mess up the tokenizer this time. Full article: https://github.com/martins-u/tokenmagedon
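For anyone who wants to run this check on their own models: below is a minimal sketch (not martinsu's script) that loads both tokenizer variants of a repo and compares their output on a few probe strings. The model id and probe strings are placeholders, and it assumes the repo actually ships both a fast and a slow tokenizer.

```python
# Minimal sketch: compare the fast and slow tokenizer of one Hugging Face repo
# and flag any drift between them. Not martinsu's original script.
# The model id and probe strings below are placeholders.
from transformers import AutoTokenizer

model_id = "your-org/your-model"  # placeholder, substitute the repo you care about

fast = AutoTokenizer.from_pretrained(model_id, use_fast=True)
slow = AutoTokenizer.from_pretrained(model_id, use_fast=False)

probes = [
    "Hello, world!",
    "Umlauts and emoji edge cases: ÄÖÜ ß 🥨",
    "<s>[INST] What is the capital of Latvia? [/INST]",  # chat-template-style markers
]

for text in probes:
    ids_fast = fast.encode(text, add_special_tokens=False)
    ids_slow = slow.encode(text, add_special_tokens=False)
    if ids_fast == ids_slow:
        print(f"OK       {text!r} -> {len(ids_fast)} tokens")
    else:
        print(f"MISMATCH {text!r}")
        print(f"  fast ({len(ids_fast)} tokens): {fast.convert_ids_to_tokens(ids_fast)}")
        print(f"  slow ({len(ids_slow)} tokens): {slow.convert_ids_to_tokens(ids_slow)}")
```

If any probe prints MISMATCH, the two tokenizer variants in the repo disagree, which is exactly the silent-degradation scenario described above.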
liked a model 4 days ago
Cognitive-Lab/NetraEmbed
liked a model 4 days ago
Cognitive-Lab/ColNetraEmbed

Organizations

Bayerische Staatsbibliothek · flair · Flax Community · dumitrescustefan-org · GermanT5 · BigScience: LMs for Historical Texts · BigLAM: BigScience Libraries, Archives and Museums · Universal NER · Libre Euro Lingua-Alliance · Lang UK · BabyLM Challenge · hmByT5 Preliminary · hmByT5 · Blog-explorers · German Wikipedia LMs · hmBERT · hmTEAMS · HIPE · hmBERT Tiny · hmBERT 64k · LSV @ Saarland University · GERMATRON · PleIAs · German LLM Tokenizers · Occiglot · Social Post Explorers · GERTuraX · Stefmal · Hugging Face Discord Community · Project German LLM · ENGEBA · Nerdy Face · TensorFlow Model Garden LMs · Hugging Face MCP Course · Bavarian NLP · Baivaria · SindBERT · German Tokenizer Benchmark · German MaxText SLMs