---
library_name: transformers
pipeline_tag: text-generation
license: mit
language:
- en
---

# Pretrained Historical Model (1850-1880)

This model was trained with the [BabyLlama2](https://arxiv.org/abs/2409.17312v1) training recipe, distilling from two teacher models:

- [Teacher 1](https://huggingface.co/Hplm/teacher_1_1850_1880)
- [Teacher 2](https://huggingface.co/Hplm/teacher_2_1850_1880)

It was trained on 10M words from the Gutenberg corpus attributed to the period 1850-1880.

### Model Sources

- **Repository:** https://github.com/comp-int-hum/historical-perspectival-lm
- **Paper (arXiv):** https://arxiv.org/abs/2504.05523
- **Paper (Hugging Face):** https://huggingface.co/papers/2504.05523

## Loading the Model

Load the model and tokenizer like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Hplm/student_1850_1880", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("Hplm/student_1850_1880")
```

## License

This model is released under the [MIT](https://choosealicense.com/licenses/mit/) license.

## Citation

```
@article{fittschen_diachroniclanguagemodels_2025,
  title = {Pretraining Language Models for Diachronic Linguistic Change Discovery},
  author = {Fittschen, Elisabeth and Li, Sabrina and Lippincott, Tom and Choshen, Leshem and Messner, Craig},
  year = {2025},
  month = apr,
  eprint = {2504.05523},
  primaryclass = {cs.CL},
  publisher = {arXiv},
  doi = {10.48550/arXiv.2504.05523},
  url = {https://arxiv.org/abs/2504.05523},
  urldate = {2025-04-14},
  archiveprefix = {arXiv},
  journal = {arXiv:2504.05523 [cs.CL]}
}
```
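
## Example: Generating Text

As a quick sanity check after loading, the model can be sampled like any causal LM in `transformers`. A minimal sketch follows; the prompt text and sampling settings (`max_new_tokens`, `temperature`) are illustrative choices, not values from the paper:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 1850-1880 student model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "Hplm/student_1850_1880", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("Hplm/student_1850_1880")

# Encode a short, period-appropriate prompt (illustrative)
inputs = tokenizer("The railway carriage", return_tensors="pt")

# Sample a continuation and decode it back to text
outputs = model.generate(
    **inputs, max_new_tokens=30, do_sample=True, temperature=0.8
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the model was pretrained only on 1850-1880 text, generations should reflect the vocabulary and style of that period.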