This repository contains a fine-tuned version of the Gemma model, part of the GemMoE (Gemma Mixture of Experts) family of models. For more information about GemMoE, please refer to the [official documentation](https://huggingface.co/Crystalcareai/GemMoE-Beta-1).
You can use this fine-tuned model like any other Hugging Face model. Simply load it with the `from_pretrained` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Crystalcareai/gemma-coder")
tokenizer = AutoTokenizer.from_pretrained("Crystalcareai/gemma-coder")
```
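Once loaded, the model can be used for generation. Here is a minimal sketch; the prompt and `max_new_tokens` value are illustrative and not from the original card:

```python
# Minimal generation sketch; the prompt and max_new_tokens are illustrative.
inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```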
Alternatively, you can use a `pipeline` as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Crystalcareai/gemma-coder")
```
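A call might then look like this; the prompt and parameters below are placeholders:

```python
# Example call; the prompt and max_new_tokens are illustrative.
result = pipe("Write a Python function that reverses a string.", max_new_tokens=128)
print(result[0]["generated_text"])
```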