---
license: apache-2.0
tags:
- moe
- mergekit
- merge
- chinese
- arabic
- english
- multilingual
- german
- french
- openchat/openchat-3.5-1210
- beowolx/CodeNinja-1.0-OpenChat-7B
- maywell/PiVoT-0.1-Starling-LM-RP
- WizardLM/WizardMath-7B-V1.1
- davidkim205/komt-mistral-7b-v1
- OpenBuddy/openbuddy-zephyr-7b-v14.1
- manishiitg/open-aditi-hi-v1
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
---
# MetaModel_moe_multilingualv2
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [davidkim205/komt-mistral-7b-v1](https://huggingface.co/davidkim205/komt-mistral-7b-v1)
* [OpenBuddy/openbuddy-zephyr-7b-v14.1](https://huggingface.co/OpenBuddy/openbuddy-zephyr-7b-v14.1)
* [manishiitg/open-aditi-hi-v1](https://huggingface.co/manishiitg/open-aditi-hi-v1)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
## 🧩 Configuration
```yaml
base_model: mlabonne/NeuralMarcoro14-7B
dtype: bfloat16
experts:
- positive_prompts:
  - chat
  - assistant
  - tell me
  - explain
  source_model: openchat/openchat-3.5-1210
- positive_prompts:
  - code
  - python
  - javascript
  - programming
  - algorithm
  source_model: beowolx/CodeNinja-1.0-OpenChat-7B
- positive_prompts:
  - storywriting
  - write
  - scene
  - story
  - character
  source_model: maywell/PiVoT-0.1-Starling-LM-RP
- positive_prompts:
  - reason
  - math
  - mathematics
  - solve
  - count
  source_model: WizardLM/WizardMath-7B-V1.1
- positive_prompts:
  - korean
  - answer in korean
  - korea
  source_model: davidkim205/komt-mistral-7b-v1
- positive_prompts:
  - chinese
  - china
  - answer in chinese
  source_model: OpenBuddy/openbuddy-zephyr-7b-v14.1
- positive_prompts:
  - hindi
  - india
  - hindu
  - answer in hindi
  source_model: manishiitg/open-aditi-hi-v1
- positive_prompts:
  - german
  - germany
  - answer in german
  - deutsch
  source_model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
gate_mode: hidden
```
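To reproduce the merge, a configuration like the one above can be saved as `config.yaml` and passed to the `mergekit-moe` script that the mixtral branch of mergekit provides. A minimal sketch is shown below; it assumes mergekit's mixtral branch is installed and exposes the `mergekit-moe` command-line entry point, and the output directory name is illustrative:

```python
# Reproduction sketch (assumption: mergekit's mixtral branch is installed,
# which provides the `mergekit-moe` command-line entry point).
import subprocess

# "config.yaml" holds the expert configuration shown above; the output
# directory name is illustrative.
subprocess.run(
    ["mergekit-moe", "config.yaml", "./MetaModel_moe_multilingualv2"],
    check=True,
)
```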
## 💻 Usage
```python
# Install dependencies (bitsandbytes is required for 4-bit loading)
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gagan3012/MetaModel_moe_multilingualv2"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit so it fits on a single consumer GPU
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
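Because the gate was initialized from the `positive_prompts` in the configuration above (`gate_mode: hidden`), prompts that match an expert's domain should, in principle, route toward that expert. A small sketch reusing the `pipeline` object from the snippet above; the prompts are illustrative, not from the original card:

```python
# Illustrative prompts aimed at two of the configured expert domains:
# math (WizardMath) and German (SauerkrautLM). Routing happens inside the
# MoE gate at inference time; these prompts only exercise it.
for content in [
    "Solve for x: 3x + 5 = 20.",
    "Answer in German: what is a Mixture of Experts?",
]:
    messages = [{"role": "user", "content": content}]
    prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipeline(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    print(outputs[0]["generated_text"])
```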