---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
base_model:
- mistralai/Mistral-7B-v0.1
- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
model-index:
- name: Daredevil-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.37
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.17
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 64.09
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 81.29
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-7B
      name: Open LLM Leaderboard
---
# Daredevil-7B

Daredevil-7B is a DARE-TIES merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🏆 Evaluation

### Open LLM Leaderboard

Daredevil-7B scores an average of **73.36** on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Daredevil-7B).

| Metric                           | Value |
|----------------------------------|------:|
| Avg.                             | 73.36 |
| AI2 Reasoning Challenge (25-Shot)| 69.37 |
| HellaSwag (10-Shot)              | 87.17 |
| MMLU (5-Shot)                    | 65.30 |
| TruthfulQA (0-shot)              | 64.09 |
| Winogrande (5-shot)              | 81.29 |
| GSM8k (5-shot)                   | 72.93 |
### Nous

Results on the Nous benchmark suite (AGIEval, GPT4All, TruthfulQA, and Bigbench). The best score in each column is underlined.

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Daredevil-7B**](https://huggingface.co/mlabonne/Daredevil-7B)| **44.85**| **76.07**| <u>**64.89**</u>| **47.07**| <u>**58.22**</u>|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| <u>47.79</u>| 74.69| 55.92| 44.84| 55.81|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)| 44.66| <u>76.24</u>| 64.15| 45.64| 57.67|
|[CatMarcoro14-7B-slerp](https://huggingface.co/occultml/CatMarcoro14-7B-slerp)| 45.21| 75.91| 63.81| <u>47.31</u>| 58.06|

See the complete evaluation [here](https://gist.github.com/mlabonne/cd03d60f7428450a87ca270b5c467324).
## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: samir-fama/SamirGPT-v1
    parameters:
      density: 0.53
      weight: 0.4
  - model: abacusai/Slerp-CM-mist-dpo
    parameters:
      density: 0.53
      weight: 0.3
  - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
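To reproduce the merge locally, the configuration above can be passed to [mergekit](https://github.com/arcee-ai/mergekit). The snippet below is a minimal sketch, assuming the YAML is saved as `config.yaml` and mergekit is installed with `pip install mergekit`; the `mergekit-yaml` entry point and the `--copy-tokenizer` flag come from mergekit's CLI and may change between versions.

```python
# Minimal sketch: run the DARE-TIES merge above with mergekit's CLI.
# Assumes `pip install mergekit` and that the YAML config is saved as config.yaml.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",     # mergekit's config-driven entry point
        "config.yaml",       # the merge configuration shown above
        "./Daredevil-7B",    # output directory for the merged weights
        "--copy-tokenizer",  # copy the base model's tokenizer into the output
    ],
    check=True,
)
```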
## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Daredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt from the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the merged model with the 🤗 Transformers pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
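If VRAM is limited, the model can also be loaded in 4-bit. The following is a minimal sketch, assuming `bitsandbytes` is installed (`pip install -qU bitsandbytes`) and a CUDA GPU is available; the sampling parameters mirror the example above.

```python
# Optional: 4-bit loading to reduce memory usage (assumes bitsandbytes + a CUDA GPU).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "mlabonne/Daredevil-7B"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```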