Merge method paper: *Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch* (arXiv:2311.03099).
This is a merge of pre-trained language models created using mergekit.
The models were merged with the DARE TIES merge method, using T145/KRONOS-8B-V5 as the base model.
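For intuition: DARE operates on each model's delta from the base model (its "task vector"). Each delta entry is kept with probability `density` and the survivors are rescaled by `1/density`, which preserves the delta's expected value; TIES-style sign election then resolves conflicts between the sparsified deltas before they are weighted and added back to the base. Below is a minimal sketch of the drop-and-rescale step for a single model, not mergekit's actual implementation; the tensors are hypothetical stand-ins for one layer's weights.

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Keep each delta entry with probability `density` and rescale survivors
    by 1/density, so the sparsified delta's expected value is unchanged."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# Hypothetical tensors standing in for one layer's base and fine-tuned weights.
base = torch.randn(64, 64)
finetuned = base + 0.01 * torch.randn(64, 64)

delta = finetuned - base                                  # the "task vector"
sparse_delta = dare_drop_and_rescale(delta, density=0.8)  # density as in the config
merged_layer = base + 0.25 * sparse_delta                 # weight as in the config
```

With `density: 0.8`, roughly 80% of each model's delta parameters survive the drop, which is why the three contributing models can be combined without their changes washing each other out.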
The following models were included in the merge:

* akjindal53244/Llama-3.1-Storm-8B
* arcee-ai/Llama-3.1-SuperNova-Lite
* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
The following YAML configuration was used to produce this model:
```yaml
base_model: T145/KRONOS-8B-V5
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
  - layer_range: [0, 32]
    model: akjindal53244/Llama-3.1-Storm-8B
    parameters:
      density: 0.8
      weight: 0.25
  - layer_range: [0, 32]
    model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.8
      weight: 0.33
  - layer_range: [0, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      density: 0.8
      weight: 0.42
  - layer_range: [0, 32]
    model: T145/KRONOS-8B-V5
tokenizer_source: base
```
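Assuming the YAML above is saved as `config.yaml`, a merge like this can be run through mergekit's Python API. The sketch below follows mergekit's documented usage; the output path and option values are illustrative.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Execute the merge; source weights are downloaded from the Hub as needed.
run_merge(
    config,
    out_path="./merged",        # illustrative output directory
    options=MergeOptions(
        cuda=True,              # run tensor ops on GPU if available
        copy_tokenizer=True,    # honors `tokenizer_source: base`
        lazy_unpickle=True,     # lowers peak memory while reading shards
    ),
)
```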
Open LLM Leaderboard evaluation results. Detailed results can be found here; summarized results can be found here.
| Metric | Value (%) |
|---|---|
| Avg. | 25.83 |
| IFEval (0-shot) | 55.51 |
| BBH (3-shot) | 31.85 |
| MATH Lvl 5 (4-shot) | 21.15 |
| GPQA (0-shot) | 5.48 |
| MuSR (0-shot) | 8.73 |
| MMLU-PRO (5-shot) | 32.24 |
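The merged model is a standard Llama-3.1-8B-architecture checkpoint, so it loads like any other causal LM in transformers. A minimal sketch; the repo id below is a placeholder, not this model's actual Hugging Face path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/your-merged-model"  # placeholder: substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge config's dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```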