---
base_model:
- ValiantLabs/Qwen3-4B-Esper3
- ValiantLabs/Qwen3-4B-ShiningValiant3
- ertghiu256/Qwen3-Hermes-4b
- ertghiu256/qwen-3-4b-mixture-of-thought
- ertghiu256/qwen3-4b-code-reasoning
- janhq/Jan-v1-4B
- Qwen/Qwen3-4B-Thinking-2507
- ertghiu256/Qwen3-4b-2507-Thinking-math-and-code
- quelmap/Lightning-4b
- GetSoloTech/Qwen3-Code-Reasoning-4B
- Qwen/Qwen3-4b-Instruct-2507
- ertghiu256/qwen3-multi-reasoner
- Tesslate/WEBGEN-4B-Preview
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
- ertghiu256/qwen3-math-reasoner
- ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
- Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2
- Tesslate/UIGEN-FX-4B-Preview
- POLARIS-Project/Polaris-4B-Preview
library_name: transformers
tags:
- mergekit
- merge
---
# Tcomanr-V2_6

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) as the base model.
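At a high level, TIES merges models in three steps per parameter tensor: trim each model's delta from the base to its largest-magnitude entries, elect a per-coordinate sign from the surviving mass, and average only the values that agree with that sign. The sketch below illustrates those steps on a single flat vector in plain Python; the function name and signature are illustrative, not mergekit's actual API, which operates on full tensors.

```python
# Toy sketch of TIES (trim, elect sign, disjoint merge) on one flat
# parameter vector. Illustrative only -- not mergekit's implementation.

def ties_merge(base, models, weights, density=0.5, lam=1.0):
    n = len(base)
    # 1. Weighted task vectors: each model's delta from the base.
    deltas = [[w * (m[i] - base[i]) for i in range(n)]
              for m, w in zip(models, weights)]
    # 2. Trim: keep only the top `density` fraction of entries by magnitude.
    k = max(1, int(density * n))
    for d in deltas:
        cutoff = sorted((abs(x) for x in d), reverse=True)[k - 1]
        for i in range(n):
            if abs(d[i]) < cutoff:
                d[i] = 0.0
    merged = []
    for i in range(n):
        # 3. Elect a sign per coordinate from the summed trimmed mass.
        total = sum(d[i] for d in deltas)
        sign = 1.0 if total >= 0 else -1.0
        # 4. Disjoint mean: average only values agreeing with the elected sign.
        agreeing = [d[i] for d in deltas if d[i] * sign > 0]
        step = sum(agreeing) / len(agreeing) if agreeing else 0.0
        merged.append(base[i] + lam * step)
    return merged
```

The sign election is what lets TIES combine many fine-tunes without their conflicting updates cancelling each other, which matters for a merge of nineteen models like this one.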
### Models Merged

The following models were included in the merge:
* [ValiantLabs/Qwen3-4B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3)
* [ValiantLabs/Qwen3-4B-ShiningValiant3](https://huggingface.co/ValiantLabs/Qwen3-4B-ShiningValiant3)
* [ertghiu256/Qwen3-Hermes-4b](https://huggingface.co/ertghiu256/Qwen3-Hermes-4b)
* [ertghiu256/qwen-3-4b-mixture-of-thought](https://huggingface.co/ertghiu256/qwen-3-4b-mixture-of-thought)
* [ertghiu256/qwen3-4b-code-reasoning](https://huggingface.co/ertghiu256/qwen3-4b-code-reasoning)
* [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B)
* [ertghiu256/Qwen3-4b-2507-Thinking-math-and-code](https://huggingface.co/ertghiu256/Qwen3-4b-2507-Thinking-math-and-code)
* [quelmap/Lightning-4b](https://huggingface.co/quelmap/Lightning-4b)
* [GetSoloTech/Qwen3-Code-Reasoning-4B](https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B)
* [Qwen/Qwen3-4b-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4b-Instruct-2507)
* [ertghiu256/qwen3-multi-reasoner](https://huggingface.co/ertghiu256/qwen3-multi-reasoner)
* [Tesslate/WEBGEN-4B-Preview](https://huggingface.co/Tesslate/WEBGEN-4B-Preview)
* [huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated)
* [ertghiu256/qwen3-math-reasoner](https://huggingface.co/ertghiu256/qwen3-math-reasoner)
* [ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3](https://huggingface.co/ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3)
* [Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2)
* [Tesslate/UIGEN-FX-4B-Preview](https://huggingface.co/Tesslate/UIGEN-FX-4B-Preview)
* [POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ertghiu256/qwen3-math-reasoner
    parameters:
      weight: 0.85
  - model: ertghiu256/qwen3-4b-code-reasoning
    parameters:
      weight: 0.9
  - model: ertghiu256/qwen-3-4b-mixture-of-thought
    parameters:
      weight: 1.0
  - model: POLARIS-Project/Polaris-4B-Preview
    parameters:
      weight: 1.0
  - model: ertghiu256/qwen3-multi-reasoner
    parameters:
      weight: 0.85
  - model: ertghiu256/Qwen3-Hermes-4b
    parameters:
      weight: 0.7
  - model: ValiantLabs/Qwen3-4B-Esper3
    parameters:
      weight: 0.8
  - model: Tesslate/WEBGEN-4B-Preview
    parameters:
      weight: 1.0
  - model: Tesslate/UIGEN-FX-4B-Preview
    parameters:
      weight: 0.95
  - model: ValiantLabs/Qwen3-4B-ShiningValiant3
    parameters:
      weight: 0.8
  - model: huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
    parameters:
      weight: 0.85
  - model: Qwen/Qwen3-4B-Thinking-2507
    parameters:
      weight: 1.0
  - model: Qwen/Qwen3-4b-Instruct-2507
    parameters:
      weight: 1.0
  - model: GetSoloTech/Qwen3-Code-Reasoning-4B
    parameters:
      weight: 0.95
  - model: ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
    parameters:
      weight: 1.0
  - model: janhq/Jan-v1-4B
    parameters:
      weight: 0.25
  - model: Goekdeniz-Guelmez/Josiefied-Qwen3-4B-Instruct-2507-gabliterated-v2
    parameters:
      weight: 0.85
  - model: quelmap/Lightning-4b
    parameters:
      weight: 0.75
  - model: ertghiu256/Qwen3-4b-2507-Thinking-math-and-code
    parameters:
      weight: 1.0
merge_method: ties
base_model: Qwen/Qwen3-4B-Thinking-2507
parameters:
  normalize: true
  int8_mask: true
  lambda: 1.0
dtype: float16
```
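Because the configuration sets `normalize: true`, mergekit rescales the per-model weights so they sum to 1 before the deltas are combined, so the raw values above express relative rather than absolute influence. A minimal illustration of that rescaling in plain Python (not mergekit's API):

```python
# Effect of `normalize: true`: the nineteen per-model weights from the
# config above are rescaled to sum to 1 before the merge is applied.
weights = [0.85, 0.9, 1.0, 1.0, 0.85, 0.7, 0.8, 1.0, 0.95, 0.8,
           0.85, 1.0, 1.0, 0.95, 1.0, 0.25, 0.85, 0.75, 1.0]
total = sum(weights)
normalized = [w / total for w in weights]
print(round(total, 2))    # combined raw weight across all models
print(max(normalized))    # largest single-model share after rescaling
```

For example, the weight-1.0 models each end up contributing a share of roughly 1/16.5 ≈ 0.06, while janhq/Jan-v1-4B at 0.25 contributes about a quarter of that.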