
Experimental layer-wise quantization of mistralai/Devstral-Small-2505, with layers 4 and 5 pruned.

Using LLaMA C++ release b5890 for quantization.

Original model: mistralai/Devstral-Small-2505

From the original model creators:

Devstral Small 1.0

Devstral is an agentic LLM for software engineering tasks built under a collaboration between Mistral AI and All Hands AI 🙌. Devstral excels at using tools to explore codebases, editing multiple files and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open source model on this benchmark.

It is fine-tuned from Mistral-Small-3.1 and therefore has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only: the vision encoder was removed from Mistral-Small-3.1 before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our blog post.

PLEASE READ THIS BEFORE USING THESE EXPERIMENTAL VERSIONS!

An area of personal interest is finding ways to optimize the inference performance of LLMs when deployed in resource-constrained environments like commodity hardware, desktops, laptops, mobiles, edge devices, etc. There are many approaches to accomplish this, including architecture simplification and knowledge distillation, but my focus has been primarily on quantization and pruning.

The method used to produce these experimental versions is covered in Squeezing Tensor Bits: the quest for smaller LLMs, but at a high level it involves using a custom version of llama-imatrix to identify influential tensors, quantizing the most important layers to higher bit precision and the less important ones to lower bits, and removing (pruning) one or more layers. This process was partly inspired by Dumitru et al.'s Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels, and Xin Men et al.'s ShortGPT: Layers in Large Language Models are More Redundant Than You Expect.

As of version b5125, llama-quantize can perform tensor-wide quantization (TWQ), whereby user-defined tensors are quantized at a specific level, or perform layer-wise quantization (LWQ) by selecting different quantization types per tensor/layer. For example, --tensor-type attn_v=q6_k will quantize all Attention Value tensors at q6_k (TWQ), whilst --tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k" will quantize Attention Key tensors on layers 0 to 12, 15, 17 and 31 at q4_k, leaving the remaining layers at their default value (LWQ).
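As a minimal sketch, the two modes look like this on the command line (the imatrix and model file names are illustrative):

    # TWQ: quantize every Attention Value tensor at q6_k
    llama-quantize --tensor-type attn_v=q6_k --imatrix imatrix.dat model-f32.gguf model-q4_k_m.gguf q4_k_m

    # LWQ: quantize Attention Key tensors at q4_k on layers 0 to 12, 15, 17 and 31 only
    llama-quantize --tensor-type "\.([0-9]|1[01257]|31)\.attn_k=q4_k" --imatrix imatrix.dat model-f32.gguf model-q4_k_m.gguf q4_k_m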

As of version b5740, llama-quantize can also prune models during quantisation by providing a comma-separated list in the --prune-layers command line option. The pruning operation will renumber remaining layers to avoid gaps in the sequence, update the relevant model metadata and, if an imatrix is available, it will use the correct importance score vector. This option can be used alongside --tensor-type to perform tensor/layer-wise quantization on selected tensor types, whilst at the same time pruning others. For example:

llama-quantize --tensor-type attn=q6_k --prune-layers 3,7,11 --imatrix imatrix.dat model-f32.gguf model-q4_k_m.gguf q4_k_m

An enhanced version of llama-imatrix generates useful statistics to guide the tensor and layer selection process. The --show-statistics option will display:

  • Σ(Act²): the sum of all squared activations over the tensor (i.e. the Importance Scores)
  • Min & Max: minimum and maximum squared activation values
  • μ & σ: activations' mean and standard deviation
  • % Active: proportion of elements whose average squared activation exceeds a very small threshold (1e-5). Helpful to determine how alive/dormant the tensor is during inference
  • N: number of squared activations in the tensor
  • Entropy: entropy of the squared activation distribution, in bits (standard Shannon entropy measurement)
  • E (norm): normalized entropy
  • ZD Score: z-score distribution as described in 3.1 Layer Importance Scores in the Layer-Wise Quantization paper
  • CosSim: cosine similarity between tensors of the same type with respect to the previous layer (i.e. blk.7.attn_k vs. blk.6.attn_k)

Please note that statistics are calculated for each individual tensor and should be used to compare between tensors of the same type only. For example, assuming that attn_k in layer 10 has a higher influence during inference than attn_k in layer 7 because its Σ(Act²) is larger makes sense, whilst concluding the same between attn_k and ffn_down does not.
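For reference, a typical invocation might look like the sketch below, assuming the modified build accepts --in-file to load an existing imatrix (file names are illustrative):

    # generate an imatrix, then display the per-tensor statistics
    llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
    llama-imatrix --in-file imatrix.dat --show-statistics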

There’s a pull request to merge these changes back into the core llama.cpp project. This may or may not ever happen so, until then, the modified version will be available on GitHub.

For testing and comparison I use models produced by Unsloth (Daniel and Michael Han do some really advanced-level stuff!) and Bartowski (see credits below), but if they don't provide versions of the required model, all tests and comparisons are done against naive quantizations obtained by simply running llama-quantize with no further optimization.

All experimental versions were generated using an appropriate imatrix created from calibration datasets available at eaddario/imatrix-calibration. At its core, an Importance Matrix (imatrix) is a table or, more broadly, a structured representation that scores the relative importance of different features or parameters in a machine learning model. It essentially quantifies the "impact" each feature has on a specific outcome, prediction, or relationship being modelled, and it helps to counterbalance the negative effects of quantization and pruning.

The process to generate these models is roughly as follows (a condensed command-line sketch follows the footnote below):

  1. Convert the original model's tensors to GGUF F16*
  2. Estimate the Perplexity score for the F16 model (baseline) using the wikitext-2-raw-v1 dataset, and save the logits
  3. Generate an imatrix from selected calibration datasets
  4. Determine tensor and layer Importance Score contribution using the enhanced version of llama-imatrix
  5. Select an appropriate quant level for each tensor and quantize/prune the model using llama-quantize. In this model's case, layers 4 and 5 have been pruned
  6. Calculate Perplexity, KL Divergence, ARC (Easy+Challenge), HellaSwag, MMLU, Truthful QA and WinoGrande scores for each quantized model
  7. Keep versions with the best scores
  8. Repeat until all desired quants are created. I find that quantizations below Q3/IQ3 are not fit for my purposes and therefore do not usually generate them, but I'm happy to provide other quants on request.

*BF16 would be preferred, but Apple's GPUs don't support it yet, and therefore any operations are executed on the CPU, making it unacceptably slow. This is expected to change in the near term, but until then, if you are using Apple kit, avoid any models tagged BF16.
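A condensed sketch of steps 1 to 5, assuming llama.cpp's stock conversion script, with illustrative file, dataset and quant names:

    # 1. convert the original safetensors model to GGUF F16
    python convert_hf_to_gguf.py Devstral-Small-2505/ --outtype f16 --outfile devstral-f16.gguf

    # 2. baseline perplexity on wikitext-2-raw-v1, saving the F16 logits for the later KL Divergence runs
    llama-perplexity -m devstral-f16.gguf -f wikitext-2-raw-v1.txt --kl-divergence-base devstral-f16.logits

    # 3. generate an imatrix from a calibration dataset
    llama-imatrix -m devstral-f16.gguf -f calibration.txt -o devstral.imatrix

    # 5. quantize with a per-tensor override and prune layers 4 and 5 in one pass
    llama-quantize --tensor-type attn_v=q6_k --prune-layers 4,5 --imatrix devstral.imatrix devstral-f16.gguf devstral-q4_k_m.gguf q4_k_m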

Models

Sizes (in GB)

Model Bartowski Unsloth Repo Shrinkage
Devstral-Small-2505-pruned-IQ3_M 10.7 N/A 10.3 3.7%
Devstral-Small-2505-pruned-IQ3_S 9.9 N/A 9.6 3.1%
Devstral-Small-2505-pruned-IQ4_NL 13.5 13.5 12.5 7.4%
Devstral-Small-2505-pruned-Q3_K_L 12.4 N/A 10.9 12.1%
Devstral-Small-2505-pruned-Q3_K_M 11.5 11.5 10.0 13.0%
Devstral-Small-2505-pruned-Q3_K_S 10.4 10.4 9.0 13.5%
Devstral-Small-2505-pruned-Q4_K_M 14.3 14.3 12.5 12.6%
Devstral-Small-2505-pruned-Q4_K_S 13.5 13.5 11.6 14.1%
Devstral-Small-2505-pruned-Q5_K_M 16.8 16.8 15.3 8.9%
Devstral-Small-2505-pruned-Q5_K_S 16.3 16.3 14.5 11.0%
Devstral-Small-2505-pruned-Q6_K 19.3 19.3 18.9 2.1%
Devstral-Small-2505-pruned-Q8_0 25.1 25.1 20.3 19.1%

Bits per Weight, Perplexity and KL Divergence scores

Model BPW μPPL 𝜌PPL μKLD RMS Δp
Devstral-Small-2505-pruned-IQ3_M 3.6735 25.296399 ±0.207824 68.77% 1.589584 ±0.004716 40.684 ±0.088
Devstral-Small-2505-pruned-IQ3_S 3.4052 33.072181 ±0.266788 66.25% 1.858256 ±0.004862 43.468 ±0.087
Devstral-Small-2505-pruned-IQ4_NL 4.4435 25.870345 ±0.201328 67.87% 1.610465 ±0.004621 42.340 ±0.088
Devstral-Small-2505-pruned-Q3_K_L 3.8705 25.391481 ±0.200241 68.62% 1.590939 ±0.004603 41.557 ±0.088
Devstral-Small-2505-pruned-Q3_K_M 3.5467 25.452478 ±0.201209 68.69% 1.593512 ±0.004603 41.653 ±0.088
Devstral-Small-2505-pruned-Q3_K_S 3.2060 25.605200 ±0.202837 69.46% 1.599039 ±0.004488 41.245 ±0.088
Devstral-Small-2505-pruned-Q4_K_M 4.4492 21.875174 ±0.172411 70.04% 1.440707 ±0.004475 39.909 ±0.088
Devstral-Small-2505-Q4_K_M-bartowski 4.8620 5.151812 ±0.029360 99.47% 0.020939 ±0.000193 4.633 ±0.046
Devstral-Small-2505-Q4_K_M-unsloth 4.8620 5.153395 ±0.029344 99.47% 0.021015 ±0.000194 4.649 ±0.047
Devstral-Small-2505-pruned-Q4_K_S 4.1390 22.721916 ±0.178813 69.70% 1.478272 ±0.004493 40.367 ±0.088
Devstral-Small-2505-pruned-Q5_K_M 5.4555 20.789833 ±0.163143 70.86% 1.391234 ±0.004397 39.303 ±0.088
Devstral-Small-2505-pruned-Q5_K_S 5.1466 21.078977 ±0.165583 70.53% 1.404281 ±0.004435 39.453 ±0.088
Devstral-Small-2505-pruned-Q6_K 6.7205 20.781395 ±0.162964 70.85% 1.390876 ±0.004395 39.291 ±0.087
Devstral-Small-2505-pruned-Q8_0 7.2324 20.620931 ±0.161512 71.05% 1.383438 ±0.004372 39.180 ±0.087
Devstral-Small-2505-pruned-F16 16.0003 5.050590 ±0.028422 100% N/A N/A

ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores

Scores generated using llama-perplexity with 750 tasks per test, and a context size of 768 tokens.

For the test data used in the generation of these scores, follow the appropriate links: HellaSwag, ARC, MMLU, Truthful QA and WinoGrande
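As an illustration, a HellaSwag run at these settings might look like the following (model and dataset file names are placeholders):

    # score 750 HellaSwag tasks with a context size of 768 tokens
    llama-perplexity -m devstral-q4_k_m.gguf -f hellaswag_val.txt --hellaswag --hellaswag-tasks 750 -c 768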

Model ARC HellaSwag MMLU Truthful QA WinoGrande Avg Score
Devstral-Small-2505-pruned-IQ3_M 62.4000 ±1.7699 77.87 38.9333 ±1.7816 31.8667 ±1.7026 70.9333 ±1.6591 56.40
Devstral-Small-2505-pruned-IQ3_S 61.7333 ±1.7759 77.73 40.6667 ±1.7948 31.3333 ±1.6949 71.8667 ±1.6430 56.67
Devstral-Small-2505-pruned-IQ4_NL 60.5333 ±1.7860 78.80 39.7333 ±1.7880 36.0000 ±1.7539 72.2667 ±1.6358 57.47
Devstral-Small-2505-pruned-Q3_K_L 58.6667 ±1.7993 77.33 42.4000 ±1.8057 35.2000 ±1.7451 72.6667 ±1.6284 57.25
Devstral-Small-2505-pruned-Q3_K_M 60.0000 ±1.7900 77.07 43.4667 ±1.8113 35.4667 ±1.7481 72.8000 ±1.6260 57.76
Devstral-Small-2505-pruned-Q3_K_S 55.3333 ±1.8165 77.20 40.5333 ±1.7939 34.1333 ±1.7325 69.7333 ±1.6787 55.39
Devstral-Small-2505-pruned-Q4_K_M 59.2000 ±1.7958 78.00 40.2667 ±1.7920 34.5333 ±1.7374 71.0667 ±1.6569 56.61
Devstral-Small-2505-Q4_K_M-bartowski 66.9333 ±1.7190 82.80 43.2000 ±1.8100 35.8667 ±1.7525 79.8667 ±1.4652 61.73
Devstral-Small-2505-Q4_K_M-unsloth 66.9333 ±1.7190 82.80 43.6000 ±1.8119 36.4000 ±1.7581 79.8667 ±1.4652 61.92
Devstral-Small-2505-pruned-Q4_K_S 60.9333 ±1.7827 77.87 40.4000 ±1.7930 35.0667 ±1.7436 72.2667 ±1.6358 57.31
Devstral-Small-2505-pruned-Q5_K_M 59.6000 ±1.7930 77.60 41.2000 ±1.7984 35.0667 ±1.7436 73.3333 ±1.6158 57.36
Devstral-Small-2505-pruned-Q5_K_S 59.8667 ±1.7910 77.47 41.3333 ±1.7993 34.2667 ±1.7342 72.9333 ±1.6235 57.17
Devstral-Small-2505-pruned-Q6_K 61.0667 ±1.7816 77.73 41.2000 ±1.7984 34.9333 ±1.7420 72.1333 ±1.6382 57.41
Devstral-Small-2505-pruned-Q8_0 60.6667 ±1.7849 77.33 41.3333 ±1.7993 34.6667 ±1.7389 72.4000 ±1.6334 57.28
Devstral-Small-2505-pruned-F16 66.8000 ±1.7207 83.47 43.7333 ±1.8126 36.0000 ±1.7539 79.7333 ±1.4688 61.95

Tokens per Second - Benchmarks

Scores generated using llama-bench. Naive (llama-quantize with no optimization) Q4_K_M quantization included for comparison.

model size params backend threads test t/s
Devstral-Small-2505-pruned-Q4_K_M 11.63 GiB 22.46 B Metal,BLAS 12 pp512 266.12 ±6.25
Devstral-Small-2505-pruned-Q4_K_M 11.63 GiB 22.46 B Metal,BLAS 12 tg128 28.07 ±0.33
Devstral-Small-2505-pruned-Q4_K_M 11.63 GiB 22.46 B Metal,BLAS 12 pp1024+tg1024 47.14 ±0.22
Devstral-Small-2505-Q4_K_M-bartowski 13.34 GiB 23.57 B Metal,BLAS 12 pp512 254.77 ±13.57
Devstral-Small-2505-Q4_K_M-bartowski 13.34 GiB 23.57 B Metal,BLAS 12 tg128 27.64 ±0.39
Devstral-Small-2505-Q4_K_M-bartowski 13.34 GiB 23.57 B Metal,BLAS 12 pp1024+tg1024 45.62 ±0.16
Devstral-Small-2505-Q4_K_M-unsloth 13.34 GiB 23.57 B Metal,BLAS 12 pp512 259.37 ±5.28
Devstral-Small-2505-Q4_K_M-unsloth 13.34 GiB 23.57 B Metal,BLAS 12 tg128 28.43 ±0.23
Devstral-Small-2505-Q4_K_M-unsloth 13.34 GiB 23.57 B Metal,BLAS 12 pp1024+tg1024 45.62 ±0.15
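The figures above correspond to an invocation along these lines (model path illustrative):

    # prompt processing (pp512), token generation (tg128) and mixed (pp1024+tg1024) tests on 12 threads
    llama-bench -m Devstral-Small-2505-pruned-Q4_K_M.gguf -t 12 -p 512 -n 128 -pg 1024,1024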

Metrics used

Perplexity: one of the key metrics used in NLP evaluation. It measures the quality of a language model by evaluating how well it predicts the next token given a particular sequence of words. A PPL of 1 indicates an exact match between predicted and actual, whereas values greater than one indicate the degree of "surprise" when the generated token differs from the expected one.
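For reference, the standard definition over a sequence of N tokens is:

    PPL = exp( -(1/N) · Σᵢ log p(xᵢ | x₁ … xᵢ₋₁) )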

Kullback–Leibler (KL) Divergence: a statistical measure of how much one probability distribution differs from another. When quantizing models (or altering the original tensors in any way, for that matter), the closer we can keep the model's probability distribution to the original's, the better, so values closer to 0 are better.
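In its discrete form, with P the original model's token distribution and Q the quantized model's:

    D_KL(P ‖ Q) = Σᵢ P(i) · log( P(i) / Q(i) )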

AI2 Reasoning Challenge (ARC): a benchmark to evaluate the ability of AI models to answer complex science questions that require logical reasoning beyond pattern matching.

HellaSwag: the Harder Endings, Longer contexts, and Low-shot Activities for Situations With Adversarial Generations (bit of a mouthful!) is a benchmark designed to test commonsense natural language inference. It requires the model to predict the most likely ending of a sentence.

MMLU: the Massive Multitask Language Understanding benchmark evaluates LLMs’ general knowledge and problem-solving abilities across 57 subjects, including elementary mathematics, US history, computer science, and law.

Truthful QA: evaluates how well LLMs generate truthful responses to questions. It identifies whether AI models can avoid generating false or misleading information, particularly in areas where human knowledge is prone to misconceptions.

Winogrande: based on the Winograd Schema Challenge, this is a natural language understanding task requiring models to resolve ambiguities in sentences involving pronoun references.

Credits

LLaMA C++ has a large and vibrant community of contributors (~1,200 last time I checked) that actively maintains and extends its functionality, adding new models and architectures almost as fast as they appear (considering the breakneck speed at which the AI/ML field is advancing, this alone is a remarkable feat!). Whilst I'm grateful to each and every one of them, I want to recognise three people in particular: Thank You! to Colin Kealty for the many contributions and for being one of the best sources of high-quality quantized models available on Hugging Face; a really big Thank You! to Georgi Gerganov for his amazing work with llama.cpp and the ggml/gguf libraries; and to Iwan Kawrakow for being one of the key authors behind the many quantisation algorithms and the imatrix functionality.
