| Model | Task | Params | Updated | Downloads • Likes |
| --- | --- | --- | --- | --- |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-GPTQ-INT4 | Text Generation | 8B | Aug 7, 2024 | 9.03k • 40 |
| DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters | — | — | Jul 27 | 146 |
| DavidAU/L3.1-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-Uncensored-47B-GGUF | Text Generation | 47B | May 28 | 310 • 5 |
| DavidAU/L3.1-Dark-Reasoning-LewdPlay-evo-Hermes-R1-Uncensored-8B | Text Generation | 8B | Jul 28 | 258 • 30 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 | Text Generation | 410B | Sep 13, 2024 | 1.86k • 36 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 | Text Generation | 8B | Aug 7, 2024 | 166k • 80 |
| hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 | Text Generation | 71B | Aug 7, 2024 | 114k • 107 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4 | Text Generation | 410B | Aug 7, 2024 | 338 • 16 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4 | Text Generation | 423B | Sep 16, 2024 | 29 • 5 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-BNB-NF4 | Text Generation | 8B | Aug 8, 2024 | 458 • 8 |
| ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit | Text Generation | 8B | Jul 29, 2024 | 124 • 4 |
| ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit | Text Generation | 71B | Jul 27, 2024 | 30 • 4 |
| hugging-quants/Meta-Llama-3.1-70B-Instruct-GPTQ-INT4 | Text Generation | 71B | Aug 7, 2024 | 62.3k • 23 |
| sunnyyy/openbuddy-llama3.1-8b-v22.1-131k-Q4_K_M-GGUF | Text Generation | 8B | Jul 25, 2024 | 23 |
| azhiboedova/Meta-Llama-3.1-8B-Instruct-AQLM-2Bit-1x16 | Text Generation | 2B | Aug 28, 2024 | 12 • 13 |
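
As a minimal sketch (not taken from any of the model cards above), one of the pre-quantized checkpoints, here assumed to be `hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4`, can be loaded through the standard `transformers` API. This assumes `torch`, `transformers`, and the `autoawq` kernels are installed and a CUDA GPU with enough memory is available; the prompt text is illustrative only.

```python
# Sketch: loading an INT4 AWQ checkpoint with Transformers (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels compute in fp16
    device_map="auto",          # place layers on available GPUs automatically
)

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Summarize INT4 quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The GPTQ and BNB-NF4 variants in the list load the same way through `from_pretrained`, provided the matching backend (`gptqmodel`/`optimum` or `bitsandbytes`) is installed; the GGUF files are intended for llama.cpp-based runtimes instead.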