Active filters: mixtral
Each result lists, where shown: model • task • parameters • downloads • likes.

utter-project/EuroMoE-2.6B-A0.6B-Instruct-2512 • Text Generation • 3B • 14 • 6
mistralai/Mixtral-8x7B-Instruct-v0.1 • 47B • 482k • 4.64k
mistralai/Mixtral-8x7B-v0.1 • 47B • 76.3k • 1.79k
fireworks-ai/firefunction-v1 • Text Generation • 47B • 16 • 126
mistralai/Mixtral-8x22B-Instruct-v0.1 • 141B • 9.6k • 745
blascotobasco/L3.2-Ascendant-Prime-16E-A6B • 35B • 22 • 2
NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT • Text Generation • 47B • 135 • 59
cloudyu/Mixtral_34Bx2_MoE_60B • Text Generation • 61B • 7.17k • 112
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO • Text Generation • 47B • 7.76k • 453
(model name missing) • Text Generation • 47B • 39 • 54
alpindale/WizardLM-2-8x22B • Text Generation • 141B • 8.16k • 409
(model name missing) • Text Generation • 36.4M • 308k • 5
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters • 158
DavidAU/Mistral-MOE-4X7B-Dark-MultiVerse-Uncensored-Enhanced32-24B-gguf • Text Generation • 24B • 4.73k • 101
DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE • Text Generation • 60
utter-project/EuroMoE-2.6B-A0.6B-2512 • Text Generation • 3B • 20 • 3
SIP-med-LLM/SIP-jmed-llm-3-8x13b-AC-32k-instruct • 73B • 430 • 6
DavidAU/Llama3.2-24B-A3B-II-Dark-Champion-INSTRUCT-Heretic-Abliterated-Uncensored • Text Generation • 18B • 135 • 4
DavidAU/Llama3.2-30B-A3B-II-Dark-Champion-INSTRUCT-Heretic-Abliterated-Uncensored • Text Generation • 30B • 941 • 8
NLPark/Shi-Cis-Kestrel-uncensored • Text Generation • 141B • 574 • 2
(model name missing) • Text Generation • 2B • 3
TheBlokeAI/Mixtral-tiny-GPTQ • Text Generation • 0.2B • 17 • 3
LoneStriker/Mixtral-8x7B-Instruct-v0.1-HF • Text Generation • 3 • 4
LoneStriker/Mixtral-8x7B-v0.1-HF • Text Generation • 5
TheBloke/Mixtral-8x7B-v0.1-GGUF • 47B • 4.38k • 434
TheBloke/Mixtral-8x7B-v0.1-GPTQ • Text Generation • 47B • 1.65k • 127
mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-2bit_g16_s128-HQQ • Text Generation • 28 • 9
mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-4bit_g64-HQQ • Text Generation • 13 • 9
marcsun13/Mixtral-tiny-GPTQ • Text Generation • 0.2B • 4
mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-2bit_g16_s128-HQQ • Text Generation • 27 • 4
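
A listing like the one above can also be produced programmatically. The sketch below is a minimal example assuming the huggingface_hub Python client: it searches the Hub for "mixtral" and prints each result in the model • task • downloads • likes layout used here. The sort order and the limit of 30 are illustrative assumptions, not taken from the page itself.

from huggingface_hub import HfApi

api = HfApi()

# Search the Hub for models matching "mixtral".
# Sorting by downloads and limiting to 30 results are assumptions for illustration.
for m in api.list_models(search="mixtral", sort="downloads", direction=-1, limit=30):
    downloads = m.downloads if m.downloads is not None else 0
    likes = m.likes if m.likes is not None else 0
    print(f"{m.id} • {m.pipeline_tag} • {downloads} • {likes}")

Each ModelInfo object returned by list_models carries the repo id, pipeline tag, and download and like counts, which is the same metadata shown per entry in the listing.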