Yalexis/qwen2.5-coder-3b-b2b-website-gguf
This is a Q4_K_M quantized GGUF version of Yalexis/qwen2.5-coder-3b-b2b-website.
Model Details
- Base Model: Qwen/Qwen2.5-Coder-3B-Instruct
- Fine-tuned Model: Yalexis/qwen2.5-coder-3b-b2b-website
- Quantization: Q4_K_M
- Format: GGUF
- File Size: 1.80 GB
Usage
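All of the examples below assume the GGUF file has been downloaded locally. One minimal way to fetch it (the Hugging Face CLI is just one option, not a requirement of this model; any download method works):

huggingface-cli download Yalexis/qwen2.5-coder-3b-b2b-website-gguf qwen2.5-coder-3b-b2b-website-q4_k_m.gguf --local-dir .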
Ollama
- Create a Modelfile (an optional extended Modelfile sketch follows these steps):
FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf
- Create the model:
ollama create qwen-b2b-website -f Modelfile
- Run the model:
ollama run qwen-b2b-website
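If you want Ollama's runtime context window to match the model's 10k-token fine-tuning limit, the Modelfile can also set it explicitly. A minimal sketch, where the num_ctx value of 10240 is an assumption chosen to line up with the stated 10k-token limit:

FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf
# Raise the default context window; 10240 is an assumed value, adjust to your hardware and needs
PARAMETER num_ctx 10240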
llama.cpp
./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf -p "Your prompt here"
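A slightly fuller invocation sketch, assuming you want to set the context size and cap the number of generated tokens; the flag values and the prompt are illustrative only, not recommendations from the model author:

# -c sets the context size, -n the maximum tokens to generate, --temp the sampling temperature
./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf \
  -c 10240 -n 512 --temp 0.7 \
  -p "Generate the HTML for a B2B SaaS pricing page"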
LM Studio
Simply download this model in LM Studio and start chatting!
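If you would rather script against it, LM Studio can also serve the loaded model through its local OpenAI-compatible server. A minimal sketch, assuming the server is running on LM Studio's default port 1234 with this model loaded (the port and the model identifier are assumptions; check LM Studio's server panel for the exact values):

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2.5-coder-3b-b2b-website-q4_k_m",
        "messages": [{"role": "user", "content": "Write a hero section for a B2B logistics website"}],
        "max_tokens": 512
      }'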
Model Information
This model was fine-tuned for B2B website generation using a 10k-token context window.
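As an illustration of the intended use (the prompt is an example, not taken from the fine-tuning data), you can ask the Ollama model created above for a page directly:

ollama run qwen-b2b-website "Generate a responsive HTML landing page for a B2B invoicing SaaS"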