# Yalexis/qwen2.5-coder-3b-b2b-website-gguf

This is a Q4_K_M quantized GGUF version of Yalexis/qwen2.5-coder-3b-b2b-website.

## Model Details

- **Base Model:** Qwen/Qwen2.5-Coder-3B-Instruct
- **Fine-tuned Model:** Yalexis/qwen2.5-coder-3b-b2b-website
- **Quantization:** Q4_K_M
- **Format:** GGUF
- **Architecture:** qwen2
- **File Size:** 1.80 GB

## Usage

### Ollama

1. Create a `Modelfile`:

   ```
   FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf
   ```

2. Create the model:

   ```shell
   ollama create qwen-b2b-website -f Modelfile
   ```

3. Run the model:

   ```shell
   ollama run qwen-b2b-website
   ```
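By default, Ollama uses a context window smaller than the 10k tokens this fine-tune was trained with, so it can be worth setting `num_ctx` in the `Modelfile`. A minimal sketch; `10240` is an assumed value for "10k", and the temperature setting is illustrative, not from the model card:

```
FROM ./qwen2.5-coder-3b-b2b-website-q4_k_m.gguf

# Raise the context window toward the fine-tune's 10k-token window
# (10240 is an assumed interpretation of "10k")
PARAMETER num_ctx 10240

# Optional sampling default; illustrative value only
PARAMETER temperature 0.7
```

Recreate the model with `ollama create qwen-b2b-website -f Modelfile` after editing so the new parameters take effect.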

### llama.cpp

```shell
./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf -p "Your prompt here"
```
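`llama-cli` also accepts flags for context size and generation length. A hedged example that requests the fine-tune's 10k-token context (`10240` is an assumed value for "10k"; the token cap, temperature, and prompt are illustrative):

```
# -c sets the context size, -n caps generated tokens, --temp sets sampling temperature
./llama-cli -m qwen2.5-coder-3b-b2b-website-q4_k_m.gguf \
  -c 10240 -n 512 --temp 0.7 \
  -p "Generate a landing page outline for a B2B SaaS product"
```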

### LM Studio

Simply download this model in LM Studio and start chatting!

## Model Information

This model was fine-tuned for B2B website generation with a 10k-token context window.
