QuantLLM/functiongemma-270m-it-4bit-mlx
Pipeline: Text Generation
Tags: MLX · Safetensors · Transformers · English · gemma3_text · quantllm · mlx-lm · apple-silicon · q4_k_m · conversational · text-generation-inference · 8-bit precision · bitsandbytes
License: apache-2.0
Branch: main · 476 MB · 1 contributor · History: 2 commits
Latest commit: 0a76aa1 (verified) by codewithdark — "Upload model via QuantLLM" — 10 days ago
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.57 kB | Upload model via QuantLLM | 10 days ago |
| CONVERT_TO_MLX.md | 240 Bytes | Upload model via QuantLLM | 10 days ago |
| README.md | 4.15 kB | Upload model via QuantLLM | 10 days ago |
| added_tokens.json | 63 Bytes | Upload model via QuantLLM | 10 days ago |
| chat_template.jinja | 13.8 kB | Upload model via QuantLLM | 10 days ago |
| config.json | 1.84 kB | Upload model via QuantLLM | 10 days ago |
| generation_config.json | 176 Bytes | Upload model via QuantLLM | 10 days ago |
| model.safetensors (xet) | 436 MB | Upload model via QuantLLM | 10 days ago |
| special_tokens_map.json | 706 Bytes | Upload model via QuantLLM | 10 days ago |
| tokenizer.json (xet) | 33.4 MB | Upload model via QuantLLM | 10 days ago |
| tokenizer.model (xet) | 4.69 MB | Upload model via QuantLLM | 10 days ago |
| tokenizer_config.json | 1.16 MB | Upload model via QuantLLM | 10 days ago |