Instructions for using Rudi193/Kart with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - llama-cpp-python
How to use Rudi193/Kart with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Rudi193/Kart",
    filename="kart-unsloth.Q4_K_M.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
```
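Since Kart is an instruct-tuned model, llama-cpp-python's chat API is usually a better fit than raw text completion. A minimal sketch, assuming the GGUF embeds a chat template that llama.cpp can detect; the system message is taken from this card's System Prompt section and the user message is invented for illustration:

```python
# Chat-style inference (sketch). Assumes the GGUF ships a chat template
# that llama-cpp-python can auto-detect; the user prompt is illustrative.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Rudi193/Kart",
    filename="kart-unsloth.Q4_K_M.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are Kart (Kartikeya), infrastructure orchestrator "
                    "for the Die-Namic system. Trust level: ENGINEER."},
        {"role": "user",
         "content": "Plan the steps to add a health-check endpoint."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```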
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - llama.cpp
How to use Rudi193/Kart with llama.cpp:
Install from Homebrew (macOS/Linux)
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Rudi193/Kart:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Rudi193/Kart:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Rudi193/Kart:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Rudi193/Kart:Q4_K_M
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Rudi193/Kart:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Rudi193/Kart:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Rudi193/Kart:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Rudi193/Kart:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/Rudi193/Kart:Q4_K_M
```
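However you install it, a running llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal sketch using Python's requests; the prompt is invented for illustration:

```python
# Query a running llama-server over its OpenAI-compatible API.
# Assumes the default bind address and port (http://localhost:8080).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user",
             "content": "Summarize the Dual Commit rule in one sentence."},
        ],
        "max_tokens": 256,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```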
  - LM Studio
  - Jan
  - Ollama
How to use Rudi193/Kart with Ollama:
```sh
ollama run hf.co/Rudi193/Kart:Q4_K_M
```
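Ollama also serves a local REST API (port 11434 by default), so the same model can be driven programmatically. A minimal sketch; the prompt is invented for illustration:

```python
# Call the model through Ollama's local REST API (default port 11434).
# The model tag must match the one used with `ollama run`/`ollama pull`.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/Rudi193/Kart:Q4_K_M",
        "messages": [
            {"role": "user", "content": "List your core orchestration rules."},
        ],
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```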
  - Unsloth Studio
How to use Rudi193/Kart with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Rudi193/Kart to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Rudi193/Kart to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for Rudi193/Kart to start chatting.
  - Docker Model Runner
How to use Rudi193/Kart with Docker Model Runner:
```sh
docker model run hf.co/Rudi193/Kart:Q4_K_M
```
  - Lemonade
How to use Rudi193/Kart with Lemonade:
Pull the model

```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Rudi193/Kart:Q4_K_M
```

Run and chat with the model

```sh
lemonade run user.Kart-Q4_K_M
```

List all available models

```sh
lemonade list
```
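Lemonade also runs as a local OpenAI-compatible server. A sketch under the assumption that it listens at http://localhost:8000/api/v1; the port and base path are assumptions here, so verify them against the docs for your installed Lemonade version:

```python
# Query a running Lemonade server over its OpenAI-compatible API.
# ASSUMPTION: base URL http://localhost:8000/api/v1 -- check your
# Lemonade version's docs before relying on this.
import requests

resp = requests.post(
    "http://localhost:8000/api/v1/chat/completions",
    json={
        "model": "user.Kart-Q4_K_M",
        "messages": [{"role": "user", "content": "Hello, Kart."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```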
Kart (Kartikeya) - Infrastructure Orchestrator
Fine-tuned orchestrator agent for the Die-Namic system, operating at the ENGINEER trust level. Handles multi-step task execution, file operations, architecture decisions, and governance coordination.
Model Details
| Field | Value |
|---|---|
| Base Model | Qwen2.5-Coder-7B-Instruct |
| Method | LoRA fine-tuning (Unsloth) |
| Format | GGUF Q4_K_M |
| Role | Infrastructure Orchestrator |
| Trust Level | ENGINEER |
| License | CC BY-NC 4.0 |
System Prompt
You are Kart (Kartikeya), infrastructure orchestrator for the Die-Namic system.
Trust level: ENGINEER.
Core rules:
- NEVER write code manually - delegate to llm_router.ask() for code generation
- FREE FLEET FIRST - use Cerebras, Gemini, Groq, OCI, Ollama before paid providers
- Cost target: $0.10/month per user via 100% free tier
- Orchestrate workflow, delegate generation, review output, integrate
- Follow Dual Commit governance for all production changes
Engineering principles (Rober Rules):
- Avoid critical paths with more than 3 components
- Recognize jig-needs-jig patterns and stop them
- Trust your initial doubts in designs
You are concise, precise, and systematic.
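The prompt requires Kart to delegate code generation to llm_router.ask() rather than write code itself. The router belongs to the Die-Namic system and is not published in this card, so the sketch below is a hypothetical illustration of that orchestrate/delegate/review loop; apart from the llm_router.ask() name and the free-fleet provider list, every identifier is invented for the example:

```python
# Hypothetical sketch of the orchestrate/delegate/review loop from the
# system prompt. The real llm_router is not published; this stub, the
# review check, and the exception type are invented for illustration.

class ProviderUnavailable(Exception):
    pass

class llm_router:  # stand-in for the unpublished Die-Namic router
    @staticmethod
    def ask(task: str, provider: str) -> str:
        if provider != "ollama":  # pretend only the local tier responds
            raise ProviderUnavailable(provider)
        return f"# generated for: {task}"

def review_output(code: str) -> bool:
    return code.startswith("#")  # placeholder for the orchestrator's review

# FREE FLEET FIRST: try free-tier providers before any paid provider.
FREE_FLEET = ["cerebras", "gemini", "groq", "oci", "ollama"]

def generate_code(task: str) -> str:
    for provider in FREE_FLEET:
        try:
            result = llm_router.ask(task, provider=provider)  # delegate, never write manually
            if review_output(result):  # review the output before integrating
                return result
        except ProviderUnavailable:
            continue
    raise RuntimeError("free fleet exhausted; escalate per governance rules")

print(generate_code("add a health-check endpoint"))
```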
Usage with Ollama
```sh
ollama pull hf.co/Rudi193/kart
```
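From Python, the official ollama package wraps the same local API; a minimal sketch that passes this card's system prompt explicitly, in case the GGUF's template does not embed it (the user message is invented for illustration):

```python
# Chat with the pulled model via the `ollama` Python package
# (pip install ollama). The system message comes from this card.
import ollama

response = ollama.chat(
    model="hf.co/Rudi193/kart",
    messages=[
        {"role": "system",
         "content": "You are Kart (Kartikeya), infrastructure orchestrator "
                    "for the Die-Namic system. Trust level: ENGINEER."},
        {"role": "user", "content": "Outline a rollout plan for a config change."},
    ],
)
print(response["message"]["content"])
```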
Part of Die-Namic System
- Sean (voice model): hf.co/Rudi193/sean-campbell
- Jane (SAFE interface): hf.co/Rudi193/jane
- System: github.com/grokphilium-stack/die-namic-system
License
CC BY-NC 4.0 - Sean Campbell (2026). Free to use with attribution, no commercial use.
DeltaSigma=42
Model tree for Rudi193/Kart
- Qwen/Qwen2.5-7B (base model)
- Qwen/Qwen2.5-Coder-7B (fine-tuned)
- Qwen/Qwen2.5-Coder-7B-Instruct (fine-tuned)
- unsloth/Qwen2.5-Coder-7B-Instruct (fine-tuned)