GPT-OSS-Code-Reasoning-20B

Serve with Docker (SGLang)
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "GetSoloTech/GPT-OSS-Code-Reasoning-20B" \
--host 0.0.0.0 \
--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "GetSoloTech/GPT-OSS-Code-Reasoning-20B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
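Since the endpoint is OpenAI-compatible, you can also call it from Python with the openai client. A minimal sketch, assuming the server above is running on localhost:30000 (the api_key is a placeholder; it is only checked if the server was launched with one):

from openai import OpenAI

# Point the client at the local SGLang server.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="GetSoloTech/GPT-OSS-Code-Reasoning-20B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)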
Model Details

- Base model: openai/gpt-oss-20b
- Training data: nvidia/OpenCodeReasoning-2 (OCR-2), combining the python and cpp splits. Each sample reconstructs the upstream question and uses the dataset's r1_generation as the assistant response.
- Training method: supervised fine-tuning with SFTTrainer.

Prompt Format

This model was trained in a chat format. Recommended structure:
messages = [
{"role": "system", "content": "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful."},
{"role": "user", "content": problem_text},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
If you prefer plain text, place the problem text after a brief instruction, but chat format generally yields better results.
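As an illustration, a plain-text prompt might look like the following (the instruction wording is just an example, not a format the model requires):

# Plain-text alternative: a brief instruction followed by the problem statement.
prompt = (
    "Solve the following competitive programming problem with a correct, efficient solution.\n\n"
    + problem_text
)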
Specify reasoning effort in apply_chat_template (supported values: "low", "medium" (default), or "high"):
messages = [
{"role": "system", "content": "Always respond in riddles"},
{"role": "user", "content": "Explain why the meaning of life is 42"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="high",
).to(model.device)
generated = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(generated[0][inputs["input_ids"].shape[-1]:]))

Full usage example (Transformers):

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "GetSoloTech/GPT-OSS-Code-Reasoning-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
problem_text = """
You are given an array of integers ... (your problem here)
"""
messages = [
{"role": "system", "content": "You are an expert competitive programmer. Read the problem and produce a correct, efficient solution. Include reasoning if helpful."},
{"role": "user", "content": problem_text},
]
input_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
reasoning_effort="medium",
)
inputs = tokenizer([input_text], return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=768,
temperature=0.3,
top_p=0.9,
repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
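The decoded output typically mixes reasoning with a final solution. A small helper like this can pull out the last fenced code block; this is a sketch that assumes the model wraps its solution in triple backticks, which is common but not guaranteed:

import re

def extract_last_code_block(text: str) -> str | None:
    # Match fenced blocks such as ```python ... ``` (language tag optional).
    blocks = re.findall(r"```(?:\w+)?\n(.*?)```", text, re.DOTALL)
    return blocks[-1].strip() if blocks else None

solution = extract_last_code_block(tokenizer.decode(outputs[0], skip_special_tokens=True))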
Generation tips

- max_new_tokens: 512–1024 for full solutions; shorter for hints.

Training data and script

- Source dataset: nvidia/OpenCodeReasoning-2, using the python and cpp splits, with --take_samples examples taken per split.
- Questions are reconstructed from the upstream source when missing (open-r1/codeforces).
- Each example stores chat messages and a formatted text field rendered with the tokenizer's chat template (see the sketch below).
- The data is divided with train_test_split according to --eval_ratio.

Training stack

- Unsloth (FastLanguageModel) for efficient 4-bit loading and fast PEFT.
- TRL (SFTTrainer) for straightforward supervised fine-tuning.
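As a rough illustration of the preprocessing above, each dataset row could be turned into a chat example along these lines (the system prompt, the "question" field name, and the helper itself are assumptions for illustration; only r1_generation comes from the dataset description):

def to_chat_example(row, tokenizer):
    # "question" is assumed to hold the reconstructed problem statement;
    # r1_generation holds the reasoning-bearing assistant response.
    messages = [
        {"role": "system", "content": "You are an expert competitive programmer."},
        {"role": "user", "content": row["question"]},
        {"role": "assistant", "content": row["r1_generation"]},
    ]
    # Keep both the raw messages and the text rendered with the chat template.
    text = tokenizer.apply_chat_template(messages, tokenize=False)
    return {"messages": messages, "text": text}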
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "GetSoloTech/GPT-OSS-Code-Reasoning-20B" \
--host 0.0.0.0 \
--port 30000

# Call the server using curl (OpenAI-compatible API), as in the example above.