# vqwen3-4b-pretrain (MLP stage-1 feature-alignment foundation)
Stage-1 checkpoint: only the 2-layer MLP projector has been trained. The model can emit plain BLIP-style captions, but it has not been instruction-tuned and is not intended as a deployable chat model; it is the pretraining foundation for downstream stage-2 fine-tunes.

For the stage-2 instruction-tuned bundle (same architecture, trained on LLaVA-Instruct-150K), see alpharomercoma/vqwen3-4b.

Loads as a stock `LlavaForConditionalGeneration`; no `trust_remote_code` needed.
## Architecture
- Vision tower: `openai/clip-vit-large-patch14-336` (frozen)
- Projector: 2-layer MLP (`mlp2x_gelu`): `Linear 1024→2560 → GELU → Linear 2560→2560` (trained; the only delta)
- LLM: `Qwen/Qwen3-4B` (frozen)
Trainable parameter count: ~9 M (the two Linear layers). Everything else is loaded unchanged from its base checkpoint.
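For concreteness, a minimal PyTorch sketch of the projector described above (assuming LLaVA's standard `mlp2x_gelu` layout; not the released training code):

```python
import torch.nn as nn

# Projector sketch: maps CLIP ViT-L/14-336 patch features (1024-d)
# into Qwen3-4B's embedding space (2560-d).
projector = nn.Sequential(
    nn.Linear(1024, 2560),
    nn.GELU(),
    nn.Linear(2560, 2560),
)

# (1024*2560 + 2560) + (2560*2560 + 2560) = 9,180,160 ≈ 9.2 M parameters
n_params = sum(p.numel() for p in projector.parameters())
print(f"{n_params / 1e6:.2f} M trainable parameters")
```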
## Training recipe
Dataset: `liuhaotian/LLaVA-Pretrain`, 558 K BLIP-captioned image–text pairs from LAION/CC/SBU. The conversation format is plain (no chat template, no system prompt): `<image>` on the human turn, the caption on the assistant turn, with the loss masked on the human side.
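An illustrative sketch of that example format and loss masking (a hypothetical helper, not the released data pipeline):

```python
IGNORE_INDEX = -100  # standard "ignore" label for cross-entropy loss

def build_plain_example(tokenizer, caption: str):
    """Hypothetical stage-1 'plain' example: <image> human turn + caption turn."""
    human = "<image>\n"  # no chat template, no system prompt
    human_ids = tokenizer(human, add_special_tokens=False).input_ids
    caption_ids = tokenizer(caption, add_special_tokens=False).input_ids
    input_ids = human_ids + caption_ids
    # Loss is masked on the human side: only caption tokens are supervised.
    labels = [IGNORE_INDEX] * len(human_ids) + caption_ids
    return input_ids, labels
```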
Hyperparameters (single NVIDIA H200, bf16, SDPA, Liger-Kernel):
| Hyperparameter | Value |
|---|---|
| Global batch size | 256 |
| Learning rate | 1e-3, cosine schedule, warmup ratio 0.03 |
| Weight decay | 0.0 |
| Epochs | 1 |
| Max sequence length | 2048 |
| Precision | bf16 |
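These map onto `transformers.TrainingArguments` roughly as follows (a sketch: only the global batch size of 256 is documented, so the per-device / gradient-accumulation split on the single H200 is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="vqwen3-4b-pretrain",
    per_device_train_batch_size=32,  # assumed split: 32 x 8 = 256 global
    gradient_accumulation_steps=8,
    learning_rate=1e-3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    weight_decay=0.0,
    num_train_epochs=1,
    bf16=True,
)
```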
## What this model can do
- Emit short captions for images when prompted in the training-time "plain" format.
- Serve as a starting point for stage-2 instruction tuning (LoRA on the LLM plus continued projector training); the frozen CLIP features are already aligned to Qwen3's embedding space. See the sketch after this list.
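One way such a stage-2 setup could be wired with `peft` (a hypothetical sketch, not the released stage-2 recipe; `model` is loaded as in the Quick start below):

```python
from peft import LoraConfig, get_peft_model

lora_cfg = LoraConfig(
    r=16,           # assumed rank; not documented here
    lora_alpha=32,  # assumed scaling; not documented here
    # Regex targets the LLM's attention projections only,
    # leaving the frozen CLIP tower untouched.
    target_modules=r".*language_model.*\.(q_proj|k_proj|v_proj|o_proj)$",
    # Keep the stage-1 projector fully trainable (and saved) alongside LoRA.
    modules_to_save=["multi_modal_projector"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```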
## What this model is NOT
- Not an instruction-following model. It has never seen chat-formatted supervision, so it will not reliably follow even simple yes/no-style prompts.
- Not specialized for any domain; this is the generic alignment checkpoint, before any task-specific fine-tuning.
## Quick start
```python
import torch
from PIL import Image
from transformers import LlavaForConditionalGeneration, AutoProcessor

model_id = "alpharomercoma/vqwen3-4b-pretrain"

# Load the full bundle: frozen CLIP tower + trained projector + frozen Qwen3-4B.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, dtype=torch.bfloat16, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("my_image.jpg").convert("RGB")
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
inputs["pixel_values"] = inputs["pixel_values"].to(torch.bfloat16)  # match model dtype

out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
## Pairs with

| Stage | Raw delta (Kaggle) | Bundled (HF) |
|---|---|---|
| Stage-1 alignment | vqwen-projector | this model |
| Stage-2 instruction | vqwen-lora | vqwen3-4b |
## Credits

- Base vision: `openai/clip-vit-large-patch14-336`
- Base LLM: `Qwen/Qwen3-4B`
- Dataset: `liuhaotian/LLaVA-Pretrain`
- Recipe: LLaVA-1.5 stage-1 alignment
## License

Apache 2.0 for the trained projector. The base models retain their original licenses.