# LFM2.5-VL-450M wildfire risk (GGUF)
Fine-tuned from LiquidAI/LFM2.5-VL-450M on Sentinel-2 satellite imagery to assess wildfire risk. Part of the Liquid Cookbook wildfire-prevention example.
Given an RGB and SWIR Sentinel-2 image pair, the model outputs a structured JSON risk assessment:

```json
{
  "risk_level": "low | medium | high",
  "dry_vegetation_present": true,
  "urban_interface": false,
  "steep_terrain": true,
  "water_body_present": false,
  "image_quality_limited": false
}
```
## Eval results
Evaluated on 172 test samples from Paulescu/wildfire-prevention; ground-truth labels were produced by claude-opus-4-6.
| field | claude-opus-4-6 | LFM2.5-VL-450M Q8_0 (base) | LFM2.5-VL-450M Q8_0 (fine-tuned) |
|---|---|---|---|
| valid_json | 1.00 | 1.00 | 1.00 |
| fields_present | 1.00 | 1.00 | 1.00 |
| risk_level | 0.99 | 0.08 | 0.76 |
| dry_vegetation_present | 0.99 | 0.48 | 0.83 |
| urban_interface | 0.98 | 0.25 | 0.93 |
| steep_terrain | 0.99 | 0.45 | 0.81 |
| water_body_present | 0.99 | 0.74 | 0.87 |
| image_quality_limited | 1.00 | 0.28 | 0.86 |
| overall | 0.99 | 0.38 | 0.84 |
| avg latency (s) | 2.91 | 0.72 | 0.59 |
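The "overall" row appears to be the unweighted mean of the six per-field accuracies (this is our reading of the table, not stated in the source); the quick check below reproduces 0.38 and 0.84 from the column values.

```python
# Per-field accuracies copied from the eval table above.
base = [0.08, 0.48, 0.25, 0.45, 0.74, 0.28]
fine_tuned = [0.76, 0.83, 0.93, 0.81, 0.87, 0.86]

# Assumption: "overall" = unweighted mean across the six fields.
overall_base = round(sum(base) / len(base), 2)
overall_ft = round(sum(fine_tuned) / len(fine_tuned), 2)
print(overall_base, overall_ft)  # 0.38 0.84
```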
## Files
Running inference with a VLM in llama.cpp requires two GGUF files:
| file | description |
|---|---|
| `lfm2.5-vl-wildfire-Q8_0.gguf` | Language model backbone (Q8_0) |
| `mmproj-lfm2.5-vl-wildfire-Q8_0.gguf` | Vision tower + multimodal projector (F16) |
## Usage

### llama-server

```sh
llama-server \
  -m lfm2.5-vl-wildfire-Q8_0.gguf \
  --mmproj mmproj-lfm2.5-vl-wildfire-Q8_0.gguf \
  --jinja --port 8080
```
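Once the server is up, you can query it over its OpenAI-compatible chat API. The sketch below builds a request with a base64 data-URI image; it assumes the server runs on port 8080 as configured above and that your llama.cpp build accepts `image_url` content parts (recent builds with an mmproj loaded do). The prompt text is illustrative, not the exact prompt used for fine-tuning.

```python
import base64

def build_request(image_path: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": prompt},
            ],
        }],
        "temperature": 0.0,  # deterministic output helps keep the JSON stable
    }

# To send it (illustrative, stdlib only):
# import json, urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=json.dumps(build_request("tile.png", "Assess wildfire risk.")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```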
### Reproduce eval results

Clone the Liquid Cookbook, then:

```sh
cd examples/wildfire-prevention
uv sync
uv run scripts/evaluate.py \
  --hf-dataset Paulescu/wildfire-prevention \
  --backend local \
  --model lfm2.5-vl-wildfire-Q8_0.gguf \
  --mmproj mmproj-lfm2.5-vl-wildfire-Q8_0.gguf \
  --split test
```
## Model tree

Base model lineage: LiquidAI/LFM2.5-350M-Base → LiquidAI/LFM2.5-350M → LiquidAI/LFM2.5-VL-450M → Paulescu/wildfire-risk-detector (this model).