DeepThinkVLA
Enhancing Reasoning Capability of Vision-Language-Action Models
DeepThinkVLA is a Vision-Language-Action (VLA) model designed to enhance the reasoning capabilities of robotic agents through explicit deliberation. It refactors the policy into a 2.9B-parameter hybrid decoder that generates a reasoning trace (Chain-of-Thought) before emitting action chunks.
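The generation order described above (reason first, then act) can be pictured as a two-stage greedy decode loop over a single decoder. The sketch below is purely illustrative: `REASON_END`, `ACTION_VOCAB`, and the toy decoder are assumptions for the sake of a runnable example, not the released DeepThinkVLA interface.

```python
# Minimal sketch of a "reason, then act" decode loop (assumed interface;
# not the official DeepThinkVLA API). A single decoder first emits a
# Chain-of-Thought trace, then switches to emitting discrete action
# tokens that a FAST-style detokenizer would map to an action chunk.
import torch

REASON_END = 3      # hypothetical id of the token that closes the CoT trace
ACTION_VOCAB = 512  # hypothetical size of the action-token vocabulary

@torch.no_grad()
def reason_then_act(model, prefix_ids, max_reason=256, chunk_len=8):
    """Autoregressively decode a reasoning trace, then an action chunk."""
    ids = prefix_ids  # [1, T]: tokenized images + language instruction
    # Stage 1: deliberate. Decode CoT tokens until the trace is closed
    # (or a length cap is hit).
    for _ in range(max_reason):
        logits = model(ids)[:, -1]           # next-token logits
        nxt = logits.argmax(-1, keepdim=True)
        ids = torch.cat([ids, nxt], dim=-1)
        if nxt.item() == REASON_END:
            break
    # Stage 2: act. Decode a fixed-length chunk of discrete action tokens,
    # each conditioned on the full reasoning trace that precedes it.
    action_tokens = []
    for _ in range(chunk_len):
        logits = model(ids)[:, -1]
        nxt = logits[:, :ACTION_VOCAB].argmax(-1, keepdim=True)  # restrict to action ids
        ids = torch.cat([ids, nxt], dim=-1)
        action_tokens.append(nxt.item())
    return action_tokens  # detokenize (e.g. FAST) into continuous actions

# Toy stand-in decoder so the sketch runs end to end.
class ToyDecoder(torch.nn.Module):
    def __init__(self, vocab=1024, dim=64):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.head = torch.nn.Linear(dim, vocab)
    def forward(self, ids):
        return self.head(self.emb(ids))

print(reason_then_act(ToyDecoder(), torch.tensor([[5, 7, 11]])))
```

The key design point this illustrates is ordering: because the action tokens are decoded after the reasoning trace, they can attend to it, so the deliberation can actually influence the emitted action chunk.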
DeepThinkVLA addresses the challenges of integrating Chain-of-Thought (CoT) reasoning into VLA models by satisfying two key conditions set out in the paper.
The model is initialized from the pi0-FAST checkpoint and demonstrates significant performance gains on robotic manipulation benchmarks.
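One common way to realize such an initialization is to copy over every pretrained tensor whose name and shape still match the refactored decoder, leaving any new reasoning-specific parameters randomly initialized. The snippet below is a hedged sketch of that pattern with a hypothetical checkpoint path; the release defines the actual loading procedure.

```python
# Sketch of warm-starting the hybrid decoder from a pretrained checkpoint.
# The path "pi0_fast.pt" and the key layout are assumptions for illustration.
import torch

def warm_start(model: torch.nn.Module, ckpt_path: str = "pi0_fast.pt"):
    """Copy every pretrained weight whose name and shape match the new decoder."""
    pretrained = torch.load(ckpt_path, map_location="cpu")
    own = model.state_dict()
    matched = {k: v for k, v in pretrained.items()
               if k in own and v.shape == own[k].shape}
    # strict=False keeps the new (e.g. reasoning-related) parameters as
    # randomly initialized rather than failing on missing keys.
    model.load_state_dict(matched, strict=False)
    print(f"loaded {len(matched)}/{len(own)} tensors from {ckpt_path}")
```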
If you find this work helpful, please consider citing:
@article{yin2025deepthinkvla,
  title={DeepThinkVLA: Enhancing Reasoning Capability of Vision-Language-Action Models},
  author={Yin, Cheng and Lin, Yankai and Xu, Wang and Tam, Sikyuen and Zeng, Xiangrui and Liu, Zhiyuan and Yin, Zhouping},
  journal={arXiv preprint arXiv:2511.15669},
  year={2025}
}