Vision-Language-Action (VLA) models ground language instructions in visual context and generate action sequences for robotic manipulation. Despite recent progress, VLA models still face challenges in learning reusable primitives, reducing reliance on large-scale data and complex architectures, and exploring beyond demonstrations.
To address these challenges, we propose a Neuro-Symbolic Vision-Language-Action (NS-VLA) framework trained via online reinforcement learning (RL). It introduces a symbolic encoder that embeds vision and language features and extracts structured primitives, employs a symbolic solver for data-efficient action sequencing, and leverages online RL to optimize action generation through broad exploration.
Experiments on robotic manipulation benchmarks demonstrate that NS-VLA outperforms previous methods in both one-shot training and data-perturbed settings, while simultaneously exhibiting superior zero-shot generalizability, high data efficiency, and an expanded exploration space.
- **Symbolic encoder:** extracts structured primitives from vision-language inputs, capturing reusable action structures across tasks.
- **Symbolic solver:** a lightweight solver with visual token sparsification for data-efficient, real-time action generation.
- **Online RL:** GRPO-based training with primitive-segmented rewards enables exploration beyond expert demonstrations.
- **Results:** 98.6% SR on LIBERO, 79.4% on LIBERO-Plus, and 91.2% 5-task SR on CALVIN ABC→D.
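As a rough illustration of the training signal, GRPO replaces a learned critic with group-relative advantages: several rollouts of the same instruction are scored, and each rollout's return is normalized against the group. The sketch below is a minimal, hypothetical version; the per-primitive reward decomposition and the `grpo_advantages` helper are our own illustrative assumptions, not the paper's exact reward function.

```python
import numpy as np

def grpo_advantages(segment_rewards, eps=1e-8):
    """Group-relative advantages from primitive-segmented rewards.

    segment_rewards: one list per rollout, holding one reward per
    completed primitive segment (hypothetical decomposition).
    """
    # Sum each rollout's segment rewards into a scalar return.
    returns = np.array([sum(r) for r in segment_rewards], dtype=float)
    # GRPO: normalize within the sampled group instead of using a critic.
    return (returns - returns.mean()) / (returns.std() + eps)

# Example: 4 rollouts of the same instruction, 3 primitives each.
adv = grpo_advantages([
    [1.0, 1.0, 1.0],  # all primitives succeed
    [1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],  # complete failure
])
```

Rollouts that complete more primitives receive higher advantages, so partial progress is rewarded even when the full task fails, which widens the exploration signal relative to a single terminal reward.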
NS-VLA achieves state-of-the-art success rates (%) across three benchmark settings:
| Method | Params | LIBERO (Full) | LIBERO (1-shot) | LIBERO-Plus |
|---|---|---|---|---|
| OpenVLA | 7B | 76.5 | 35.7 | 15.6 |
| OpenVLA-OFT | 7B | 97.1 | 48.9 | 69.6 |
| π₀ | 3B | 94.2 | 37.4 | 53.6 |
| UniVLA | 7B | 95.2 | 55.1 | 42.9 |
| VLA-Adapter | 0.5B | 97.3 | 65.3 | 58.9 |
| NS-VLA (Ours) | 2B | 98.6 | 69.1 | 79.4 |
If you find our work useful, please consider citing:
```bibtex
@article{zhu2026nsvla,
  title={NS-VLA: Towards Neuro-Symbolic Vision-Language-Action Models},
  author={Zhu, Ziyue and Wu, Shangyang and Zhao, Shuai and Zhao, Zhiqiu and Li, Shengjie and Wang, Yi and Li, Fang and Luo, Haoran},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2026}
}
```