= Robotic Control via Embodied Chain-of-Thought Reasoning =
Embodied Chain-of-Thought Reasoning (ECoT) is a novel approach for training robotic policies. It trains a vision-language-action (VLA) model to generate reasoning steps in response to instructions and images before choosing a robot action, improving performance, interpretability, and generalization. The codebase is built on top of OpenVLA; refer to it for detailed documentation of the code and dependencies.
== Quickstart ==
<code>
from transformers import AutoModelForVision2Seq, AutoProcessor
</code>
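For a fuller picture, the sketch below loads a checkpoint and generates a reasoning chain plus action tokens for a single image. It is a minimal sketch, not the project's canonical script: the checkpoint ID, the OpenVLA-style prompt template, and the image path are assumptions, so check the project page for the released names.
<code>
# Minimal inference sketch. The model ID is an assumption (see the project
# page for the released checkpoints); the prompt follows OpenVLA conventions.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "Embodied-CoT/ecot-openvla-7b-bridge"  # assumed checkpoint name
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).to(device)

# Pair the instruction prompt with the current camera frame.
instruction = "place the watermelon on the towel"
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: What action should the robot take to {instruction}? ASSISTANT: TASK:"
)
image = Image.open("observation.png")  # hypothetical path to a camera frame

inputs = processor(prompt, image).to(device, dtype=torch.bfloat16)

# The model first generates its embodied chain of thought (task decomposition,
# object detections, movement plan) and only then emits the action tokens,
# so a generous max_new_tokens budget is needed.
generated_ids = vla.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(generated_ids)[0])
</code>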
== Pretrained models ==
* '''embodied_features_bridge''': A dataset of embodied features and reasoning chains collected for Bridge demonstrations (see the loading sketch below).
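Assuming the dataset is mirrored on the Hugging Face Hub under the project's organization (the hub ID below is an assumption; substitute the one listed on the project page), it can be pulled with the <code>datasets</code> library:
<code>
from datasets import load_dataset

# Hypothetical hub ID for the embodied features dataset; replace it with
# the ID listed on the project page if it differs.
features = load_dataset("Embodied-CoT/embodied_features_bridge", split="train")
print(features[0])  # inspect one annotated Bridge demonstration
</code>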
=== Explicit Notes on Model Licensing & Commercial Use ===
While all code in this repository is released under the MIT License, the pretrained models may inherit restrictions from the underlying base models we use. Specifically, both of the above models are derived from Llama-2 and are therefore subject to the Llama Community License.
== Installation ==
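The codebase is built on top of OpenVLA, so its setup instructions apply here as well; the full dependency set is specified in pyproject.toml (see below).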
== Repository Structure ==
* '''experiments/''': Code for evaluating the policies on a WidowX robot.
* '''vla-scripts/''': Core scripts for training, fine-tuning, and deploying VLAs.
* '''LICENSE''': All code is made available under the MIT License; happy hacking!
* '''Makefile''': Top-level Makefile (by default, supports lint checking and auto-fixing); extend as needed.
* '''pyproject.toml''': Full project configuration details (including dependencies), as well as tool configurations.
== Citation ==
If you find the code or models useful in your work, please cite our paper:
<code>
@article{Zawalski24-ecot,
    title={Robotic Control via Embodied Chain-of-Thought Reasoning},
    author={Michał Zawalski and William Chen and Karl Pertsch and Oier Mees and Chelsea Finn and Sergey Levine},
    journal={arXiv preprint arXiv:2407.08693},
    year={2024}
}
</code>