Prismatic VLM REPL

From Humanoid Robots Wiki
Revision as of 23:22, 20 June 2024

The K-Scale OpenVLA adaptation by User:Paweł is at https://github.com/kscalelabs/openvla

REPL Script Guide

Here are some suggestions to run the generate.py REPL Script from the repo (you can find this in the scripts folder) if you would like to get started with OpenVLA.

Prerequisites

Before running the script, ensure you have the following:

  • Python 3.8 or higher installed
  • NVIDIA GPU with CUDA support (optional but recommended for faster processing)
  • Hugging Face account and token for accessing Meta Llama
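
To sanity-check the first two prerequisites before running anything, a small script like the following can help (a sketch; `nvidia-smi` being on the PATH is only a rough proxy for working CUDA support):

```python
import shutil
import sys

def check_prerequisites():
    """Return a dict describing whether the basic prerequisites look OK."""
    return {
        # generate.py targets Python 3.8 or higher
        "python_ok": sys.version_info >= (3, 8),
        # nvidia-smi on the PATH is a rough proxy for a usable NVIDIA GPU;
        # the script still runs on CPU, just more slowly
        "gpu_visible": shutil.which("nvidia-smi") is not None,
    }

if __name__ == "__main__":
    print(check_prerequisites())
```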

Setting Up the Environment

In addition to installing requirements-min.txt from the repo, you will probably also need to install `rich`, `tensorflow_graphics`, `tensorflow-datasets`, and `dlimp`.
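
To see which of those extras are still missing from your environment, one option is to probe them with `importlib` (a sketch; the module names below are assumptions based on the pip package names, which usually but not always match — e.g. the pip package `tensorflow-datasets` imports as `tensorflow_datasets`):

```python
import importlib.util

# Extra packages suggested on top of requirements-min.txt,
# written as their import names (assumed to match the pip names)
EXTRAS = ["rich", "tensorflow_graphics", "tensorflow_datasets", "dlimp"]

def missing_extras(modules=EXTRAS):
    """Return the subset of modules that cannot be imported yet."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    print("still missing:", missing_extras())
```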

Set up Hugging Face token

You need a Hugging Face token to access certain models; the script expects to find it in a `.hf_token` file.

Create a file named `.hf_token` in the root directory of your project and add your Hugging Face token to this file:

echo "your_hugging_face_token" > .hf_token
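
How generate.py actually consumes this file is defined in the script itself; a minimal reader matching the echo command above might look like this (the function name is illustrative):

```python
from pathlib import Path

def load_hf_token(path=".hf_token"):
    """Return the Hugging Face token, stripping the newline echo appends."""
    token = Path(path).read_text().strip()
    if not token:
        raise ValueError(f"{path} exists but contains no token")
    return token
```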

Sample Images for generate.py REPL

You can get these by capturing frames or screenshotting rollout videos from https://openvla.github.io/.

Make sure the images have an end effector in them.
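
Once you have a screenshot, a helper like the following can load and normalize it with Pillow (a sketch; 224x224 RGB is a common input size for vision backbones, but check generate.py for the size the model actually expects):

```python
from PIL import Image

def prepare_frame(path, size=(224, 224)):
    """Load a captured frame as RGB and resize it for the REPL.

    The 224x224 default is an assumption; adjust to whatever
    generate.py / the model config requires.
    """
    return Image.open(path).convert("RGB").resize(size)
```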


Work in progress: still need to add screenshots and next steps.