# Exylos Pick-and-Place Sample

A multi-view robot manipulation dataset captured through consumer VR and procedurally expanded into transfer-ready episodes. Delivered in a LeRobot-compatible structure.
## Why this dataset is different
Most public manipulation datasets come from one of two sources: real-robot teleoperation farms (slow and expensive) or pure simulation (cheap but with poor real-world transfer). This sample comes from a third path:
- Captured in consumer VR. A human performs the task in an immersive virtual environment using a standard VR headset. Their hand motion is retargeted onto a virtual robot embodiment in real time, producing kinematically valid trajectories.
- Procedurally expanded. Each seed demonstration is multiplied into many physics-consistent variations (object poses, distractors, lighting, camera angles) — so that a small number of human demonstrations becomes a diverse training corpus.
- Packaged for direct training. Output is delivered in LeRobot-compatible structure, with synchronized multi-view video, state and action streams, phase-level annotations, and success/failure metadata.
The result is human-seeded, scaled, and labeled data that is closer to what policy training actually needs — without the cost of running a physical lab.
This public release is intentionally compact. It is meant as an inspection sample — to let robotics teams evaluate the format, modalities, and annotation quality before discussing larger productized skill packs.
## Dataset summary
| Property | Value |
|---|---|
| Episodes | 50 |
| Modalities | Multi-view RGB video + robot state + actions + phase annotations + episode metadata |
| Task | Pick up an object from the workspace and place it into a container |
| Robot embodiment | Franka Emika Panda (7-DoF arm + parallel gripper) |
| Camera views | 5 synchronized RGB streams |
| Video | 30 FPS, H.264 |
| Robot state | 9-dimensional |
| Action vector | 9-dimensional |
| Trajectories | Synchronized robot state + action streams per frame |
| Episode-level metadata | Success / failure outcome, derived quality flags |
| Phase-level annotations | Approach, grasp, transport, place, recovery segments |
| Trajectory mix | Success, failure, and recovery-rich episodes |
| Format | LeRobot-compatible (Parquet + MP4) |
| License | Apache 2.0 |
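The headline numbers above can be cross-checked programmatically. A minimal sketch using `huggingface_hub` (the key names assume common LeRobot `info.json` conventions and may differ in practice):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch only the metadata file; repo_type="dataset" is required because
# this is a dataset repository rather than a model.
info_path = hf_hub_download(
    repo_id="ExylosAi/pick_and_place_sample",
    filename="info.json",
    repo_type="dataset",
)

with open(info_path) as f:
    info = json.load(f)

# Key names follow common LeRobot conventions; inspect the file if they differ.
print(info.get("total_episodes"))  # expected: 50
print(info.get("fps"))             # expected: 30
```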
## What is included
Each episode bundles five kinds of synchronized signals:
- Robot state trajectories — the full 9D state stream over time
- Action trajectories — the 9D control signal at each frame
- Multi-view RGB video — five synchronized streams (wrist, front, left, top, right)
- Episode-level metadata — task identity, success / failure outcome, derived quality flags
- Phase-level annotations — segment boundaries for the meaningful sub-stages of each episode (approach, grasp, transport, place, recovery)
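The schema of `annotations.json` is not documented in this card, so the loader below is only a rough illustration. Every key name in it (`episode_index`, `phases`, `phase`, `start_frame`, `end_frame`) is a hypothetical placeholder; inspect the file before relying on any of them:

```python
import json

# Load the phase-level annotations shipped at the repository root.
with open("annotations.json") as f:
    annotations = json.load(f)

# Hypothetical record layout: one entry per episode, each holding a list of
# labeled segments. The real key names may differ.
for record in annotations:
    episode = record["episode_index"]    # assumed key
    for segment in record["phases"]:     # assumed key
        print(
            episode,
            segment["phase"],            # e.g. "approach", "grasp"
            segment["start_frame"],
            segment["end_frame"],
        )
```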
### Camera views

```
observation.images.wrist_cam
observation.images.front_cam
observation.images.left_cam
observation.images.top_cam
observation.images.right_cam
```
### Core trajectory fields

```
observation.state
action
timestamp
frame_index
episode_index
task_index
next.done
next.success
```
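The `next.done` and `next.success` flags make it possible to check an episode's outcome without scanning the videos. A hedged sketch that reads a single episode's Parquet table with `pandas` (assuming the standard LeRobot column layout, where `next.done` marks the final frame and `next.success` records the outcome there):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Pull one episode's trajectory table rather than the whole dataset.
path = hf_hub_download(
    repo_id="ExylosAi/pick_and_place_sample",
    filename="data/chunk-000/episode_000000.parquet",
    repo_type="dataset",
)

df = pd.read_parquet(path)
print(df.columns.tolist())  # should include the core fields listed above

last = df.iloc[-1]          # final frame of the episode
print(bool(last["next.done"]), bool(last["next.success"]))
```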
## Quick start

The dataset follows LeRobot dataset conventions and can be loaded directly with the `lerobot` library:
```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("ExylosAi/pick_and_place_sample")

# Inspect the first frame of the first episode
sample = dataset[0]
print(sample.keys())
print(sample["observation.state"].shape)  # 9-dimensional state
print(sample["action"].shape)             # 9-dimensional action
```
You can also browse the raw Parquet and MP4 files directly under the Files tab.
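For training, `LeRobotDataset` also accepts a `delta_timestamps` argument that attaches temporally offset frames to each sample, which is handy for action-chunking policies. A sketch under the assumption that the installed `lerobot` version follows the documented semantics (offsets in seconds):

```python
import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Request the current action plus the next two, at 30 FPS spacing.
delta_timestamps = {"action": [0.0, 1 / 30, 2 / 30]}

dataset = LeRobotDataset(
    "ExylosAi/pick_and_place_sample",
    delta_timestamps=delta_timestamps,
)

loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch["action"].shape)  # expected roughly (32, 3, 9)
```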
## Repository structure
```
README.md
LICENSE
info.json
annotations.json
tasks.jsonl
episodes.jsonl
episodes_stats.jsonl
preview.mp4
data/
  chunk-000/
    episode_000000.parquet
    episode_000001.parquet
    ...
videos/
  chunk-000/
    wrist_cam/
      episode_000000.mp4
      episode_000001.mp4
      ...
    front_cam/
      ...
    left_cam/
      ...
    top_cam/
      ...
    right_cam/
      ...
```
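To mirror this exact layout locally (useful for offline inspection of the raw Parquet and MP4 files), `huggingface_hub`'s `snapshot_download` fetches the repository tree as-is:

```python
from huggingface_hub import snapshot_download

# Download the full repository tree shown above into the local HF cache.
local_dir = snapshot_download(
    repo_id="ExylosAi/pick_and_place_sample",
    repo_type="dataset",
)
print(local_dir)  # root containing data/, videos/, info.json, ...
```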
## Intended use
This sample is suitable for:
- Inspecting the Exylos data format and annotation schema
- Quick imitation-learning experiments on a narrow pick-and-place task (a minimal training sketch follows this list)
- Format compatibility testing against a LeRobot-based training pipeline
- Evaluating phase-level annotation density and recovery-trajectory coverage
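As a format smoke test rather than a serious baseline, the minimal behavior-cloning loop below regresses the 9D action from the 9D robot state, ignoring the camera streams entirely. It is a sketch, not a recommended recipe:

```python
import torch
import torch.nn as nn
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("ExylosAi/pick_and_place_sample")
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

# A deliberately tiny state-to-action regressor: 9D in, 9D out.
policy = nn.Sequential(nn.Linear(9, 128), nn.ReLU(), nn.Linear(128, 9))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for batch in loader:  # one pass over all frames
    state = batch["observation.state"].float()
    action = batch["action"].float()
    loss = nn.functional.mse_loss(policy(state), action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final batch loss: {loss.item():.4f}")
```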
For larger production-scale skill packs (broader object families, configurable embodiments, custom evaluation logic, or higher episode volumes), see exylos.ai or contact us directly.
## Out-of-scope
- This sample does not target a specific real-world deployment cell or production line.
- It does not include dense per-frame semantic or instance masks (these are available in higher-tier skill packs).
- It is not a benchmark and does not include a held-out evaluation split tuned for leaderboard-style comparison.
## About Exylos
Exylos is an early-stage robotics data company. We capture human manipulation demonstrations in consumer VR and procedurally expand them into thousands of physics-consistent, transfer-ready training episodes. Datasets are delivered in LeRobot-compatible structure or adapted to client pipelines.
If you are a robotics or applied-ML team and want to discuss a custom skill pack for your embodiment and task, reach out at contact@exylos.ai or visit exylos.ai.
## Citation
If you use this dataset in research or in a public technical report, please cite it as:
```bibtex
@misc{exylos_picknplace_sample_2026,
  title        = {Exylos Pick-and-Place Sample: A Multi-View, VR-Captured Manipulation Dataset},
  author       = {Exylos},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/datasets/ExylosAi/pick_and_place_sample}},
  note         = {LeRobot-compatible dataset}
}
```
## License
Released under the Apache License 2.0. You are free to use this dataset for both research and commercial purposes, subject to the standard Apache 2.0 attribution requirements. See the LICENSE file in this repository for full terms.
## Contact
- Website: exylos.ai
- Email: contact@exylos.ai
- LinkedIn: Exylos on LinkedIn
For questions specific to this dataset (format, schema, fields), please open a discussion in the Community tab on this repository.