
# Dexterous Manipulation Benchmark — Cross-Method Evaluation

Cross-method evaluation scaffolding for dexterous manipulation, comparing 4 methods × 4 hands × 2 datasets × 7 trajectories on identical hardware, scenes, and reference trajectories.

Tag: `v0.7-mt-complete` · Trained checkpoints live in the private companion repo `ckwolfe/benchmarks-trained-ckpts`.


## TL;DR

| method | class | simulator | best metric | status |
|---|---|---|---|---|
| ManipTrans | closed-loop RL (residual) | IsaacGym | tracking err 4.0 cm @ oakink | 21/28 cells (schunk retrain live) |
| DexMachina | closed-loop RL (PPO) | Genesis | ADD 0.24 m @ arctic | 28/28 cells ✅ |
| Spider | sampling (MJWP) | MuJoCo-Warp | tracking err 8.3 cm, succ 36% @ oakink | 12/28 cells |
| Oracle (kinematic replay) | open-loop upper bound | MuJoCo | tracking err 0.0 (by definition) | 28/28 videos, 20/28 evals |

## Results — averaged across all real eval cells

| method | n cells | ADD ↓ (m) | tracking_err ↓ (m) | success ↑ | cost / cell |
|---|---|---|---|---|---|
| ManipTrans (closed-loop, IG) | 27 | | 0.040 | | ~2 s |
| DexMachina (closed-loop, Genesis) | 225 | 0.244 | 0.191 | | ~300 s |
| Spider (sampling, MJWP) | 28 | | 0.083 | 0.357 | ~600 s |
| Oracle (kinematic replay, MJ) | 20 | | 0.000 | n/a | <5 s |

All numbers are per-cell means; full per-(method,hand,traj) rows live in metrics/*.jsonl.
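To aggregate those rows yourself, here is a minimal sketch, assuming each JSONL line is one row following the metric schema documented later in this card; `per_cell_mean` is a hypothetical helper, not part of the repo:

```python
import glob
import json
import statistics

def load_rows(pattern="metrics/*.jsonl"):
    """Yield one dict per non-empty JSONL line across all metrics files."""
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    yield json.loads(line)

def per_cell_mean(rows, field="tracking_err_mean"):
    """Mean of a metric over rows, skipping nulls (blanked divergent rows)."""
    vals = [r[field] for r in rows if r.get(field) is not None]
    return statistics.mean(vals) if vals else None
```

Nulls are skipped rather than treated as zeros, matching the card's convention of blanking divergent rollouts instead of counting them.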

## Interpretation notes for reviewers

- DM "5 seeds" = 5 training reps, shared RL seed=42. All 140 public DM checkpoints in staging/run_paths.json were trained with seed: 42 per their config.json. The 5-rep variance reflects rl_games's env-step + mini-epoch stochasticity, not a seed sweep.
- Open-loop vs closed-loop. Spider (sampling) and Oracle (kinematic replay) are fundamentally different from MT/DM (closed-loop learned controllers). Reading them as one leaderboard is misleading. Paper Table 1 splits into two sub-tables keyed on method class.
- Simulator confound. MT runs in IsaacGym, DM in Genesis, Spider in MuJoCo-Warp, Oracle in MuJoCo. Contact model, integrator, and timestep differ. Tracking-err is most comparable within-class; cross-class comparisons are illustrative only.
- Divergent-rollout guard. Rows where the rollout diverged (tracking_err_mean > tracking_err_max + 0.01 or add_mean > 5 m) are blanked to null at write time. Schema validator in shared/bench/schema.py enforces this on append.
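The divergent-rollout guard can be sketched roughly as follows; this is a simplified stand-in for the validator in shared/bench/schema.py, not its actual code, with field names taken from the metric schema:

```python
# Fields that get blanked together when a rollout is judged divergent.
METRIC_FIELDS = ("tracking_err_mean", "tracking_err_max", "add_mean", "add_auc")

def blank_if_divergent(row):
    """Null out metric fields for rows matching the known divergence signatures:
    mean exceeding max (impossible for a sane rollout) or a wildly large ADD."""
    te_mean = row.get("tracking_err_mean")
    te_max = row.get("tracking_err_max")
    add = row.get("add_mean")
    diverged = (
        (te_mean is not None and te_max is not None and te_mean > te_max + 0.01)
        or (add is not None and add > 5.0)
    )
    if diverged:
        for field in METRIC_FIELDS:
            row[field] = None
    return row
```

Blanking (rather than dropping the row) keeps the cell visible in coverage counts while excluding it from metric means.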

## Datasets × hands

| dataset | trajs | hands covered |
|---|---|---|
| OakInk-v2 | lift_board, pick_spoon_bowl, pour_tube, stir_beaker, uncap_alcohol_burner, unplug, wipe_board | allegro, inspire, schunk, xhand |
| Arctic | ketchup30, box30, mixer30, ketchup40, mixer40, notebook40, waffleiron40 | allegro, inspire, schunk, xhand |

## Per-cell coverage (✓ real eval, · missing)

### ManipTrans × OakInk

| hand | lift_board | pick_spoon | pour_tube | stir_beaker | uncap | unplug | wipe_board |
|---|---|---|---|---|---|---|---|
| allegro | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| inspire | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| schunk | · | · | · | · | · | · | · |
| xhand | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

The schunk retrain is running (epoch ~550/3000, reward climbing); its cells will land in v0.8-schunk-retrained.

### DexMachina × Arctic

| hand | ketchup30 | box30 | mixer30 | ketchup40 | mixer40 | notebook40 | waffleiron40 |
|---|---|---|---|---|---|---|---|
| allegro | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| inspire | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| schunk | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| xhand | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

## Standardized videos

All videos are 720×480 @ 30 fps h264 crf 20, rendered through a shared MuJoCo scene with BENCH_CAMERA["front"] (pos=[0,-1.6,2.2], lookat=[0,-0.1,1.2], fov=30°). Files live under videos_std/.
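MuJoCo's free camera is steered by lookat/distance/azimuth/elevation rather than an explicit position, so a pos/lookat spec like the one above has to be converted. A rough sketch of that conversion under the usual convention (azimuth is the heading of the view direction in the xy-plane, elevation is negative when looking down); verify against your MuJoCo version before relying on it:

```python
import math

# Camera spec quoted in this card (fov is handled separately by the renderer).
BENCH_CAMERA = {"front": {"pos": [0.0, -1.6, 2.2], "lookat": [0.0, -0.1, 1.2]}}

def pos_lookat_to_free_camera(pos, lookat):
    """Return (distance, azimuth_deg, elevation_deg) for a pos/lookat pair.

    A sketch only: sign and angle conventions should be checked against
    the MuJoCo free-camera (MjvCamera) semantics you are targeting.
    """
    view = [l - p for l, p in zip(lookat, pos)]      # camera -> lookat
    horiz = math.hypot(view[0], view[1])             # horizontal reach
    distance = math.sqrt(sum(v * v for v in view))   # straight-line distance
    azimuth = math.degrees(math.atan2(view[1], view[0]))
    elevation = math.degrees(math.atan2(view[2], horiz))
    return distance, azimuth, elevation
```

For the "front" camera above this yields a distance of about 1.80 m, azimuth 90°, and elevation around −34° (looking down at the workspace).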

| bucket | count | prefix | description |
|---|---|---|---|
| Oracle (oakink) | 28/28 | `std_*` | kinematic replay through per-hand URDF |
| ManipTrans (oakink) | 21/28 | `mt_*` | captured rollout qpos replayed through same MJ scene |
| (MT × schunk) | 0/7 | `mt_*` | training in progress |
| Arctic | 0/56 | | Spider arctic preprocess stage-2+ pending |

## Native (per-method) videos

For cross-checking, each method's own renderer output (different camera, resolution, and overlays) is also published under videos/:

- dexmachina_* (Genesis camera): 16 oakink demo-playback clips
- spider_* (MJX camera): 26 oakink cells
- ours_* (MuJoCo camera): 9 oakink cells

## How to reproduce one cell

```bash
# Clone the benchmarking repo (not shipped here — the datasets repo only hosts artifacts).
git clone https://github.com/<you>/benchmarking && cd benchmarking
./scripts/run.sh maniptrans --hand xhand --dataset oakink_v2 --traj lift_board_bimanual --seed 42 --viz none --dev
# → appends MetricsRow to outputs/metrics/maniptrans.jsonl and (via bench_hooks.patch)
#   dumps qpos to outputs/mt_qpos/<run_id>.npz for post-hoc video rendering.

MUJOCO_GL=egl python scripts/render_standardized.py --source mt --hand xhand --traj lift_board
# → writes videos_std/mt_xhand_oakink_v2_lift_board_bimanual_seed42_default.mp4
```
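The output filename mirrors the canonical run_id pattern from the metric schema. A hypothetical helper (`std_video_name` is not part of the repo) that assembles it:

```python
def std_video_name(source, hand, dataset, traj, seed=42, warmstart="default"):
    """Build a standardized-video path following the card's
    <method>_<hand>_<dataset>_<traj>_seed<N>_<warmstart> naming pattern.
    A sketch inferred from the documented filenames, not repo code."""
    return f"videos_std/{source}_{hand}_{dataset}_{traj}_seed{seed}_{warmstart}.mp4"
```

For the cell above, `std_video_name("mt", "xhand", "oakink_v2", "lift_board_bimanual")` reproduces the filename that render_standardized.py writes.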

## Full sweep

```bash
./scripts/deploy_all.sh               # all 4 methods end-to-end, ~14 GPU-hr
```

Per-method deploy scripts live under `scripts/deploy_{spider,dexmachina,maniptrans,ours}.sh`.


## Files in this repo

```text
metrics/
  maniptrans.jsonl     # 613 rows (27 real)
  dexmachina.jsonl     # 229 rows (225 real)
  spider.jsonl         # 293 rows (28 real — 1 divergent row stripped)
  ours.jsonl           # 20 rows (20 real)
videos_std/
  std_*.mp4            # 28 Oracle kinematic replays (oakink)
  mt_*.mp4             # 21 ManipTrans captured-qpos replays (oakink)
videos/
  dexmachina_*.mp4     # 16 DM native (oakink demo playback)
  spider_*.mp4         # 26 Spider native (MJX camera)
  ours_*.mp4           # 9 Oracle native (MuJoCo camera)
scripts/
  render_standardized.py   # MJ renderer shared across methods
patches/
  mt_bench_hooks.patch      # MT qpos-capture upstream patch (253 lines)
  mt_trajs_configs/         # 7 oakink YAMLs with upstream-compatible data_idx
STATUS.md                   # live coverage snapshot
```

## Metric schema (`shared/bench/schema.py::MetricsRow`)

| field | type | units | notes |
|---|---|---|---|
| run_id | str | | canonical `<method>_<hand>_<dataset>_<traj>_seed<N>_<warmstart>` |
| success | bool | | `success_rate >= 0.5` |
| success_rate | float | 0–1 | per-episode outcome mean |
| tracking_err_mean | Optional[float] | meters | position err vs reference; blanked if >1 m (divergent) |
| tracking_err_max | Optional[float] | meters | max per rollout; Spider uses quat-L2 legacy |
| add_mean | Optional[float] | meters | Average Distance of Displacement (DM canonical) |
| add_auc | Optional[float] | meters | AUC under threshold sweep 0–0.1 m |
| wallclock_s | float | seconds | host-side subprocess wall time |
| sim_steps | int | | hard cap via BENCH_MAX_STEPS |
| upstream_commit | Optional[str] | | git SHA captured on host via BENCH_UPSTREAM_COMMIT env |
| sim_backend | str | | one of isaacgym, genesis, mjwp, mujoco |

Validator (at row-append time) rejects tracking_err_mean > tracking_err_max + 0.01 and add_mean > 5 m — those are the known divergent-rollout failure modes.
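For add_auc, a plausible reading of "AUC under threshold sweep 0–0.1 m" is the area under the accuracy-vs-threshold curve, reported in meters as the schema's units column says. A hedged sketch (the exact sweep granularity and normalization in shared/bench/schema.py may differ):

```python
def add_auc(per_frame_add, max_thresh=0.1, n=100):
    """Area under the accuracy-vs-threshold curve for per-frame ADD values.

    Sweeps n thresholds uniformly over (0, max_thresh]; accuracy at each
    threshold is the fraction of frames whose ADD is within it. The result
    is the integral over the sweep, in meters. Sketch only, not repo code.
    """
    if not per_frame_add:
        return None
    taus = [max_thresh * (i + 1) / n for i in range(n)]
    acc = [sum(d <= t for d in per_frame_add) / len(per_frame_add) for t in taus]
    return sum(acc) / n * max_thresh
```

A perfect rollout (all per-frame ADD at zero) saturates at max_thresh = 0.1, and a rollout whose ADD never drops below 0.1 m scores 0.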


## Limitations & what's not here

| gap | why |
|---|---|
| DM × oakink training | 140 public checkpoints are arctic-only; from-scratch is ~35 GPU-days per cell. |
| MT × schunk imitator | No public artifact; retrain in progress (v0.8 tag forthcoming). |
| Arctic standardized videos | Spider arctic preprocess stops at stage-1; no scene.xml for arctic cells. |
| Oracle real warmstart (Tara's retargeter) | Payload pending from an external collaborator; 55/56 stubs remain. Present arm uses kinematic IK fallback. |
| MT allegro/inspire video joint-name mapping | Hand-specific IG DOF ↔ MJ joint table needed (xhand works: 24/38 joints auto-matched). |

## Citation

Paper in preparation. If you reference this artifact in the meantime, please cite:

```bibtex
@misc{benchmarks-viz-tiles-2026,
  title  = {Dexterous Manipulation Benchmark — Cross-Method Evaluation Tiles},
  author = {C.K. Wolfe and T. Sadjadpour},
  year   = {2026},
  url    = {https://huggingface.co/datasets/ckwolfe/benchmarks-viz-tiles},
  note   = {Tag v0.7-mt-complete}
}
```

Upstream methods retain their own licenses and attributions; see each method's repository.
