Methodology Question
Hi, great work!
Would it be possible to get some more detail on the methodology behind the benchmark design, specifically what statistical justification informed decisions like task selection, scoring criteria, and category structure? Also, is there any data on things like test-retest reliability? I'd be interested in running a PCA if at all possible.
Much appreciated!
Hi @DJLougen, thank you for the kind words and your interest in our work!
Here are some details regarding our methodology and the points you raised:
Category Structure & Task Selection: We designed the benchmark based on real-world scenarios, dividing the tasks into five major categories. This ensures comprehensive coverage of the typical, day-to-day environments that agents like OpenClaw operate in.
Scoring Criteria: Our primary guiding principle was to guarantee discriminative power, meaning the benchmark needs to clearly highlight performance gaps between different models. To achieve this, we established highly granular scoring criteria with detailed grading points for each task, which are then aggregated using a weighted system (a toy sketch of this kind of aggregation is included after these points).
Test-Retest: To ensure reliability, we guarantee that every model evaluated on our benchmark is run at least twice.
Detailed Data: If you are interested in diving into the sub-scores (which should be helpful for running a PCA), you can find the detailed category breakdown data on our project homepage: https://internlm.github.io/WildClawBench/
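Here is a quick toy sketch of what the weighted aggregation of grading points could look like; the grading-point names and weights below are purely illustrative, not our actual rubric:

```python
from typing import Dict

def aggregate_task_score(point_scores: Dict[str, float],
                         point_weights: Dict[str, float]) -> float:
    """Combine granular grading points (each scored in [0, 1]) into one task score."""
    total_weight = sum(point_weights.values())
    weighted = sum(point_scores[p] * point_weights[p] for p in point_scores)
    return weighted / total_weight

# Illustrative grading points for a single task (not WildClawBench's real rubric).
scores = {"follows_instructions": 1.0, "tool_use_correct": 0.5, "final_answer": 0.0}
weights = {"follows_instructions": 0.2, "tool_use_correct": 0.3, "final_answer": 0.5}
print(aggregate_task_score(scores, weights))  # 0.35
```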
Let us know if you need access to more specific raw data for your PCA or if you have any other questions. We really appreciate your feedback!
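In case it helps, here is a rough sketch of how you could run a PCA over the per-model category sub-scores once you export them to a CSV; the file name and column layout are just placeholders, so adapt them to the actual export:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical export: one row per model, one column per category sub-score.
df = pd.read_csv("wildclawbench_category_scores.csv", index_col="model")

# Standardize each category before PCA so no single category dominates.
X = StandardScaler().fit_transform(df.values)

pca = PCA(n_components=2)
components = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
print(components)
```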
That 3D plot cooks, love me some good data vis!