Artifact Graph
A heterogeneous graph of HuggingFace model/dataset/paper/codebase nodes with observed (model, dataset, performance-metric) evaluation edges, used to benchmark link prediction and attribute regression.
Contents
| path | description |
|---|---|
| `full/` | Full unsplit graph: all nodes + all edges (by type) |
| `transductive/` | All nodes visible in both train and test; edges split |
| `inductive/` | Disjoint node partition: some nodes train-only, others test-only |
Full graph (full/)
| file | description |
|---|---|
| `node_metadata.json` | Per-node `{type, name, downloads, info}` for all 14K nodes |
| `node_mappings.json` | Integer ID ↔ HuggingFace ID mapping |
| `node_embeddings_voyage.npy` | Voyage-3 embeddings, shape `(N, 1024)` |
| `node_embeddings_random.npy` | L2-normalised random embeddings |
| `edges.npz` | All edges combined, shape `(2, E)` |
| `edges_eval.npz` | Model × dataset evaluation edges |
| `edges_base_model.npz` | Model → base_model edges |
| `edges_resource.npz` | Model/dataset → paper/codebase edges |
| `edge_metadata.json` | Raw (model, dataset, metric) edge records |
| `edge_metadata_normalized.json` | Eval edges with metrics normalised to `[0, 1]` |
| `edge_metadata_eval.json` | Eval-edge metadata only |
| `edge_metadata_base_model.json` | Base-model edge metadata |
| `edge_metadata_resource.json` | Paper / codebase resource edge metadata |
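These files load directly with `json` and `numpy`. A minimal sketch (the repo ID comes from the Usage section below; the handling of `node_metadata.json` as either a dict keyed by node ID or a list of records is our assumption):

```python
import json
from collections import Counter

import numpy as np
from huggingface_hub import snapshot_download

path = snapshot_download("lwaekfjlk/artifact-graph", repo_type="dataset")

# Node-side files from the full, unsplit graph.
node_meta = json.load(open(f"{path}/full/node_metadata.json"))
emb = np.load(f"{path}/full/node_embeddings_voyage.npy")  # (N, 1024)

# node_metadata.json may be keyed by node ID or be a list of records;
# handle both and count nodes by type (model / dataset / paper / codebase).
records = node_meta.values() if isinstance(node_meta, dict) else node_meta
print(emb.shape, Counter(r["type"] for r in records))
```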
Each split directory contains:
| file | description |
|---|---|
| `node_embeddings_voyage.npy` | Voyage-3 embeddings, shape `(N, 1024)` |
| `node_embeddings_random.npy` | L2-normalised random embeddings, same shape |
| `split_info.json` | Split metadata (seed, counts, dates) |
| `node_split.json` (inductive only) | Per-node train/test assignment |
| `train_split/` | Training subgraph (see below) |
| `test_split/` | Test subgraph (held-out eval edges) |
Each {train,test}_split/ holds:
| file | description |
|---|---|
| `node_metadata.json` | Per-node `{type, name, downloads, info}` |
| `edge_metadata_normalized.json` | Normalized `(u, v) → {metric: value}` map |
| `edges.npz` | Message-passing edges, `edges` key, shape `(2, E)` |
| `pos_edges.npz` | Positive eval edges (model × dataset with metric) |
Node types
- `model`: HuggingFace models (e.g., `sileod/deberta-v3-large-tasksource-nli`)
- `dataset`: HuggingFace datasets (e.g., `nyu-mll/multi_nli`)
- `paper`: referenced papers (arXiv IDs)
- `codebase`: linked repositories
Edge types
- `model → dataset` (eval): accuracy / F1 / BLEU / etc. (normalized to `[0, 1]`)
- `model → paper`, `model → codebase`, `dataset → paper`, `dataset → codebase`: resource links
- `model → model`: base-model / fine-tune relations
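The per-type edge files in `full/` give the same information as `edges.npz`, split by relation. A hedged sketch of loading them; we assume each `.npz` stores its `(2, E)` array under an `edges` key, as the split files do (check `np.load(...).files` if not):

```python
import json

import numpy as np
from huggingface_hub import snapshot_download

path = snapshot_download("lwaekfjlk/artifact-graph", repo_type="dataset")

# Assumption: the per-type .npz files use the same "edges" key as the split files.
eval_edges = np.load(f"{path}/full/edges_eval.npz")["edges"]          # model -> dataset
base_edges = np.load(f"{path}/full/edges_base_model.npz")["edges"]    # model -> base model
resource_edges = np.load(f"{path}/full/edges_resource.npz")["edges"]  # model/dataset -> paper/codebase

# Normalised evaluation metrics in [0, 1] for the eval edges.
eval_meta = json.load(open(f"{path}/full/edge_metadata_normalized.json"))
print(eval_edges.shape, base_edges.shape, resource_edges.shape, len(eval_meta))
```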
Usage
from huggingface_hub import snapshot_download
path = snapshot_download("lwaekfjlk/artifact-graph", repo_type="dataset")
import numpy as np, json
emb = np.load(f"{path}/transductive/node_embeddings_voyage.npy")
nm = json.load(open(f"{path}/transductive/train_split/node_metadata.json"))
pe = np.load(f"{path}/transductive/train_split/pos_edges.npz")["edges"]
print(emb.shape, len(nm), pe.shape)
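Continuing from the snippet above (reusing `path` and `emb`), the sketch below shows how the split files fit a link-prediction evaluation: `edges.npz` supplies the message-passing graph and `pos_edges.npz` the held-out positives. The cosine-similarity baseline and the random negative sampling are our illustration only, not part of the benchmark protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Held-out positive eval edges of the test split; a real model would also
# message-pass over test_split/edges.npz, which this baseline ignores.
test_pos = np.load(f"{path}/transductive/test_split/pos_edges.npz")["edges"]  # (2, E_test)

# Trivial baseline: cosine similarity between endpoint embeddings.
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def score(edges):
    return (unit[edges[0]] * unit[edges[1]]).sum(axis=1)

# Illustration-only negatives: corrupt the tail (dataset) endpoint at random.
neg = test_pos.copy()
neg[1] = rng.integers(0, unit.shape[0], size=neg.shape[1])

scores = np.concatenate([score(test_pos), score(neg)])
labels = np.concatenate([np.ones(test_pos.shape[1]), np.zeros(neg.shape[1])])

# AUC via the rank-sum (Mann-Whitney) formula, avoiding an sklearn dependency.
ranks = np.empty(len(scores))
ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
n_pos, n_neg = test_pos.shape[1], neg.shape[1]
auc = (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"cosine-baseline AUC on held-out eval edges: {auc:.3f}")
```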
Case study: NLI (case_study_nli/)
Frozen 576-cell evaluation grid used as the NLI case study in our paper: 48 NLI models × 12 NLI datasets. Each cell was produced by an LLM-coder pipeline that emitted per-example predictions and a top-level accuracy.
Layout
| path | description |
|---|---|
| `case_study_nli/raw_evals/<model>_<dataset>_accuracy/predictions.json` | Per-example `{idx, prediction, ground_truth}` |
| `case_study_nli/raw_evals/<model>_<dataset>_accuracy/results.json` | `{accuracy: float}` (plus `previous_accuracy` for the 9 bug cells) |
| `case_study_nli/all_results_summary_fixed.json` | Cleaned aggregate: 576 rows, 9 bug fixes applied, `masked` flags for cells that cannot be scored 3-way |
| `case_study_nli/scripts/rebuild_nli_summary.py` | Raw → fixed aggregate |
| `case_study_nli/scripts/plot_nli_heatmap.py` | 45-model heatmap (3 models with degenerate cells excluded by `--min-cell 0.05`) |
| `case_study_nli/scripts/plot_nli_matrix_scree.py` | Double-centered SVD scree plot |
| `case_study_nli/figures/nli_results_heatmap.{png,pdf}` | Main heatmap |
| `case_study_nli/figures/nli_matrix_scree.{png,pdf}` | Scree plot |
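Every cell is self-contained, so the reported accuracy can be re-derived from the per-example predictions. A minimal sketch, assuming `predictions.json` is a list of the `{idx, prediction, ground_truth}` records described above:

```python
import json, os

RAW = "case_study_nli/raw_evals"
cell = sorted(os.listdir(RAW))[0]  # any <model>_<dataset>_accuracy directory

preds = json.load(open(f"{RAW}/{cell}/predictions.json"))
reported = json.load(open(f"{RAW}/{cell}/results.json"))["accuracy"]

# Re-derive accuracy from the per-example records and compare to results.json.
recomputed = sum(p["prediction"] == p["ground_truth"] for p in preds) / len(preds)
print(f"{cell}: reported={reported:.4f} recomputed={recomputed:.4f}")
```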
Known issues in the raw evaluations
- 9 bug-fix cells: the top-level `accuracy` was overwritten with 0, but `previous_accuracy > 0` holds the real value; the fixed aggregate uses `previous_accuracy` (see the sketch after this list).
- Binary-output models on 3-way datasets: three zero-shot classifiers only emit 2 labels; their MNLI / SNLI / ANLI / NLI_FEVER cells are masked in the aggregate (not directly comparable to 3-way models).
- 2 true failures: `microsoft/deberta-v3-base` on `allenai/scitail` and `araag2/MedNLI` produced degenerate predictions.
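The fix for the zeroed-out cells lives in `scripts/rebuild_nli_summary.py`; conceptually it reduces to preferring `previous_accuracy` whenever the top-level `accuracy` was overwritten with 0. A minimal sketch of that rule (the real script additionally applies the 3-way masking described above):

```python
import json

def corrected_accuracy(results_path: str) -> float:
    """Return the corrected accuracy for one raw_evals cell."""
    r = json.load(open(results_path))
    acc = r.get("accuracy", 0.0)
    prev = r.get("previous_accuracy")
    # 9 bug cells: accuracy was overwritten with 0, but previous_accuracy
    # still holds the real value; prefer it in that case.
    if acc == 0 and prev:
        return prev
    return acc
```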
Reproducibility note
The per-cell evaluation scripts were not uniformly persisted to disk: cells run in the January batch retained them, but the April re-runs (the majority) executed inline via an agent and only wrote results back. We therefore ship just the frozen outputs (`predictions.json` + `results.json`) rather than an incomplete script set. The processing scripts in `case_study_nli/scripts/` are sufficient to regenerate the aggregate and figures from the per-cell outputs.
Reproduce aggregate + figures
pip install datasets numpy matplotlib huggingface_hub
python case_study_nli/scripts/rebuild_nli_summary.py \
  --src case_study_nli/raw_evals \
  --out case_study_nli/all_results_summary_fixed.json
python case_study_nli/scripts/plot_nli_heatmap.py \
  --input case_study_nli/all_results_summary_fixed.json \
  --out-dir case_study_nli/figures
python case_study_nli/scripts/plot_nli_matrix_scree.py \
  --input case_study_nli/all_results_summary_fixed.json \
  --out-dir case_study_nli/figures
Verification bench (verification_bench/)
Full agent-based eval reproductions: 263 (model, dataset, metric) cells drawn from a stratified "hard" sample of the artifact graph. A skill-based multi-agent system (driver: GPT-5.2, tool mode: multiturn_metadatatool) attempts to reproduce each published accuracy score by locating the dataset, loading the model, writing an eval script, and reporting a metric.
Layout
verification_bench/
└── skills_multiagent_gpt-5.2_metadatatool/
    └── <model>_<dataset>_<metric>/
        ├── metadata.json        # (model, dataset, metric) spec
        ├── run_eval.py          # agent-written evaluation script
        ├── predictions.json     # per-example predictions
        ├── results.json         # top-level metric value
        ├── run.log              # agent trajectory log
        └── results_full.json    # rich metric breakdown (4 cells only)
Use
import json, os

ROOT = "verification_bench/skills_multiagent_gpt-5.2_metadatatool"
for cell in sorted(os.listdir(ROOT)):
    results_path = f"{ROOT}/{cell}/results.json"
    if not os.path.isfile(results_path):
        continue  # 3 of 266 cells failed with agent / runtime errors
    meta = json.load(open(f"{ROOT}/{cell}/metadata.json"))
    result = json.load(open(results_path))
    print(meta["model_id"], meta["dataset_id"], result)
Notes
- 263 / 266 cell dirs contain a complete `results.json`; the remaining 3 failed with agent / runtime errors.
- Cell directories are named `<model>_<dataset>_<metric>`, with `/` replaced by `_` in HuggingFace IDs.
- This suite is the best-performing agent configuration we evaluated (156 cells above accuracy 0.5, 97 above 0.8); scores are properly normalised to `[0, 1]` (the sketch below recomputes these counts).
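The headline counts in the last note can be recomputed from the shipped outputs. A hedged sketch; it assumes each complete `results.json` is a flat JSON object whose first numeric value is the cell's normalised metric, which may not hold for every cell:

```python
import json, os

ROOT = "verification_bench/skills_multiagent_gpt-5.2_metadatatool"

scores = []
for cell in sorted(os.listdir(ROOT)):
    rp = os.path.join(ROOT, cell, "results.json")
    if not os.path.isfile(rp):
        continue  # skip cells whose agent / runtime run failed
    r = json.load(open(rp))
    # Assumption: take the first numeric field as the cell's metric in [0, 1].
    vals = [v for v in r.values() if isinstance(v, (int, float))]
    if vals:
        scores.append(vals[0])

print(len(scores), "scored cells;",
      sum(s > 0.5 for s in scores), "above 0.5;",
      sum(s > 0.8 for s in scores), "above 0.8")
```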