$OneMillion-Bench: How Far are Language Agents from Human Experts?
Abstract
A new benchmark evaluates language models on complex, real-world professional tasks requiring multi-step reasoning, evidence resolution, and domain-specific decision-making across multiple industries.
As language models (LMs) evolve from chat assistants into long-horizon agents capable of multi-step reasoning and tool use, existing benchmarks remain largely confined to structured or exam-style tasks that fall short of real-world professional demands. To this end, we introduce $OneMillion-Bench, a benchmark of 400 expert-curated tasks spanning Law, Finance, Industry, Healthcare, and Natural Science, built to evaluate agents on economically consequential scenarios. Unlike prior work, the benchmark requires retrieving authoritative sources, resolving conflicting evidence, applying domain-specific rules, and making decisions under real-world constraints, where correctness depends as much on the reasoning process as on the final answer. We adopt a rubric-based evaluation protocol that scores factual accuracy, logical coherence, practical feasibility, and professional compliance, and we focus on expert-level problems to ensure meaningful differentiation across agents. Together, $OneMillion-Bench provides a unified testbed for assessing agentic reliability, professional depth, and practical readiness in domain-intensive scenarios.
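The rubric-based protocol could be sketched as a weighted combination of per-criterion scores. The four criterion names come from the abstract; the equal weights and the 0-to-1 scale below are assumptions for illustration, not the paper's actual protocol.

```python
# Hypothetical sketch of a rubric-based scoring scheme. Criterion names are
# taken from the abstract; weights and scale are illustrative assumptions.
RUBRIC_WEIGHTS = {
    "factual_accuracy": 0.25,
    "logical_coherence": 0.25,
    "practical_feasibility": 0.25,
    "professional_compliance": 0.25,
}

def rubric_score(criterion_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (each in [0, 1]) into one weighted score."""
    for name in RUBRIC_WEIGHTS:
        if name not in criterion_scores:
            raise KeyError(f"missing rubric criterion: {name}")
    return sum(w * criterion_scores[name] for name, w in RUBRIC_WEIGHTS.items())

# Example: strong reasoning undermined by a professional-compliance slip.
score = rubric_score({
    "factual_accuracy": 0.9,
    "logical_coherence": 1.0,
    "practical_feasibility": 0.8,
    "professional_compliance": 0.5,
})  # ≈ 0.8
```

A scheme like this makes the trade-off explicit: an agent cannot compensate for a compliance failure with eloquent reasoning alone, which matches the paper's emphasis on process over final answer.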
Community
Just read through the $OneMillion-Bench report and honestly it reframes how we should be thinking about agentic evaluation.
The core insight is simple but sharp: instead of asking "did the model get the right answer?", ask "how much would a senior professional charge to do this work?" They built 400 tasks across Law, Finance, Healthcare, Natural Science, and Industry — curated by actual domain experts over 2,000+ hours — and priced each task based on real market wages. Total value: over $1M. Hence the name.
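The wage-based pricing idea above amounts to simple arithmetic: a task's dollar value is an expert's market hourly rate times the hours the work would take. The rates, hours, and task names below are made-up illustrations, not figures from the report.

```python
# Hypothetical illustration of wage-based task pricing: value = rate * hours.
# All numbers here are invented for the example, not taken from the benchmark.
def task_value(hourly_rate_usd: float, hours: float) -> float:
    """Estimate what a senior professional would charge for a task."""
    return hourly_rate_usd * hours

tasks = [
    ("contract dispute memo (Law)", 400, 6),
    ("portfolio stress test (Finance)", 250, 8),
    ("treatment protocol review (Healthcare)", 300, 5),
]

total = sum(task_value(rate, hrs) for _, rate, hrs in tasks)
# 400*6 + 250*8 + 300*5 = 2400 + 2000 + 1500 = 5900
```

Summed over 400 such tasks, modest per-task figures like these plausibly reach the $1M+ total the benchmark is named for.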