Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.

Error code: DatasetGenerationCastError

Dataset generation failed with a cast error: all data files in a single configuration must have the same columns, but `data/mcp_servers.csv` introduces 6 new columns ({'github_stars', 'name', 'github_url', 'author', 'last_updated', 'language'}) and lacks 4 columns ({'models_tested', 'max_score', 'methodology', 'benchmark'}) relative to the schema inferred from `data/benchmarks.csv`. Either edit the data files to have matching columns, or separate them into different configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
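The mismatch the viewer reports can be reproduced locally before pushing a fix. Below is a minimal sketch using only the standard library; the helper names (`csv_header`, `compare_headers`) are illustrative, not part of any HF tooling, and the file names mirror those in the error message:

```python
import csv

def csv_header(path):
    """Return the header row of a CSV file as a set of column names."""
    with open(path, newline="") as f:
        return set(next(csv.reader(f)))

def compare_headers(reference, other):
    """Report columns added to / missing from `other` relative to `reference`."""
    ref, oth = csv_header(reference), csv_header(other)
    return {"new": oth - ref, "missing": ref - oth}

# e.g. compare_headers("data/benchmarks.csv", "data/mcp_servers.csv")
# should report 6 new and 4 missing columns, matching the viewer error.
```

Running this over each pair of CSVs identifies exactly which files need either aligned columns or their own configuration.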


Preview of data/benchmarks.csv:

| benchmark | slug | category | description | max_score | models_tested | methodology |
|---|---|---|---|---|---|---|
| MMLU | mmlu | Knowledge | Massive Multitask Language Understanding - 57 subjects from STEM to humanities | 100 | 380 | multiple_choice |
| MMLU-Pro | mmlu-pro | Knowledge | Harder MMLU with 10 answer choices and more reasoning-heavy questions | 100 | 245 | multiple_choice |
| HumanEval | humaneval | Coding | Python function completion from docstrings - 164 problems | 100 | 340 | pass_at_1 |
| MBPP | mbpp | Coding | Mostly Basic Programming Problems - 974 crowd-sourced Python tasks | 100 | 310 | pass_at_1 |
| SWE-bench Verified | swe-bench-verified | Coding | Real GitHub issue resolution on popular Python repos | 100 | 180 | execution |
| GPQA Diamond | gpqa-diamond | Science | Graduate-level questions in physics chemistry and biology | 100 | 290 | multiple_choice |
| MATH | math | Mathematics | Competition mathematics across 7 difficulty levels | 100 | 350 | exact_match |
| GSM8K | gsm8k | Mathematics | Grade school math word problems - 8.5K examples | 100 | 370 | exact_match |
| ARC-Challenge | arc-challenge | Reasoning | AI2 Reasoning Challenge - grade-school science questions | 100 | 360 | multiple_choice |
| HellaSwag | hellaswag | Reasoning | Sentence completion requiring commonsense reasoning | 100 | 355 | multiple_choice |
| WinoGrande | winogrande | Reasoning | Pronoun resolution requiring world knowledge | 100 | 340 | accuracy |
| TruthfulQA | truthfulqa | Safety | Measures tendency to generate false but plausible answers | 100 | 320 | mc_accuracy |
| BBH | bbh | Reasoning | BIG-Bench Hard - 23 challenging tasks from BIG-Bench | 100 | 300 | exact_match |
| DROP | drop | Reading | Discrete Reasoning Over Paragraphs - reading comprehension with math | 100 | 280 | f1_score |
| MGSM | mgsm | Multilingual | Multilingual Grade School Math in 10 languages | 100 | 220 | exact_match |
| IFEval | ifeval | Instruction | Instruction Following Evaluation - verifiable instruction constraints | 100 | 260 | strict_accuracy |
| MuSR | musr | Reasoning | Multi-Step Soft Reasoning - complex multi-hop problems | 100 | 200 | accuracy |
| MMMU | mmmu | Multimodal | Massive Multi-discipline Multimodal Understanding | 100 | 120 | accuracy |
| LiveCodeBench | livecodebench | Coding | Contamination-free coding benchmark from recent competitions | 100 | 190 | pass_at_1 |
| Aider Polyglot | aider-polyglot | Coding | Multi-language code editing benchmark using Aider framework | 100 | 160 | edit_accuracy |
| Arena ELO | arena-elo | Human Preference | Chatbot Arena crowdsourced human preference ratings | 2,000 | 280 | elo_rating |
| MT-Bench | mt-bench | Conversation | Multi-turn conversation quality scored by GPT-4 | 10 | 310 | gpt4_judge |
| AlpacaEval 2.0 | alpacaeval-2 | Instruction | Length-controlled win rate against GPT-4 Turbo baseline | 100 | 250 | lc_win_rate |
| SimpleQA | simpleqa | Factuality | Short-form factual question answering with verifiable answers | 100 | 230 | exact_match |
| BFCL | bfcl | Tool Use | Berkeley Function Calling Leaderboard - API and tool use | 100 | 180 | ast_accuracy |
| Tau-bench | tau-bench | Agentic | Real-world agent task completion across airline and retail domains | 100 | 140 | task_success |
| WebArena | webarena | Agentic | Web browsing agent tasks on realistic websites | 100 | 90 | task_completion |
| AIME 2024 | aime-2024 | Mathematics | American Invitational Mathematics Examination problems | 100 | 200 | exact_match |
| AMC 2023 | amc-2023 | Mathematics | American Mathematics Competition problems | 100 | 220 | exact_match |
| Codeforces | codeforces | Coding | Competitive programming problems rated by difficulty | 3,000 | 150 | elo_rating |
| RULER | ruler | Long Context | Synthetic long-context retrieval and reasoning tasks | 100 | 110 | accuracy |
| NIAH | niah | Long Context | Needle in a Haystack - information retrieval in long documents | 100 | 130 | recall |
| LongBench | longbench | Long Context | Real-world long document understanding tasks | 100 | 120 | f1_score |
| Chatbot Arena Hard | arena-hard | Human Preference | Challenging subset of Arena prompts with high separability | 100 | 210 | win_rate |
| NATURAL | natural | Tool Use | Natural language to API call translation | 100 | 160 | accuracy |
| ToolBench | toolbench | Tool Use | Large-scale tool use benchmark with 16K real APIs | 100 | 130 | pass_rate |
| MedQA | medqa | Domain | US Medical Licensing Exam style questions | 100 | 240 | accuracy |
| LegalBench | legalbench | Domain | Legal reasoning across 162 tasks | 100 | 180 | accuracy |
| FinanceBench | financebench | Domain | Financial analysis and reasoning from SEC filings | 100 | 150 | accuracy |
| CLRS | clrs | Reasoning | Classical algorithm reasoning and step tracing | 100 | 100 | accuracy |
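Once loaded, rows like those above are easy to slice by category. A minimal sketch using a hand-copied subset of the table (the `rows` values are copied from benchmarks.csv above; the grouping code is illustrative, not part of the dataset):

```python
from collections import defaultdict

# Hand-copied subset of benchmarks.csv: (benchmark, category, models_tested)
rows = [
    ("MMLU", "Knowledge", 380),
    ("HumanEval", "Coding", 340),
    ("SWE-bench Verified", "Coding", 180),
    ("GSM8K", "Mathematics", 370),
    ("MATH", "Mathematics", 350),
]

# Group benchmark names by category
by_category = defaultdict(list)
for name, category, tested in rows:
    by_category[category].append(name)

# Per category, the widest model coverage among its benchmarks
coverage = {cat: max(t for _, c, t in rows if c == cat) for cat in by_category}
```

The same pattern scales to the full 40-benchmark table, e.g. to find which categories have thin model coverage.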
Preview of data/mcp_servers.csv (cast against the benchmarks schema, so columns outside that schema render as null; only the surviving fields are shown):

| slug | category | description |
|---|---|---|
| filesystem | developer-tools | Read write and manage local filesystem operations via MCP |
| github | developer-tools | Interact with GitHub repositories issues and pull requests |
| postgres | databases | Query and manage PostgreSQL databases with schema introspection |
| brave-search | search | Web search via Brave Search API with summarization |
| puppeteer | web-automation | Browser automation for web scraping and testing via Puppeteer |
| slack | communication | Read and send Slack messages manage channels and users |
| google-maps | location | Geocoding directions and place search via Google Maps API |
| sentry | monitoring | Access Sentry error tracking events and issue data |
| memory | knowledge | Persistent knowledge graph for long-term memory storage |
| sqlite | databases | Query and manage SQLite databases with full SQL support |
| fetch | web-automation | HTTP requests to any URL with response parsing and caching |
| sequential-thinking | reasoning | Step-by-step reasoning and problem decomposition tool |
| notion | productivity | Read and write Notion pages databases and blocks |
| linear | project-management | Manage Linear issues projects and cycles via MCP |
| supabase | databases | Query and manage Supabase projects databases and auth |
| vercel | deployment | Manage Vercel deployments projects and environment variables |
| stripe | payments | Access Stripe payment data customers and subscriptions |
| jira | project-management | Manage Jira issues sprints and project boards |
| docker | devops | Manage Docker containers images and compose stacks |
| aws | cloud | Interact with AWS services including S3 Lambda and DynamoDB |

Preview of data/models.csv (only the model identifier survives the cast): gpt-4o, gpt-4o-mini, gpt-4-5, o3, o4-mini, claude-sonnet-4, claude-opus-4, claude-3-5-haiku, claude-opus-4-6, gemini-2-0-flash, gemini-2-5-pro, gemini-2-5-flash, llama-4-maverick, llama-4-scout, deepseek-v3, deepseek-r1, grok-3, grok-3-mini, mistral-large-2, command-r-plus

Preview of data/providers.csv (only the provider identifier survives the cast): openai, anthropic, google, meta, deepseek, xai, mistral, cohere, amazon, ai21-labs, alibaba, zhipu-ai, 01-ai, nvidia, reka, writer, inflection, databricks, together-ai, perplexity

AI Model Benchmarks & Pricing Dataset 2026

A comprehensive survey of large language model performance and economics, maintained by BenchGecko.

What's Inside

| File | Records | Description |
|---|---|---|
| data/models.csv | 20 | Top AI models with benchmark scores and API pricing |
| data/providers.csv | 20 | AI model providers with metadata |
| data/benchmarks.csv | 40 | Benchmark suites with methodology |
| data/mcp_servers.csv | 20 | Model Context Protocol servers |

This is a sample from the full dataset. The complete dataset covers thousands of models, hundreds of providers, and over a hundred benchmarks, updated every two hours at benchgecko.ai.

Fields (models.csv)

| Column | Type | Description |
|---|---|---|
| name | String | Model display name |
| provider | String | Organization that created the model |
| input_price | Float | USD per 1M input tokens |
| output_price | Float | USD per 1M output tokens |
| context_window | Integer | Maximum context length in tokens |
| average_score | Float | Weighted average across all benchmarks (0-100) |
| mmlu_score | Float | MMLU benchmark score |
| humaneval_score | Float | HumanEval coding score |
| gpqa_score | Float | GPQA Diamond science score |
| math_score | Float | MATH competition score |
| open_source | Boolean | Whether weights are publicly available |
| release_date | Date | Public release date |

Quick Start

```python
from datasets import load_dataset

dataset = load_dataset("BenchGeckoAI/ai-model-benchmarks-2026")
models = dataset["train"]

# Find the best open-source model
open_models = [m for m in models if m["open_source"]]
best = max(open_models, key=lambda m: m["average_score"])
print(f"Best open model: {best['name']} ({best['average_score']})")
```

Use Cases

  • Model Selection: Compare benchmark scores across evaluation types before deploying
  • Cost Analysis: Find the best price-to-performance ratio across providers
  • Market Research: Track the AI model landscape and provider ecosystem
  • Academic Research: Study capability trajectories and scaling laws
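The cost-analysis use case reduces to a ratio of benchmark score to blended token price. A minimal sketch in the shape of models.csv rows; the prices, scores, and the 3:1 input:output mix are illustrative assumptions, not values from this dataset:

```python
# Hypothetical records shaped like models.csv rows (illustrative values only)
models = [
    {"name": "model-a", "input_price": 2.50, "output_price": 10.00, "average_score": 82.0},
    {"name": "model-b", "input_price": 0.15, "output_price": 0.60, "average_score": 68.0},
]

def blended_price(m, output_ratio=0.25):
    """USD per 1M tokens, assuming a 3:1 input:output token mix."""
    return (1 - output_ratio) * m["input_price"] + output_ratio * m["output_price"]

def score_per_dollar(m):
    """Benchmark points bought per blended dollar."""
    return m["average_score"] / blended_price(m)

best_value = max(models, key=score_per_dollar)
```

Tune `output_ratio` to your workload; chat-heavy traffic produces more output tokens than retrieval-style traffic, which shifts the ranking.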

Full Dataset

This sample covers 20 models. The full live dataset, updated every two hours, is available at benchgecko.ai.

Methodology

Benchmark scores are sourced from original model technical reports and cross-verified with open-source evaluation frameworks (EleutherAI lm-evaluation-harness, BigCode HumanEval+). Pricing is collected from official API documentation and updated within 48 hours of changes.
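The card describes `average_score` as a weighted average across benchmarks. A hedged sketch of how such an aggregate might be computed; the weights and scores below are assumptions for illustration, not this dataset's actual methodology:

```python
def weighted_average(scores, weights):
    """Weighted mean over benchmarks that have both a score and a weight."""
    present = [(s, weights[k]) for k, s in scores.items()
               if s is not None and k in weights]
    total_w = sum(w for _, w in present)
    return sum(s * w for s, w in present) / total_w

# Illustrative weights and per-benchmark scores (assumed, not from the dataset)
weights = {"mmlu": 0.4, "humaneval": 0.3, "gpqa": 0.2, "math": 0.1}
scores = {"mmlu": 88.0, "humaneval": 90.0, "gpqa": None, "math": 72.0}
```

Renormalizing by `total_w` keeps the aggregate on the 0-100 scale even when a model is missing some benchmark scores.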

Citation

```bibtex
@dataset{benchgecko2026,
  author = {BenchGecko},
  title = {AI Model Benchmarks and Pricing Dataset 2026},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/BenchGeckoAI/ai-model-benchmarks-2026}
}
```

License

CC BY 4.0. Attribution: BenchGecko (https://benchgecko.ai)
