AI Value Index

Data as of 28 February 2026

Models ranked by SWE-bench Verified score (%) divided by blended cost per 1M tokens. Higher = more coding capability per dollar.

| # | Model | Access | Maker | SWE % | In $/1M | Out $/1M | Blended $/1M | Value Index |
|---|-------|--------|-------|-------|---------|----------|--------------|-------------|
| 1 | Devstral Small 2 | Open | Mistral | 68% | $0.10 | $0.30 | $0.15 | 453.3 |
| 2 | DeepSeek V3.2 | Open | DeepSeek | 73.1% | $0.14 | $0.28 | $0.18 | 417.7 |
| 3 | Qwen3-Coder | Open | Alibaba | 70.6% | $0.21 | $0.83 | $0.36 | 193.4 |
| 4 | Devstral 2 | Open | Mistral | 72.2% | $0.40 | $2.00 | $0.80 | 90.3 |
| 5 | Kimi K2.5 | Open | Moonshot | 76.8% | $0.60 | $3.00 | $1.20 | 64.0 |
| 6 | GLM-4.7 | Open | Zhipu AI | 73.8% | $0.78 | $2.86 | $1.30 | 56.8 |
| 7 | Gemini 3.1 Pro | Closed | Google | 80.6% | $2.00 | $12.00 | $4.50 | 17.9 |
| 8 | Gemini 3 Pro | Closed | Google | 74.2% | $2.00 | $12.00 | $4.50 | 16.5 |
| 9 | GPT-5.3-Codex | Closed | OpenAI | 75.4% | $1.75 | $14.00 | $4.81 | 15.7 |
| 10 | GPT-5.2 | Closed | OpenAI | 75.4% | $1.75 | $14.00 | $4.81 | 15.7 |
| 11 | Claude Sonnet 4.5 | Closed | Anthropic | 77.2% | $3.00 | $15.00 | $6.00 | 12.9 |
| 12 | Claude Opus 4.5 | Closed | Anthropic | 80.9% | $5.00 | $25.00 | $10.00 | 8.1 |
| 13 | Claude Opus 4.6 | Closed | Anthropic | 80.8% | $5.00 | $25.00 | $10.00 | 8.1 |
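The index is reproducible from the table's own columns. A minimal sketch follows, with one assumption: the page does not state how "blended" cost is computed, but every row is consistent with a 3:1 input-to-output token weighting (0.75 × input price + 0.25 × output price), and each Value Index matches SWE% divided by the unrounded blended cost.

```python
def blended_cost(in_price: float, out_price: float, in_weight: float = 0.75) -> float:
    """Blended $ per 1M tokens.

    Assumption: a 3:1 input:output token mix (in_weight=0.75), inferred
    from the table's figures; the source does not state the weighting.
    """
    return in_weight * in_price + (1 - in_weight) * out_price

def value_index(swe_pct: float, in_price: float, out_price: float) -> float:
    """SWE-bench % per blended dollar (computed before rounding the cost)."""
    return swe_pct / blended_cost(in_price, out_price)

# Devstral Small 2: 68% SWE-bench, $0.10 in / $0.30 out per 1M tokens
print(round(value_index(68, 0.10, 0.30), 1))   # 453.3
# DeepSeek V3.2: 73.1%, $0.14 / $0.28 (blended $0.175, shown rounded as $0.18)
print(round(value_index(73.1, 0.14, 0.28), 1)) # 417.7
```

Note that small mismatches such as Qwen3-Coder's 193.4 (vs. 70.6 ÷ 0.36 ≈ 196.1) disappear once the index is taken over the unrounded blended cost (70.6 ÷ 0.365 ≈ 193.4).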
Best Overall Value: Devstral Small 2 (453.3 idx)
Best Open Source Value: Devstral Small 2 (453.3 idx)
Highest SWE-bench: Claude Opus 4.5 (80.9%)

Data compiled 28 February 2026. SWE-bench Verified scores from official leaderboards, vals.ai, swe-rebench.com, and model announcements. Pricing from official API docs and pricepertoken.com. Open-source models are priced at hosted API rates; self-hosting would typically be cheaper.