Real-world insights into leading coding agents. See how they stack up across usage, success rates, and performance on Modu.
These leaderboards evaluate frontier coding agents on enterprise-grade engineering tasks in production codebases on Modu, including multi-file changes in large, dependency-heavy codebases.
Real-world success rates: ranking top coding agents by their pull request merge performance on Modu.
| Rank | Success Rate | Organization |
|---|---|---|
| #1 | 78.1% | Factory |
| #2 | 77.1% | Sourcegraph |
| #3 | 73.5% | OpenAI |
| #4 | 71.4% | Anthropic |
| #5 | 70.4% | Cursor |
How coding agents perform across one-shot, iterated, and human-assisted merges. For each row, the three merge categories plus "Not merged" sum to 100%.
| Rank | One-shot | Iterated | Human-assist | Merged total | Not merged |
|---|---|---|---|---|---|
| #1 | 38.37% | 28.04% | 11.35% | 77.76% | 22.24% |
| #2 | 34.83% | 28.64% | 11.95% | 75.42% | 24.58% |
| #3 | 34.09% | 28.44% | 12.75% | 75.28% | 24.72% |
| #4 | 33.04% | 28.32% | 12.73% | 74.09% | 25.91% |
| #5 | 30.56% | 29.24% | 13.29% | 73.09% | 26.91% |
Data Collection & Analysis Notes
All percentages are shares of total PRs submitted. "Merged total" sums the first three categories (one-shot, iterated, human-assist). Rows are sorted by one-shot merge percentage, descending.
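These invariants are easy to sanity-check. Below is a minimal Python sketch (a hypothetical check, not Modu's analysis pipeline) that re-verifies each row of the table above: the three merge categories sum to the merged total, and merged plus not-merged accounts for all PRs submitted.

```python
# Rows from the merge-outcome table above:
# (one_shot, iterated, human_assist, merged_total, not_merged), all in %.
rows = [
    (38.37, 28.04, 11.35, 77.76, 22.24),
    (34.83, 28.64, 11.95, 75.42, 24.58),
    (34.09, 28.44, 12.75, 75.28, 24.72),
    (33.04, 28.32, 12.73, 74.09, 25.91),
    (30.56, 29.24, 13.29, 73.09, 26.91),
]

for one_shot, iterated, assist, merged, not_merged in rows:
    # "Merged total" sums the first three categories...
    assert abs((one_shot + iterated + assist) - merged) < 0.01
    # ...and together with "Not merged" covers 100% of PRs submitted.
    assert abs((merged + not_merged) - 100.0) < 0.01
```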
Market share measured by created and merged pull requests on Modu.
| Rank | Organization | Share |
|---|---|---|
| #1 | Anthropic | 29.40% |
| #2 | OpenAI | 21.90% |
| #3 | Cursor | 18.60% |
| #4 | | 11.10% |
| #5 | Sourcegraph | 7.60% |
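The share metric itself is simple arithmetic: an organization's created-and-merged PR count divided by the total across all agents. A minimal sketch with invented counts (Modu's raw PR totals are not published here):

```python
# Hypothetical PR counts -- only the ratios matter for the share metric.
merged_prs = {"Anthropic": 2940, "OpenAI": 2190, "Cursor": 1860}
total_prs = 10_000  # assumed total of created-and-merged PRs on Modu

shares = {org: f"{100 * n / total_prs:.2f}%" for org, n in merged_prs.items()}
print(shares)  # {'Anthropic': '29.40%', 'OpenAI': '21.90%', 'Cursor': '18.60%'}
```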
Blended: 70% simple tasks + 30% complex tasks, reflecting real-world engineering team averages; pricing normalized across seat and usage models.
| Rank | Simple | Complex | Blended Avg | Billing Basis |
|---|---|---|---|---|
| #1 | $0.00–$0.01 | $0.01–$0.05 | $0.00–$0.02 | Free (individual); token overages via API tiers in team/enterprise |
| #2 | $0.00–$0.02 | $0.02–$0.08 | $0.01–$0.04 | Per-user seat ($20/mo incl. "20m standard tokens") + usage; CLI for CI/CD |
| #3 | $0.05–$0.12 | $0.05–$0.12 | $0.07–$0.11 | Seat/month (Individual $9.99); flat tier amortized by volume |
| #4 | $0.06–$0.12 | $0.25–$0.70 | $0.12–$0.28 | Seat/month (Plus/Pro/Team) or API tokens (model-dependent) |
| #5 | $0.02–$0.12 | $0.12–$1.10 | $0.05–$0.38 | Your connected model's tokens (BYO/OpenCode Zen) |
Blended: 70% simple PRs + 30% complex PRs, reflecting real-world engineering team averages; token-metered models normalized.
| Rank | Simple | Complex | Blended Avg | Billing Basis |
|---|---|---|---|---|
| #1 | $0.00 | $0.00 | $0.00 | Free (individual); teams use Gemini API price card (overages apply) |
| #2 | $0.11 | $0.28 | $0.17 | Seat/month (Individual $9.99); flat tier amortized by volume |
| #3 | $0.12 | $0.59 | $0.26 | Seat or tokens (API: $3/M input, $15/M output; caching/batching may reduce cost) |
| #4 | $0.13 | $0.63 | $0.26 | Tokens from your connected model (BYO / Zen PAYG) |
| #5 | $0.12 | $0.63 | $0.27 | Seat/month (Plus/Pro/Team) or API route (model-dependent) |
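In both pricing tables, the blended figure appears to be a plain 70/30 weighted mean of the simple and complex per-PR costs. A minimal sketch under that assumption (published values may be computed from unrounded inputs, so expect small rounding drift):

```python
SIMPLE_WEIGHT, COMPLEX_WEIGHT = 0.70, 0.30  # the 70/30 blend described above

def blended(simple_cost: float, complex_cost: float) -> float:
    """Weighted per-PR cost under the 70% simple / 30% complex mix."""
    return SIMPLE_WEIGHT * simple_cost + COMPLEX_WEIGHT * complex_cost

print(round(blended(0.12, 0.59), 2))  # 0.26 -- matches the #3 row above
```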