Grade your team.
A letter-grade report card on your team's velocity and AI adoption. Computed from 90 days of pull-request metadata. No code is read.
Connect GitHub
Read-only access · Your code is never read · Revocable from GitHub settings
No GitHub access? Take the 2-minute self-assessment →
How we score
What we look at
Pull-request metadata and a few well-known config files (CLAUDE.md, .cursorrules, package.json). Your source code is never read.
- AI-First Score (0–100) — a weighted blend of pull-request references to known AI tools (40%), AI convention files in your repositories (30%), and AI SDK dependencies (30%).
- Median pull-request cycle time — the time from a pull request opening to merging. Healthy teams ship in under a day.
- Median time-to-first-review — how long a pull request waits before its first review. Healthy teams respond within four hours.
- Pull requests per engineer per week — a throughput proxy. Healthy teams ship four or more per engineer each week.
- % of pull requests touching tests — how often shipped work carries test coverage. Healthy teams clear sixty percent.
- Median pull-request size — total lines changed in a typical pull request. Smaller diffs review faster and ship cleaner; under two hundred lines is the target.
- Stale pull-request ratio — the share of open pull requests older than fourteen days. Healthy teams stay below ten percent.
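The scorecard above can be sketched as a small function. Only the 40/30/30 weights and the healthy-team thresholds come from the copy; the function names, metric keys, and the assumption that each AI signal is normalized to a 0–1 value are illustrative.

```python
# Hedged sketch of the grading described above. The 40/30/30 weights and
# the "healthy" thresholds are from the page copy; everything else
# (names, keys, 0-1 normalization of each signal) is an assumption.

def ai_first_score(pr_ai_refs: float, convention_files: float, sdk_deps: float) -> float:
    """Blend three normalized 0-1 signals into a 0-100 AI-First Score."""
    blended = 0.40 * pr_ai_refs + 0.30 * convention_files + 0.30 * sdk_deps
    return round(100 * blended, 1)

# Healthy-team thresholds, transcribed from the metric list above.
HEALTHY = {
    "median_cycle_time_hours":   lambda v: v < 24,   # ship in under a day
    "median_first_review_hours": lambda v: v < 4,    # first review within 4 hours
    "prs_per_engineer_per_week": lambda v: v >= 4,   # 4+ PRs per engineer weekly
    "pct_prs_touching_tests":    lambda v: v > 60,   # over 60% carry tests
    "median_pr_size_lines":      lambda v: v < 200,  # under 200 lines changed
    "stale_pr_ratio_pct":        lambda v: v < 10,   # under 10% open 14+ days
}

def healthy_checks(metrics: dict) -> dict:
    """Return a pass/fail flag per metric for a team's 90-day window."""
    return {name: check(metrics[name]) for name, check in HEALTHY.items()}

print(ai_first_score(0.5, 1.0, 0.0))  # half of PRs reference AI tools,
                                      # convention files present, no SDKs → 50.0
```

For example, a team whose PRs reference AI tools half the time, that has convention files, but no AI SDK dependencies scores 0.40 × 0.5 + 0.30 × 1.0 + 0.30 × 0.0 = 0.50, i.e. 50 out of 100.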