The AI cost vs. payroll math your CFO is about to do.

Project your team's AI inference spend over 12–24 months at realistic adoption-growth rates. Compare it to payroll. See the layoff arithmetic that already drove $700B in big-tech capex commitments and 38,000 AI-attributed cuts in Q1 2026 alone.

Your team
Devs actively using AI tools (Copilot, Cursor, Claude Code, Aider, etc.)
Average API spend per active dev right now. If you don't know, $200 is the 2026 mid-range.
$ /mo
How much per-dev usage compounds month-over-month as adoption deepens. Industry: +320% over 24 months ≈ +6%/mo conservative, +15%/mo aggressive.
% / mo
Salary + benefits + taxes + overhead. Default is US-loaded; ~R$300k for senior BR-loaded.
$ /yr
How far out to project. CFOs typically plan 12–24 months for opex commitments.
months
The numbers
Headline
Today's monthly AI spend (team total)
— per dev × — devs
Monthly AI spend at end of horizon
Cumulative AI spend over horizon
AI cost as % of payroll, end of horizon
Devs whose annual salary equals end-of-horizon AI burn
— this is the "absorb the cost via headcount" math, not a recommendation.
Monthly cost over time
When the AI bill curve crosses your payroll line, the math forces a decision.
AI inference · Team payroll (monthly)

How this is calculated

Three inputs do most of the work: per-dev monthly cost today, monthly compounding growth rate, and horizon. The model is intentionally simple — anything more complex hides assumptions inside formulas.

The formula

cost_at_month_n = devs × cost_today × (1 + growth)^n

Cumulative spend is the sum across all months. Payroll uses loaded annual salary × number of devs. The "layoff equivalent" divides end-of-horizon annual AI burn by per-dev loaded salary — it's the number of devs whose payroll equals what AI is consuming.
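The model above can be sketched in a few lines of Python. This is a minimal illustration of the formulas as described, not the site's actual code; the function name, argument names, and the example inputs (20 devs, $200/mo, +6%/mo, 24 months, $250k loaded salary) are mine.

```python
def ai_cost_model(devs, cost_today, growth, months, loaded_salary):
    """Project team AI spend vs. payroll (hypothetical sketch).

    devs          : active devs using AI tools
    cost_today    : current monthly API spend per dev ($)
    growth        : month-over-month compounding rate (0.06 = +6%/mo)
    months        : projection horizon in months
    loaded_salary : fully loaded annual cost per dev ($)
    """
    # cost_at_month_n = devs * cost_today * (1 + growth)^n
    monthly = [devs * cost_today * (1 + growth) ** n for n in range(months + 1)]
    end_of_horizon = monthly[-1]
    cumulative = sum(monthly[1:])            # total spend over months 1..N
    annual_burn = end_of_horizon * 12        # annualized end-of-horizon burn
    payroll_monthly = devs * loaded_salary / 12
    return {
        "end_monthly": end_of_horizon,
        "cumulative": cumulative,
        "pct_of_payroll": end_of_horizon / payroll_monthly * 100,
        # devs whose loaded payroll equals what AI is consuming
        "layoff_equivalent": annual_burn / loaded_salary,
    }

# Example: 20 devs at $200/mo each, +6%/mo, 24-month horizon, $250k loaded
r = ai_cost_model(20, 200, 0.06, 24, 250_000)
# end_monthly ≈ $16,196; layoff_equivalent ≈ 0.78 devs
```

Note how modest inputs end the horizon at roughly 4x today's monthly bill; that multiplier is pure compounding, which is why the growth-rate input dominates everything else.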

Why compounding growth, not flat?

Per-token prices keep falling, but enterprise total spend keeps rising fast. Not because anyone budgeted it — because adoption keeps deepening. The same dev who used 10 turns/day in Q1 uses 30 in Q3 once they trust the tool. Then they start running background agents. Compounding growth is the empirically honest curve.
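A quick arithmetic check on where the +6%/mo default comes from, assuming the "+320% over 24 months" industry figure (i.e. spend ends at ~4.2x):

```python
# +320% total growth over 24 months means end spend is ~4.2x today's.
# The equivalent steady month-over-month compounding rate:
implied = 4.2 ** (1 / 24) - 1    # ≈ 0.062, i.e. ~6%/mo

# Running +6%/mo forward for 24 months lands close to that multiple:
multiple = 1.06 ** 24            # ≈ 4.05x, i.e. ~+305%
```

This is why the input panel labels +6%/mo "conservative": it only reproduces the industry-average curve, not the heavy-adopter tail.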

Why this isn't doom-mongering

The output isn't "fire everyone." It's "this is the gravitational pull on opex if you don't measure." Teams that calibrate per-task cost (with something like TokenPoints) catch divergence early and re-allocate before it shows up as a layoff PR. Teams that don't, find out from their CFO.

Important caveats. This is a planning model, not a forecast. It assumes (1) per-dev cost grows at a steady rate, (2) headcount stays flat, (3) per-token prices don't collapse 10x, (4) no productivity offsets reduce demand for new hires. Reality breaks all of these in different directions. Use this to start a conversation, not to make a budget commitment.

Where the defaults come from

  • Enterprise AI budget: $1.2M (2024) → $7M (2026), ~6x growth — AnalyticsWeek 2026 Inference Economics
  • Token prices fell 280x in 24 months while enterprise AI spend rose 320% — Oplexa, AI Inference Cost Crisis 2026
  • Inference = 85% of enterprise AI budget in 2026 — same source
  • Q1 2026: 78,557 tech layoffs; 47.9% AI-attributed — Tom's Hardware
  • Meta cut 8,000 jobs in April 2026 explicitly to fund $115–135B in AI capex — Axios, TheNextWeb
  • Big 4 (MSFT, GOOG, META, AMZN) AI capex 2026: ~$700B, +75% YoY — Fortune
  • Cursor revenue: $0 → $2B ARR in ~18 months; doubled $1B → $2B in 3 months — Panto / Cursor disclosures
  • GitHub Copilot: 90% of Fortune 100 using as of 2025 — Second Talent

Don't trust this number. Measure your own.

The default growth rate is an industry average. Your team will diverge. The whole point of TokenPoints is replacing models like this with your real per-task data.

Read the framework →