
Cloudflare Workers vs Vercel Functions in 2026: a real-world cost and latency breakdown

I run the same Next.js workload on Vercel and on a hand-rolled Workers stack. The most important difference is the pricing model, not the latency. Here are the numbers.

March 2026 · 14 min read

I run the same workload on both: a Next.js app with a few API routes, an LLM proxy, and a handful of cron jobs. Here is what I learned.

Latency

```chart::latency ```

The cold-start gap between Workers and Vercel Edge is real but small in practice (14ms vs 27ms p50). Both feel instant. The big gap is to Vercel's Lambda functions (245ms), which is the default runtime if your route uses any Node API the edge runtime does not support. If you forget to set `export const runtime = 'edge'`, you fall off that cliff[^2].
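For an App Router route, the opt-in is a one-line export. A minimal sketch; the route path and upstream URL are made up:

```ts
// app/api/health/route.ts — hypothetical route. Without the export below the
// handler runs on the default Node.js (Lambda) runtime and pays the ~245ms
// cold start; with it, the route deploys to the Edge runtime.
export const runtime = 'edge';

export async function GET(): Promise<Response> {
  // The Edge runtime only exposes Web-standard APIs (fetch, Request,
  // Response, crypto), not Node built-ins like fs or net.
  const upstream = await fetch('https://api.example.com/health');
  return new Response(await upstream.text(), { status: upstream.status });
}
```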

Cost

```chart::cost ```

This is where the choice gets made. Workers bills per request plus CPU-ms; Vercel Pro bills per invocation, bandwidth, and edge function execution[^1][^2]. At low scale they are roughly comparable. At 100 million requests a month, Workers works out roughly 8x cheaper.
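To make the curve concrete, here is a back-of-the-envelope model. The per-unit rates are placeholders from memory of each provider's published pricing, not authoritative numbers; swap in the current figures and your own allowances before trusting the output.

```ts
// Rough monthly cost model. All rates are illustrative assumptions, not
// official pricing; base platform fees and free allowances are ignored.
interface Usage {
  requestsPerMonth: number;
  avgCpuMsPerRequest: number;  // CPU time, the Workers billing dimension
  egressGbPerMonth: number;    // bandwidth, the Vercel billing dimension
}

function workersCost(u: Usage, perMReq = 0.30, perMCpuMs = 0.02): number {
  const requests = (u.requestsPerMonth / 1e6) * perMReq;
  const cpu = ((u.requestsPerMonth * u.avgCpuMsPerRequest) / 1e6) * perMCpuMs;
  return requests + cpu;
}

function vercelCost(u: Usage, perMInvocations = 0.60, perGbEgress = 0.15): number {
  const invocations = (u.requestsPerMonth / 1e6) * perMInvocations;
  const bandwidth = u.egressGbPerMonth * perGbEgress;
  return invocations + bandwidth;
}

// 100M requests/month, 5ms CPU each, ~1TB egress.
const usage: Usage = { requestsPerMonth: 100e6, avgCpuMsPerRequest: 5, egressGbPerMonth: 1000 };
console.log(`Workers ≈ $${workersCost(usage).toFixed(0)}, Vercel ≈ $${vercelCost(usage).toFixed(0)}`);
```

The shape matters more than the exact constants: the Workers bill scales with requests and CPU time, the Vercel bill with invocations and bandwidth, which is why high-volume, low-CPU workloads land so cheaply on the Workers curve.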

That sounds extreme until you realise Vercel's pricing includes the build minutes, the preview deployments, the analytics, the deploy infrastructure, the support. Workers is "we'll route a request to your code." You pay for the rest separately if you want it.

Feature parity

```table::f ```

Notable differences:

- R2 has zero egress fees, while Vercel Blob charges for egress. For media-heavy workloads this changes the maths (see the sketch below this list).
- D1 is SQLite at the edge with eventual consistency; Vercel Postgres is a managed primary. If you need transactions across regions, Vercel.
- Workers runs in 335+ POPs[^3]; Vercel runs functions in fewer regions but with smarter routing.
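The R2 point is easiest to see in code. A minimal sketch of an asset proxy, assuming a bucket binding named `ASSETS` declared in wrangler.toml (types from `@cloudflare/workers-types`):

```ts
// Worker that streams objects straight out of an R2 binding. With zero
// egress fees, serving a 50MB video costs the same request fee as a 1KB icon.
export interface Env {
  ASSETS: R2Bucket; // binding name is an assumption; configure it in wrangler.toml
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = decodeURIComponent(new URL(request.url).pathname.slice(1));
    const object = await env.ASSETS.get(key);
    if (!object) return new Response('Not found', { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers); // copies content-type, cache-control, etc.
    headers.set('etag', object.httpEtag);
    return new Response(object.body, { headers });
  },
};
```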

Vendor lock-in

This is underrated. Workers code uses the Workers Runtime API. If you write handlers in the `addEventListener('fetch')` style or use Cloudflare bindings, moving off Cloudflare means a rewrite. Vercel Functions are mostly Lambda-compatible: your handler is a Node function, and you can move it to AWS Lambda or fly.io with minimal work.
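A side-by-side sketch of what that looks like; the binding name and handler shapes are illustrative, and in practice the two handlers live in different projects:

```ts
// Workers-flavoured handler: module syntax plus a KV binding. Both the `env`
// bindings and the Workers Runtime types tie this file to Cloudflare.
export default {
  async fetch(request: Request, env: { KV_CACHE: KVNamespace }): Promise<Response> {
    const cached = await env.KV_CACHE.get('greeting');
    return new Response(cached ?? 'hello');
  },
};

// Portable Node-style handler: plain data in, plain data out. This shape moves
// to AWS Lambda, fly.io, or a thin Express wrapper with minimal changes.
export async function handler(event: { path: string }): Promise<{ statusCode: number; body: string }> {
  return { statusCode: 200, body: `hello from ${event.path}` };
}
```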

Where I land

For SarmaLink-AI's chat backend (low requests, high CPU per request) I use Vercel, because the dev experience and Postgres integration are better. For a hypothetical asset proxy or webhook router (high requests, low CPU per request) I would use Workers, because the pricing curve makes it 5-10x cheaper.

The decision is not "which is better." It is "which pricing curve fits your workload."
