Why we ship edge-first in 2026, and what we got wrong about it.
Three years ago we made a bet: every new product would be deployed at the edge by default. We’ve since shipped 60+ projects this way. Here’s what worked, what didn’t, and what we’d do differently.
The case for edge-first
The pitch was simple: latency is a feature, and the easiest way to take 200ms off a page is to not cross an ocean for it. Cloudflare Workers, Vercel Edge Functions, Deno Deploy — pick your flavor. We picked Workers.
For marketing surfaces and read-heavy product pages, this was an unambiguous win. P95 TTFB dropped to 40ms across MENA. Our SEO scores climbed without a single content change. Conversion rates ticked up by 8–14% on every site we migrated.
What we got wrong
The ergonomics of distributed state. We assumed the edge runtime would feel like a slightly stricter Node. It doesn't: isolates are short-lived and scattered across hundreds of locations, so anything that leans on process memory surviving between requests is a trap. Long-running connections, in-memory caches, and certain ORM patterns just don't translate. We rewrote our session handling three times before we landed on something honest.
What we’d do differently
Start with the data layer. The edge runtime is the easy part — the hard part is choosing a database that doesn’t fight you. D1, Hyperdrive, and Neon serverless are all credible answers in 2026. Pick one, commit, write the runbook.
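Committing to a data layer mostly means committing to a binding in your Worker config. As a rough illustration, here is what a wrangler.toml might look like with both a D1 database and a Hyperdrive connection to an existing Postgres (e.g. Neon); all names and IDs below are placeholders, not real values.

```toml
# Hypothetical wrangler.toml fragment -- names and IDs are placeholders.
name = "product-app"
main = "src/index.ts"
compatibility_date = "2026-01-01"

# D1: SQLite at the edge, exposed to the Worker as env.DB.
[[d1_databases]]
binding = "DB"
database_name = "product-app"
database_id = "<your-d1-database-id>"

# Hyperdrive: pooled access to an existing Postgres, exposed as
# env.HYPERDRIVE (the Worker reads env.HYPERDRIVE.connectionString).
[[hyperdrive]]
binding = "HYPERDRIVE"
id = "<your-hyperdrive-config-id>"
```

Whichever you pick, the runbook should cover the unglamorous parts: migrations, connection limits, and what "down" looks like from 200 locations at once.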
Want this for your own product?
We do edge migrations as a 2-week sprint. Written estimate, fixed price.