5 days to 1 minute
Getting code from “PR approved” to “running in production” took 5 days at one point.
Not because of testing. Not because of review complexity. Our deployment pipeline was a manual sequential process where each step waited on a human and each human waited on their own schedule. PRs got approved, sat in a non-existent queue, got forgotten, got rebased, conflicted with other PRs that also sat waiting. Eventually someone would batch-deploy a bunch of changes at once, introducing all the risk of big-bang releases with none of the benefits of planned releases.
The fix was a merge queue.
A merge queue takes approved PRs, orders them, runs CI on each one in sequence against the current tip of main, and merges automatically when it passes. No human in the loop between “approved” and “deployed.” Conflicts, rebases, ordering: all handled by the queue.
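To make the loop concrete, here's a minimal sketch in Python. The `PullRequest` shape and the `rebase_onto_main`, `run_ci`, and `merge` callables are hypothetical stand-ins for whatever the queue tooling provides, not our actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PullRequest:
    number: int
    approved: bool = True

def process_merge_queue(
    queue: List[PullRequest],
    rebase_onto_main: Callable[[PullRequest], bool],  # False on conflict
    run_ci: Callable[[PullRequest], bool],            # False on CI failure
    merge: Callable[[PullRequest], None],
) -> None:
    """Drain the queue in order: rebase, test against current main, merge."""
    for pr in queue:
        if not pr.approved:
            continue  # only approved PRs enter the queue
        if not rebase_onto_main(pr):
            # Conflicts with whatever merged ahead of it:
            # kick it back to the author instead of blocking the queue.
            print(f"PR #{pr.number}: rebase conflict, returned to author")
            continue
        if not run_ci(pr):
            # CI runs against the main it will actually merge into,
            # not the main it was reviewed against.
            print(f"PR #{pr.number}: CI failed after rebase, returned to author")
            continue
        merge(pr)  # no human between "approved" and "deployed"
```

The property that matters: each PR is validated against main as it will exist at merge time, not as it existed when it was reviewed.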
5 days to 1 minute.
But someone pointed out something uncomfortable: I was celebrating fixing a problem I’d created. We needed the merge queue because we’d never had a formal deployment process: no CI/CD discipline from the start, no agreed-upon merge cadence. A self-inflicted wound.
Still the right fix, though. During the transcription crisis we needed to ship emergency patches multiple times per day. Without the merge queue, each patch would have taken days of manual coordination. With it, a fix goes live in minutes.
A config management improvement came alongside this: standardizing how environment variables and service configs were structured saved 1-2 hours per deployment.
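The shape of that change, roughly: every service builds one typed config object at startup instead of reading environment variables ad hoc throughout the code. The class and variable names below are illustrative, not our real schema.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    """One typed config object per service, built from environment variables."""
    database_url: str
    queue_url: str
    log_level: str = "INFO"
    request_timeout_s: int = 30

    @classmethod
    def from_env(cls) -> "ServiceConfig":
        # Fail loudly at startup if a required variable is missing,
        # instead of failing halfway through a deployment.
        return cls(
            database_url=os.environ["DATABASE_URL"],
            queue_url=os.environ["QUEUE_URL"],
            log_level=os.environ.get("LOG_LEVEL", "INFO"),
            request_timeout_s=int(os.environ.get("REQUEST_TIMEOUT_S", "30")),
        )

if __name__ == "__main__":
    # Raises KeyError at boot if DATABASE_URL or QUEUE_URL is unset.
    print(ServiceConfig.from_env())
```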
DX improvements compound. The merge queue didn’t just save deployment time; it changed how the team thought about shipping. When deploying is fast and painless, you ship smaller changes. Smaller changes are easier to review, debug, and roll back. The whole development loop tightened.
Testing consolidation is next. Test coverage at 30%. We’ve been telling ourselves that’s fine given the crisis. It’s not. Every deployment without adequate test coverage is a dice roll. The merge queue just lets us roll faster.
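One way to wire that in, sketched below: the merge queue’s CI step grows a coverage gate, so the same automation that removed the human bottleneck also refuses to merge anything below the floor. This assumes pytest with the pytest-cov plugin; the 30% floor is just today’s number, meant to ratchet up as tests land.

```python
import subprocess
import sys

# Ratchet, not aspiration: start at today's 30% and raise it over time.
COVERAGE_FLOOR = 30

def coverage_gate() -> int:
    """Run the test suite as part of the merge-queue CI step and fail it
    if coverage drops below the floor. Assumes pytest + pytest-cov."""
    result = subprocess.run(
        [
            sys.executable, "-m", "pytest",
            "--cov",
            f"--cov-fail-under={COVERAGE_FLOOR}",
        ],
        check=False,
    )
    return result.returncode  # nonzero blocks the merge

if __name__ == "__main__":
    sys.exit(coverage_gate())
```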