How We Work

Zero Downtime Website Deployments

Whether we're building a static marketing site or a full-stack custom web application, every deployment is designed so users never experience an outage, a broken page, or a maintenance window. Here's exactly how we do it.

Most website downtime during a deploy isn't inevitable; it's the result of skipping the right architecture. The common culprits are non-atomic file copies that serve half-updated pages, schema migrations that lock tables mid-deploy, and stale cached assets pointing at files that no longer exist.

Fixing these isn't complicated. It requires the right deploy strategy from the start and a bit of discipline around how schema changes are sequenced.

For static marketing sites and Jamstack applications, zero downtime is straightforward with atomic deploys. The principle: the new version of the site is built in full and made live in a single, instantaneous pointer swap. Users either see the old version or the new one, never a half-deployed mix.

  1. Build into an isolated directory

    Every deploy builds output into a new versioned folder like /releases/v42/, completely separate from what's currently being served.

  2. Validate before cutover

    Automated checks run against the staged build: broken links, missing assets, size regression alerts. Nothing touches production until they pass.

  3. Atomic symlink swap

    The current/ symlink is atomically updated to point at the new release. The filesystem operation is instantaneous. No in-flight requests are interrupted.

  4. Instant rollback

    If something's wrong, rolling back is a single symlink change pointing back to the previous release. Recovery is measured in seconds, not minutes.
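The swap and rollback above can be sketched in a few lines. This is a minimal illustration, not our exact tooling; the paths and version names are assumptions:

```python
import os

def activate_release(releases_dir: str, version: str, current_link: str) -> None:
    """Atomically point `current_link` at releases_dir/version."""
    target = os.path.join(releases_dir, version)
    tmp = current_link + ".tmp"
    # Build the new symlink under a temporary name first...
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    # ...then rename it over the old link. rename() is atomic on POSIX,
    # so a request in flight sees either the old release or the new one,
    # never a missing or half-written link.
    os.replace(tmp, current_link)

# Cutover:  activate_release("/var/www/releases", "v42", "/var/www/current")
# Rollback: activate_release("/var/www/releases", "v41", "/var/www/current")
```

Rollback is the same operation pointed at the previous release, which is why recovery takes seconds.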

For full-stack custom web applications running Django, Node, or similar, we use blue/green deployments. Two identical environments (blue live, green staging) sit behind a load balancer or reverse proxy. Traffic flows to blue while green is updated and validated. The cutover is a single config change.

Parallel Environments

Blue serves all live traffic. Green is a full copy of production with the same infrastructure and config, updated with the new release and tested independently before any traffic touches it.

Health Check Gate

Before the nginx upstream is updated, the green environment must pass health checks: HTTP 200 on the health endpoint, DB connectivity, background workers running. If any check fails, the deploy stops automatically.
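The HTTP portion of that gate is simple to sketch. The endpoint URL below is an assumption for illustration; the DB and worker checks are omitted:

```python
import urllib.request

HEALTH_URL = "http://127.0.0.1:8001/healthz"  # assumed green endpoint

def green_is_healthy(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True only if the green environment answers HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, timeout, DNS failure: all count as unhealthy.
        return False
```

The deploy script calls this (alongside the DB and worker checks) and aborts the cutover on any failure, so a broken green build never receives traffic.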

Warm Cutover

Green is fully warmed (migrations run, caches primed, workers ready) before a single request is routed to it. The nginx upstream swap takes effect with zero dropped connections using graceful reload.

Keep Blue for 15 Minutes

Blue stays live and idle for 15 minutes post-cutover. If a production issue surfaces immediately after the swap, flipping back is a one-line nginx change and a reload. No rebuild required.
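A sketch of what the nginx side of this can look like; the ports and names are illustrative, not a prescribed layout:

```nginx
# Illustrative upstream: blue on :8000, green on :8001.
upstream app_backend {
    server 127.0.0.1:8000;   # blue (live); cutover points this at green
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
    }
}
```

Cutover and rollback are the same one-line change to the `server` directive followed by `nginx -s reload`, which drains existing connections gracefully instead of dropping them.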

Schema changes are the most common cause of downtime during application deploys. A naive ALTER TABLE on a large table can lock reads and writes for minutes. We sequence migrations so the running application is never broken by a schema change in progress.

  1. Expand: add the new column, nullable, no default

    New columns are added as nullable with no server-side default. In Postgres this is a metadata-only change: it takes milliseconds regardless of table size, with no table rewrite and only a momentary lock.

  2. Deploy the app: write to both old and new shape

    The new version of the application is deployed. It writes to both the old and new column simultaneously. Old code still runs fine against the old schema. No breakage.

  3. Backfill in batches

    Existing rows are updated in small batches with a short sleep between each, enough to populate the new column without starving the query queue or spiking I/O.

  4. Contract: drop what's no longer needed

    Once the new column is fully populated and the old one is no longer referenced in any live code, a follow-up migration removes the old column in a separate deploy.
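The expand and backfill steps can be sketched as follows. sqlite3 stands in for Postgres so the example is self-contained, and the table and column names are invented for illustration; the sequencing is what matters:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.executemany(
    "INSERT INTO users (full_name) VALUES (?)",
    [("Ada Lovelace",), ("Alan Turing",), ("Grace Hopper",)],
)

# Expand: new column, nullable, no default -- a metadata-only change.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill in small batches with a pause between each, so the migration
# never holds long locks or starves the live query queue.
BATCH_SIZE = 2
while True:
    rows = conn.execute(
        "SELECT id, full_name FROM users WHERE display_name IS NULL LIMIT ?",
        (BATCH_SIZE,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET display_name = ? WHERE id = ?",
        [(name, row_id) for row_id, name in rows],
    )
    conn.commit()
    time.sleep(0.01)  # production batches would sleep longer

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE display_name IS NULL"
).fetchone()[0]
print(remaining)  # 0 once the backfill is complete
```

Only after this reaches zero, and no live code reads the old column, does a separate contract deploy drop it.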

The same expand/contract pattern applies to renaming columns, splitting tables, changing data types, and adding indexes. Each step is safe to run against a live, fully loaded database.

A common source of post-deploy breakage: the app is updated but browsers or CDNs are still serving stale CSS, JS, or image assets. We solve this with content-addressed filenames, where every static asset filename includes a hash of its content.
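The naming scheme is easy to illustrate; the function name and the 8-character digest length below are illustrative choices:

```python
import hashlib
import os

def hashed_filename(name: str, content: bytes) -> str:
    """Return a content-addressed filename like app.3f7a9c2b.js."""
    digest = hashlib.sha256(content).hexdigest()[:8]
    root, ext = os.path.splitext(name)
    return f"{root}.{digest}{ext}"

# hashed_filename("app.js", b"console.log('v42');")
```

Because any change to a file's bytes changes its name, browsers and CDNs can cache forever: a new deploy references new filenames, and stale cached copies are simply never requested again.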

All of the above runs automatically on every merge to main. The deploy pipeline is the enforcement layer: no manual steps, no remembering to run migrations, no SSH sessions on the production server.

Whether you're building something new and want to do it right from day one, or you're already shipping with a process that causes downtime, we can fix it.

A 30-minute call is enough to understand your current setup and tell you exactly what needs to change.

Get in Touch