Case Study

Deployment Time Cut from 2 Hours to 5 Minutes

A software team was spending two hours on every production deployment: manual steps, a checklist nobody kept up to date, and enough risk that Friday deploys were avoided entirely to keep the weekend safe. We replaced the whole process with a pipeline that runs in five minutes on every merge.

The team was shipping a web application used by a few hundred daily active users. The product was in active development and deploys were happening two to three times a week. Each one took the same two hours and required a senior developer to be present to babysit the process.

The codebase had grown over a few years and the original deploy process had grown with it in the worst way: more steps added on top of old steps, institutional knowledge locked inside one person's head, and a shared server that nobody wanted to touch without triple-checking everything first.

We documented the existing deploy before touching anything. The full sequence:

  1. SSH into the server and pull the branch manually

    The developer would open a terminal, SSH into production, and run git pull. If anything was cached or stale from a previous failed deploy, it had to be cleaned up by hand first.

  2. Install dependencies and hope nothing broke

    Dependencies were installed by running the package install command directly on the production server. If a new dependency had a build issue on that specific OS version, the deploy stopped and the debugging started.

  3. Run database migrations by hand

    No automation, no rollback plan. Migrations were run manually with the application still live, and if something locked a table or failed mid-run, the team was dealing with it in real time.

  4. Restart the application server and watch the logs

    The developer would restart the process manager, tail the logs for five minutes, and watch for errors. If the app came up clean, the deploy was called done. If not, they were rolling back manually.

  5. Manual smoke tests across the application

    The developer would click through critical paths in the browser to make sure nothing was obviously broken. With no automated coverage, the step took as long as the person doing it judged was enough.

Two hours on a good day. Longer when something went wrong. And something went wrong often enough that the team had started batching up changes to reduce how frequently they had to do it.

The replacement was a GitHub Actions pipeline triggered on every merge to main. No manual steps, no SSH, no one person who had to be available. The whole sequence runs in the same amount of time the team used to spend just pulling the branch and checking for errors.
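A minimal sketch of what a workflow like this can look like. The job names, the npm commands, the registry host, and the `deploy.sh` script are illustrative stand-ins, not the team's actual configuration:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test   # fail here and nothing reaches production

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
      # migrations, health check, and cutover happen inside this script
      - run: ./scripts/deploy.sh ${{ github.sha }}
```

The `needs: test` line is what enforces the ordering below: the deploy job never starts unless the test job passed.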

Automated Test Run

Every merge triggers the full test suite before anything touches production. If tests fail, the pipeline exits and the team gets a notification. Nothing broken ever reaches the server.

Docker Build and Push

The application is built into a Docker image in CI with all dependencies baked in. No more installing packages directly on the server. The image that gets tested is exactly the image that gets deployed.

Automated Migrations with Rollback

Database migrations run as part of the pipeline before the new container goes live. If a migration fails, the deploy stops before any traffic hits the new code. The old container keeps serving.
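The gating logic is simple enough to sketch as a small wrapper: run the migration command, and let a non-zero exit fail the pipeline step before cutover. The migration command shown is a placeholder; substitute whatever your framework uses.

```python
import subprocess
import sys


def run_migrations(cmd: list[str]) -> bool:
    """Run the migration command and report whether it exited cleanly.

    A False result aborts the pipeline step, so the new container never
    receives traffic and the old one keeps serving.
    """
    return subprocess.run(cmd).returncode == 0


if __name__ == "__main__":
    # Placeholder command; in a real pipeline this would be something
    # like ["alembic", "upgrade", "head"] or your framework's equivalent.
    demo = [sys.executable, "-c", "import sys; sys.exit(0)"]
    if not run_migrations(demo):
        sys.exit(1)  # non-zero exit stops the deploy before cutover
```

Because the wrapper exits non-zero on failure, CI treats a failed migration exactly like a failed test: the pipeline stops and nothing ships.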

Health Check Before Cutover

The new container starts up and must pass a health check before the load balancer switches traffic. Broken deploys are caught before users see them, not after.
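The cutover gate amounts to a polling loop: keep probing the new container until it answers healthy or a timeout expires. A sketch, assuming an HTTP health endpoint (the `/health` URL and the timings are illustrative):

```python
import time
import urllib.request
from typing import Callable


def http_ok(url: str) -> bool:
    """Return True if the endpoint answers 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def wait_until_healthy(check: Callable[[], bool],
                       timeout: float = 60.0,
                       interval: float = 2.0) -> bool:
    """Poll `check` until it passes or `timeout` seconds elapse.

    Only a True result lets the load balancer switch traffic to the new
    container; on False the old container keeps serving.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False


# Usage in the pipeline (URL is hypothetical):
# ok = wait_until_healthy(lambda: http_ok("http://localhost:8080/health"))
```

Taking the check as a callable keeps the loop testable and lets the same gate wrap a TCP probe or a database ping instead of HTTP.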

Automated Smoke Tests

A small suite of end-to-end checks runs against the live environment after each deploy. Critical paths are verified automatically. No developer sitting there clicking through the app.
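At this scale a smoke suite can be as small as a list of critical paths checked for a 200 response. A sketch; the paths and base URL are hypothetical placeholders for whatever your application's critical flows are:

```python
import urllib.request
from typing import Callable

# Hypothetical critical paths; replace with your application's own.
CRITICAL_PATHS = ["/", "/login", "/api/health", "/checkout"]


def fetch_ok(url: str) -> bool:
    """True if the URL answers 200 within 10 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False


def smoke_test(base_url: str, paths: list[str],
               fetch: Callable[[str], bool] = fetch_ok) -> list[str]:
    """Return the paths that failed; an empty list means the deploy is good."""
    return [p for p in paths if not fetch(base_url + p)]
```

The pipeline step fails if the returned list is non-empty, which replaces the developer clicking through the app with a thirty-second automated pass.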

Slack Notification on Completion

The team gets a Slack message when a deploy completes, including the git SHA, the branch, who triggered it, and how long it took. Everyone knows when something ships without asking.
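Slack's incoming webhooks accept a plain JSON payload over HTTPS, so the notification step needs very little code. A sketch using only the standard library; the webhook URL and the field values are placeholders:

```python
import json
import urllib.request


def build_deploy_message(sha: str, branch: str,
                         actor: str, duration_s: float) -> dict:
    """Format the completion message: SHA, branch, who triggered it, duration."""
    return {
        "text": (f"Deployed {branch} @ {sha[:7]} "
                 f"(triggered by {actor}) in {duration_s:.0f}s")
    }


def notify_slack(webhook_url: str, payload: dict) -> bool:
    """POST the payload to a Slack incoming webhook; True on a 200 response."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200
```

In GitHub Actions, the SHA, branch, and actor are available as `github.sha`, `github.ref_name`, and `github.actor` and can be passed in as environment variables.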

The existing infrastructure stayed in place. No cloud migration required. The same server, the same database, the same domain. The pipeline was layered on top of what was already there.

Deploys went from two hours to five minutes. More importantly, the team stopped treating deployments as events that required planning and coordination. They became routine, low-stakes, and automatic.

The team described the shift as getting their Fridays back. They used to avoid shipping on Fridays because a bad deploy meant a ruined weekend. Now they don't think about it.

Most teams with a painful deploy process know it's painful. The problem is usually not that the fix is complicated. It's that fixing the deploy process never makes it onto the sprint because it's not a feature.

A CI/CD pipeline for a typical web application takes a few days to build correctly. The investment pays back in the first week of not doing deploys manually.

Tell us what your current process looks like and we'll tell you exactly what it would take to automate it. Most teams are further along than they think.

One conversation is usually enough to scope the work.

Get in Touch