Software & AI · February 10, 2026

GitHub Actions: The CI/CD Pipeline That Ships Every Project We Build

Every push to an i3k repository triggers an automatic pipeline: lint, type-check, test, build, deploy. Nothing goes to production without passing through GitHub Actions. Here's how we structured our workflows to manage Vercel frontend, Docker backend, and on-premise deployments.


The Standard Pipeline: From Push to Deploy

Every i3k repository has a GitHub Actions workflow that triggers on push to main and on every pull request. The pipeline is divided into parallel jobs to maximize speed: lint (ESLint + Prettier check) and type-check (tsc --noEmit) run simultaneously. If both pass, the test job starts (Vitest with coverage report). Only if tests exceed the 85% coverage threshold does the build job execute.

For React frontends like i3k.eu, the build produces static assets that Vercel deploys automatically. We don't need an explicit deploy job — Vercel is natively integrated with GitHub and detects every push. For Python backends (RAG Enterprise, CRM81 cloud), the build job creates a Docker image and pushes it to GitHub Container Registry.

The complete frontend pipeline takes about 2 minutes; the backend, including the multi-stage Docker build, about 5. We've optimized times with aggressive caching: npm dependencies are cached with actions/cache, and Docker layer caching reduces rebuilds by 70%. Before caching, the backend build took 14 minutes.
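A condensed sketch of what a workflow with this job graph looks like — job names, Node version, and commands here are illustrative, not our actual files:

```yaml
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm            # caches ~/.npm between runs
      - run: npm ci
      - run: npx eslint . && npx prettier --check .

  type-check:                   # runs in parallel with lint
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npx tsc --noEmit

  test:
    needs: [lint, type-check]   # starts only if both parallel jobs pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npx vitest run --coverage   # coverage threshold enforced in the Vitest config

  build:
    needs: test                 # gated on the test job (and its coverage threshold)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: npm
      - run: npm ci
      - run: npm run build
```

The `needs:` keys are what produce the diamond shape: two independent jobs fan out, then test and build fan back in sequentially.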

Staging, Production, and Secrets Management

We have separate workflows for staging and production with manual gates. Push to main auto-deploys to staging. Production deploy requires manual approval via GitHub Environments — a senior team member must click "Approve" in the Actions UI. This has saved us at least three times from rushed Friday-evening deploys.

Secrets are managed entirely through GitHub Secrets at repository and environment level. LLM provider API keys (Anthropic, OpenAI), database credentials, SSH tokens for on-premise servers — everything lives in secrets, never in code. Each environment (staging, production, on-premise) has its own isolated set of secrets: a staging workflow cannot access production secrets.

For on-premise deployments, the workflow is more complex. After the Docker image build, a dedicated job connects via SSH to the client's server (using an Ed25519 key stored in secrets), pushes the image, and runs a rolling update with docker compose. The entire process is automated but requires manual approval and a maintenance window agreed with the client.
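A minimal sketch of the gated on-premise deploy job, assuming an environment named `production` with required reviewers configured, and a hypothetical secret name and host (`DEPLOY_SSH_KEY`, `client-host`):

```yaml
  deploy-onprem:
    needs: build
    runs-on: ubuntu-latest
    environment: production      # pauses here until a reviewer approves in the Actions UI
    steps:
      - name: Rolling update over SSH
        env:
          SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}   # Ed25519 key, scoped to this environment
        run: |
          install -m 600 /dev/null id_deploy
          printf '%s\n' "$SSH_KEY" > id_deploy
          ssh -i id_deploy -o StrictHostKeyChecking=accept-new deploy@client-host \
            "docker compose pull app && docker compose up -d app"
```

The `environment:` line is what wires the job to GitHub Environments: the protection rules (required reviewers, wait timers) live in the repository settings, not in the workflow file, and the environment-scoped secrets become available only after approval.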

Docker Image Builds and Vercel Integration

Our Python backends use multi-stage Docker builds to produce lightweight images. The first stage installs dependencies with pip into a virtualenv; the second copies only the virtualenv and the source code onto a python:3.12-slim image. The final RAG Enterprise image weighs 1.8 GB (including embedding models), while CRM81 is leaner at 650 MB.

GitHub Actions builds images with docker/build-push-action and pushes them to ghcr.io (GitHub Container Registry). We use Docker layer caching with mode=max, which saves all intermediate layers. When only source code changes (not dependencies), the rebuild takes 45 seconds instead of 8 minutes because the pip install layer is reused from cache.

For the frontend, the relationship with Vercel is more nuanced. Vercel has its own build system, but we still run lint, type-check, and tests in GitHub Actions before Vercel deploys. If the GitHub pipeline fails, the commit is marked with a red check and Vercel doesn't auto-deploy to production (we've configured Vercel to require passing checks). This double validation layer gives us confidence that no broken code reaches production.
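The two-stage pattern described above can be sketched as follows — paths, the requirements file, and the entrypoint module are illustrative placeholders, not our actual Dockerfiles:

```dockerfile
# --- Stage 1: install dependencies into a virtualenv ---
FROM python:3.12 AS builder
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# Copying requirements.txt alone keeps this layer cacheable:
# it is invalidated only when dependencies change, not on every code edit.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# --- Stage 2: copy only the virtualenv and source onto a slim base ---
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY src/ ./src/
CMD ["python", "-m", "src.main"]
```

The ordering is what makes the 45-second rebuild possible: a change under `src/` only invalidates the final `COPY`, while the expensive pip install layer is served from cache.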

Interested?

Contact us to receive a personalized quote.


Securvita S.r.l. — i3k.eu