Production-Ready Images
LEVEL 0
The Problem
Your Dockerfile works great in development:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]
But in production, this image:
- Is 1.2GB (slow to pull, wastes bandwidth)
- Has development dependencies (200MB of dev tools)
- Runs as root (security risk)
- Has no health check
- Rebuilds from scratch every time (slow CI/CD)
Development images ≠ Production images.
LEVEL 1
The Concept — The Airplane vs. Prototype
Imagine building an airplane.
Prototype: Rough, functional, has extra tools and testing equipment still attached, not optimized, carries unnecessary weight.
Production airplane: Streamlined, only essential components, optimized for performance, every gram matters, safety systems in place, maintenance logs.
Production images are like production airplanes: lean, secure, optimized, observable.
LEVEL 2
The Mechanics — Multi-Stage Builds
Development Dockerfile (bad for production):
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install # Includes devDependencies
COPY . .
RUN npm run build
CMD ["node", "dist/server.js"]
Result: 1.2GB image with dev dependencies.
Production Dockerfile (multi-stage):
# Stage 1: Build
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci # Clean install, uses package-lock.json
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
# Create non-root user
RUN addgroup -g 1000 appuser && \
    adduser -D -u 1000 -G appuser appuser
# Copy only production dependencies
COPY package*.json ./
# --omit=dev skips devDependencies (--only=production is deprecated)
RUN npm ci --omit=dev && \
    npm cache clean --force
# Copy built app from builder stage
COPY --from=builder --chown=appuser:appuser /app/dist ./dist
# Switch to non-root user
USER appuser
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"
# Expose port (documentation)
EXPOSE 3000
CMD ["node", "dist/server.js"]
Result: 150MB image, production-only dependencies, non-root user, health check.
LEVEL 3
Optimization Techniques
1. Use smaller base images:
# ❌ Large
FROM ubuntu:22.04 # ~77MB
# ✅ Smaller
FROM alpine:3.18 # ~5MB
# ✅ Minimal attack surface
FROM gcr.io/distroless/nodejs18-debian11 # no shell or package manager; larger than alpine because it bundles the Node runtime (the ~2MB figure applies only to distroless/static)
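A sketch of pairing distroless with a multi-stage build, assuming the same dist/server.js layout as the Level 2 example. Distroless images ship no shell or package manager, and the Node variant already uses node as its entrypoint, so CMD supplies only the script (exec form is mandatory, since there is no shell to parse a string form):

```dockerfile
# Stage 1: build with the full toolchain
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: distroless runtime (no shell, no package manager)
FROM gcr.io/distroless/nodejs18-debian11
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
# The entrypoint is already "node", so CMD is just the script
CMD ["dist/server.js"]
```

Note that the shell-based HEALTHCHECK patterns elsewhere in this guide will not work here; without a shell, the check itself must be an exec-form command.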
2. Layer caching:
Put things that change frequently later:
# ❌ Bad (rebuilds everything when code changes)
FROM node:18-alpine
COPY . .
RUN npm install
# ✅ Good (caches dependencies when only code changes)
FROM node:18-alpine
COPY package*.json ./
RUN npm ci
COPY . . # Code changes don't invalidate dependency layer
3. Minimize layers:
Combine RUN commands:
# ❌ Multiple layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get clean
# ✅ Single layer
RUN apt-get update && \
    apt-get install -y \
        curl \
        vim && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
4. Remove unnecessary files:
RUN npm install && \
    npm cache clean --force && \
    rm -rf /tmp/* /root/.npm
5. Use .dockerignore:
# Dependencies are installed fresh inside the image
node_modules
npm-debug.log
# VCS history and secrets never belong in the build context
.git
.env
.vscode
*.md
tests/
coverage/
LEVEL 4
Configuration Best Practices
Environment-specific configuration:
FROM node:18-alpine
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci --omit=dev
# Copy app
COPY . .
# Build args for build-time configuration
ARG BUILD_ENV=production
ARG APP_VERSION=unknown
# Environment variables for runtime configuration
ENV NODE_ENV=production \
    PORT=3000 \
    LOG_LEVEL=info
# Labels for metadata
LABEL maintainer="team@company.com" \
      version="${APP_VERSION}" \
      description="My production app"
# Non-root user
RUN addgroup -g 1000 appuser && adduser -D -u 1000 -G appuser appuser
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["node", "server.js"]
Build:
docker build \
  --build-arg APP_VERSION=$(git rev-parse --short HEAD) \
  --build-arg BUILD_ENV=production \
  -t myapp:$(git rev-parse --short HEAD) \
  -t myapp:latest \
  .
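ENV values like PORT and LOG_LEVEL only matter if the app reads them. A sketch of the consuming side, using the same variable names and defaults as the Dockerfile above:

```javascript
// Read runtime configuration from the environment, falling back to the
// same defaults the Dockerfile's ENV line sets.
const config = {
  env: process.env.NODE_ENV || 'development',
  port: parseInt(process.env.PORT || '3000', 10),
  logLevel: process.env.LOG_LEVEL || 'info',
};

console.log('starting with config:', config);
```

Reading configuration in one place like this keeps the image identical across environments; only the injected variables change.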
LEVEL 5
Image Tagging Strategy
Bad tagging:
# Always using :latest
docker build -t myapp:latest .
docker push myapp:latest
Problems:
- Can't roll back to a previous version
- No audit trail
- "latest" might not actually be the latest
Good tagging:
# Semantic versioning + git SHA
VERSION=1.2.3
GIT_SHA=$(git rev-parse --short HEAD)
docker build \
  -t myapp:${VERSION} \
  -t myapp:${VERSION}-${GIT_SHA} \
  -t myapp:${GIT_SHA} \
  -t myapp:latest \
  .
docker push myapp:${VERSION}
docker push myapp:${VERSION}-${GIT_SHA}
docker push myapp:${GIT_SHA}
docker push myapp:latest
Now you have:
- myapp:1.2.3 — semantic version
- myapp:1.2.3-abc123 — version + commit
- myapp:abc123 — just the commit (for CI/CD)
- myapp:latest — convenience pointer
You can roll back to any version.
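The tag set above can also be derived in a release script instead of being typed by hand. A sketch (the version and SHA are stubbed here; a CI pipeline would read them from package.json and git rev-parse):

```javascript
// Derive the full tag set for one release. Values are stubbed for
// illustration; in CI they come from package.json and git.
function tagSet(image, version, sha) {
  return [
    `${image}:${version}`,
    `${image}:${version}-${sha}`,
    `${image}:${sha}`,
    `${image}:latest`,
  ];
}

console.log(tagSet('myapp', '1.2.3', 'abc123'));
```

Generating tags in one function guarantees every push uses the same scheme, which is what makes rollbacks and audits predictable.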