Docker & Containerization

📖 Concept

Docker packages your Node.js application and all its dependencies into a portable container — ensuring it runs identically everywhere (development, staging, production).

Why Docker for Node.js?

  • Consistency — "Works on my machine" problem eliminated
  • Isolation — Each app gets its own environment
  • Scalability — Containers start in seconds, scale horizontally
  • CI/CD — Build once, deploy everywhere

Dockerfile best practices for Node.js:

  1. Use specific Node.js version tags (node:20-alpine, not node:latest)
  2. Use multi-stage builds to reduce image size
  3. Copy package*.json first, then npm ci, then copy source (layer caching)
  4. Run as non-root user (USER node)
  5. Use .dockerignore to exclude node_modules, .git, etc.
  6. Use npm ci instead of npm install for deterministic installs
  7. Set NODE_ENV=production to skip dev dependencies
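The layer-caching order from practices 3, 6, and 7 can be sketched in a minimal single-stage Dockerfile. This is a sketch, not a complete production image; `src/index.js` is a placeholder entry point:

```dockerfile
# Minimal cache-friendly Dockerfile (sketch; src/index.js is a placeholder)
FROM node:20-alpine
WORKDIR /app

# Copy only the manifests first: this layer (and the npm ci layer below)
# is reused from cache until package.json or package-lock.json changes
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Source changes invalidate only the layers from here down
COPY . .

ENV NODE_ENV=production
USER node
CMD ["node", "src/index.js"]
```

Because `COPY . .` comes after `npm ci`, editing source code triggers only a fast copy on rebuild; dependencies are reinstalled only when the lockfile changes.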

Alpine vs. Debian images:

| Image | Size | Use Case |
| --- | --- | --- |
| node:20-alpine | ~180MB | Production (smallest) |
| node:20-slim | ~240MB | When Alpine causes issues |
| node:20 | ~1GB | Development, debugging |

🏠 Real-world analogy: Docker is like a shipping container. Your application (cargo) is packed with everything it needs (dependencies). The container fits on any ship (server) regardless of what other containers are onboard. The container specification (Dockerfile) ensures identical packing every time.

💻 Code Example

// Docker Configuration for Node.js

// === Dockerfile (production-ready, multi-stage) ===
const dockerfile = `
# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && \
    cp -R node_modules /tmp/prod_modules && \
    npm ci  # Install all deps (incl. devDependencies) for the build stage

# Stage 2: Build (if you have a build step)
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build  # TypeScript compile, etc.

# Stage 3: Production (minimal image)
FROM node:20-alpine AS runner
WORKDIR /app

# Security: run as non-root
RUN addgroup --system --gid 1001 nodejs && \
    adduser --system --uid 1001 appuser

# Copy only production dependencies
COPY --from=deps /tmp/prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./

# Set environment
ENV NODE_ENV=production
ENV PORT=3000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
    CMD wget -q --spider http://localhost:3000/health || exit 1

USER appuser
EXPOSE 3000

CMD ["node", "dist/server.js"]
`;

// === .dockerignore ===
const dockerignore = `
node_modules
npm-debug.log
.git
.gitignore
.env
.env.*
coverage
tests
docs
*.md
.vscode
.idea
Dockerfile
docker-compose*.yml
`;

// === docker-compose.yml ===
const dockerCompose = `
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  pgdata:
`;

module.exports = { dockerfile, dockerignore, dockerCompose };

🏋️ Practice Exercise

Exercises:

  1. Write a production Dockerfile with multi-stage build — compare image sizes: full vs alpine vs multi-stage
  2. Create a docker-compose.yml with Node.js API, PostgreSQL, and Redis
  3. Optimize Docker layer caching — ensure npm ci layer is cached when only source code changes
  4. Implement Docker health checks that test the /health endpoint
  5. Set up volume mounts for local development with hot-reloading inside Docker
  6. Compare docker build times with and without .dockerignore optimization
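For the hot-reloading setup in exercise 5, one common pattern is a docker-compose.override.yml that bind-mounts the source tree and swaps the command for a file watcher. A sketch, assuming package.json has a "dev" script such as "nodemon src/index.js" (both names are assumptions):

```yaml
# docker-compose.override.yml (development only; sketch)
services:
  api:
    command: npm run dev        # assumed watcher script, e.g. nodemon
    environment:
      - NODE_ENV=development
    volumes:
      - .:/app                  # bind-mount source for hot reloading
      - /app/node_modules       # keep container-installed modules, not host's
```

The anonymous `/app/node_modules` volume shadows the bind mount so the container uses its own Linux-built dependencies rather than whatever is on the host.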

⚠️ Common Mistakes

  • Using node:latest — version can change unexpectedly; always pin a specific version like node:20-alpine

  • Copying node_modules into the image instead of running npm ci — host modules may not match the container's OS or architecture

  • Not using .dockerignore — without it, node_modules, .git, and test files are copied, making the image huge and builds slow

  • Running containers as root — if the app is compromised, the attacker has root access; use USER node or create a dedicated user

  • Not using multi-stage builds — the final image includes build tools, devDependencies, and source code; multi-stage reduces image size by 50-80%
