Performance Profiling & Optimization

📖 Concept

Performance optimization starts with measurement, not guesswork. Node.js provides built-in tools and third-party solutions for identifying bottlenecks.

Key performance metrics:

| Metric | Target | Tool |
|---|---|---|
| Response time (p95) | < 200ms | APM, custom middleware |
| Throughput (req/s) | Depends on workload | Load testing (autocannon) |
| Memory usage | < 500MB RSS | `process.memoryUsage()` |
| Event loop lag | < 10ms | perf_hooks, Clinic.js |
| CPU usage | < 70% | `os.cpus()`, top/htop |

Profiling tools:

  1. node --prof — V8 CPU profiler (generates tick files)
  2. node --inspect — Chrome DevTools profiler (Memory, CPU, Performance)
  3. Clinic.js — Automated performance diagnosis (Doctor, Flame, Bubbleprof)
  4. autocannon — HTTP load testing (like Apache Bench but better)
  5. perf_hooks — Precise performance measurement API

Common performance bottlenecks:

  1. Synchronous operations — readFileSync, JSON.parse on large data
  2. Memory leaks — global arrays, event listeners, closures
  3. N+1 queries — fetching related data in loops
  4. Missing indexes — database queries doing full table scans
  5. Large payloads — sending unnecessary data in API responses
  6. No caching — re-computing or re-fetching unchanged data
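
To make the N+1 pattern concrete, here is a before/after sketch, assuming a hypothetical async `db.query(sql, params)` helper (not a real library API):

```javascript
// ❌ N+1: one author query per post
async function getPostsWithAuthorsSlow(db, posts) {
  for (const post of posts) {
    post.author = await db.query("SELECT * FROM users WHERE id = ?", [post.authorId]);
  }
  return posts;
}

// ✅ Batched: one query for all authors, then join in memory
async function getPostsWithAuthorsFast(db, posts) {
  const ids = [...new Set(posts.map((p) => p.authorId))];
  const authors = await db.query("SELECT * FROM users WHERE id IN (?)", [ids]);
  const byId = new Map(authors.map((a) => [a.id, a]));
  return posts.map((p) => ({ ...p, author: byId.get(p.authorId) }));
}
```

With 100 posts, the slow version issues 101 round-trips to the database; the fast version issues 2.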

🏠 Real-world analogy: Performance profiling is like a doctor's checkup. You measure vital signs (metrics), run diagnostics (profiling), identify the problem (bottleneck), and prescribe treatment (optimization). Guessing without measuring is like taking random medicine.

💻 Code Example

```js
// Performance Profiling & Optimization

const { performance } = require("perf_hooks");

// 1. Measure function performance
function measureExecution(label, fn) {
  const start = performance.now();
  const result = fn();
  const duration = performance.now() - start;
  console.log(`[${label}] ${duration.toFixed(2)}ms`);
  return result;
}

// Async version
async function measureAsync(label, fn) {
  const start = performance.now();
  const result = await fn();
  const duration = performance.now() - start;
  console.log(`[${label}] ${duration.toFixed(2)}ms`);
  return result;
}

// 2. Memory monitoring
function logMemory(label = "") {
  const usage = process.memoryUsage();
  console.log(`Memory ${label}:`, {
    rss: `${(usage.rss / 1024 / 1024).toFixed(1)}MB`, // total allocated for the process
    heapUsed: `${(usage.heapUsed / 1024 / 1024).toFixed(1)}MB`, // live JS objects
    heapTotal: `${(usage.heapTotal / 1024 / 1024).toFixed(1)}MB`,
    external: `${(usage.external / 1024 / 1024).toFixed(1)}MB`, // C++ objects bound to JS
  });
}

// 3. Event loop monitoring
function monitorEventLoop() {
  let lastCheck = Date.now();

  const timer = setInterval(() => {
    const now = Date.now();
    const lag = now - lastCheck - 1000; // interval should fire every 1000ms
    lastCheck = now;

    if (lag > 50) {
      console.warn(`Event loop lag: ${lag}ms`);
    }
  }, 1000);

  return timer; // caller can clearInterval(timer) to stop monitoring
}

// 4. Response time middleware (Express-style)
function responseTimeMiddleware(req, res, next) {
  const start = process.hrtime.bigint();

  // Headers cannot be modified once they are sent, so hook writeHead to set
  // X-Response-Time just before the headers go out.
  const writeHead = res.writeHead;
  res.writeHead = function (...args) {
    const duration = Number(process.hrtime.bigint() - start) / 1e6; // ns -> ms
    res.setHeader("X-Response-Time", `${duration.toFixed(2)}ms`);
    return writeHead.apply(res, args);
  };

  // "finish" fires after the response has been sent — safe for logging only
  res.on("finish", () => {
    const duration = Number(process.hrtime.bigint() - start) / 1e6;
    if (duration > 1000) {
      console.warn(`Slow response: ${req.method} ${req.originalUrl} - ${duration.toFixed(0)}ms`);
    }
  });

  next();
}

// 5. Optimization patterns

// ❌ SLOW: compiling a new RegExp on every call
function validateEmailSlow(email) {
  return new RegExp("^[^\\s@]+@[^\\s@]+\\.[^\\s@]+$").test(email);
}

// ✅ FAST: compile the regex once
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
function validateEmailFast(email) {
  return EMAIL_REGEX.test(email);
}

// ❌ SLOW: string concatenation in loops
function buildStringSlow(items) {
  let result = "";
  for (const item of items) {
    result += `Item: ${item}\n`; // allocates a new string each iteration
  }
  return result;
}

// ✅ FAST: array join
function buildStringFast(items) {
  return items.map((item) => `Item: ${item}`).join("\n");
}

// ❌ SLOW: JSON.parse/stringify for deep cloning
function deepCloneSlow(obj) {
  return JSON.parse(JSON.stringify(obj)); // expensive, and drops Dates, Maps, etc.
}

// ✅ FAST: structuredClone (Node.js 17+)
function deepCloneFast(obj) {
  return structuredClone(obj);
}

// 6. Load testing with autocannon
// npm install -g autocannon
// autocannon -c 100 -d 10 http://localhost:3000/api/users
//   -c 100: 100 concurrent connections
//   -d 10: run for 10 seconds

module.exports = {
  measureExecution,
  measureAsync,
  logMemory,
  monitorEventLoop,
  responseTimeMiddleware,
};
```
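
For the "no caching" and unbounded-accumulation pitfalls, here is a minimal sketch of an in-memory cache with a TTL and a size cap (the `createCache` name is illustrative, not part of the lesson code):

```javascript
// A tiny TTL cache with a hard size limit. Map preserves insertion
// order, so the first key is always the oldest entry.
function createCache({ ttlMs = 60_000, maxEntries = 1000 } = {}) {
  const store = new Map();
  return {
    get(key) {
      const hit = store.get(key);
      if (!hit) return undefined;
      if (Date.now() > hit.expires) {
        store.delete(key); // lazy expiry on read
        return undefined;
      }
      return hit.value;
    },
    set(key, value) {
      // Evict the oldest entry when full so memory stays bounded
      if (store.size >= maxEntries) {
        store.delete(store.keys().next().value);
      }
      store.set(key, { value, expires: Date.now() + ttlMs });
    },
  };
}

// Usage: wrap an expensive computation
const cache = createCache({ ttlMs: 5000, maxEntries: 100 });
function expensiveSquare(n) {
  const cached = cache.get(n);
  if (cached !== undefined) return cached;
  const result = n * n; // stand-in for real work
  cache.set(n, result);
  return result;
}
```

The size cap is what separates a cache from a memory leak: without it, this is exactly the "global map that grows forever" pattern called out under Common Mistakes.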

🏋️ Practice Exercise

Exercises:

  1. Add response time headers and logging to an Express API — alert on responses > 500ms
  2. Profile memory usage during a load test — identify and fix any memory leaks
  3. Use Clinic.js Doctor to diagnose performance issues in a sample application
  4. Run autocannon against your API and optimize until you achieve 1000+ req/s
  5. Compare JSON.parse vs streaming JSON parsing for a 100MB file
  6. Implement event loop lag monitoring and alerting

⚠️ Common Mistakes

  • Optimizing without measuring first — always profile before optimizing; premature optimization wastes time on non-bottlenecks

  • Using synchronous operations in request handlers — readFileSync, crypto.pbkdf2Sync block the entire event loop; use async versions

  • Accumulating data in memory (arrays, maps, caches) without limits — set maximum sizes and eviction policies

  • Not using database connection pooling — creating a new connection per request is extremely expensive; reuse connections

  • Sending entire database records in API responses — select only needed fields; reduce payload size
