Complete Governor Limits Reference & Strategies


📖 Concept

While we covered basic governor limits in Phase 3, this section provides the complete reference with advanced optimization strategies that senior developers and architects need.

Complete Governor Limits Table:

Limit                         Synchronous    Asynchronous
────────────────────────────  ────────────   ────────────
SOQL queries                      100            200
SOQL rows retrieved            50,000         50,000
DML statements                   150            150
DML rows                       10,000         10,000
Heap size                        6 MB          12 MB
CPU time                     10,000ms       60,000ms
Callouts                        100            100
Callout timeout (single)        120s           120s
Callout timeout (total)         120s           120s
Future invocations               50             0 (not allowed from async)
Queueable invocations            50             1 (one chained job)
Email invocations                10             10
SOSL searches                    20             20
Event publishing               150            150
QueryLocator rows               N/A         50,000,000

Platform-level limits (per 24 hours):

API calls          — Based on edition (Enterprise: 100,000/day + 1,000/user)
Batch Apex         — 250,000 execute() invocations per 24 hours
Scheduled Apex     — 100 scheduled jobs
Platform Events    — Based on entitlement (default: 250K/hour)
Storage            — Data storage + File storage (per edition)
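
These 24-hour allocations can be inspected at runtime through the built-in System.OrgLimits class. A minimal sketch (the key names shown, such as 'DailyApiRequests', are standard, but the exact set of keys available varies by org and edition):

// Sketch: reading org-wide (24-hour) limits at runtime via System.OrgLimits.
Map<String, System.OrgLimit> orgLimits = System.OrgLimits.getMap();

System.OrgLimit apiRequests = orgLimits.get('DailyApiRequests');
if (apiRequests != null) {
    System.debug('API calls used: ' + apiRequests.getValue()
        + ' of ' + apiRequests.getLimit());
}

System.OrgLimit asyncExecutions = orgLimits.get('DailyAsyncApexExecutions');
if (asyncExecutions != null) {
    System.debug('Async executions used: ' + asyncExecutions.getValue()
        + ' of ' + asyncExecutions.getLimit());
}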

Advanced optimization patterns:

  1. Lazy evaluation — Don't compute or query until absolutely necessary
  2. Memoization — Cache computed results in static variables within a transaction
  3. Query consolidation — Combine multiple queries into one with OR conditions
  4. Selective processing — Skip records that don't meet criteria before querying
  5. Asynchronous offloading — Move heavy processing to Queueable/Batch
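
Pattern 1, lazy evaluation, deserves a concrete illustration: defer the query until a caller actually asks for the data, so it costs zero SOQL when unused. A minimal sketch (the class and method names are illustrative, not a standard API):

// Sketch of lazy evaluation: the query runs only if getContacts() is
// called, and at most once per instance of this loader.
public class LazyContactLoader {
    private final Id accountId;
    private List<Contact> contacts; // null until first requested

    public LazyContactLoader(Id accountId) {
        this.accountId = accountId;
    }

    public List<Contact> getContacts() {
        if (contacts == null) {
            // Deferred until absolutely necessary — costs 1 SOQL, once
            contacts = [SELECT Id, Name, Email
                        FROM Contact WHERE AccountId = :accountId];
        }
        return contacts;
    }
}

If no code path ever calls getContacts() during the transaction, the query never runs and no limit is consumed.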

The governor limit hierarchy of concern:

1. SOQL queries (100)    — Most commonly hit, hardest to fix retroactively
2. CPU time (10s)        — Complex logic + large datasets
3. DML statements (150)  — Secondary operations and cascading triggers
4. Heap size (6MB)       — Large query results or string processing
5. SOQL rows (50,000)    — Processing too many records at once
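
Each tier in this hierarchy has a matching getter on the built-in Limits class, so a single debug statement can snapshot all five at any point in a transaction:

// Snapshot of the five limits above, in order of concern.
System.debug(
    'SOQL: ' + Limits.getQueries() + '/' + Limits.getLimitQueries() +
    ', CPU: ' + Limits.getCpuTime() + 'ms/' + Limits.getLimitCpuTime() + 'ms' +
    ', DML: ' + Limits.getDmlStatements() + '/' + Limits.getLimitDmlStatements() +
    ', Heap: ' + Limits.getHeapSize() + '/' + Limits.getLimitHeapSize() + ' bytes' +
    ', Rows: ' + Limits.getQueryRows() + '/' + Limits.getLimitQueryRows()
);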

💻 Code Example

// Advanced Governor Limit Optimization

public class PerformanceOptimization {

    // 1. MEMOIZATION — Cache within transaction
    private static Map<Id, Account> accountCache = new Map<Id, Account>();

    public static Account getAccountCached(Id accountId) {
        if (!accountCache.containsKey(accountId)) {
            accountCache.put(accountId, [
                SELECT Id, Name, Industry, OwnerId
                FROM Account WHERE Id = :accountId
            ]);
        }
        return accountCache.get(accountId);
    }

    // Bulk cache loading
    public static void preloadAccounts(Set<Id> accountIds) {
        // Remove already-cached IDs
        accountIds.removeAll(accountCache.keySet());

        if (!accountIds.isEmpty()) {
            Map<Id, Account> newAccounts = new Map<Id, Account>(
                [SELECT Id, Name, Industry, OwnerId
                 FROM Account WHERE Id IN :accountIds]
            );
            accountCache.putAll(newAccounts);
        }
    }

    // 2. QUERY CONSOLIDATION — Reduce SOQL count

    // BAD: 3 separate queries
    // List<Account> techAccounts = [SELECT Id FROM Account WHERE Industry = 'Tech'];
    // List<Account> finAccounts = [SELECT Id FROM Account WHERE Industry = 'Finance'];
    // List<Account> healthAccounts = [SELECT Id FROM Account WHERE Industry = 'Health'];

    // GOOD: 1 consolidated query
    public static Map<String, List<Account>> getAccountsByIndustries(
        Set<String> industries
    ) {
        List<Account> allAccounts = [
            SELECT Id, Name, Industry
            FROM Account
            WHERE Industry IN :industries
        ];

        // Group in memory (free — no SOQL cost)
        Map<String, List<Account>> result = new Map<String, List<Account>>();
        for (Account acc : allAccounts) {
            if (!result.containsKey(acc.Industry)) {
                result.put(acc.Industry, new List<Account>());
            }
            result.get(acc.Industry).add(acc);
        }
        return result;
    }

    // 3. SELECTIVE PROCESSING — Skip unnecessary work
    public static void optimizedTriggerHandler(
        List<Opportunity> newOpps,
        Map<Id, Opportunity> oldMap
    ) {
        // Step 1: Filter to only records that actually changed
        List<Opportunity> stageChanged = new List<Opportunity>();
        List<Opportunity> amountChanged = new List<Opportunity>();

        for (Opportunity opp : newOpps) {
            Opportunity oldOpp = oldMap.get(opp.Id);

            if (opp.StageName != oldOpp.StageName) {
                stageChanged.add(opp);
            }
            if (opp.Amount != oldOpp.Amount) {
                amountChanged.add(opp);
            }
        }

        // Step 2: Only query/process if there are relevant changes
        if (!stageChanged.isEmpty()) {
            processStageChanges(stageChanged);
        }
        if (!amountChanged.isEmpty()) {
            processAmountChanges(amountChanged);
        }
        // If nothing changed? 0 SOQL, 0 DML — no limits consumed
    }

    // 4. HEAP OPTIMIZATION — Process large datasets efficiently
    public static Integer countMatchingRecords(String criteria) {
        // BAD: Loads all records into heap
        // List<Account> all = [SELECT Id FROM Account WHERE ...];
        // return all.size(); // Could be 50,000 records in heap!

        // GOOD: Use COUNT() — returns an integer, not records
        return [SELECT COUNT() FROM Account WHERE Industry = :criteria];
    }

    // Process without loading all into memory
    public static void processWithIterator() {
        // SOQL for-loop — processes in chunks of 200, not all at once
        for (List<Account> chunk : [
            SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology'
        ]) {
            // Each chunk is 200 records max;
            // the previous chunk becomes eligible for garbage collection
            processChunk(chunk);
        }
        // Heap never holds more than 200 records at a time
    }

    // 5. CPU OPTIMIZATION — Avoid O(n²) patterns
    public static void optimizedMatching(
        List<Contact> contacts, List<Account> accounts
    ) {
        // BAD: O(n × m) nested loop
        // for (Contact c : contacts) {
        //     for (Account a : accounts) {
        //         if (c.AccountId == a.Id) { ... }
        //     }
        // }

        // GOOD: O(n + m) with Map
        Map<Id, Account> accountMap = new Map<Id, Account>(accounts);
        for (Contact c : contacts) {
            Account a = accountMap.get(c.AccountId);
            if (a != null) {
                // O(1) lookup instead of O(m)
            }
        }
    }

    // 6. MONITORING — Proactive limit checking
    public static void checkLimitsWarning() {
        Integer queriesUsed = Limits.getQueries();
        Integer queriesMax = Limits.getLimitQueries();

        if (queriesUsed > queriesMax * 0.8) {
            System.debug(LoggingLevel.WARN,
                'WARNING: SOQL usage at ' + queriesUsed + '/' + queriesMax +
                ' (' + (queriesUsed * 100 / queriesMax) + '%)');
        }

        Integer cpuUsed = Limits.getCpuTime();
        Integer cpuMax = Limits.getLimitCpuTime();

        if (cpuUsed > cpuMax * 0.7) {
            System.debug(LoggingLevel.WARN,
                'WARNING: CPU time at ' + cpuUsed + 'ms/' + cpuMax + 'ms');
        }
    }

    // Stubs for the downstream processing referenced above
    private static void processStageChanges(List<Opportunity> opps) { /* ... */ }
    private static void processAmountChanges(List<Opportunity> opps) { /* ... */ }
    private static void processChunk(List<Account> accounts) { /* ... */ }
}

šŸ‹ļø Practice Exercise

Governor Limit Mastery Exercises:

  1. Write a utility class that provides memoized queries with automatic cache invalidation
  2. Consolidate 5 separate SOQL queries into 1 using dynamic SOQL and Maps
  3. Implement selective processing in a trigger that skips records where relevant fields haven't changed
  4. Write a heap-efficient processor using the SOQL for-loop pattern for 100K+ records
  5. Create a limit monitoring decorator that logs limit consumption before and after method calls
  6. Optimize a method from O(n²) to O(n) using Maps
  7. Design a limits budget for a complex transaction with 5 triggers and 3 Flows on the same save
  8. Write a test that verifies your code stays within 50% of governor limits with 200 records
  9. Implement query consolidation: merge parent and child queries into one with a subquery
  10. Build a governor limit dashboard that shows typical consumption patterns

āš ļø Common Mistakes

  • Checking Limits class for flow control — if you need to check if you're close to limits, your code isn't properly bulkified. Fix the root cause

  • Not accounting for other automation sharing the transaction — your trigger shares limits with Flows, Process Builders, and other triggers on the same object

  • Optimizing prematurely — profile first, optimize second. The bottleneck might not be where you think

  • Caching too aggressively — static variable caches persist for the entire transaction but consume heap. Clear caches when no longer needed

  • Not using SOQL for-loop for large datasets — regular SOQL loads all results into heap. For-loop processes in 200-record chunks
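
Because other automation shares the transaction, a useful discipline is to test for limit headroom rather than mere success. A sketch of such a test, assuming Account triggers are the code under test (the class and method names are illustrative):

// Sketch: assert the code under test uses well under half its limits at
// 200 records, leaving headroom for Flows and other triggers that will
// share the transaction in production.
@IsTest
private class LimitHeadroomTest {
    @IsTest
    static void staysUnderHalfOfLimitsWith200Records() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accounts.add(new Account(Name = 'Test ' + i));
        }

        Test.startTest();
        insert accounts; // fires the triggers under test
        Integer queriesUsed = Limits.getQueries();
        Integer dmlUsed = Limits.getDmlStatements();
        Test.stopTest();

        Assert.isTrue(queriesUsed <= Limits.getLimitQueries() / 2,
            'SOQL usage too high: ' + queriesUsed);
        Assert.isTrue(dmlUsed <= Limits.getLimitDmlStatements() / 2,
            'DML usage too high: ' + dmlUsed);
    }
}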
