Complete Governor Limits Reference & Strategies
💡 Concept
While we covered basic governor limits in Phase 3, this section provides the complete reference with advanced optimization strategies that senior developers and architects need.
Complete Governor Limits Table:
Limit                           Synchronous     Asynchronous
──────────────────────────────  ──────────────  ──────────────
SOQL queries                    100             200
SOQL rows retrieved             50,000          50,000
DML statements                  150             150
DML rows                        10,000          10,000
Heap size                       6 MB            12 MB
CPU time                        10,000 ms       60,000 ms
Callouts                        100             100
Callout timeout (single)        120 s           120 s
Callout timeout (cumulative)    120 s           120 s
Future invocations              50              0 (from batch/future)
Queueable invocations           50              1 (when chaining)
Email invocations               10              10
SOSL searches                   20              20
Event publishing (EventBus)     150             150
QueryLocator rows               N/A             50,000,000
Platform-level limits (org-wide):
API calls → based on edition (Enterprise: 100,000/day plus 1,000 per user license)
Asynchronous Apex → 250,000 executions per 24 hours (or 200 × user licenses, whichever is greater), shared across Batch, Future, and Queueable
Scheduled Apex → 100 scheduled jobs at one time
Platform Events → based on entitlement (default: 250K/hour)
Storage → data storage + file storage allocations vary by edition
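Unlike transaction limits, many of these org-wide allocations can be inspected at runtime through the System.OrgLimits class. A minimal sketch (the 'DailyApiRequests' key is one of the standard limit names; the exact set of keys varies by org and edition):

// Sketch: inspecting org-wide (24-hour) allocations at runtime.
// OrgLimits covers platform allocations; the Limits class covers
// per-transaction governor limits only.
Map<String, System.OrgLimit> orgLimits = System.OrgLimits.getMap();

System.OrgLimit apiRequests = orgLimits.get('DailyApiRequests');
if (apiRequests != null) {
    System.debug('API calls used: ' + apiRequests.getValue()
        + ' of ' + apiRequests.getLimit());
}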
Advanced optimization patterns:
- Lazy evaluation → don't compute or query until absolutely necessary
- Memoization → cache computed results in static variables within a transaction
- Query consolidation → combine multiple queries into one with OR/IN conditions
- Selective processing → skip records that don't meet criteria before querying
- Asynchronous offloading → move heavy processing to Queueable/Batch
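As a minimal sketch of the lazy evaluation pattern (class and field names here are illustrative, not from a standard library): hide the query behind a getter so it only runs if something actually asks for the data.

// Sketch: lazy evaluation — the query runs at most once,
// and only if getContacts() is actually called.
public class LazyContactLoader {
    private Id accountId;
    private List<Contact> contacts;  // null until first access

    public LazyContactLoader(Id accountId) {
        this.accountId = accountId;
    }

    public List<Contact> getContacts() {
        if (contacts == null) {
            contacts = [SELECT Id, Name FROM Contact
                        WHERE AccountId = :accountId];
        }
        return contacts;
    }
}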
The governor limit hierarchy of concern:
1. SOQL queries (100) → most commonly hit, hardest to fix retroactively
2. CPU time (10 s) → complex logic combined with large datasets
3. DML statements (150) → secondary operations and cascading triggers
4. Heap size (6 MB) → large query results or heavy string processing
5. SOQL rows (50,000) → processing too many records at once
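Each limit in this hierarchy has a matching used/maximum pair of methods on the Limits class, so a consumption snapshot is easy to log while debugging (a sketch, in hierarchy order):

// Sketch: snapshot the five limits above, in order of concern.
System.debug('SOQL queries:   ' + Limits.getQueries() + '/' + Limits.getLimitQueries());
System.debug('CPU time (ms):  ' + Limits.getCpuTime() + '/' + Limits.getLimitCpuTime());
System.debug('DML statements: ' + Limits.getDmlStatements() + '/' + Limits.getLimitDmlStatements());
System.debug('Heap (bytes):   ' + Limits.getHeapSize() + '/' + Limits.getLimitHeapSize());
System.debug('SOQL rows:      ' + Limits.getQueryRows() + '/' + Limits.getLimitQueryRows());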
💻 Code Example
// Advanced Governor Limit Optimization
public class PerformanceOptimization {

    // 1. MEMOIZATION — cache within the transaction
    private static Map<Id, Account> accountCache = new Map<Id, Account>();

    public static Account getAccountCached(Id accountId) {
        if (!accountCache.containsKey(accountId)) {
            accountCache.put(accountId, [
                SELECT Id, Name, Industry, OwnerId
                FROM Account WHERE Id = :accountId
            ]);
        }
        return accountCache.get(accountId);
    }

    // Bulk cache loading
    public static void preloadAccounts(Set<Id> accountIds) {
        // Remove already-cached IDs (note: mutates the caller's set)
        accountIds.removeAll(accountCache.keySet());

        if (!accountIds.isEmpty()) {
            Map<Id, Account> newAccounts = new Map<Id, Account>(
                [SELECT Id, Name, Industry, OwnerId
                 FROM Account WHERE Id IN :accountIds]
            );
            accountCache.putAll(newAccounts);
        }
    }

    // 2. QUERY CONSOLIDATION — reduce SOQL count

    // BAD: 3 separate queries
    // List<Account> techAccounts = [SELECT Id FROM Account WHERE Industry = 'Tech'];
    // List<Account> finAccounts = [SELECT Id FROM Account WHERE Industry = 'Finance'];
    // List<Account> healthAccounts = [SELECT Id FROM Account WHERE Industry = 'Health'];

    // GOOD: 1 consolidated query
    public static Map<String, List<Account>> getAccountsByIndustries(
        Set<String> industries
    ) {
        List<Account> allAccounts = [
            SELECT Id, Name, Industry
            FROM Account
            WHERE Industry IN :industries
        ];

        // Group in memory (free — no SOQL cost)
        Map<String, List<Account>> result = new Map<String, List<Account>>();
        for (Account acc : allAccounts) {
            if (!result.containsKey(acc.Industry)) {
                result.put(acc.Industry, new List<Account>());
            }
            result.get(acc.Industry).add(acc);
        }
        return result;
    }

    // 3. SELECTIVE PROCESSING — skip unnecessary work
    public static void optimizedTriggerHandler(
        List<Opportunity> newOpps,
        Map<Id, Opportunity> oldMap
    ) {
        // Step 1: Filter to only records that actually changed
        List<Opportunity> stageChanged = new List<Opportunity>();
        List<Opportunity> amountChanged = new List<Opportunity>();

        for (Opportunity opp : newOpps) {
            Opportunity oldOpp = oldMap.get(opp.Id);

            if (opp.StageName != oldOpp.StageName) {
                stageChanged.add(opp);
            }
            if (opp.Amount != oldOpp.Amount) {
                amountChanged.add(opp);
            }
        }

        // Step 2: Only query/process if there are relevant changes
        if (!stageChanged.isEmpty()) {
            processStageChanges(stageChanged);
        }
        if (!amountChanged.isEmpty()) {
            processAmountChanges(amountChanged);
        }
        // If nothing changed: 0 SOQL, 0 DML — no limits consumed
    }

    // 4. HEAP OPTIMIZATION — process large datasets efficiently
    public static Integer countMatchingRecords(String criteria) {
        // BAD: Loads all records into heap
        // List<Account> all = [SELECT Id FROM Account WHERE ...];
        // return all.size(); // Could be 50,000 records in heap!

        // GOOD: Use COUNT() — returns an integer, not records
        return [SELECT COUNT() FROM Account WHERE Industry = :criteria];
    }

    // Process without loading all records into memory
    public static void processWithIterator() {
        // SOQL for-loop — processes in chunks of 200, not all at once
        for (List<Account> chunk : [
            SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology'
        ]) {
            // Each chunk is 200 records max;
            // the previous chunk becomes eligible for garbage collection
            processChunk(chunk);
        }
        // Heap never holds more than 200 records at a time
    }

    // 5. CPU OPTIMIZATION — avoid O(n²) patterns
    public static void optimizedMatching(
        List<Contact> contacts, List<Account> accounts
    ) {
        // BAD: O(n × m) nested loop
        // for (Contact c : contacts) {
        //     for (Account a : accounts) {
        //         if (c.AccountId == a.Id) { ... }
        //     }
        // }

        // GOOD: O(n + m) with a Map
        Map<Id, Account> accountMap = new Map<Id, Account>(accounts);
        for (Contact c : contacts) {
            Account a = accountMap.get(c.AccountId);
            if (a != null) {
                // O(1) lookup instead of O(m)
            }
        }
    }

    // 6. MONITORING — proactive limit checking
    public static void checkLimitsWarning() {
        Integer queriesUsed = Limits.getQueries();
        Integer queriesMax = Limits.getLimitQueries();

        if (queriesUsed > queriesMax * 0.8) {
            System.debug(LoggingLevel.WARN,
                'WARNING: SOQL usage at ' + queriesUsed + '/' + queriesMax +
                ' (' + (queriesUsed * 100 / queriesMax) + '%)');
        }

        Integer cpuUsed = Limits.getCpuTime();
        Integer cpuMax = Limits.getLimitCpuTime();

        if (cpuUsed > cpuMax * 0.7) {
            System.debug(LoggingLevel.WARN,
                'WARNING: CPU time at ' + cpuUsed + 'ms/' + cpuMax + 'ms');
        }
    }

    // Stub handlers referenced above (implementations elided)
    private static void processStageChanges(List<Opportunity> opps) { /* ... */ }
    private static void processAmountChanges(List<Opportunity> opps) { /* ... */ }
    private static void processChunk(List<Account> accounts) { /* ... */ }
}
🏋️ Practice Exercise
Governor Limit Mastery Exercises:
- Write a utility class that provides memoized queries with automatic cache invalidation
- Consolidate 5 separate SOQL queries into 1 using dynamic SOQL and Maps
- Implement selective processing in a trigger that skips records where relevant fields haven't changed
- Write a heap-efficient processor using the SOQL for-loop pattern for 100K+ records
- Create a limit monitoring decorator that logs limit consumption before and after method calls
- Optimize a method from O(n²) to O(n) using Maps
- Design a limits budget for a complex transaction with 5 triggers and 3 Flows on the same save
- Write a test that verifies your code stays within 50% of governor limits with 200 records
- Implement query consolidation: merge parent and child queries into one with subquery
- Build a governor limit dashboard that shows typical consumption patterns
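For the limit-headroom test exercise in particular, a minimal sketch might look like the following. The trigger logic being exercised is whatever automation fires on Account insert in your org; the class and assertion thresholds are placeholders to adapt:

// Sketch: assert that bulk processing of 200 records stays under
// 50% of key governor limits. Names are illustrative — substitute
// your own test data factory and the code under test.
@IsTest
private class GovernorHeadroomTest {
    @IsTest
    static void bulkRunStaysUnderHalfOfLimits() {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accounts.add(new Account(Name = 'Test ' + i));
        }

        Test.startTest();
        insert accounts;  // fires the automation under test
        // Read consumption before stopTest(), while still inside
        // the fresh limit context that startTest() opened
        Integer queriesUsed = Limits.getQueries();
        Integer dmlUsed = Limits.getDmlStatements();
        Test.stopTest();

        System.assert(queriesUsed <= Limits.getLimitQueries() / 2,
            'SOQL usage too high: ' + queriesUsed);
        System.assert(dmlUsed <= Limits.getLimitDmlStatements() / 2,
            'DML usage too high: ' + dmlUsed);
    }
}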
⚠️ Common Mistakes
Checking the Limits class for flow control → if you need to check whether you're close to limits, your code isn't properly bulkified. Fix the root cause.
Not accounting for other automation sharing the transaction → your trigger shares limits with Flows, Process Builders, and other triggers on the same object.
Optimizing prematurely → profile first, optimize second. The bottleneck might not be where you think.
Caching too aggressively → static variable caches persist for the entire transaction and consume heap. Clear caches when no longer needed.
Not using the SOQL for-loop for large datasets → a regular SOQL assignment loads all results into the heap at once; the for-loop processes 200-record chunks.