# Trigger-to-Queueable Bulk Processing Pattern
When a data loader inserts 1,000 records, Salesforce does not fire your trigger once with all 1,000 records. It fires the trigger five times in chunks of 200. Each chunk shares the same Apex transaction, which means static variables persist across all five executions. This behavior creates both a challenge and an opportunity for efficient asynchronous processing.
## Why Triggers Fire Multiple Times

Salesforce enforces a maximum chunk size of 200 records per trigger invocation. A bulk operation of 1,000 records therefore results in five trigger invocations within a single transaction. Each invocation receives `Trigger.new` containing up to 200 records, but the transaction context, including static variables, survives across all invocations.
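You can observe this chunking directly with a static counter. The following is a minimal sketch (the `ChunkCounter` class and trigger name are illustrative, not part of the pattern itself); in Apex the class and the trigger live in separate files:

```apex
// ChunkCounter.cls: static state shared by every chunk in the transaction
public class ChunkCounter {
    public static Integer chunkNumber = 0;
}

// OrderItemChunkLogger.trigger: increments once per chunk
trigger OrderItemChunkLogger on OrderItem (after insert) {
    ChunkCounter.chunkNumber++;
    // For a single 1,000-record insert, this logs five entries,
    // each reporting a chunk of 200 records
    System.debug('Chunk ' + ChunkCounter.chunkNumber
        + ' of size ' + Trigger.new.size());
}
```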
## The Accumulate-and-Enqueue Pattern

The goal is to collect all affected record IDs across every trigger chunk, then enqueue exactly one Queueable job to process them all.

### The Trigger Handler
```apex
public class OrderItemTriggerHandler {

    // Static set survives across all 200-record chunks in the transaction
    private static Set<Id> pendingOrderItemIds = new Set<Id>();

    // Guard flag: ensures we only enqueue one Queueable per transaction
    @TestVisible
    private static Boolean queueableEnqueued = false;

    public static void handleAfterInsert(List<OrderItem> newItems) {
        for (OrderItem item : newItems) {
            if (item.Status__c == 'Approved') {
                pendingOrderItemIds.add(item.Id);
            }
        }
        enqueueIfNeeded();
    }

    private static void enqueueIfNeeded() {
        if (!pendingOrderItemIds.isEmpty() && !queueableEnqueued) {
            // Pass the set by reference rather than copying it: the
            // Queueable's state is serialized when the transaction commits,
            // so IDs added by later trigger chunks are still picked up
            // by the single enqueued job.
            System.enqueueJob(new OrderItemProcessingJob(pendingOrderItemIds));
            queueableEnqueued = true;
        }
    }
}
```
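The handler is invoked from a thin trigger. A minimal sketch (the trigger name is assumed):

```apex
trigger OrderItemTrigger on OrderItem (after insert) {
    OrderItemTriggerHandler.handleAfterInsert(Trigger.new);
}
```

Keeping the trigger body to a single delegation call means all logic, and all guard state, lives in the handler class, where it can be unit tested.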
### The Queueable Class

```apex
public class OrderItemProcessingJob implements Queueable {

    private Set<Id> orderItemIds;

    public OrderItemProcessingJob(Set<Id> orderItemIds) {
        this.orderItemIds = orderItemIds;
    }

    public void execute(QueueableContext ctx) {
        List<OrderItem> items = [
            SELECT Id, OrderId, Product2Id, Quantity, UnitPrice
            FROM OrderItem
            WHERE Id IN :orderItemIds
        ];
        // Heavy processing logic here
        OrderItemService.processApprovedItems(items);
    }
}
```
## The Enqueue-Once Guard

The `queueableEnqueued` static boolean ensures only one Queueable is enqueued per transaction. Without it, each trigger chunk would enqueue a separate job, wasting async limits and potentially causing duplicate processing.
The pattern works because:

- Chunk 1 adds IDs to `pendingOrderItemIds`, enqueues the job, and sets `queueableEnqueued = true`.
- Chunks 2-5 add their IDs to `pendingOrderItemIds`, but `enqueueIfNeeded()` skips enqueueing because the guard is already `true`.
- The single job still processes the IDs from chunks 2-5, because the Queueable holds a reference to the accumulated set and its state is not serialized until the transaction commits.
## Why @TestVisible Matters

The `@TestVisible` annotation on `queueableEnqueued` is essential for testing. Without it, test methods cannot reset the guard flag between test scenarios:
```apex
@IsTest
static void testMultipleBatches() {
    // Reset the guard so this test starts clean
    OrderItemTriggerHandler.queueableEnqueued = false;

    List<OrderItem> testItems = TestDataFactory.createOrderItems(500);
    insert testItems;

    System.assertEquals(1, Limits.getQueueableJobs(),
        'Should enqueue exactly one Queueable regardless of chunk count');
}
```
Without `@TestVisible`, a test method that exercises multiple scenarios in one transaction cannot clear the guard between them, because static state persists for the entire test method. (Statics do reset between test methods, since each runs in its own transaction.)
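As a sketch of why the reset matters, here is a hypothetical test method that runs two insert scenarios in a single transaction (reusing the `TestDataFactory` from the earlier example):

```apex
@IsTest
static void testGuardResetBetweenScenarios() {
    insert TestDataFactory.createOrderItems(200); // scenario 1 enqueues a job

    // Without this reset, scenario 2 would silently skip enqueueing
    OrderItemTriggerHandler.queueableEnqueued = false;

    insert TestDataFactory.createOrderItems(200); // scenario 2 enqueues again
    System.assertEquals(2, Limits.getQueueableJobs(),
        'Each scenario should enqueue its own Queueable');
}
```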
## Service Layer Separation
The Queueable should delegate business logic to a service class rather than embedding it directly. This keeps the Queueable focused on orchestration and makes the logic independently testable:
```apex
public class OrderItemService {
    public static void processApprovedItems(List<OrderItem> items) {
        // Business logic: rollups, external callouts, record creation
    }
}
```
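This separation lets unit tests exercise the business logic synchronously, without going through a trigger or the async queue. A hedged sketch of such a test:

```apex
@IsTest
static void testServiceDirectly() {
    List<OrderItem> items = TestDataFactory.createOrderItems(10);
    insert items;

    Test.startTest();
    // Direct, synchronous call: no trigger or Queueable involved
    OrderItemService.processApprovedItems(items);
    Test.stopTest();

    // Assert on the service's side effects here
    // (rollup values, created records, etc.)
}
```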
## Key Considerations
| Concern | Guidance |
|---|---|
| Chunk accumulation | Static variables persist within a single transaction, not across transactions |
| Guard reset | The guard resets naturally between transactions; use @TestVisible for test isolation |
| Async limits | Each transaction can enqueue up to 50 Queueable jobs; this pattern uses only one |
| Error handling | Attach a Transaction Finalizer (System.attachFinalizer) for failure handling and retry logic; Database.AllowsCallouts is needed only if the job makes HTTP callouts |
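The Finalizer row above can be sketched as follows. This is an illustrative outline, not a drop-in implementation: for the retry cap to actually count down, `OrderItemProcessingJob` would need an extra constructor parameter carrying the remaining attempts, which the original class does not have.

```apex
public class OrderItemRetryFinalizer implements Finalizer {

    private Set<Id> orderItemIds;
    private Integer attemptsRemaining;

    public OrderItemRetryFinalizer(Set<Id> orderItemIds, Integer attemptsRemaining) {
        this.orderItemIds = orderItemIds;
        this.attemptsRemaining = attemptsRemaining;
    }

    public void execute(FinalizerContext ctx) {
        if (ctx.getResult() == ParentJobResult.UNHANDLED_EXCEPTION
                && attemptsRemaining > 0) {
            // A Finalizer may enqueue one Queueable, even when the
            // parent job failed with an uncaught exception
            System.enqueueJob(new OrderItemProcessingJob(orderItemIds));
        }
    }
}
```

Inside `OrderItemProcessingJob.execute`, the finalizer would be attached before any work begins, e.g. `System.attachFinalizer(new OrderItemRetryFinalizer(orderItemIds, 3));`.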
This pattern is foundational for any Salesforce implementation that processes bulk data asynchronously while respecting governor limits.