The Queueable Serialization Trap: Why Bulk Operations Lose Data
One of the most deceptive bugs in Salesforce async processing happens when you combine the trigger-to-Queueable pattern with large bulk operations. Everything works perfectly in testing with 200 records or fewer (a single trigger chunk), then silently drops data in production once volumes exceed that size.
How Queueable Serialization Works
When you call System.enqueueJob(myQueueable), Salesforce serializes the entire Queueable object -- including all of its instance variables -- at that moment. The serialized snapshot is what gets executed later, not the live object reference.
This is the critical distinction: the Queueable is a frozen copy from the moment of enqueueing.
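The frozen-copy behavior is easy to demonstrate with a minimal sketch (the SnapshotDemo class and the placeholder ID variables are illustrative, not part of the handler discussed below):

```apex
public class SnapshotDemo implements Queueable {
    private List<Id> ids;
    public SnapshotDemo(List<Id> ids) {
        this.ids = ids;
    }
    public void execute(QueueableContext ctx) {
        // Logs the size captured at enqueue time, not the list's current size
        System.debug(ids.size());
    }
}

// In anonymous Apex (firstId and secondId are placeholders):
List<Id> ids = new List<Id>{ firstId };
System.enqueueJob(new SnapshotDemo(ids));
ids.add(secondId); // too late -- the job already serialized a copy holding one ID
```

Mutating the list after enqueueing has no effect on the job: serialization already took a deep copy.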
The Silent Data Loss Scenario
Consider a trigger handler that accumulates IDs in a static set and passes them to a Queueable:
```apex
public class ItemTriggerHandler {
    private static Set<Id> accumulatedIds = new Set<Id>();
    private static Boolean jobEnqueued = false;

    public static void handleAfterInsert(List<Item__c> newItems) {
        for (Item__c item : newItems) {
            accumulatedIds.add(item.Id);
        }
        if (!jobEnqueued) {
            // Queueable is serialized NOW with only chunk 1's IDs
            System.enqueueJob(new ItemProcessor(new Set<Id>(accumulatedIds)));
            jobEnqueued = true;
        }
    }
}
```
When inserting 1,000 records, Salesforce fires the trigger in five 200-record chunks:
| Trigger Chunk | Records | accumulatedIds Size | Queueable State |
|---|---|---|---|
| Chunk 1 | 1-200 | 200 | Serialized with 200 IDs |
| Chunk 2 | 201-400 | 400 | Already enqueued; guard blocks |
| Chunk 3 | 401-600 | 600 | Already enqueued; guard blocks |
| Chunk 4 | 601-800 | 800 | Already enqueued; guard blocks |
| Chunk 5 | 801-1000 | 1000 | Already enqueued; guard blocks |
The static accumulatedIds set correctly grows to 1,000 IDs, but the Queueable was serialized during chunk 1, so it contains only the first 200. The remaining 800 records are silently dropped.
Why This Is Hard to Catch
- Unit tests rarely exceed 200 records (one trigger chunk), so the bug never manifests.
- No error is thrown. The Queueable executes successfully -- it just processes fewer records than expected.
- The static set has the correct count, so logging the set size in the trigger shows 1,000.
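A bulk test that crosses the 200-record chunk boundary will surface the bug, provided it asserts on what the Queueable actually did rather than on the trigger-side set size. A sketch, assuming ItemProcessor flips Status__c from 'Pending' to 'Processed' (that behavior and the field names are assumptions for illustration):

```apex
@IsTest
private class ItemTriggerHandlerBulkTest {
    @IsTest
    static void processesAllRecordsAboveOneChunk() {
        List<Item__c> items = new List<Item__c>();
        for (Integer i = 0; i < 1000; i++) {
            items.add(new Item__c(Name = 'Item ' + i, Status__c = 'Pending'));
        }
        Test.startTest();
        insert items; // trigger fires in five 200-record chunks
        Test.stopTest(); // forces the enqueued Queueable to run synchronously
        Integer processed = [SELECT COUNT() FROM Item__c WHERE Status__c = 'Processed'];
        // Fails against the accumulating handler above: only chunk 1 is processed
        System.assertEquals(1000, processed);
    }
}
```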
The Shared Key Pattern
Instead of passing individual record IDs to the Queueable, pass a shared key -- a common identifier that the Queueable can use to re-query all relevant records at execution time.
```apex
public class ItemTriggerHandler {
    private static Boolean jobEnqueued = false;

    public static void handleAfterInsert(List<Item__c> newItems) {
        if (newItems.isEmpty()) return;
        // Use a shared key that all records in this operation share
        Id parentId = newItems[0].Parent__c;
        if (!jobEnqueued) {
            // Pass the shared key, not individual IDs
            System.enqueueJob(new ItemProcessor(parentId));
            jobEnqueued = true;
        }
    }
}
```

```apex
public class ItemProcessor implements Queueable {
    private Id parentId;

    public ItemProcessor(Id parentId) {
        this.parentId = parentId;
    }

    public void execute(QueueableContext ctx) {
        // Re-query at execution time captures ALL records
        List<Item__c> allItems = [
            SELECT Id, Name, Status__c
            FROM Item__c
            WHERE Parent__c = :parentId
            AND Status__c = 'Pending'
        ];
        // Process all items -- nothing dropped
    }
}
```
The Queueable now re-queries using the shared key at execution time, after all trigger chunks have committed their records to the database.
When This Matters vs. When It Does Not
Matters: Any scenario where bulk operations routinely exceed 200 records -- data loads, integrations, batch processes, or Flow-triggered bulk updates.
Does not matter: If your data volumes never exceed 200 records per operation, a single trigger chunk handles everything and the serialization timing is irrelevant.
Does not matter: If the Queueable always re-queries by a shared key anyway (parent record ID, batch identifier, status flag), the serialization trap is naturally avoided.
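When records share no convenient parent, a generated batch identifier can serve as the shared key. A sketch of that variant, assuming a custom Batch_Key__c text field on Item__c and an ItemProcessor constructor that accepts the key (both are assumptions, not part of the example above):

```apex
public class ItemTriggerHandler {
    // Static fields persist across all trigger chunks in one transaction,
    // so every chunk is stamped with the same key
    private static String batchKey =
        EncodingUtil.convertToHex(Crypto.generateAesKey(128));
    private static Boolean jobEnqueued = false;

    public static void handleBeforeInsert(List<Item__c> newItems) {
        for (Item__c item : newItems) {
            item.Batch_Key__c = batchKey;
        }
    }

    public static void handleAfterInsert(List<Item__c> newItems) {
        if (!jobEnqueued) {
            // ItemProcessor re-queries WHERE Batch_Key__c = :batchKey at execution time
            System.enqueueJob(new ItemProcessor(batchKey));
            jobEnqueued = true;
        }
    }
}
```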
Defensive Design
The safest approach is to assume volumes will grow. Even if today's operations are small, design Queueables to re-query at execution time rather than relying on data passed at enqueue time. The shared key pattern costs almost nothing to implement and eliminates an entire class of silent failures.