Validating AI Tool Enums Against Salesforce Picklist Metadata

Overview

When AI-powered tools create or update Salesforce records, they rely on predefined enum values in their tool schemas to guide the model's choices. If those enum values don't match the actual picklist values configured in the Salesforce org, records fail to save, and the error surfaces only at the moment of the DML operation, after the AI has already committed to an invalid value.

This article explains why hardcoded enum values are a reliability risk in AI-to-Salesforce integrations, and how to dynamically populate tool schemas from the org's own picklist metadata. The pattern applies to any AI integration that writes to Salesforce, including chatbots, agent frameworks, batch processors, and middleware connectors.

When building AI-powered tools that create or update Salesforce records, a common failure mode is the mismatch between the enum values defined in the tool schema and the actual picklist values configured in the org.

The Problem

AI tool calling relies on structured schemas that tell the model which values are valid for each parameter. A tool definition for creating a case might include:

{
  "name": "create_case",
  "input_schema": {
    "properties": {
      "priority": {
        "type": "string",
        "enum": ["Low", "Medium", "High", "Critical"]
      }
    }
  }
}

If the org's Priority picklist uses P1 - Critical, P2 - High, P3 - Medium, P4 - Low instead of the hardcoded values, the AI will confidently select "High" — a perfectly valid enum value per the tool definition — and the DML operation will fail:

INVALID_OR_NULL_FOR_RESTRICTED_PICKLIST: Priority: bad value for restricted picklist field: High

This is particularly insidious because the error only surfaces at DML time, after the AI has already committed to a value. The tool schema told the model that "High" was valid, so the model had no reason to doubt it.

The Fix: Dynamic Enum Population

Instead of hardcoding enum values, query the org's picklist metadata at runtime and inject the actual values into the tool definition.

Using Schema.DescribeSObjectResult

public class ToolDefinitionBuilder {

    public static List<String> getPicklistValues(String objectName, String fieldName) {
        Schema.SObjectType sot = Schema.getGlobalDescribe().get(objectName);
        Schema.DescribeFieldResult dfr = sot.getDescribe()
            .fields.getMap()
            .get(fieldName)
            .getDescribe();

        // Only expose active values to the model
        List<String> values = new List<String>();
        for (Schema.PicklistEntry pe : dfr.getPicklistValues()) {
            if (pe.isActive()) {
                values.add(pe.getValue());
            }
        }
        return values;
    }

    public static Map<String, Object> buildCaseCreationTool() {
        List<String> priorities = getPicklistValues('Case', 'Priority');
        List<String> statuses = getPicklistValues('Case', 'Status');
        List<String> types = getPicklistValues('Case', 'Type');

        // Build tool schema with real picklist values
        Map<String, Object> priorityProp = new Map<String, Object>{
            'type' => 'string',
            'description' => 'Case priority level',
            'enum' => priorities
        };

        Map<String, Object> properties = new Map<String, Object>{
            'subject' => new Map<String, Object>{
                'type' => 'string',
                'description' => 'Brief summary of the issue'
            },
            'priority' => priorityProp,
            'status' => new Map<String, Object>{
                'type' => 'string',
                'enum' => statuses
            },
            'type' => new Map<String, Object>{
                'type' => 'string',
                'enum' => types
            }
        };

        return new Map<String, Object>{
            'name' => 'create_case',
            'description' => 'Creates a new support case',
            'input_schema' => new Map<String, Object>{
                'type' => 'object',
                'properties' => properties,
                'required' => new List<String>{ 'subject', 'priority' }
            }
        };
    }
}
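
Serializing the result of buildCaseCreationTool() with JSON.serialize produces a tool definition along these lines. This is a sketch: the enum arrays contain whatever the running org actually has configured, illustrated here with common standard Case values.

```json
{
  "name": "create_case",
  "description": "Creates a new support case",
  "input_schema": {
    "type": "object",
    "properties": {
      "subject": { "type": "string", "description": "Brief summary of the issue" },
      "priority": {
        "type": "string",
        "description": "Case priority level",
        "enum": ["High", "Medium", "Low"]
      },
      "status": { "type": "string", "enum": ["New", "Working", "Escalated", "Closed"] },
      "type": { "type": "string", "enum": ["Question", "Problem", "Feature Request"] }
    },
    "required": ["subject", "priority"]
  }
}
```

Because the enum arrays are built from describe results, an org using "P1 - Critical" through "P4 - Low" would automatically surface those values instead.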

Handling Record Type-Dependent Picklists

Picklist values can vary by record type. If your tool targets a specific record type, filter accordingly:

public static List<String> getPicklistValuesForRecordType(
    String objectName, String fieldName, Id recordTypeId
) {
    // Describe results and PicklistValueInfo queries return org-wide values;
    // only the UI API filters them by record type. This sketch assumes a
    // Named Credential (here called 'Self_Org') pointing back at this org.
    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    req.setEndpoint('callout:Self_Org/services/data/v60.0/ui-api/object-info/'
        + objectName + '/picklist-values/' + recordTypeId + '/' + fieldName);
    HttpResponse res = new Http().send(req);

    Map<String, Object> body =
        (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
    List<String> values = new List<String>();
    for (Object entry : (List<Object>) body.get('values')) {
        values.add((String) ((Map<String, Object>) entry).get('value'));
    }
    return values;
}
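
A caller typically resolves the record type Id from its developer name first. In this sketch, 'Support' is a hypothetical record type name, and the method is assumed to live in ToolDefinitionBuilder:

```apex
// 'Support' is a hypothetical record type developer name for illustration
Id supportRt = Schema.SObjectType.Case
    .getRecordTypeInfosByDeveloperName()
    .get('Support')
    .getRecordTypeId();

List<String> statuses = ToolDefinitionBuilder
    .getPicklistValuesForRecordType('Case', 'Status', supportRt);
```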

Best Practices

  1. Always use enum for restricted picklists. If a field is restricted, the AI must pick from the exact valid values; a free-text string parameter invites values that will fail at DML time.

  2. Cache tool definitions. Picklist metadata rarely changes. Use Cache.Org or a static variable to avoid redundant describe calls on every message.

  3. Include descriptions on enum values. If the picklist labels differ from API values, add a description to the property that lists both, so the AI understands the semantic meaning.

  4. Validate before DML. Even with correct enums, add a validation step before insert to catch edge cases like dependent picklists:

String priority = (String) input.get('priority');
Set<String> validPriorities = new Set<String>(getPicklistValues('Case', 'Priority'));
if (!validPriorities.contains(priority)) {
    return new Map<String, Object>{
        'error' => 'Invalid priority value. Valid options: '
            + String.join(new List<String>(validPriorities), ', ')
    };
}
  5. Refresh enums on deployment. If picklist values change between environments (sandbox vs. production), the dynamic approach handles this automatically since it reads from the running org's metadata.
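
The caching advice above can be sketched with a static map keyed by object and field. This is a minimal per-transaction sketch; for reuse across transactions, Cache.Org with a configured partition follows the same get-or-populate shape.

```apex
public class PicklistCache {
    // Per-transaction cache: describe calls run at most once per field
    private static Map<String, List<String>> cache = new Map<String, List<String>>();

    public static List<String> getValues(String objectName, String fieldName) {
        String key = objectName + '.' + fieldName;
        if (!cache.containsKey(key)) {
            cache.put(key, ToolDefinitionBuilder.getPicklistValues(objectName, fieldName));
        }
        return cache.get(key);
    }
}
```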

This pattern applies to any AI integration that creates Salesforce records — not just chatbots, but also batch processors, middleware connectors, and agent frameworks.