
Designing Conversational Flows via System Prompts

Overview

Instead of hard-coding conversation logic into application code, this article demonstrates how to define entire multi-step workflows within LLM system prompts -- and store those prompts as configurable metadata records. This approach lets admins and business users modify chatbot behavior without code deployments.

  • Why it matters: Coded conversation flows are rigid and require developer involvement for every change. Prompt-driven flows enable rapid iteration, reduce deployment risk, and empower non-technical team members to manage AI behavior directly.
  • What you will learn: How to structure system prompts with distinct sections (identity, communication style, flow steps, guardrails), store them in Custom Metadata for declarative management, and implement the Record Overview confirmation pattern that prevents accidental data creation.

Traditional chatbot frameworks use decision trees, state machines, or coded if/else branches to guide conversations. With modern LLMs, a more flexible approach is to define the entire conversational flow in the system prompt. The LLM follows the instructions naturally, adapting to user input while respecting the defined sequence and guardrails.

Why Prompt-Driven Flows

Coded conversation flows are rigid. Adding a new clarifying question or changing the order of steps requires a code change, test updates, and deployment. With prompt-driven flows:

  • Non-developers can modify behavior by editing text records in Salesforce Setup
  • Flow changes deploy instantly via Custom Metadata without a code release
  • The LLM handles edge cases -- unexpected user input, topic changes, and ambiguous responses are managed by the model's reasoning rather than explicit branching logic

System Prompt Architecture

A well-structured system prompt has distinct sections, each governing a different aspect of the AI's behavior:

IDENTITY & ROLE:
You are a support assistant for Acme Corp's customer portal.
You help users submit support cases and review existing requests.

COMMUNICATION STYLE:
- Professional but approachable tone
- Use bullet points for lists of 3+ items
- Format data in tables when comparing multiple items
- Keep responses concise -- under 200 words unless detail is requested

CONVERSATION FLOW:
1. Greet the user and ask how you can help
2. If they want to create a case:
   a. Ask for the subject (one sentence summary)
   b. Ask for priority (Low, Medium, High, Critical)
   c. Ask clarifying questions based on the subject
   d. Present a Record Overview table for confirmation
   e. Only call the create_case tool after explicit user approval
3. If they want to review cases:
   a. Call the get_cases tool to retrieve their open cases
   b. Present results in a formatted table
   c. Ask if they want details on a specific case

TOOL CALLING RULES:
- NEVER call create_case without showing a Record Overview first
- NEVER assume field values -- ask if unclear
- ALWAYS confirm before any create or update operation

ANTI-PATTERNS (things you must never do):
- Do not create multiple records in a single response
- Do not skip the overview step, even if the user says "just do it"
- Do not provide technical details about internal systems
- Do not make up case numbers or reference IDs

Storing Flow Rules in Custom Metadata

Rather than embedding the entire prompt in Apex, store each section as a separate Custom Metadata record. This enables granular control over which sections are active and their ordering.

AI_Context__mdt Records:
+---------------------+---------------------+---------------+--------------+
| DeveloperName       | Category__c         | Sort_Order__c | Is_Active__c |
+---------------------+---------------------+---------------+--------------+
| Communication_Style | Communication Style | 1             | true         |
| Conversation_Flow   | Conversation Flow   | 2             | true         |
| Tool_Calling_Rules  | Tool Calling Rules  | 3             | true         |
| Anti_Patterns       | Anti-Patterns       | 4             | true         |
| Holiday_Greeting    | Communication Style | 0             | false        |
+---------------------+---------------------+---------------+--------------+

Apex assembles the prompt dynamically:

public static String assembleSystemPrompt() {
    List<AI_Context__mdt> sections = [
        SELECT Category__c, Context_Text__c
        FROM AI_Context__mdt
        WHERE Is_Active__c = true
        ORDER BY Sort_Order__c ASC
    ];

    List<String> parts = new List<String>();
    for (AI_Context__mdt section : sections) {
        parts.add(section.Category__c.toUpperCase() + ':\n' + section.Context_Text__c);
    }
    return String.join(parts, '\n\n');
}
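As a usage sketch, the assembled prompt can feed an outbound LLM callout. The Named Credential (LLM_API), endpoint path, and request JSON shape below are illustrative assumptions, not a specific vendor's API:

```apex
public static String callLLM(String userMessage) {
    HttpRequest req = new HttpRequest();
    // 'LLM_API' is a hypothetical Named Credential; substitute your own.
    req.setEndpoint('callout:LLM_API/v1/chat');
    req.setMethod('POST');
    req.setHeader('Content-Type', 'application/json');
    // Assumed payload shape: a system string plus a messages array.
    req.setBody(JSON.serialize(new Map<String, Object>{
        'system'   => assembleSystemPrompt(),
        'messages' => new List<Object>{
            new Map<String, String>{ 'role' => 'user', 'content' => userMessage }
        }
    }));
    HttpResponse res = new Http().send(req);
    return res.getBody();
}
```

Because the prompt is assembled per request, metadata changes take effect on the very next conversation turn.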

Activating the Holiday_Greeting record adds a seasonal message without touching code. Deactivating Anti_Patterns during testing lets you exercise tool calls more freely.
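Toggling can be done in Setup, or programmatically through the Apex Metadata API. A minimal sketch, assuming the Is_Active__c field shown above (Custom Metadata cannot be modified with DML; changes deploy asynchronously):

```apex
// Sketch: flip Is_Active__c on the Holiday_Greeting record.
Metadata.CustomMetadata record = new Metadata.CustomMetadata();
record.fullName = 'AI_Context.Holiday_Greeting';
record.label = 'Holiday Greeting';

Metadata.CustomMetadataValue activeField = new Metadata.CustomMetadataValue();
activeField.field = 'Is_Active__c';
activeField.value = true;
record.values.add(activeField);

Metadata.DeployContainer container = new Metadata.DeployContainer();
container.addMetadata(record);
// Passing null skips the deploy-result callback.
Metadata.Operations.enqueueDeployment(container, null);
```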

The Record Overview Pattern

The most important conversational guardrail is the Record Overview -- a structured summary presented to the user before any create or update operation. This prevents accidental record creation and gives the user a clear confirmation point.

RECORD OVERVIEW:
| Field | Value |
|-------------|-------------------------------------|
| Subject | Login page returns 403 after update |
| Priority | High |
| Category | Authentication |
| Description | Users report 403 errors when... |

Shall I create this case?

The system prompt instructs the AI to always present this table and wait for confirmation. The tool call only executes when the user explicitly approves -- "yes", "looks good", "go ahead".
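The prompt-level rule can be backed by a code-level guard so the tool itself refuses to run without confirmation. A minimal sketch -- the class name, parameter list, and userConfirmed flag are illustrative assumptions about how the conversation layer passes state to the tool:

```apex
public with sharing class CaseCreationTool {
    public class ConfirmationRequiredException extends Exception {}

    // Defense in depth: even if the model skips the overview step,
    // the tool rejects any unconfirmed create request.
    public static Id createCase(String subject, String priority, Boolean userConfirmed) {
        if (userConfirmed != true) {
            throw new ConfirmationRequiredException(
                'Present a Record Overview and obtain approval before creating.');
        }
        Case c = new Case(Subject = subject, Priority = priority);
        insert c;
        return c.Id;
    }
}
```

This way the Record Overview is enforced twice: once by the prompt, once by the code path the tool call must pass through.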

Practical Guidelines

  • Keep sections under 500 words each. LLMs follow shorter, well-structured instructions more reliably than long prose.
  • Use numbered steps for sequential flows. The model follows ordered lists more consistently than paragraph descriptions.
  • Define anti-patterns explicitly. Telling the AI what NOT to do is as important as telling it what to do. Without anti-pattern guidance, models tend to be overly helpful and skip confirmation steps.
  • Version control via DeveloperName. Use descriptive names like Conversation_Flow_V2 when iterating, keeping the previous version inactive for rollback.
  • Test with edge cases. Try interrupting the flow mid-sequence, providing contradictory information, or requesting actions outside the defined scope. Adjust the prompt based on how the model handles these scenarios.
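The assembly logic itself is also testable. The sketch below assumes assembleSystemPrompt() lives in a hypothetical class named PromptAssembler and that the records from the earlier table exist in the org; Custom Metadata is visible to Apex tests without any test-data inserts:

```apex
@IsTest
private class PromptAssemblerTest {
    @IsTest
    static void activeSectionsAppearInSortOrder() {
        String prompt = PromptAssembler.assembleSystemPrompt();
        // Active sections should be present...
        System.assert(prompt.contains('CONVERSATION FLOW'), 'Flow section missing');
        System.assert(prompt.contains('ANTI-PATTERNS'), 'Anti-patterns section missing');
        // ...and ordered by Sort_Order__c.
        System.assert(
            prompt.indexOf('COMMUNICATION STYLE') < prompt.indexOf('CONVERSATION FLOW'),
            'Sections out of order');
    }
}
```

A similar assertion against a known snippet of an inactive record's Context_Text__c can verify that deactivated sections never leak into the prompt.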

This approach produces conversational experiences that feel natural while maintaining the structure and safety rails that production systems require.