Guided Agent Workflows with MCP Prompts
You can expose 50 MCP tools and still get poor answers from an AI agent — because the agent doesn't know in what order to call them, or how to summarise the results. A user asks "is this customer at risk?" and the agent makes one tool call when it should have made three.
MCP prompts are the fix. A prompt is a Mustache template authored by the API operator, published over MCP, and rendered on demand by the server. The agent fetches the rendered text via prompts/get and then follows the instructions inside it. The operator gets to script the workflow once; every agent gets the same playbook.
This recipe builds two prompts that demonstrate the pattern.
What You'll Build
Two MCP prompts on top of an existing customer API:
- `customer_analysis` — given a customer id (and optionally a segment, analysis type, and time period), produces a structured analysis script that names the right tools and resources in the right order.
- `data_quality_check` — given a table, a column, and a threshold, produces a checklist for validating data freshness, null rates, and distribution.
Crucially, prompts do not execute SQL. They render text. The agent then chooses to call the tools and resources the prompt mentions.
How Prompts Differ From Tools and Resources
The prompt is a script for the agent. flAPI's prompts/get handler renders Mustache placeholders against the supplied arguments and returns the result as a chat message — see mcp-prompt: handling in src/mcp_route_handlers.cpp and the response shape in MCP Protocol Reference.
Prerequisites
- An existing flAPI project with at least one `mcp-tool` declared (we'll reference `customer_lookup`)
- An existing MCP resource for the schema is useful but optional — see Schema as an MCP Resource
Project Layout
prompts-demo/
├── flapi.yaml
├── data/
│ └── customers.parquet
└── sqls/
├── customer-common.yaml
├── customer-lookup-tool.yaml # existing mcp-tool
├── customer-schema-resource.yaml # existing mcp-resource
├── customer-analysis-prompt.yaml # NEW
└── data-quality-check-prompt.yaml # NEW
Prompts live in their own YAML files alongside the rest of the endpoint config; flAPI picks them up by the presence of an mcp-prompt: block at the top.
Step-by-Step
1. The customer_analysis Prompt
sqls/customer-analysis-prompt.yaml:
```yaml
mcp-prompt:
  name: customer_analysis
  description: Generate a structured customer analysis playbook for the agent
  template: |
    You are a customer data analyst. Follow this playbook precisely.

    ## Analysis Request
    {{#customer_id}}
    - Customer ID: {{customer_id}}
    {{/customer_id}}
    {{#segment}}
    - Segment focus: {{segment}}
    {{/segment}}
    {{#analysis_type}}
    - Analysis type: {{analysis_type}}
    {{/analysis_type}}
    {{#time_period}}
    - Time period: {{time_period}}
    {{/time_period}}

    ## Step 1 — Ground yourself
    Call the `customer_schema` MCP resource (URI `flapi://customer_schema`)
    to confirm field names and types before issuing any tool call.

    ## Step 2 — Fetch the customer
    Call the `customer_lookup` MCP tool with arguments:
    { "id": "{{customer_id}}"{{#segment}}, "segment": "{{segment}}"{{/segment}} }
    Confirm the returned `c_mktsegment` matches the segment focus
    {{#segment}}({{segment}}){{/segment}}. If it does not, stop and report
    the discrepancy to the user.

    ## Step 3 — Profile and recommend
    Using the row returned by `customer_lookup`:
    1. Summarise the customer profile in one paragraph.
    2. Flag any of these risk signals:
       - `c_acctbal` below 0
       - `c_address` empty or single line
       - `c_phone` not matching the regional format
    3. Recommend the next best action ({{#analysis_type}}{{analysis_type}}-style{{/analysis_type}}).
    {{#include_schema}}

    ## Step 4 — Append schema reference
    Quote the relevant columns from `flapi://customer_schema` so the user
    can see the data types behind your analysis.
    {{/include_schema}}

    Respond in Markdown.
  arguments:
    - customer_id
    - segment
    - analysis_type
    - time_period
    - include_schema
```
Key things to notice:
- **Inline template only.** Unlike `mcp-tool` and `mcp-resource`, prompts use an inline `template:` string and do not support `template-source:` files. (See CONFIG_REFERENCE.md §3.4.)
- **No `connection:` block.** Prompts never run SQL, so there is no database connection to declare.
- **Mustache sections gate optional content.** `{{#segment}}…{{/segment}}` renders only when `segment` is supplied. This lets one prompt template cover several call shapes.
- **The prompt names the tools.** `customer_lookup` and `flapi://customer_schema` are referenced by exact name so the agent knows which `tools/call` / `resources/read` to make next.
2. The data_quality_check Prompt
A second prompt to show multi-argument substitution and a slightly different style.
sqls/data-quality-check-prompt.yaml:
```yaml
mcp-prompt:
  name: data_quality_check
  description: Generate a data-quality checklist for a single column
  template: |
    You are a data quality engineer reviewing the column `{{column}}`
    of table `{{table}}`.

    Apply the following checks in order. For each check, call the
    appropriate flAPI MCP tool and report PASS / FAIL / SKIP with
    a one-line justification.

    ## Checks
    1. **Freshness** — confirm the most recent row is no older than
       {{#max_age_hours}}{{max_age_hours}}{{/max_age_hours}}{{^max_age_hours}}24{{/max_age_hours}} hours.
    2. **Null rate** — confirm that fewer than
       {{#null_threshold_pct}}{{null_threshold_pct}}{{/null_threshold_pct}}{{^null_threshold_pct}}5{{/null_threshold_pct}}%
       of rows have a NULL `{{column}}`.
    3. **Distinct count** — confirm the column is not collapsed to a single
       distinct value (which often indicates a broken ingest).
    {{#check_distribution}}
    4. **Distribution** — call the `column_histogram` tool and verify
       the top bucket is no more than 80% of the total.
    {{/check_distribution}}

    ## Output
    Return a Markdown table:

    | Check | Result | Notes |
    |-------|--------|-------|

    End with a one-paragraph "Overall verdict" recommending one of:
    ACCEPT, ACCEPT-WITH-WARNINGS, REJECT.
  arguments:
    - table
    - column
    - max_age_hours
    - null_threshold_pct
    - check_distribution
```
This prompt uses inverted sections — `{{^max_age_hours}}24{{/max_age_hours}}` — to inline a default value when the argument isn't supplied. That keeps the playbook self-contained even when the caller omits optional inputs.
End-to-End JSON-RPC Walkthrough
1. Initialize and discover
```shell
$ curl -sS -i -X POST http://localhost:8080/mcp/jsonrpc \
    -H "Content-Type: application/json" \
    -d '{
      "jsonrpc": "2.0",
      "id": 1,
      "method": "initialize",
      "params": {
        "protocolVersion": "2025-11-25",
        "clientInfo": {"name": "curl-demo", "version": "1.0.0"}
      }
    }'
HTTP/1.1 200 OK
Mcp-Session-Id: 7a1c-...-9f88

$ SID=7a1c-...-9f88
```
2. List prompts
```shell
$ curl -sS -X POST http://localhost:8080/mcp/jsonrpc \
    -H "Content-Type: application/json" \
    -H "Mcp-Session-Id: $SID" \
    -d '{"jsonrpc":"2.0","id":2,"method":"prompts/list","params":{}}' | jq
```
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "prompts": [
      {
        "name": "customer_analysis",
        "description": "Generate a structured customer analysis playbook for the agent",
        "arguments": [
          { "name": "customer_id" },
          { "name": "segment" },
          { "name": "analysis_type" },
          { "name": "time_period" },
          { "name": "include_schema" }
        ]
      },
      {
        "name": "data_quality_check",
        "description": "Generate a data-quality checklist for a single column",
        "arguments": [
          { "name": "table" },
          { "name": "column" },
          { "name": "max_age_hours" },
          { "name": "null_threshold_pct" },
          { "name": "check_distribution" }
        ]
      }
    ]
  }
}
```
3. Render customer_analysis
```shell
$ curl -sS -X POST http://localhost:8080/mcp/jsonrpc \
    -H "Content-Type: application/json" \
    -H "Mcp-Session-Id: $SID" \
    -d '{
      "jsonrpc": "2.0",
      "id": 3,
      "method": "prompts/get",
      "params": {
        "name": "customer_analysis",
        "arguments": {
          "customer_id": "12345",
          "segment": "AUTOMOBILE",
          "analysis_type": "churn-risk",
          "include_schema": true
        }
      }
    }' | jq
```
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "description": "Generate a structured customer analysis playbook for the agent",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "You are a customer data analyst. Follow this playbook precisely.\n\n## Analysis Request\n- Customer ID: 12345\n- Segment focus: AUTOMOBILE\n- Analysis type: churn-risk\n\n## Step 1 — Ground yourself\nCall the `customer_schema` MCP resource ..."
        }
      }
    ]
  }
}
```
Notice three things:
- The `time_period` argument was omitted, so the entire `{{#time_period}}…{{/time_period}}` section disappeared from the output.
- `include_schema: true` enabled the "Step 4 — Append schema reference" section.
- The result is a chat message (`role: "user"`), not raw text. flAPI wraps the rendered template in a single user-role message so that the client can drop it straight into an LLM conversation.
4. Render data_quality_check with defaults
```shell
$ curl -sS -X POST http://localhost:8080/mcp/jsonrpc \
    -H "Content-Type: application/json" \
    -H "Mcp-Session-Id: $SID" \
    -d '{
      "jsonrpc": "2.0",
      "id": 4,
      "method": "prompts/get",
      "params": {
        "name": "data_quality_check",
        "arguments": {
          "table": "customers",
          "column": "c_acctbal"
        }
      }
    }' | jq -r '.result.messages[0].content.text'
You are a data quality engineer reviewing the column `c_acctbal`
of table `customers`.
...
1. **Freshness** — confirm the most recent row is no older than 24 hours.
2. **Null rate** — confirm that fewer than 5% of rows have a NULL `c_acctbal`.
...
```
The inverted sections injected the `24` and `5` defaults because the agent didn't supply `max_age_hours` or `null_threshold_pct`. The optional `check_distribution` step was omitted entirely.
Prompts Don't Run SQL — Why That Matters
This is the single biggest misconception about MCP prompts:
> A `prompts/get` call renders text. It does not query the database, it does not call other tools, and it does not authenticate against your `auth` block.
The server reads the mcp-prompt.template string, substitutes Mustache placeholders against the supplied arguments, and returns the rendered chat message. The agent reads the response and decides what to do next — usually a tools/call to one of the tools mentioned in the rendered text.
A consequence: placeholders are substituted as-is. If a malicious caller supplies "customer_id": "12345; DROP TABLE customers", that string ends up inside the rendered prompt — but it is text for the agent to read, not SQL passed to a database. The agent is expected to call customer_lookup with that value, at which point flAPI's request validators on the tool will reject it (see Validation).
If you want a prompt to consume live data, design it so the prompt instructs the agent to call a tool, and put the validation on the tool.
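For illustration, a request-validation block on the `customer_lookup` endpoint might look like the sketch below. Treat every key name here (`field-name`, `field-in`, `validators`, `type`) as an assumption to verify against your flAPI version's Validation reference, not as confirmed syntax:

```yaml
# sqls/customer-lookup-tool.yaml (fragment) — hypothetical sketch; verify
# the exact validator keys against the Validation reference.
request:
  - field-name: id
    field-in: query
    description: Customer identifier
    required: true
    validators:
      - type: int   # a numeric-only rule rejects "12345; DROP TABLE customers"
        min: 1
```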
Mustache Cheat-Sheet
| Syntax | Meaning |
|---|---|
| `{{customer_id}}` | HTML-escape and substitute the value |
| `{{{customer_id}}}` | Substitute the value without escaping (rare in prompts; common in SQL templates) |
| `{{#segment}}…{{/segment}}` | Render the section only if `segment` is present and truthy |
| `{{^segment}}…{{/segment}}` | Render the section only if `segment` is absent or falsy |
| `{{! comment }}` | Ignored at render time |
Arrays are also supported (`{{#items}}{{.}}{{/items}}`), but prompts rarely need them — most playbooks use scalar arguments only. For the full grammar see the upstream Mustache spec referenced by flAPI's template engine.
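To make the cheat-sheet concrete, here is a minimal sketch of a Mustache-subset renderer in Python — not flAPI's actual engine, just enough to reproduce the behaviours in the table (scalar context values only; no arrays, partials, or nested same-name sections):

```python
import html
import re

# Sections and inverted sections: {{#name}}...{{/name}} / {{^name}}...{{/name}}
SECTION = re.compile(r"\{\{([#^])(\w+)\}\}(.*?)\{\{/\2\}\}", re.DOTALL)

def render(template: str, ctx: dict) -> str:
    def expand(m):
        kind, name, body = m.groups()
        present = bool(ctx.get(name))
        # "#" sections render when the value is truthy, "^" when it is not.
        if (kind == "#") == present:
            return render(body, ctx)
        return ""

    out = SECTION.sub(expand, template)
    out = re.sub(r"\{\{!.*?\}\}", "", out, flags=re.DOTALL)  # comments
    out = re.sub(r"\{\{\{(\w+)\}\}\}",
                 lambda m: str(ctx.get(m.group(1), "")), out)  # unescaped
    out = re.sub(r"\{\{(\w+)\}\}",
                 lambda m: html.escape(str(ctx.get(m.group(1), ""))), out)
    return out

# Mirrors the default-value trick from data_quality_check above.
tpl = ("no older than {{#max_age_hours}}{{max_age_hours}}{{/max_age_hours}}"
       "{{^max_age_hours}}24{{/max_age_hours}} hours")
print(render(tpl, {"max_age_hours": 6}))  # no older than 6 hours
print(render(tpl, {}))                    # no older than 24 hours
```

Running the same template with and without `max_age_hours` shows how one prompt covers both call shapes without any server-side logic.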
Tying It Together
The most productive pattern is:
- One MCP resource for the schema (`customer_schema`) — provides shared context.
- One MCP tool for the data (`customer_lookup`) — does the work.
- One MCP prompt for each common workflow (`customer_analysis`, `data_quality_check`) — chains the previous two.
A single agent session might do prompts/get → resources/read → tools/call → final answer, with the operator's prompt template doing the heavy lifting of orchestration.
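That session can be sketched as three JSON-RPC bodies built in order. The method names follow the MCP spec; the argument values are illustrative only:

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC ids increment per request

def rpc(method: str, params: dict) -> str:
    """Build one JSON-RPC 2.0 request body for POSTing to /mcp/jsonrpc."""
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids),
                       "method": method, "params": params})

# One plausible session: fetch the playbook, ground on the schema,
# then call the tool the playbook names.
session = [
    rpc("prompts/get", {"name": "customer_analysis",
                        "arguments": {"customer_id": "12345"}}),
    rpc("resources/read", {"uri": "flapi://customer_schema"}),
    rpc("tools/call", {"name": "customer_lookup",
                       "arguments": {"id": "12345"}}),
]
print(" -> ".join(json.loads(b)["method"] for b in session))
# prompts/get -> resources/read -> tools/call
```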
See Also
- MCP Protocol Reference — the wire shape of `prompts/list` and `prompts/get`
- MCP Overview — when to choose a prompt over a tool or resource
- Schema as an MCP Resource — the companion recipe for the `customer_schema` resource this prompt references
- Validation — the validator framework that protects tools when a prompt's instructions trigger a tool call
- Claude Integration — wiring Claude Desktop to discover and render these prompts automatically