Examples

Two working example agents ship with Kurral — **ShopBot** (Anthropic + MCP) and **HelpDesk** (OpenAI + Proxy). Both demonstrate different integration patterns and include built-in security testing payloads.


ShopBot

A customer service agent for an e-commerce store. Uses Claude via the Anthropic SDK with MCP tool calling.

Architecture

User ──▶ ShopBot Agent (Claude) ──▶ MCP Server (tools)
              │
              └── Kurral (observability + security)

What It Does

  • Searches orders by customer email
  • Looks up product information
  • Processes returns and refunds
  • Handles customer complaints

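A minimal sketch of what a tool like order search might look like behind the MCP server. The table name, columns, and function name here are assumptions for illustration, not ShopBot's actual schema (see seed_database.py for the real one):

```python
import sqlite3

# Hypothetical demo schema, assumed for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, email TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('A-100', 'pat@example.com', 'shipped')")

def search_orders(email):
    """Look up orders by customer email (parameterized query)."""
    rows = conn.execute(
        "SELECT id, email, status FROM orders WHERE email = ?", (email,)
    ).fetchall()
    return [{"id": r[0], "email": r[1], "status": r[2]} for r in rows]

print(search_orders("pat@example.com"))
```

The MCP server would expose a function like this as a tool the agent can call; the agent itself never touches the database directly.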
Setup

cd examples/shopbot

# Install dependencies
pip install -r requirements.txt

# Seed the demo database
python seed_database.py

# Set environment variables
export ANTHROPIC_API_KEY=sk-ant-your-key
export KURRAL_API_KEY=kr_live_your-key
export KURRAL_API_URL=https://kurral-api.onrender.com

# Start the MCP server
python shopbot_server.py

# In another terminal, run the agent
python shopbot_agent.py

Integration Pattern

ShopBot demonstrates the SDK tracing + MCP pattern:

# shopbot_agent.py
import os

import anthropic

KURRAL_API_URL = os.environ["KURRAL_API_URL"]
KURRAL_API_KEY = os.environ["KURRAL_API_KEY"]

# Option A: Direct to Anthropic (with SDK tracing)
client = anthropic.Anthropic()

# Option B: Through Kurral proxy (automatic observability)
client = anthropic.Anthropic(
    base_url=f"{KURRAL_API_URL}/api/proxy/anthropic",
    default_headers={
        "X-Kurral-API-Key": KURRAL_API_KEY,
        "x-kurral-agent": "ShopBot",
    },
)

Security Testing

ShopBot ships with red-team payloads built into the agent for security testing:

  • SQL injection via search queries
  • Path traversal via file access tools
  • Prompt injection via customer messages
  • Auth bypass attempts

Run a security scan against ShopBot to see Kurral detect these vulnerabilities.
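To see why the SQL-injection class of payload matters, here is a self-contained sketch (the payload string is a classic example, not one of ShopBot's actual payloads) contrasting a vulnerable string-formatted query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, email TEXT)")
conn.execute("INSERT INTO orders VALUES ('A-100', 'pat@example.com')")

payload = "' OR '1'='1"  # classic injection payload, for illustration only

# Vulnerable: the payload is spliced into the SQL string, so the OR clause
# matches every row in the table.
leaked = conn.execute(
    f"SELECT id FROM orders WHERE email = '{payload}'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string.
safe = conn.execute(
    "SELECT id FROM orders WHERE email = ?", (payload,)
).fetchall()

print(len(leaked), len(safe))  # injection returns rows; parameterized returns none
```

Kurral's scan flags tool calls where attacker-controlled input reaches a query like the first one.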


HelpDesk

An IT support agent that uses OpenAI GPT-4o via the Kurral LLM Proxy. Demonstrates the proxy-only integration pattern — zero SDK code needed.

Architecture

User ──▶ HelpDesk Agent ──▶ Kurral LLM Proxy ──▶ OpenAI API
              │
              └──▶ MCP Server (tools) [direct, not proxied]

The key difference from ShopBot: LLM calls go through the Kurral proxy for automatic observability. Tool execution happens locally via the MCP server, but tool call arguments and results are visible to the proxy as part of the LLM conversation.
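To make this concrete, here is a sketch of an OpenAI-style message history passing through the proxy; the tool name and arguments are made up for illustration, not HelpDesk's actual tools:

```python
import json

# Illustrative message history in the OpenAI chat format. Because these
# messages travel through the proxy on every request, Kurral sees both the
# call the model requested and the result the tool returned.
messages = [
    {"role": "user", "content": "Reset the password for employee 4812"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "reset_password",
                "arguments": json.dumps({"employee_id": 4812}),
            },
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": '{"status": "ok"}'},
]

# Everything the proxy needs is already in the request body.
call = messages[1]["tool_calls"][0]["function"]
print(call["name"], json.loads(call["arguments"]))
```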

What It Does

  • Looks up employee information
  • Resets passwords
  • Creates support tickets
  • Checks system status

Setup

cd examples/helpdesk

# Install dependencies
pip install -r requirements.txt

# Seed the demo database
python seed_database.py

# Set environment variables
export OPENAI_API_KEY=sk-your-key
export KURRAL_API_KEY=kr_live_your-key
export KURRAL_API_URL=https://kurral-api.onrender.com

# Start the MCP server (port 3004)
python helpdesk_server.py

# In another terminal, run the agent (port 3003)
python helpdesk_agent.py

Integration Pattern

HelpDesk demonstrates the proxy-only pattern — the simplest way to add Kurral:

# helpdesk_agent.py
import os

from openai import OpenAI

KURRAL_API_URL = os.environ["KURRAL_API_URL"]
KURRAL_API_KEY = os.environ["KURRAL_API_KEY"]

client = OpenAI(
    base_url=f"{KURRAL_API_URL}/api/proxy/openai/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
    default_headers={
        "X-Kurral-API-Key": KURRAL_API_KEY,
        "x-kurral-agent": "HelpDesk",
    },
)

# Everything else is unchanged — tool calls, streaming, function calling
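Function calling is one such unchanged path. A sketch with a hypothetical tool schema (HelpDesk's real tools live in helpdesk_server.py; this one is made up for illustration):

```python
# Hypothetical tool schema, assumed for illustration. The request payload
# is byte-for-byte identical whether it is sent to api.openai.com or
# through the Kurral proxy.
tools = [{
    "type": "function",
    "function": {
        "name": "reset_password",
        "description": "Reset an employee's password",
        "parameters": {
            "type": "object",
            "properties": {"employee_id": {"type": "integer"}},
            "required": ["employee_id"],
        },
    },
}]

# The same call works against either base URL:
# client.chat.completions.create(model="gpt-4o", messages=[...], tools=tools)
print(tools[0]["function"]["name"])  # → reset_password
```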

What Gets Captured Automatically

With the proxy pattern, Kurral captures all of the following without any SDK code:

  • Every LLM call (model, tokens, cost, latency)
  • Tool call arguments and results (part of the LLM message history)
  • Request and response content (based on retention setting)
  • Session grouping (via x-kurral-session-id)
  • Agent attribution (via x-kurral-agent)
  • Security scanning of tool interactions

Tool execution happens locally (agent → MCP server directly), but the proxy sees what the LLM asked each tool to do and what result came back — because tool calling flows through the LLM API. For discrete tool event timing and replay, add the SDK.
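Session grouping and agent attribution come from the request headers listed above. A sketch of a helper that builds them — the header names are from these docs, but the helper itself is an illustration, not part of any Kurral SDK:

```python
import os
import uuid

def kurral_headers(agent, session_id=""):
    """Build the per-request headers that let Kurral attribute and group
    traces. Falls back to a fresh UUID when no session id is supplied."""
    return {
        "X-Kurral-API-Key": os.environ.get("KURRAL_API_KEY", ""),
        "x-kurral-agent": agent,
        "x-kurral-session-id": session_id or str(uuid.uuid4()),
    }

headers = kurral_headers("HelpDesk", session_id="ticket-4812")
print(headers["x-kurral-session-id"])  # → ticket-4812
```

Pass these as `default_headers` when constructing the client, or per request via the OpenAI SDK's `extra_headers` argument if each conversation should get its own session id.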


Choosing an Integration Pattern

| Pattern | LLM Observability | Tool Visibility | Security Scanning | Replay | Setup Effort |
| --- | --- | --- | --- | --- | --- |
| Proxy only (HelpDesk) | Full | Args + results (from LLM conversation) | Yes | No | Minimal — change base URL |
| SDK tracing (ShopBot) | Full | Discrete events with timing | Yes | Yes | Add decorators |
| Proxy + SDK | Full | Full (conversation + discrete events) | Yes | Yes | Both |
| MCP Proxy | Via LLM proxy | Protocol-level with timing | Yes | Yes | Run MCP proxy server |

Start with the proxy — it gives you observability, tool interaction visibility, and security scanning. Add the SDK when you need tool execution timing, prompt template capture, or replay.