# Server Templates
Server templates provide pre-built handlers that simplify creating backend API routes for your AI chat application. Instead of manually implementing streaming, tool handling, and context processing, you can use these templates to get started quickly.
## Available Templates
| Template | Purpose | Import |
|---|---|---|
| `createAIChatHandler` | Main chat API route | `ai-chat-bootstrap/server` |
| `createCompressionHandler` | Summarise transcripts and return compression artifacts | `ai-chat-bootstrap/server` |
| `createSuggestionsHandler` | Generate contextual suggestions | `ai-chat-bootstrap/server` |
| `createThreadTitleHandler` | Auto-generate thread titles | `ai-chat-bootstrap/server` |
| `createMcpToolsHandler` | MCP server tool integration | `ai-chat-bootstrap/server` |
## Quick Start

### Basic Chat Handler
```ts
// app/api/chat/route.ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { openai } from "@ai-sdk/openai";

const handler = createAIChatHandler({
  model: openai("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

### With Azure OpenAI
```ts
// app/api/chat/route.ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { createAzure } from "@ai-sdk/azure";

const azure = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME!,
  apiKey: process.env.AZURE_API_KEY!,
});

const handler = createAIChatHandler({
  model: azure("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

## What Server Templates Handle
Server templates automatically handle:
### Request Processing

- System Prompt Enrichment: Prefer `enrichedSystemPrompt` over raw `systemPrompt`
- Tool Integration: Merge frontend tools and MCP server tools
- Context Processing: Include context items from `useAIContext`
- Focus Items: Include focused items from `useAIFocus`
### Response Streaming
- Real-time Streaming: Stream responses back to the frontend
- Tool Call Handling: Execute tools and stream results
- Error Handling: Graceful error responses
- Format Conversion: Convert between AI SDK and UI formats
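Under the hood, streaming comes down to returning a web `Response` whose body is a `ReadableStream`. A minimal sketch of that mechanism — illustrative only, not the template's actual implementation, which also handles tool calls and format conversion:

```ts
// Illustrative sketch: stream text chunks back as a web Response.
// The real handler streams model output incrementally; here the chunks
// are a plain array so the mechanics are visible.
function streamText(chunks: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      // Enqueue each chunk as it becomes available, then close the stream.
      for (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```

The frontend consumes this incrementally with a stream reader, which is what gives the chat UI its token-by-token rendering.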
### Advanced Features
- Model Selection: Support dynamic model switching
- Chain of Thought: Handle reasoning blocks
- File Attachments: Process file uploads
- Thread Management: Handle thread persistence
- Compression Summaries: `createCompressionHandler` generates artifacts, snapshot metadata, and usage stats for long transcripts
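The compression pipeline itself is library-internal, but the core decision — compress once a transcript exceeds a token budget — can be sketched as follows. The 4-characters-per-token heuristic and the default budget are illustrative assumptions, not `createCompressionHandler`'s actual logic:

```ts
// Illustrative sketch: decide when a transcript warrants compression.
// The heuristic and threshold are assumptions, not library internals.
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
}

// Rough token estimate: ~4 characters per token for English text.
function estimateTokens(messages: Message[]): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}

function shouldCompress(messages: Message[], budget = 4000): boolean {
  return estimateTokens(messages) > budget;
}
```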
## System Prompt Enrichment
All templates automatically use the `enrichedSystemPrompt` from the frontend, which includes:
```text
You are an AI assistant running in an enhanced environment with the following capabilities:

[Tools section - if tools are available]
Available tools:
- tool_name: Tool description

[Context section - if context items exist]
Current context:
- Description: Context data

[Focus section - if items are focused]
User is currently focusing on:
- Item: Focus data

[Original system prompt - if provided]
---
Your original system prompt here
```

This structure ensures the AI has full awareness of available capabilities without you needing to manually construct prompts.
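If you ever need to reproduce this structure yourself — for testing a handler, say — the assembly can be sketched like this. This is a hypothetical helper that mirrors the layout above, not the library's actual enrichment code:

```ts
// Hypothetical sketch of assembling an enriched system prompt in the
// structure shown above; not the library's actual implementation.
interface EnrichmentInput {
  tools?: { name: string; description: string }[];
  context?: { description: string; data: string }[];
  systemPrompt?: string;
}

function buildEnrichedPrompt(input: EnrichmentInput): string {
  const parts = [
    "You are an AI assistant running in an enhanced environment with the following capabilities:",
  ];
  // Each section is included only when its data exists, matching the
  // conditional sections in the template above.
  if (input.tools?.length) {
    parts.push(
      "Available tools:\n" +
        input.tools.map((t) => `- ${t.name}: ${t.description}`).join("\n")
    );
  }
  if (input.context?.length) {
    parts.push(
      "Current context:\n" +
        input.context.map((c) => `- ${c.description}: ${c.data}`).join("\n")
    );
  }
  if (input.systemPrompt) {
    parts.push(`---\n${input.systemPrompt}`);
  }
  return parts.join("\n\n");
}
```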
## Custom Configuration
You can customize any template:
```ts
const handler = createAIChatHandler({
  model: openai("gpt-4"),
  streamOptions: {
    temperature: 0.7,
    maxTokens: 1000,
  },
  // Custom system prompt processing
  onSystemPrompt: (enrichedPrompt, originalPrompt, context) => {
    return `${enrichedPrompt}\n\nAdditional instructions: Be concise.`;
  },
  // Custom tool processing
  onToolCall: (toolCall, context) => {
    console.log(`Tool called: ${toolCall.name}`);
    return toolCall;
  },
  // Custom error handling
  onError: (error, context) => {
    console.error("Chat error:", error);
    return new Response("Chat service temporarily unavailable", { status: 503 });
  },
});
```

## Environment Variables
Common environment variables used by templates:
```bash
# OpenAI
OPENAI_API_KEY=your_openai_key

# Azure OpenAI
AZURE_RESOURCE_NAME=your_azure_resource
AZURE_API_KEY=your_azure_key
AZURE_API_VERSION=2024-02-15-preview
AZURE_DEPLOYMENT_ID=your_deployment_id

# MCP Servers (if using)
MCP_SERVER_URLS=http://localhost:3001,http://localhost:3002
```

## Error Handling
Templates include built-in error handling:
```ts
// Automatic error responses for:
// - Invalid requests (400)
// - Authentication failures (401)
// - Rate limits (429)
// - Model errors (500)
// - Network timeouts (503)
```

## See Also
- `createAIChatHandler` - Main chat handler
- Basic Chat Guide - Frontend implementation
- `useAIChat` Hook - Frontend hook reference