Basic Chat
This example shows the minimal setup needed for an AI chat interface. The demo uses mocked responses, but the code examples show real implementation patterns.
Frontend Implementation
The frontend uses the ChatContainer component which handles all message state, streaming, and API communication internally:
```tsx
import React from "react";
import { ChatContainer } from "ai-chat-bootstrap";

export function BasicChat() {
  return (
    <div className="h-[600px] w-full">
      <ChatContainer
        transport={{ api: "/api/chat" }}
        messages={{ systemPrompt: "You are a helpful AI assistant." }}
        header={{ title: "AI Assistant", subtitle: "Connected to AI" }}
        ui={{ placeholder: "Ask me anything..." }}
      />
    </div>
  );
}
```

Backend API Route
Create an API route at `app/api/chat/route.ts` using the new server template:
```ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { openai } from "@ai-sdk/openai";

const handler = createAIChatHandler({
  model: openai("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

Or with Azure OpenAI:
```ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { createAzure } from "@ai-sdk/azure";

const azure = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME!,
  apiKey: process.env.AZURE_API_KEY!,
});

const handler = createAIChatHandler({
  model: azure("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

Note: The `useAIChat` hook automatically sends an `enrichedSystemPrompt` containing a standardized preamble plus conditional sections (Tools / Context / Focus), then appends your `systemPrompt` (if provided). Always prefer `enrichedSystemPrompt` when present, and do not rebuild those sections again on the server.
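If you write a custom handler instead of using the server template, that preference can be expressed as a small helper. This is a hypothetical sketch: only the field names `enrichedSystemPrompt` and `systemPrompt` come from the note above; the rest of the request shape is an assumption.

```ts
// Hypothetical request body shape; only the two prompt fields are documented.
interface ChatRequestBody {
  messages: unknown[];
  systemPrompt?: string;
  enrichedSystemPrompt?: string;
}

// Prefer the client-built enrichedSystemPrompt; fall back to the raw systemPrompt.
// Do NOT rebuild the Tools / Context / Focus sections on the server.
export function pickSystemPrompt(body: ChatRequestBody): string | undefined {
  return body.enrichedSystemPrompt ?? body.systemPrompt;
}
```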
Demo reference
- Next.js route that implements this exact setup: packages/ai-chat-bootstrap-demo/src/app/basic/page.tsx
- Streaming chat handler used by the demo routes: packages/ai-chat-bootstrap-demo/src/app/api/chat/route.ts
How it works
- Frontend: The `ChatContainer` component manages message state and automatically posts to `/api/chat`
- Backend: The API route receives messages and streams responses using the Vercel AI SDK
- Streaming: Responses are streamed back to the frontend and rendered in real-time
- State Management: All message history and loading states are handled automatically
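Mechanically, the streaming step amounts to reading the response body chunk-by-chunk and appending decoded text to the visible message. The library does this internally; the following is only an illustrative sketch of the idea, not the component's actual code:

```ts
// Read a streamed HTTP body incrementally, invoking onDelta for each decoded piece.
// Returns the fully accumulated text once the stream ends.
export async function readTextStream(
  body: ReadableStream<Uint8Array>,
  onDelta: (text: string) => void
): Promise<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters split across chunks intact
    const delta = decoder.decode(value, { stream: true });
    full += delta;
    onDelta(delta);
  }
  return full;
}
```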
Next steps
- Add tools: Register frontend tools (or MCP servers) and they are automatically merged into the handler request
- Compress history: Provide a compression endpoint and enable `compression.enabled` to keep threads under budget
- Add context: Use `useAIContext` to share app state with the AI
- Add suggestions: Enable contextual suggestions with `enableSuggestions={true}`; the actions toolbar will surface them automatically
- Style & toolbar: Customize the prompt toolbar (floating, inline, hidden) or restyle the chat via Tailwind classes or component props
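The library's compression endpoint performs real summarization; to make the "keep threads under budget" idea concrete, here is a standalone sketch of budget-based trimming, with an assumed 4-characters-per-token heuristic and a hypothetical message shape:

```ts
// Illustrative only: keep the newest messages that fit within an approximate token budget.
interface Msg {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough heuristic: ~4 characters per token (an assumption, not a real tokenizer).
const approxTokens = (text: string) => Math.ceil(text.length / 4);

export function trimToBudget(messages: Msg[], budget: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = approxTokens(messages[i].content);
    if (used + cost > budget) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

Real compression would summarize the dropped prefix into a synthetic message rather than discard it outright.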
API Reference
- Hook: `useAIChat`