
Basic Chat

This example shows the minimal setup needed for an AI chat interface. The demo uses mocked responses, but the code examples show real implementation patterns.

Frontend Implementation

The frontend uses the `ChatContainer` component, which handles all message state, streaming, and API communication internally:

```tsx
import React from "react";
import { ChatContainer } from "ai-chat-bootstrap";

export function BasicChat() {
  return (
    <div className="h-[600px] w-full">
      <ChatContainer
        transport={{ api: "/api/chat" }}
        messages={{ systemPrompt: "You are a helpful AI assistant." }}
        header={{ title: "AI Assistant", subtitle: "Connected to AI" }}
        ui={{ placeholder: "Ask me anything..." }}
      />
    </div>
  );
}
```

Backend API Route

Create an API route at `app/api/chat/route.ts` using the server template:

```ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { openai } from "@ai-sdk/openai";

const handler = createAIChatHandler({
  model: openai("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

Or with Azure OpenAI:

```ts
import { createAIChatHandler } from "ai-chat-bootstrap/server";
import { createAzure } from "@ai-sdk/azure";

const azure = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME!,
  apiKey: process.env.AZURE_API_KEY!,
});

const handler = createAIChatHandler({
  model: azure("gpt-4"),
  streamOptions: { temperature: 0.7 },
});

export { handler as POST };
```

Note: The `useAIChat` hook automatically sends an `enrichedSystemPrompt` containing a standardized preamble plus conditional sections (Tools / Context / Focus), and then appends your `systemPrompt` (if provided). Always prefer `enrichedSystemPrompt` when present, and do not rebuild those sections on the server.
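On the server, honoring that rule can be as simple as preferring the enriched prompt whenever it is present. A minimal sketch, assuming the request body carries both fields under the names used in the note (the real request shape may differ, so verify against the library's server types):

```typescript
// Sketch only: field names mirror the note above; verify the actual
// request shape against the library's server types before relying on it.
interface ChatRequestBody {
  messages: unknown[];
  systemPrompt?: string;
  enrichedSystemPrompt?: string;
}

// Prefer enrichedSystemPrompt: it already contains the preamble, the
// Tools / Context / Focus sections, and your systemPrompt appended.
function pickSystemPrompt(body: ChatRequestBody): string | undefined {
  return body.enrichedSystemPrompt ?? body.systemPrompt;
}
```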

How it works

  1. Frontend: The `ChatContainer` component manages message state and automatically posts to `/api/chat`
  2. Backend: The API route receives messages and streams responses using the Vercel AI SDK
  3. Streaming: Responses are streamed back to the frontend and rendered in real time
  4. State Management: All message history and loading states are handled automatically
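Step 3 is worth unpacking: the response body arrives as a stream of byte chunks that are decoded and re-rendered as they come in. `ChatContainer` does this for you; a hand-rolled sketch of the same idea (illustrative only, not the library's actual code) looks like:

```typescript
// Illustrative only: ChatContainer handles this internally. Decode a
// streamed response body chunk by chunk, yielding the accumulated text
// so a UI could re-render after every chunk.
async function* readTextStream(
  body: ReadableStream<Uint8Array>
): AsyncGenerator<string> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
    yield text;
  }
}
```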

Next steps

  • Add tools: Register frontend tools (or MCP servers) and they are automatically merged into the handler request
  • Compress history: Provide a compression endpoint and enable `compression.enabled` to keep threads under budget
  • Add context: Use `useAIContext` to share app state with the AI
  • Add suggestions: Enable contextual suggestions with `enableSuggestions={true}`; the actions toolbar will surface them automatically
  • Style & toolbar: Customize the prompt toolbar (floating, inline, or hidden) or restyle the chat via Tailwind classes or component props
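Several of these are plain props on `ChatContainer`. As a hypothetical sketch using the names mentioned above (only `transport` appears verbatim in this page's examples; the other prop shapes are assumptions, so check the API reference before relying on them):

```typescript
// Hypothetical config sketch: only `transport` is shown verbatim earlier
// on this page; the other prop shapes are assumptions based on the names
// in the list above.
const chatProps = {
  transport: { api: "/api/chat" },
  enableSuggestions: true, // surface contextual suggestions in the toolbar
  compression: { enabled: true }, // assumes a compression endpoint exists
};
```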
