
Basic Chat

This example shows the minimal setup needed for an AI chat interface. The live demo on this page uses mocked responses, but the code examples below show real implementation patterns.


Frontend Implementation

The frontend uses the useAIChat hook to handle all message state, streaming, and API communication:

import React from "react";
import { ChatContainer, useAIChat } from "ai-chat-bootstrap";

export function BasicChat() {
  const chat = useAIChat({
    api: "/api/chat",
    systemPrompt: "You are a helpful AI assistant.",
  });

  return (
    <div className="h-[420px] w-full">
      <ChatContainer
        chat={chat}
        header={{ title: "AI Assistant", subtitle: "Connected to AI" }}
        ui={{ placeholder: "Ask me anything..." }}
      />
    </div>
  );
}
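
To use the component, render it from a page. A minimal sketch, assuming BasicChat is saved at components/basic-chat.tsx (the file locations here are illustrative, not part of the library):

// app/page.tsx — illustrative location for a page that renders the chat.
import { BasicChat } from "@/components/basic-chat";

export default function Page() {
  return (
    <main className="mx-auto max-w-2xl p-6">
      <BasicChat />
    </main>
  );
}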

Backend API Route

Create an API route at app/api/chat/route.ts (Next.js App Router):

import { createAzure } from "@ai-sdk/azure";
import { convertToModelMessages, streamText } from "ai";

const azure = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME!,
  apiKey: process.env.AZURE_API_KEY!,
  apiVersion: process.env.AZURE_API_VERSION ?? "preview",
});

const model = azure(process.env.AZURE_DEPLOYMENT_ID!);

export async function POST(req: Request) {
  const { messages, enrichedSystemPrompt } = await req.json();

  const result = await streamText({
    model,
    messages: [
      // enrichedSystemPrompt is built on the client by useAIChat (see note below).
      { role: "system", content: enrichedSystemPrompt },
      ...convertToModelMessages(messages),
    ],
  });

  // Stream the response back in the UI message format expected by the hook.
  return result.toUIMessageStreamResponse();
}

Note: The useAIChat hook automatically sends an enrichedSystemPrompt containing a standardized preamble plus conditional sections (Tools / Context / Focus), followed by your systemPrompt (if provided). Always prefer enrichedSystemPrompt when present, and do not rebuild those sections on the server.
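
If the same route can also be called by clients that do not send enrichedSystemPrompt, a minimal sketch of that preference is to fall back to a plain prompt only when the field is missing. This snippet belongs inside the POST handler above; the fallback string is just an example:

const { messages, enrichedSystemPrompt } = await req.json();

// Prefer the prompt built by useAIChat; fall back only if it was not sent.
// (The fallback string is an example, not something the library defines.)
const system = enrichedSystemPrompt ?? "You are a helpful AI assistant.";

const result = await streamText({
  model,
  messages: [
    { role: "system", content: system },
    ...convertToModelMessages(messages),
  ],
});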

How it works

  1. Frontend: The useAIChat hook manages message state and automatically posts to /api/chat (the request payload is sketched after this list)
  2. Backend: The API route receives messages and streams responses using the Vercel AI SDK
  3. Streaming: Responses are streamed back to the frontend and rendered in real-time
  4. State Management: All message history and loading states are handled automatically
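
For reference, the request body that the hook posts and the route destructures can be sketched roughly as follows. This is a simplified assumption, using the UIMessage type from the ai package; the hook may include additional fields:

import type { UIMessage } from "ai";

// Rough sketch of the JSON body useAIChat sends to /api/chat.
interface ChatRequestBody {
  // Conversation history managed by the hook on the client.
  messages: UIMessage[];
  // Standardized preamble + Tools/Context/Focus sections + your systemPrompt.
  enrichedSystemPrompt: string;
}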

Next steps

  • Add tools: Enable function calling by passing tools to both useAIChat and streamText (see the backend sketch after this list)
  • Add context: Use useAIContext to share app state with the AI
  • Add suggestions: Enable contextual suggestions with enableSuggestions={true}
  • Style: Customize appearance via Tailwind classes or component props
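
As a rough sketch of the backend half of the tools step, the route above can pass a tools map to streamText. The getWeather tool, its schema, and its return value are illustrative only; inputSchema follows recent AI SDK versions (older versions use parameters), and registering the matching tools with useAIChat on the frontend is specific to ai-chat-bootstrap and not shown here:

import { createAzure } from "@ai-sdk/azure";
import { convertToModelMessages, streamText, tool } from "ai";
import { z } from "zod";

const azure = createAzure({
  resourceName: process.env.AZURE_RESOURCE_NAME!,
  apiKey: process.env.AZURE_API_KEY!,
});
const model = azure(process.env.AZURE_DEPLOYMENT_ID!);

export async function POST(req: Request) {
  const { messages, enrichedSystemPrompt } = await req.json();

  const result = await streamText({
    model,
    messages: [
      { role: "system", content: enrichedSystemPrompt },
      ...convertToModelMessages(messages),
    ],
    tools: {
      // Hypothetical tool: name, schema, and return value are examples only.
      getWeather: tool({
        description: "Get the current weather for a city",
        inputSchema: z.object({ city: z.string() }),
        execute: async ({ city }) => ({ city, temperatureC: 21 }),
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}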

