# useAIChat
> ⚠️ **Internal API:** `useAIChat` is not exported from the public `ai-chat-bootstrap` entry point. It powers `ChatContainer` and `ChatPopout` internally and may change without notice. Reference this hook only if you are building a custom container and are comfortable mirroring upstream updates.

The hook orchestrates transport calls, context/focus/tool hydration, compression, branching, and thread persistence before passing everything to the Vercel AI SDK `useChat` helper.
## When to reach for it
- You are authoring a bespoke chat surface that cannot reuse `ChatContainer`.
- You need direct access to the underlying stores (`context`, `focus`, `tools`, `compression`) in a headless environment.
- You are contributing to the library and need to understand how the internal state machine works.
## Signature
```ts
function useAIChat(options?: UseAIChatOptions): UseAIChatReturn;
```

`UseAIChatOptions` matches the props accepted by `ChatContainer` (`transport`, `messages`, `threads`, `features`, `mcp`, `models`, `compression`, `suggestions`).
## System prompt enrichment
Every send builds an `enrichedSystemPrompt` that:
- describes the surrounding runtime (tools, context, focus),
- includes sections only when data is present,
- appends your `messages.systemPrompt` at the end.
Override it per-call via `chat.sendMessage({ body: { enrichedSystemPrompt: "..." } })` if you need full control.
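If you take over the prompt entirely, you can mirror the same composition rules yourself. The sketch below is illustrative only: the section headings and helper name are not part of the library, and only the documented ordering (runtime sections first, your `messages.systemPrompt` last, empty sections omitted) is taken from the behaviour described above.

```typescript
// Illustrative helper, not a library API. Mirrors the documented rules:
// runtime sections appear only when data is present, and the configured
// systemPrompt is always appended last.
function buildEnrichedSystemPrompt(opts: {
  toolSummary?: string;
  contextSummary?: string;
  systemPrompt?: string;
}): string {
  const sections: string[] = [];
  if (opts.toolSummary) sections.push(`## Tools\n${opts.toolSummary}`);
  if (opts.contextSummary) sections.push(`## Context\n${opts.contextSummary}`);
  if (opts.systemPrompt) sections.push(opts.systemPrompt);
  return sections.join("\n\n");
}

// Then pass the result through the per-call override:
// chat.sendMessage({ body: { enrichedSystemPrompt: buildEnrichedSystemPrompt({ ... }) } });
```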
## Key options
- `transport.api` (default `/api/chat`) – endpoint used by the client transport.
- `transport.prepareSendMessagesRequest` – optional callback to adjust the request before it is sent; prefer annotating `metadata` instead of mutating `messages`.
- `messages.systemPrompt` / `messages.initial` – seed instructions and historical messages.
- `threads.*` – enable scoped thread persistence, auto-titling, and warn-on-miss behaviour.
- `features.chainOfThought` / `features.branching` – opt into reasoning blocks or experimental branching helpers.
- `mcp.*` – surface Model Context Protocol integrations.
- `models.available` / `models.initial` – populate the model selector.
- `compression` – enable automatic compression, pinned messages, and artifact review.
- `suggestions` – configure auto-fetching follow-up suggestions (prompt, count, API endpoint).
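As an illustration of the "annotate `metadata` instead of mutating `messages`" advice, a `prepareSendMessagesRequest` callback might look like the sketch below. The request shape used here is an assumption for demonstration; consult the library's types for the real signature.

```typescript
// Sketch of a prepareSendMessagesRequest-style callback. The OutgoingRequest
// shape is assumed for this example; the real type comes from the library.
interface OutgoingRequest {
  body?: Record<string, unknown>;
}

function prepareSendMessagesRequest(request: OutgoingRequest): OutgoingRequest {
  // Leave messages untouched; attach extra information under metadata instead.
  return {
    ...request,
    body: {
      ...request.body,
      metadata: {
        ...(request.body?.metadata as Record<string, unknown> | undefined),
        clientVersion: "1.0.0", // hypothetical annotation
      },
    },
  };
}
```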
## Return highlights
`useAIChat` returns all helpers from `useChat` plus:
- `input`, `setInput` – managed draft state decoupled from streaming.
- `sendMessageWithContext` – wraps `append` while merging context, focus items, and tool metadata.
- `sendAICommandMessage` – trigger an AI command that targets a specific frontend tool.
- `retryLastMessage`, `clearError`, `isLoading` – convenience status helpers.
- Stores exposed as plain data: `context`, `focusItems`, `tools`, `availableTools`.
- Model helpers: `models`, `model`, `setModel`.
- Feature state: `threadId`, `scopeKey`, `chainOfThoughtEnabled`, `mcpEnabled`.
- Compression controller: `compression` (usage, pinned messages, artifacts, `runCompression`).
- Branching controller: `branching` (enabled flag + `selectBranch`).
- Suggestions controller: `suggestions` (enabled flag, `items`, `handleSuggestionClick`, `fetchSuggestions`, `clearSuggestions`).
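To make the suggestions controller concrete, here is a hedged sketch of driving it from plain code. Only the member names (`enabled`, `items`, `fetchSuggestions`, `handleSuggestionClick`, `clearSuggestions`) come from the list above; the `Suggestion` shape and the controller interface are assumptions for this example.

```typescript
// Hypothetical shapes; the library's real types may differ.
type Suggestion = { id: string; label: string };

interface SuggestionsController {
  enabled: boolean;
  items: Suggestion[];
  fetchSuggestions: () => Promise<void> | void;
  handleSuggestionClick: (s: Suggestion) => void;
  clearSuggestions: () => void;
}

// Fetch follow-up suggestions if needed and apply the first one.
async function applyFirstSuggestion(
  suggestions: SuggestionsController
): Promise<boolean> {
  if (!suggestions.enabled) return false;
  if (suggestions.items.length === 0) await suggestions.fetchSuggestions();
  const [first] = suggestions.items;
  if (!first) return false;
  suggestions.handleSuggestionClick(first); // presumably sends it as the next message
  suggestions.clearSuggestions();
  return true;
}
```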
## Minimal usage example
```tsx
// useAIChat is internal, so there is no public import path; mirror the
// upstream source if you vendor this pattern.
function HeadlessChat() {
  const chat = useAIChat({
    transport: { api: "/api/chat" },
    messages: { systemPrompt: "You are a helpful AI assistant." },
    models: {
      available: [
        { id: "gpt-4o-mini", label: "GPT-4o mini" },
        { id: "gpt-4", label: "GPT-4" },
      ],
      initial: "gpt-4o-mini",
    },
  });

  return (
    <form
      onSubmit={(event) => {
        event.preventDefault();
        if (!chat.input.trim()) return;
        chat.sendMessageWithContext(chat.input);
        chat.setInput("");
      }}
    >
      <ul>
        {chat.messages.map((message) => (
          <li key={message.id}>
            <strong>{message.role}:</strong>{" "}
            {message.parts
              ?.map((part) =>
                part.type === "text" ? part.text : "[non-text part]"
              )
              .join("")}
          </li>
        ))}
      </ul>
      <textarea
        value={chat.input}
        onChange={(event) => chat.setInput(event.target.value)}
        placeholder="Ask me anything..."
      />
      <footer>
        <span>
          Active model: {chat.model ?? chat.models[0]?.id ?? "default"}
        </span>
        <button type="submit" disabled={chat.isLoading}>
          Send
        </button>
      </footer>
    </form>
  );
}
```

Prefer `<ChatContainer />` unless you truly need a custom shell.