# createMcpToolsHandler
createMcpToolsHandler ships a ready-made POST handler that proxies a single Model Context Protocol (MCP) server descriptor from the frontend to the AI SDK MCP client. It returns only the tool summaries that the UI needs to display available MCP capabilities and surfaces any discovery errors back to the caller.
This handler is typically paired with the `useMCPServer` hook on the client and `createAIChatHandler` on the backend. The client controls which MCP servers are queried; the backend only needs an endpoint that accepts those descriptors.
## Why a bridge endpoint?
MCP servers run wherever your data or tools live (for example, on an internal host or a SaaS vendor). Browsers cannot speak the MCP transports directly, so the UI calls `/api/mcp` on your backend instead. The bridge:
- Receives the serialized MCP descriptor from the browser.
- Optionally injects request headers (for auth or tenancy) that only the server should know.
- Performs the MCP handshake inside Node.js to validate that the server is reachable and to list its tools.
- Returns only the lightweight tool summaries that the UI needs until the next chat request.
At runtime the chat handler will reconnect to the original MCP server URL when messages are streamed; `/api/mcp` is purely for discovery/validation.
## Import

```ts
import { createMcpToolsHandler } from "ai-chat-bootstrap/server";
```

## API Reference
```ts
interface CreateMcpToolsHandlerOptions {
  /**
   * Optional callback fired when the request body fails to parse
   * or the MCP client throws during tool discovery.
   */
  onError?: (error: unknown, ctx: { req: Request }) => void;
  /**
   * Optionally copy selected HTTP headers from the incoming request onto the
   * MCP transport config. Useful for forwarding bearer tokens or tenancy IDs
   * supplied by the browser.
   */
  forwardHeaders?: string[];
}

declare function createMcpToolsHandler(
  options?: CreateMcpToolsHandlerOptions
): (req: Request) => Promise<Response>;
```

## Request contract
The handler expects a JSON body shaped like:
```ts
interface MCPServerToolsRequest {
  server: SerializedMCPServer;
}

interface SerializedMCPServer {
  id: string;
  name?: string;
  transport: {
    type: "sse" | "streamable-http";
    url: string;
    headers?: Record<string, string>;
  };
}
```

This descriptor is generated on the client (`useMCPServer` or the MCP servers dialog) and includes everything the backend needs to open an MCP client connection.
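For example, a request body matching this contract could be assembled like so (a sketch; `buildMcpToolsRequest` is a hypothetical helper, not part of the library):

```typescript
// Hypothetical helper: serialize a server descriptor into the bridge request body.
interface SerializedMCPServer {
  id: string;
  name?: string;
  transport: {
    type: "sse" | "streamable-http";
    url: string;
    headers?: Record<string, string>;
  };
}

function buildMcpToolsRequest(server: SerializedMCPServer): string {
  // The handler expects { server } as the top-level JSON shape.
  return JSON.stringify({ server });
}

const body = buildMcpToolsRequest({
  id: "demo-mcp-server",
  name: "Demo MCP Toolkit",
  transport: { type: "streamable-http", url: "http://127.0.0.1:3030/mcp" },
});
console.log(body);
```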
## Response contract
On success the handler returns:
```ts
interface MCPServerToolsResponse {
  tools: Array<{ name: string; description?: string }>;
  errors?: Array<{
    serverId: string;
    serverName?: string;
    url: string;
    message: string;
  }>;
}
```

If every MCP query succeeds you receive a 200 response with populated `tools`. If one or more servers fail, the handler still returns the tools that succeeded but includes the failure details in `errors` and responds with status 207 so the client can show a warning. Completely invalid requests (bad JSON, missing `server`) return 400; unexpected crashes respond with 500 and `{ error: string }`.
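These status-code semantics can be sketched as a small client-side classifier (`classifyBridgeResult` is a hypothetical helper for illustration, not part of the library):

```typescript
interface MCPServerToolsResponse {
  tools: Array<{ name: string; description?: string }>;
  errors?: Array<{
    serverId: string;
    serverName?: string;
    url: string;
    message: string;
  }>;
}

// 200 => every server answered; 207 => partial success with failure details;
// anything else (400/500) => the request itself failed.
function classifyBridgeResult(
  status: number,
  body: MCPServerToolsResponse
): "ok" | "partial" | "failed" {
  if (status === 200) return "ok";
  if (status === 207 && (body.errors?.length ?? 0) > 0) return "partial";
  return "failed";
}
```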
## Example: Next.js App Router
```ts
// app/api/mcp/route.ts
import { createMcpToolsHandler } from "ai-chat-bootstrap/server";

const handler = createMcpToolsHandler({
  onError(error) {
    console.error("[mcp] failed to load tools", error);
  },
  forwardHeaders: ["Authorization"],
});

export { handler as POST };
```

The frontend should point `mcp.api` (or the `useMCPServer` hook) to `/api/mcp`.
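The `forwardHeaders` option copies the named headers from the incoming request onto the MCP transport before the handshake. A minimal sketch of that copying logic (an assumption about the behavior, not the library's actual internals):

```typescript
// Sketch: pick the listed headers off the incoming Request, skipping any that
// are absent. Header lookup via Headers.get() is case-insensitive.
function pickForwardedHeaders(
  req: Request,
  names: string[]
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of names) {
    const value = req.headers.get(name);
    if (value !== null) out[name] = value;
  }
  return out;
}
```

Because the copied values originate in the browser, forward only headers you would accept from an untrusted client; secrets the browser should never see belong in the handler's own configuration instead.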
## Example: Express
```ts
import express from "express";
import { createMcpToolsHandler } from "ai-chat-bootstrap/server";

const app = express();
const handleMcpTools = createMcpToolsHandler();

app.post("/api/mcp", express.json(), (req, res) => {
  handleMcpTools(
    new Request("http://localhost/api/mcp", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(req.body),
    })
  )
    .then(async (response) => {
      res.status(response.status);
      response.headers.forEach((value, key) => res.setHeader(key, value));
      res.send(await response.text());
    })
    .catch((error) => {
      console.error(error);
      res.status(500).json({ error: "Internal MCP bridge error" });
    });
});
```

## Frontend wiring
```tsx
import { ChatContainer, useMCPServer } from "ai-chat-bootstrap";

function DemoChat() {
  useMCPServer({
    id: "demo-mcp-server",
    name: "Demo MCP Toolkit",
    url: process.env.NEXT_PUBLIC_MCP_SERVER_URL ?? "http://127.0.0.1:3030/mcp",
    transportType: "streamable-http",
  });

  return (
    <ChatContainer
      transport={{ api: "/api/chat" }}
      mcp={{ enabled: true, api: "/api/mcp" }}
    />
  );
}
```

When a server is registered, the hook automatically POSTs the descriptor shown above to `/api/mcp`, stores the returned tool summaries, and exposes them to the chat UI. Those same summaries are merged into the enriched system prompt and forwarded to `createAIChatHandler`, which reconnects to the remote MCP servers during each chat request.
If the bridge returns an `errors` array, `useMCPServer` records the message so the UI can surface which MCP servers failed discovery while still rendering any tools that loaded successfully.
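That merge behavior can be sketched as a small reducer (hypothetical names and state shape; the hook's real internals may differ):

```typescript
interface DiscoveryState {
  tools: Array<{ name: string; description?: string }>;
  failures: Record<string, string>; // serverId -> error message
}

// Keep whatever tools loaded; record one failure message per failing server.
function applyBridgeResponse(
  state: DiscoveryState,
  response: {
    tools: Array<{ name: string; description?: string }>;
    errors?: Array<{ serverId: string; message: string }>;
  }
): DiscoveryState {
  const failures = { ...state.failures };
  for (const err of response.errors ?? []) {
    failures[err.serverId] = err.message;
  }
  return { tools: [...state.tools, ...response.tools], failures };
}
```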