Dev
Local development server for @mctx-ai/app with hot reload
mctx: The Best Way to Build an MCP Server
@mctx-ai/mcp is the best way to build an MCP server. Register tools, resources, and prompts; the framework handles protocol negotiation, input validation, error sanitization, and CORS. You write the business logic.
MCP (Model Context Protocol) is the open standard AI assistants use to call external tools. Claude, ChatGPT, Cursor, and other clients speak it. Build an MCP server once, and it's accessible from every MCP-compatible AI.
Quick Start
import { createServer, T } from "@mctx-ai/mcp";
const server = createServer({
instructions: "A greeting server. Use the greet tool to say hello.",
});
function greet(mctx, req, res) {
res.send(`Hello, ${req.name}! (user: ${mctx.userId || "anonymous"})`);
}
greet.description = "Greet someone by name";
greet.input = {
name: T.string({ required: true, description: "Name to greet" }),
};
server.tool("greet", greet);
export default { fetch: server.fetch };
That's a working MCP server. Run it locally with npx mctx-dev index.js.
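Under the hood, MCP clients call tools with JSON-RPC 2.0 messages; the framework parses and routes them for you. As a sketch, this is roughly the `tools/call` request a client would send to invoke the greet tool above (the message shape follows the MCP specification; the exact envelope is handled by the framework, so you never construct it yourself):

```javascript
// A JSON-RPC 2.0 "tools/call" request, as an MCP client would send it.
// The framework receives this, validates `arguments` against greet.input,
// and invokes the greet handler.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "greet",
    arguments: { name: "Ada" },
  },
};

console.log(JSON.stringify(request));
```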
Installation
Scaffold a new project (recommended):
npx create-mctx-server my-app
cd my-app
npm install
npx mctx-dev index.js
Use the template repo:
github.com/mctx-ai/example-app, then click "Use this template" on GitHub.
Add to an existing project:
npm install @mctx-ai/mcp
Run the dev server:
npx mctx-dev index.js
Hot reload is included. Changes to index.js restart the server automatically.
Features
- Zero runtime dependencies: ships nothing you don't need
- TypeScript-ready: full .d.ts type definitions included
- Hot reload dev server: mctx-dev watches your files and restarts on change
- Input validation: JSON Schema validation via the T type system
- Error sanitization: secrets and stack traces never leak to clients
- MCP protocol handled: capability negotiation, JSON-RPC 2.0, CORS, all automatic
- Cloudflare Workers-ready: exports a standard fetch handler, deploys anywhere
API
Handler Signature
Every handler (tools, resources, and prompts) uses the same three-argument signature:
function myHandler(mctx, req, res) {
res.send("result");
}
| Parameter | Type | Description |
|---|---|---|
| mctx | ModelContext | Per-request context. mctx.userId is the authenticated user ID (or undefined). |
| req | object | Input arguments, validated against the handler's input schema. |
| res | Response | Output port. Call res.send() to return a result. |
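Because handlers are plain functions, they can be unit-tested without the framework by passing stub arguments. A minimal sketch, assuming a stub res that implements only send (the real Response object has more surface, such as progress):

```javascript
// The greet handler from the Quick Start, exercised in isolation.
function greet(mctx, req, res) {
  res.send(`Hello, ${req.name}! (user: ${mctx.userId || "anonymous"})`);
}

// Stub out the three arguments and capture what the handler sends.
let result;
const res = { send: (value) => { result = value; } };
greet({ userId: undefined }, { name: "Ada" }, res);

console.log(result); // "Hello, Ada! (user: anonymous)"
```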
Tools
Tools are functions AI can call, like API endpoints.
function search(mctx, req, res) {
const results = db.query(req.query, { limit: req.limit });
res.send(results);
}
search.description = "Search the database";
search.input = {
query: T.string({ required: true, description: "Search query" }),
limit: T.number({ default: 10, description: "Max results" }),
};
server.tool("search", search);
For long-running tools, report progress with res.progress(current, total):
async function migrate(mctx, req, res) {
for (let i = 0; i < req.tables.length; i++) {
await copyTable(req.tables[i]);
res.progress(i + 1, req.tables.length);
}
res.send(`Migrated ${req.tables.length} tables`);
}
migrate.description = "Migrate database tables";
migrate.input = {
tables: T.array({ required: true, items: T.string() }),
};
server.tool("migrate", migrate);
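The same stubbing approach verifies progress reporting: record each progress(current, total) call and check the sequence. A synchronous sketch of the pattern (copyTable is omitted here, and res is a hand-rolled stub rather than the framework's Response):

```javascript
// Synchronous variant of the migrate handler's progress loop.
function migrate(mctx, req, res) {
  for (let i = 0; i < req.tables.length; i++) {
    // copyTable(req.tables[i]) would run here; omitted in this sketch.
    res.progress(i + 1, req.tables.length);
  }
  res.send(`Migrated ${req.tables.length} tables`);
}

// Record every progress update and the final message.
const progressCalls = [];
let finalMessage;
const stubRes = {
  progress: (current, total) => progressCalls.push([current, total]),
  send: (msg) => { finalMessage = msg; },
};

migrate({}, { tables: ["users", "orders", "invoices"] }, stubRes);

console.log(progressCalls); // [[1, 3], [2, 3], [3, 3]]
console.log(finalMessage);  // "Migrated 3 tables"
```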
Resources
Resources are read-only data AI can pull for context. Use static URIs or URI templates.
// Static resource
function readme(mctx, req, res) {
res.send("# My Project\nWelcome to the docs.");
}
readme.mimeType = "text/markdown";
server.resource("docs://readme", readme);
// Dynamic template: {userId} is extracted and available on req
function getUser(mctx, req, res) {
res.send(JSON.stringify(db.findUser(req.userId)));
}
getUser.description = "Fetch a user by ID";
getUser.mimeType = "application/json";
server.resource("user://{userId}", getUser);
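The {userId} extraction can be pictured as simple pattern matching on the template. This is a hypothetical sketch of the idea, not the library's actual routing code (matchTemplate is an illustrative name):

```javascript
// Match a concrete URI against a template like "user://{userId}"
// and extract the named parameters, or return null on no match.
function matchTemplate(template, uri) {
  const pattern = template.replace(/\{(\w+)\}/g, "(?<$1>[^/]+)");
  const match = uri.match(new RegExp(`^${pattern}$`));
  return match ? { ...match.groups } : null;
}

console.log(matchTemplate("user://{userId}", "user://42"));     // { userId: "42" }
console.log(matchTemplate("user://{userId}", "docs://readme")); // null
```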
Prompts
Prompts are reusable message templates for initializing AI conversations.
function codeReview(mctx, req, res) {
res.send(`Review this ${req.language} code for bugs:\n\n${req.code}`);
}
codeReview.description = "Review code for issues";
codeReview.input = {
code: T.string({ required: true }),
language: T.string({ description: "Programming language" }),
};
server.prompt("code-review", codeReview);
For multi-message prompts with images or embedded resources, use conversation():
import { conversation } from "@mctx-ai/mcp";
function debug(mctx, req, res) {
res.send(
conversation(({ user, ai }) => [
user.say("I hit this error:"),
user.say(req.error),
user.attach(req.screenshot, "image/png"),
ai.say("I'll analyze the error and screenshot together."),
]),
);
}
debug.description = "Debug with error + screenshot";
debug.input = {
error: T.string({ required: true }),
screenshot: T.string({ required: true, description: "Base64 image data" }),
};
server.prompt("debug", debug);
LLM Sampling
Use res.ask() to request an LLM completion from the client (LLM-in-the-loop):
async function summarize(mctx, req, res) {
const content = await fetchPage(req.url);
const summary = res.ask ? await res.ask(`Summarize:\n\n${content}`) : content;
res.send(summary);
}
res.ask is null when the client does not support sampling; always check before calling.
Type System
T builds JSON Schema definitions for tool and prompt inputs.
| Type | Example |
|---|---|
| T.string() | T.string({ required: true, enum: ["a", "b"] }) |
| T.number() | T.number({ min: 0, max: 100 }) |
| T.boolean() | T.boolean({ default: false }) |
| T.array() | T.array({ items: T.string() }) |
| T.object() | T.object({ properties: { key: T.string() } }) |
All types accept required, description, and default.
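The T helpers compile down to plain JSON Schema fragments. As a sketch of the general shape, here is a toy version of T.string; the actual output format of @mctx-ai/mcp is not documented here, and hoisting required into the parent schema's required array is an assumption about how such builders typically work:

```javascript
// A toy T.string that emits a JSON Schema fragment plus a required flag.
// `enum` and `default` are renamed during destructuring because they are
// reserved words in JavaScript.
function string({ required = false, description, enum: values, default: def } = {}) {
  const schema = { type: "string" };
  if (description) schema.description = description;
  if (values) schema.enum = values;
  if (def !== undefined) schema.default = def;
  return { required, schema };
}

const field = string({ required: true, description: "Search query" });
console.log(field.schema); // { type: "string", description: "Search query" }
```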
Logging
import { log } from "@mctx-ai/mcp";
log.info("Server started");
log.warning("Rate limit approaching");
log.error("Connection failed");
Levels follow RFC 5424: debug, info, notice, warning, error, critical, alert, emergency.
Deploy
Push to GitHub and connect your repo at mctx.ai. Your server goes live.
Full deployment guide at docs.mctx.ai.
Contributing
See CONTRIBUTING.md and GitHub Issues.
Links
mctx is a trademark of mctx, Inc.
Licensed under MIT
