MCP Chat System
A practical example of integrating Model Context Protocol (MCP) with OpenAI's function calling API, featuring a custom HTTP server implementation.
Why this project? FastMCP primarily uses STDIO transport, which isn't ideal for HTTP-based clients. This project demonstrates how to build a complete HTTP transport layer for MCP, enabling RESTful API access and better integration with web applications.
Table of Contents
- Architecture
- Key Features
- Quick Start
- Usage
- HTTP API Endpoints
- Technical Details
- Example Tools
- Troubleshooting
- License
Architecture
This project demonstrates how to build a complete MCP-powered chat system with the following components:
- `server.py`: Base MCP server defining tools, resources, and prompts for sales analytics
- `http_server.py`: FastAPI-based HTTP server exposing MCP functionality via REST API
- `http_client.py`: HTTP client for communicating with the MCP server
- `chat.py`: Interactive REPL chat client integrating OpenAI with MCP over HTTP
✨ Key Features
- ✅ Proper tool call handling: Correctly formats OpenAI function calls for MCP
- ✅ HTTP transport: Custom HTTP implementation for MCP (FastMCP uses STDIO by default)
- ✅ Dynamic tool execution: The LLM automatically decides when to use tools based on user queries
- ✅ Rich terminal UI: Formatting with colors, markdown, and loading animations
- ✅ Resource management: Static and templated resources
- ✅ Optional prompt templates: Reusable templates for multi-client scenarios (not required for a single chatbot)
- ✅ Error handling: Comprehensive error handling and informative messages
Quick Start
Prerequisites
- Python 3.12+
- uv package manager
- OpenAI API key
Installation
- Clone the repository:
```bash
git clone <repository-url>
cd mcptests
```
- Install dependencies:
```bash
uv sync
```
- Create a `.env` file in the project root:
```
# .env
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_MODEL=gpt-4o-mini
MCP_BASE_URL=http://127.0.0.1:8001/mcp
DOCS_DB=biz.sqlite

# Optional server configuration
MCP_SERVER_NAME=biz-server
DEFAULT_TOP_N=5
```
Note: Never commit your `.env` file to version control. It contains sensitive API keys.
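The clients and servers read these settings from the environment. The README doesn't show the loading code, so as an illustration only, here is a minimal stdlib-only loader in the spirit of python-dotenv (which the project may well use instead; `load_env` and its exact behavior are assumptions):

```python
import os


def load_env(path: str = ".env") -> None:
    """Minimal .env loader (illustrative; the project may use python-dotenv).

    Lines of the form KEY=value are exported into os.environ; blank lines
    and '#' comments are skipped. Variables already set in the environment
    take precedence.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```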
Running the System
Terminal 1 - Start the MCP Server:
```bash
chmod +x start_server.sh
./start_server.sh
```
Or manually:
```bash
uv run python http_server.py
```
Terminal 2 - Start the Chat Client:
```bash
chmod +x start_chat.sh
./start_chat.sh
```
Or manually:
```bash
uv run python chat.py
```
Usage
Available Commands
- `/tools` - List available tools
- `/resources` - List available resources
- `/read <uri>` - Read a specific resource
- `/prompt <name> key=value ...` - Prepare a prompt template
- `/exit` or `/quit` - Exit the chat
Example Session
```
you> /tools
- find_products: Return products whose name or id contains the query...
- sales_between: Aggregate sales between [date_start, date_end]...
- top_products: Top-N products by sales amount...

you> Find products containing "laptop"
assistant> [Uses find_products tool and displays results]

you> /prompt summarize_sales date_start=2024-01-01 date_end=2024-01-31
✅ Prompt 'summarize_sales' prepared for the next message.

you> Analyze the sales
assistant> [Generates analysis using the prepared prompt]
```
HTTP API Endpoints
The HTTP server exposes the following endpoints:
- `GET /mcp/tools` - List all available tools
- `GET /mcp/prompts` - List all available prompts
- `GET /mcp/resources` - List all available resources
- `POST /mcp/tools/call` - Execute a tool: `{"name": "tool_name", "arguments": {...}}`
- `POST /mcp/prompts/get` - Get a prompt with parameters: `{"name": "prompt_name", "arguments": {...}}`
- `GET /mcp/resources/read/{uri}` - Read a resource
- `GET /health` - Health check
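Because the transport is plain HTTP+JSON, any client can call a tool. A stdlib-only sketch of the tool-call request (the URL shape follows the endpoints above; the response schema is whatever `http_server.py` returns):

```python
import json
import urllib.request


def call_tool(base_url: str, name: str, arguments: dict) -> dict:
    """POST {"name": ..., "arguments": ...} to <base_url>/tools/call and
    return the decoded JSON response."""
    payload = json.dumps({"name": name, "arguments": arguments}).encode()
    req = urllib.request.Request(
        f"{base_url}/tools/call",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Usage (assumes the server from Terminal 1 is running):
# result = call_tool("http://127.0.0.1:8001/mcp", "find_products", {"query": "laptop"})
```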
🛠️ Technical Details
How It Works
```
                            INTERACTION FLOW
=============================================================================

USER ── "What products do we have?"
  │
  ▼
CHAT CLIENT (chat.py)
  1) On startup:
     • Connect to the MCP server (http_client.py)
     • Fetch available tools        → self.cached_tools
     • Convert to OpenAI format     → self.oai_tools_spec
  2) On each user message:
     messages = [
       {"role": "system", "content": "You are..."},
       {"role": "user", "content": "What products..."}
     ]
     TOOLS INJECTION HAPPENS HERE:
       openai.chat.completions.create(
           messages=messages,
           tools=self.oai_tools_spec   ← 🔧 MCP TOOLS!
       )
  │
  ▼
OPENAI API (GPT-4o-mini)
  Analyzes: the user message + the available tools 🔧
  Decides:  "I need to use find_products"
  │
  │ returns tool_calls
  ▼
CHAT CLIENT (chat.py)
  3) Receives tool_calls from OpenAI:
     {"tool_calls": [{"function": {"name": "find_products",
                                   "arguments": '{"query": ""}'}}]}
  4) Executes the tools via HTTP:
     result = await mcp.call_tool("find_products", {"query": ""})
  │
  │ POST /mcp/tools/call
  ▼
HTTP SERVER (http_server.py)
  Receives: {"name": "find_products", "arguments": {...}}
  Calls:    tool.fn(**args)
  │
  ▼
MCP SERVER (server.py)
  @mcp.tool
  def find_products(...):
      # Execute SQL
      # Return products
  │
  │ returns [{"id": "p-100", ...}, ...]
  ▼
CHAT CLIENT (chat.py)
  5) Receives the tool results
  6) Sends them back to OpenAI:
     messages = [
       {"role": "system", ...},
       {"role": "user", "content": "What products..."},
       {"role": "assistant", "tool_calls": [...]},
       {"role": "tool", "content": "[{products...}]"}   ← tool results
     ]
  │
  ▼
OPENAI API (GPT-4o-mini)
  Synthesizes: "Here are the products: ..."
  │
  │ final response
  ▼
CHAT CLIENT (chat.py)
  7) Displays the formatted response with Rich
  │
  ▼
USER sees the formatted response
```
Key Points
- Tools are injected ONCE at startup - fetched from MCP server and cached
- Tools are sent with EVERY message to OpenAI as available options
- OpenAI decides which tools to call based on the user query
- Chat executes the tools via HTTP to MCP server
- Results flow back through the same path to create the final response
Understanding Tools vs Prompts
🔧 Tools (The Core Functionality)
- What they are: Functions that the LLM can call automatically
- When to use: Always needed for dynamic data and actions
- Who decides: The LLM (GPT) decides when to call them based on user queries
- Example flow:
User: "Show me the products" → LLM decides to call find_products() → Tool executes and returns data → LLM synthesizes a natural response
💬 Prompts (Optional Templates)
- What they are: Pre-configured message templates with specific instructions
- When to use:
  - ✅ Multiple applications consuming the same MCP server (web app, Slack bot, API)
  - ✅ Standardizing responses across different teams/tools
  - ✅ Clients without their own LLM (using MCP prompts as instructions)
  - ❌ NOT needed for a single chatbot with GPT (like this example)
- How to use: Manually activated with the `/prompt` command
- Example use case:
```
# Useful for internal tools with multiple consumers:

Web Dashboard ──┐
Slack Bot ──────┼── MCP Server (consistent prompts)
Mobile App ─────┘
```
For Building a ChatGPT-style Assistant
If you're building a conversational assistant (like this project), you only need Tools:
- ✅ Tools provide the data and actions
- ✅ GPT handles the conversation and decides when to use tools
- ✅ Prompts are optional (mainly for demonstration/multi-client scenarios)
The prompts in this project serve as examples of MCP's capabilities, but aren't required for the chat to work.
Code-Level Implementation
Where Tools Get Injected
Step 1: Startup - Fetch Tools from MCP Server
```python
# chat.py - ChatHost.start()
async def start(self):
    self.mcp = MCPHttpClient(MCP_BASE_URL)

    # Fetch tools from the MCP server via HTTP
    self.cached_tools = await self.mcp.list_tools()
    # Returns: [{"name": "find_products", "description": "...",
    #            "inputSchema": {...}}, ...]

    # Convert to OpenAI function-calling format
    self.oai_tools_spec = as_openai_tools(self.cached_tools)
    # Converts to: [{"type": "function",
    #                "function": {"name": "...", "parameters": {...}}}, ...]
```
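The `as_openai_tools` helper isn't shown in the README. A plausible sketch of that conversion, assuming the standard MCP `name`/`description`/`inputSchema` fields (the project's actual implementation may differ):

```python
def as_openai_tools(mcp_tools: list) -> list:
    """Convert MCP tool descriptors into OpenAI function-calling specs.

    MCP's inputSchema is already JSON Schema, so it can be passed through
    directly as the function's "parameters" object.
    """
    return [
        {
            "type": "function",
            "function": {
                "name": tool["name"],
                "description": tool.get("description", ""),
                "parameters": tool.get(
                    "inputSchema", {"type": "object", "properties": {}}
                ),
            },
        }
        for tool in mcp_tools
    ]
```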
Step 2: Every Message - Send Tools to OpenAI
```python
# chat.py - ChatHost.chat_round()
async def chat_round(self, user_text: str):
    messages = [
        {"role": "system", "content": "You are..."},
        {"role": "user", "content": user_text},
    ]

    # 🔧 TOOLS INJECTED HERE - sent to OpenAI with every message
    first = self.oai.chat.completions.create(
        model=OPENAI_MODEL,
        messages=messages,
        tools=self.oai_tools_spec,  # ← MCP tools in OpenAI format
        tool_choice="auto",         # ← let OpenAI decide when to use them
    )
    # OpenAI returns either:
    # - a regular response, OR
    # - a response with tool_calls
```
Step 3: Execute Tools When OpenAI Requests Them
```python
# chat.py - ChatHost.chat_round()
if first_msg.tool_calls:
    for tc in first_msg.tool_calls:
        fn = tc.function.name                     # e.g., "find_products"
        args = json.loads(tc.function.arguments)  # e.g., {"query": ""}

        # Execute the tool on the MCP server via HTTP:
        #   POST http://127.0.0.1:8001/mcp/tools/call
        #   {"name": "find_products", "arguments": {"query": ""}}
        result = await self.mcp.call_tool(fn, args)
```
Correct Tool Call Handling
The system properly handles OpenAI's function calling by including tool_calls in the assistant message:
```python
# Build the assistant message with tool_calls (REQUIRED by OpenAI)
asst_msg = {"role": "assistant"}
if first_msg.content:
    asst_msg["content"] = first_msg.content
if first_msg.tool_calls:
    # Include tool_calls in the assistant message (required by the OpenAI API)
    asst_msg["tool_calls"] = [
        {
            "id": tc.id,
            "type": "function",
            "function": {"name": tc.function.name, "arguments": tc.function.arguments},
        }
        for tc in first_msg.tool_calls
    ]
convo.append(asst_msg)

# Then append the tool results
for tc in first_msg.tool_calls:
    result = await mcp.call_tool(...)
    convo.append({
        "role": "tool",
        "tool_call_id": tc.id,  # must match the tool_call id
        "content": json.dumps(result),
    })
```
HTTP Transport for MCP
Since FastMCP 2.x primarily uses STDIO transport, this project implements a custom HTTP layer:
- FastAPI server wraps MCP tools/prompts/resources
- HTTP client provides async interface to the server
- Chat client uses HTTP client instead of FastMCP's STDIO client
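The server-side wrapping boils down to a name → function registry: the `/mcp/tools/call` handler resolves the requested name and invokes the tool's function with the supplied arguments. A minimal, self-contained sketch of that dispatch step (`TOOLS`, the `tool` decorator, and the canned data are illustrative stand-ins, not FastMCP's actual internals):

```python
# Hypothetical sketch of the dispatch inside http_server.py.
TOOLS: dict = {}


def tool(fn):
    """Stand-in for FastMCP's @mcp.tool decorator: register fn by name."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def find_products(query: str) -> list:
    """Sample tool with canned data; the real server queries biz.sqlite."""
    products = [
        {"id": "p-100", "name": "Laptop Pro"},
        {"id": "p-200", "name": "Desk Lamp"},
    ]
    return [p for p in products if query.lower() in p["name"].lower()]


def handle_tool_call(name: str, arguments: dict):
    """What POST /mcp/tools/call does: look up the tool, call it with **args."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)
```

In the real server, `handle_tool_call` would be the body of a FastAPI `POST` route, with the lookup failure mapped to an HTTP error response.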
🧪 Example Tools
The system includes sample business analytics tools:
- `find_products`: Search products by name or category
- `sales_between`: Aggregate sales data for a date range
- `top_products`: Get top-N products by sales volume
- `sales_report`: Comprehensive sales report with KPIs
Available Prompts (Optional)
Note: These prompts are optional examples. For a single chatbot with GPT (like this project), you only need the Tools above. Prompts are useful when multiple applications share the same MCP server.
- `summarize_sales`: Generate a sales summary for a period
- `sales_overview_json`: Get a JSON-formatted sales overview
- `compare_periods_json`: Compare two time periods
- `category_insights_json`: Category-specific analysis
- `product_deepdive_markdown`: Detailed product analysis
- `merchandising_actions_json`: Actionable merchandising recommendations
- `natural_language_sales_summary`: Human-friendly summary (supports multiple languages)
How to use: `/prompt <name> key=value ...`, then ask your question.
Troubleshooting
"Failed to connect to MCP server"
- Ensure the server is running in Terminal 1
- Check that port 8001 is available: `lsof -i :8001`
- Verify the URL in `.env`: `MCP_BASE_URL=http://127.0.0.1:8001/mcp`
"Missing OPENAI_API_KEY"
- Create a `.env` file with your OpenAI API key
- Format: `OPENAI_API_KEY=sk-...`
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- FastMCP - The MCP framework this project builds upon
- Model Context Protocol - The protocol specification
- OpenAI - For the function calling API
Related Resources
- Model Context Protocol Specification
- FastMCP Documentation
- OpenAI Function Calling Guide
- FastAPI Documentation
📧 Support
- Check QUICKSTART.md for a quick setup guide
- Review the code comments and docstrings for implementation details
- Open an issue for questions or bug reports
Made with ❤️ as a practical example of MCP integration
