# MyFastMCP
A FastMCP server providing math tools (add, subtract, multiply, divide).
## Setup

1. Install dependencies:

   ```bash
   uv sync
   ```

2. Create a `.env` file with your API keys:

   ```
   GOOGLE_API_KEY=your_gemini_api_key
   OPENAI_API_KEY=your_openai_api_key
   ```
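Loading the `.env` file is usually handled by a library such as `python-dotenv`; for illustration, here is a minimal stdlib-only sketch (the parsing rules are an assumption — quoting, `export`, and multi-line values are not handled):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: put KEY=value lines into os.environ.

    Sketch only: skips blanks and '#' comments, does not override
    variables that are already set.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```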
## Running the Server

```bash
# Run as stdio server (default)
python main.py

# Run as HTTP server
python main.py --http --port 8000
```
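Each of the four tools boils down to a single arithmetic operation. A plain-Python sketch of the logic (in the real server these would be registered as FastMCP tools; the exact signatures and the divide-by-zero behavior are assumptions):

```python
def add(a: float, b: float) -> float:
    """Return the sum of a and b."""
    return a + b

def subtract(a: float, b: float) -> float:
    """Return a minus b."""
    return a - b

def multiply(a: float, b: float) -> float:
    """Return the product of a and b."""
    return a * b

def divide(a: float, b: float) -> float:
    """Return a divided by b; rejects zero divisors."""
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
```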
## Testing the API with Bruno
This project includes a Bruno collection for testing the MCP server via HTTP.
## Interactive Web UI (ADK)

You can test the MCP server interactively using Google's ADK web interface:

```bash
# Start the web UI
adk web agents
```

Then open http://127.0.0.1:8000 in your browser.
The `agents/` directory contains two agent configurations:

- `math_mcp/` - Uses MCP tools (via stdio)
- `math_simple/` - Uses simple Python functions (no MCP)
Image 1: Tool call sequence for "Calculate 15 + 27, then multiply by 3, subtract 50, divide by 2"
Image 2: Tool call sequence for "Calculate (3*(15 + 27) - 50)/2"
## LLM Tool Calling Simulations

This project includes two simulations of LLM tool calling in the `tests/` directory:

### 1. test_with_llm.py - Direct LLM Integration

Uses `litellm`, which provides a unified API across LLM providers.
```bash
cd tests
python test_with_llm.py

# Run with OpenAI
python test_with_llm.py --model openai

# Run with custom queries
python test_with_llm.py --query "Calculate 15 + 27" "Calculate 10 + 5"
```
### 2. test_with_adk.py - Google ADK Integration
Uses Google's Agent Development Kit (ADK) to connect to the MCP server. ADK handles the tool calling loop automatically.
```bash
cd tests
python test_with_adk.py

# Run with OpenAI
python test_with_adk.py --model openai

# Run with custom queries
python test_with_adk.py --query "Calculate 15 + 27" "Calculate 10 + 5"
```
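Both scripts share the same CLI shape. A sketch of how the `--model` and `--query` flags might be wired with `argparse` (the default values here are assumptions, not taken from the scripts):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI matching the commands shown above: --model and one or more --query strings."""
    parser = argparse.ArgumentParser(description="LLM tool-calling simulation")
    parser.add_argument("--model", default="gemini",
                        help="LLM provider to use (e.g. gemini, openai)")
    parser.add_argument("--query", nargs="+",
                        default=["Calculate 15 + 27, then multiply by 3, subtract 50, divide by 2"],
                        help="One or more math problems to send to the LLM")
    return parser
```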
## Performance Comparison
| Approach | Import | First Query (4 tools) | Subsequent Queries |
|---|---|---|---|
| litellm | ~21s | ~6s | ~6s |
| ADK + MCP | ~0.7s | ~26s | ~5s |
Notes:

- litellm: slow import (~21s), but no per-query cold start: consistently ~6s per query
- ADK: fast import (~0.7s), but the first query takes ~26s (roughly a 21s cold start plus a ~5s query); subsequent queries take ~5s
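Timings like those above can be reproduced with a small wall-clock harness (a sketch; the README does not state how the measurements were actually taken):

```python
import time

def timed(label: str, fn, *args, **kwargs):
    """Run fn once and report its wall-clock time in seconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.2f}s")
    return result, elapsed
```

Timing the import separately from the first query is what exposes the cold-start difference between the two approaches.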
## How It Works

1. The script starts the MCP server as a subprocess (or connects to a running server)
2. It connects to the server via stdio
3. It lists the available tools (add, subtract, multiply, divide)
4. It sends a math problem to the LLM along with the tool definitions
5. When the LLM decides to call a tool, the script executes it via MCP
6. The tool result is returned to the LLM, which produces the final answer
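The loop in the steps above can be simulated without any LLM at all: replace the model with a scripted sequence of function calls and dispatch each one to a local tool registry (in a real run the dispatch goes through MCP instead):

```python
# Local stand-ins for the server's four tools.
TOOLS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def run_loop(function_calls):
    """Execute each requested tool call and collect the results.

    In the real loop, each result is sent back to the LLM, which then
    decides on the next call or produces the final answer.
    """
    results = []
    for call in function_calls:
        results.append(TOOLS[call["name"]](*call["args"]))
    return results

# "Calculate 15 + 27, then multiply by 3, subtract 50, divide by 2"
calls = [
    {"name": "add", "args": (15, 27)},
    {"name": "multiply", "args": (42, 3)},
    {"name": "subtract", "args": (126, 50)},
    {"name": "divide", "args": (76, 2)},
]
print(run_loop(calls)[-1])  # final result: 38.0
```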
### ADK Approach (test_with_adk.py)

```mermaid
sequenceDiagram
    participant U as User
    participant A as Agent
    participant L as LLM
    participant T as MCP Tools
    U->>A: "Calculate 15 + 27, then..."
    A->>L: Send prompt + tool definitions
    L-->>A: function_call: add(15, 27)
    A->>T: Execute add(15, 27)
    T-->>A: Result: 42.0
    A->>L: Tool result: 42.0
    L-->>A: function_call: multiply(42, 3)
    A->>T: Execute multiply(42, 3)
    T-->>A: Result: 126.0
    A->>L: Tool result: 126.0
    L-->>A: function_call: subtract(126, 50)
    A->>T: Execute subtract(126, 50)
    T-->>A: Result: 76.0
    A->>L: Tool result: 76.0
    L-->>A: function_call: divide(76, 2)
    A->>T: Execute divide(76, 2)
    T-->>A: Result: 38.0
    A->>L: Tool result: 38.0
    L-->>A: Final answer: 38.0
    A-->>U: The answer is 38.0
```
ADK handles the tool calling loop automatically.
### Direct LLM Approach (test_with_llm.py)

```mermaid
sequenceDiagram
    participant U as User
    participant S as Script
    participant L as LLM
    participant T as MCP Tools
    U->>S: "Calculate 15 + 27, then..."
    S->>L: Send prompt + tool definitions
    L-->>S: function_call: add(15, 27)
    S->>T: Execute add(15, 27)
    T-->>S: Result: 42.0
    S->>L: Tool result: 42.0
    L-->>S: function_call: multiply(42, 3)
    S->>T: Execute multiply(42, 3)
    T-->>S: Result: 126.0
    S->>L: Tool result: 126.0
    L-->>S: function_call: subtract(126, 50)
    S->>T: Execute subtract(126, 50)
    T-->>S: Result: 76.0
    S->>L: Tool result: 76.0
    L-->>S: function_call: divide(76, 2)
    S->>T: Execute divide(76, 2)
    T-->>S: Result: 38.0
    S->>L: Tool result: 38.0
    L-->>S: Final answer: 38.0
    S-->>U: The answer is 38.0
```
The script handles the tool calling loop manually.
## LLM Context Inspection

Each test script captures the context window of every LLM request and writes it to a markdown file in `tests/`:

- `llm_context_windows_adk_q1.md`, `llm_context_windows_adk_q2.md` - generated by `test_with_adk.py`
- `llm_context_windows_llm_q1.md`, `llm_context_windows_llm_q2.md` - generated by `test_with_llm.py`
Each file shows:
- System instructions
- Full conversation history (cumulative across turns)
- Tool definitions with descriptions and JSON schemas
- Function calls and responses
This is useful for understanding what the LLM sees at each step of the tool calling loop.
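A capture like the one described above can be sketched as a small function that renders the conversation history to markdown (the exact layout of the real files may differ; the `messages` sample here is illustrative):

```python
from pathlib import Path

def dump_context_window(messages, path):
    """Render a conversation history as markdown, one section per message."""
    lines = ["# LLM Context Window", ""]
    for msg in messages:
        lines.append(f"## {msg['role']}")
        lines.append("")
        lines.append(msg["content"])
        lines.append("")
    Path(path).write_text("\n".join(lines))

# Illustrative history: system prompt, user query, and one tool exchange.
messages = [
    {"role": "system", "content": "You are a math assistant with tools."},
    {"role": "user", "content": "Calculate 15 + 27"},
    {"role": "tool", "content": "add(15, 27) -> 42.0"},
]
```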
