MCP Book Library Manager
An educational Model Context Protocol (MCP) server demonstrating Resources, Prompts, and Tools with dual transport support (STDIO + HTTP) and Ollama integration.
What You'll Learn
This project demonstrates:
- ✅ Resources: Structured data access (book catalog, reading statistics)
- ✅ Prompts: Template-based LLM guidance with data injection
- ✅ Tools: Executable functions (search, modify reading list)
- ✅ STDIO Transport: Traditional stdin/stdout communication
- ✅ HTTP Transport: RESTful JSON-RPC endpoint
- ✅ True LLM Routing: Ollama-based host where the AI decides which tools/prompts to use
Prerequisites
- Python 3.10 or higher
- Ollama installed and running
- Node.js (for MCP Inspector, optional)
Quick Start
1. Installation
# Clone or create the project directory
cd mcp-library
# Install dependencies
pip install -r requirements.txt
# Install Ollama (if not already installed)
# Visit: https://ollama.ai/download
# Pull the Llama3 model
ollama pull llama3
2. Start Ollama Service
# In a separate terminal
ollama serve
3. Run the Interactive Assistant
python host/run_ollama.py
Example interaction:
You: Find me some science fiction books
Assistant: [Uses search_books tool internally]
I found several great science fiction books:
1. Dune by Frank Herbert (Rating: 4.5)
2. Brave New World by Aldous Huxley (Rating: 4.3)
...
You: Recommend me a book based on my reading history
Assistant: [Uses recommend_books prompt with your stats]
Based on your favorite genres (Science Fiction, Fantasy, Mystery)...
Testing with MCP Inspector
The MCP Inspector lets you test primitives without writing code:
# Install Inspector
npm install -g @modelcontextprotocol/inspector
# Run Inspector with your server
mcp-inspector python server/stdio_server.py
Opens a web UI where you can:
- Browse and read Resources
- Test Prompts with different arguments
- Execute Tools with custom inputs
See client/inspector_guide.md for detailed instructions.
Understanding MCP Primitives
Resources (Read-Only Data)
Resources provide structured data that LLMs can access:
# List resources
GET library://books/catalog # All books with metadata
GET library://user/reading-stats # User's reading history
Use case: When the LLM needs to know what books are available or understand user preferences.
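For example, a client can read these resources over STDIO. Here is a minimal sketch using the official `mcp` Python SDK (method names per recent SDK releases; adjust to your installed version):

```python
# Sketch: read the book catalog resource via the `mcp` Python SDK.
import asyncio
from pydantic import AnyUrl
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the STDIO server as a subprocess and connect to it
    params = StdioServerParameters(command="python", args=["server/stdio_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.read_resource(AnyUrl("library://books/catalog"))
            for item in result.contents:   # text contents for a JSON catalog
                print(item.text)

asyncio.run(main())
```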
Prompts (Templates + Data)
Prompts are instruction templates with injected data:
# Get recommendation prompt
get_prompt("recommend_books", {
    "genre": "Fantasy",
    "mood": "adventurous"
})
Returns a complete prompt with:
- Your reading statistics
- Full book catalog
- Structured instructions for the LLM
Use case: Guide the LLM to perform specific tasks using current data.
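As a sketch, fetching this prompt from a client looks like the following (same `mcp` SDK setup as the resource example above; the prompt and argument names come from this README):

```python
# Sketch: fetch the fully rendered recommendation prompt.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server/stdio_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.get_prompt(
                "recommend_books",
                arguments={"genre": "Fantasy", "mood": "adventurous"},
            )
            for message in result.messages:   # PromptMessages with data injected
                print(message.role, message.content)

asyncio.run(main())
```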
Tools (Executable Functions)
Tools perform actions and return results:
# Search for books
call_tool("search_books", {
"query": "tolkien",
"min_rating": 4.5
})
# Add to reading list
call_tool("add_to_reading_list", {
"book_id": "fellowship-ring"
})
Use case: When the LLM needs to DO something (search, modify data, call APIs).
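A client-side sketch of both calls (same `mcp` SDK setup as above; tool and argument names come from this README):

```python
# Sketch: execute tools and inspect their results.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server/stdio_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            search = await session.call_tool(
                "search_books", arguments={"query": "tolkien", "min_rating": 4.5}
            )
            print(search.content)   # list of content blocks with matching books
            added = await session.call_tool(
                "add_to_reading_list", arguments={"book_id": "fellowship-ring"}
            )
            print(added.content)    # confirmation from the server

asyncio.run(main())
```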
Architecture
┌─────────────────────┐
│     Ollama Host     │ ← True LLM routing (no hardcoded logic)
│   (run_ollama.py)   │
└──────────┬──────────┘
           │
  JSON-RPC over STDIO/HTTP
           │
┌──────────▼──────────┐
│     MCP Server      │
│  ┌───────────────┐  │
│  │   Resources   │  │ ← Read data
│  │   Prompts     │  │ ← Templates
│  │   Tools       │  │ ← Execute actions
│  └───────────────┘  │
└──────────┬──────────┘
           │
┌──────────▼──────────┐
│     Data Files      │
│ - books.json        │
│ - reading_list.json │
└─────────────────────┘
How the LLM Routing Works
Unlike traditional chatbots driven by hardcoded if/else logic, this host uses true AI routing, sketched in code after the example below:
1. System Context: The host fetches all available tools/prompts and sends their descriptions to Ollama
2. LLM Decision: Llama3 reads the user's query and decides which tool/prompt to use
3. Execution: The host executes the LLM's choice via MCP
4. Iteration: Results flow back to the LLM, which can chain multiple tools
Example:
User: "Find fantasy books and add the best one to my list"
Llama3 thinks:
→ Use search_books(query="fantasy") first
→ Analyze results
→ Use add_to_reading_list(book_id="fellowship-ring")
→ Respond to user
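In code, the loop looks roughly like this. This is a conceptual sketch, not the actual contents of host/run_ollama.py: it uses the `ollama` Python client (0.4+ attribute-style responses), and `tools` (tool schemas derived from the MCP server) and `execute_mcp_tool` are hypothetical stand-ins for the host's real helpers:

```python
# Conceptual sketch of the LLM routing loop (not the real run_ollama.py).
import ollama

def chat_once(user_query: str, tools: list, execute_mcp_tool) -> str:
    messages = [{"role": "user", "content": user_query}]
    while True:
        # Send the conversation plus tool descriptions to Llama3
        response = ollama.chat(model="llama3", messages=messages, tools=tools)
        messages.append(response.message)
        if not response.message.tool_calls:
            return response.message.content       # model answered directly: done
        for call in response.message.tool_calls:  # model chose one or more tools
            result = execute_mcp_tool(call.function.name, call.function.arguments)
            # Feed results back so the model can chain further calls or answer
            messages.append({"role": "tool", "content": str(result)})
```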
Running Different Components
STDIO Server (for Inspector/Clients)
python server/stdio_server.py
HTTP Server (for REST clients)
python server/http_server.py
# Server runs on http://localhost:8000
# See client/inspector_guide.md for how to test this endpoint
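As a sketch, a raw JSON-RPC request against the HTTP transport might look like this. The method name `tools/list` comes from the MCP spec; the root path `/` is an assumption, so check http_server.py and the guide for the actual route:

```python
# Sketch: list the server's tools over HTTP with a raw JSON-RPC request.
import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
# Assumes the endpoint is mounted at the server root
response = requests.post("http://localhost:8000/", json=payload)
print(response.json())
```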
Example Client (Demonstrates all primitives)
python client/example_usage.py
Run Tests
pytest tests/ -v
Project Structure
mcp-library/
├── server/
│   ├── stdio_server.py      # STDIO transport
│   ├── http_server.py       # HTTP transport
│   ├── registry.py          # Central primitive registry
│   ├── resources/           # Data access layer
│   ├── prompts/             # Template generators
│   ├── tools/               # Executable functions
│   └── data/                # JSON storage
├── host/
│   ├── run_ollama.py        # Ollama-based AI host
│   └── config.yaml          # Configuration
├── client/
│   ├── example_usage.py     # Demo client
│   └── inspector_guide.md   # Inspector tutorial
├── tests/                   # Pytest test suite
└── diagrams/                # Architecture diagrams
Troubleshooting
Ollama Connection Error
Error: Cannot connect to Ollama
Solution: Ensure Ollama is running:
ollama serve
ollama pull llama3
Module Not Found
ModuleNotFoundError: No module named 'mcp'
Solution: Install dependencies:
pip install -r requirements.txt
Tool Execution Fails
Solution: Verify data files exist:
ls server/data/books.json
ls server/data/reading_list.json
License
MIT License - Feel free to use this for learning and building!
Happy Learning!
