MCP LangChain
A project demonstrating how to integrate Model Context Protocol (MCP) with LangChain and LangGraph to create AI agents that can use custom tools.
What This Project Does
This project shows how to:
- Create custom MCP tool servers (math operations, weather lookup)
- Connect an LLM (Ollama/Groq) to these tools using LangChain
- Use the ReAct (Reason + Act) agent pattern for intelligent tool usage
In Simple Terms
This project builds a system where:
- MCP Servers expose Python functions as "tools" (like a calculator, weather API)
- LangChain connects these tools to an LLM (AI model)
- LangGraph's ReAct Agent helps the LLM decide when and how to use these tools
- Ollama runs a free, local AI model.
Project Structure
```
mcp-lang/
├── client.py            # Main client - connects LLM to MCP servers
├── servers/
│   ├── mathserver.py    # MCP server with math tools (stdio transport)
│   └── weatherserver.py # MCP server with weather tool (HTTP transport)
├── main.py              # Basic entry point
├── requirements.txt     # Python dependencies
├── pyproject.toml       # Project configuration
└── .env                 # Environment variables (API keys)
```
How It Works (ReAct Pattern)
- User asks a question → "What is 5 multiplied by 3?"
- LLM reasons → "This is a math problem, I should use the `multiply_two_numbers` tool"
- LLM acts → calls the tool with arguments `{"a": 5, "b": 3}`
- Tool executes → the MCP server returns `15`
- LLM responds → "5 multiplied by 3 is 15"
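The flow above can be sketched in plain Python. Note this is only an illustration of the reason/act/respond steps: the "reasoning" is hard-coded here, whereas a real ReAct agent has the LLM choose the tool and extract the arguments from the question.

```python
# Hard-coded sketch of one ReAct turn; a real agent lets the LLM pick
# the tool and parse the arguments out of the user's question.
def multiply_two_numbers(a: float, b: float) -> float:
    return a * b

TOOLS = {"multiply_two_numbers": multiply_two_numbers}

def answer(question: str) -> str:
    # Reason: decide which tool applies (stubbed here).
    tool, args = "multiply_two_numbers", {"a": 5, "b": 3}
    # Act: invoke the chosen tool with the chosen arguments.
    result = TOOLS[tool](**args)
    # Respond: turn the raw tool result back into natural language.
    return f"{args['a']} multiplied by {args['b']} is {result:g}"

print(answer("What is 5 multiplied by 3?"))  # 5 multiplied by 3 is 15
```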
Installation & Setup
Prerequisites
- Python 3.14+
- Install Ollama for local LLM
- Option 1 (direct download): go to https://ollama.ai/download and download the macOS installer
- Option 2 (Homebrew): `brew install ollama`
Then pull a local model: `ollama pull llama3.1:8b`
Alternative models (that support tools):
- Smaller, faster (needs less RAM): `ollama pull llama3.2:3b`
- Larger, smarter (needs more RAM): `ollama pull llama3.1:70b`
- Mistral (good alternative): `ollama pull mistral:7b`
Step 1: Clone the Repository
Step 2: Create Virtual Environment
```bash
# Create a virtual environment using Python venv
python -m venv .venv

# Activate the virtual environment
# On macOS/Linux:
source .venv/bin/activate

# On Windows (Command Prompt):
.venv\Scripts\activate

# On Windows (PowerShell):
.venv\Scripts\Activate.ps1
```
Step 3: Install Dependencies
```bash
pip install -r requirements.txt

# Or using uv (faster)
uv pip install -r requirements.txt
```
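The contents of `requirements.txt` aren't reproduced here; based on the imports used elsewhere in this project, it plausibly lists something along these lines (exact package set and versions are assumptions; `langchain-groq` would also be needed for the Groq option, and `python-dotenv` for reading `.env`):

```
langchain
langchain-mcp-adapters
langchain-ollama
langgraph
mcp
python-dotenv
```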
Running the Project
Step 1: Start the Weather Server (HTTP Transport)
In a separate terminal:
```bash
cd mcp-lang
source .venv/bin/activate  # or .venv\Scripts\activate on Windows
python servers/weatherserver.py
```
Keep this terminal running. You should see:
```
Starting MCP server on http://localhost:8000/mcp
```
Step 2: Run the Client
In a new terminal:
```bash
cd mcp-lang
source .venv/bin/activate
python client.py
```
Expected Output
```
============================================================
Response: The current weather in New York is sunny with a temperature of 25°C.
============================================================
Response: 5 multiplied by 3 is 15.
============================================================
```
Code Explanation
client.py - The Main Application
```python
# Import the MCP client adapter for LangChain
from langchain_mcp_adapters.client import MultiServerMCPClient

# Import the ReAct agent pattern from LangGraph
from langgraph.prebuilt import create_react_agent

# Import the Ollama LLM wrapper
from langchain_ollama import ChatOllama
```
Key Concepts:
- MultiServerMCPClient: Connects to multiple MCP tool servers simultaneously
- create_react_agent: Creates a ReAct agent that reasons about when to use tools
- ChatOllama: Wrapper to use local Ollama models with LangChain
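Putting those pieces together, the core of `client.py` presumably looks roughly like the sketch below. The server labels, paths, URL, and model name are assumptions based on the project layout and the expected output above, not the project's exact code:

```python
import asyncio

# Labels ("math", "weather") are arbitrary; the paths and URL are
# assumptions based on the project structure shown earlier.
SERVER_CONFIG = {
    "math": {
        "command": "python",
        "args": ["servers/mathserver.py"],
        "transport": "stdio",            # spawned automatically as a subprocess
    },
    "weather": {
        "url": "http://localhost:8000/mcp",
        "transport": "streamable_http",  # server must already be running
    },
}

async def main() -> None:
    # Third-party imports kept local so the config above stands alone;
    # in real code these would sit at the top of the module.
    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langchain_ollama import ChatOllama
    from langgraph.prebuilt import create_react_agent

    client = MultiServerMCPClient(SERVER_CONFIG)
    tools = await client.get_tools()  # gathers tools from both servers
    agent = create_react_agent(ChatOllama(model="llama3.1:8b"), tools)
    reply = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "What is 5 multiplied by 3?"}]}
    )
    print(reply["messages"][-1].content)

# To run (with both servers available): asyncio.run(main())
```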
servers/mathserver.py - Math Tool Server
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mathserver")

@mcp.tool("add_two_numbers")
def add(a: float, b: float) -> float:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    # Serve over stdio so the client can spawn this file as a subprocess
    mcp.run(transport="stdio")
```
Key Concepts:
- FastMCP: Quick way to create MCP servers
- @mcp.tool(): Decorator to expose functions as tools
- Type hints: Required for LLM to understand parameters
Transport Types
| Transport | Use Case | Server Start |
|---|---|---|
| stdio | Client spawns server as subprocess | Automatic |
| streamable-http | Server runs independently over HTTP | Manual |
