# llms-fetch-mcp

MCP server that fetches web content in LLM-friendly formats. It automatically discovers and uses llms.txt files when available, tries Markdown versions of pages, and falls back to clean HTML-to-Markdown conversion.
## Quick Start

Add to your MCP client configuration:
### Claude Desktop / Claude Code

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp"]
    }
  }
}
```
### Cursor IDE

```json
{
  "mcp.servers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp"]
    }
  }
}
```
## How It Works

When you fetch a URL, the server tries multiple sources in parallel:
- `https://example.com/llms-full.txt` - Comprehensive LLM documentation
- `https://example.com/llms.txt` - Concise LLM documentation
- `https://example.com.md` - Markdown version
- `https://example.com/index.md` - Directory Markdown
- `https://example.com` - Original URL (converts HTML to Markdown if needed)
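The fallback list above can be sketched as a small helper. This is an illustrative reconstruction (the function name and exact URL derivation are made up here, and the real server fetches the candidates in parallel rather than building a list):

```python
from urllib.parse import urlparse

def candidate_urls(url: str) -> list[str]:
    """Return the sources to try for a URL, in priority order.

    Illustrative sketch of the fallback order described above,
    not the server's actual code.
    """
    base = url.rstrip("/")
    parsed = urlparse(base)
    root = f"{parsed.scheme}://{parsed.netloc}"
    return [
        f"{root}/llms-full.txt",  # comprehensive LLM documentation
        f"{root}/llms.txt",       # concise LLM documentation
        f"{base}.md",             # Markdown version of the page
        f"{base}/index.md",       # directory Markdown
        base,                     # original URL (HTML converted to Markdown)
    ]
```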
Content is cached locally in `.llms-fetch-mcp/` for quick access. The server automatically generates a table of contents for cached files to help navigate large documents.
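One simple way to key such a cache is to hash the URL into a filename under the cache directory. This is a hypothetical scheme for illustration only; the server's real on-disk layout may differ:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path(".llms-fetch-mcp")

def cache_path(url: str) -> Path:
    """Map a URL to a deterministic cache file.

    Hypothetical sketch; the actual layout used by llms-fetch-mcp
    is an implementation detail of the server.
    """
    digest = hashlib.sha256(url.encode()).hexdigest()[:16]
    return CACHE_DIR / f"{digest}.md"
```

The key property is determinism: the same URL always maps to the same file, so repeat fetches can be served from disk.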
## Configuration

### Table of Contents Settings

The server generates a table of contents by selecting the deepest heading level whose entries still fit within a size budget, maximizing detail without exceeding it:

- `--toc-budget` - Maximum ToC size in bytes (default: 4000)
- `--toc-threshold` - Minimum document size in bytes to generate a ToC (default: 8000)
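The budget-based selection can be sketched as follows. This is an illustrative approximation of the idea, not the server's code; `build_toc` and its `(level, text)` input format are assumptions for the example:

```python
def build_toc(headings: list[tuple[int, str]], budget: int = 4000) -> str:
    """Build a ToC from (level, text) heading pairs.

    Tries the deepest heading level first and backs off until the
    rendered ToC fits within `budget` bytes. Sketch of the selection
    strategy described above.
    """
    toc = ""
    for max_level in range(6, 0, -1):  # most detail first
        lines = [
            f"{'  ' * (level - 1)}- {text}"
            for level, text in headings
            if level <= max_level
        ]
        candidate = "\n".join(lines)
        if len(candidate.encode()) <= budget:
            toc = candidate
            break
    return toc
```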
With npx:

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp", "--toc-budget", "2000", "--toc-threshold", "4000"]
    }
  }
}
```
With installed binary:

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "llms-fetch-mcp",
      "args": ["--toc-budget", "2000", "--toc-threshold", "4000"]
    }
  }
}
```
### Custom Cache Directory

With npx:

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "npx",
      "args": ["-y", "llms-fetch-mcp", "/path/to/custom/cache"]
    }
  }
}
```
With installed binary:

```json
{
  "mcpServers": {
    "llms-fetch": {
      "command": "llms-fetch-mcp",
      "args": ["/path/to/custom/cache"]
    }
  }
}
```
## Why llms.txt?

`llms.txt` is an emerging standard that lets websites publish LLM-optimized documentation. Sites such as FastHTML and the Anthropic docs have adopted it. This server automatically discovers and uses these files when available, giving you cleaner, more concise content than HTML scraping.
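For context, a minimal `llms.txt` following the convention looks roughly like this (a hypothetical example: an H1 title, a blockquote summary, and sections of annotated links):

```markdown
# Example Project

> Short summary of the project, written for LLM consumption.

## Docs

- [Quick Start](https://example.com/docs/quickstart.md): Install and first run
- [API Reference](https://example.com/docs/api.md): Full API surface
```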
## Installation

If you prefer to install the binary instead of running it via npx:
### Shell (macOS/Linux)

```sh
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/Crazytieguy/llms-fetch-mcp/releases/latest/download/llms-fetch-mcp-installer.sh | sh
```
### PowerShell (Windows)

```powershell
irm https://github.com/Crazytieguy/llms-fetch-mcp/releases/latest/download/llms-fetch-mcp-installer.ps1 | iex
```
### Homebrew

```sh
brew install Crazytieguy/tap/llms-fetch-mcp
```
### npm

```sh
npm install -g llms-fetch-mcp
```
### Cargo

```sh
cargo install llms-fetch-mcp
```
## License

MIT
