lightrag-mcp
MCP server for LightRAG
Documentation
- Requirements
- Installation
- Getting Started
- Configuration
- MCP Client Configuration
- Tools (26)
- Usage Examples
- Programmatic Usage
- Development
- Troubleshooting
- Links
lightrag-mcp is a Model Context Protocol (MCP) server that bridges AI assistants with LightRAG's knowledge graph capabilities. It lets AI tools such as Claude Desktop, Cline, and custom integrations interact with LightRAG's Retrieval-Augmented Generation system through a standardized protocol.
- MCP Protocol - Stdio transport for seamless client integration
- Type Safety - Full TypeScript support with Zod validation
- Auto-generated SDK - Types and client from OpenAPI spec
- 26 Tools - Complete LightRAG API coverage
- Knowledge Graphs - Build and explore entity relationships
- Multiple Query Modes - Local, global, hybrid, and more
- Document Management - Insert, track, and reprocess documents
- Fast & Lightweight - Minimal dependencies, maximum performance
Requirements
Before you begin, make sure your environment meets these requirements:
- Node.js 18+ (LTS recommended)
- Running LightRAG server (default port 9621)
- API key for authentication
Installation
Choose the installation method that fits your use case:
Install globally (for CLI usage):
npm i -g lightrag-mcp
Add to project dependencies:
npm i lightrag-mcp
Add as dev dependency (for development tools):
npm i -D lightrag-mcp
Getting Started
The LightRAG MCP server connects your AI assistant to LightRAG's knowledge graph capabilities through the Model Context Protocol. The server communicates over stdio transport, making it compatible with any MCP client, such as Claude Desktop, Cline, or custom integrations.
┌─────────────────────┐
│     MCP Client      │
│  (Claude Desktop,   │
│   Cline, Custom)    │
└──────────┬──────────┘
           │ stdio
           │ (MCP Protocol)
           ▼
┌─────────────────────┐
│     MCP Server      │
│   (lightrag-mcp)    │
└──────────┬──────────┘
           │ HTTP/REST
           │ (OpenAPI)
           ▼
┌─────────────────────┐
│   LightRAG Server   │
└─────────────────────┘
How it works:
1. MCP Client sends tool requests via stdio transport
2. lightrag-mcp validates requests and translates them to LightRAG API calls
3. LightRAG Server processes requests and returns results
4. lightrag-mcp formats responses back to MCP protocol
5. MCP Client receives structured data for the AI assistant
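The last step can be seen concretely: after a tools/call request, the client receives a JSON-RPC result whose content array carries the tool output. The envelope shape follows the MCP specification; the text value here is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "LightRAG is a fast, graph-based RAG system." }
    ]
  }
}
```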
With this server, you can:
- Build knowledge graphs from your documents automatically
- Query information using multiple search strategies (local, global, hybrid)
- Manage entities and relationships in your knowledge base
- Track document processing and reprocess failed items
- Explore connections between concepts through graph visualization
While MCP servers are designed to work with MCP clients, you can test them directly:
Using MCP Inspector (recommended for testing):
npx @modelcontextprotocol/inspector lightrag-mcp --token your-api-key
Direct stdio testing with echo:
echo '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"query_text","arguments":{"query":"What is LightRAG?","mode":"hybrid"}}}' | lightrag-mcp --token your-api-key
Note: Replace your-api-key with your actual API key and add -u https://your-server.com if using a remote LightRAG server.
Configuration
You can configure the lightrag-mcp server in multiple ways, with the following priority (highest to lowest):
- CLI options - Direct command-line arguments
- Environment variables - System or shell environment
- .env file - Local configuration file
- MCP client configuration - Settings in your MCP client (Claude Desktop, etc.)
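As a rough sketch of this precedence (illustrative only, not the package's actual internals), the first defined source wins and the default URL applies last:

```typescript
// Hypothetical sketch of the configuration precedence chain.
// Names and structure are illustrative, not lightrag-mcp internals.
type Config = { baseUrl: string; apiKey?: string }

function resolveConfig(
  cli: Partial<Config>,     // highest priority: CLI flags
  env: Partial<Config>,     // environment variables
  dotenv: Partial<Config>,  // values loaded from a .env file
  client: Partial<Config>,  // MCP client configuration
): Config {
  return {
    // CLI > env > .env > client config > built-in default
    baseUrl:
      cli.baseUrl ?? env.baseUrl ?? dotenv.baseUrl ??
      client.baseUrl ?? 'http://localhost:9621',
    apiKey: cli.apiKey ?? env.apiKey ?? dotenv.apiKey ?? client.apiKey,
  }
}
```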
CLI Options
Pass configuration directly via command-line flags:
| Option | Description |
|---|---|
| -u, --url <url> | LightRAG server URL (overrides LIGHTRAG_BASE_URL) |
| -t, --token <token> | API key (overrides LIGHTRAG_API_KEY) |
| -v, --version | Display version |
| -h, --help | Display help |
Example:
lightrag-mcp --url https://rag.example.com --token your-api-key
Environment Variables
Set configuration via environment variables:
| Variable | Description | Default |
|---|---|---|
| LIGHTRAG_BASE_URL | LightRAG server URL | http://localhost:9621 |
| LIGHTRAG_API_KEY | API key for authentication | - |
Example:
export LIGHTRAG_BASE_URL=https://rag.example.com
export LIGHTRAG_API_KEY=your-api-key
lightrag-mcp
.env File
Create a .env file in your project root:
LIGHTRAG_BASE_URL=https://rag.example.com
LIGHTRAG_API_KEY=your-api-key
Then simply run:
lightrag-mcp
MCP Client Configuration
Claude Desktop
Add to your Claude Desktop configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json
{
"mcpServers": {
"lightrag": {
"command": "lightrag-mcp",
"env": {
"LIGHTRAG_BASE_URL": "https://rag.example.com",
"LIGHTRAG_API_KEY": "your-api-key"
}
}
}
}
Other MCP Clients (YAML)
For MCP clients and plugins that support YAML configuration:
name: LightRAG MCP Server
version: 0.0.1
schema: v1
mcpServers:
- name: LightRag
type: stdio
command: lightrag-mcp
env:
LIGHTRAG_BASE_URL: "https://rag.example.com"
LIGHTRAG_API_KEY: "your-api-key"
Available Tools
The server provides 26 MCP tools organized into 4 categories:
Documents (10 tools) - Manage your knowledge base
Document Management
insert_text
Add text content to LightRAG knowledge base.
interface InsertTextParams {
text: string // Text content to add
file_source?: string // Optional source identifier
}
Example:
{
"tool": "insert_text",
"arguments": {
"text": "TypeScript is a typed superset of JavaScript",
"file_source": "docs/typescript.md"
}
}
insert_texts
Add multiple text documents at once.
interface InsertTextsParams {
texts: string[] // Array of text contents
file_sources?: string[] // Optional source identifiers
}
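A hypothetical call with illustrative values; presumably the file_sources array pairs positionally with texts:

```json
{
  "tool": "insert_texts",
  "arguments": {
    "texts": [
      "LightRAG builds a knowledge graph from documents.",
      "MCP is a protocol for connecting AI assistants to tools."
    ],
    "file_sources": ["notes/lightrag.txt", "notes/mcp.txt"]
  }
}
```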
scan_documents
Trigger scanning process for new documents in the input directory.
get_documents_paginated
Retrieve documents with pagination support.
interface GetDocumentsPaginatedParams {
status_filter?: 'pending' | 'processing' | 'completed' | 'failed'
page?: number // Default: 1
page_size?: number // Default: 20
sort_field?: string
sort_direction?: 'asc' | 'desc'
}
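A hypothetical call that pages through failed documents (the sort_field value here is illustrative; valid field names depend on the LightRAG server):

```json
{
  "tool": "get_documents_paginated",
  "arguments": {
    "status_filter": "failed",
    "page": 1,
    "page_size": 20,
    "sort_direction": "desc"
  }
}
```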
get_document_status_counts
Get counts of documents by status.
get_track_status
Get the processing status of documents by tracking ID.
interface GetTrackStatusParams {
track_id: string
}
delete_document
Delete documents by their IDs.
interface DeleteDocumentParams {
doc_ids: string[] // Document IDs to delete
delete_file?: boolean // Delete source files
delete_llm_cache?: boolean // Clear LLM cache
}
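A hypothetical call with illustrative document IDs, keeping the source files but clearing the associated LLM cache:

```json
{
  "tool": "delete_document",
  "arguments": {
    "doc_ids": ["doc-123", "doc-456"],
    "delete_file": false,
    "delete_llm_cache": true
  }
}
```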
clear_documents
Clear all documents from the RAG system.
clear_cache
Clear all cache data from the LLM response cache storage.
reprocess_failed
Reprocess failed and pending documents.
Queries (2 tools) - Search and retrieve information
Query Operations
query_text
Query the RAG system with various modes.
interface QueryTextParams {
query: string
mode?: 'local' | 'global' | 'hybrid' | 'naive' | 'mix' | 'bypass'
top_k?: number
enable_rerank?: boolean
include_references?: boolean
}
Modes:
- local - Search within local context
- global - Global knowledge graph search
- hybrid - Combines local and global (default)
- naive - Simple vector search
- mix - Knowledge graph + vector search
- bypass - Direct LLM query without RAG
Example:
{
"tool": "query_text",
"arguments": {
"query": "What is LightRAG?",
"mode": "hybrid",
"top_k": 5,
"enable_rerank": true
}
}
query_data
Advanced data retrieval endpoint for structured RAG analysis without LLM generation.
Knowledge Graph (12 tools) - Build and explore entity relationships
Graph Operations
get_graph_labels
Get all graph labels (entity types).
get_popular_labels
Get popular labels by node degree (most connected entities).
interface GetPopularLabelsParams {
limit?: number // Default: 10
}
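For example, to fetch the five most connected entities (values are illustrative):

```json
{
  "tool": "get_popular_labels",
  "arguments": {
    "limit": 5
  }
}
```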
search_labels
Search labels with fuzzy matching.
interface SearchLabelsParams {
q: string // Search query
limit?: number // Default: 10
}
Example:
{
"tool": "search_labels",
"arguments": {
"q": "machine learning",
"limit": 10
}
}
get_knowledge_graph
Retrieve a connected subgraph of nodes.
interface GetKnowledgeGraphParams {
label: string // Starting entity label
max_depth?: number // Default: 2
max_nodes?: number // Default: 50
}
Example:
{
"tool": "get_knowledge_graph",
"arguments": {
"label": "LightRAG",
"max_depth": 2,
"max_nodes": 50
}
}
check_entity_exists
Check if an entity with the given name exists.
interface CheckEntityExistsParams {
name: string
}
create_entity
Create a new entity in the knowledge graph.
interface CreateEntityParams {
entity_name: string
entity_data: {
entity_type?: string
description?: string
source_id?: string
}
}
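A hypothetical call with illustrative values:

```json
{
  "tool": "create_entity",
  "arguments": {
    "entity_name": "Model Context Protocol",
    "entity_data": {
      "entity_type": "technology",
      "description": "An open protocol for connecting AI assistants to tools and data."
    }
  }
}
```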
update_entity
Update an entity's properties.
interface UpdateEntityParams {
entity_name: string
updated_data: object
allow_rename?: boolean // Allow renaming entity
allow_merge?: boolean // Allow merging with existing
}
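A hypothetical call that rewrites an entity's description (field values are illustrative):

```json
{
  "tool": "update_entity",
  "arguments": {
    "entity_name": "Machine Learning",
    "updated_data": {
      "description": "The study of algorithms that improve through experience."
    }
  }
}
```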
delete_entity
Remove an entity and all its relationships.
interface DeleteEntityParams {
entity_name: string
}
merge_entities
Merge multiple entities into a single entity, preserving all relationships.
interface MergeEntitiesParams {
entities_to_change: string[] // Entities to merge
entity_to_change_into: string // Target entity
}
Example:
{
"tool": "merge_entities",
"arguments": {
"entities_to_change": ["ML", "machine learning"],
"entity_to_change_into": "Machine Learning"
}
}
create_relation
Create a new relationship between two entities.
interface CreateRelationParams {
source_entity: string
target_entity: string
relation_data: {
keywords?: string
weight?: number
description?: string
source_id?: string
}
}
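A hypothetical call linking two entities (entity names and relation_data values are illustrative):

```json
{
  "tool": "create_relation",
  "arguments": {
    "source_entity": "LightRAG",
    "target_entity": "Knowledge Graph",
    "relation_data": {
      "keywords": "builds, queries",
      "weight": 1.0,
      "description": "LightRAG builds and queries a knowledge graph."
    }
  }
}
```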
update_relation
Update a relation's properties.
interface UpdateRelationParams {
source_id: string
target_id: string
updated_data: object
}
delete_relation
Remove a relationship between two entities.
interface DeleteRelationParams {
source_entity: string
target_entity: string
}
System (2 tools) - Monitor and control pipeline
System Management
get_pipeline_status
Get the current status of the document indexing pipeline.
cancel_pipeline
Request cancellation of the currently running pipeline.
Usage Examples
Basic Workflow - Insert, process, and query documents
Basic Workflow
Insert documents
{
"tool": "insert_text",
"arguments": {
"text": "LightRAG is a simple and fast RAG system.",
"file_source": "intro.txt"
}
}
Wait for processing
{
"tool": "get_pipeline_status"
}
Query your knowledge base
{
"tool": "query_text",
"arguments": {
"query": "What is LightRAG?",
"mode": "hybrid"
}
}
Query with Different Modes - Explore various search strategies
Query Modes
Hybrid mode (default)
{
"tool": "query_text",
"arguments": {
"query": "What is LightRAG?",
"mode": "hybrid"
}
}
Mix mode (knowledge graph + vector search)
{
"tool": "query_text",
"arguments": {
"query": "Explain RAG systems",
"mode": "mix"
}
}
Knowledge Graph Operations - Manage entities and relationships
Graph Management
Search for entities
{
"tool": "search_labels",
"arguments": {
"q": "machine learning",
"limit": 10
}
}
Get entity subgraph
{
"tool": "get_knowledge_graph",
"arguments": {
"label": "LightRAG",
"max_depth": 2,
"max_nodes": 50
}
}
Merge duplicate entities
{
"tool": "merge_entities",
"arguments": {
"entities_to_change": ["ML", "machine learning"],
"entity_to_change_into": "Machine Learning"
}
}
Programmatic Usage
import { LightRagServer } from 'lightrag-mcp'
const server = new LightRagServer({
clientOptions: {
baseUrl: 'https://rag.example.com',
},
apiKey: 'your-api-key',
})
await server.start()
Development
# Install dependencies
npm install
# Build
npm run build
# Lint
npm run lint
# Update API types from OpenAPI spec
npm run update:api
Troubleshooting
API key not found
Symptoms: Error: Use apiKey option or LIGHTRAG_API_KEY
Solution:
- Set LIGHTRAG_API_KEY in your .env file
- Pass the --token flag in the CLI
- Provide it in constructor options
# .env
LIGHTRAG_API_KEY=your-secret-key
Cannot connect to LightRAG
Symptoms: Connection refused or ECONNREFUSED
Solution:
- Verify LightRAG server is running on port 9621
- Check
LIGHTRAG_BASE_URLconfiguration - Verify firewall settings
- Test connection:
curl http://localhost:9621/health
Documents not processing
Symptoms: Documents stuck in pending or processing state
Check pipeline status:
# Using CLI
curl http://localhost:9621/health
# Or use MCP tool
{
"tool": "get_pipeline_status"
}
Common causes:
- LightRAG server overloaded
- Document format not supported
- Insufficient memory
Solutions:
- Check pipeline status: use the get_pipeline_status tool
- Review LightRAG server logs
- Try reprocessing: use the reprocess_failed tool
- Verify document format and size limits
- Increase server resources
- Split large documents
Knowledge graph queries return empty results
Symptoms: get_knowledge_graph returns no nodes
Solution:
- Verify the entity exists: use check_entity_exists
- Try fuzzy search: use search_labels
- Check if documents are fully processed
- Increase the max_depth and max_nodes parameters
Links
You can find more MCP servers and tools on NPM.
- LightRAG - Knowledge graph-powered RAG system
- MCP Specification - Model Context Protocol documentation
Issues
If you find a bug or have a suggestion, please file an issue on GitHub.