MCP Server Language Converter
Parse the past, build the future - Bridging legacy systems to AI, one interface at a time.
A hybrid MCP (Model Context Protocol) server implementation that supports multiple domain-specific MCP servers, each exposing business logic through multiple interfaces: MCP protocol (STDIO and HTTP streaming) and REST API. Currently focused on COBOL program analysis and reverse engineering, with an extensible architecture for other legacy languages.
Purpose
This project demonstrates how to build a modern server that serves both AI agents (via MCP) and traditional applications (via REST API) while maintaining a single source of truth for business logic.
Key Features
- COBOL Reverse Engineering: Comprehensive analysis tools for legacy COBOL programs
- Domain-Specific MCP Servers: Separate MCP servers for different domains (general, COBOL analysis, etc.)
- Dual Interface Support: MCP protocol and REST API using the same core business logic
- Multiple Transport Layers: STDIO, HTTP streaming (MCP), and standard REST
- MCP Capabilities: Tools, Resources, and Prompts
- Incremental Development: Phased approach across capabilities and transport layers
- Modern Python Stack: UV for package management, FastMCP 2.0, FastAPI
COBOL Analysis
The COBOL analysis domain provides a comprehensive suite of reverse engineering tools designed to help AI agents understand, analyze, and document legacy COBOL systems.
Analysis Capabilities
| Tool | Description |
|---|---|
parse_cobol | Parse COBOL source code into an Abstract Syntax Tree (AST) |
build_asg | Build Abstract Semantic Graph with symbol tables and cross-references |
build_cfg | Build Control Flow Graph for program flow analysis |
build_dfg | Build Data Flow Graph for variable usage tracking |
analyze_complexity | Calculate cyclomatic complexity with optional ASG/CFG/DFG enhancement |
resolve_copybooks | Resolve COPY statements and expand copybook includes |
batch_analyze_cobol_directory | Analyze entire directories of COBOL programs |
analyze_program_system | Analyze inter-program relationships and dependencies |
build_call_graph | Generate program call graphs across a codebase |
analyze_copybook_usage | Track copybook usage across programs |
analyze_data_flow | Trace data flow through program execution |
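To make the analyze_complexity row concrete: cyclomatic complexity is conventionally computed from a control flow graph as M = E − N + 2P (edges, nodes, connected components). The sketch below is illustrative only; the CFG shown is a hypothetical one for a COBOL paragraph containing one IF and one PERFORM UNTIL, not output from this project's tools.

```python
# Illustrative only: the standard McCabe formula that a tool like
# analyze_complexity typically computes over a CFG. The graph below is a
# hypothetical CFG for a paragraph with one IF and one PERFORM UNTIL loop.

def cyclomatic_complexity(edges: list[tuple[str, str]], nodes: set[str],
                          connected_components: int = 1) -> int:
    """McCabe's metric: M = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * connected_components

nodes = {"entry", "if", "then", "else", "loop", "exit"}
edges = [
    ("entry", "if"),
    ("if", "then"), ("if", "else"),   # IF adds one decision path
    ("then", "loop"), ("else", "loop"),
    ("loop", "loop"),                 # PERFORM UNTIL back-edge
    ("loop", "exit"),
]

print(cyclomatic_complexity(edges, nodes))  # → 3 (two decisions + 1)
```

A straight-line paragraph with no branches scores 1; each added decision point raises the score by one.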
Progressive Analysis Model
The analysis tools support progressive enhancement. Start with basic parsing and add semantic analysis as needed:
AST (Syntax) ──→ ASG (Semantics) ──→ CFG (Control Flow) ──→ DFG (Data Flow)
     │                │                    │                     │
     └── Structure    └── Symbols          └── Complexity        └── Variable
         Paragraphs       Cross-refs           Paths                 Tracking
         Statements       Data items           Unreachable           Dead code
Running the COBOL Analysis Server
# STDIO transport (for Claude Desktop, Cursor IDE)
uv run python -m src.mcp_servers.mcp_cobol_analysis stdio
# SSE transport (for web clients)
uv run python -m src.mcp_servers.mcp_cobol_analysis sse
# Available at: http://localhost:8001/sse
# Streamable HTTP transport
uv run python -m src.mcp_servers.mcp_cobol_analysis streamable-http
# Available at: http://localhost:8003/mcp
For multi-agent workflows and LangGraph integration, see LangGraph Architecture.
Architecture
The application follows a Hexagonal/Ports and Adapters architecture pattern:
- Interface Layer: MCP Server (FastMCP) and REST API (FastAPI)
- Core Business Logic Layer: Transport-agnostic, reusable functions
graph TB
subgraph Interface["Interface Layer"]
    STDIO["STDIO Server<br/>(FastMCP 2.0)<br/>─────────────<br/>• STDIO transport<br/>• Claude Desktop<br/>• Cursor IDE"]
    HTTP["HTTP Streaming Server<br/>(FastMCP 2.0)<br/>─────────────<br/>• Server-Sent Events<br/>• Web-based clients<br/>• Real-time streaming"]
    REST["REST API<br/>(FastAPI)<br/>─────────────<br/>• HTTP endpoints<br/>• JSON responses<br/>• Standard REST"]
end
subgraph Core["Core Business Logic Layer"]
    BL["<b>Shared Functions:</b><br/>• Transport-agnostic<br/>• Reusable across interfaces<br/>• Single source of truth<br/>• Pure business logic"]
end
STDIO --> BL
HTTP --> BL
REST --> BL
style Interface fill:#1a1a1a,stroke:#fff,stroke-width:2px,color:#fff
style Core fill:#0d0d0d,stroke:#fff,stroke-width:2px,color:#fff
style STDIO fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style HTTP fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style REST fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style BL fill:#1a1a1a,stroke:#fff,stroke-width:2px,color:#fff
For detailed architectural decisions and design patterns, see Architecture Documentation.
Multi-Server Architecture with Shared Infrastructure
The project supports domain-specific MCP servers with zero code duplication:
src/
├── core/                      # Shared business logic
│   ├── models/                # Database models
│   ├── repositories/          # Data access layer
│   ├── services/              # Business logic and tool handlers
│   └── schemas/               # Validation schemas
│
├── mcp_servers/
│   ├── common/                # Shared MCP infrastructure (NO duplication!)
│   │   ├── base_server.py     # FastMCP initialization
│   │   ├── unified_runner.py  # Protocol-agnostic runner (stdio/sse/streamable-http)
│   │   ├── tool_registry.py   # Tool registration and JSON config loading
│   │   └── config_loader.py   # JSON configuration loader
│   │
│   ├── mcp_general/           # Domain servers (minimal code - just entry points)
│   │   └── __main__.py        # Unified entry point
│   │
│   ├── mcp_cobol_analysis/    # COBOL analysis domain
│   │   └── __main__.py        # Unified entry point
│   │
│   ├── mcp_kubernetes/        # Future: same minimal pattern
│   └── mcp_os_commands/       # Future: same minimal pattern
│
└── rest_api/                  # Shared REST API (planned)
Architecture Benefits:
- Zero Code Duplication: All MCP server code lives in common/; domain servers are just entry points
- Easy to Add Domains: A new domain server is a single __main__.py file
- Separation of Concerns: Each server handles one domain
- Shared Infrastructure: Same repositories, services, AND MCP runtime code
- Independent Scaling: Each server can be scaled separately
- Security: Domain-specific permissions and isolation
JSON Config-Driven Tools
Tools are configured via JSON (config/tools.json) and dynamically loaded at server startup:
- Tool Configuration: Version-controlled JSON file with category, domain, and active status
- Handler Registry: Predefined Python functions for business logic in tool_handlers_service.py
- Dynamic Registration: Tools are registered in code via the @register_tool decorator and filtered by the JSON config
- Enable/Disable: Toggle is_active in the JSON to enable or disable tools without code changes
Tool Classification:
- Category: Functional grouping (utility, calculation, analysis, preprocessing, etc.)
- Domain: Business domain (general, cobol_analysis, etc.)
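The registration-plus-filtering pattern described above can be sketched in a few lines. This is a hedged illustration, not the project's actual tool_registry.py: the decorator signature and the exact field names in config/tools.json (name, category, domain, is_active) are assumptions based on the description.

```python
# Hedged sketch of config-driven tool registration: handlers register in
# code, and a tools.json-style config decides which are exposed. Field
# names and the decorator signature are assumptions, not the real API.
import json

TOOL_REGISTRY: dict = {}

def register_tool(name: str):
    """Record a handler under its tool name; the JSON config gates activation."""
    def decorator(func):
        TOOL_REGISTRY[name] = func
        return func
    return decorator

@register_tool("add_numbers")
def add_numbers(a: float, b: float) -> float:
    return a + b

@register_tool("parse_cobol")
def parse_cobol(source: str) -> dict:
    return {"ast": "..."}  # placeholder body for the sketch

# A tools.json-style config: only entries with is_active=true get exposed.
config = json.loads("""
[
  {"name": "add_numbers", "category": "calculation",
   "domain": "general", "is_active": true},
  {"name": "parse_cobol", "category": "analysis",
   "domain": "cobol_analysis", "is_active": false}
]
""")

active_tools = {
    entry["name"]: TOOL_REGISTRY[entry["name"]]
    for entry in config
    if entry["is_active"] and entry["name"] in TOOL_REGISTRY
}
print(sorted(active_tools))  # only the enabled tool remains
```

Flipping is_active in the JSON changes what the server exposes without touching handler code, which is the point of the pattern.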
Quick Start
Transport Options
The MCP Server Language Converter supports multiple transport mechanisms for different client types:
STDIO Server (Claude Desktop, Cursor IDE)
- Transport: STDIO (standard input/output)
- Clients: Claude Desktop, Cursor IDE, command-line tools
- Protocol: MCP over STDIO
HTTP Streaming Server (Web-based Clients)
- Transport: Server-Sent Events (SSE) over HTTP
- Clients: Web applications, browser-based AI clients
- Protocol: MCP over HTTP streaming
Streamable HTTP Server (Full MCP Protocol)
- Transport: Streamable HTTP (bidirectional)
- Clients: Web applications requiring full MCP protocol
- Protocol: MCP over Streamable HTTP with session management
Separate Server Processes (Recommended)
Why separate processes?
- ✅ Clean separation: Each transport has a single responsibility
- ✅ Independent scaling: Scale each server based on demand
- ✅ Reliability: One server failure doesn't affect the others
- ✅ Different configurations: Optimize each for its use case
- ✅ Easier debugging: Isolate issues to specific transports
How to start each server:
# Terminal 1: STDIO server (for Claude Desktop, Cursor IDE)
uv run python -m src.mcp_servers.mcp_general stdio
# Terminal 2: SSE server (for web-based clients)
uv run python -m src.mcp_servers.mcp_general sse
# Server available at: http://localhost:8000/sse
# Terminal 3: Streamable HTTP server (for full MCP protocol)
uv run python -m src.mcp_servers.mcp_general streamable-http
# Server available at: http://localhost:8002/mcp
All transports share the same core business logic and tools, just with different protocols.
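One way to see how a single entry point can serve all three transports is a small dispatch table. This is a hypothetical sketch of the unified_runner pattern, not the project's actual unified_runner.py; the ports follow the README (8000 for SSE, 8002 for Streamable HTTP on the general server) and the host value is an assumption.

```python
# Hypothetical sketch of a protocol-agnostic runner: map a CLI transport
# argument to run settings, then hand off to the server. Ports match the
# README; the real unified_runner.py may differ.
import sys

TRANSPORTS = {
    "stdio": {},                                      # no network settings
    "sse": {"host": "127.0.0.1", "port": 8000},
    "streamable-http": {"host": "127.0.0.1", "port": 8002},
}

def resolve_transport(arg: str):
    """Validate the CLI argument and return (transport, run kwargs)."""
    if arg not in TRANSPORTS:
        raise SystemExit(f"unknown transport {arg!r}; "
                         f"choose from {sorted(TRANSPORTS)}")
    return arg, TRANSPORTS[arg]

if __name__ == "__main__":
    transport, kwargs = resolve_transport(sys.argv[1] if len(sys.argv) > 1 else "stdio")
    # A FastMCP 2.0 server would take over from here, e.g.:
    # mcp.run(transport=transport, **kwargs)
    print(transport, kwargs)
```

Keeping the mapping in one place is what lets each domain server shrink to a single __main__.py.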
Testing Your Setup
STDIO Testing (Claude Desktop)
- Configure Claude Desktop with the server
- Test tools through Claude Desktop interface
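Configuring Claude Desktop means adding an entry to its claude_desktop_config.json. A sketch of such an entry, assuming uv is on your PATH; the repository path below is a placeholder you must replace, and your exact invocation may differ:

```json
{
  "mcpServers": {
    "mcp-general": {
      "command": "uv",
      "args": [
        "run", "--directory", "/path/to/mcp-server-language-converter",
        "python", "-m", "src.mcp_servers.mcp_general", "stdio"
      ]
    }
  }
}
```

After editing the config, restart Claude Desktop so it picks up the new server.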
HTTP Streaming Testing
1. Quick test with curl:

   curl -N -H "Accept: text/event-stream" http://localhost:8000/sse

2. MCP Inspector (Recommended):

   npx @modelcontextprotocol/inspector
   # Open http://localhost:3000 and connect to http://localhost:8000/sse

3. Comprehensive testing guide: see HTTP Streaming Guide
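The curl test above returns a raw text/event-stream body. If you want to inspect it programmatically, the wire format is simple to parse with the standard library; this is an illustrative parser, not the project's client code.

```python
# Minimal parser for the text/event-stream (SSE) wire format returned by
# the SSE endpoint. Illustrative only; real clients should use an SSE or
# MCP client library.

def parse_sse(stream: str) -> list[dict]:
    """Split a raw SSE stream into events keyed by field name."""
    events, current = [], {}
    for line in stream.splitlines():
        if not line:                      # blank line terminates an event
            if current:
                events.append(current)
                current = {}
        elif ":" in line:
            field, _, value = line.partition(":")
            value = value.lstrip(" ")
            if field == "data":           # multiple data lines accumulate
                current["data"] = current.get("data", "") + value
            else:
                current[field] = value
    if current:
        events.append(current)
    return events

raw = 'event: message\ndata: {"jsonrpc": "2.0"}\n\n'
print(parse_sse(raw))  # → [{'event': 'message', 'data': '{"jsonrpc": "2.0"}'}]
```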
Streamable HTTP Testing
1. Python client test:

   uv run python test_streamable_http_client.py

2. Test both transports:

   uv run python test_both_transports.py

3. Comprehensive guide: see Streamable HTTP Guide
Prerequisites
- Python 3.12+
- UV (Python package manager)
- PostgreSQL 14+ (database)
- Docker (optional, for containerized deployment)
- Cursor IDE with Claude Code integration (recommended)
Installation
# Clone the repository
git clone <repository-url>
cd mcp-server-language-converter
# Install UV (if not already installed)
# macOS (Homebrew)
brew install uv
# Windows (Chocolatey)
choco install uv
# Install dependencies
uv sync
# Set up pre-commit hooks
uv run pre-commit install
Database Setup
# Install PostgreSQL
# macOS
brew install postgresql@16
brew services start postgresql@16
# Windows
choco install postgresql
# Create database
createdb mcp_server
# Configure environment
cp env.example .env
# Edit .env with your database credentials
# Initialize database tables
uv run python scripts/init_db.py
Note: Tool configuration is managed via config/tools.json, not the database. Edit this file to enable/disable tools or add new ones.
Running the Server
# Initialize database (first time only)
uv run python scripts/init_db.py
# Run General MCP server (STDIO mode)
uv run python -m src.mcp_servers.mcp_general
# Future: Run other domain-specific servers
# uv run python -m src.mcp_servers.mcp_os_commands
# uv run python -m src.mcp_servers.mcp_kubernetes
# uv run python -m src.mcp_servers.mcp_shopping
# Run tests
uv run pytest
# Run with coverage
uv run pytest --cov=src
Documentation
| Document | Description |
|---|---|
| Architecture | Architectural decisions, design patterns, and development phases |
| LangGraph Architecture | Multi-agent workflow for COBOL reverse engineering |
| COBOL Implementation | COBOL-specific implementation details |
| Setup Guide | Development environment setup, tools, and configuration |
| Database Guide | Database schema, setup, migrations, and management |
| Usage Guide | Common usage patterns and examples |
| Testing Quickstart | Minimal steps to test STDIO, SSE, and Streamable HTTP |
| Testing Guide | Claude Desktop and Cursor testing walkthrough |
| Contributing | Guidelines for contributing to the project |
| API Documentation | MCP tools/resources/prompts and REST endpoint reference |
Technology Stack
- Language: Python 3.12+
- Package Manager: UV - Fast Python package installer
- MCP Framework: FastMCP 2.0 - STDIO and HTTP streaming support
- REST Framework: FastAPI - High-performance REST API
- Database: PostgreSQL with async support (SQLAlchemy + asyncpg)
- Development Tools:
- Cursor IDE with Claude Code integration
- Pre-commit hooks for code quality
- Docker for containerization
- Ruff for linting and formatting
- Pytest for testing
Development Phases
The project is developed in three major phases, each with three sub-steps:
graph LR
subgraph Phase1["Phase 1: Tools"]
T1["1.1<br/>STDIO"]
T2["1.2<br/>HTTP Streaming"]
T3["1.3<br/>REST API"]
T1 --> T2 --> T3
end
subgraph Phase2["Phase 2: Resources"]
R1["2.1<br/>STDIO"]
R2["2.2<br/>HTTP Streaming"]
R3["2.3<br/>REST API"]
R1 --> R2 --> R3
end
subgraph Phase3["Phase 3: Prompts"]
P1["3.1<br/>STDIO"]
P2["3.2<br/>HTTP Streaming"]
P3["3.3<br/>REST API"]
P1 --> P2 --> P3
end
Phase1 --> Phase2 --> Phase3
style Phase1 fill:#1a1a1a,stroke:#fff,stroke-width:2px,color:#fff
style Phase2 fill:#1a1a1a,stroke:#fff,stroke-width:2px,color:#fff
style Phase3 fill:#1a1a1a,stroke:#fff,stroke-width:2px,color:#fff
style T1 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style T2 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style T3 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style R1 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style R2 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style R3 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style P1 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style P2 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
style P3 fill:#2d2d2d,stroke:#fff,stroke-width:2px,color:#fff
Summary:
- Phase 1: Tools - Implement MCP tools across all transport layers
- Phase 2: Resources - Add MCP resources across all transport layers
- Phase 3: Prompts - Implement MCP prompts across all transport layers
Each phase follows the same pattern: STDIO → HTTP Streaming → REST API
See Architecture Documentation for detailed phase breakdown.
Contributing
We welcome contributions! Please read our Contributing Guidelines before submitting PRs.
Development Workflow
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests and linters
- Submit a pull request
License
This project is licensed under the MIT License.
Additional References
- HTTP Streaming Guide - Complete guide for SSE transport implementation
- Streamable HTTP Guide - Complete guide for Streamable HTTP transport
Contact
- Email: hyalen@gmail.com
- LinkedIn: linkedin.com/in/hyalen
Built with Cursor + Claude Code
