Spring AI MCP Host (Java) - DSM-flavored Demo
A faithful Java/Spring Boot implementation of the MCP architecture from your diagram: one host application (Spring Boot + Anthropic Claude) maintains three MCP clients, each connected 1:1 to a separate MCP server that wraps an external system.
Architecture
┌──────────────────────────────────────┐
│         MCP HOST (port 8080)         │
│  Spring Boot + ChatClient + Claude   │
│                                      │
│  ┌──────────┐  ┌──────────┐  ┌─────┐ │
│  │ Client A │  │ Client B │  │  C  │ │
│  └────┬─────┘  └────┬─────┘  └──┬──┘ │
└───────┼─────────────┼───────────┼────┘
        │ MCP/HTTP    │ MCP/HTTP  │ MCP/HTTP
┌───────┴─────┐ ┌─────┴───────┐ ┌─┴─────────┐
│  db-server  │ │  fs-server  │ │ web-server│
│  port 8090  │ │  port 8091  │ │ port 8092 │
│             │ │             │ │           │
│ Tools:      │ │ Tools:      │ │ Tools:    │
│ - distrib   │ │ - listFiles │ │ - fetchUrl│
│ - volume    │ │ - readFile  │ │           │
│ - top-merch │ │ - writeFile │ │           │
│ - failed    │ │             │ │           │
└──────┬──────┘ └──────┬──────┘ └─────┬─────┘
       │               │              │
┌──────┴──────┐ ┌──────┴──────┐ ┌─────┴─────┐
│   H2/PG     │ │  ./sandbox  │ │ httpbin,  │
│  (txns DB)  │ │  (filesys)  │ │ wikipedia │
└─────────────┘ └─────────────┘ └───────────┘
Mapping to the diagram you shared:
- The diagram's MCP Host = the host/ module
- Each MCP Client = auto-created by Spring AI's MCP client starter, one per streamable-http.connections.* entry in application.yml
- Each MCP Server = one of db-server/, fs-server/, web-server/
- The transport between client and server = Streamable HTTP (the modern MCP transport, which replaces SSE)
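On the server side, each tool is an annotated Spring bean method. A minimal sketch of what a db-server tool might look like, assuming Spring AI's @Tool/@ToolParam annotations (the class name, SQL, and schema here are illustrative, not the project's actual code):

```java
import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

// Hypothetical sketch of a db-server tool bean. The MCP server publishes
// @Tool-annotated methods via tools/list; the description string is what
// the LLM reads when deciding which tool to call.
@Component
public class TransactionTools {

    private final JdbcTemplate jdbc;

    public TransactionTools(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    @Tool(description = "Count transactions of a given account type that failed enrichment")
    public Integer getFailedEnrichmentCounts(
            @ToolParam(description = "Account type, e.g. CHECKING or SAVINGS") String accountType) {
        // Illustrative schema: the real table/columns may differ.
        return jdbc.queryForObject(
                "SELECT COUNT(*) FROM transactions WHERE account_type = ? AND status = 'FAILED'",
                Integer.class, accountType);
    }
}
```

The description is doing real work here: it is the only thing the LLM sees when matching a user question to a tool, so write it like documentation, not like a method name.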
Project layout
mcp-host/
├── pom.xml          # Parent POM (BOM imports)
├── host/            # The MCP HOST application
│   └── src/main/...
├── db-server/       # MCP Server: transaction database
│   └── src/main/...
├── fs-server/       # MCP Server: sandboxed filesystem
│   └── src/main/...
├── web-server/      # MCP Server: web fetch (allowlisted)
│   └── src/main/...
└── scripts/
    ├── start-all.sh
    └── stop-all.sh
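The "BOM imports" comment on the parent POM refers to Spring AI's bill of materials, which pins every spring-ai-starter-* module to a consistent version. A plausible excerpt (the version shown is an assumption; check the current Spring AI release):

```xml
<!-- Hypothetical excerpt of the parent pom.xml -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.ai</groupId>
      <artifactId>spring-ai-bom</artifactId>
      <version>1.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```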
Prerequisites
- Java 21
- Maven 3.9+
- An Anthropic API key (or swap the LLM; see "Switching LLM providers" below)
Quickstart
# 1. Set your API key
export ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxx
# 2. Build everything
mvn -DskipTests clean package
# 3. Start all four services
./scripts/start-all.sh
# 4. Ask a question that requires the database
curl 'http://localhost:8080/chat?q=How+many+CHECKING+transactions+failed+enrichment%3F'
# 5. Ask one that requires the filesystem
curl 'http://localhost:8080/chat?q=What+does+the+vendor-notes.txt+file+say+about+performance+targets%3F'
# 6. Ask one that combines sources
curl 'http://localhost:8080/chat?q=Compare+the+actual+vendor+primacy+rate+for+CHECKING+with+the+target+in+vendor-notes.txt'
# 7. Stop everything
./scripts/stop-all.sh
How the wiring works (the part most tutorials gloss over)
Look at host/src/main/resources/application.yml:
spring:
ai:
mcp:
client:
streamable-http:
connections:
transaction-server: { url: http://localhost:8090 }
filesystem-server: { url: http://localhost:8091 }
web-server: { url: http://localhost:8092 }
toolcallback:
enabled: true
That's it. The spring-ai-starter-mcp-client dependency reads those three connections,
creates one MCP client per entry, calls each server's tools/list endpoint at startup,
and aggregates every discovered tool into a single ToolCallbackProvider bean. The
host's ChatClient gets all of them attached, so the LLM sees a unified tool palette.
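The host-side wiring described above could look roughly like this (a hedged sketch, not the project's actual code: the controller name is hypothetical, and the exact ChatClient.Builder methods should be checked against your Spring AI version):

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical host controller. Spring AI's MCP client starter contributes a
// ToolCallbackProvider bean aggregating every tool discovered from the three
// servers; attaching it to the ChatClient gives the LLM the unified palette.
@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder builder, ToolCallbackProvider tools) {
        this.chatClient = builder
                .defaultToolCallbacks(tools)  // all tools from all three MCP servers
                .build();
    }

    @GetMapping("/chat")
    public String chat(@RequestParam String q) {
        // call() runs the full tool-use loop: model picks a tool, Spring AI
        // invokes it over MCP/HTTP, the result goes back to the model.
        return chatClient.prompt().user(q).call().content();
    }
}
```

This is the endpoint the Quickstart's curl commands hit; nothing in it names a specific server, which is why adding a fourth MCP server is a config-only change.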
When you ask "how many CHECKING transactions failed enrichment?", Claude:
- Looks at all available tools and their descriptions
- Picks getFailedEnrichmentCounts from the transaction-server
- Spring AI's MCP client runtime invokes it over HTTP
- The result flows back into the LLM, which writes a natural-language answer
Things to try (and why each one matters)
| Question | What it teaches |
|---|---|
| "How many CHECKING transactions did we have in the last 7 days?" | Tool selection by description matching |
| "What's the total volume across all account types in the last 30 days?" | LLM calls the same tool multiple times with different args |
| "Read vendor-notes.txt and summarize the latency targets" | Cross-server: filesystem tool used standalone |
| "Compare our actual Spade success rate to the target in vendor-notes.txt" | Cross-server reasoning: DB tool + FS tool combined |
| "Fetch https://httpbin.org/json and tell me what's there" | Web tool with allowlist |
| "Fetch https://evil.com/x" | Allowlist rejection: see the guardrail in action |
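The allowlist rejection in the last row can be sketched as a plain-Java host check (illustrative only; the class name, method name, and allowed hosts are assumptions, not the project's actual web-server code, and the real server would load the list from config):

```java
import java.net.URI;
import java.util.Set;

public class UrlAllowlist {

    // Hypothetical allowlist; the real server would load this from config.
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("httpbin.org", "en.wikipedia.org");

    /** Returns true only for https URLs whose host is explicitly allowlisted. */
    public static boolean isAllowed(String url) {
        try {
            URI uri = URI.create(url);
            return "https".equals(uri.getScheme())
                    && uri.getHost() != null
                    && ALLOWED_HOSTS.contains(uri.getHost());
        } catch (IllegalArgumentException e) {
            return false; // malformed URL: reject
        }
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("https://httpbin.org/json")); // true
        System.out.println(isAllowed("https://evil.com/x"));       // false
    }
}
```

Note the check is deny-by-default: anything not on the list, including a malformed URL or a non-https scheme, is rejected.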
Switching LLM providers
The MCP architecture is LLM-agnostic. To swap Anthropic for OpenAI:
<!-- in host/pom.xml, replace the anthropic dep with: -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
# in host/application.yml, replace anthropic config with:
spring.ai.openai.api-key: ${OPENAI_API_KEY}
spring.ai.openai.chat.options.model: gpt-4o
Or use Ollama for fully local inference; same pattern. You change one dependency and a few config lines. The MCP servers don't change at all. That's the whole point of MCP.
Production hardening checklist
This is a learning project. Before anything resembling production:
- Auth on every server. Put OAuth2 / mTLS / API keys in front of each MCP server. Spring AI 1.1+ has MCP Security support built in.
- Tool input validation. The LLM controls tool arguments. Validate them on the server side as if they were user input, because effectively they are.
- Audit logging. Log every tool invocation with caller identity, args, and result size. This is non-negotiable in a regulated environment.
- Rate limits per tool. A confused LLM can call a tool in a loop. Cap it.
- PII masking on tool outputs. Reuse your existing @MaskPii AOP aspect; apply it to the methods on TransactionTools.
- Don't expose write tools without an explicit user-confirmation flow. The writeFile tool should require a separate approval step in production.
- Network egress controls. The web server's allowlist is a starting point, not a real control. Force outbound traffic through your bank's egress proxy.
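The rate-limit bullet can be sketched as a per-tool call budget in plain Java (illustrative only; the class and method names are hypothetical, and production code would more likely use a library such as Resilience4j or Bucket4j):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Caps how many times each tool may be invoked within one chat exchange. */
public class ToolCallBudget {

    private final int maxCallsPerTool;
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    public ToolCallBudget(int maxCallsPerTool) {
        this.maxCallsPerTool = maxCallsPerTool;
    }

    /** Returns true if the call is within budget; false once the cap is hit. */
    public boolean tryAcquire(String toolName) {
        int used = counts.computeIfAbsent(toolName, k -> new AtomicInteger())
                         .incrementAndGet();
        return used <= maxCallsPerTool;
    }

    public static void main(String[] args) {
        ToolCallBudget budget = new ToolCallBudget(2);
        System.out.println(budget.tryAcquire("fetchUrl")); // true
        System.out.println(budget.tryAcquire("fetchUrl")); // true
        System.out.println(budget.tryAcquire("fetchUrl")); // false (cap of 2 hit)
    }
}
```

A check like this would sit in front of each tool invocation on the host, so a looping LLM fails fast instead of hammering a server.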
How this maps to your DSM 2.0 work
The db-server is intentionally modeled on DSM 2.0:
- The schema mirrors your enrichment output (account_type, vendor, status)
- The tools mirror queries you already run on Datadog dashboards (vendor primacy distribution, failed enrichment counts)
- The vendor-notes file mirrors the kind of operational context that lives in Confluence today
A natural next step would be to point the db-server at the actual Postgres
schema you use for enrichment routing rules. The MCP server pattern means an
LLM-powered Slack bot could answer engineering questions like "what's our Spade
success rate by account type today?" without anyone hand-writing SQL.
