Spring AI Showcase With Simple MCP Server
A Spring AI showcase with Chat and MCP-Server to experiment with tools, resources, and prompts.
Project
This project is a small demo for Spring AI.
It provides an MCP-Server to be used by your local LLM.
The tools provided by this application are:
- DateTimeTools:
  - Get the current time
  - Get the current date
  - Get the current year
  - Get the current weekday
```mermaid
graph LR
    A(LLM) --> B(MCP-Server)
    B --> C(DateTimeTools)
```
Essentially, this application is an aggregation layer that gives the LLM access to real-time data.
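As an illustration of how such tools can be implemented, here is a minimal sketch using Spring AI's @Tool annotation (the method names and return formats are assumptions, not necessarily the project's actual code):

```java
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.Year;

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.stereotype.Component;

@Component
public class DateTimeTools {

    @Tool(description = "Get the current time")
    public String getCurrentTime() {
        return LocalTime.now().toString();
    }

    @Tool(description = "Get the current date")
    public String getCurrentDate() {
        return LocalDate.now().toString();
    }

    @Tool(description = "Get the current year")
    public String getCurrentYear() {
        return String.valueOf(Year.now().getValue());
    }

    @Tool(description = "Get the current week day")
    public String getCurrentWeekDay() {
        return LocalDate.now().getDayOfWeek().toString();
    }
}
```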
Tech Stack
- Java 21.0.8
- Maven 3.9.11
- Spring Boot 3.5.7
- Spring AI 1.1.0
- Docker Compose (only used for the database; should work with Podman as well)
- LM Studio 0.3.31
Project structure
Besides the standard Spring Boot Maven structure, there is one additional folder:
- docker: Contains the docker-compose.yml file to start the application and database locally
LLM
To use this project you need to run an LLM with tool support, ideally with thinking enabled; otherwise it did not work
for me.
I recommend using LM Studio with openai/gpt-oss-20b or qwen/qwen3-4b-thinking-2507 as the model.
openai/gpt-oss-20b has to be set to 16k context to load properly, but it is blazingly fast.
qwen/qwen3-4b-thinking-2507 is fast and has a small memory footprint.
To make sure you can see every setting, turn on Developer Mode.
MCP-Server configuration in LM Studio
On the right side, under Program > Install > Edit mcp.json, you can add this application as a server:
```json
{
  "mcpServers": {
    "showcase-lite": {
      "url": "http://localhost:58090/mcp"
    }
  }
}
```
That should be enough to get started; the tools become available as soon as you start the application.
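For reference, on the application side Spring AI discovers MCP tools through a ToolCallbackProvider bean. A minimal sketch, assuming Spring AI's MCP server starter is on the classpath (the configuration class and bean names are illustrative):

```java
import org.springframework.ai.tool.ToolCallbackProvider;
import org.springframework.ai.tool.method.MethodToolCallbackProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class McpServerConfig {

    // Exposes all @Tool methods of DateTimeTools over the MCP endpoint
    @Bean
    public ToolCallbackProvider dateTimeToolProvider(DateTimeTools dateTimeTools) {
        return MethodToolCallbackProvider.builder()
                .toolObjects(dateTimeTools)
                .build();
    }
}
```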
LM Studio server configuration
To allow the application to connect to LM Studio for the chat in the application frontend, you need to navigate to
Developer in the left menu.
Now you can toggle the server, and it should be available under http://localhost:1234.
If you want to use the LM Studio server across your whole network, you can toggle Serve on Local Network in the server
settings.
Please be aware that this server is not secure and should only be used in a local and trusted network.
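Since LM Studio exposes an OpenAI-compatible API, the application can point Spring AI's OpenAI client at it. A minimal sketch of what the relevant application.properties entries could look like (the values are assumptions; the project may already ship equivalents):

```properties
# LM Studio's OpenAI-compatible endpoint (default port 1234)
spring.ai.openai.base-url=http://localhost:1234
# LM Studio ignores the key, but Spring AI requires a non-empty value
spring.ai.openai.api-key=lm-studio
# Any model loaded in LM Studio, e.g. one of the recommendations above
spring.ai.openai.chat.options.model=qwen/qwen3-4b-thinking-2507
```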
Running the application
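A minimal sketch of the usual steps, assuming the Maven wrapper is present and the compose file in the docker folder provides the database:

```bash
# start the database via the compose file in the docker folder
docker compose -f docker/docker-compose.yml up -d

# start the Spring Boot application
./mvnw spring-boot:run
```

Once the application is up, the MCP endpoint from the configuration above should respond on http://localhost:58090/mcp.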
Sample Prompts
A valid answer to the following prompt should reflect your current local time, with no timezone offsets applied.
What is the current time?
About my system, for reference
(No bragging, just to give you an idea of how this might run on your system.) I am using the following system:
- Linux Mint 22.2
- AMD Ryzen 7 9700X
- 64GB RAM, but not much is used for this project
- XFX 9070 XT 16GB
Performance:
| Model | "What is the current time?" (one tool call) |
|---|---|
| qwen/qwen3-4b-thinking-2507 | ~2 seconds |
| ibm/granite-4-h-tiny | ~1 second |
| openai/gpt-oss-20b (context reduced to 16k) | ~1 second |
| microsoft/phi-4-reasoning-plus | did not terminate / endless loop in reasoning |
| mistralai/magistral-small-2509 | ~25 seconds |
| qwen/qwen3-14b | ~50 seconds |
Issues
- No differentiation between tools, resources, and prompts
- No tests / no useful tests
- Performance is measured manually
