io.github.guidance-ai/guidance-lark-mcp
Validate and test llguidance grammars with batch testing and documentation
MCP Grammar Tools
MCP server for validating and testing llguidance grammars (Lark format). Provides grammar validation, batch test execution, and syntax documentation, ideal for iteratively building grammars with AI coding assistants.
Installation
With uvx (recommended)
uvx guidance-lark-mcp
With pip
pip install guidance-lark-mcp
From source
git clone https://github.com/guidance-ai/guidance-lark-mcp
cd guidance-lark-mcp
pip install -e .
MCP Client Configuration
GitHub Copilot CLI
You can add the server using the interactive /mcp add command or by editing the config file directly. See the Copilot CLI MCP documentation for full details.
Option 1: Interactive setup
In the Copilot CLI, run /mcp add, select Local/STDIO, and enter uvx guidance-lark-mcp as the command.
Option 2: Edit config file
Add the following to ~/.copilot/mcp-config.json:
{
"mcpServers": {
"grammar-tools": {
"type": "local",
"command": "uvx",
"args": ["guidance-lark-mcp"],
"tools": ["*"]
}
}
}
This gives you grammar validation and batch testing out of the box. To also enable LLM-powered generation (generate_with_grammar), add ENABLE_GENERATION and your credentials to env:
"env": {
"ENABLE_GENERATION": "true",
"OPENAI_API_KEY": "your-key-here"
}
For Azure OpenAI (with Entra ID via az login), use guidance-lark-mcp[azure] and set the endpoint instead:
"args": ["guidance-lark-mcp[azure]"],
"env": {
"ENABLE_GENERATION": "true",
"AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
"OPENAI_MODEL": "your-deployment-name"
}
See Backend Configuration for all supported backends.
After saving, use /mcp show to verify the server is connected.
VS Code
{
"mcpServers": {
"grammar-tools": {
"type": "local",
"command": "uvx",
"args": ["guidance-lark-mcp"],
"env": {
"ENABLE_GENERATION": "true",
"OPENAI_API_KEY": "your-key-here"
},
"tools": ["*"]
}
}
}
Claude Desktop
{
"mcpServers": {
"grammar-tools": {
"command": "uvx",
"args": ["guidance-lark-mcp"],
"env": {
"ENABLE_GENERATION": "true",
"OPENAI_API_KEY": "your-key-here"
}
}
}
}
Usage
Available Tools
- validate_grammar: Validate grammar completeness and consistency using llguidance's built-in validator.
  {"grammar": "start: \"hello\" \"world\""}
- run_batch_validation_tests: Run batch validation tests from a JSON file against a grammar. Returns pass/fail statistics and detailed failure info.
  {"grammar": "start: /[0-9]+/", "test_file": "tests.json"}
  Test file format:
  [
    {"input": "123", "should_parse": true, "description": "Valid number"},
    {"input": "abc", "should_parse": false, "description": "Not a number"}
  ]
- get_llguidance_documentation: Fetch the llguidance grammar syntax documentation from the official repo.
- generate_with_grammar (optional, requires ENABLE_GENERATION=true): Generate text using an OpenAI model constrained by a grammar. Uses the Responses API with the custom tool grammar format, so output is guaranteed to conform to the grammar. Requires the OPENAI_API_KEY environment variable. See Backend Configuration for Azure and other endpoints.
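To make the pass/fail semantics concrete, here is a minimal local analogue of run_batch_validation_tests, sketched in Python with a plain re pattern standing in for the grammar start: /[0-9]+/ (the real tool parses inputs with llguidance, not Python's re module; the run_tests function and result shape here are illustrative, not the server's actual API):

```python
import json
import re

# Stand-in for the grammar "start: /[0-9]+/"; the real server
# matches inputs against the compiled llguidance grammar instead.
NUMBER = re.compile(r"[0-9]+\Z")

def run_tests(tests: list) -> dict:
    """Mirror the documented test-file format: each case has
    "input", "should_parse", and an optional "description"."""
    failures = []
    for case in tests:
        parsed = NUMBER.match(case["input"]) is not None
        if parsed != case["should_parse"]:
            failures.append(case.get("description", case["input"]))
    return {
        "total": len(tests),
        "passed": len(tests) - len(failures),
        "failures": failures,
    }

tests = json.loads("""[
  {"input": "123", "should_parse": true, "description": "Valid number"},
  {"input": "abc", "should_parse": false, "description": "Not a number"}
]""")
print(run_tests(tests))  # both cases pass: {'total': 2, 'passed': 2, 'failures': []}
```

A test fails either when an input that should parse is rejected, or when an input that should be rejected parses; the tool reports both kinds of mismatch in its failure details.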
Backend Configuration
The generate_with_grammar tool uses the OpenAI Python SDK, which natively supports multiple backends via environment variables:
| Backend | Required env vars | Optional env vars |
|---|---|---|
| OpenAI (default) | OPENAI_API_KEY | OPENAI_MODEL |
| Azure OpenAI (API key) | AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY | AZURE_OPENAI_API_VERSION, OPENAI_MODEL |
| Azure OpenAI (Entra ID) | AZURE_OPENAI_ENDPOINT + az login | AZURE_OPENAI_API_VERSION, OPENAI_MODEL |
| Custom endpoint | OPENAI_API_KEY, OPENAI_BASE_URL | OPENAI_MODEL |
The server auto-detects which backend to use:
- If AZURE_OPENAI_ENDPOINT is set, it uses the AzureOpenAI client (with Entra ID or API key)
- Otherwise, it uses the OpenAI client (which reads OPENAI_API_KEY and OPENAI_BASE_URL automatically)
The server logs which backend it detects on startup.
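The selection rule above can be sketched as follows (a simplified illustration: detect_backend is a hypothetical name, and the real server constructs the corresponding OpenAI SDK client rather than returning a label):

```python
import os

def detect_backend(env=None) -> str:
    """Simplified sketch of the backend auto-detection described above."""
    env = os.environ if env is None else env
    if env.get("AZURE_OPENAI_ENDPOINT"):
        # AzureOpenAI client; Entra ID unless an API key is provided
        if env.get("AZURE_OPENAI_API_KEY"):
            return "azure-api-key"
        return "azure-entra-id"
    # Default OpenAI client; honors OPENAI_API_KEY and OPENAI_BASE_URL
    return "openai"

print(detect_backend({"AZURE_OPENAI_ENDPOINT": "https://x.openai.azure.com"}))
# azure-entra-id
```

Note that the Azure branch wins whenever AZURE_OPENAI_ENDPOINT is set, even if OPENAI_API_KEY is also present, so unset the endpoint variable to fall back to the plain OpenAI backend.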
Example: Azure OpenAI (API key)
{
"mcpServers": {
"grammar-tools": {
"type": "local",
"command": "uvx",
"args": ["guidance-lark-mcp"],
"env": {
"ENABLE_GENERATION": "true",
"AZURE_OPENAI_ENDPOINT": "https://my-resource.openai.azure.com",
"AZURE_OPENAI_API_KEY": "your-azure-key",
"OPENAI_MODEL": "gpt-4.1"
},
"tools": ["*"]
}
}
}
Example: Azure OpenAI (Entra ID / keyless)
Requires az login and the azure extra: pip install "guidance-lark-mcp[azure]" (quote the extras so your shell does not expand the brackets)
{
"mcpServers": {
"grammar-tools": {
"type": "local",
"command": "uvx",
"args": ["guidance-lark-mcp[azure]"],
"env": {
"ENABLE_GENERATION": "true",
"AZURE_OPENAI_ENDPOINT": "https://my-resource.openai.azure.com",
"OPENAI_MODEL": "gpt-4.1"
},
"tools": ["*"]
}
}
}
Example Workflow
Build a grammar iteratively with an AI assistant:
- Start with the spec: paste EBNF rules from a language specification
- Write a basic grammar: translate a few rules to Lark format
- Validate: use validate_grammar to check for missing rules
- Write tests: create a JSON test file with sample inputs
- Batch test: use run_batch_validation_tests to find failures
- Fix & repeat: refine the grammar until all tests pass
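For the "Write tests" step, the JSON test file can be generated programmatically rather than typed by hand. A minimal sketch (the file name tests.json matches the earlier example; the specific cases are illustrative, for a digits-only grammar like start: /[0-9]+/):

```python
import json

# Test cases in the format expected by run_batch_validation_tests:
# each entry has "input", "should_parse", and a "description".
cases = [
    {"input": "123", "should_parse": True, "description": "Valid number"},
    {"input": "007", "should_parse": True, "description": "Leading zeros"},
    {"input": "abc", "should_parse": False, "description": "Not a number"},
    {"input": "", "should_parse": False, "description": "Empty input"},
]

with open("tests.json", "w") as f:
    json.dump(cases, f, indent=2)
```

The resulting file is then referenced by path in the tool call, e.g. {"grammar": "start: /[0-9]+/", "test_file": "tests.json"}.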
Example Grammars
The examples/ directory includes sample grammars built using these tools, with Lark grammar files, test suites, and documentation:
- GraphQL: executable subset of the GraphQL spec (queries, mutations, fragments, variables)
Troubleshooting
Server fails to connect in Copilot CLI / VS Code?
MCP clients like Copilot CLI often show only a generic "Connection closed" message when a server crashes on startup. To see the actual error, run the server directly in your terminal:
uvx guidance-lark-mcp
Or with generation enabled:
ENABLE_GENERATION=true OPENAI_API_KEY=your-key uvx guidance-lark-mcp
Common issues:
- Missing credentials: ENABLE_GENERATION=true without a valid OPENAI_API_KEY or AZURE_OPENAI_ENDPOINT. The server will still start and serve validation tools; generate_with_grammar will return a descriptive error.
- Azure Entra ID: make sure you've run az login and are using guidance-lark-mcp[azure] (not the base package).
- Slow first start: uvx needs to resolve and install dependencies on first run, which may exceed the MCP client's connection timeout. Run uvx guidance-lark-mcp once manually to warm the cache.
- Updating to a new version: uvx caches packages, so after a new release you may need to clear the cache and restart your MCP client: uv cache clean guidance-lark-mcp
Development
git clone https://github.com/guidance-ai/guidance-lark-mcp
cd guidance-lark-mcp
uv sync
uv run pytest tests/ -q
