GenAI Playground
A Python CLI for experimenting with AI models and MCP (Model Context Protocol) tools.
Supports two operational modes:
- V1 (Default): Direct Azure OpenAI + MCP integration
- V2: GitHub Copilot SDK-powered agent orchestration
Features
- Azure OpenAI Integration: Uses the Azure OpenAI SDK for model inference
- MCP Tool Support: Connect multiple MCP tool servers for extended capabilities
- Flexible Configuration: Use JSON config files or command-line arguments
- Streaming Responses: See AI responses appear token-by-token in real-time
- Reasoning Display: View the model's thinking process (for reasoning models like o1, o3, gpt-5)
- Built-in Tools:
- Web Search: Search the web using DuckDuckGo (no API key required)
- Azure Data Explorer: Query Kusto databases with KQL
Project Structure
genai_playground/
├── src/
│   ├── playground.py        # CLI entry point
│   ├── client.py            # V1: Azure OpenAI + MCP orchestration
│   └── client_v2.py         # V2: GitHub Copilot SDK integration
├── tools/
│   ├── web_search.py        # Web search MCP server
│   └── adx_kusto.py         # Azure Data Explorer MCP server
├── examples/
│   ├── web_search.json      # Example: Web search configuration
│   ├── adx_query.json       # Example: ADX query configuration
│   └── multi_tool.json      # Example: Multiple tools configuration
├── requirements.txt         # Python dependencies
├── .env.template            # Environment variables template
└── README.md                # This file
Setup
1. Create Virtual Environment
python -m venv .venv
# Windows
.venv\Scripts\activate
# Linux/Mac
source .venv/bin/activate
2. Install Dependencies
pip install -r requirements.txt
3. Configure Environment
Copy the environment template and fill in your Azure OpenAI credentials:
cp .env.template .env
Edit .env with your values:
- AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_OPENAI_API_KEY: Your API key
- AZURE_OPENAI_DEPLOYMENT: Your deployment name (e.g., gpt-4o)
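A filled-in .env might look like this (placeholder values shown; substitute your own):

```shell
# .env — placeholder values, replace with your own credentials
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=your-api-key
AZURE_OPENAI_DEPLOYMENT=gpt-4o
```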
Usage
Basic Command
# Activate virtual environment first
.venv\Scripts\activate # Windows
source .venv/bin/activate # Linux/Mac
# Run with a simple prompt (no tools)
python src/playground.py run -p "What is the capital of France?"
# Run with web search tool
python src/playground.py run -p "What are the latest AI news?" -t tools/web_search.py -v
# Run with a config file
python src/playground.py run --config examples/web_search.json
Interactive Chat Mode
Start a continuous conversation session where you can ask follow-up questions:
# Start chat with web search tool
python src/playground.py chat -t tools/web_search.py
# Start chat with verbose output to see tool calls
python src/playground.py chat -t tools/web_search.py -v
# Chat with a config file for base settings
python src/playground.py chat --config examples/web_search.json
Chat commands:
- Type your message and press Enter to send
- Type clear to reset the conversation history
- Type exit or quit to end the session
CLI Options
python src/playground.py run --help
Options:
-c, --config PATH Path to JSON configuration file
-s, --system TEXT System prompt
-p, --prompt TEXT User prompt (required)
-m, --model TEXT Model/deployment name [default: gpt-4o]
-t, --tool PATH Tool script path (can use multiple times)
--max-iterations INTEGER Max tool call iterations [default: 10]
-v, --verbose Show detailed output
--azure-endpoint TEXT Azure OpenAI endpoint
--azure-api-key TEXT Azure OpenAI API key
--azure-deployment TEXT Azure OpenAI deployment name
-o, --output PATH Write response to file
--raw Output raw text without formatting
List Available Tools
python src/playground.py list-tools tools/web_search.py
Generate Config Template
python src/playground.py init-config -o my_config.json
Configuration File Format
{
"system_prompt": "You are a helpful assistant.",
"user_prompt": "Your question here",
"model": "gpt-4o",
"tools": [
"tools/web_search.py",
"tools/adx_kusto.py"
],
"max_iterations": 10,
"verbose": true,
"stream": true,
"show_reasoning": true,
"reasoning_effort": null,
"azure_endpoint": "https://your-resource.openai.azure.com/",
"azure_deployment": "gpt-4o"
}
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| system_prompt | string | "You are a helpful assistant." | The system prompt for the conversation |
| user_prompt | string | "" | The initial user prompt (for single-shot mode) |
| model | string | "gpt-4o" | The model name |
| tools | array | [] | List of MCP tool configurations |
| max_iterations | int | 10 | Maximum tool call iterations |
| verbose | bool | false | Enable verbose output |
| stream | bool | true | Enable streaming responses |
| show_reasoning | bool | true | Display reasoning content from reasoning models |
| reasoning_effort | string | null | Reasoning effort level: "low", "medium", or "high" |
| azure_endpoint | string | - | Azure OpenAI endpoint URL (overrides env) |
| azure_deployment | string | - | Deployment name (overrides env) |
Streaming & Reasoning
Streaming Responses
By default, chat mode streams responses token-by-token in real-time, giving you immediate feedback as the AI generates its response.
When streaming is enabled, you'll see a status line when chat starts:
[Streaming enabled, reasoning display on]
Type 'exit' or 'quit' to end, 'clear' to reset conversation.
You: What is the capital of France?
Tools
This playground supports any MCP-compatible tool. You have full flexibility to:
- Use built-in tools we've included in this repo
- Create your own custom MCP servers for internal/proprietary tools
- Use pre-built MCP servers from the community via uvx
Option 1: Built-in Tools
Web Search (tools/web_search.py)
A simple example of a custom MCP server we created. Provides web search capabilities using DuckDuckGo:
- search_web: Search the web for a query
- search_news: Search for recent news articles
No API key required!
Usage:
# As a command-line argument
python src/playground.py run -p "What's the latest AI news?" -t tools/web_search.py
# Or in a config file
{
"tools": ["tools/web_search.py"]
}
Option 2: Create Your Own MCP Server (In-House Tools)
You can create custom MCP servers for any internal tool, API, or data source. Here's an example of how we built the ADX (Azure Data Explorer) tool:
Example: Custom ADX Tool (tools/adx_kusto.py)
This is a custom MCP server we created to query Kusto databases:
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
server = Server("adx-kusto")
@server.list_tools()
async def list_tools() -> list[Tool]:
return [
Tool(
name="execute_kql",
description="Execute a KQL query against Azure Data Explorer",
inputSchema={
"type": "object",
"properties": {
"query": {"type": "string", "description": "KQL query to execute"},
"database": {"type": "string", "description": "Database name"}
},
"required": ["query", "database"]
}
)
]
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
if name == "execute_kql":
# Connect to ADX and execute query
result = execute_query(arguments["database"], arguments["query"])
return [TextContent(type="text", text=result)]
async def main():
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, server.create_initialization_options())
if __name__ == "__main__":
import asyncio
asyncio.run(main())
Usage:
python src/playground.py run -p "Show me recent errors" -t tools/adx_kusto.py -v
When to create custom MCP servers:
- Proprietary APIs or internal services
- Custom data sources
- Tools requiring specific authentication
- Specialized business logic
Option 3: Use Pre-Built MCP Servers with uvx
The MCP ecosystem has many pre-built servers available on PyPI. You can use them directly with uvx (no local installation required).
Prerequisites: Install uv with pip install uv or see uv docs
Example: Azure Kusto MCP (from PyPI)
Instead of our custom tool, you can use the official azure-kusto-mcp package:
{
"tools": [
{
"name": "azure-kusto-mcp",
"command": "uvx",
"args": ["azure-kusto-mcp"],
"env": {
"KUSTO_SERVICE_URI": "https://your-cluster.kusto.windows.net"
}
}
]
}
Example: Other Community MCP Servers
You can use any MCP server published to PyPI:
{
"tools": [
{
"name": "filesystem",
"command": "uvx",
"args": ["mcp-server-filesystem", "/path/to/allowed/dir"]
},
{
"name": "github",
"command": "uvx",
"args": ["mcp-server-github"],
"env": {
"GITHUB_TOKEN": "your-token"
}
},
{
"name": "sqlite",
"command": "uvx",
"args": ["mcp-server-sqlite", "path/to/database.db"]
}
]
}
Finding MCP Servers:
- Browse PyPI for MCP servers
- Check the MCP Servers Repository
- Search for mcp-server-* or *-mcp packages
Summary: Choosing Your Approach
| Approach | When to Use | Example |
|---|---|---|
| Built-in tools | Quick start, common use cases | tools/web_search.py |
| Custom MCP server | Internal APIs, proprietary data, custom logic | tools/adx_kusto.py |
| Pre-built via uvx | Community tools, standard integrations | uvx azure-kusto-mcp |
You can mix and match all three approaches in a single configuration:
{
"tools": [
"tools/web_search.py",
"tools/my_custom_tool.py",
{
"name": "github",
"command": "uvx",
"args": ["mcp-server-github"],
"env": { "GITHUB_TOKEN": "..." }
}
]
}
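One way such a mixed list could be handled internally is to normalize every entry to the dict form before launching servers. This is an illustrative sketch under that assumption, not the repo's actual client code:

```python
# Illustrative sketch: normalize a mixed "tools" list whose entries are
# either bare script paths (launched with python) or full server dicts.
def normalize_tool(entry):
    if isinstance(entry, str):
        # A bare path becomes a python-launched stdio server.
        return {"name": entry, "command": "python", "args": [entry], "env": {}}
    # Dict entries keep their fields; default env to empty if absent.
    return {"env": {}, **entry}

tools = [
    "tools/web_search.py",
    {"name": "github", "command": "uvx", "args": ["mcp-server-github"]},
]
normalized = [normalize_tool(t) for t in tools]
```

After normalization, every entry has the same name/command/args/env shape, so the launch code needs only one path.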
Creating Custom Tools
Create a new Python file in the tools/ directory following the MCP server pattern:
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent
server = Server("my-tool")
@server.list_tools()
async def list_tools() -> list[Tool]:
return [
Tool(
name="my_function",
description="Description of what it does",
inputSchema={
"type": "object",
"properties": {
"param1": {"type": "string", "description": "..."}
},
"required": ["param1"]
}
)
]
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
if name == "my_function":
# Your logic here
return [TextContent(type="text", text="Result")]
async def main():
async with stdio_server() as (read_stream, write_stream):
await server.run(read_stream, write_stream, server.create_initialization_options())
if __name__ == "__main__":
import asyncio
asyncio.run(main())
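Before wiring a new tool into the playground, it can help to sanity-check that sample arguments actually satisfy the declared inputSchema. A minimal, dependency-free check (illustrative only; a real server might instead use the jsonschema package):

```python
# Minimal check that an arguments dict satisfies a JSON-Schema-style
# inputSchema (required keys and "string" types only; illustrative).
schema = {
    "type": "object",
    "properties": {"param1": {"type": "string", "description": "..."}},
    "required": ["param1"],
}

def args_match_schema(args: dict, schema: dict) -> bool:
    # Every required key must be present.
    for key in schema.get("required", []):
        if key not in args:
            return False
    # Declared string properties must actually be strings.
    for key, spec in schema.get("properties", {}).items():
        if key in args and spec.get("type") == "string" and not isinstance(args[key], str):
            return False
    return True
```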
Examples
Web Search Example
python src/playground.py run \
-s "You are a research assistant. Cite your sources." \
-p "What is the Model Context Protocol?" \
-t tools/web_search.py \
-v
ADX Query Example
python src/playground.py run \
-s "You are a data analyst. Help query Kusto data." \
-p "Show me the tables in my database" \
-t tools/adx_kusto.py \
-v
Using Config File
python src/playground.py run --config examples/web_search.json
Understanding Iterations
When using tools, the playground operates in iterations. Each iteration is one round-trip with the AI model:
Iteration 1: User prompt β Model decides to call tool(s) β Returns tool calls
β
CLI executes tools, sends results back
β
Iteration 2: Model receives results β Calls more tools OR provides final response
β
(repeat until final response or max_iterations)
Why multiple iterations?
- The model may call multiple tools in sequence
- It might refine searches based on initial results
- Complex queries may require gathering info from different sources
Tips to reduce iterations:
- Use specific, detailed prompts
- In system prompt, instruct the model to "make 1-2 searches maximum then provide answer"
- Set an appropriate max_iterations (default: 10)
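The iteration flow above can be sketched as a simple loop. This is an illustrative simplification, not the actual src/client.py implementation; call_model and execute_tool stand in for the Azure OpenAI call and MCP tool dispatch:

```python
# Simplified sketch of the tool-call iteration loop: each pass through
# the loop is one model round-trip (one "iteration").
def run_with_tools(call_model, execute_tool, messages, max_iterations=10):
    for _ in range(max_iterations):
        reply = call_model(messages)               # one round-trip
        tool_calls = reply.get("tool_calls") or []
        if not tool_calls:                         # no tools requested: final answer
            return reply["content"]
        messages.append(reply)                     # keep the assistant turn
        for call in tool_calls:                    # execute each requested tool
            result = execute_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
    return "[stopped: reached max_iterations without a final response]"
```

With a prompt that needs one search, the model requests a tool call on iteration 1, receives the result, and returns its final answer on iteration 2.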
V2 Mode: GitHub Copilot SDK
The playground includes an alternative execution mode powered by the GitHub Copilot SDK. This mode uses the same production-tested agent runtime behind GitHub Copilot CLI, providing:
- Agentic Workflows: Copilot handles planning, tool invocation, and orchestration
- Native MCP Support: Pass MCP servers directly to sessions without manual orchestration
- Streaming with Reasoning: Real-time response streaming with optional reasoning display
- Simplified Integration: No need to manage tool call loops manually
Prerequisites for V2 Mode
1. GitHub Copilot Subscription: A GitHub Copilot subscription is required. See GitHub Copilot pricing.
2. Copilot CLI: Install the GitHub Copilot CLI and ensure copilot is available in your PATH. Follow the Copilot CLI installation guide.
3. GitHub Copilot SDK: Install the Python SDK: pip install github-copilot-sdk
4. Authentication: Run copilot, then enter /login at the prompt and authenticate with your GitHub account.
Using V2 Mode
Add the --v2 flag to any command to use the Copilot SDK:
# Single prompt with V2
python src/playground.py run -p "What are the latest AI news?" -t tools/web_search.py --v2
# Interactive chat with V2
python src/playground.py chat -t tools/web_search.py --v2
# Chat with reasoning display
python src/playground.py chat -t tools/web_search.py --v2 --reasoning
# With config file
python src/playground.py chat --config examples/adx_query.json --v2 -v
V2-Specific Options
| Option | Description |
|---|---|
| --v2 | Enable GitHub Copilot SDK mode |
| --reasoning, -r | Show the AI's reasoning/thinking process (V2 only) |
How V2 Works
The V2 client (src/client_v2.py) wraps the GitHub Copilot SDK:
Your Prompt
β
CopilotClient (SDK)
β JSON-RPC
Copilot CLI (server mode)
β
GitHub Copilot API
β
MCP Tool Servers (if configured)
The SDK manages:
- Session lifecycle and state
- Tool discovery and invocation
- Response streaming and events
- Context compaction for long sessions
V2 vs V1 Comparison
| Feature | V1 (Default) | V2 (Copilot SDK) |
|---|---|---|
| Backend | Azure OpenAI | GitHub Copilot |
| Auth | Azure API Key | GitHub Account |
| Tool Orchestration | Manual loop | Automatic |
| Reasoning Display | Model-dependent | Built-in support |
| Billing | Azure consumption | Copilot subscription |
Example: V2 with Custom MCP Tools
{
"system_prompt": "You are a data analyst assistant.",
"tools": [
{
"name": "adx-kusto",
"command": "python",
"args": ["tools/adx_kusto.py"],
"tools_filter": ["*"]
}
],
"verbose": true,
"stream": true,
"show_reasoning": true
}
Run with:
python src/playground.py chat --config examples/adx_query.json --v2
License
MIT