Ajenitk Context
agentic context and planning for coding projects
Ajenitk Context - Ajentik AI System
A powerful, modular ajentik AI system built with PydanticAI and enhanced with Logfire monitoring. This system provides autonomous agents for chat, code generation, and code analysis with rich CLI interactivity and comprehensive observability.
Features
- Multiple Specialized Agents
  - ChatAgent: Interactive conversational AI with memory management
  - CodeAgent: Intelligent code generation across multiple languages
  - AnalysisAgent: Comprehensive code analysis for quality, security, and performance
- Rich CLI Interface
  - Enhanced terminal UI with colors and animations
  - Interactive menus and file browsers
  - Real-time progress indicators
  - Markdown rendering in terminal
- Comprehensive Monitoring
  - Real-time metrics dashboard
  - Performance tracking and alerts
  - Token usage and cost monitoring
  - Distributed tracing with Logfire
- Flexible Architecture
  - Support for multiple AI providers (OpenAI, Anthropic, Google)
  - Modular design with dependency injection
  - Async-first with sync compatibility
  - Extensible tool system with sandboxing
- Powerful Tool System
  - Dynamic tool loading and discovery
  - Built-in file system tools
  - Custom tool creation with decorators
  - Security validation and sandboxing
  - Auto-generated documentation
- MCP (Model Context Protocol) Support
  - Full MCP server implementation
  - MCP client for connecting to any MCP server
  - Seamless Claude Desktop integration
  - Tool bridging between MCP servers
  - Multiple transport protocols (stdio, SSE)
Quick Start
Installation
- Clone the repository:
  git clone https://github.com/yourusername/ajenitk-context.git
  cd ajenitk-context
- Install dependencies:
  pip install -r requirements.txt
- Configure environment:
  cp .env.example .env
  # Edit .env with your API keys
- Install the CLI:
  pip install -e .
Basic Usage
# Interactive chat
ajentik chat --enhanced
# Generate code
ajentik code generate -l python
# Analyze code
ajentik code analyze script.py
# View monitoring dashboard
ajentik monitor --live
# Manage tools
ajentik tools list
ajentik tools run calculator --params expression="2+2"
# MCP server/client
ajentik mcp server --categories file_system
ajentik mcp connect "npx @modelcontextprotocol/server-name"
# Configure settings
ajentik config
Configuration
Create a .env file with your API keys:
# AI Provider Keys (at least one required)
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GOOGLE_API_KEY=your-google-key
# Monitoring (optional)
LOGFIRE_WRITE_TOKEN=your-logfire-token
LOGFIRE_PROJECT=your-project-name
# Model Settings
DEFAULT_MODEL=openai:gpt-4o
DEFAULT_TEMPERATURE=0.7
DEFAULT_MAX_TOKENS=1000
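As an illustration of how these settings reach the application, here is a minimal, dependency-free sketch that parses simple KEY=value lines from a .env file (the project itself may rely on a library such as python-dotenv instead; `load_env` is a hypothetical helper, not part of the ajentik API):

```python
# Illustrative .env loader: reads KEY=value lines into os.environ,
# skipping comments and blanks. Existing environment variables win.
import os

def load_env(path: str = ".env") -> dict:
    settings = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                settings[key.strip()] = value.strip()
    except FileNotFoundError:
        pass  # no .env file: fall back to the process environment
    for key, value in settings.items():
        os.environ.setdefault(key, value)
    return settings
```

Keys already exported in the shell take precedence over the file, which is the usual convention for layered configuration.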
Examples
Chat with Context
from src.agents import ChatAgent
from src.models import ConversationHistory

# Run inside an async function (e.g. via asyncio.run)
agent = ChatAgent()
history = ConversationHistory(messages=[], session_id="my-session")
response = await agent.chat(
    "Explain Python decorators",
    conversation_history=history
)
print(response.message)
Generate Code
from src.agents import CodeAgent
from src.models import CodeGenerationRequest

# Run inside an async function (e.g. via asyncio.run)
agent = CodeAgent()
request = CodeGenerationRequest(
    description="Create a REST API endpoint",
    language="python",
    framework="fastapi",
    requirements=["Include authentication", "Add input validation"]
)
response = await agent.generate_code(request)
print(response.code)
Analyze Code
from pathlib import Path

from src.agents import AnalysisAgent
from src.models import CodeAnalysisRequest

# Run inside an async function (e.g. via asyncio.run)
agent = AnalysisAgent()
request = CodeAnalysisRequest(
    code=Path("script.py").read_text(),
    language="python",
    analysis_types=["security", "quality", "performance"]
)
response = await agent.analyze_code(request)
print(f"Score: {response.overall_score}/10")
for issue in response.issues:
    print(f"- {issue.description}")
Project Structure
ajenitk-context/
├── src/
│   ├── agents/              # Agent implementations
│   │   ├── base_agent.py
│   │   ├── chat_agent.py
│   │   ├── code_agent.py
│   │   └── analysis_agent.py
│   ├── cli/                 # CLI interface
│   │   ├── main.py
│   │   ├── chat_interface.py
│   │   ├── tools_command.py
│   │   └── utils.py
│   ├── models/              # Data models
│   │   ├── configs.py
│   │   └── schemas.py
│   ├── monitoring/          # Observability
│   │   └── enhanced_monitoring.py
│   ├── tools/               # Tool system
│   │   ├── base.py          # Base classes
│   │   ├── registry.py      # Tool registry
│   │   ├── decorators.py    # Tool decorators
│   │   ├── loader.py        # Dynamic loading
│   │   ├── validation.py    # Security & validation
│   │   ├── documentation.py # Doc generation
│   │   └── builtin/         # Built-in tools
│   └── utils/               # Utilities
│       ├── dependencies.py
│       └── logfire_setup.py
├── examples/                # Example scripts
├── tests/                   # Test suite
└── docs/                    # Documentation
Testing
Run the test suite:
# Run all tests
python run_tests.py all
# Run specific test suite
python run_tests.py agents
# Run with coverage
python run_tests.py coverage
# Quick test run
python run_tests.py quick
Advanced Features
Multi-Model Support
# Use different AI providers
from src.agents import ChatAgent
from src.models import AgentConfig

config = AgentConfig(
    name="MultiModelAgent",
    model="anthropic:claude-3-5-sonnet"  # or "google:gemini-2.0"
)
agent = ChatAgent(config)
Tool System
from src.tools import tool, tool_registry
# Create a custom tool
@tool(name="word_counter", description="Count words in text")
def count_words(text: str) -> dict:
words = text.split()
return {"word_count": len(words), "char_count": len(text)}
# Use tools in agents
from src.agents import ChatAgent
agent = ChatAgent(tools=[
tool_registry.get("read_file"),
tool_registry.get("word_counter")
])
MCP Integration
from src.mcp import create_mcp_server, create_mcp_client

# Run inside an async function (e.g. via asyncio.run)
# Expose tools as MCP server
server = create_mcp_server(
categories=["file_system"],
security_level="sandboxed"
)
await server.start()
# Connect to MCP server
client = create_mcp_client(
server_command=["npx", "@modelcontextprotocol/server-name"]
)
await client.connect()
# Use remote tools
tools = await client.list_tools()
result = await client.call_tool("remote_tool", {"arg": "value"})
Claude Desktop Integration
Add to claude_desktop_config.json:
{
"mcpServers": {
"ajentik": {
"command": "ajentik",
"args": ["mcp", "server"],
"env": {}
}
}
}
Monitoring & Alerts
from src.monitoring import monitor_operation, alert_manager
# Monitor operations (inside an async function)
with monitor_operation("critical_task", agent_name="MyAgent"):
result = await agent.process(data)
# Check alerts
alerts = alert_manager.check_alerts()
Development
Code Style
# Format code
black src/ tests/
# Lint code
ruff check src/ tests/
# Type checking
mypy src/
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
Architecture
The system follows a modular architecture:
- Agents: Core AI functionality with PydanticAI
- Models: Pydantic models for type safety
- CLI: Rich terminal interface
- Monitoring: Comprehensive observability
- Utils: Shared utilities and helpers
Performance
- Async-first design for optimal performance
- Connection pooling for API calls
- Intelligent caching mechanisms
- Retry logic with exponential backoff
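The retry behavior above can be sketched as a small async helper. This is an illustrative pattern, not the project's actual implementation; `with_retries` and its parameters are hypothetical names:

```python
# Retry an async callable with jittered exponential backoff:
# delays of base, 2*base, 4*base, ... plus a little random noise.
import asyncio
import random

async def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            await asyncio.sleep(delay)
```

Doubling the delay spreads retries out so a struggling provider is not hammered, and the jitter keeps many clients from retrying in lockstep.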
Security
- API keys stored securely in environment
- Input validation on all user inputs
- Safe code execution sandboxing
- Prompt injection protection
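To make the validation point concrete: a tool like the built-in calculator must never hand user input to `eval`. Below is a sketch of the allowlist style of validation such a tool could use (illustrative only; `safe_eval` is not the project's actual code):

```python
# Evaluate arithmetic expressions by walking the AST and rejecting
# anything outside an explicit allowlist (no names, calls, attributes).
import ast
import operator

_ALLOWED = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expression: str):
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED:
            return _ALLOWED[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _ALLOWED:
            return _ALLOWED[type(node.op)](_eval(node.operand))
        raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return _eval(ast.parse(expression, mode="eval"))
```

Anything not explicitly allowed is rejected, so inputs like `__import__('os')` fail validation instead of executing.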
Roadmap
- Comprehensive tool system with sandboxing
- Memory persistence with vector databases
- Multi-agent coordination
- Advanced reasoning patterns (CoT, ReAct)
- Plugin system for custom agents
- Web UI dashboard
- Deployment templates
License
MIT License - see LICENSE file for details.
Support
- Documentation: docs/
- Examples: examples/
- Issues: GitHub Issues
Acknowledgments
Built with:
- PydanticAI - AI agent framework
- Logfire - Observability platform
- Rich - Terminal formatting
- Click - CLI framework
