KrAIna - AI-Powered Tools for Everyday Use
KrAIna provides standalone AI-powered tools for everyday use with OpenAI, Azure OpenAI, Anthropic, Amazon Bedrock, Google Gemini LLMs or Ollama.
Standalone Applications
KrAIna consists of two main standalone executables built with PyInstaller:
kraina_app - Chat GUI Application

A modern Chat GUI application built with tkinter featuring:
- Interactive Chat Interface with HTML and text tabs
- Assistant Management - Switch between different AI assistants
- Snippet Integration - Transform text with right-click context menu
- Macro Execution - Run Python automation scripts
- Chat History - Auto-named conversations with management features
- Multi-theme Support - Light/Dark and other built-in themes
- Image Support - Drag & drop images, text-to-image generation
- Markdown/HTML Rendering - Supports Mermaid graphs and LaTeX expressions
- Token Estimation - Live token usage tracking
- Export Features - Save chats as HTML, PDF, or text files
- Debug Window - Application logs and troubleshooting
- IPC - Control application from Python scripts or from kraina_cli
- Drag & Drop - Drag & drop files to chat
- MCP Tools Integration - Model Context Protocol tools for enhanced AI capabilities
- LangGraph Support - Advanced agent workflows and multi-step reasoning
kraina_cli - Command Line Interface
A fast, small CLI tool for interacting with kraina_app via IPC:
usage: kraina_cli command
KraIna chat application.
Commands:
SHOW_APP - Trigger to display the application
HIDE_APP - Trigger to minimize the application
GET_LIST_OF_SNIPPETS - Get list of snippets
RUN_SNIPPET - Run snippet 'name' with 'text'
RUN_SNIPPET_WITH_FILE - Run snippet 'name' with 'file'
RELOAD_CHAT_LIST - Reload chat list
SELECT_CHAT - Select conv_id chat
DEL_CHAT - Delete conv_id chat
No argument - run the GUI app. If the app is already running, show it
options:
-h, --help show this help message and exit
Installation & Usage
End User Installation (Recommended)
- Download the latest release containing the kraina_app and kraina_cli executables.
- First Run - ./kraina_app will start with default settings and the .env and config.yaml files will be created.
- Configure API Keys - edit the generated .env file:
  # OpenAI
  OPENAI_API_KEY=sk-...
  # Azure OpenAI
  AZURE_OPENAI_ENDPOINT=https://...
  AZURE_OPENAI_API_KEY=...
  OPENAI_API_VERSION=2024-02-01
  # Anthropic
  ANTHROPIC_API_KEY=...
  # AWS Bedrock
  AWS_DEFAULT_REGION=us-east-1
  AWS_ACCESS_KEY_ID=...
  AWS_SECRET_ACCESS_KEY=...
  # Google Gemini
  GOOGLE_API_KEY=...
  # Ollama (optional - leave empty for local server)
  OLLAMA_ENDPOINT=http://server:11434
  # MCP Tools (optional)
  FIRECRAWL_API_KEY=...
- Edit the config.yaml file to configure it for your needs:
  llm:
    force_api_for_snippets:
    # force api: azure, openai, aws, anthropic, ollama to be used by snippets
    # when empty or null or not exists, kraina_app api_type is used
    # priority of usage: force_api (from snippet) -> force_api_for_snippets -> kraina_app api_type
    map_model:
      # Map model aliases to actual models per provider
      azure:
        A: gpt-4o
        B: gpt-4o-mini
        embed: text-embedding-ada-002
      openai:
        A: gpt-4o
        B: gpt-4o-mini
        embed: text-embedding-ada-002
      # ... other providers
  chat:
    default_assistant: samantha
    visible_last_chats: 10
    editor: subl # External editor command
  tools:
    text-to-image:
      model: dall-e-3
    vector-search:
      model: embed
    context7:
      type: mcp
      url: https://mcp.context7.com/mcp
      transport: streamable_http
    firecrawl:
      type: mcp
      command: npx
      args:
        - -y
        - firecrawl-mcp
      env:
        FIRECRAWL_API_KEY: "{env-FIRECRAWL_API_KEY}"
      include_tools:
        - firecrawl_scrape
  assistants:
    samantha:
      tools:
        - text-to-image
        - vector-search
        - audio-to-text
        - text-to-text
        - brave_web
        - joplin-search
        - file_mgmt
        - context7
        - firecrawl
The files are automatically reloaded; there is no need to restart the app.
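How KrAIna implements this auto-reload is not documented here; a common pattern is polling the file's modification time and re-reading it when it changes. A minimal sketch of that pattern (ReloadWatcher is a hypothetical class, not KrAIna's actual code):

```python
import os
import time

class ReloadWatcher:
    """Reload a file's contents whenever its modification time changes."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = 0.0
        self.content = None

    def poll(self) -> bool:
        """Re-read the file if it changed since the last poll.

        Returns True when a reload happened, False otherwise.
        """
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            self._mtime = mtime
            with open(self.path) as f:
                self.content = f.read()
            return True
        return False
```

Calling `poll()` periodically (e.g. from a GUI event loop) picks up edits made in an external editor without restarting the application.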
On Linux you can add shortcuts to the application by creating a desktop file:
[Desktop Entry]
Encoding=UTF-8
Name=krAIna
Comment=KrAIna
Exec=<path_to_kraina_app>
Icon=<get icon from https://github.com/Bumshakalaka/krAIna/blob/main/img/logo.png and refer to it here>
Type=Application
Categories=Office;
StartupWMClass=Tk
Save it to ~/.local/share/applications/kraina.desktop .
User Extensibility
Create your own components alongside the executables without modifying core code:
your_kraina_deployment/
├── kraina_app          # Main GUI application
├── kraina_cli          # CLI tool
├── .env                # Your API keys
├── config.yaml         # Configuration
├── snippets/           # Your custom snippets
│   └── solver/
│       ├── prompt.md
│       └── config.yaml
├── assistants/         # Your custom assistants
│   └── bob/
│       ├── prompt.md
│       └── config.yaml
└── macros/             # Your custom macros
    └── my_macro.py
Custom assistants and snippets are automatically available in the chat interface.
Core Concepts
Snippets
Actions that transform selected text using AI. Perfect for:
- Text Translation - Translate between languages
- Code Documentation - Generate docstrings
- Text Improvement - Fix grammar and style
- Git Commits - Generate commit messages
Built-in Snippets
KrAIna includes these ready-to-use snippets:
- code - Write a Python function or method
- commit - Generate conventional commit messages from git diffs
- docstring - Create Python docstrings in reStructuredText format
- doit - Direct task execution without commentary or explanations
- fix_text - Improve grammar, spelling, and readability using proven techniques
- nameit - Generate concise names and descriptions for chat logs (JSON output)
- ocr - Extract text from images and screenshots with markdown formatting
- solve - Problem-solving with direct, focused answers
- summary - Compress and summarize text content while preserving key facts
- translate - Bidirectional Polish-English translation
Custom Snippets
Snippet Structure:
snippets/my_snippet/
├── prompt.md        # System prompt (required)
├── config.yaml      # LLM settings (optional)
└── custom_logic.py  # Override behavior (optional)
Configuration Example:
force_api: openai
model: gpt-4o
temperature: 0.5
max_tokens: 512
contexts:
  string: "Always respond in professional tone"
  file:
    - ./examples.txt
    - ./context.md
Check out the solver example snippet.
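The `force_api` resolution order documented in the global configuration (the snippet's own `force_api`, then `force_api_for_snippets`, then the app's current api_type) amounts to a simple fallback chain. A sketch of that rule (resolve_api is a hypothetical helper, not KrAIna's actual code):

```python
from typing import Optional

def resolve_api(snippet_force_api: Optional[str],
                force_api_for_snippets: Optional[str],
                app_api_type: str) -> str:
    """Pick the LLM API for a snippet call.

    Priority: the snippet's own force_api, then the global
    force_api_for_snippets, then the api_type selected in kraina_app.
    """
    return snippet_force_api or force_api_for_snippets or app_api_type
```

For example, a snippet with `force_api: openai` uses OpenAI even when the app is currently running against Azure.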
Using Tools in Snippets
Snippets can attach the same LangChain and MCP tools available to assistants by listing them in their local snippets/<name>/config.yaml file. Tools are initialized on first use and reused for subsequent calls, while snippets still return a single final response (no streaming telemetry).
# snippets/my_snippet/config.yaml
model: gpt-4o
max_tokens: 512
tools:
  - text-to-image
  - vector-search
  - brave_web
Only tool names registered under kraina.tools are supported. Invalid entries raise a configuration error when the snippet is loaded.
Assistants
Specialized AI personas for different tasks with enhanced tool integration:
- Conversational Memory - Remember chat history
- Tool Integration - Use built-in, custom, and MCP tools
- Context Awareness - Include custom knowledge
- Flexible Configuration - Customize behavior per assistant
- LangGraph Support - Advanced multi-step reasoning and workflows
- Token Tracking - Monitor usage across all tool types
Built-in Assistants
KrAIna includes these specialized assistants:
- samantha - General-purpose assistant with full tool access including MCP tools (text-to-image, vector-search, web search, audio transcription, file management, context7, firecrawl, and more)
- kodi - Professional software development specialist focused on coding best practices, debugging, and optimization
- promcreat - Prompt engineering expert that helps create, modify, and enhance system prompts using proven techniques
Custom Assistants
Assistant Structure:
assistants/my_assistant/
├── prompt.md    # System prompt
└── config.yaml  # Configuration
Configuration Example:
model: gpt-4o
temperature: 0.7
tools:
  - text-to-image
  - vector-search
  - web-search
  - context7
  - firecrawl
contexts:
  string: "You are a helpful coding assistant"
  file: ./knowledge_base.md
Check out the bob example assistant.
Macros
Python scripts for complex AI-powered workflows:
- Agent-like Behavior - Multi-step AI interactions
- Custom Logic - Combine multiple tools and models
- GUI Integration - Run from chat interface
- Automation Ready - Perfect for repetitive tasks

Macro Structure:
def run(topic: str, depth: str = "basic") -> str:
    """Generate comprehensive overview of a topic.

    Args:
        topic: Topic to research
        depth: Detail level (basic/detailed/expert)
    """
    # Your implementation here
    return result
Check out the topic_overview.py example macro.
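A macro is simply a Python module exposing a `run()` entry point with typed arguments, which the chat GUI turns into an input form. The sketch below is self-contained so it runs anywhere; a real macro would typically call into `kraina.assistants` or `kraina.snippets` instead of assembling text locally:

```python
def run(topic: str, depth: str = "basic") -> str:
    """Produce a skeleton overview for a topic.

    Args:
        topic: Topic to outline.
        depth: Detail level (basic/detailed/expert).
    """
    # In a real macro this is where you would chain assistant/snippet
    # calls; here we only build a deterministic outline.
    sections = {
        "basic": ["Summary"],
        "detailed": ["Summary", "Key Concepts", "Examples"],
        "expert": ["Summary", "Key Concepts", "Examples", "Open Problems"],
    }
    if depth not in sections:
        raise ValueError(f"unknown depth: {depth!r}")
    body = "\n".join(f"## {s}" for s in sections[depth])
    return f"# {topic}\n{body}"
```

Because `run()` is an ordinary function, macros can also be exercised from plain Python scripts or unit tests before wiring them into the GUI.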
Built-in Tools
Tools that can be used by Assistants to extend their capabilities. Tools are attached to assistants by name in the config.yaml file.
Text-to-Image
Generate images using DALL-E API.
tools:
  text-to-image:
    model: dall-e-3 # dall-e-2 or dall-e-3
Vector Search
Semantic search through documents. The user uploads a document to an in-memory vector database and then queries it with a specific question.
The tool uses LangChain document loaders and in-memory vector storage to process local files. A file is split and embedded only once, and the vector database is dumped to a local file (located in .store_files), so subsequent queries against the same file do not require reprocessing.
tools:
  vector-search:
    model: embed
Supported formats: PDF, TXT, LOG, CSV, MD
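The store-once behavior described above (embed a file a single time, persist the index, reuse it on later queries) follows a content-addressed caching pattern. A minimal sketch under that assumption, where `embed_file` is a hypothetical callable standing in for the real LangChain loading/embedding step:

```python
import hashlib
import os
import pickle

def load_or_build_index(path: str, store_dir: str, embed_file) -> object:
    """Return a cached index for `path`, building and persisting it only once.

    `embed_file` is a hypothetical callable turning a file into an index;
    the cache key is the SHA-256 of the file's contents, so an edited file
    is re-embedded while an unchanged one is served from the dump.
    """
    os.makedirs(store_dir, exist_ok=True)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    cache_path = os.path.join(store_dir, digest + ".pkl")
    if os.path.exists(cache_path):      # already embedded: reuse the dump
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    index = embed_file(path)            # expensive step, performed once
    with open(cache_path, "wb") as f:
        pickle.dump(index, f)
    return index
```

Hashing the contents (rather than the filename) means moving or renaming a document does not trigger a re-embed.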
Audio-to-Text
Transcribe audio files using Whisper.
tools:
  audio-to-text:
    model: whisper-1
Image Analysis
Analyze and interpret images.
- Object Detection
- Scene Understanding
- Content Extraction
Text-to-Text
Process text files and web content.
- File Reading
- Web Content Extraction
- Format Conversion
Joplin Search
Search through Joplin notes (requires API key). An in-memory vector database is created from all notes in the Joplin database and can then be queried with a specific question.
tools:
  joplin-search:
    model: embed
Web Search (Brave)
Search the web for current information.
tools:
  brave_web:
    count: 3 # Number of results
MCP Tools Integration
KrAIna now supports Model Context Protocol (MCP) tools for enhanced AI capabilities:
Add your own MCP tools by configuring them in the tools section:
tools:
  tool_stdio:
    type: mcp
    command: your_mcp_command
    args: [arg1, arg2, ...]
    env:
      API_KEY: "{env-API_KEY}"
    include_tools:
      # list of tools to attach to assistants
      # if not specified, all tools are attached
      - tool_name1
      - tool_name2
      - ...
  tool_remote_server:
    type: mcp
    url: https://x.server/mcp
    # transport is optional - it will be set based on url
    transport: streamable_http
    include_tools:
      # list of tools to attach to assistants
      # if not specified, all tools are attached
      - tool_name1
      - tool_name2
      - ...
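The `include_tools` key acts as an allow-list over the tools an MCP server exposes: when it is omitted, every server tool is attached. A sketch of that filtering rule (filter_tools is a hypothetical helper, not KrAIna's actual code):

```python
from typing import Iterable, List, Optional

def filter_tools(available: Iterable[str],
                 include_tools: Optional[List[str]] = None) -> List[str]:
    """Apply the include_tools allow-list from an MCP tool config.

    When include_tools is missing/None, all server tools are attached;
    otherwise only the listed names are kept, in server order.
    """
    if include_tools is None:
        return list(available)
    wanted = set(include_tools)
    return [name for name in available if name in wanted]
```

For example, the firecrawl configuration shown earlier with `include_tools: [firecrawl_scrape]` would attach only that one tool even if the server advertises several.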
CopyQ Integration
Boost productivity with clipboard-based AI transformations:
Setup
- Install CopyQ. On Linux, an X11 window manager is required.
- Import the custom actions from the copyQ/ directory:
  - ai_select.ini - Transform selected text (ALT+SHIFT+1)
  - kraina_run.ini - Show/hide KrAIna (ALT+SHIFT+~)
  - toggle.ini - Show/hide CopyQ (CTRL+~)
More info in copyQ/README.md
Usage

- Select text in any application
- Press ALT+SHIFT+1
- Choose snippet (translate, fix, docstring, etc.)
- Press ENTER - transformed text replaces selection
Configuration
Global settings in config.yaml:
llm:
  force_api_for_snippets:
  # force api: azure, openai, aws, anthropic, ollama to be used by snippets
  # when empty or null or not exists, kraina_app api_type is used
  # priority of usage: force_api (from snippet) -> force_api_for_snippets -> kraina_app api_type
  map_model:
    # Map model aliases to actual models per provider
    azure:
      A: gpt-4o
      B: gpt-4o-mini
      embed: text-embedding-ada-002
    openai:
      A: gpt-4o
      B: gpt-4o-mini
      embed: text-embedding-ada-002
    # ... other providers
chat:
  default_assistant: samantha
  visible_last_chats: 10
  editor: subl # External editor command
tools:
  text-to-image:
    model: dall-e-3
  vector-search:
    model: embed
  context7:
    # MCP tool configuration,
    # example context7 http-streamable server
    type: mcp
    url: https://mcp.context7.com/mcp
  firecrawl:
    # MCP tool configuration
    # example firecrawl stdio server
    type: mcp
    command: npx
    args:
      - -y
      - firecrawl-mcp
    env:
      # the {env-VAR_NAME} is replaced with the value of the VAR_NAME environment variable
      FIRECRAWL_API_KEY: "{env-FIRECRAWL_API_KEY}"
    include_tools:
      - firecrawl_scrape
assistants:
  # configuration of assistant tools
  samantha:
    # assistant name - built-in or custom
    tools:
      # list of tools to use by the assistant
      - text-to-image
      - vector-search
      - audio-to-text
      - text-to-text
      - brave_web
      - joplin-search
      - file_mgmt
      - context7
      - firecrawl
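The `{env-VAR_NAME}` placeholder noted in the config comments is expanded from the process environment. A sketch of that substitution (expand_env_placeholders is a hypothetical helper; KrAIna's actual expansion logic may differ, e.g. in how unset variables are handled):

```python
import os
import re

# Matches {env-VAR_NAME} where VAR_NAME is a valid environment variable name.
_PLACEHOLDER = re.compile(r"\{env-([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env_placeholders(value: str, env=os.environ) -> str:
    """Replace every {env-VAR_NAME} occurrence with the variable's value.

    In this sketch, unset variables expand to empty strings rather
    than raising an error.
    """
    return _PLACEHOLDER.sub(lambda m: env.get(m.group(1), ""), value)
```

With `FIRECRAWL_API_KEY=abc123` in the environment, `"{env-FIRECRAWL_API_KEY}"` would expand to `"abc123"` before being handed to the MCP server process.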
LangFuse Integration
Monitor and analyze AI usage with LangFuse:
# Add to .env file
LANGFUSE_PUBLIC_KEY=pk-...
LANGFUSE_SECRET_KEY=sk-...
LANGFUSE_HOST=https://cloud.langfuse.com
Developer Documentation
Development Installation
Requirements:
- Python >= 3.10, < 3.13 with Tk support (tkinter)
- Python venv package
- Git
Setup:
# Clone and setup
git clone <repository>
cd krAIna/
## Linux
## to compile python-dbus, the libdbus-1-dev package is required
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
## Windows
python -m venv .venv
.venv\Scripts\activate
pip install -e .
# Run from source
python app/kraina_app.py
python app/kraina_cli.py --help
Building Standalone Executables
# Linux
./build_standalone.sh
# Windows
./build_standalone.bat
Build Output:
- dist/kraina_app - GUI executable (~110MB) for Linux
- dist/kraina_app.exe - GUI executable (~90MB) for Windows
- dist/kraina_cli - CLI executable (~11MB) for Linux
- dist/kraina_cli.exe - CLI executable (~9MB) for Windows
- Self-contained with all dependencies
- No Python installation required on target systems
Project Structure
krAIna/
├── app/                  # Standalone app entry points
│   ├── kraina_app.py     # GUI application
│   └── kraina_cli.py     # CLI tool
├── src/kraina/           # Core library
│   ├── assistants/       # Built-in assistants
│   ├── snippets/         # Built-in snippets
│   ├── tools/            # Built-in tools
│   ├── macros/           # Macro system
│   └── libs/             # Supporting libraries
├── src/kraina_chat/      # GUI implementation
└── dist/                 # Built executables
Scripting & API
Snippet Usage
from dotenv import load_dotenv, find_dotenv
from kraina.snippets.base import Snippets
load_dotenv(find_dotenv())
snippets = Snippets()
action = snippets["fix"]
result = action.run("I'd like to speak something interest")
print(result) # "I'd like to say something interesting"
Assistant Usage
from kraina.assistants.base import Assistants
assistants = Assistants()
action = assistants["samantha"]
# One-shot (no memory)
result = action.run("What is Python?", use_db=False)
# With conversation memory
first = action.run("My name is Paul")
second = action.run("What's my name?", conv_id=first.conv_id)
Image Processing
from kraina.libs.utils import convert_llm_response, convert_user_query
# Text-to-image
llm = assistants["samantha"] # Assistant with text-to-image tool
result = llm.run("generate image of a cat", use_db=False)
# Save base64 data URL to file
print(convert_llm_response(result.content))
# Image-to-text
result = llm.run(
    convert_user_query("Analyze this image: "),
    use_db=False,
)
Tool Usage
from kraina.tools.text_to_image import text_to_image
from kraina.libs.utils import convert_llm_response
result = text_to_image("Red LEGO tiger", "SMALL_SQUARE")
print(convert_llm_response(result)) # Saves and returns file path
Pydantic Output
from pydantic import BaseModel
class NameIt(BaseModel):
    name: str
    description: str
snippets = Snippets()
nameit = snippets["nameit"]
nameit.pydantic_output = NameIt
result = nameit.run("Your conversation text here") # Returns NameIt instance
print(result.name) # Extracted name
print(result.description) # Extracted description
Chat Interface (IPC)
from kraina_chat.cli import ChatInterface
chat = ChatInterface(silent=True)
chat("SHOW_APP") # Show application
chat("RELOAD_CHAT_LIST") # Refresh chat list
chat("SELECT_CHAT", conv_id) # Select conversation
License
MIT License - see LICENSE file for details.
