AI MicroCore: A Minimalistic Foundation for AI Applications
MicroCore is a collection of Python adapters for Large Language Models and Vector Databases / Semantic Search APIs that let you communicate with these services in a convenient way, switch them easily, and separate business logic from implementation details.
It defines interfaces for features typically used in AI applications, which allows you to keep your application as simple as possible and to try various models and services without changing your application code.
You can even switch between text completion and chat completion models using configuration alone.
Thanks to LLM-agnostic MCP integration, MicroCore can connect MCP tools to any language model, whether it is served by an API provider without MCP support, run locally with PyTorch, or implemented as an arbitrary Python function.
The basic example of usage is as follows:
from microcore import llm

while user_msg := input('Enter message: '):
    print('AI: ' + llm(user_msg))
💻 Installation
Install as a PyPI package:
pip install ai-microcore
Alternatively, you may just copy the microcore folder to your project sources root.
git clone git@github.com:Nayjest/ai-microcore.git && mv ai-microcore/microcore ./ && rm -rf ai-microcore
📋 Requirements
Python 3.10 / 3.11 / 3.12 / 3.13 / 3.14
⚙️ Configuring
Minimal Configuration
Having OPENAI_API_KEY in OS environment variables is enough for basic usage.
Similarity search features will work out of the box if you have the chromadb pip package installed.
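For example, with OPENAI_API_KEY exported in your shell, llm() works without any explicit configure() call (a minimal sketch):

from microcore import llm

print(llm('Hello!'))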
Configuration Methods
There are a few options available for configuring MicroCore:
- Use microcore.configure(**params)
  💡 All configuration options appear in IDE autocompletion tooltips
- Create a .env file in your project root; examples: basic.env, Mistral Large.env, Anthropic Claude 3 Opus.env, Gemini on Vertex AI.env, Gemini on AI Studio.env
- Use a custom configuration file: mc.configure(DOT_ENV_FILE='dev-config.ini')
- Define OS environment variables

For the full list of available configuration options, you may also check microcore/configuration.py.
Installing vendor-specific packages
For models that are not accessed via the OpenAI API, you may need to install additional packages:
Anthropic Claude
pip install anthropic
Google Gemini via AI Studio or Vertex AI
pip install google-genai
Local language models via Hugging Face Transformers
You will need to install transformers and a deep learning library of your choice (PyTorch, TensorFlow, Flax, etc.).
See transformers installation.
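For example, a common setup uses PyTorch as the backend (the backend choice is yours; see the transformers documentation for alternatives):
pip install transformers torch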
Priority of Configuration Sources
- Configuration options passed as arguments to microcore.configure() have the highest priority.
- Configuration file options (.env by default, or the file given in DOT_ENV_FILE) have higher priority than OS environment variables.
  💡 Setting USE_DOT_ENV to false disables reading configuration files.
- OS environment variables have the lowest priority.
Vector Databases
Vector database functions are available via microcore.texts.
ChromaDB
The default vector database is Chroma.
In order to use vector database functions with ChromaDB, you need to install the chromadb package:
pip install chromadb
By default, MicroCore will use ChromaDB PersistentClient (if the corresponding package is installed). Alternatively, you can run Chroma as a separate service and configure MicroCore to use HttpClient:
from microcore import configure
configure(
    EMBEDDING_DB_HOST='localhost',
    EMBEDDING_DB_PORT=8000,
)
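Once the package is installed (and optionally pointed at a remote Chroma server as above), texts are stored and queried through the texts functions documented under Core Functions below. A minimal sketch with an illustrative collection name:

from microcore import texts

texts.save('notes', 'MicroCore separates business logic from implementation details.')
print(texts.search('notes', 'What does MicroCore do?', n_results=1))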
Qdrant
In order to use vector database functions with Qdrant, you need to install the qdrant-client package:
pip install qdrant-client
Configuration example
from microcore import configure, EmbeddingDbType
from sentence_transformers import SentenceTransformer
configure(
    EMBEDDING_DB_TYPE=EmbeddingDbType.QDRANT,
    EMBEDDING_DB_HOST="localhost",
    EMBEDDING_DB_PORT="6333",
    EMBEDDING_DB_SIZE=384,  # number of dimensions in the SentenceTransformer model
    EMBEDDING_DB_FUNCTION=SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2"),
)
🌟 Core Functions
llm(prompt: str, **kwargs) → str
Performs a request to a large language model (LLM).
Asynchronous variant: allm(prompt: str, **kwargs)
from microcore import *
# Will print all requests and responses to console
use_logging()
# Basic usage
ai_response = llm('What is your model name?')
# You may also pass a list of strings as prompt
# - For chat completion models elements are treated as separate messages
# - For completion LLMs elements are treated as text lines
llm(['1+2', '='])
llm('1+2=', model='gpt-5.2')
# To specify a message role, you can use dictionary or classes
llm(dict(role='system', content='1+2='))
# equivalent
llm(SysMsg('1+2='))
# The returned value is a string
assert '7' == llm([
    SysMsg('You are a calculator'),
    UserMsg('1+2='),
    AssistantMsg('3'),
    UserMsg('3+4=')]
).strip()
# But it contains all fields of the LLM response in additional attributes
for i in llm('1+2=?', n=3, temperature=2).choices:
    print('RESPONSE:', i.message.content)
# To use response streaming you may specify the callback function:
llm('Hi there', callback=lambda x: print(x, end=''))
# Or multiple callbacks:
output = []
llm('Hi there', callbacks=[
    lambda x: print(x, end=''),
    lambda x: output.append(x),
])
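The asynchronous variant allm accepts the same arguments and is awaited inside an event loop; a minimal sketch:

import asyncio
from microcore import allm

async def main():
    print(await allm('1+2='))

asyncio.run(main())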
tpl(file_path, **params) → str
Renders prompt template with params.
Full-featured Jinja2 templates are used by default.
Related configuration options:
from microcore import configure
configure(
    # 'tpl' folder in current working directory by default
    PROMPT_TEMPLATES_PATH='my_templates_folder'
)
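For example, assuming a Jinja2 template named summarize.j2 exists in the templates folder (the file name and its variables here are illustrative), the rendered prompt can be passed straight to llm():

from microcore import tpl, llm

prompt = tpl('summarize.j2', text='Some long article text...', max_words=50)
print(llm(prompt))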
texts.search(collection: str, query: str | list, n_results: int = 5, where: dict = None, **kwargs) → list[str]
Similarity search
texts.find_one(collection: str, query: str | list) → str | None
Find most similar text
texts.get_all(collection: str) → list[str]
Return collection of texts
texts.save(collection: str, text: str, metadata: dict = None)
Store text and related metadata in embeddings database
texts.save_many(collection: str, items: list[tuple[str, dict] | str])
Store multiple texts and related metadata in the embeddings database
texts.clear(collection: str)
Clear collection
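A short sketch combining these functions; the collection name, texts, and metadata are illustrative:

from microcore import texts

# Store several texts, optionally paired with metadata
texts.save_many('docs', [
    ('MicroCore is a minimalistic foundation for AI applications.', {'lang': 'en'}),
    'Similarity search works out of the box with ChromaDB.',
])

# Similarity search with a metadata filter, then the single closest match
print(texts.search('docs', 'What is MicroCore?', n_results=2, where={'lang': 'en'}))
print(texts.find_one('docs', 'similarity search'))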
API providers and models support
AI MicroCore supports major API providers via various chat completion / text completion APIs.
Tested with the following services:
- OpenAI
- Anthropic (via Anthropic API and via OpenAI API)
- MistralAI
- Google AI Studio (via Google GenAI API and via OpenAI API)
- Google Vertex AI
- xAI
- Microsoft Azure
- Perplexity
- DeepSeek
- Cohere
- RunPod (via OpenAI API)
- Cerebras
- HuggingFace Inference API
- AI21 Studio
- Deep Infra
- Anyscale
- Groq
- Fireworks
- Together AI
- OpenRouter
- 01.AI
And more via Google / Anthropic / OpenAI API.
Supported local language model APIs:
- HuggingFace Transformers (see configuration examples here).
- Custom local models: provide your own function for chat / text completion, with sync or async inference.
🖼️ Examples
Code review tool
Performs an LLM code review of changes in git .patch files, in any programming language.
Image analysis (Google Colab)
Determine the number of petals and the color of the flower from a photo (gpt-4-turbo)
Benchmark LLMs on math problems (Kaggle Notebook)
Benchmarks the accuracy of 20+ state-of-the-art models on solving olympiad math problems; inference of local language models via HuggingFace Transformers, with parallel inference.
Generate meme image
Simple example demonstrating image generation using OpenAI GPT Image model.
Local inference with PyTorch / Transformers
Text generation using HF/Transformers model locally (example with Qwen 3 0.6B).
Other examples
📚 Guides & Reference
For more detailed information, check out these articles:
Python functions as AI tools
Usage Example:
from microcore.ai_func import ai_func

@ai_func
def search_products(
    query: str,
    category: str = "all",
    max_results: int = 10,
    in_stock_only: bool = False
):
    """
    Search for products in the catalog.

    Args:
        query: Search terms to find matching products
        category: Product category to filter by (e.g., "electronics", "clothing")
        max_results: Maximum number of results to return
        in_stock_only: If True, only return products currently in stock

    Returns:
        List of matching products with name, price, and availability
    """
    # Implementation would go here
    pass
Output:
# Search for products in the catalog.
Args:
    query: Search terms to find matching products
    category: Product category to filter by (e.g., "electronics", "clothing")
    max_results: Maximum number of results to return
    in_stock_only: If True, only return products currently in stock
Returns:
    List of matching products with name, price, and availability
{
    "call": "search_products",
    "query": <str>,
    "category": <str> (default = "all"),
    "max_results": <int> (default = 10),
    "in_stock_only": <bool> (default = False)
}
🤖 AI Modules
This is an experimental feature.
It tweaks the Python import system to provide automatic setup of the MicroCore environment based on metadata in module docstrings.
Usage:
import microcore.ai_modules
Features:
- Automatically registers template folders of AI modules in Jinja2 environment
🛠️ Contributing
Please see CONTRIBUTING for details.
📝 License
Licensed under the MIT License © 2023–2026 Vitalii Stepanenko