NeuroCore
Contract-first memory core for storing and retrieving policy-aware notes and documents.
NeuroCore is a Python package for capturing, storing, querying, and governing policy-aware memory artifacts. The current repository includes a working core library, a CLI entrypoint, FastAPI and MCP adapters, multiple storage backends, first-class brain/session/protocol surfaces, synthesized briefing and reporting flows, ingestion helpers for Slack and Discord payloads, and automated tests covering the main subsystem contracts.
The active architecture and scope contract lives in docs/ssd/.
Keep README-level guidance aligned with those documents when behavior changes.
Version: 0.1.0
Declared in pyproject.toml.
Overview
Main capabilities currently present in the repository:
- Capture notes and longer documents into record or document storage paths.
- Query stored content with metadata filters and optional semantic ranking.
- Manage first-class brain manifests and route product-layer memory through brain_id.
- Capture, checkpoint, and resume session memory for operator or agent workflows.
- Run reusable named protocols for review, audit, handoff, and triage flows.
- Generate synthesized briefings from durable memory for compact handoffs.
- Route storage to in-memory, SQLite, or Postgres-backed primary and sealed stores.
- Expose the same core behavior through library, CLI, HTTP, and MCP surfaces.
- Run a FastAPI-first reference app and MCP server through official CLI serve commands.
- Gate higher-risk or optional surfaces such as admin operations, dashboard views, background summarization, and multi-model consensus via configuration.
- Provide reporting helpers for building review and report workflows on top of query results.
- Validate repository metadata and scan for obvious secret-like values with a built-in governance checker.
- Provide ecosystem contribution surfaces for recipes, skills, integrations, dashboards, schemas, primitives, and curated extensions.
Repository Structure
.
├── src/neurocore/
│   ├── adapters/                        # CLI, FastAPI, and MCP adapter implementations
│   ├── core/                            # Config, shared content primitives, models, and policy validation
│   ├── governance/                      # Repository contract and secret-scan validator
│   ├── ingest/                          # Chunking plus ingest-specific compatibility helpers
│   ├── interfaces/                      # Public capture, query, brain, session, ingest, protocol, report, and dashboard APIs
│   ├── maintenance/                     # Store maintenance and migration helpers
│   ├── reporting/                       # Report context builders and consensus reporting helpers
│   ├── retrieval/                       # Query engine and rankers
│   ├── storage/                         # In-memory, SQLite, Postgres, and routed stores
│   ├── summarization/                   # Background and consensus summarization logic
│   └── runtime.py                       # Runtime factories for stores, rankers, summarizers
├── tests/                               # Pytest suite grouped by subsystem
├── scripts/                             # Local bootstrap and repo helper scripts
├── assets/screenshots/                  # README visuals
├── data/                                # Local runtime storage created by bootstrap
├── docs/ssd/                            # Architecture/specification source of truth
├── dashboards/                          # Dashboard contribution surface and templates
├── extensions/                          # Extension contribution surface and templates
├── integrations/                        # Starter integrations and templates
├── primitives/                          # Primitive building blocks and templates
├── recipes/                             # Runnable workflow recipes
├── schemas/                             # Metadata schemas and templates
├── skills/                              # Skill definitions and templates
├── .github/                             # CI workflow, PR template, metadata schema
├── .claude/commands/                    # AI-assisted slash-command prompts used in this repo
├── pyproject.toml                       # Packaging metadata, dependencies, tool config
├── Makefile                             # Convenience targets for lint, test, and validation
├── .env.example                         # Example environment variables
├── .env.reporting-provider.example      # External reporting provider example
├── .env.security-operator.example       # Security-oriented local profile
├── ingest-profiles.json.example         # Example source/channel ingest defaults
├── secrets.json.example                 # Local-only secret template
├── preferences.json.example             # Local-only preference template
└── CHANGELOG.md                         # Project change log
Installation Instructions
Prerequisites
- Python 3.11 or newer
- pip
- Optional: venv or another virtual environment tool
Quick Start
The fastest local onboarding path is the bootstrap script:
python scripts/bootstrap.py
This creates or reuses .venv, installs .[dev,semantic], writes a
security-oriented .env, copies the local-only config templates, creates
data/, runs pytest plus the repo validator, and prints a readiness summary
for semantic, query, and report support.
If you want a small guided flow for namespace and verification choices:
python scripts/bootstrap.py --wizard
Setup
- Clone the repository and change into it.
- Run the bootstrap script:
python scripts/bootstrap.py
- Activate the virtual environment and load the generated environment:
source .venv/bin/activate
set -a
source .env
set +a
- Optional: use the detailed manual path in docs/setup.md if you want to control each step yourself.
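If you want to inspect the generated .env from Python instead of sourcing it into your shell, a minimal parser sketch looks like this. This is illustrative only, not part of the package, and real .env files support more syntax (export prefixes, multi-line values) than it handles:

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = "# security profile\nNEUROCORE_DEFAULT_NAMESPACE=project-alpha\n"
print(parse_env_file(sample))  # {'NEUROCORE_DEFAULT_NAMESPACE': 'project-alpha'}
```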
Manual Setup
If you prefer a fully manual setup, the project still supports the documented step-by-step flow in docs/setup.md.
Optional Extras
The bootstrap already installs the semantic extra by default for local
workflows. If you are following the manual path and want the
sentence-transformers ranker:
python -m pip install -e ".[dev,semantic]"
Usage Guide
The checked-in runnable entrypoint is the neurocore CLI defined in
pyproject.toml. HTTP and MCP support are available as Python adapter
factories and through dedicated serve commands. The same CLI also exposes the
product-layer brain, session, and protocol surfaces described in
docs/ssd/architecture.md.
For repo checkouts, prefer the checkout-safe wrappers:
python scripts/neurocore_checkout.py --help
python scripts/validate_checkout.py
For security-focused local work, there is also a helper wrapper that reuses the
repo virtual environment, loads .env, and exposes shortcuts for notes, files,
papers, and hackingagent artifacts:
./.venv/bin/python scripts/security_workflow.py --help
Use ./.venv/bin/python scripts/security_workflow.py presets to list the built-in bug
bounty, pentest, paper-tracking, and agent-memory workflows.
Three local readiness tiers matter:
- query-ready: capture and retrieval work with your configured storage and semantic backend
- briefing-ready: synthesized briefings work from durable memory even when reporting is unavailable
- full report-ready: consensus reporting also works because the configured provider is live for the current invocation
Check the current state at any time:
./.venv/bin/python scripts/security_workflow.py capabilities
Inspect the CLI
neurocore --help
Generate a briefing
neurocore briefing --request-json '{"query_text":"recon status","allowed_buckets":["recon","findings"],"sensitivity_ceiling":"restricted"}'
Run structural quality checks
make sentrux
This runs the repo's checked-in Sentrux rules plus the saved structural
baseline from .sentrux/.
Capture a note
This example relies on the default namespace and sensitivity from your exported environment variables.
neurocore capture --request-json '{"bucket":"recon","content":"Initial recon note","content_format":"markdown","source_type":"note"}'
Query stored content
neurocore query --request-json '{"query_text":"recon","allowed_buckets":["recon","findings"],"sensitivity_ceiling":"restricted"}'
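Hand-writing --request-json payloads at a shell prompt is error-prone because of nested quoting. One way to sidestep that is to build the payload in Python and let json.dumps plus shlex.quote produce the shell-safe argument. The cli_arg helper here is an illustrative convenience, not a package API:

```python
import json
import shlex

def cli_arg(request: dict) -> str:
    """Serialize a request dict into a shell-safe --request-json argument."""
    return shlex.quote(json.dumps(request))

request = {
    "query_text": "recon",
    "allowed_buckets": ["recon", "findings"],
    "sensitivity_ceiling": "restricted",
}
print("neurocore query --request-json " + cli_arg(request))
```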
Use brain, session, and protocol surfaces
Create or upsert a brain manifest:
neurocore brain create --request-json '{"brain_id":"project-alpha","namespace":"project-alpha","display_name":"Project Alpha"}'
Store a high-signal session checkpoint:
neurocore session checkpoint --request-json '{"brain_id":"project-alpha","session_id":"sess-001","source_client":"codex","content":"Confirmed the auth bypass path and queued report prep.","importance":"high"}'
Inspect and run reusable protocols:
neurocore protocol list
neurocore protocol run --request-json '{"name":"project-review-v1","brain_id":"project-alpha"}'
Generate a consensus report
Consensus reporting must be enabled first with
NEUROCORE_ENABLE_MULTI_MODEL_CONSENSUS=true.
Recommended production profile:
- primary provider: DeepSeek via https://api.deepseek.com
- primary models: deepseek-v4-flash, deepseek-v4-pro
- fallback provider: OpenAI via https://api.openai.com/v1
- fallback model: gpt-5-mini
- consensus mode: claim_voting_with_judge for claim reconciliation plus fallback-provider judge review
Set this with the provider-aware environment variables:
NEUROCORE_REPORTING_STRATEGY=primary_with_fallback
NEUROCORE_REPORTING_CONSENSUS_MODE=lexical_select|claim_voting|claim_voting_with_judge
NEUROCORE_REPORTING_PRIMARY_PROVIDER=deepseek
NEUROCORE_REPORTING_FALLBACK_PROVIDER=openai
NEUROCORE_REPORTING_PROVIDER_DEEPSEEK_BASE_URL
NEUROCORE_REPORTING_PROVIDER_DEEPSEEK_API_KEY
NEUROCORE_REPORTING_PROVIDER_DEEPSEEK_MODELS
NEUROCORE_REPORTING_PROVIDER_OPENAI_BASE_URL
NEUROCORE_REPORTING_PROVIDER_OPENAI_API_KEY
NEUROCORE_REPORTING_PROVIDER_OPENAI_MODELS
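The intent of the primary_with_fallback strategy can be pictured with a small sketch: try the primary provider, and switch to the fallback only when the primary call fails. This is illustrative pseudologic under stated assumptions, not the package's actual implementation; the function and error types are stand-ins:

```python
def primary_with_fallback(primary, fallback, request):
    """Try the primary provider; on a provider error, use the fallback."""
    try:
        return primary(request)
    except RuntimeError:  # stand-in for a provider/transport failure
        return fallback(request)

def flaky_primary(request):
    raise RuntimeError("primary provider unreachable")

def stable_fallback(request):
    return {"provider": "fallback", "objective": request["objective"]}

print(primary_with_fallback(flaky_primary, stable_fallback,
                            {"objective": "pentest review"}))
# → {'provider': 'fallback', 'objective': 'pentest review'}
```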
For local mock development only, you can still use the legacy single-provider
settings and start the bundled mock provider with
./.venv/bin/python scripts/mock_openai_compatible.py.
For the machine-local external template, start from .env.reporting-provider.example.
neurocore report consensus --request-json '{"objective":"Generate a pentest review report.","query_request":{"query_text":"ssrf findings","allowed_buckets":["findings","reports"],"sensitivity_ceiling":"restricted"}}'
If consensus reporting is disabled or the provider is unavailable, the same
report path now returns a synthesized markdown briefing payload with
"mode":"fallback-briefing" instead of hard failing.
Run the dashboard-enabled reference app
Enable the HTTP adapter in your environment, then run the blessed serve path:
neurocore serve http --host 127.0.0.1 --port 8000
If you want the checkout-safe wrapper that loads the repo environment for you, use:
python scripts/neurocore_checkout.py serve http --host 127.0.0.1 --port 8000
Run the MCP server
Enable the MCP adapter in your environment, then run:
neurocore serve mcp --transport stdio
The checkout-safe wrapper is also available:
python scripts/neurocore_checkout.py serve mcp --transport stdio
Ingest an external event payload
neurocore ingest slack --request-json '{"type":"event_callback","team_id":"T123","event":{"type":"message","channel":"C123","user":"U123","text":"incident note","ts":"1713897900.000100"},"bucket":"ops"}'
The CLI also supports ingest discord.
Optional ingest profile defaults can be loaded from a JSON file by setting
NEUROCORE_INGEST_PROFILE_PATH=/path/to/ingest-profiles.json.
Run background summaries
Background summarization must be enabled in the environment first with
NEUROCORE_ENABLE_BACKGROUND_SUMMARIZATION=true.
neurocore summaries run --request-json '{"limit":10}'
Use admin commands
Admin operations are gated behind NEUROCORE_ENABLE_ADMIN_SURFACE=true.
neurocore admin reindex --request-json '{"ids":["rec-1"],"scope":"records"}'
Audit stored memory for secret-like values and review non-mutating remediation candidates:
neurocore admin audit --request-json '{"namespace":"project-alpha","allowed_buckets":["research"]}'
Use the adapter factories from Python
FastAPI app factory:
from neurocore.adapters.http_api import create_app
app = create_app()
MCP server factory:
from neurocore.adapters.mcp_server import create_mcp_server
server = create_mcp_server()
Configuration
Required configuration values:
NEUROCORE_DEFAULT_NAMESPACE
NEUROCORE_ALLOWED_BUCKETS
NEUROCORE_DEFAULT_SENSITIVITY
Common optional settings:
NEUROCORE_STORAGE_BACKEND=in_memory|sqlite|postgres
NEUROCORE_PRIMARY_STORE_PATH
NEUROCORE_SEALED_STORE_PATH
NEUROCORE_SEMANTIC_BACKEND=none|sentence-transformers
NEUROCORE_SEMANTIC_MODEL_NAME
NEUROCORE_DEFAULT_TOP_K
NEUROCORE_INGEST_PROFILE_PATH
NEUROCORE_ALLOW_HARD_DELETE=true|false
NEUROCORE_ENABLE_CLI_ADAPTER=true|false
NEUROCORE_ENABLE_HTTP_ADAPTER=true|false
NEUROCORE_ENABLE_MCP_ADAPTER=true|false
NEUROCORE_ENABLE_ADMIN_SURFACE=true|false
NEUROCORE_ENABLE_DASHBOARD=true|false
NEUROCORE_ENABLE_BACKGROUND_SUMMARIZATION=true|false
NEUROCORE_ENABLE_MULTI_MODEL_CONSENSUS=true|false
NEUROCORE_CONSENSUS_PROVIDER=none|openai_compatible
NEUROCORE_CONSENSUS_MODEL_NAMES=model-a,model-b
NEUROCORE_CONSENSUS_BASE_URL
NEUROCORE_CONSENSUS_API_KEY
NEUROCORE_REPORTING_STRATEGY=single_provider|primary_with_fallback
NEUROCORE_REPORTING_CONSENSUS_MODE=lexical_select|claim_voting|claim_voting_with_judge
NEUROCORE_REPORTING_PRIMARY_PROVIDER
NEUROCORE_REPORTING_FALLBACK_PROVIDER
NEUROCORE_REPORTING_PROVIDER_<NAME>_TYPE=openai_compatible
NEUROCORE_REPORTING_PROVIDER_<NAME>_BASE_URL
NEUROCORE_REPORTING_PROVIDER_<NAME>_API_KEY
NEUROCORE_REPORTING_PROVIDER_<NAME>_MODELS=model-a,model-b
NEUROCORE_PRODUCTION_BACKEND_PROVIDER=none|neon
NEUROCORE_PRODUCTION_DATABASE_URL
NEUROCORE_PRODUCTION_SEALED_DATABASE_URL
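Before running the CLI, it can be useful to sanity-check that the three required settings are present in the environment. The helper below is an illustrative check, not the package's own config loader:

```python
import os

REQUIRED = (
    "NEUROCORE_DEFAULT_NAMESPACE",
    "NEUROCORE_ALLOWED_BUCKETS",
    "NEUROCORE_DEFAULT_SENSITIVITY",
)

def missing_required(env: dict) -> list[str]:
    """Return the required NeuroCore settings that are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]

print(missing_required(dict(os.environ)))
```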
For the full generic environment template, see .env.example. For the security-oriented bootstrap profile, see .env.security-operator.example. For setup, security, and troubleshooting details, see the docs linked below.
Setup And Validation
Bootstrap commands:
python scripts/bootstrap.py
python scripts/bootstrap.py --wizard
Validation commands:
make lint
make test
make validate
python scripts/validate_checkout.py
make sentrux
The GitHub Actions workflow in .github/workflows/repo-gate.yml runs those same
checks across Python 3.11, 3.12, and 3.13.
Documentation
- SSD Architecture
- SSD Specification
- SSD Source Matrix
- Setup Guide
- Reference Stack
- Hosted Stack
- Security Guide
- Security Workflows
- Troubleshooting
- AI-Assisted Setup
- Contributing Guide
Ecosystem Surfaces
Runnable workflow recipes live in recipes/, with additional contribution surfaces under skills/, integrations/, dashboards/, schemas/, primitives/, and extensions/.
Demo Screenshots
Publication preview and dashboard mock screenshots are under assets/screenshots/.
Security
- Do not commit .env, secrets.json, preferences.json, token.json, or real database URLs.
- Treat secrets.json.example and preferences.json.example as local-only templates.
- Run python scripts/validate_checkout.py before publishing changes.
Contributing
See CONTRIBUTING.md for workflow expectations. In short: review the SSD docs first, keep implementation and contracts aligned, and run the validation commands before opening a PR.
License
This project is licensed under the MIT License.
