Arthor Agent
Automated security assessment for documents and questionnaires
What is Arthor Agent?
Arthor Agent is an AI-powered assistant for security teams. It automates the review of security-related documents, forms, and reports (e.g. Security Questionnaires, design docs, compliance evidence), compares them against your policy and knowledge base, and produces structured assessment reports with risks, compliance gaps, and remediation suggestions.
Agent Ready: Supports the Model Context Protocol (MCP), so Arthor Agent can be used as a "skill" by OpenClaw, Claude Desktop, and other autonomous agents.
- Multi-format input: PDF, Word, Excel, PPT, and text → parsed into a unified format for the LLM.
- Knowledge base (RAG): Upload policy and compliance documents; the agent uses them as reference when assessing.
- Multiple LLMs: Use OpenAI, Claude, Qwen, or Ollama (local) via a single interface.
- Structured output: JSON/Markdown reports with risk items, compliance gaps, and actionable remediations.
Ideal for enterprises that need to scale security assessments across many projects without proportionally scaling headcount.
Why Arthor Agent?
| Pain Point | Arthor Agent Solution |
|---|---|
| Fragmented criteria Policies, standards, and precedents are scattered. | Single knowledge base ensures consistent findings and traceability. |
| Heavy questionnaire workflow Business fills form → Security reviews → Business adds evidence → Security reviews again. | Automated first-pass and gap analysis reduces manual back-and-forth rounds. |
| Pre-release review pressure Security needs to review and sign off on technical docs before launch. | Structured reports help reviewers focus on decision-making, not line-by-line reading. |
| Scale vs. consistency Many projects and standards lead to inconsistent or delayed manual reviews. | Unified pipeline with configurable scenarios keeps assessments consistent and auditable. |
See the full problem statement and product goals in SPEC.md.
Architecture
Arthor Agent is built around an orchestrator that coordinates parsing, the knowledge base (RAG), skills, and the LLM. You can use cloud or local LLMs and optional integrations (e.g. AAD, ServiceNow) as your environment requires.
```mermaid
flowchart TB
    subgraph User["User / Security Staff"]
    end
    subgraph Access["Access Layer"]
        API["REST API / MCP"]
    end
    subgraph Core["Arthor Agent Core"]
        Orch["Orchestrator"]
        Mem["Memory"]
        Skill["Skills"]
        KB["Knowledge Base (RAG)"]
        Parser["Parser"]
    end
    subgraph LLM["LLM Layer"]
        Abst["LLM Abstraction"]
    end
    subgraph Backends["LLM Backends"]
        Cloud["OpenAI / Claude / Qwen"]
        Local["Ollama / vLLM"]
    end
    User --> API
    API --> Orch
    Orch <--> Mem
    Orch --> Skill
    Orch --> KB
    Orch --> Parser
    Orch --> Abst
    Abst --> Cloud
    Abst --> Local
```
Data flow (simplified):
- User uploads documents and selects scenario.
- Parser converts files (PDF, Word, Excel, PPT, etc.) to text/Markdown.
- Orchestrator loads KB chunks (RAG) and invokes Skills.
- LLM (OpenAI, Ollama, etc.) produces structured findings.
- Returns assessment report (risks, gaps, remediations).
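The flow above can be sketched in a few lines of Python. Everything here is illustrative: the function and field names are hypothetical stand-ins, not the actual Arthor Agent internals.

```python
# Illustrative sketch of the assessment pipeline described above.
# All names (parse_documents, retrieve_kb_chunks, ...) are hypothetical.

def parse_documents(files):
    # Step 2: convert each uploaded file (PDF, Word, ...) to text/Markdown.
    return [f"parsed:{name}" for name in files]

def retrieve_kb_chunks(query, top_k=5):
    # Step 3: pull the most relevant policy chunks from the knowledge base (RAG).
    return [f"policy chunk {i} for '{query}'" for i in range(top_k)]

def run_llm(documents, context):
    # Step 4: ask the configured LLM backend for structured findings.
    return {"risks": ["example risk"], "gaps": [], "remediations": []}

def assess(files, scenario_id="default"):
    documents = parse_documents(files)
    context = retrieve_kb_chunks(scenario_id)
    findings = run_llm(documents, context)
    # Step 5: wrap the findings into the assessment report.
    return {"scenario_id": scenario_id, **findings}

report = assess(["design-doc.pdf"])
```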
Detailed architecture: ARCHITECTURE.md and docs/01-architecture-and-tech-stack.md.
Features
| Area | Capabilities |
|---|---|
| Parsing | Word, PDF, Excel, PPT, Text → Markdown/JSON. |
| Knowledge Base | Multi-format upload, chunking, vectorization (Chroma), RAG query. |
| Assessment | Submit files → structured report (risks, gaps, remediations). |
| LLM | Configurable provider: Ollama (local), OpenAI, etc. |
| API | REST API & MCP Server for Agent integration. |
| Security | Built-in RBAC, Audit Logs, and Prompt Injection guards. |
| Integration | Supports MCP for OpenClaw, Claude Desktop, etc. |
Roadmap (e.g. AAD/SSO, ServiceNow integration) in SPEC.md.
Features at a Glance
1. Assessment Workbench
Upload documents, select a persona (e.g. SOC2 Auditor), and get instant risk analysis.

2. Structured Report
Clear view of Risks, Compliance Gaps, and Remediation Steps.

3. Knowledge Base Management
Upload policy documents to RAG. The agent cites these as evidence.

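Under the hood, uploaded policies are split into chunks before vectorization. A minimal fixed-size chunker with overlap might look like the sketch below; the real chunking strategy lives in app/kb/ and may differ (token-aware sizes, semantic boundaries, etc.).

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into overlapping character windows for embedding.

    Sketch only: parameters and strategy are assumptions, not the
    project's actual implementation.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# 500 characters with 200-char windows and 50-char overlap -> 3 chunks
chunks = chunk_text("A" * 500, chunk_size=200, overlap=50)
```

Overlap keeps a requirement that straddles a chunk boundary visible in at least one window, which matters when the agent cites chunks as evidence.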
Quick Start
Option A: One-Click Deployment (Recommended)
Run the deployment script to start the full stack (API + Dashboard + Vector DB + optional Ollama).
```bash
git clone https://github.com/arthurpanhku/Arthor-Agent.git
cd Arthor-Agent
chmod +x deploy.sh
./deploy.sh
```
- Dashboard: http://localhost:8501
- API Docs: http://localhost:8000/docs
Option B: Manual Setup (Python)
Prerequisites: Python 3.10+. Optional: Ollama (ollama pull llama2).
```bash
git clone https://github.com/arthurpanhku/Arthor-Agent.git
cd Arthor-Agent
python3 -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env        # Edit if needed: LLM_PROVIDER=ollama or openai
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```
- API docs: http://localhost:8000/docs · Health: http://localhost:8000/health
Example: submit an assessment
You can use the sample files in examples/ to try the API.
```bash
# Use sample file from repo
curl -X POST "http://localhost:8000/api/v1/assessments" \
  -F "files=@examples/sample.txt" \
  -F "scenario_id=default"
# Response: { "task_id": "...", "status": "accepted" }

# Get the result (replace TASK_ID with the returned task_id)
curl "http://localhost:8000/api/v1/assessments/TASK_ID"
```
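Because submission returns a task ID, a client typically polls until the assessment finishes. Here is a small polling helper; the fetch function is injected (and stubbed below) so the sketch runs without a live server, and the status values ("accepted", "running", "completed", "failed") are assumptions about the task lifecycle, not confirmed API behavior.

```python
import time

def wait_for_report(task_id, fetch, interval=0.0, max_polls=30):
    """Poll GET /api/v1/assessments/{task_id} until the task finishes.

    `fetch` is any callable returning the decoded JSON body; in real use
    it would wrap an HTTP GET. Status names here are assumptions.
    """
    for _ in range(max_polls):
        body = fetch(task_id)
        if body.get("status") in ("completed", "failed"):
            return body
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish after {max_polls} polls")

# Stubbed fetch that completes on the third poll, for demonstration only.
_responses = iter([
    {"status": "accepted"},
    {"status": "running"},
    {"status": "completed", "report": {"risks": []}},
])
result = wait_for_report("demo-task", lambda _tid: next(_responses))
```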
Example: upload to KB and query
```bash
# Use sample policy from repo
curl -X POST "http://localhost:8000/api/v1/kb/documents" -F "file=@examples/sample-policy.txt"

# Query the KB (RAG)
curl -X POST "http://localhost:8000/api/v1/kb/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What are the access control requirements?", "top_k": 5}'
```
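Conceptually, the `top_k` parameter caps how many of the best-matching chunks come back. This toy scorer uses word overlap purely for illustration; the real KB ranks chunks by Chroma vector similarity.

```python
def query_kb(chunks, query, top_k=5):
    # Toy relevance score: count of query words appearing in the chunk.
    # Real retrieval uses embedding similarity, not word overlap.
    words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

kb = [
    "Access control requires MFA for all admin accounts.",
    "Backups run nightly and are encrypted at rest.",
    "Access reviews happen quarterly for privileged roles.",
]
hits = query_kb(kb, "access control requirements", top_k=2)
```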
Project layout
```text
Arthor-Agent/
├── app/                          # Application code
│   ├── api/                      # REST routes: assessments, KB, health
│   ├── agent/                    # Orchestration & assessment pipeline
│   ├── core/                     # Configuration (pydantic-settings)
│   ├── kb/                       # Knowledge base (Chroma, chunking, RAG)
│   ├── llm/                      # LLM abstraction (OpenAI, Ollama)
│   ├── parser/                   # Document parsing (PDF, Word, Excel, PPT, text)
│   ├── models/                   # Pydantic models
│   └── main.py
├── tests/                        # Automated tests (pytest)
├── examples/                     # Sample files (questionnaires, policies)
├── docs/                         # Design & spec documentation
│   ├── 01-architecture-and-tech-stack.md
│   ├── 02-api-specification.yaml
│   ├── 03-assessment-report-and-skill-contract.md
│   ├── 04-integration-guide.md
│   ├── 05-deployment-runbook.md
│   └── schemas/
├── .github/                      # Issue/PR templates, CI (Actions)
├── Dockerfile
├── docker-compose.yml            # API only
├── docker-compose.ollama.yml     # API + optional Ollama
├── CONTRIBUTING.md               # Contribution guidelines
├── CODE_OF_CONDUCT.md            # Code of conduct
├── CHANGELOG.md
├── SPEC.md
├── LICENSE
├── SECURITY.md
├── requirements.txt
├── requirements-dev.txt          # Dev dependencies
├── pytest.ini
└── .env.example
```
Configuration
| Variable | Description | Default |
|---|---|---|
| LLM_PROVIDER | ollama or openai | ollama |
| OLLAMA_BASE_URL / OLLAMA_MODEL | Local LLM | http://localhost:11434 / llama2 |
| OPENAI_API_KEY / OPENAI_MODEL | OpenAI | – |
| CHROMA_PERSIST_DIR | Vector DB path | ./data/chroma |
| UPLOAD_MAX_FILE_SIZE_MB / UPLOAD_MAX_FILES | Upload limits | 50 / 10 |
See .env.example and docs/05-deployment-runbook.md for full options.
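The table above can be mirrored with a plain os.environ reader to show how the defaults apply (the project itself uses pydantic-settings in app/core/; the field names below are assumptions for illustration):

```python
import os

def load_settings(env=None):
    """Sketch of the documented settings and defaults.

    The real configuration uses pydantic-settings; field names here
    are illustrative, not the actual attribute names.
    """
    env = os.environ if env is None else env
    return {
        "llm_provider": env.get("LLM_PROVIDER", "ollama"),
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "ollama_model": env.get("OLLAMA_MODEL", "llama2"),
        "openai_api_key": env.get("OPENAI_API_KEY"),  # no default; needed for OpenAI
        "chroma_persist_dir": env.get("CHROMA_PERSIST_DIR", "./data/chroma"),
        "upload_max_file_size_mb": int(env.get("UPLOAD_MAX_FILE_SIZE_MB", "50")),
        "upload_max_files": int(env.get("UPLOAD_MAX_FILES", "10")),
    }

settings = load_settings({})  # empty env -> the documented defaults
```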
Documentation and PRD
- ARCHITECTURE.md – System architecture: high-level diagram, Mermaid views, component design, data flow, security.
- SPEC.md – Product requirements: problem statement, solution, features, security controls.
- CHANGELOG.md – Version history; Releases.
- Design docs in docs/: architecture, API spec (OpenAPI), contracts, integration guides (AAD, ServiceNow), deployment runbook. Q1 Launch Checklist: docs/LAUNCH-CHECKLIST.md.
Development & Testing
To verify your installation or contribute to the project, run the test suite:
Option A: One-Click Test (Recommended)
Automatically sets up a test environment and runs all checks.
```bash
chmod +x test_integration.sh
./test_integration.sh
```
Option B: Manual
```bash
# 1. Install dev dependencies
pip install -r requirements-dev.txt

# 2. Run all tests
pytest

# 3. Run a specific test file (e.g. the Skills API)
pytest tests/test_skills_api.py
```
Contributing
Issues and Pull Requests are welcome. Please read CONTRIBUTING.md for setup, tests, and commit guidelines. By participating you agree to the CODE_OF_CONDUCT.md.
AI-Assisted Contribution: We encourage using AI tools to contribute! Check out CONTRIBUTING_WITH_AI.md for best practices.
Submit a Skill Template: Have a great security persona? Submit a Skill Template or add it to examples/templates/. We welcome real-world (sanitized) security questionnaires to improve our templates!
Security
- Vulnerability reporting: See SECURITY.md for responsible disclosure.
- Security requirements: Follows the security controls in SPEC §7.2.
License
This project is licensed under the MIT License; see the LICENSE file for details.
Author and links
- Author: PAN CHAO (Arthur Pan)
- Repository: github.com/arthurpanhku/Arthor-Agent
- SPEC and design docs: See links above.
If you use Arthor Agent in your organization or contribute back, we'd love to hear from you (e.g. via GitHub Discussions or Issues).
