Wordmark
Open-source chatbot web app for the Responses API
Overview
Wordmark is a client-side AI chat app for the OpenAI and xAI Responses APIs and for local LM Studio or Ollama servers. It supports tool/function calling, TTS, themes, and fully local storage – no backend required.
Docs:
- Getting Started
- Overview
- Services & Models
- Tool Calling
- Streaming
- Memory
- Security
- Storage
- UI & UX
- Docker
- Troubleshooting
Features
- Providers – OpenAI Responses (hosted), xAI Grok (Responses-compatible), and local LM Studio or Ollama servers (Services & Models)
- Tool calling – built-in weather, provider web + X search, Code Interpreter, image generation, file search (OpenAI), direct file attachments (xAI), and custom MCP servers (Tool Calling)
- Streaming & reasoning – dedicated reasoning panel, rich tool timelines, inline code previews, automatic image capture (Streaming)
- TTS – OpenAI (13 voices) and xAI (5 voices) providers, optional autoplay, per-message controls, audio cached locally
- UX – themes, responsive layout, syntax highlighting, markdown, image gallery (UI & UX)
- Local-only storage – conversations, images, and audio via IndexedDB; keys stay in the browser (Storage)
- Memory – local, FIFO-limited memories appended to the system prompt (Memory)
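The FIFO-limited memory feature can be illustrated with a small sketch. The names here (`MemoryStore`, `maxEntries`) are hypothetical, not Wordmark's actual implementation; the sketch only shows the idea the README describes: when the limit is reached the oldest memory is evicted, and the survivors are appended to the system prompt.

```javascript
// Illustrative FIFO memory store (hypothetical names, not Wordmark's code).
class MemoryStore {
  constructor(maxEntries = 5) {
    this.maxEntries = maxEntries;
    this.entries = [];
  }

  // Add a memory; once the limit is exceeded, evict the oldest entry.
  add(text) {
    this.entries.push(text);
    if (this.entries.length > this.maxEntries) {
      this.entries.shift(); // FIFO eviction
    }
  }

  // Append the stored memories to a base system prompt.
  toSystemPrompt(basePrompt) {
    if (this.entries.length === 0) return basePrompt;
    return (
      basePrompt +
      "\n\nMemories:\n" +
      this.entries.map((m) => `- ${m}`).join("\n")
    );
  }
}

const store = new MemoryStore(2);
store.add("User prefers metric units");
store.add("User lives in Berlin");
store.add("User likes dark themes"); // evicts the oldest memory
console.log(store.toSystemPrompt("You are a helpful assistant."));
```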
Quick Start
git clone https://github.com/h1ddenpr0cess20/Wordmark.git
cd Wordmark
Open index.html directly, or serve over HTTPS for APIs, TTS, and geolocation (see Getting Started).
- In Settings → API Keys, add your OpenAI/xAI keys. Keys and URLs are stored locally.
- Choose a provider and model in Settings → Model.
- Type a message and send.
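Under the hood, each chat turn against the Responses API boils down to a POST with a model and input. The sketch below only builds the request (no network call is made); the endpoint and field names follow OpenAI's public Responses API, the helper name is hypothetical, and the key is a placeholder.

```javascript
// Sketch: assemble (but don't send) a Responses API request, roughly the
// kind of call a client-side app like Wordmark makes per chat turn.
function buildResponsesRequest(apiKey, model, userMessage, instructions) {
  return {
    url: "https://api.openai.com/v1/responses",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,              // e.g. "gpt-4o-mini"
        instructions,       // system prompt
        input: userMessage, // the user's message
        stream: true,       // Wordmark streams responses
      }),
    },
  };
}

const req = buildResponsesRequest(
  "sk-...", // your API key from Settings → API Keys
  "gpt-4o-mini",
  "Hello!",
  "You are a helpful assistant."
);
// Then send it with: fetch(req.url, req.options)
console.log(req.url);
```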
Local Models
- LM Studio – run the server (default http://localhost:1234), set the base URL in Settings → API Keys, then select LM Studio in Settings → Model (LM Studio guide)
- Ollama – run the server (default http://localhost:11434), set the base URL in Settings → API Keys, then select Ollama in Settings → Model
Note: Chrome may prompt you to allow local network access. This is only used to connect to local LM Studio/Ollama servers.
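One way to picture the provider setup above is as a base-URL-to-endpoint mapping. The defaults below come straight from this README; the path layout is an assumption (OpenAI's hosted Responses endpoint is `/v1/responses`, while LM Studio and Ollama expose OpenAI-compatible chat completions under `/v1/chat/completions`), and how Wordmark actually routes requests may differ.

```javascript
// Sketch: resolve a request endpoint per provider. Default base URLs match
// the README; path choices are assumptions, not Wordmark's actual routing.
const DEFAULT_BASE_URLS = {
  openai: "https://api.openai.com",
  xai: "https://api.x.ai",
  lmstudio: "http://localhost:1234",
  ollama: "http://localhost:11434",
};

function endpointFor(provider, baseUrl = DEFAULT_BASE_URLS[provider]) {
  if (!baseUrl) throw new Error(`Unknown provider: ${provider}`);
  const base = baseUrl.replace(/\/$/, ""); // tolerate a trailing slash
  // Hosted providers speak the Responses API; local servers expose
  // OpenAI-compatible chat completions.
  return provider === "openai" || provider === "xai"
    ? `${base}/v1/responses`
    : `${base}/v1/chat/completions`;
}

console.log(endpointFor("ollama")); // http://localhost:11434/v1/chat/completions
```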
HTTPS & Docker
HTTPS is recommended for full functionality; see the quick steps in Getting Started. Full Docker/Compose instructions and SSL options are in the Docker guide.
# Pull from Docker Hub and run
docker run --rm -p 8080:80 h1ddenpr0cess20/wordmark:latest
Or build from source:
docker build -t wordmark:latest .
docker run --rm -p 8080:80 wordmark:latest
Architecture & Development
- Architecture – high-level structure
- Security & Storage – data handling
- UI & UX – layout and design
- Development & CONTRIBUTING – developer guide
Common tasks:
- Add tools – extend the catalog in src/js/services/api/toolManager.js and implement handlers (see src/js/services/weather.js) – Tool Calling
- Adjust models/providers – edit src/config/config.js – Services & Models
- Themes and styling – src/css/themes/**, src/css/components/**
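Adding a tool might look like the following sketch. The catalog shape (name, description, JSON-schema parameters, plus a handler function) mirrors typical function-calling conventions such as the weather tool; the exact structure src/js/services/api/toolManager.js expects may differ, so treat every name here as hypothetical.

```javascript
// Hypothetical tool definition + handler in the spirit of the weather tool.
// The real schema expected by toolManager.js may differ from this sketch.
const diceTool = {
  name: "roll_dice",
  description: "Roll an n-sided die and return the result.",
  parameters: {
    type: "object",
    properties: {
      sides: { type: "integer", description: "Number of sides", minimum: 2 },
    },
    required: ["sides"],
  },
};

// Handler invoked when the model calls the tool; returns a JSON string
// that gets fed back into the conversation as the tool result.
function handleRollDice({ sides }) {
  const result = 1 + Math.floor(Math.random() * sides);
  return JSON.stringify({ sides, result });
}

const out = JSON.parse(handleRollDice({ sides: 6 }));
console.log(out); // e.g. { sides: 6, result: <1..6> }
```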
Usage
- Enable Tools in Settings to allow function calls for weather, web search, file attachments, and any MCP servers you connect (Tool Calling)
- Manage conversations, images, and audio locally via History and Gallery
- Use TTS for spoken responses – configure provider and voice in Settings → TTS
Policies & Notes
- Privacy/Security – client-side only; no tracking (Security)
- Troubleshooting – common issues and tips (Troubleshooting)
- Not a Companion – philosophy and boundaries (Not a Companion)
License
MIT – see LICENSE
