JOLT Atlas
JOLT Atlas is a zero-knowledge machine learning (zkML) framework that extends the JOLT proving system to support ML inference verification from ONNX models.
Made with ❤️ by ICME Labs.
Overview
JOLT Atlas enables practical zero-knowledge machine learning by leveraging Just One Lookup Table (JOLT) technology. Traditional circuit-based approaches are prohibitively expensive when representing non-linear functions like ReLU and SoftMax. Lookups eliminate the need for circuit representation entirely.
In JOLT Atlas, we eliminate the complexity that plagues other approaches: no quotient polynomials, no byte decomposition, no grand products, no permutation checks, and most importantly — no complicated circuits.
Our core ethos is to reduce commitment costs via sumcheck while committing only to small-value polynomials.
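To make the lookup idea concrete, here is a hedged, self-contained sketch (not JOLT Atlas's actual API — all names here are illustrative): in a circuit, proving `relu(x)` requires range checks and selector constraints, whereas with a lookup argument the prover only shows that `(x, relu(x))` is a row of a precomputed table. Real systems use structured or decomposed tables rather than a dense enumeration.

```rust
// Illustrative sketch only: why a lookup beats a circuit for ReLU.
// A dense i8 table is used here purely for clarity; production lookup
// arguments decompose the table instead of enumerating it.

fn main() {
    // Precompute the (input, relu(input)) table over all i8 values.
    let table: Vec<(i8, i8)> = (i8::MIN..=i8::MAX)
        .map(|x| (x, x.max(0)))
        .collect();

    // "Proving" an evaluation reduces to a table-membership claim,
    // with no arithmetic constraints for the non-linearity itself.
    let x: i8 = -7;
    assert!(table.contains(&(x, x.max(0)))); // (-7, 0) is in the table

    let x2: i8 = 42;
    assert!(table.contains(&(x2, 42))); // positive inputs map to themselves

    println!("lookup membership checks passed");
}
```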
Examples
Examples live in jolt-atlas-core/examples/ and demonstrate end-to-end prove → verify flows for various ONNX models.
GPT-2
GPT-2 proof and verification flow.
cargo run --release --package jolt-atlas-core --example gpt2
nanoGPT
A ~0.25M-parameter GPT model (4 transformer layers). Loads the ONNX graph, generates a SNARK proof of inference, and verifies it.
cargo run --release --package jolt-atlas-core --example nanoGPT
Transformer (self-attention)
Single self-attention block proof.
cargo run --release --package jolt-atlas-core --example transformer
MiniGPT / MicroGPT
Smaller GPT variants useful for quick iteration and debugging.
cargo run --release --package jolt-atlas-core --example minigpt
cargo run --release --package jolt-atlas-core --example microgpt
Benchmarks
System specs: MacBook Pro M3, 16GB RAM
GPT-2 (125M params)
GPT-2 is a 125-million-parameter transformer model from OpenAI.
JOLT Atlas
| Stage | Wall clock |
|---|---|
| Proving/verifying key generation (setup_prover) | 1.003 s |
| Witness + commitment phase (ONNXProof::commit_witness_polynomials) | 0.762 s |
| IOP proving (ONNXProof::iop) | 5.997 s |
| Reduction opening proof (excluding HyperKZG::prove) | 1.899 s |
| HyperKZG prove (HyperKZG::prove) | 2.392 s |
| Proof time (ONNXProof::prove) | 14.889 s |
| Verify time (ONNXProof::verify) | 1.038 s |
| End-to-end total (setup_prover + prove + verify) | 16.930 s |
nanoGPT (~0.25M params, 4 transformer layers)
nanoGPT is the standard workload we use for cross-project comparison. It is a ~250k-parameter GPT model with 4 transformer layers.
JOLT Atlas:
| Stage | Wall clock |
|---|---|
| Verifying key generation (setup_verifier) | <0.001 s |
| Proving key generation (setup_prover) | 0.263 s |
| Proof time (ONNXProof::prove) | 2.288 s |
| Verify time (ONNXProof::verify) | 0.127 s |
| End-to-end total (setup_prover + prove + verify) | 2.678 s |
ezkl on the same model (source):
| Stage | Wall clock |
|---|---|
| Verifying key generation | 192 s |
| Proving key generation | 212 s |
| Proof time | 237 s |
| Verify time | 0.34 s |
JOLT Atlas produces a proof for nanoGPT in ~2.29 s versus ezkl's ~237 s proof time (not counting their 400+ s of key generation). That is roughly a 104× speed-up on proof generation alone.
How to reproduce locally
# from repo root
cargo run --release --package jolt-atlas-core --example gpt2
Add -- --trace for Chrome Tracing JSON output (view in chrome://tracing), or -- --trace-terminal for timing printed to the terminal.
Getting Started
GPT-2 (first run)
GPT-2 uses a Hugging Face–hosted ONNX model that is not checked into the repo. A helper script downloads and prepares it automatically.
- Clone the repository.
- Install Rust and Cargo.
- Download the model:
# Create a virtual environment (one-time)
python3 -m venv .venv
source .venv/bin/activate
# Run the download script
python scripts/download_gpt2.py
This exports GPT-2 via Hugging Face Optimum into atlas-onnx-tracer/models/gpt2/ and copies model.onnx → network.onnx.
- Test the model (trace only, no proof):
cargo run --release --package atlas-onnx-tracer --example gpt2
You should see the model graph printed and an output shape like
[1, 16, 65536] (vocab size 50257 padded to the next power of two).
- Prove & verify GPT-2:
cargo run --release --package jolt-atlas-core --example gpt2
A successful run prints Proof verified successfully!.
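The last dimension of the traced output shape in step 4 comes from padding the GPT-2 vocabulary size up to the next power of two, which lookup-friendly table sizes require. A one-liner confirms the arithmetic:

```rust
fn main() {
    let vocab: u32 = 50257; // GPT-2 vocabulary size
    let padded = vocab.next_power_of_two();
    assert_eq!(padded, 65536); // 2^16, the last dim of [1, 16, 65536]
    println!("vocab {vocab} padded to {padded}");
}
```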
Acknowledgments
Thanks to the Jolt team for their foundational work. We are standing on the shoulders of giants.
