
Architecture

Four principles: local-first, AI-native, cost-bounded, auditable.

Design docs

Four long-form documents live in the core repository at docs/architecture/. They're the source of truth for everything the Forge does. The user-facing pages on this site are summaries; if there's ever a conflict, the design doc wins.

Each document, and what it covers:
auracle-forge.md The full multi-agent architecture: agents, communication contracts, the catalog schema, scan lifecycle.
auracle-forge-surfaces.md Concrete mockups for the Web UI, JupyterLab side panel, and CLI surfaces. Pixel-level annotations on each surface.
auracle-forge-design.md The 12-section visual design system: typography, spacing, color, motion, accessibility, dark/light parity.
auracle-forge-jupyternaut.md The 11-section ambient-context model: the five integration layers, the relevance algorithm, the kill switch contract.
Access: the design docs live in the private SiixQuant/Auracle repo and are not currently shipped inside customer installs. Institutional / Enterprise customers can request a read-only repo invite at contact@aurapointcapital.com.

Principles

Local-first

Every byte of every backtest, every fill, and every prompt to a third-party LLM is logged and reproducible from your install alone. Auracle does not phone home except for license verification, and that check is one signed request per day whose contents are {tier, install_uuid, expiry}. Strategy code, fills, and market data never leave your host.

This pushes some complexity onto the operator (you manage your own backups, your own uptime) but eliminates the vendor-trust surface entirely.

AI-native

Auracle is built around the assumption that the operator uses an AI assistant day-to-day. Every surface is designed twice — once for a human and once for an LLM. The MCP server is not an afterthought; it has feature parity with the HTTP API for read operations, and a deliberately narrower write surface so an over-eager assistant can't do damage.
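One way to picture the deliberately narrower write surface is a plain allowlist that gates tool dispatch. A minimal sketch; the tool names and the `authorize` helper are hypothetical, not Auracle's actual MCP tools:

```python
# Hypothetical read/write split for a machine surface.
# Reads get broad parity; writes are a deliberately short list.
READ_TOOLS = {"get_catalog", "get_scan", "get_fills", "get_backtest"}
WRITE_TOOLS = {"start_scan"}  # no deploy, no delete, no config mutation

def authorize(tool: str, is_write: bool) -> bool:
    """Return True if the tool is permitted for this kind of operation."""
    if is_write:
        return tool in WRITE_TOOLS
    return tool in READ_TOOLS | WRITE_TOOLS
```

An over-eager assistant can read anything it could read over HTTP, but the worst write it can issue is starting a scan.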

The Forge is the clearest expression of this principle: a system whose primary user is another AI agent (or a human via Jupyternaut), shaped to be cheap to reason about and safe to delegate to.

Cost-bounded

Every LLM call has a per-call cap, a per-scan cap, and a per-day cap. Every cap is a hard limit: hitting one pauses the work in flight, and the event surfaces in the audit log. No cost surprises; if you wake up to a $500 Anthropic bill, it's because you raised the cap yourself, not because an agent ran away.
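The three-layer check can be sketched as follows. The names (`CostCaps`, `check`, `record`) are illustrative, not Auracle's actual internals:

```python
from dataclasses import dataclass

@dataclass
class CostCaps:
    per_call: float        # USD ceiling for a single LLM call
    per_scan: float        # USD ceiling for one scan
    per_day: float         # USD ceiling for the install per day
    scan_spend: float = 0.0
    day_spend: float = 0.0

    def check(self, call_cost: float) -> bool:
        """True only if the call fits under all three hard limits."""
        return (call_cost <= self.per_call
                and self.scan_spend + call_cost <= self.per_scan
                and self.day_spend + call_cost <= self.per_day)

    def record(self, call_cost: float) -> None:
        """Spend against the caps; a breach pauses work rather than overspending."""
        if not self.check(call_cost):
            raise RuntimeError("cap hit: pause work in flight, write audit entry")
        self.scan_spend += call_cost
        self.day_spend += call_cost
```

The point of checking before spending is that the cap is enforced a priori: the call that would breach a limit is never made.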

Auditable

Every agent invocation lands in the forge_agent_runs table.

Replaying a scan with the same input fingerprints should produce hash-matching outputs modulo LLM non-determinism (the only entropy source is the model's sampling temperature, which the Forge pins to 0 by default).
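Input fingerprinting of this kind is typically a hash over a canonical serialization, so that logically identical inputs always fingerprint identically. A sketch under that assumption; the function name and payload shape are not Auracle's actual schema:

```python
import hashlib
import json

def fingerprint(payload: dict) -> str:
    """SHA-256 over a canonical JSON form: key order and whitespace
    cannot change the hash, so equal inputs always match."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

With temperature pinned to 0, the same input fingerprint going in means hash-matching outputs coming out is a testable property, not a hope.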

Security

API keys

Provider API keys live in ~/auracle/.env on disk and as environment variables inside the containers. They are never written to the database. They are never sent to Anthropic. They are not logged.

Two tokens guard the machine surfaces: HOUSTON_API_KEY for Houston's HTTP/JSON API (sent as X-API-Key), and AURACLE_MCP_TOKEN for the standalone MCP server on port 7777 (sent as Authorization: Bearer). Both live in .env; the installer generates the Houston key on first run and you set the MCP token yourself. Rotation is a single line in .env followed by docker compose up -d.
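In practice, rotation is an in-place edit of the token line followed by a container restart. The sketch below writes a throwaway .env in a temp location so it is self-contained; the real file is ~/auracle/.env, and the docker step is shown as a comment:

```shell
# Illustrative rotation of AURACLE_MCP_TOKEN (temp file stands in for ~/auracle/.env).
ENV_FILE="$(mktemp)"
printf 'HOUSTON_API_KEY=abc123\nAURACLE_MCP_TOKEN=old-token\n' > "$ENV_FILE"

# Generate a fresh random token and swap it into the .env line.
NEW_TOKEN="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
sed -i "s/^AURACLE_MCP_TOKEN=.*/AURACLE_MCP_TOKEN=${NEW_TOKEN}/" "$ENV_FILE"

grep "^AURACLE_MCP_TOKEN=" "$ENV_FILE"
# then: docker compose up -d   (recreates containers with the new environment)
```

Note `sed -i` with no suffix argument assumes GNU sed; on BSD/macOS use `sed -i ''`.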

License + tier limits

Each tier caps how many installs and brokers a license covers.

Your license is validated server-side. If you need more installs, more brokers, or want to migrate a license to a new machine, contact support.

EULA

First-run web UI access shows the EULA and refuses to proceed without acceptance. The EULA text is fixed per release; you accept it once per install.

Generated-code sandbox

Strategies the Forge generates are sandboxed and never auto-deployed. You review the code and metrics before promotion; promotion is the only path from "Synthesis produced a candidate" to "this is now running against real money."

What we don't claim

Auracle does not claim to be a fortress against a nation-state attacker. The threat model is: a commercial competitor on the same hardware, a curious cloud provider, a vendor whose database leaks tomorrow. If you're worried about an APT in your kernel, no amount of self-hosting helps; you need hardware isolation.