Decision Engine · devnet


01 · architecture

Three stages. No trust assumptions.

The decision engine runs every 4 seconds. Six sensor agents fetch data, three decider agents vote, and the rebalancer executor produces a plan. The plan is never trusted by Atlas's on-chain programs — it is proven via SP1 zkVM before any state moves.
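
The shape of that loop, as a minimal sketch. All types and function names below are illustrative stand-ins, not the real Atlas crates:

use std::{thread, time::Duration};

#[derive(Clone)]
struct SensorVote { approve: bool, confidence: f64 } // one sensor's reading

struct RebalancePlan { legs: Vec<(String, u64)> }    // allocation in bps

enum Verdict { Execute(RebalancePlan), Refuse(&'static str) }

// Stage 1: six sandboxed sensors fetch oracles + alt data.
fn read_sensors() -> Vec<SensorVote> {
    vec![SensorVote { approve: true, confidence: 0.9 }; 6]
}

// Stage 2: deciders merge votes into a plan or a refusal.
fn decide(votes: &[SensorVote]) -> Verdict {
    let conf: f64 = votes.iter().map(|v| v.confidence).sum::<f64>() / votes.len() as f64;
    if conf < 0.5 || votes.iter().any(|v| !v.approve) {
        return Verdict::Refuse("gate veto");
    }
    Verdict::Execute(RebalancePlan { legs: vec![("kamino".into(), 4000)] })
}

fn main() {
    loop {
        match decide(&read_sensors()) {
            // Stage 3: the plan is proven in SP1 before any state moves.
            Verdict::Execute(plan) => println!("plan with {} leg(s) -> prover", plan.legs.len()),
            Verdict::Refuse(why) => println!("refused: {why}"),
        }
        thread::sleep(Duration::from_secs(4)); // one tick every 4 seconds
    }
}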

stage 01

Sensors

Six agents read oracles + alt data every 4s. Each is sandboxed, with strict latency budgets (one possible shape is sketched after this list).

  • Oracle (Pyth, Switchboard)
  • Exposure (per-protocol caps)
  • Fee (priority fee oracle)
  • MEV (sandwich detection)
  • Liquidity (slippage scout)
  • Anomaly (cross-feed liar detection)
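
A plausible shape for the sensor sandbox, assuming a simple trait with a hard per-tick budget. The trait, its names, and the budget value are assumptions, not Atlas's actual interface:

use std::time::{Duration, Instant};

// Hypothetical sensor interface: one observation per tick, hard budget.
trait Sensor {
    const BUDGET: Duration;                 // strict per-tick latency budget
    fn read(&self) -> Result<f64, String>;  // fetch one observation

    // A sensor that errors or blows its budget contributes no vote,
    // which the deciders downstream can treat as grounds to refuse.
    fn vote(&self) -> Option<f64> {
        let start = Instant::now();
        let value = self.read().ok()?;
        (start.elapsed() <= Self::BUDGET).then_some(value)
    }
}

struct OracleSensor; // e.g. the Pyth/Switchboard reader

impl Sensor for OracleSensor {
    const BUDGET: Duration = Duration::from_millis(250); // assumed budget
    fn read(&self) -> Result<f64, String> { Ok(148.32) } // stubbed price
}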

stage 02

Deciders

Three voting agents synthesize the sensor outputs into a rebalance plan or a refusal (the weighting is sketched after this list).

  • Aggregator (vote merger)
  • Policy Gate (invariant check)
  • Trigger Gate (3-gate final)
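
"Confidence-weighted by agent latency" could look like the following. The inverse-latency weighting is an assumption for concreteness, not the published scheme:

struct AgentVote { approve: bool, latency_ms: u64 }

// Weight each vote by how much of the latency budget it left unused;
// a vote at or over budget counts for nothing.
fn merge(votes: &[AgentVote], budget_ms: u64) -> bool {
    let weight = |v: &AgentVote| {
        budget_ms.saturating_sub(v.latency_ms) as f64 / budget_ms as f64
    };
    let (yes, total) = votes.iter().fold((0.0, 0.0), |(yes, total), v| {
        let w = weight(v);
        (yes + if v.approve { w } else { 0.0 }, total + w)
    });
    total > 0.0 && yes / total > 0.5 // weighted majority approves
}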

stage 03

Executor

One agent emits the final tx. The plan becomes a Groth16 receipt before any vault state moves (the ordering is sketched below).

  • Rebalancer (SP1 → Solana)
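
The one property worth pinning down in code is the ordering: the receipt exists before anything is submitted. A sketch with placeholder types only; nothing below is the real atlas-operator-agent API:

struct Plan;
struct Groth16Receipt; // ~280-byte proof over the plan's public input

fn prove(_plan: &Plan) -> Result<Groth16Receipt, String> {
    Ok(Groth16Receipt) // stand-in for the SP1 prover round-trip
}

fn submit(_plan: Plan, _receipt: Groth16Receipt) -> Result<(), String> {
    Ok(()) // stand-in for atlas-presign + Jito bundle submission
}

fn execute(plan: Plan) -> Result<(), String> {
    let receipt = prove(&plan)?; // no receipt, no transaction
    submit(plan, receipt)        // tx carries the proof; atlas_verifier
                                 // must accept it before the vault commits
}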

02 · the agents

Seven agents. One vote. One proof.

sensor

Oracle

  • Pyth pull-oracle posting on-demand
  • Switchboard fallback at slot-drift threshold
  • Cross-feed median + outlier reject
atlas-pyth-post · 412 LOC
sensor

Exposure

  • Per-protocol caps in bps (Kamino/Drift/Jupiter)
  • Treasury entity exposure ledger
  • Hard refusal above policy ceiling
atlas-exposure · 655 LOC
sensor

Fee Oracle

  • Priority fee floor from recent slots
  • Compute unit budget per ix
  • Refuse rebalance if cost > expected gain
atlas-fee-oracle · 187 LOC
sensor

MEV

  • Sandwich detection from slot context
  • JIT-liquidity pattern recognition
  • Routes via Jito bundle if risk > threshold
atlas-mev · 240 LOC
sensor

Liquidity

  • Pool depth check via Jupiter Quote
  • Slippage budget enforced pre-execute
  • Tier-A RPC failover (Helius, Triton)
atlas-rpc-router · 1237 LOC
sensor

Anomaly

  • Cross-validates oracle feeds
  • Detects feed-divergence > 25 bps
  • Triggers refusal path — withdrawals still flow
atlas-lie · 650 LOC
decider

Aggregator

  • Merges 6 sensor votes
  • Confidence-weighted by agent latency
  • Outputs draft rebalance plan + reasoning trace
atlas-intelligence · 1538 LOC
decider

Policy Gate

  • Strategy commitment hash check
  • Allocation must match registered strategy
  • Refuses if any I-1..I-26 invariant violated
atlas-trigger-gate · 680 LOC
decider

Trigger Gate

  • 3-gate final: freshness · oracle · exposure
  • Last veto before SP1 prover runs
  • Refusal logged to atlas-bus for forensics
atlas-trigger-gate · 680 LOC
executor

Rebalancer

  • Builds versioned tx with scoped keys
  • Submits via atlas-presign + Jito bundle
  • Waits for atlas_verifier CPI → atlas_vault commit
atlas-operator-agent · 1711 LOC

03 · the gates

Three gates. Any veto refuses the rebalance.

Before SP1 generates a proof, three sequential gates must all pass. Any one of them returning REFUSE aborts the rebalance entirely. Withdrawals remain unblocked under invariant I-11.

gate 01

Freshness

Slot drift between Atlas's last proof and the current Solana slot must stay within the freshness budget. Stale state cannot rebalance.

REFUSE if drift > 400 slots

gate 02

Oracle

Pyth + Switchboard feeds must agree within 25 bps. Cross-feed liar detection runs at every tick.

REFUSE if median spread > 25 bps

gate 03

Exposure

Proposed allocation cannot exceed per-protocol exposure cap (Kamino ≤ 40%, Drift ≤ 30%, Jupiter ≤ 30%). Caps live in the registered strategy.

REFUSE if any leg > strategy cap
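
Putting the three gates together as straight-line code. The thresholds are the ones quoted above; the types and names are illustrative:

struct GateInput {
    slot_drift: u64,             // slots since Atlas's last proof
    oracle_spread_bps: u64,      // median Pyth vs. Switchboard disagreement
    legs: Vec<(Protocol, u64)>,  // proposed allocation, in bps
}

enum Protocol { Kamino, Drift, Jupiter }

fn cap_bps(p: &Protocol) -> u64 {
    match p {
        Protocol::Kamino => 4000,  // ≤ 40%
        Protocol::Drift => 3000,   // ≤ 30%
        Protocol::Jupiter => 3000, // ≤ 30%
    }
}

fn gates(input: &GateInput) -> Result<(), &'static str> {
    if input.slot_drift > 400 {
        return Err("REFUSE: freshness"); // gate 01
    }
    if input.oracle_spread_bps > 25 {
        return Err("REFUSE: oracle"); // gate 02
    }
    if input.legs.iter().any(|(p, bps)| *bps > cap_bps(p)) {
        return Err("REFUSE: exposure"); // gate 03
    }
    Ok(()) // all three passed; the SP1 prover may run
}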

04 · the proof

Every rebalance is a Groth16 receipt.

The decision plan becomes the public input to an SP1 zkVM program. The prover proves: given these sensor votes, the policy gate would have passed. The proof is verified on-chain by atlas_verifier before atlas_rebalancer is allowed to touch atlas_vault state.

public_input_v3.rs — 300 bytes

pub struct PublicInputV3 {
  pub vault_id:            [u8; 32],   // PDA of target vault
  pub strategy_hash:       [u8; 32],   // committed strategy
  pub allocation:          Allocation, // proposed split
  pub slot:                u64,        // proof slot
  pub timestamp:           i64,        // wall-clock
  pub commitment:          [u8; 32],   // Pedersen of (idle, deployed)
  pub disclosure_policy:   [u8; 32],   // confidential-mode hash
  pub confidential_mode:   u8,         // pattern A or B
  pub agent_votes:         [Vote; 7],  // sensor + decider results
  pub gate_results:        [Gate; 3],  // 3-gate verdicts
}
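
Guest-side, the program plausibly reads the plan, re-checks policy, and commits the struct as public input. A sketch assuming the standard sp1_zkvm guest I/O API; the real program's internals are not shown on this page:

#![no_main]
sp1_zkvm::entrypoint!(main);

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct PublicInputV3 { /* fields as in the struct above */ }

pub fn main() {
    // Read the decision plan supplied by the host.
    let input: PublicInputV3 = sp1_zkvm::io::read();

    // Re-run the policy checks inside the zkVM, e.g.:
    // assert!(gates(&input).is_ok());

    // Commit the struct so the Groth16 receipt is bound to exactly
    // this plan; atlas_verifier checks it against the on-chain input.
    sp1_zkvm::io::commit(&input);
}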

  • proof size: ~280 bytes
  • verifier compute: ~200k CU
  • verifier program: A738nTHZK…ufR1

05 · invariants

26 hard rules (I-1 through I-26), enforced in CI. The Policy Gate refuses any plan that violates one, and I-11 keeps withdrawals unblocked even on the refusal path.

06 · adversarial

We try to break it before you do.

atlas-chaos and atlas-replay run an adversarial test corpus on every CI build. The decision engine must produce REFUSE on every malicious input, and withdrawals must succeed even under prover failure.

  • Pyth feed jump > 100 bps · expects REFUSE · pass (48 cases)
  • Slot drift > 400 · expects REFUSE · pass (24 cases)
  • Strategy hash mismatch · expects REFUSE · pass (12 cases)
  • Replay of stale proof · expects REFUSE · pass (36 cases)
  • Exposure cap exceeded · expects REFUSE · pass (18 cases)
  • MEV sandwich pattern · expects REROUTE · pass (22 cases)
  • Prover offline 5 min · expects WITHDRAW STILL OK · pass (15 cases)
  • Dodo webhook replay · expects DEDUP-REJECT · pass (10 cases)
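
What one of those cases might look like against the gate sketch from section 03. baseline_input and the field it perturbs are hypothetical, not the atlas-chaos API:

fn baseline_input() -> GateInput {
    GateInput { slot_drift: 0, oracle_spread_bps: 0, legs: vec![] }
}

#[test]
fn oracle_divergence_is_refused() {
    // Inject a cross-feed divergence well past the 25 bps gate.
    let mut input = baseline_input();
    input.oracle_spread_bps = 120;
    assert_eq!(gates(&input), Err("REFUSE: oracle"));
}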

07 · the code

Read the source.

All crates above ship in the open. ~43,000 LOC across 46 Rust crates.