The same topological identity engine that powers market regime detection —
ported to spatial autonomy. 6 CUDA kernels. H₀ persistent homology.
O(1) scene recall. No LLM in the loop.
DATA PIPELINE
The Perception Path
From raw LiDAR point cloud to motion intent in a single deterministic pass. Every stage is a policy syscall with bounded latency.
01 — Perception
LiDAR → Voxel Grid
1M-point cloud quantized into CUDA voxel grid. Traversability, occupancy gradient, and motion pressure computed in parallel.
4-agent deterministic council (Solidity, Flow, Topology, Risk) votes on mode + velocity. Director synthesizes consensus.
<1ms
04 — Safety Policy
Governor Veto
Director Governor can hard-stop the robot if topological stability collapses. O(1) SceneMemory skips re-solving known rooms.
<1ms
BENCHMARK
Production Numbers
Verified on RTX 6000 Blackwell with 1M-point synthetic warehouse scans. Every benchmark is reproducible.
45ms
End-to-End Latency
LiDAR → Decision
6
CUDA Kernels
Parallel GPU streams
O(1)
Scene Recall
Hash-indexed memory
28 Hz
Frame Rate
Real-time perception
COMPONENTS
What We Built
Six CUDA kernels, a topological identity engine, and a deterministic safety governor — all patent-protected.
CUDA Compute
6 GPU Kernels
Massively parallel processing on RTX 6000 Blackwell:
Voxel Grid — point cloud quantization
Traversability — solidity index computation
Scene Metrics — occupancy, motion pressure
RMQ — O(1) range-min valley detection
Persistence — H₀ chain-graph homology
Cluster Analysis — obstacle segmentation
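The kernel names above map to simple data-parallel operations. As a rough CPU illustration of what the Voxel Grid stage computes (a NumPy stand-in — `voxelize` and the 0.1 m cell size are illustrative, not the shipped CUDA kernel):

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    """Quantize an (N, 3) point cloud into integer voxel coordinates and
    count points per occupied voxel — a serial stand-in for the
    data-parallel CUDA kernel."""
    coords = np.floor(points / voxel_size).astype(np.int64)  # (N, 3) voxel indices
    voxels, counts = np.unique(coords, axis=0, return_counts=True)
    return voxels, counts

# Toy 4-point cloud: two points share a voxel at 0.1 m resolution.
pts = np.array([[0.01, 0.02, 0.0],
                [0.04, 0.09, 0.0],
                [0.25, 0.0,  0.0],
                [1.00, 1.00, 1.00]])
voxels, counts = voxelize(pts, voxel_size=0.1)
print(len(voxels))   # 3 occupied voxels
print(counts.max())  # 2 points in the densest voxel
```

On the GPU the same quantize-then-reduce shape parallelizes trivially, which is why this stage sits on the fast path.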
TOPOLOGICAL IDENTITY
H₀ Persistent Homology
Same mathematics as the market regime detector, applied to 3D space:
Persistence Stability — how "clean" is this room?
Persistence Entropy — structural complexity
Fingerprint Hash — 16×16 image → SHA-256
O(1) Recall — "I've seen this room before"
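These four signals can be sketched compactly under common conventions — persistence entropy as Shannon entropy of normalized H₀ bar lifetimes, and the fingerprint as SHA-256 over a quantized 16×16 persistence image. The function names and the 8-bit quantization are assumptions, not the engine's exact pipeline:

```python
import hashlib

import numpy as np

def persistence_entropy(lifetimes):
    """Shannon entropy of normalized H0 bar lifetimes: low entropy means
    a few dominant features (a structurally 'clean' room), high entropy
    means clutter."""
    p = np.asarray(lifetimes, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

def fingerprint(persistence_image):
    """16x16 persistence image -> SHA-256 hex digest: a fixed-size key
    suitable for O(1) hash recall."""
    img = np.asarray(persistence_image)
    assert img.shape == (16, 16)
    # Quantize to 8 bits so tiny float jitter cannot flip the hash
    # (an assumption about how the real digest is stabilized).
    q = np.clip(img * 255.0, 0, 255).astype(np.uint8)
    return hashlib.sha256(q.tobytes()).hexdigest()

print(round(persistence_entropy([1.0, 1.0]), 3))  # 0.693 = ln 2
img = np.zeros((16, 16)); img[4, 7] = 1.0
print(len(fingerprint(img)))                      # 64 hex characters
```

Because the digest is deterministic and fixed-size, "I've seen this room before" reduces to one hash-table probe.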
COUNCIL ARCHITECTURE
4-Agent Deterministic Council
No LLM. No probabilistic reasoning. Pure policy compilation:
SolidityAgent — traversability assessment
FlowAgent — dynamic obstacle tracking
TopologyAgent — persistence-derived safety
RiskAgent — sensor health + degradation
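A deterministic council is, at bottom, a pure function of four votes. A minimal sketch — the `Vote` record, the conservatism ordering, and the confidence-weighted velocity blend are illustrative; the source states only that four deterministic agents vote and the Director synthesizes consensus:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    agent: str
    mode: str        # "Proceed" | "ProceedCautious" | "Stop"
    velocity: float  # fraction of max speed, 0..1
    confidence: float
    veto: bool = False

def director(votes):
    """Deterministic consensus: any veto stops the robot; otherwise the
    most conservative mode wins and velocity is the confidence-weighted
    mean of agent proposals."""
    if any(v.veto for v in votes):
        return "Stop", 0.0
    order = {"Stop": 0, "ProceedCautious": 1, "Proceed": 2}
    mode = min((v.mode for v in votes), key=order.__getitem__)
    total = sum(v.confidence for v in votes)
    velocity = sum(v.velocity * v.confidence for v in votes) / total
    return mode, velocity

votes = [Vote("Solidity", "Proceed", 0.9, 0.8),
         Vote("Flow", "Proceed", 0.8, 0.7),
         Vote("Topology", "ProceedCautious", 0.5, 0.75),
         Vote("Risk", "Proceed", 0.85, 0.6)]
mode, velocity = director(votes)
print(mode)  # ProceedCautious — the most conservative non-veto mode wins
```

Same votes in, same command out, every time — which is what makes the decision auditable.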
SAFETY LAYER
Director Governor Veto
Deterministic safety override — no "hallucinated navigation":
Stability < 0.25 → force ProceedCautious
Stability < 0.15 → hard stop
Any agent veto → override all others
SceneMemory → skip compute for known rooms
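The override policy above is concrete enough to sketch directly. Only the 0.25 / 0.15 thresholds and the any-veto-overrides rule come from the source; the velocity cap under caution and the reason strings are assumptions:

```python
def governor(mode, velocity, stability, agent_vetoes):
    """Deterministic safety override with a named reason for every
    decision. Thresholds mirror the stated policy; the 0.3 velocity cap
    under caution is an illustrative choice."""
    if agent_vetoes:
        return "Stop", 0.0, f"veto:{agent_vetoes[0]}"
    if stability < 0.15:
        return "Stop", 0.0, "stability<0.15:hard_stop"
    if stability < 0.25:
        return "ProceedCautious", min(velocity, 0.3), "stability<0.25:cautious"
    return mode, velocity, "nominal"

print(governor("Proceed", 0.8, 0.12, []))           # hard stop
print(governor("Proceed", 0.8, 0.20, []))           # forced caution
print(governor("Proceed", 0.8, 0.90, ["RiskAgent"]))  # agent veto wins
```

Note the return shape: mode, velocity, and a reason string — the "no hallucinated navigation" property is just that every branch is enumerable.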
LIVE DASHBOARD
Spatial Autonomy Dashboard
Real-time visualization of perception → council → decision. Three demo scenarios: nominal navigation, fog occlusion, and degraded sensor with safety veto.
Warehouse Nominal
Open space, good visibility. All 4 agents vote Proceed. Topology confidence 0.75, valley score 0.70. Director synthesizes 83% velocity at 71% confidence.
Safety Veto — RiskAgent Override
Sensor degradation detected. RiskAgent vetoes at 91% confidence, forcing hard stop. Velocity → 0%. This is the "hallucination prevention" layer — the robot cannot be convinced to move through unstable topology.
LIVE OUTPUT
What the Engine Actually Prints
Raw output from a LiDAR spatial perception pipeline run. Every number is deterministic — run it yourself and get the same result.
python proof-artifacts/benchmarks/run_spatial_proof.py — same output every run
"Apex17 does not just detect space — it recognizes the structure of space and feeds that directly into autonomous decision-making."
THE APEX17 THESIS
DATA FLOW
A Different Kind of Pipeline
Competitors usually give either fast local perception or heavy map-based memory. Apex17 gives fast local spatial decisions plus topological identity and O(1) structural recall in the same engine.
Map-based stacks are strong for mapping and revisits, but heavier. They are slower to turn into direct reflex semantics, and coordinate-centric rather than structure-centric.
APEX17
Sense → Interpret → Identify → Remember → Govern
Geometry → Spatial Prior → Regime Semantics → Topo Identity → O(1) Recall → Director Control
Data doesn't just become a map — it becomes reflex semantics, compact topological memory, a reusable identity hash, and explicit confidence modifiers for a higher-level agent.
OUTPUT SIGNAL
What the System Can Tell You
A typical stack tells you where obstacles are and what path is open. Apex17 tells you:
Is motion safe?
How fast?
What structural regime is this?
How stable / complex?
Have I seen this before?
That is a much richer downstream signal from the same raw sensor input.
DIFFERENTIATORS
Five Architecture-Level Differences
These are not incremental improvements. They are structural design choices that change what the system can do.
01
Perception → Decision Semantics Early
Competitors pass along bulky intermediates: voxels, costmaps, occupancy grids, embeddings. Apex17 converts much earlier into regime, decision, velocity, and confidence-relevant topology signals.
This reduces the distance between sensing and action. The data doesn't stop at "here is a point cloud" — it keeps compressing until it reaches actionable semantics.
02
Topology Before Long-Horizon Cognition
In most systems, memory comes from map databases, SLAM graphs, or learned latent state. In Apex17, memory comes from persistence pairs, digest metrics, and topological hash with O(1) recall.
The system remembers structure, not just position. This makes recall independent of coordinate frame or sensor calibration.
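Structure-keyed recall needs nothing heavier than a hash map. A minimal sketch, assuming the fingerprint digest is the key and a small metrics dict is the value (class and field names are illustrative):

```python
class SceneMemory:
    """Hash-indexed scene store: recall is one dict lookup, independent
    of coordinate frame, pose graph, or map-database size."""

    def __init__(self):
        self._scenes = {}

    def remember(self, fingerprint, digest):
        self._scenes[fingerprint] = digest

    def recall(self, fingerprint):
        # Average-case O(1): cached digest for a known room, else None.
        return self._scenes.get(fingerprint)

mem = SceneMemory()
mem.remember("room-a3f9", {"stability": 0.78, "entropy": 1.2})
print(mem.recall("room-a3f9") is not None)  # True: skip re-solving this room
print(mem.recall("room-unknown"))           # None: new structure, full solve
```

A miss falls through to the full pipeline; a hit skips straight to the cached digest, which is where the O(1) recall numbers come from.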
03
Clean Fast-Path / Deep-Path Split
Fast path: point cloud → CUDA spatial reflex → decision. Deep path: same scene → persistent homology / topology → digest/hash → Director veto / recall.
Many competitors either make everything heavy or keep everything shallow. Apex17 has both lanes in a single engine, running in parallel.
04
Compute-Aware Data Flow
GPU for data-parallel spatial stages. CPU for the inherently serial filtration sweep. O(1) lookup for recall. The data flow is itself compute-conscious.
Competitors often force one compute model across everything. Apex17 uses the right substrate at each step — CUDA parallelism where it scales, serial where it must, hash tables where O(1) matters.
05
Interpretable Governance Signals
Topology affects the Director through explicit mechanisms: low stability → confidence cut; high entropy → confidence cut; veto reasons logged in reasoning strings.
Competitors bury this inside planner tuning, learned uncertainty, and hidden heuristics. Apex17's flow makes the modulation legible — every safety override has a named reason and a numerical threshold.
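The modulation described above can be sketched as a pure function that returns both the adjusted confidence and the named reasons. The 20% cut size and the two thresholds here are illustrative placeholders, not the engine's real constants:

```python
def modulate_confidence(base, stability, entropy,
                        stability_floor=0.5, entropy_ceiling=2.0):
    """Explicit, loggable confidence modifiers: each cut appends a
    human-readable reason instead of hiding inside planner tuning.
    Cut sizes and thresholds are illustrative assumptions."""
    confidence, reasons = base, []
    if stability < stability_floor:
        confidence *= 0.8
        reasons.append(f"low_stability({stability:.2f})->cut20%")
    if entropy > entropy_ceiling:
        confidence *= 0.8
        reasons.append(f"high_entropy({entropy:.2f})->cut20%")
    return confidence, reasons

conf, why = modulate_confidence(0.9, stability=0.4, entropy=2.5)
print(round(conf, 3), why)  # 0.576, with two named reasons attached
```

The point is the return type: a number plus a list of reasons, so every downstream override can quote exactly why confidence dropped.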
CARLA BENCHMARK — TOWN05
Three Agents. Same Route. Only the Brain Differs.
Head-to-head evaluation on CARLA Town05 autonomous driving benchmark. APEX17 policy bridge vs. a stateless reactive agent vs. a hand-tuned finite-state machine. All data from versioned scorecard.json files.
APEX17 policy bridge connected to a live iRobot Roomba via REST API. Same observe → fingerprint → recall → policy → execute → record loop. 25+ experiment runs, zero safety violations.
✓ PROVEN
0
Safety Violations
25+ runs, zero incidents
✓ PROVEN
6 µs
Decision Latency
Avg from hardware telemetry
✓ PROVEN
5.4 µs
O(1) Scene Recall
Fingerprint hash lookup
85.5 s
Mission Duration
Live kitchen navigation
DECISION DISTRIBUTION — 2,000 CARLA FRAMES
Stop
571
Cautious
466
Nominal
408
Yield
353
Sensor Fallback
180
Emergency
22
Only 1.1% emergency brakes across 2,000 decision frames. The engine is conservative (28.5% stop, 23.3% cautious) — it defaults to safety.
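The distribution above is easy to sanity-check against its own headline figures:

```python
# Decision counts as reported for the 2,000-frame CARLA run.
counts = {"Stop": 571, "Cautious": 466, "Nominal": 408,
          "Yield": 353, "Sensor Fallback": 180, "Emergency": 22}
total = sum(counts.values())
print(total)                                       # 2000 decision frames
print(f"{100 * counts['Emergency'] / total:.1f}")  # 1.1 (% emergency brakes)
```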
"notes": "Hardware proof run 3: APEX17 live telemetry + adaptive decisions"
EVIDENCE MATRIX
Same Engine. Two Platforms. Consistent Results.
The APEX17 policy bridge is platform-agnostic. The same observe → fingerprint → recall → policy → execute → record chain runs identically on a driving simulator and a cleaning robot.
All experiment data is versioned and reproducible. Source: src/apex17-robotics/
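The platform-agnostic claim comes down to one loop over six verbs. A sketch with stand-in objects — `FakePlatform`, the policy lambda, and the byte-string fingerprint are all assumptions for illustration, not the shipped bridge API:

```python
import hashlib

class SceneMemory:
    def __init__(self): self._m = {}
    def recall(self, k): return self._m.get(k)   # O(1) known-scene lookup
    def remember(self, k, v): self._m[k] = v

def fingerprint(scan):
    return hashlib.sha256(scan).hexdigest()      # topological hash stand-in

def run_mission(platform, memory, policy, n_steps=3):
    """observe -> fingerprint -> recall -> policy -> execute -> record.
    The same loop runs whether `platform` wraps a CARLA client or a
    Roomba REST endpoint (assumed interface)."""
    for _ in range(n_steps):
        scan = platform.observe()
        key = fingerprint(scan)
        prior = memory.recall(key)
        decision = policy(scan, prior)
        platform.execute(decision)
        platform.record(key, decision)
        memory.remember(key, decision)

class FakePlatform:  # stand-in for either real platform wrapper
    def __init__(self): self.log = []
    def observe(self): return b"same-kitchen-frame"
    def execute(self, decision): pass
    def record(self, key, decision): self.log.append((key, decision))

p, m = FakePlatform(), SceneMemory()
run_mission(p, m, policy=lambda scan, prior: "Proceed" if prior else "ProceedCautious")
print([d for _, d in p.log])  # ['ProceedCautious', 'Proceed', 'Proceed']
```

The first frame is cautious because the scene is unknown; every revisit hits the fingerprint cache and proceeds — the same behavior on either platform, which is the portability argument in miniature.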
HONEST ASSESSMENT
Where We're Different. Where Others May Still Lead.
Apex17's edge is not "it beats everything at everything." It is a focused architectural advantage in specific capabilities.
Apex17's Edge
Reflex speed — 45ms perception-to-decision
Structural reasoning — H₀ topological identity
Compact memory — O(1) hash recall, no map DB
Director integration — topology modulates confidence
Interpretability — every override has a named reason
Where Competitors May Be Stronger
Full SLAM maturity — decades of research, battle-tested
Large ecosystem/tooling — ROS2 integration breadth
Global path planning — complete planning stack depth
Semantic scene understanding — deep-learning classifiers
Production polish — UI/dashboard/deployment tooling
CROSS-DOMAIN
Same Math. Different Domains.
The topological identity engine is domain-agnostic. The same H₀ persistent homology that detects market regime transitions also detects spatial structure changes.
🤖 Robotics
LiDAR point cloud → persistence stability. Low stability = unstable room → Director Governor veto. O(1) SceneMemory prevents re-solving known environments.
35ms CUDA · 28 Hz
📈 Markets
Price series → persistence stability. Low stability = regime transition → Director confidence cut. O(1) hash recall identifies previously seen market structures.
0.16ms CPU · sub-1ms
See it live.
Full spatial perception demo with real-time council deliberation and topological radar.