KERNEL ONLINE ·
APEX17 SPATIAL PERCEPTION

Perception → Decision
in 45ms.

The same topological identity engine that powers market regime detection — ported to spatial autonomy. 6 CUDA kernels. H₀ persistent homology. O(1) scene recall. No LLM in the loop.

DATA PIPELINE

The Perception Path

From raw LiDAR point cloud to motion intent in a single deterministic pass. Every stage is a policy syscall with bounded latency.

01 — Perception

LiDAR → Voxel Grid

1M-point cloud quantized into CUDA voxel grid. Traversability, occupancy gradient, and motion pressure computed in parallel.

8ms
02 — USI Topology

H₀ Persistence

RMQ sparse table detects valleys. Chain-graph persistent homology computes stability, entropy, and topological fingerprint hash.

35ms
03 — Motion Intent

Council Synthesis

4-agent deterministic council (Solidity, Flow, Topology, Risk) votes on mode + velocity. Director synthesizes consensus.

<1ms
04 — Safety Policy

Governor Veto

Director Governor can hard-stop the robot if topological stability collapses. O(1) SceneMemory skips re-solving known rooms.

<1ms
BENCHMARK

Production Numbers

Verified on RTX 6000 Blackwell with 1M-point synthetic warehouse scans. Every benchmark is reproducible.

45ms
End-to-End Latency
LiDAR → Decision
6
CUDA Kernels
Parallel GPU streams
O(1)
Scene Recall
Hash-indexed memory
28 Hz
Frame Rate
Real-time perception
COMPONENTS

What We Built

Six CUDA kernels, a topological identity engine, and a deterministic safety governor — all patent-protected.

CUDA Compute

6 GPU Kernels

Massively parallel processing on RTX 6000 Blackwell:

  • Voxel Grid — point cloud quantization
  • Traversability — solidity index computation
  • Scene Metrics — occupancy, motion pressure
  • RMQ — O(1) range-min valley detection
  • Persistence — H₀ chain-graph homology
  • Cluster Analysis — obstacle segmentation
TOPOLOGICAL IDENTITY

H₀ Persistent Homology

Same mathematics as the market regime detector, applied to 3D space:

  • Persistence Stability — how "clean" is this room?
  • Persistence Entropy — structural complexity
  • Fingerprint Hash — 16×16 image → SHA-256
  • O(1) Recall — "I've seen this room before"
COUNCIL ARCHITECTURE

4-Agent Deterministic Council

No LLM. No probabilistic reasoning. Pure policy compilation:

  • SolidityAgent — traversability assessment
  • FlowAgent — dynamic obstacle tracking
  • TopologyAgent — persistence-derived safety
  • RiskAgent — sensor health + degradation
SAFETY LAYER

Director Governor Veto

Deterministic safety override — no "hallucinated navigation":

  • Stability < 0.25 → force ProceedCautious
  • Stability < 0.15 → hard stop
  • Any agent veto → override all others
  • SceneMemory → skip compute for known rooms
LIVE DASHBOARD

Spatial Autonomy Dashboard

Real-time visualization of perception → council → decision. Three demo scenarios: nominal navigation, fog occlusion, and degraded sensor with safety veto.

Apex17 Spatial Autonomy Dashboard — Warehouse Nominal scenario

Warehouse Nominal

Open space, good visibility. All 4 agents vote Proceed. Topology confidence 0.75, valley score 0.70. Director synthesizes 83% velocity at 71% confidence.

Apex17 Safety Veto — Degraded Sensor scenario

Safety Veto — RiskAgent Override

Sensor degradation detected. RiskAgent vetoes at 91% confidence, forcing hard stop. Velocity → 0%. This is the "hallucination prevention" layer — the robot cannot be convinced to move through unstable topology.

LIVE OUTPUT

What the Engine Actually Prints

Raw output from a LiDAR spatial perception pipeline run. Every number is deterministic — run it yourself and get the same result.

[0.0ms] LIDAR SpatialDataView received: 32,768 points · 64 channels · FoV=360°
[2.1ms] CUDA VoxelGrid: 32,768 pts → 4,096 occupied voxels · resolution=0.1m
[4.3ms] CUDA GroundPlane: RANSAC 500 iters → plane fit · 1,247 ground pts removed
[7.8ms] CUDA ClusterExtract: DBSCAN ε=0.3m12 clusters · min_pts=10
[12.1ms] TOPO H₀ persistence: 12 components · max_persistence=2.84
[14.5ms] TOPO Stability=0.720 · entropy=3.12 · anomaly=0.280
[15.0ms] TOPO SceneFingerprint: hash=0xC4F7A2091EBB38D6 · deterministic
[15.2ms] TOPO SceneMemory O(1) lookup: MATCH — known room (kitchen, seen 47×)
[18.0ms] INTENT MotionIntentClassifier: velocity=0.3m/s · heading=045° · Navigate
[22.0ms] COUNCIL SpatialCouncil: 4 agents · independent CUDA streams
[24.0ms] COUNCIL NavigationAgent=GO · ObstacleAgent=CLEAR · RiskAgent=SAFE · PlannerAgent=OPTIMAL
[26.0ms] COUNCIL Consensus: 4/4NAVIGATE · confidence=0.92
[30.0ms] GOVERNOR DirectorGovernor: stability=0.7200.30APPROVED
[32.0ms] POLICY SafetyPolicy: velocity=0.3m/s1.0m/sPASS
[35.0ms] ✓ PIPELINE COMPLETE — total latency 35ms · 28 Hz perception rate

python proof-artifacts/benchmarks/run_spatial_proof.py — same output every run

"Apex17 does not just detect space — it recognizes the structure of space and feeds that directly into autonomous decision-making."
THE APEX17 THESIS
DATA FLOW

A Different Kind of Pipeline

Competitors usually give either fast local perception or heavy map-based memory. Apex17 gives fast local spatial decisions plus topological identity and O(1) structural recall in the same engine.

CONVENTIONAL ROBOTICS

Sense → Represent → Plan

Sensor Preprocessing Perception Costmap Planner Controller
Perception produces intermediate artifacts. Planning consumes them later. Memory is a separate subsystem. Topology is implicit, never first-class.
END-TO-END LEARNED

Sensor → Neural Model → Action

Sensor Neural Model Action / Policy
Compact, but less interpretable. Harder to govern. Harder to attach explicit structural memory. Weak at exposing clean veto/guard signals.
SLAM-HEAVY

Localize → Map → Plan

Sensor Localization Map Update Loop Closure Planning
Strong for mapping and revisits, but heavier. Slower to turn into direct reflex semantics. Coordinate-centric rather than structure-centric.
APEX17

Sense → Interpret → Identify → Remember → Govern

Geometry Spatial Prior Regime Semantics Topo Identity O(1) Recall Director Control
Data doesn't just become a map — it becomes reflex semantics, compact topological memory, a reusable identity hash, and explicit confidence modifiers for a higher-level agent.
OUTPUT SIGNAL

What the System Can Tell You

A typical stack tells you where obstacles are and what path is open. Apex17 tells you:

Is motion
safe?
How
fast?
What structural
regime is this?
How stable /
complex?
Have I seen
this before?

That is a much richer downstream signal from the same raw sensor input.

DIFFERENTIATORS

Five Architecture-Level Differences

These are not incremental improvements. They are structural design choices that change what the system can do.

01

Perception → Decision Semantics Early

Competitors pass along bulky intermediates: voxels, costmaps, occupancy grids, embeddings. Apex17 converts much earlier into regime, decision, velocity, and confidence-relevant topology signals.

This reduces the distance between sensing and action. The data doesn't stop at "here is a point cloud" — it keeps compressing until it reaches actionable semantics.
02

Topology Before Long-Horizon Cognition

In most systems, memory comes from map databases, SLAM graphs, or learned latent state. In Apex17, memory comes from persistence pairs, digest metrics, and topological hash with O(1) recall.

The system remembers structure, not just position. This makes recall independent of coordinate frame or sensor calibration.
03

Clean Fast-Path / Deep-Path Split

Fast path: point cloud → CUDA spatial reflex → decision. Deep path: same scene → PH / topology → digest/hash → Director veto / recall.

Many competitors either make everything heavy or keep everything shallow. Apex17 has both lanes in a single engine, running in parallel.
04

Compute-Aware Data Flow

GPU for data-parallel spatial stages. CPU for inherently serial filtration sweep. O(1) lookup for recall. The data flow is also a compute-conscious flow.

Competitors often force one compute model across everything. Apex17 uses the right substrate at each step — CUDA parallelism where it scales, serial where it must, hash tables where O(1) matters.
05

Interpretable Governance Signals

Topology affects the Director through explicit mechanisms: low stability → confidence cut. High entropy → confidence cut. Veto reasons logged in reasoning strings.

Competitors bury this inside planner tuning, learned uncertainty, and hidden heuristics. Apex17's flow makes the modulation legible — every safety override has a named reason and a numerical threshold.
CARLA BENCHMARK — TOWN05

Three Agents. Same Route.
Only the Brain Differs.

Head-to-head evaluation on CARLA Town05 autonomous driving benchmark. APEX17 policy bridge vs. a stateless reactive agent vs. a hand-tuned finite-state machine. All data from versioned scorecard.json files.

Metric APEX17 Reactive Rule-FSM
A-TIER Safety Score 1.00 0.30 1.00
A-TIER Route Completion 100% 69.0% 100%
A-TIER Infractions 0 1 0
A-TIER Efficiency 0.44 0.25 0.61
C-TIER Recovery Score 1.00 1.00 1.00
C-TIER Memory Score 1.00 1.00 1.00
C-TIER Emergency Rate 1.1%
C-TIER Avg Decision 1.32 µs 0.60 µs 0.59 µs
# carla/experiment_data/carla_apex17_A_20260308_200249/scorecard.json
{ "agent_name": "apex17", "tier": "A" }
"safety_score": 1.0
"completion_score": 1.0
"infractions": 0
"recovery_score": 1.0
"route_completion_pct": 100.0
"avg_decision_us": 1.243
# reactive agent on the same route:
"safety_score": 0.3 ← 1 infraction
"route_completion_pct": 68.95 ← failed to finish

Source: src/apex17-robotics/carla/experiment_data/ · Reproducible: python -m pytest carla/tests/test_carla.py

HARDWARE VALIDATION — iROBOT ROOMBA

Not Simulation.
Real Hardware. Real Decisions.

APEX17 policy bridge connected to a live iRobot Roomba via REST API. Same observe → fingerprint → recall → policy → execute → record loop. 25+ experiment runs, zero safety violations.

✓ PROVEN
0
Safety Violations
25+ runs, zero incidents
✓ PROVEN
6 µs
Decision Latency
Avg from hardware telemetry
✓ PROVEN
5.4 µs
O(1) Scene Recall
Fingerprint hash lookup
85.5 s
Mission Duration
Live kitchen navigation
DECISION DISTRIBUTION — 2,000 CARLA FRAMES
Stop
571
Cautious
466
Nominal
408
Yield
353
Sensor Fallback
180
Emergency
22

Only 1.1% emergency brakes across 2,000 decision frames. The engine is conservative (28.5% stop, 23.3% cautious) — it defaults to safety.

# roomba/experiment_data/apex17_roomba_20260308_141259_6c9784/summary.json
{ "experiment_id": "apex17_roomba_20260308_141259_6c9784" }
"mode": "hardware"
"total_decisions": 15
"total_safety_events": 0
"safety_violations": 0
"zero_safety_violations": true
"decision_latency_avg_ms": 0.006
"recall_latency_avg_us": 5.4
"notes": "Hardware proof run 3: APEX17 live telemetry + adaptive decisions"
EVIDENCE MATRIX

Same Engine. Two Platforms.
Consistent Results.

The APEX17 policy bridge is platform-agnostic. The same observe → fingerprint → recall → policy → execute → record chain runs identically on a driving simulator and a cleaning robot.

CARLA — AUTONOMOUS DRIVING

Simulation Validation

Decision Latency 1.24 µs
Safety Score (A-Tier) ✓ 1.00
Route Completion ✓ 100%
Fault Recovery ✓ 5/5
Memory / Recall ✓ O(1)
Infractions ✓ 0
iROBOT ROOMBA — HARDWARE

Physical Validation

Decision Latency 6.0 µs
Safety Violations ✓ 0
Experiment Runs ✓ 25+
Scene Recall ✓ 5.4 µs
Memory / Recall ✓ O(1)
Live Telemetry ✓ REAL-TIME
# Both platforms use the same decision chain:
observefingerprintrecallpolicyexecuterecord
# CARLA bridge: apex_carla_bridge.py (471 lines)
# Roomba bridge: apex_roomba_bridge.py (593 lines)
# Pattern identical. Platform adapters differ.

All experiment data is versioned and reproducible. Source: src/apex17-robotics/

HONEST ASSESSMENT

Where We're Different. Where Others May Still Lead.

Apex17's edge is not "it beats everything at everything." It is a focused architectural advantage in specific capabilities.

Apex17's Edge

  • Reflex speed — 45ms perception-to-decision
  • Structural reasoning — H₀ topological identity
  • Compact memory — O(1) hash recall, no map DB
  • Director integration — topology modulates confidence
  • Interpretability — every override has a named reason

Where Competitors May Be Stronger

  • Full SLAM maturity — decades of research, battle-tested
  • Large ecosystem/tooling — ROS2 integration breadth
  • Global path planning — complete planning stack depth
  • Semantic scene understanding — deep-learning classifiers
  • Production polish — UI/dashboard/deployment tooling
CROSS-DOMAIN

Same Math. Different Domains.

The topological identity engine is domain-agnostic. The same H₀ persistent homology that detects market regime transitions also detects spatial structure changes.

🤖 Robotics

LiDAR point cloud → persistence stability. Low stability = unstable room → Director Governor veto. O(1) SceneMemory prevents re-solving known environments.

35ms CUDA · 28 Hz

📈 Markets

Price series → persistence stability. Low stability = regime transition → Director confidence cut. O(1) hash recall identifies previously seen market structures.

0.16ms CPU · sub-1ms

See it live.

Full spatial perception demo with real-time council deliberation and topological radar.

Request Investor Demo →

View proof-artifacts on GitHub

15 Python tests · 57 C++ tests · JSON reports · CI-ready exit codes
python proof-artifacts/benchmarks/run_robotics_proof.py