The same topological identity engine that powers robotics and market regime detection — applied to clinical data. CT, ECG, vitals. Modality-agnostic. Auditable. Decision-support, not diagnosis.
From raw sensor data to decision-support signal in a single deterministic pass. Every number traces to a real test result.
CT voxel grid, ECG waveform, or vitals stream ingested via ClinicalDataView. Buffer consistency enforced.
SegmentVoxelGrid extracts tissue points. ProcessECGWaveform detects R-peaks. OwnedVitalsBuffer tracks trajectory.
H₀ persistent homology extracts structural features. Stability, entropy, anomaly score. Deterministic fingerprint hash.
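As a sketch of the underlying idea (illustrative only, not the Apex17 source): H₀ persistence of a point set reduces to single-linkage merging, where each union-find merge records the death of one connected component, and persistent entropy summarizes how those deaths are distributed. The function names here are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Illustrative H0 persistence via Kruskal-style union-find: every merge
// kills one connected component; the edge length is its death time
// (all H0 components are born at filtration value 0).
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        parent[a] = b;
        return true;
    }
};

// H0 persistence values (death times) for 1-D samples.
std::vector<double> h0_persistence(const std::vector<double>& xs) {
    struct Edge { double w; int a, b; };
    std::vector<Edge> edges;
    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = i + 1; j < xs.size(); ++j)
            edges.push_back({std::fabs(xs[i] - xs[j]), (int)i, (int)j});
    std::sort(edges.begin(), edges.end(),
              [](const Edge& l, const Edge& r) { return l.w < r.w; });
    UnionFind uf((int)xs.size());
    std::vector<double> deaths;
    for (const Edge& e : edges)
        if (uf.unite(e.a, e.b)) deaths.push_back(e.w);  // one death per merge
    return deaths;
}

// Persistent entropy: low entropy = a few dominant components (stable structure).
double persistent_entropy(const std::vector<double>& deaths) {
    double total = std::accumulate(deaths.begin(), deaths.end(), 0.0);
    double h = 0.0;
    for (double d : deaths) {
        double p = d / total;
        if (p > 0) h -= p * std::log(p);
    }
    return h;
}
```

The same merge loop applies unchanged whether the samples came from a voxel grid, a waveform, or a vitals trajectory — which is why the engine can be modality-agnostic.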
3-agent modality-gated council. ImagingAgent, VitalsAgent, LabsAgent. Support-weighted consensus. Only relevant agents vote.
5-level acuity scoring. Regime classification. Audit trail with fingerprint, vote log, and reasoning strings. FDA-auditable output.
Every metric comes from the real test suite. C++: 10/10 tests passing. Python: 6/6 tests passing. Reproducible.
ClinicalPrior accepts any modality through a common ClinicalDataView interface. Validation is modality-specific. Processing is agnostic.
Three hardened headers. Six rounds of expert code review. Every lifetime, overflow, and validation edge case addressed.
Core engine scaffold with modality-specific validation:
Safe data conversion with ownership-aware lifetime contract:
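One way such a contract can be expressed (an illustrative sketch, not the actual `OwnedVitalsBuffer` interface): the owning buffer deep-copies at construction, so it outlives the source; views never do, and move-only semantics keep ownership unambiguous.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical owning buffer: copies at construction, so it stays valid
// even if the source data is freed immediately afterwards.
class OwnedVitalsBuffer {
public:
    explicit OwnedVitalsBuffer(const double* src, std::size_t n)
        : data_(src, src + n) {}

    // Move-only: exactly one owner at a time, no accidental shared lifetime.
    OwnedVitalsBuffer(const OwnedVitalsBuffer&) = delete;
    OwnedVitalsBuffer& operator=(const OwnedVitalsBuffer&) = delete;
    OwnedVitalsBuffer(OwnedVitalsBuffer&&) noexcept = default;
    OwnedVitalsBuffer& operator=(OwnedVitalsBuffer&&) noexcept = default;

    std::size_t size() const { return data_.size(); }
    double operator[](std::size_t i) const { return data_[i]; }

private:
    std::vector<double> data_;
};

// Conversion helper: the caller's vector may be destroyed after this returns;
// the owned buffer may not. That asymmetry is the lifetime contract.
inline OwnedVitalsBuffer take_ownership(const std::vector<double>& transient) {
    return OwnedVitalsBuffer(transient.data(), transient.size());
}
```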
3-agent modality-gated consensus system:
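A compact sketch of how such gating can work (the agent names come from this page; the gating table, support weights, and `council_consensus` function are assumptions for illustration). Agents irrelevant to the input modality abstain rather than casting a vote, and consensus is the support-weighted mean of the votes actually cast.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical modality-gated council: only relevant agents vote.
enum class Modality { CT, ECG, Vitals };

struct Vote { std::string agent; double anomaly; double support; };

struct Agent {
    std::string name;
    std::vector<Modality> handles;  // modalities this agent may vote on

    bool relevant(Modality m) const {
        for (Modality h : handles) if (h == m) return true;
        return false;
    }
};

// Support-weighted consensus over the votes cast; abstainers contribute nothing.
double council_consensus(Modality input, double anomaly,
                         std::vector<Vote>* log = nullptr) {
    const std::vector<Agent> council = {
        {"ImagingAgent", {Modality::CT}},
        {"VitalsAgent",  {Modality::ECG, Modality::Vitals}},
        {"LabsAgent",    {}},  // no sensor modality wired up here: always abstains
    };
    double weighted = 0.0, support = 0.0;
    for (const Agent& a : council) {
        if (!a.relevant(input)) continue;  // gate: irrelevant agents abstain
        Vote v{a.name, anomaly, 1.0};      // toy support weight
        weighted += v.anomaly * v.support;
        support  += v.support;
        if (log) log->push_back(v);
    }
    return support > 0.0 ? weighted / support : 0.0;
}
```

Because abstentions simply never enter the sum, a CT input cannot be diluted by "phantom votes" from the vitals or labs specialists.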
Composite scoring from topology metrics:
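A toy version of that composition (the actual weights and cut points are not published on this page; everything numeric below is an assumption). Anomaly and entropy push risk up, stability pulls it down, and the composite score buckets into the five acuity levels.

```cpp
#include <algorithm>
#include <cassert>

// Topology metrics, each normalized to [0,1] for this sketch.
struct TopologyMetrics {
    double stability;  // high = stable structure
    double entropy;    // high = disordered persistence diagram
    double anomaly;    // high = unusual structure
};

// Composite risk score; the weights are illustrative assumptions.
inline double composite_score(const TopologyMetrics& m) {
    double raw = 0.5 * m.anomaly + 0.3 * m.entropy + 0.2 * (1.0 - m.stability);
    return std::clamp(raw, 0.0, 1.0);
}

// Map the composite score onto acuity levels 1 (routine) .. 5 (critical).
inline int acuity_level(double score) {
    if (score < 0.2) return 1;
    if (score < 0.4) return 2;
    if (score < 0.6) return 3;
    if (score < 0.8) return 4;
    return 5;
}
```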
"The clinical engine does not diagnose — it detects structural anomalies in sensor data and converts them into auditable decision-support signals for human clinicians."
Every claim on this page is backed by reproducible test output. Run it yourself.
Raw output from a CT scan pipeline run. Every number is deterministic — run it yourself and get the same result.
python proof-artifacts/benchmarks/run_clinical_proof.py — same output every run
How Apex17 Clinical compares to existing approaches for real-time sensor-driven clinical intelligence.
Most clinical alerting is threshold-based. Most AI is black-box. This engine sits in between — structural, deterministic, interpretable.
These are not incremental feature additions. They are structural design choices that change what the system can prove.
The same H₀ persistence engine processes CT point clouds, ECG time series, and vitals trajectories. No modality-specific neural networks required.
PathologyFingerprint produces a 64-bit hash from topology features. Same input → same hash, always. Similarity uses std::popcount for portable Hamming distance.
Agents only vote when relevant to the input modality. CT input activates ImagingAgent; VitalsAgent and LabsAgent abstain. No "phantom votes" from irrelevant specialists.
Every decision carries: fingerprint hash → council votes → reasoning strings → regime classification → acuity level. All timestamped, all deterministic.
The engine explicitly frames outputs as signals for clinician review. "Specialist review recommended" — never "patient has condition X." This is a design choice, not a limitation.
This scaffold demonstrates that the architecture works. Production healthcare deployment requires significantly more.
The topological identity engine is domain-agnostic. H₀ persistent homology works on any structured data — spatial, temporal, or clinical.
LiDAR point cloud → persistence stability. Low stability = unstable room → Director Governor veto. O(1) SceneMemory prevents re-solving known environments.
Price series → persistence stability. Low stability = regime transition → Director confidence cut. O(1) hash recall identifies previously seen market structures.
CT/ECG/Vitals → persistence topology → clinical council → acuity signal. Modality-gated agents, support-weighted consensus, FDA-auditable output chain.
Run the clinical pipeline in your browser. Real algorithms, real timing, real proof.
Try Clinical Demo →
10 C++ tests · 6 Python tests · JSON reports · CI-ready exit codes
python proof-artifacts/benchmarks/run_clinical_proof.py