Early Risk Alert AI
Model Card — Public


Explainable Clinical Command Center — Rules-Based Prioritization Engine

This platform does not replace clinician judgment and is not intended to diagnose, direct treatment, or independently trigger escalation. All outputs require independent review by an authorized health care professional.
Decision Support Only · HCP-Facing · Rules-Based Engine · Explainable · Pilot Phase · No FDA Clearance

What this system is — and is not

Early Risk Alert AI is a rules-based threshold and trend prioritization engine — not a machine learning model, not a neural network, and not an autonomous clinical decision system. It applies transparent, configurable logic to structured vital-sign data to surface patients whose monitored context suggests they may warrant further clinical review.

There is no black box. Every review score is the direct product of the signal weights and threshold comparisons described in this card. A clinician can independently verify why any patient was flagged by reviewing the explainability panel, which shows every contributing factor, delta trend context, confidence level, and visible limitation.

Algorithm — how the review score is calculated

The review score (0.0 – 9.9) is computed from six monitored vital signals using the following additive rules-based formula. No training data, no weights from gradient descent, no black-box inference.

SpO₂ deficit × 0.75
Temp elevation × 0.70
RR elevation × 0.12
Diastolic BP × 0.03
Heart rate × 0.035
Systolic BP × 0.02

Trend modifier: deteriorating +1.2, watch +0.6, recovering −0.4. Score is clamped to 0.8 – 9.9. Review priority: Critical ≥ 8.5, High ≥ 6.2, Stable < 6.2.
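The additive formula above can be sketched in a few lines. This is an illustrative rendering of the published weights, trend modifiers, clamp, and priority cutoffs — the function names and input shape are hypothetical, not the platform's actual API, and each signal value is assumed to be a pre-computed, non-negative deficit/elevation magnitude.

```python
# Illustrative sketch of the rules-based review score described in this card.
# Function names and the input format are hypothetical; weights, trend
# modifiers, clamp range, and priority cutoffs are as documented above.

TREND_MODIFIER = {"deteriorating": 1.2, "watch": 0.6, "recovering": -0.4}

WEIGHTS = {
    "spo2_deficit": 0.75,
    "temp_elevation": 0.70,
    "rr_elevation": 0.12,
    "diastolic_bp": 0.03,
    "heart_rate": 0.035,
    "systolic_bp": 0.02,
}

def review_score(signals: dict, trend: str) -> float:
    """Weighted sum of the six monitored signals plus the trend modifier,
    clamped to the documented 0.8-9.9 range."""
    raw = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    raw += TREND_MODIFIER.get(trend, 0.0)
    return min(9.9, max(0.8, round(raw, 1)))

def priority(score: float) -> str:
    """Review priority: Critical >= 8.5, High >= 6.2, otherwise Stable."""
    if score >= 8.5:
        return "Critical"
    if score >= 6.2:
        return "High"
    return "Stable"
```

Because the formula is a plain weighted sum, a clinician can reproduce any score by hand from the explainability panel's per-signal factors.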

Performance — retrospective validation

Retrospective validation — April 2026 (synthetic dataset). Results below are from a 10,000-patient synthetic dataset (260,765 readings) engineered with clinically grounded deterioration trajectories (sepsis, respiratory failure, cardiac decompensation, hypertensive crisis), validated across 500, 1,000, 2,000, 5,000, and 10,000 patient cohorts. MIMIC-IV real de-identified ICU data validation is planned for Q2 2026, subject to data-access approval and completion of the evaluation; results are intended to be published publicly upon completion. Prospective clinical validation has not yet been completed. Independent clinical review of all results is required before drawing conclusions about prospective performance.
ERA Sensitivity (t=6.0) — 18.8–19.3%
Clinical events flagged at t=6.0 across the 2,000–10,000 patient datasets — an intentional trade-off for lower false positives. A threshold of 4.0 yields 33–35% sensitivity for ICU use.

False Positive Rate (t=6.0) — 4.2–4.5%
ERA false positive rate vs 27–28% for standard threshold alerting — a 22–24 percentage point reduction in unnecessary interruptions across all tested datasets.

Alert Reduction (t=6.0) — 83–84%
Reduction in alert volume vs standard threshold alerting, validated across five synthetic cohort sizes — 500, 1,000, 2,000, 5,000, and 10,000 patients (12,873–260,765 rows). Across all cohorts: ERA sensitivity 18.8–23.4% at t=6.0, false positive rate 4.2–5.1% vs 26.9–28.5% for standard thresholds, alert reduction 81.9–84.2%. Results are most consistent at 2,000–10,000 patients: 19–19.3% sensitivity, 4.2–4.5% FPR, 83.3–84.2% alert reduction.

Why is ERA sensitivity lower? The ERA rules-based logic intentionally trades some sensitivity for dramatically lower false positives (6.2% vs 20.4%), resulting in 71.6% fewer unnecessary alerts while still surfacing key deterioration patterns in the critical 6-hour pre-event window (10,000-patient synthetic retrospective validation, April 2026). In clinical settings where alarm fatigue is a primary safety risk, reducing false positives is often more impactful than maximizing raw sensitivity.

Threshold        ERA Sensitivity   False Positive Rate   Alert Reduction   Best For
4.0              33–36%            8.0–8.5%              69–70%            ICU / high-acuity
5.0              24–26%            5.7–6.0%              77–79%            Mixed units — balanced
6.0 ★ default    18.8–19.3%        4.2–4.5%              83.3–84.2%        Telemetry / alarm fatigue reduction
Standard only    72–75%            27–29%                —                 Baseline — no ERA

Threshold is configurable per unit in the command center. Primary benchmark: 10,000-patient synthetic dataset · 260,765 rows · 54,161 events · April 2026. Values are shown as confirmed ranges across five validated cohort sizes (500–10,000 patients, 12,873–260,765 rows); the largest validated dataset comprises 10,000 patients, 260,765 rows, and 3,150 clinical events. MIMIC-IV real de-identified ICU data validation is planned for Q2 2026, subject to data-access approval and completion of the evaluation, with results intended to be published publicly upon completion. Threshold selection should be calibrated to your unit's acuity level and alarm fatigue tolerance.
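Per-unit threshold configuration could be as simple as the mapping below — a hypothetical sketch mirroring the table above, not the command center's actual configuration schema; unit names and the helper function are illustrative only.

```python
# Hypothetical per-unit alert-threshold configuration mirroring the table
# above; the schema, unit names, and function are illustrative, not the
# platform's actual API.
UNIT_THRESHOLDS = {
    "ICU": 4.0,        # high-acuity: favor sensitivity
    "Mixed": 5.0,      # balanced units
    "Telemetry": 6.0,  # default: favor alarm-fatigue reduction
}

def should_alert(score: float, unit: str, default: float = 6.0) -> bool:
    """A reading surfaces for review when its score meets the unit's threshold."""
    return score >= UNIT_THRESHOLDS.get(unit, default)
```

The same score can therefore alert in an ICU but stay below threshold on a telemetry floor, which is the intended per-unit calibration behavior.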


Intended use

Early Risk Alert AI is an HCP-facing decision-support and workflow-support software platform intended to assist authorized health care professionals in identifying patients who may warrant further clinical evaluation, supporting patient prioritization, and improving command-center operational awareness.

It does not replace clinician judgment and is not intended to diagnose, direct treatment, or independently trigger escalation.

Intended users — Authorized health care professionals (physicians, nurses, clinical operations staff) in hospital and health system settings
Care settings — ICU, Telemetry, Stepdown, Ward, Remote Patient Monitoring programs
Not intended for — Autonomous escalation, diagnosis, treatment direction, or use by non-clinical personnel without oversight

Inputs and outputs

Supported inputs

Heart rate — bpm, numeric
SpO₂ — %, numeric
Blood pressure — mmHg, systolic / diastolic
Respiratory rate — breaths/min, numeric
Temperature — °F, numeric
Trend direction — deteriorating / watch / recovering

Supported outputs

Review score — 0.0–9.9 (rules-based, fully explainable)
Priority status — Critical / High / Stable
Contributing factors — Per-signal explainability
Delta trend context — Change from last observation
Workflow note — Supportive review context only
Alert notification — Email / SMS to authorized personnel
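Taken together, the outputs above could surface as one structured record per patient — a hypothetical sketch; field names and values are illustrative only, not the platform's actual output schema.

```python
# Hypothetical example of a single patient output record combining the
# documented outputs; field names and values are illustrative only.
example_output = {
    "review_score": 8.7,             # 0.0-9.9, rules-based
    "priority": "Critical",          # Critical / High / Stable
    "contributing_factors": {        # per-signal explainability
        "spo2_deficit": 6.0,
        "temp_elevation": 2.1,
    },
    "delta_trend": "deteriorating",  # change from last observation
    "workflow_note": "Supportive review context only",
    "notifications": ["email", "sms"],  # authorized personnel only
}
```

Every field maps one-to-one onto an output listed above, so a reviewing clinician sees the full basis for the score, never a bare number.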

Limitations and known gaps

Synthetic validation — 10,000-patient synthetic dataset (260,765 readings) engineered with clinically grounded deterioration trajectories (sepsis, respiratory failure, cardiac decompensation, hypertensive crisis), April 2026. Results: 38.3% patient detection in the 6-hr pre-event window · 71.6% alert reduction · 6.2% ERA FPR vs 20.4% standard threshold alerting · 14.6% reading sensitivity at t=6.0. At t=4.0 (ICU): 61.4% patient detection / 9.6% FPR. At t=5.0 (mixed): 48.1% patient detection / 7.8% FPR. Validated consistently across 500, 1,000, 2,000, 5,000, and 10,000 patient cohorts. MIMIC-IV real de-identified ICU data validation is planned for Q2 2026, subject to data-access approval and completion of the evaluation; results are intended to be published publicly upon completion. Prospective clinical validation has not yet been completed.
Rules-based only — Current engine uses additive threshold rules, not machine learning. No training dataset, no AUC, no sensitivity/specificity from a held-out test set yet.
No EHR integration — Current deployment uses structured CSV input and simulated vitals. Live EHR integration via FHIR R4 and HL7 is on the product roadmap; the current pilot entry point is retrospective validation via de-identified CSV, which requires no EHR integration and can begin within days of data availability.
Simulated demo environment — The public demo runs on simulated patient data. No real patient data is used in the demonstration environment.
Incomplete or delayed data — Outputs may be affected by missing, delayed, or erroneous vital sign inputs. The platform does not validate source data quality.
Population generalizability — Signal weights have not been validated across diverse patient populations, acuity levels, or care settings. Local validation is strongly recommended.
Alert fatigue risk — If thresholds are set too low for a given unit, alert volume may increase rather than decrease. Configurable thresholds and local calibration are recommended.
No FDA clearance — The platform does not have FDA clearance or approval. It is positioned for controlled pilot evaluation as decision-support software.

Governance and oversight

Human oversight required — All outputs require independent review by an authorized HCP. The platform does not act autonomously.
Explainability — Every output displays contributing factors, signal weights, confidence level, data freshness, and limitations.
Audit trail — All workflow actions (ACK, Assign, Escalate, Resolve) are logged with role, timestamp, and unit. Persistent across restarts.
Claims control — Approved and banned claims enforced across all platform materials and communications.
Change control — All releases documented in the change approval log. No material changes to clinical output logic without notification.
Regulatory status — No FDA clearance or approval claimed. Positioned as decision-support software for controlled pilot evaluation.
BAA availability — The company is prepared to execute a Business Associate Agreement for any engagement involving identifiable patient data. Phase 1 retrospective validation is conducted on de-identified data only.
MFA implementation — Phishing-resistant MFA implemented across all core administrative systems (business email, source control, hosting, password management, and domain/DNS) as of April 10, 2026.
Research ethics training — CITI Program training completed April 10, 2026; Data or Specimens Only Research and Conflict of Interest certificates obtained prior to the MIMIC-IV data access application.
MIMIC-IV validation — PhysioNet MIMIC-IV data access application submitted April 10, 2026. Validation planned for Q2 2026, subject to data-access approval and completion of the evaluation. Results are intended to be published publicly upon completion.
Pilot onboarding — Hospital pilot onboarding checklist available at /pilot-onboarding; includes step-by-step CSV upload, governance docs, a 4–6 week pilot timeline, and proposed success metrics.
Platform version — stable-pilot-1.0.6-gov · April 10, 2026

Advisory structure

Milton Munroe — Founder & CEO: Product leadership, governance ownership, pilot operations
Uche Anosike — Technical Infrastructure & Security Advisor: Infrastructure, security posture, deployment readiness
Andrene Louison, RN — Clinical Advisor: Clinical workflow review, monitored-context guidance, retrospective validation support