Set alarm conditions on any streaming signal you care about, and Delphi watches for you — no pagers, no polling, no dashboards to babysit. When data trips a condition, you get a notification with the matched values, a sparkline or map for context, and a one-click path to the next action. For the judgment calls that thresholds can’t capture, agentic evaluation lets an AI agent weigh the signal against the rest of your data before it bothers you.
Configure an alert
Alerts attach to data connectors. Pick a streaming source — an earthquake feed, a river gauge, a Slack channel, a Salesforce pipeline — and describe the condition you want to monitor. Delphi handles the filter chain, deduplication, and wiring.
Just ask in chat:
Alert me when any earthquake in the Pacific Northwest registers M4.5 or higher. Show it as a map.
You can also set rolling-window conditions (“flag when average CPU exceeds 80% for 10 minutes”), compound conditions (“fire if inventory drops below reorder point AND supplier lead time is over 14 days”), or rate-limited ones (“at most one page per hour per host”). Each alert stores a friendly name, a description, and a visualization type — sparkline, map, markdown, or single value — that shapes how notifications render.
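As a rough sketch of what a rolling-window condition has to do under the hood (this is illustrative Python, not Delphi’s implementation; the class and method names are invented for the example), the evaluator keeps a time-bounded buffer of samples and only fires once the window is fully populated and the average crosses the threshold:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class RollingWindowCondition:
    """Fire when the average of a metric exceeds a threshold
    across an entire rolling window, e.g. CPU > 80% for 10 minutes."""
    threshold: float
    window_seconds: float
    samples: deque = field(default_factory=deque)  # (timestamp, value) pairs

    def observe(self, timestamp: float, value: float) -> bool:
        self.samples.append((timestamp, value))
        # Evict samples that have aged out of the window.
        while self.samples and self.samples[0][0] < timestamp - self.window_seconds:
            self.samples.popleft()
        # Don't fire until we've actually seen a full window of data.
        oldest = self.samples[0][0]
        if timestamp - oldest < self.window_seconds:
            return False
        avg = sum(v for _, v in self.samples) / len(self.samples)
        return avg > self.threshold
```

The key property is that a single spike never fires: the average has to stay elevated for the whole window before `observe` returns true.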
How alerts evaluate
Once provisioned, an alert reads directly from its connector’s stream. Every message flows through your filter chain; anything that passes is a match. Delphi automatically deduplicates on a semantic key you define (for example, an earthquake ID or a ticket number) so re-polled data doesn’t page you twice.
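Deduplication on a semantic key can be pictured like this (a minimal sketch, assuming the key is built from fields you name in the payload; the `Deduplicator` class is hypothetical, not a Delphi API):

```python
import hashlib

class Deduplicator:
    """Suppress repeat deliveries of the same logical event.

    key_fields names the payload fields that form the semantic key,
    e.g. ["earthquake_id"] or ["ticket_number"]. Re-polled copies of
    the same event hash to the same key and are dropped."""

    def __init__(self, key_fields: list[str]):
        self.key_fields = key_fields
        self.seen: set[str] = set()

    def is_new(self, payload: dict) -> bool:
        raw = "|".join(str(payload.get(f)) for f in self.key_fields)
        key = hashlib.sha256(raw.encode()).hexdigest()
        if key in self.seen:
            return False
        self.seen.add(key)
        return True
```

Note that only the key fields participate in the hash, so a re-polled event with a revised magnitude or updated description still counts as the same event and won’t page you twice.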
Each match updates the alert’s metrics — match count, last matched timestamp, last matched payload — and drives the alarm state machine. The alert transitions from clear to alarmed and a notification is created for the dashboard. The reverse transition happens automatically when the condition stops matching.
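The state machine described above can be sketched as follows (illustrative Python; the field names mirror the metrics listed in the text, but the class itself is an assumption, not Delphi’s actual code):

```python
from enum import Enum

class AlarmState(Enum):
    CLEAR = "clear"
    ALARMED = "alarmed"

class Alert:
    """Track match metrics and drive the clear <-> alarmed transitions."""

    def __init__(self):
        self.state = AlarmState.CLEAR
        self.match_count = 0
        self.last_matched_at = None
        self.last_payload = None

    def evaluate(self, matched: bool, timestamp: float, payload=None):
        """Return a notification dict on the clear -> alarmed edge, else None."""
        if matched:
            self.match_count += 1
            self.last_matched_at = timestamp
            self.last_payload = payload
            if self.state is AlarmState.CLEAR:
                self.state = AlarmState.ALARMED
                return {"at": timestamp, "payload": payload}
        elif self.state is AlarmState.ALARMED:
            # Condition stopped matching: transition back automatically.
            self.state = AlarmState.CLEAR
        return None
```

Because the notification is emitted only on the clear-to-alarmed edge, a condition that keeps matching updates the metrics but doesn’t generate a second notification.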
Agentic alert evaluation
Thresholds are great for unambiguous signals. For everything else, mark the alert as agentic and give it an evaluation prompt. When a candidate match comes through, Delphi hands the payload and your dashboard context to an AI evaluator, which returns one of three decisions: fire, suppress, or defer. Each decision comes with a confidence score and an explanation, so you can audit why the agent made the call.
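A plausible shape for the three-way verdict and its routing, sketched in Python (the types and the `handle` function are hypothetical scaffolding for the example, not Delphi’s interface):

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    FIRE = "fire"
    SUPPRESS = "suppress"
    DEFER = "defer"

@dataclass
class Evaluation:
    decision: Decision
    confidence: float  # 0.0 to 1.0
    explanation: str   # audit trail: why the agent made this call

def handle(evaluation: Evaluation, retry_queue: list, notify) -> None:
    """Route an evaluator verdict: fire notifies, defer re-queues the
    candidate for another look, suppress drops it silently."""
    if evaluation.decision is Decision.FIRE:
        notify(evaluation)
    elif evaluation.decision is Decision.DEFER:
        retry_queue.append(evaluation)
    # SUPPRESS: nothing is sent, but the explanation remains auditable.
```

The explanation travels with the decision even when nothing fires, which is what makes suppressed calls reviewable after the fact.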
Encode domain knowledge in the prompt — what counts as normal, what context matters, which seasonal patterns to discount. This is how you kill alert fatigue without missing the one signal that actually matters.
Create an agentic alert on the Colorado River streamflow connector. Fire when flow drops more than 20%, but suppress if the drop is consistent with seasonal snowpack patterns. Reference the snowpack dataset for context.
The evaluator reads the datasets and sample feeds you reference, reasons across them, and only fires when the drop is genuinely anomalous.
Notifications
Every fired alert writes a notification to the dashboard’s notifications feed. Notifications carry the matched data, the rendered visualization, a human-readable summary, and — for agentic alerts — the evaluator’s explanation and confidence. They show up in the dashboard’s Alerts tab and on the metrics view, and they’re the anchor point for follow-up: acknowledge them, annotate them, or escalate them into governed action proposals so an agent can draft a ticket, send a message, or page an on-call person under your approval policy.
Notifications that need human sign-off surface an inline verification gate, so an operator can confirm or reject the AI’s recommendation before anything ships.
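Pulling the fields named above together, a notification record might look like this (a hypothetical data shape for illustration; field names are assumptions based on the description, not a documented schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notification:
    """One entry in a dashboard's notifications feed."""
    matched_data: dict
    visualization: str                   # "sparkline", "map", "markdown", or "single value"
    summary: str
    explanation: Optional[str] = None    # agentic alerts only
    confidence: Optional[float] = None   # agentic alerts only
    needs_verification: bool = False     # surfaces the inline sign-off gate
    acknowledged: bool = False

    def acknowledge(self) -> None:
        self.acknowledged = True
```

The optional fields are populated only for agentic alerts, and `needs_verification` is what would drive the inline gate for human sign-off.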
Cascade alerts across initiatives
Real-world events rarely stay in one lane. A hurricane alert in a weather initiative should inform the tourism revenue forecast, the hospital capacity dashboard, and the infrastructure risk register in your sibling command centers. Cascade alerts let an agentic evaluator pull in data from related initiatives as part of its decision, so the fire/suppress/defer call reflects the full picture — not just the signal that tripped it.
Point the alert at the sibling initiatives you want the evaluator to consider when you create it, and Delphi resolves the cross-initiative context automatically on every evaluation. The result: one coherent response across your whole organization instead of five disconnected pages.