Scenarios let you ask “what if” against the same data your dashboard reads from — no spreadsheets, no exports, no parallel universes to keep in sync. Run a projection, get a decision artifact, and keep the result cleanly isolated from your live operational numbers so nothing gets muddled. When the underlying data changes materially, Delphi knows to offer a rerun.
What scenarios are good for
Scenarios are for forward-looking questions where you already have the baseline in Delphi and want to model a change. Good candidates: resource and capacity planning, supply or demand shocks, budget reallocations, timing shifts on a project, stress-testing a threshold before you commit to it.
They are not a replacement for reports or ad-hoc chat. If you just want a written explanation of what happened last quarter, use Reports. If you want a single number or chart, ask chat directly. Reach for scenarios when you need a structured projection with baseline-vs-projected comparisons, highlighted metrics, and assumptions you can review later.
Run a scenario from chat
The fastest path is to describe the question in plain language to Delphi. The agent will pull the relevant datasets from your dashboard as the baseline, record your assumptions, and execute the projection. Results land on the Scenarios tab as their own workspace — separate datasets, separate visualizations, tied back to the baseline you started from.
What if we delay Project Atlas by four weeks and start Project Phoenix on May 1 instead? Use our current workforce allocations as the baseline and flag any role that would be overallocated.
You can also name a template explicitly, attach a time range, or anchor to a specific dataset. The more context you give Delphi up front, the less it has to guess. Scenarios you create show up on the Scenarios tab with their status (pending, running, completed, failed) and can be rerun on demand.
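The chat prompt is the whole interface, but the moving parts are easy to picture as data. Here is a minimal sketch of a scenario request and the four lifecycle statuses listed above; the field names are hypothetical illustrations, not Delphi's documented internals:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ScenarioStatus(Enum):
    """The four statuses shown on the Scenarios tab."""
    PENDING = "pending"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"

@dataclass
class ScenarioRequest:
    """Hypothetical shape of a scenario run: the question is required,
    everything else is optional context that reduces guessing."""
    question: str                          # plain-language "what if"
    baseline_dataset: Optional[str] = None # anchor to a specific dataset
    time_range: Optional[str] = None       # e.g. a start/end window
    template: Optional[str] = None         # name a template explicitly
```

The optional fields mirror the advice above: a request with only a question works, but naming a dataset, time range, or template narrows what the agent has to infer.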
Built-in templates
Delphi ships with scenario templates tuned for common operational questions. The resource sufficiency template is the headline example: give it a project (start date, duration, required roles) and your current workforce (allocations and planned leave), and it returns a Proceed / Defer / Conditional decision with a shortfall estimate, slip-risk percentage, week-by-week burndown, bottleneck roles, and suggested mitigations.
Other templates handle threshold stress tests and trend projections against any dataset already wired into your dashboard. You don’t need to choose a template by name — describe the question and Delphi will pick the right one.
Do we have enough senior backend engineers to start Project Phoenix on May 1 for 12 weeks without slipping Project Atlas?
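To make the shape of that decision concrete, here is a toy Proceed / Defer / Conditional rule over role-weeks. The function name, the slack margin, and the thresholds are all invented for this sketch; Delphi's actual model also weighs leave, burndown, and slip risk:

```python
from dataclasses import dataclass

@dataclass
class RoleDemand:
    role: str
    engineers_needed: int  # headcount required each week
    weeks: int

def sufficiency_decision(demand: RoleDemand, available_role_weeks: int,
                         slack: float = 0.1) -> tuple[str, int]:
    """Toy decision rule: compare required role-weeks against capacity.
    A shortfall within the slack margin is Conditional; beyond it, Defer."""
    needed = demand.engineers_needed * demand.weeks
    shortfall = max(0, needed - available_role_weeks)
    if shortfall == 0:
        return "Proceed", shortfall
    if shortfall <= needed * slack:
        return "Conditional", shortfall
    return "Defer", shortfall
```

For the Phoenix question above, 3 senior backend engineers for 12 weeks needs 36 role-weeks; 34 available lands in Conditional territory, while 20 available is a clear Defer.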
Interpreting results
Every completed scenario produces a short summary, a baseline-vs-projected comparison, and a set of highlighted metrics, each with its delta from baseline. Treat the highlights as the headline: which numbers moved, by how much, and in what direction. The summary explains the “why” in prose, and the generated charts live in the scenario’s own workspace so you can explore without touching the main dashboard.
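The highlighted-metrics view is essentially a per-metric delta between baseline and projected values. A minimal sketch of that computation, with illustrative field names (this is how to read the table, not Delphi's implementation):

```python
def metric_deltas(baseline: dict, projected: dict) -> dict:
    """For each baseline metric, report the projected value, the delta,
    and the direction of movement (up / down / flat)."""
    out = {}
    for name, base in baseline.items():
        proj = projected.get(name, base)  # unchanged if not projected
        delta = proj - base
        out[name] = {
            "baseline": base,
            "projected": proj,
            "delta": delta,
            "direction": "up" if delta > 0 else "down" if delta < 0 else "flat",
        }
    return out
```

Reading a scenario result is then just scanning this structure: the metrics whose direction is not "flat" are the headline.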
Pay attention to the assumptions list and the confidence score. Assumptions are the inputs Delphi used to bridge gaps in the data — if one of them is wrong, the projection is wrong, and you should rerun with a correction. Confidence reflects how much signal the baseline datasets actually carry; a low number usually means the scenario needs more history or tighter inputs before you act on it.
When the underlying data shifts beyond the scenario’s sensitivity thresholds, Delphi will flag the result as stale. Rerun it from the Scenarios tab or ask in chat — the new run reuses the same assumptions so you’re comparing apples to apples across time.
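Conceptually, the staleness check compares drift in the underlying data against a sensitivity threshold. A toy version of that test (the relative-drift rule and the 5% default are assumptions for illustration, not Delphi's documented thresholds):

```python
def is_stale(baseline_value: float, current_value: float,
             threshold: float = 0.05) -> bool:
    """Flag a scenario stale when the underlying value has drifted
    beyond the sensitivity threshold, measured as relative change."""
    if baseline_value == 0:
        return current_value != 0  # any movement off zero counts as drift
    return abs(current_value - baseline_value) / abs(baseline_value) > threshold
```

Under this rule, a 3% shift in a baseline input leaves the scenario fresh, while a 10% shift flags it for a rerun with the same assumptions.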