A walk through the per-port dashboard, drift-event console, device detail, evidence export, SIEM view, and admin operations. The screens below are the ones a security engineer actually lives in. I'll tell you what I look for on each one and what changes when something is wrong.
STOP 1 of 6
Per-port dashboard: the first screen of the morning
One row per port, filterable by building, floor, site, VLAN. Each row shows current Device DNA, vendor hint with confidence, last-seen timestamp, and the framework controls the port satisfies right now. I open this first every morning. If it's quiet, the day starts quiet.
What I do here. Scan the color column. One red (Hikvision, NDAA-prohibited) and one yellow (unknown at 0.41 confidence). The red is a compliance hit, not an active attack; the yellow is investigative. Everything else is the boring green that means the day stays normal.
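A minimal sketch of that triage pass over the rows. The field names, the prohibited-vendor list, and the 0.60 confidence floor are assumptions for illustration, not the product's actual schema or defaults.

```python
# Hypothetical per-port row and the color rule I apply to it.
from dataclasses import dataclass

NDAA_PROHIBITED = {"Hikvision", "Dahua"}   # illustrative list
CONFIDENCE_FLOOR = 0.60                    # assumed tuning value

@dataclass
class PortRow:
    port: str            # e.g. "bldg-4/fl-2/sw-07/gi1/0/12"
    device_dna: str      # current fingerprint id
    vendor_hint: str
    confidence: float
    last_seen: str       # ISO-8601 timestamp
    controls: list[str]  # framework controls this port currently satisfies

def color(row: PortRow) -> str:
    if row.vendor_hint in NDAA_PROHIBITED:
        return "red"     # compliance hit: goes to the compliance flow
    if row.confidence < CONFIDENCE_FLOOR:
        return "yellow"  # low-confidence identification: investigate
    return "green"       # nothing changed; nothing to do
```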
STOP 2 of 6
Drift events: the queue I actually triage
Events fire when the inventory changes. Each carries event type, severity, port, MITRE technique, and the underlying observations. The queue is the second thing I open after the dashboard. If anything fired overnight, this is where I see it.
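For orientation, here is roughly the shape one of those events could take, built from the fields listed above. Keys and values are illustrative, not the product's wire format.

```python
# Illustrative drift event; every field name and value is an assumption.
drift_event = {
    "event_type": "device_substitution",
    "severity": "P1",
    "port": "bldg-4/fl-2/sw-07/gi1/0/12",
    "mitre_technique": "T1200",          # Hardware Additions (illustrative mapping)
    "observed_at": "2024-05-14T17:08:02Z",
    "similarity": 0.31,                  # vs. the device previously on this port
    "observations": {
        "prior_dna": "dna:9f3a12",
        "current_dna": "dna:4c71e8",
        "markers_changed": ["power_draw", "link_characteristics", "reference_lookup"],
    },
}
```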
What I do here. The 17:08 P1 is the one that gets my attention. Similarity 0.31 against a known Crestron means something else is plugged in. That goes straight to the on-call playbook. The 14:08 NDAA hit is a different escalation: compliance flow, not incident response. The P3s wait for the morning standup.
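That triage habit is mechanical enough to write down. A sketch of the routing rule, assuming the event shape above; the threshold and queue names are mine, not the product's.

```python
# Hypothetical routing: deep-similarity substitutions page on-call,
# NDAA hits go to compliance, everything else waits for standup.
SIMILARITY_FLOOR = 0.50  # assumed cut-off for "something else is plugged in"

def route(event: dict) -> str:
    if (event["event_type"] == "device_substitution"
            and event.get("similarity", 1.0) < SIMILARITY_FLOOR):
        return "page-oncall"       # incident-response playbook
    if event["event_type"] == "ndaa_prohibited_vendor":
        return "compliance-queue"  # escalation, but not an incident
    return "standup-review"        # the P3s
```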
STOP 3 of 6
Device detail: the marker breakdown
Click any event or port to get the full record. Current and prior Device DNA. The five underlying markers, side by side. Framework controls. Recommended response, with the link to the runbook.
What I see. Same MAC, different everything else. Power dropped 2.2W. Link characteristics shifted. The reference-database lookup now resolves to a different device record. This is a real swap, not a firmware update. The marker comparison is the part that closes the argument when someone on the team says "are we sure?"
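The comparison itself is just a field-by-field diff of the prior and current DNA records. A sketch, with made-up marker names and values chosen to match the scenario above (same MAC, 2.2W power drop, changed link characteristics and reference lookup).

```python
# Marker-by-marker diff between two DNA records; names and values are illustrative.
def diff_markers(prior: dict, current: dict) -> dict:
    """Return only the markers whose values changed."""
    markers = ("mac_address", "power_draw_w", "link_characteristics",
               "timing_profile", "reference_lookup")
    return {m: (prior.get(m), current.get(m))
            for m in markers if prior.get(m) != current.get(m)}

changed = diff_markers(
    {"mac_address": "3c:2a:f4:10:22:7e", "power_draw_w": 7.9,
     "link_characteristics": "100FDX/auto", "reference_lookup": "crestron-panel"},
    {"mac_address": "3c:2a:f4:10:22:7e", "power_draw_w": 5.7,
     "link_characteristics": "1000FDX/forced", "reference_lookup": "unknown-sbc"},
)
# power_draw_w, link_characteristics, and reference_lookup changed;
# mac_address did not: the "same MAC, different everything else" pattern.
```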
STOP 4 of 6
Evidence export: the part that used to take six weeks
For audit cycles. Pre-mapped to controls across the major frameworks. The auditor asks "show me your network inventory for PCI 12.5.1"; you query, you export. Signed and timestamped at the control plane.
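A sketch of that query-and-export flow. The endpoint path, parameters, and auth scheme are assumptions for illustration, not the product's documented API.

```python
# Hypothetical export call: pull the evidence bundle for one control.
import requests

BASE = "https://controlplane.example.com/api/v1"   # assumed endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}

resp = requests.get(
    f"{BASE}/evidence",
    headers=HEADERS,
    params={"framework": "PCI-DSS", "control": "12.5.1", "site": "hq"},
    timeout=30,
)
resp.raise_for_status()

# The bundle comes back signed and timestamped at the control plane;
# hand it to the auditor as-is.
with open("pci-12.5.1-inventory.json", "wb") as fh:
    fh.write(resp.content)
```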
What changes. Audit prep stops being a six-week reconstruction project. The export is what the auditor asked for, signed and timestamped. At one customer, the auditor ran the query themselves during the walk-through; that was the moment the audit conversation changed shape.
STOP 5 of 6
SIEM ingest: what shows up in Splunk, Sentinel, Chronicle
The data the SOC analyst consumes inside their existing tool. Same payload across all ingest channels; only the transport changes.
What I wire up. The substitution event becomes a high-fidelity SIEM source. The query I run nightly is the one that matters: any port where the device got swapped and there was no open change ticket. That query alone has surfaced two real findings in the install base I've watched.
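One way that correlation could look, written as an SPL search and submitted through Splunk's REST search API. The index, sourcetype, and field names are assumptions about how the drift events and change tickets land, not shipped content.

```python
# Hedged sketch: run the nightly correlation as a blocking Splunk search job.
import requests

SPLUNK = "https://splunk.example.com:8089"
NIGHTLY_SPL = (
    "search index=netsec sourcetype=drift_event event_type=device_substitution earliest=-24h "
    "| join type=left port [ search index=itsm sourcetype=change_ticket status=open ] "
    "| where isnull(ticket_id) "
    "| table _time port prior_dna current_dna similarity"
)

resp = requests.post(
    f"{SPLUNK}/services/search/jobs",
    auth=("svc_drift_review", "********"),
    data={"search": NIGHTLY_SPL, "exec_mode": "blocking", "output_mode": "json"},
)
resp.raise_for_status()
sid = resp.json()["sid"]  # fetch results from /services/search/jobs/<sid>/results
```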
STOP 6 of 6
Admin: operational status and tuning
The ops view. External Scan Engine (ESE) health, tracker success rate, similarity-threshold tuning per environment, change-management lookback window, audit log of admin actions. The screen the operator opens when someone asks "is this thing still working?"
What I check. ESE health and data freshness. If data freshness is over 90 seconds, something is wrong with the tracker. If a tracker is down, the validation loop is silent on the ports it covers. That's the failure mode the threat-model page walks through.
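That freshness check is easy to automate. A minimal sketch, assuming the admin side exposes per-tracker last-observation timestamps; the endpoint and field names are hypothetical.

```python
# Alert on any tracker whose data freshness exceeds the 90-second limit above.
from datetime import datetime, timezone
import requests

FRESHNESS_LIMIT_S = 90

resp = requests.get(
    "https://controlplane.example.com/api/v1/ese/trackers",   # assumed endpoint
    headers={"Authorization": "Bearer <api-token>"},
    timeout=10,
)
resp.raise_for_status()

now = datetime.now(timezone.utc)
for tracker in resp.json()["trackers"]:
    last = datetime.fromisoformat(tracker["last_observation"].replace("Z", "+00:00"))
    age = (now - last).total_seconds()
    if age > FRESHNESS_LIMIT_S:
        # A stale tracker means the validation loop is silent on its ports.
        print(f"ALERT tracker={tracker['id']} freshness={age:.0f}s > {FRESHNESS_LIMIT_S}s")
```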
Want to see these screens with your own data?
30-day pilot. We deploy the ESE, ingest your Layer 1 observations, and walk through every screen on this tour using your devices and your events, with our engineering team on the call.