● Engineering reference, for security engineers, SOC analysts, and detection-engineering teams.
Threat model

Trust boundaries. Defended attacks. Undefended attacks. What an adversary would do to this tool itself.

Most vendors describe what they catch. This page also describes what they don't, what an attacker would do to bypass us, and what happens when the platform itself is in the crosshairs. The threat model that survives diligence is the one written by the people who designed the product, not by the people who sell it. That's the shape of this page.

[Diagram: three trust zones, data flowing left to right. Zone 1, the customer network (the attacker's plane): switches plus attached devices; the attacker can plug anything in, and that's the threat surface we detect. Zone 2, the External Scan Engine (ESE), a small customer-on-prem server: read-only, no write access to switches, no endpoint agent for inventory, runs in the customer's hardening envelope. Zone 3, the CybrIQ control plane, cloud or on-prem: SOC 2 Type II, mTLS auth, per-tenant isolation. Signed records cross each boundary one way, over mTLS.]

Data crosses each boundary in one direction. CybrIQ staff have no path back into the customer network. The arrows point right; nothing comes back left. Detail tables for each zone and attack class are below.

Trust boundaries

CybrIQ's data flow has three trust zones. Each boundary is enforced by its own control.

Zone | What lives here | Boundary controls
1. Customer network (the attacker's plane) | Devices plugged into customer switches. The attacker can plug anything into any port; that's the threat surface we're built to detect. | Physical access control, cable management, NAC enforcement. These are the customer's controls, not ours.
2. External Scan Engine (ESE) | CybrIQ software running on a small customer-on-prem server (Linux or Windows). Polls the customer's managed switches in read-only mode and emits Device DNA records. | Signed software releases, reproducible builds. Runs inside the customer's standard server-hardening envelope (locked rack, host integrity, signed-software-only policy). mTLS to the control plane. No traffic injection, no write access to switches.
3. Control plane (cloud or on-prem) | CybrIQ tenant. Receives DNA records, runs the 750M+ device reference-database lookup, normalizes, stores, exposes via syslog and REST API. Hosts the dashboard. | mTLS auth. Per-tenant isolation. AES-256 at rest. Audit log of every read. SOC 2 Type II controls.

Key assumption. The attacker can compromise zone 1 (anyone can plug something into a switch port). Our job is to surface that and ship a signal across the boundary into zones 2 and 3.
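
To make the one-way property concrete: the ESE's only network behavior is an outbound, client-certificate-authenticated push. A minimal sketch in Python, assuming a hypothetical ingest endpoint, file layout, and record shape (not the shipped schema):

```python
# Minimal sketch of the ESE-to-control-plane push: one outbound
# mTLS-authenticated POST, nothing listening for inbound traffic.
# Endpoint path, cert file names, and record fields are illustrative
# assumptions, not the shipped schema.
import requests

CONTROL_PLANE = "https://tenant.cybriq.example/ingest/v1/dna"  # hypothetical

def push_dna_record(record: dict) -> None:
    resp = requests.post(
        CONTROL_PLANE,
        json=record,
        # Client certificate proves the ESE's identity (mTLS);
        # the CA bundle pins the control plane's identity.
        cert=("/etc/ese/client.crt", "/etc/ese/client.key"),
        verify="/etc/ese/control-plane-ca.pem",
        timeout=10,
    )
    resp.raise_for_status()

push_dna_record({
    "port": "gi1/0/14",                      # illustrative values
    "dna": "opaque-signed-dna-payload",
    "observed_at": "2025-01-01T00:00:00Z",
})
```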

Attacks CybrIQ defends against

By design, CybrIQ catches what other tools miss because the observation happens before higher-layer abstractions kick in.

Scenario | Why other tools miss it | How CybrIQ catches it
Rogue device plug-in | Doesn't auth via 802.1X · no EDR agent · not in asset register | device-appeared with the unenrolled flag
Swap attack (replace with malicious twin) | Same MAC OUI, same VLAN tag, same SSO claim: all match | device-substituted from DNA shift below similarity threshold
Unauthorized switch insertion | Inserted switch is transparent to L2+; downstream device still authenticates normally | port-topology-changed from Layer 1 topology probe
NDAA-prohibited equipment | Vendor relabeling, MAC spoofing, certificate issuance can hide the manufacturer at L2+ | Layer 1 fingerprint matches banned-vendor signature regardless of relabel
BYOD on production VLAN by mistake | If the device authenticates, NAC lets it through | device-class-mismatch against the expected device class for the port
Tampered firmware on inherited hardware | Firmware change is below the OS level; agents don't run on AV gear, codecs, etc. | DNA shifts in characteristic ways even with same MAC and same vendor
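
The right-hand column names the events as they land on the syslog and REST paths. A sketch of what consuming them might look like; the JSON field names here are assumptions for illustration, not the published schema:

```python
# Hypothetical detection-event shape and a minimal consumer.
# Field names are assumptions for illustration, not the published schema.
import json

EXAMPLE_EVENT = json.loads("""{
  "event": "device-substituted",
  "port": "gi1/0/14",
  "similarity": 0.62,
  "threshold": 0.90,
  "flags": [],
  "observed_at": "2025-01-01T00:00:00Z"
}""")

HIGH_SEVERITY = {"device-substituted", "port-topology-changed"}

def route(event: dict) -> str:
    """Map a CybrIQ event type to a triage queue."""
    if event["event"] in HIGH_SEVERITY:
        return "page-on-call"
    if event["event"] == "device-appeared" and "unenrolled" in event["flags"]:
        return "investigate-today"
    return "review-queue"

print(route(EXAMPLE_EVENT))  # -> page-on-call
```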

Attacks this platform does not defend against

We name the gaps up front, because that's the question your vendor-diligence team is going to ask anyway, and you'd rather see the answer here than have to chase us for it.

  • Application-layer attacks (SQLi, XSS, RCE, deserialization). CybrIQ is not in this path; use a WAF.
  • Malware execution and persistence on a device. CybrIQ sees that the device is the device it was. What it does in user-space is EDR's job.
  • Identity compromise. Stolen creds, MFA bypass, SSO replay: these are IAM problems. CybrIQ doesn't watch authentication events.
  • Encrypted-traffic exfiltration. Inside TLS, CybrIQ sees only that the link is live, not what's flowing. Use NDR with cert visibility.
  • DNS-based C2. Operates at L7; not in scope.
  • Phishing, BEC, social engineering. Doesn't transit Layer 1 in a useful sense.
  • Cloud / SaaS attacks. No physical wire to observe; not in scope.
  • Insider threats with valid credentials. If the user is authorized and the device DNA matches, CybrIQ records the legitimate access. UEBA covers misuse-of-valid-access cases.

Attacks against CybrIQ itself

What an attacker could do to disable or evade the platform, and the controls that mitigate each.

1. Compromise of the ESE host

What. Attacker gains administrative access to the small on-prem server the ESE runs on, modifies the binary, replaces the host, or removes it from the network.

Mitigation. The ESE runs inside the customer's standard server-hardening envelope: locked rack, host-integrity monitoring, signed-software-only policy. The CybrIQ binary is signed; the reproducible-build path lets the customer verify the running version's SHA against the artifact registry. If the ESE goes offline, the control plane logs the loss within 60 seconds and fires a critical alert. We can't prevent host compromise from outside the customer's environment, but the ESE can't be tampered with silently.
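
A sketch of that verification step, assuming the registry publishes one SHA-256 per release (the URL, manifest layout, and version string are hypothetical):

```python
# Verify the running ESE binary against the published release hash.
# Registry URL and manifest layout are hypothetical.
import hashlib
import requests

REGISTRY = "https://artifacts.cybriq.example/releases"  # hypothetical

def verify_binary(path: str, version: str) -> bool:
    published = requests.get(f"{REGISTRY}/{version}/sha256", timeout=10).text.strip()
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the binary in 1 MiB chunks to keep memory flat.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == published

assert verify_binary("/opt/ese/bin/ese", "2.4.1")  # paths and version illustrative
```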

2. Switch-derived signal spoofing

What. Attacker engineers a device whose switch-derived signal set matches the legitimate device's so closely that the 750M+ reference-database lookup resolves to the same identity.

Mitigation. The signal set is structured across multiple independent dimensions; the database match requires consistency across all of them. Spoofing all of them at line rate while maintaining functionality is a research-grade attack. We don't claim it's impossible; we claim it's several orders of magnitude harder than spoofing the abstractions above Layer 2, which are spoofable in software with public tools. The similarity score also surfaces near-matches as low-confidence signals, so a partial spoof shows up as anomalous rather than as a clean pass.
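
To make "consistency across all of them" concrete, here is a toy sketch of weighted multi-dimension matching with a three-band outcome. The dimension names, weights, and thresholds are invented for illustration, not the production model:

```python
# Toy sketch of multi-dimension DNA matching. Dimension names, weights,
# and thresholds are invented for illustration, not the production model.
BASELINE = {"rise_time": 0.81, "impedance": 0.44, "jitter": 0.12, "timing_skew": 0.67}
WEIGHTS  = {"rise_time": 0.3, "impedance": 0.3, "jitter": 0.2, "timing_skew": 0.2}

def similarity(observed: dict) -> float:
    # Per-dimension closeness in [0, 1], combined by weight. A spoof that
    # nails three dimensions but misses one is pulled down, not passed.
    return sum(
        WEIGHTS[d] * max(0.0, 1.0 - abs(observed[d] - BASELINE[d]))
        for d in BASELINE
    )

def classify(score: float, threshold: float = 0.90, floor: float = 0.70) -> str:
    if score >= threshold:
        return "match"
    if score >= floor:
        return "low-confidence"      # near-match: surfaces as anomalous
    return "device-substituted"      # clean miss: fires the detection

partial_spoof = {"rise_time": 0.80, "impedance": 0.45, "jitter": 0.45, "timing_skew": 0.40}
print(classify(similarity(partial_spoof)))  # -> low-confidence
```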

3. Denial of service against the control plane

What. Attacker mounts a denial of service against the CybrIQ tenant or the customer's control-plane endpoint.

Mitigation. The ESE buffers DNA records locally for up to 14 days when the control plane is unreachable. On reconnect, the backlog ships at full rate. Detection events are delayed during the outage but not lost. Customer-hosted control-plane deployments are sized for their own peak ingestion plus 4x burst.
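
A sketch of the store-and-forward behavior; the 14-day window is from the text above, and the storage shape is an assumption for illustration:

```python
# Sketch of the ESE's store-and-forward buffer: keep up to 14 days of
# records while the control plane is unreachable, replay oldest-first
# on reconnect. Storage shape is an assumption for illustration.
import time
from collections import deque

RETENTION_SECONDS = 14 * 24 * 3600  # 14-day buffer from the text

class DnaBuffer:
    def __init__(self) -> None:
        self._queue: deque[tuple[float, dict]] = deque()

    def append(self, record: dict) -> None:
        now = time.time()
        self._queue.append((now, record))
        # Drop anything older than the retention window.
        while self._queue and now - self._queue[0][0] > RETENTION_SECONDS:
            self._queue.popleft()

    def flush(self, send) -> int:
        """On reconnect, ship the backlog oldest-first at full rate."""
        shipped = 0
        while self._queue:
            _, record = self._queue.popleft()
            send(record)  # e.g. the mTLS push sketched earlier
            shipped += 1
        return shipped
```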

4. API credential theft

What. Attacker exfiltrates API keys or mTLS certificates and queries the API or injects fake events.

Mitigation. Per-tenant key scoping; read, write, and soar-action scopes are issued separately. Source-IP allowlists are mandatory for write and SOAR-action tokens. The ESE-to-control-plane channel uses mTLS with a key bound to the ESE host at registration; an attacker can't impersonate the ESE without obtaining that key. All API access is audit-logged; anomalous queries trigger their own detection rule.
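
Composed as a gate, the scoping rules look roughly like this; the token shape is invented for illustration:

```python
# Sketch of the control plane's token gate: separate scopes per action,
# mandatory source-IP allowlist for write and SOAR-action tokens.
# Token shape is invented for illustration.
import ipaddress

MUTATING_SCOPES = {"write", "soar-action"}

def authorize(token: dict, required_scope: str, source_ip: str) -> bool:
    if required_scope not in token["scopes"]:
        return False
    if required_scope in MUTATING_SCOPES:
        # Mutating tokens are rejected without an allowlist, and
        # rejected from any address outside it.
        allowlist = token.get("ip_allowlist")
        if not allowlist:
            return False
        addr = ipaddress.ip_address(source_ip)
        if not any(addr in ipaddress.ip_network(net) for net in allowlist):
            return False
    return True

token = {"scopes": ["read", "write"], "ip_allowlist": ["10.20.0.0/16"]}
assert authorize(token, "read", "203.0.113.7")       # reads aren't IP-bound here
assert not authorize(token, "write", "203.0.113.7")  # write outside allowlist
assert authorize(token, "write", "10.20.4.9")
```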

5. Supply-chain compromise of the CybrIQ software

What. Adversary compromises CybrIQ's build or release pipeline so a malicious binary reaches customers.

Mitigation. The CybrIQ software is reproducibly built; SHA-256 hashes are published per release on the public artifact registry; customers can verify the binary they're running matches. Software releases are signed by a key stored in a hardware token, held offline at CybrIQ's HQ. The ESE refuses any unsigned image. Build provenance attestation (SLSA Level 3) ships with every release.
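
A sketch of the refusal behavior at load time; the signature scheme (Ed25519) and the file layout are assumptions, not the shipped design:

```python
# Sketch of the signed-image check: refuse any release whose signature
# doesn't verify against the pinned release key. Ed25519 and the file
# layout are assumptions for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def load_release(image_path: str, sig_path: str, pinned_pubkey: bytes) -> bytes:
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    key = Ed25519PublicKey.from_public_bytes(pinned_pubkey)
    try:
        key.verify(signature, image)
    except InvalidSignature:
        # Unsigned or tampered image: never executed.
        raise SystemExit("release image failed signature check; refusing to load")
    return image
```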

Operational failure modes and recovery

The other half of the threat model: not what an attacker does to us, but what breaks operationally. Diligence reviewers ask both. Here's the runbook for the ones that come up most.

Failure | What you see | What's lost | Recovery
ESE loses network to control plane | Detection events keep firing locally on the ESE; control-plane dashboard shows a stale-source alert within 60 seconds | Nothing for up to 14 days (ESE buffers locally) | Automatic on reconnect; backlog ships at full rate, ordered by original timestamp
ESE loses management access to a switch | That switch shows as polling-degraded in the dashboard; existing port records preserve their last-known state | Visibility into that switch only; rest of the deployment unaffected | Customer restores credentials or routing; ESE auto-detects within one polling cycle. Existing port history reconciles forward.
ESE host hardware / VM failure | Control plane alerts on the ESE being unreachable within 60 seconds. Switches in scope are unmonitored until host is replaced. | Inventory is paused for the affected switches; existing history persists in the control plane | Customer provisions another Linux or Windows host (cloned VM image or fresh install). ESE software re-installs in minutes; configuration restores from the control plane on first connect.
Control plane outage (cloud tenant) | Dashboard unavailable; ingest queues at the ESE | None during the outage (events are buffered); dashboard access is the only thing affected | SLA 99.9%. Status page links from the dashboard. SOC 2 Type II evidence on request.
Database resolves a device to the wrong identity | The dashboard surfaces a confidence score; suspicious resolutions show low confidence. The customer can flag specific identities through the dispute flow. | Vendor-hint accuracy on the affected device until the fix lands | Customer files the dispute; engineering team reviews; corrected entry ships in the next semi-weekly database update. In the meantime, the customer's tenant gets a confidence-score override that suppresses noise from the wrong identity.
False positive (signature drift in normal operation) | A device-substituted event fires on a port where nothing actually changed | Analyst time triaging a non-event | Tune similarity threshold for the VLAN; add the new DNA pattern to the per-port allow-list; report it through the false-positive feedback loop. Repeated patterns inform threshold defaults in future releases.
False negative (real attack missed) | Attack proceeds; downstream detection (EDR, NDR, SOC analyst spot-check) catches it after the fact | Detection time on the swap; the post-substitution DNA still lands in the record for forensics | Tighten thresholds. Report the case through engineering escalation; each confirmed FN feeds the signal-improvement backlog.
USB-threat agent goes silent on a workstation | Agent health check fails after the configurable interval; the host shows as agent-stale in the dashboard | USB-threat detection coverage on that one host only; the inventory path is unaffected (it never depended on the agent) | Restart the agent service (or re-deploy the agent via the customer's existing endpoint-management tool). Switch-derived inventory continues uninterrupted throughout.
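
The 60-second stale-source behavior in the first and third rows reduces to a heartbeat check; a sketch with invented names:

```python
# Sketch of the stale-source check: any ESE silent for more than
# 60 seconds raises a critical alert. Names are invented for illustration.
import time

STALE_AFTER_SECONDS = 60  # alert window from the table

def stale_sources(last_seen: dict[str, float], now: float | None = None) -> list[str]:
    now = time.time() if now is None else now
    return [ese for ese, ts in last_seen.items() if now - ts > STALE_AFTER_SECONDS]

heartbeats = {"ese-hq": time.time(), "ese-plant-2": time.time() - 300}
print(stale_sources(heartbeats))  # -> ['ese-plant-2']
```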

The pattern across the table: every failure has a bounded blast radius (one switch, one host, one tenant), every failure has an automatic-or-near-automatic recovery path, and no failure produces silent data loss. Buffering and audit-trail integrity are the design guarantees we hold to.

Need this in a format your diligence team can run with?

Vendor-risk documentation (CAIQ Lite, SIG, custom) is a follow-up exchange between your procurement-security team and ours. Where the honest answer is "no, that's not what we do," we say it directly in the response — those answers age better than diplomatic ones.

Send us your questionnaire