Every security product on the market has an AI exposure story. Here's ours, told without the marketing layer.
Most ML-driven security products claim to be "AI-resistant." The presentations are good; the follow-up questions are where the story falls apart. This page is the version you'd want instead: nine AI-class attack categories that have started landing against security products, which categories of tool are exposed, and where this product sits in each row.
The short answer is that the detection path has no model in it. The long answer is the next 20 minutes of reading.
Why the original design accidentally turned out to be the right one
The security industry spent the last decade bolting ML and (more recently) LLMs onto nearly every detection product on the market. The benefits are real: behavior-based detection for polymorphic malware, natural-language alert summaries, automated triage. The costs are also real and have started coming due. ML models have an attack surface. LLMs have an attack surface. Training pipelines have an attack surface. Every product that built on those primitives now has to ship a hardening story for each one.
Device DNA was designed as a deterministic hash over a switch-derived signal set, resolved against CybrIQ's 750M+ device reference database. That decision happened back in 2017 and was originally about audit defensibility. An auditor will accept "the signature is the SHA of the collected signals plus the database identity record." An auditor will not accept "the model thought it looked like a Crestron." The team that made that call wasn't trying to win at AI-resistance; they were trying to win at SOC 2. The fact that the design also dodges most of the AI-attack categories below is something I'd call lucky.
The next sections walk through those attacks, name the tool categories that are exposed, and explain the engineering reason we're not in each list.
The attack categories that matter
1. Adversarial ML evasion
CybrIQ: not exposed
What it is. Attacker crafts inputs that look benign to a machine-learning classifier but actually correspond to the attack the classifier is meant to catch. Vast research literature (Goodfellow et al., 2014 onward). In production: malware authors increasingly tune payloads against open-source ML-based detection models until the classifier votes "benign."
Who's exposed. Any product whose detection path runs through an ML classifier: ML-based NDR (Vectra, Darktrace), ML-augmented EDR (most of them), URL/file reputation services, ML-based UEBA, behavioral anomaly detection.
Why we aren't. Device DNA isn't classified, it's resolved. The signature is a SHA hash over the switch-derived signal set combined with the matched identity record from the 750M+ reference database. There is no model deciding whether the observation looks like a device. The observation plus the database lookup IS the device's identity. An adversarial-input attack against us would require modifying the device's actual switch-side presentation. That's not an ML attack; that's a hardware engineering project.
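To make the resolution-versus-classification point concrete, here is a minimal sketch. The field names, the canonicalization, and the lookup are illustrative stand-ins, not the product's actual signal set or database schema.

```python
import hashlib
import json

# Illustrative switch-derived signal set. Field names are stand-ins;
# the product's real signal set and schema are not public.
observation = {
    "switch": "core-sw-03",
    "port": "port-47",
    "link_speed_mbps": 1000,
    "mac_oui": "00:10:7F",
    "poe_class": 3,
}

def resolve_identity(obs: dict) -> dict | None:
    """Stand-in for the reference-database lookup.

    Either the signals match a known identity record or they don't.
    Nothing here estimates, scores, or votes.
    """
    # Hypothetical matched record, for illustration only.
    return {"record_id": "crestron-dm-md8x8", "vendor": "Crestron"}

def device_dna(obs: dict) -> str:
    identity = resolve_identity(obs)
    # Canonical JSON so identical signals always hash to identical signatures.
    canonical = json.dumps({"signals": obs, "identity": identity}, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(device_dna(observation)[:12])  # stable prefix, displayed as e.g. "dna:7a4f-..."
```

An adversarial input can't move a hash; only changing the device's actual switch-side presentation changes the signature, which is the hardware-engineering project described above.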
2. Training-data poisoning
CybrIQ: not exposed
What it is. Attacker pollutes the training set so the model learns to ignore (or misclassify) the future attack. Documented against signature-based AV training pipelines, ML-IDS systems, and any product that retrains on customer telemetry. NIST SP 800-218A and the OWASP ML Top 10 both list it as a top concern.
Who's exposed. Products that retrain on customer or open-source data: most ML-EDR, ML-NDR, ML-UEBA, anti-phishing classifiers, malware-family clustering services.
Why we aren't. No training. No labels. No model. The signature derivation is fixed at design time and version-pinned. Adding a new device family doesn't retrain anything; it adds a row to a vendor-hint lookup table that ships as a human-reviewed pull request in version control. The closest thing we have to a "training pipeline" is "someone opened a PR."
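For illustration, this is roughly what a curated vendor-hint addition amounts to. The format and entries are invented; the point is the shape of the change: a static mapping edited by hand and reviewed in a pull request, not a retraining run.

```python
# Illustrative vendor-hint table. Entries and format are invented;
# the real table is a human-reviewed artifact that lives in version control.
VENDOR_HINTS: dict[str, dict[str, str]] = {
    "00:10:7F": {"vendor": "Crestron", "family": "DM series"},
    # Supporting a new device family means adding a reviewed entry here:
    # "AA:BB:CC": {"vendor": "NewVendor", "family": "NewFamily"},
}

def vendor_hint(mac_oui: str) -> dict[str, str] | None:
    # Deterministic lookup: the OUI is either in the table or it isn't.
    return VENDOR_HINTS.get(mac_oui.upper())

print(vendor_hint("00:10:7f"))
```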
3. Prompt injection
CybrIQ: not exposed
What it is. Attacker embeds adversarial text in observed inputs (log lines, ticket descriptions, file contents, web pages) that gets ingested by an LLM, hijacking the LLM's behavior to do something the operator didn't intend. OWASP LLM01. Documented against every commercial LLM-driven SOC tool since GPT-4 launched.
Who's exposed. LLM-driven SOC platforms (Microsoft Security Copilot, several startups), LLM-based alert summarization, LLM-powered chatbots that read incident tickets, "agentic" SOAR workflows that prompt LLMs with observed data. Any tool where attacker-controlled text reaches a language model.
Why we aren't. There is no LLM in the detection or analysis path. Output to humans is structured data: signature hashes, observation tuples, framework-control mappings. The only natural-language surface in the product is the email digest, which is a deterministic template fill (Mustache, basically), not an LLM completion. There's nowhere for an attacker to inject prompts into because there's nowhere a language model is reading our data.
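As a concrete contrast with an LLM completion, here is a minimal template-fill sketch. Python's string.Template stands in for the Mustache-style engine, and the template text and field names are placeholders.

```python
from string import Template  # stand-in for the Mustache-style engine the digest actually uses

# Fixed template: the sentence structure is decided at design time.
DIGEST_LINE = Template("Drift event: device $dna was substituted on $port at $timestamp.")

event = {"dna": "dna:7a4f-...", "port": "port-47", "timestamp": "14:08:33Z"}

# Deterministic fill. Whatever ends up in the event fields is emitted as
# literal text; there is no language model reading it that could be steered.
print(DIGEST_LINE.substitute(event))
```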
4. Model supply chain
CybrIQ: not exposed
What it is. Pretrained model weights or model components are compromised before the vendor or customer downloads them. Demonstrated repeatedly: PoisonGPT (2023), Hugging Face model takeovers, malicious model weights distributed via popular package registries. NIST SSDF, MITRE ATLAS.
Who's exposed. Anyone whose detection includes a pretrained third-party model. Common in newer ML-NDR products, several "AI SOC" startups, and any tool that bundles a Hugging Face checkpoint or OpenAI/Anthropic API call as part of detection.
Why we aren't. No model dependency. The product binary is a deterministic signal processor. External dependencies are operating-system libraries and the same human-curated vendor-hint table from category 2. Our supply-chain risk story is the boring software supply-chain story (CVEs in OS libraries, dependency audits), not the AI supply-chain story.
5. Hallucinated triage
CybrIQ: not exposed
What it is. An LLM-based SOC assistant produces a confident, well-formatted, plausible-sounding conclusion that is wrong. The analyst trusts it, closes the incident, and the actual breach is missed. Documented across early LLM-SOC deployments. The failure mode that worries CISOs most.
Who's exposed. LLM-based incident summarizers, LLM-based "auto-triage" features, LLM-powered "explain this alert" chatbots. Any tool where the analyst-facing conclusion is synthesized by a language model.
Why we aren't. The output IS the observation. When the dashboard says "device dna:7a4f-... was substituted on port-47 at 14:08:33Z", that statement is the literal content of the data file, not a synthesized natural-language summary. There's nothing for a model to hallucinate because there's no model in the path generating sentences. Of the nine categories on this page, this failure mode (a confident, wrong summary that lets the real attack proceed while the analyst closes the incident) is the deciding reason this product's detection path is model-free.
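A sketch of what "the output is the observation" means in practice. The record format is hypothetical; the point is that the analyst-facing line is a verbatim rendering of stored fields, with no summarization step in between.

```python
import json

# Hypothetical drift record as it might sit in the flat-file store.
stored_line = '{"dna": "dna:7a4f-...", "event": "substituted", "port": "port-47", "at": "14:08:33Z"}'

record = json.loads(stored_line)

# The analyst-facing statement is the stored fields, rendered verbatim.
# There is no step in which a model could invent a conclusion.
print(f'device {record["dna"]} was {record["event"]} on {record["port"]} at {record["at"]}')
```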
6. AI-generated polymorphic malware
CybrIQ: indirect benefit
What it is. An LLM (or specialized code-generation model) produces a payload with the same intent but different bytes, recompiling per target. Signature-based AV is defeated trivially. Recent: WormGPT, FraudGPT, several "evil-LLM" services available on hacker forums since mid-2024.
Who's exposed. Signature-only AV. Less affected: behavior-based EDR (because the malicious behaviors are constant even if bytes mutate). Least affected: tooling that doesn't analyze files at all.
Why CybrIQ's exposure is indirect. CybrIQ doesn't analyze files or content. It observes link-level device behavior. AI-generated payloads still need to run on physical hardware that draws a link, exposes a MAC, and produces traffic, and that physical-layer behavior is what CybrIQ records. The indirect benefit: when AI-mutated malware is dropped onto an asset, CybrIQ doesn't lose visibility of the asset just because the malware changed shape.
7. AI-powered reconnaissance
CybrIQ: indirect benefit
What it is. Automated reconnaissance pipelines (Shodan-like + LLM enrichment + autonomous target selection) generate continuous lists of likely-exposed assets for opportunistic exploitation. Increasingly the norm rather than the exception.
Who's exposed. The defender's problem here isn't a tool's exposure; it's whether the defender knows their own attack surface. Tools that depend on a static asset register are worse off; recon will find devices the defender didn't know existed.
Why CybrIQ's exposure is indirect. Accurate Layer 1 inventory means the question "what is on our network" has a real answer, not a stale-spreadsheet answer. When recon finds an exposed device, the security team can match the IP/port to the Device DNA record and act with full context.
8. AI-assisted social engineering
CybrIQ: out of scope
What it is. LLM-generated phishing, voice-cloned BEC, deepfaked video calls. Documented at scale. The fastest-growing initial-access category in the major IR reports.
Why this is out of CybrIQ's scope. Nothing about the attack is observable at Layer 1: CybrIQ sits at the physical layer, while phishing and BEC operate at the application and human layers above it. Email security gateways, MFA, identity-verification platforms, and security awareness training are the controls that apply. CybrIQ does not claim to defend here.
9. Model-output exfiltration
CybrIQ: not exposed
What it is. Attacker queries an LLM-based security tool repeatedly to extract training data, embedded credentials, or proprietary detection logic. Documented against several commercial AI products.
Who's exposed. LLM-driven security platforms with public-facing query APIs, internal LLM SOC tools that ingest sensitive data into context windows, any product where the model is queried with privileged information.
Why CybrIQ isn't. No model is queried. The dashboard is a deterministic read of a flat-file store. Extraction would require credentialed access, at which point the attacker is reading the data directly, not coaxing it out of a model.
Where each category of security tool sits
A quick reference matrix. X = generally exposed to this attack class in current implementations. - = not generally exposed; the attack class doesn't apply to that tool category's architecture. Limited = exposure depends on specific implementation choices.
| Tool category | Adversarial ML | Data poisoning | Prompt injection | Model supply chain | Hallucinated triage |
|---|---|---|---|---|---|
| ML-based EDR / NDR / UEBA | X | X | - | X | - |
| LLM-driven SOC / Copilot | - | Limited | X | X | X |
| Signature-based AV | - | Limited | - | - | - |
| Behavior-based EDR (no ML) | - | - | - | - | - |
| NAC (802.1X, MAB) | - | - | - | - | - |
| Anti-phishing classifiers | X | X | Limited | X | - |
| CybrIQ (Layer 1 inventory) | - | - | - | - | - |
Matrix reflects current implementation patterns as of 2026. Specific products in each category may have hardening that mitigates one or more rows.
What we give up to stay out of the AI attack surface
No architectural decision is free. The list below is what the deterministic posture costs you in capabilities, so you can decide whether the trade is worth it for your environment.
- No automated triage suggestions. When a drift event fires, the analyst gets the event with its raw observations and framework mapping. There's no LLM-generated "this looks like a likely T1499 attempt, here's a suggested response." The output is data, not advice. If your team wants triage suggestions, wire CybrIQ events into the SOAR or LLM-driven SOC tool of your choice; let that tool do the inference, not us.
- No predictive scoring. ML tools produce risk scores like "this device is 73% likely to be malicious." We produce facts like "this device's DNA changed at this timestamp." Converting facts to a risk score is the analyst's call or the SIEM correlation rule's call. We chose not to put a number on the analyst's judgment.
- Vendor-hint table is curated, not learned. When a new device family ships, identifying it as that family (e.g., "this Layer 1 fingerprint matches a Crestron DM-MD8x8") requires a human-reviewed table update. An ML system would generalize automatically. We accepted the slower path because audit defensibility requires it.
If your security program assumes ML triage and predictive scoring, we're a complement, not a replacement. We contribute facts to the SIEM, the analyst, or the ML-augmented downstream tool that consumes them. The fact that we don't make the inference call ourselves is the property that keeps us off the rows above.
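If you do want scoring or triage suggestions downstream, the integration is a plain event hand-off. A minimal sketch, assuming a hypothetical event shape and webhook URL; the receiving SOAR or SIEM owns any inference made on top of the fact.

```python
import json
import urllib.request

# Hypothetical drift event as CybrIQ might emit it; field names are illustrative.
drift_event = {
    "dna": "dna:7a4f-...",
    "port": "port-47",
    "event": "substituted",
    "observed_at": "2026-01-15T14:08:33Z",
}

# Forward the raw fact to a downstream SOAR/SIEM webhook (placeholder URL).
# Scoring, triage suggestions, and any ML inference happen on the receiving side.
req = urllib.request.Request(
    "https://soar.example.internal/webhooks/cybriq-drift",
    data=json.dumps(drift_event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)  # fire-and-forget for the sketch; real code would handle errors
```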
Frameworks referenced
For diligence and vendor-questionnaire backfill.
| Framework | Relevance |
|---|---|
| MITRE ATLAS | Adversarial Machine Learning threat matrix · the field's canonical reference for ML-specific attacks |
| OWASP LLM Top 10 (2025) | Prompt injection (LLM01), sensitive information disclosure (LLM02), supply chain (LLM03), data and model poisoning (LLM04) |
| NIST AI RMF 1.0 | Risk-management framing for AI deployment, including the security characteristics of GenAI |
| NIST SP 800-218A | Secure software development for generative AI & dual-use foundation models |
| ENISA Threat Landscape: AI | European-perspective threat inventory for AI systems |
Send us your AI-specific questionnaire. We'll answer it on a working call.
If your vendor diligence process includes AI-specific risk questions (it should), forward them ahead of the call and we'll walk through each one with engineering, not sales. The answers include the cases where "this isn't the right tool for what you're asking" is the honest read.
Send us the questionnaire