| LLM01: Prompt Injection | Mitigated | L1 pattern matching + L2 ML semantic classifier (DeBERTa-small ONNX FP16) + L3 indirect injection scanning + L4 session context. 99.8% detection with a 0% false-positive rate on the open 497-attack / 1,172-negative benchmark; ~75% detection against the continuous red-team mutation engine. |
| LLM02: Insecure Output Handling | Mitigated | L5 output guard and L6 PII & credential detector scan all outbound responses, detecting and redacting credentials, PII (15 categories), and internal URLs. |
| LLM03: Training Data Poisoning | Awareness | Threat intelligence feed tracks poisoning research. Crawdad operates at inference time, not training time. |
| LLM04: Model Denial of Service | Not covered | Rate limiting and resource management are upstream provider concerns. |
| LLM05: Supply Chain Vulnerabilities | Partial | Skill attestation (SHA-256), SBOM generation, dependency tracking. Does not cover model weight supply chain. |
| LLM06: Sensitive Information Disclosure | Mitigated | L6 PII & credential detector covers 15 PII categories and 10+ credential types, including AWS, GitHub, Stripe, OpenAI, and Anthropic API keys, JWTs, SSH keys, and database URLs. The L2 ML classifier catches system-prompt extraction attempts. |
| LLM07: Insecure Plugin Design | Partial | Policy engine evaluates tool calls against configurable rules. Action authorization endpoint checks agent permissions before execution. |
| LLM08: Excessive Agency | Partial | Policy engine with configurable rules. Trust levels restrict which actions agents can take. Audit log records all decisions. |
| LLM09: Overreliance | Not covered | Human judgment about AI output quality is outside the scope of runtime security. |
| LLM10: Model Theft | Not covered | Model access controls are the API provider's responsibility. |
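The L1 pattern-matching layer in the LLM01 row can be sketched as a fast regex pass over inbound text. This is an illustrative sketch only: the pattern list, `INJECTION_PATTERNS`, and `l1_scan` are assumptions, not Crawdad's actual rule set, and in practice this layer is paired with the L2 semantic classifier because patterns alone are easy to mutate around.

```python
import re

# Hypothetical L1-style patterns for common injection phrasings.
# A real rule set would be far larger and continuously updated.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (in )?(DAN|developer) mode", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def l1_scan(text: str) -> bool:
    """Return True if any known injection pattern matches the input."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

A hit here can short-circuit the request before the more expensive ML classifier runs; a miss falls through to L2 rather than being treated as clean.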
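The SHA-256 skill attestation in the LLM05 row amounts to pinning a digest and failing closed on mismatch. A minimal sketch, assuming a byte-level packaging of the skill; the function names and pinning scheme are illustrative, not Crawdad's actual API.

```python
import hashlib

def skill_digest(skill_bytes: bytes) -> str:
    """Hex SHA-256 digest of a skill's packaged bytes."""
    return hashlib.sha256(skill_bytes).hexdigest()

def verify_skill(skill_bytes: bytes, pinned_digest: str) -> bool:
    """Fail closed: accept the skill only on an exact digest match."""
    return skill_digest(skill_bytes) == pinned_digest
```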
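The L6 detect-and-redact behavior in the LLM02 and LLM06 rows can be sketched as regex substitution over outbound text. The AWS access-key-ID and GitHub token prefixes below are publicly documented formats; the category names, `CREDENTIAL_PATTERNS`, and `redact` are assumptions for illustration, and a production detector would cover many more types plus entropy and context checks.

```python
import re

# A small subset of credential/PII patterns, keyed by category name.
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each detected secret with a [REDACTED:<category>] marker."""
    for name, pattern in CREDENTIAL_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text
```

Redacting rather than blocking keeps the rest of the response usable while the audit log records which categories fired.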
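The policy engine described in the LLM07 and LLM08 rows can be sketched as a deny-by-default check of an agent's trust level against a per-action rule table. The level names, rule table, and `authorize` function are illustrative assumptions, not Crawdad's actual schema.

```python
# Ordered trust levels: higher number means more privilege.
TRUST_LEVELS = {"untrusted": 0, "standard": 1, "privileged": 2}

# Configurable rules: minimum trust level required per action.
POLICY = {
    "read_file": "standard",
    "send_email": "privileged",
    "delete_repo": "privileged",
}

def authorize(agent_trust: str, action: str) -> bool:
    """Deny by default; allow only if the agent meets the action's bar."""
    required = POLICY.get(action)
    if required is None:
        return False  # unknown action: fail closed
    return TRUST_LEVELS[agent_trust] >= TRUST_LEVELS[required]
```

Failing closed on unlisted actions is the key design choice: an agent gaining a new tool does not gain authority until a rule explicitly grants it.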