D13: Security and Infrastructure
Technology Pillar
Security and Infrastructure assesses the security posture specific to AI workloads: protection against adversarial attacks, data poisoning, model theft, and prompt injection, together with the infrastructure hardening required to operate AI systems safely in production environments.
Why It Matters
AI systems introduce novel attack surfaces that traditional cybersecurity does not address. Adversarial inputs can fool models, training data can be poisoned, model weights can be stolen, and prompt injection can bypass safety controls. Organizations that do not extend their security posture to cover AI-specific threats expose themselves to operational, reputational, and regulatory risk.
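To make the adversarial-input threat concrete, the sketch below crafts a small, targeted perturbation (in the style of the fast gradient sign method) against a toy linear classifier. The model, weights, and data are hypothetical and chosen purely for illustration; real attacks target far larger models, but the mechanism is the same: a perturbation aligned with the model's gradient flips the decision while the input still looks benign.

```python
import numpy as np

# Toy binary classifier: fixed weights and bias (illustrative values only).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class under the toy linear model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies as positive.
x = np.array([0.8, -0.3, 0.4])
p_clean = predict_proba(x)

# FGSM-style step: move each feature in the direction that most lowers the
# score. For a linear model the gradient of the logit w.r.t. x is simply w.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
p_adv = predict_proba(x_adv)

print(f"clean score: {p_clean:.3f}, adversarial score: {p_adv:.3f}")
```

The perturbation is bounded per-feature by epsilon, yet it is enough to push the classification across the decision boundary, which is why standard input validation alone does not defend against this class of attack.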
Maturity Levels
- Level 1: Foundational
- Standard IT security is applied to AI workloads with no AI-specific threat modeling, testing, or controls.
- Level 2: Developing
- AI-specific security risks have been identified and basic controls exist (e.g., access management for models and data), but adversarial testing is not conducted.
- Level 3: Defined
- AI threat models are maintained and regularly updated; adversarial testing is conducted for high-risk models, and security reviews are part of the deployment process.
- Level 4: Advanced
- Automated adversarial testing, continuous vulnerability scanning for AI components, and red-teaming exercises are standard; incident response plans cover AI-specific scenarios.
- Level 5: Transformational
- The organization contributes to AI security research, operates a dedicated AI security team, and security is embedded as a design principle in every AI system from inception.
Key Activities
- Conduct AI-specific threat modeling covering adversarial attacks, data poisoning, and model theft
- Implement access controls for model weights, training data, and inference endpoints
- Establish adversarial testing and red-teaming practices for AI systems
- Design AI-specific incident response procedures and playbooks
- Monitor AI systems for anomalous inputs and outputs that may indicate attack attempts
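The monitoring activity above can be sketched as a simple statistical detector. Everything here is an assumption for illustration: the monitored feature (prompt length), the z-score heuristic, and the threshold are stand-ins for whatever signals and detection logic a production monitoring pipeline would actually use.

```python
import statistics

def zscore_alerts(values, threshold=2.5):
    """Return indices of values that deviate from the mean by more than
    `threshold` population standard deviations. Threshold is illustrative."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical per-request prompt lengths, including one extreme outlier
# that might indicate an injection attempt padded with hidden instructions.
prompt_lengths = [120, 135, 110, 140, 125, 9000, 130, 118]
print(zscore_alerts(prompt_lengths))
```

In practice such detectors would run continuously over many input and output signals (token counts, embedding drift, refusal rates), with flagged requests routed to the AI-specific incident response process.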
Assessment Criteria
- Existence of AI-specific threat models covering known attack categories
- Percentage of production AI systems that have undergone adversarial testing
- Presence of AI-specific incident response procedures
- Evidence of regular security review for AI deployments including model and data access controls
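One of the criteria above, adversarial-testing coverage, lends itself to a simple computation over a system inventory. The inventory schema and field names below are assumptions for illustration, not a prescribed format.

```python
# Hypothetical AI system inventory; fields are illustrative.
inventory = [
    {"name": "fraud-scoring",    "in_production": True,  "adversarially_tested": True},
    {"name": "support-chatbot",  "in_production": True,  "adversarially_tested": False},
    {"name": "demand-forecast",  "in_production": True,  "adversarially_tested": True},
    {"name": "recsys-prototype", "in_production": False, "adversarially_tested": False},
]

# Percentage of production AI systems that have undergone adversarial testing.
production = [s for s in inventory if s["in_production"]]
tested = [s for s in production if s["adversarially_tested"]]
coverage = 100.0 * len(tested) / len(production)
print(f"adversarial-testing coverage: {coverage:.0f}%")
```

Tracking this figure over time, rather than as a one-off snapshot, gives assessors evidence of movement between maturity levels.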
Abdelalim, T. (2025). “Security and Infrastructure — COMPEL Technology Pillar.” COMPEL by FlowRidge. https://www.compel.one/domain/security-and-infrastructure