# AI-SPM (AI Security Posture Management)
cloud-audit includes security checks for AWS AI/ML services. This covers Amazon Bedrock and Amazon SageMaker -- the two primary AWS services where unmonitored AI usage can lead to model theft, unauthorized inference costs, and data poisoning.
## Bedrock Checks
| Check ID | Description | Severity |
|---|---|---|
| aws-bedrock-001 | Model invocation logging disabled | HIGH |
| aws-bedrock-002 | Guardrails not configured | MEDIUM |
aws-bedrock-001 verifies that Bedrock model invocation logging is enabled. Without logging, you have no visibility into who is calling which models, how often, or with what inputs. This is the primary detection gap exploited in LLMjacking attacks.
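The aws-bedrock-001 logic can be sketched as a pure function over the response of boto3's `get_model_invocation_logging_configuration` call (a sketch under assumed response shapes, not the actual cloud-audit implementation):

```python
def check_bedrock_invocation_logging(config: dict) -> bool:
    """Sketch of aws-bedrock-001: True only if invocation logging is on.

    `config` is assumed to be the response from
    bedrock.get_model_invocation_logging_configuration(); when logging
    was never configured, the 'loggingConfig' key is absent.
    """
    logging_config = config.get("loggingConfig")
    if not logging_config:
        return False
    # Logging must target at least one destination (CloudWatch Logs or S3).
    return bool(
        logging_config.get("cloudWatchConfig")
        or logging_config.get("s3Config")
    )
```

Keeping the evaluation separate from the API call makes the check unit-testable without AWS credentials.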
aws-bedrock-002 checks whether at least one Bedrock guardrail is configured. Guardrails enforce content filtering, topic restrictions, and PII redaction on model inputs and outputs.
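The aws-bedrock-002 check reduces to counting configured guardrails. A minimal sketch, assuming the response shape of boto3's `list_guardrails` (in practice the call is paginated):

```python
def check_bedrock_guardrails(list_guardrails_response: dict) -> bool:
    """Sketch of aws-bedrock-002: pass if at least one guardrail exists.

    `list_guardrails_response` is assumed to be one page of the
    bedrock.list_guardrails() response.
    """
    return len(list_guardrails_response.get("guardrails", [])) > 0
```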
## SageMaker Checks
| Check ID | Description | Severity |
|---|---|---|
| aws-sagemaker-001 | Notebook instance root access enabled | HIGH |
| aws-sagemaker-002 | Notebook instance direct internet access | HIGH |
| aws-sagemaker-003 | SageMaker endpoint encryption disabled | MEDIUM |
aws-sagemaker-001 flags notebook instances running with root access. Root access combined with internet access (aws-sagemaker-002) gives an attacker a direct path to exfiltrate model weights and training data.
aws-sagemaker-002 checks for direct internet access on notebook instances. SageMaker notebooks with internet access can reach external endpoints without going through VPC controls.
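The two notebook checks can be evaluated together from a single `describe_notebook_instance` response; both fields are the strings `'Enabled'` or `'Disabled'`. A sketch (hypothetical helper, not the actual cloud-audit code):

```python
def check_notebook_instance(desc: dict) -> list:
    """Evaluate aws-sagemaker-001 and -002 against one notebook instance.

    `desc` is assumed to be the response from
    sagemaker.describe_notebook_instance(). Returns the list of
    failed check IDs.
    """
    findings = []
    if desc.get("RootAccess") == "Enabled":
        findings.append("aws-sagemaker-001")       # root on the instance
    if desc.get("DirectInternetAccess") == "Enabled":
        findings.append("aws-sagemaker-002")       # bypasses VPC controls
    return findings
```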
aws-sagemaker-003 verifies that SageMaker endpoints have KMS encryption enabled. Unencrypted inference endpoints expose model data if the underlying storage is compromised.
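The aws-sagemaker-003 logic amounts to checking for a KMS key on the endpoint config. A sketch, assuming the `KmsKeyId` field of boto3's `describe_endpoint_config` response (absent or empty when no customer-managed key is set):

```python
def check_endpoint_encryption(endpoint_config: dict) -> bool:
    """Sketch of aws-sagemaker-003: True if a KMS key encrypts the
    endpooint's attached storage.

    `endpoint_config` is assumed to be the response from
    sagemaker.describe_endpoint_config().
    """
    return bool(endpoint_config.get("KmsKeyId"))
```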
## Attack Chains
AI findings feed into three attack chains:
| Chain | Name | Description |
|---|---|---|
| AC-37 | AI Model Theft via SageMaker | Root access + internet access on notebook = model exfiltration path |
| AC-38 | LLMjacking - Unauthorized Model Usage | Missing Bedrock invocation logging = undetected model abuse |
| AC-39 | AI Data Poisoning via Unguarded Pipeline | Unencrypted training data = tampering risk |
See Attack Chains for the full rule list.
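How individual findings compose into a chain can be illustrated with AC-37: a notebook instance that fails both aws-sagemaker-001 and aws-sagemaker-002 forms the exfiltration path. A hypothetical correlation helper (not the actual cloud-audit rule engine):

```python
def detect_ac37(findings_by_resource: dict) -> list:
    """Illustrative AC-37 correlation: flag any resource that failed
    both sagemaker notebook checks.

    `findings_by_resource` maps a resource ID to its list of failed
    check IDs (an assumed intermediate shape, for illustration only).
    """
    chains = []
    for resource, findings in findings_by_resource.items():
        if {"aws-sagemaker-001", "aws-sagemaker-002"} <= set(findings):
            chains.append(("AC-37", resource))
    return chains
```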
## Why This Matters
LLMjacking (attackers using stolen credentials to run inference on your Bedrock models) is a growing attack vector. A single compromised access key can generate thousands of dollars in Bedrock charges before detection -- if logging is disabled, you only find out when the bill arrives.
cloud-audit is the first open-source tool to include AI-SPM checks alongside traditional cloud security checks. This means a single scan covers IAM, network, data protection, compliance, and AI/ML security in one pass.