
Policy Violations

Use this runbook when:

  • A pod is rejected by an admission webhook with a Pepr denial message
  • A workload’s security context or configuration was unexpectedly modified after deployment
  • A Deployment, DaemonSet, or StatefulSet shows 0 available replicas with no obvious pod-level errors

Example error:

```
admission webhook "pepr-uds-core.pepr.dev" denied the request: Privilege escalation is disallowed. Authorized: [allowPrivilegeEscalation = false | privileged = false] Found: {"name":"test","ctx":{"capabilities":{"drop":["ALL"]},"privileged":true}}
```
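The Found: {...} portion of the message echoes the container that was rejected. For the example above, the offending part of the pod spec would look roughly like this (the container name test comes from the error; the surrounding structure is illustrative):

```yaml
# Illustrative reconstruction of the spec behind the example error above
spec:
  containers:
    - name: test
      securityContext:
        privileged: true      # triggers the denial
        capabilities:
          drop: ["ALL"]       # present, but privileged: true still fails validation
```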

UDS Core uses Pepr to enforce two types of policies on every resource submitted to the cluster:

  1. Mutations — run first and silently correct common misconfigurations. Your workloads may be adjusted without any error.
  2. Validations — run after mutations and reject resources that cannot be automatically corrected, returning a clear error message.

Work through the following steps to determine which applies:

  1. Check for a validation denial

    Stream denial events to see if your workload is being rejected:

    ```shell
    uds monitor pepr denied -f
    ```

    If denials aren’t streaming in real time, you can also check controller events directly. Denials appear on the owning controller — not the pod itself:

    ```shell
    # For Deployments — check the ReplicaSet
    uds zarf tools kubectl get replicaset -n <namespace>
    uds zarf tools kubectl describe replicaset -n <namespace> <replicaset-name>

    # For DaemonSets or StatefulSets — check the controller directly
    uds zarf tools kubectl describe daemonset -n <namespace> <name>
    uds zarf tools kubectl describe statefulset -n <namespace> <name>
    ```

    What to look for: denial events in the monitor output, or admission webhook denial messages in the controller Events section. If found, skip to Cause 1: Validation rejected your resource.

  2. Check whether a mutation adjusted your workload

    If there’s no denial but your workload behaves unexpectedly, check for mutation events:

    ```shell
    uds monitor pepr mutated -f
    ```

    You can also compare the running pod’s security context against your original spec:

    ```shell
    uds zarf tools kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].securityContext}'
    ```

    What to look for: mutation events for your workload in the monitor output, or security context values that differ from your spec. If found, skip to Cause 2: Mutation adjusted your workload.

Cause 1: Validation rejected your resource


The error message format varies by policy — some include Authorized: [...] Found: {...} details, while others are simple messages. Common fixes:

| Error message | Fix |
| --- | --- |
| Privilege escalation is disallowed. Authorized: [...] | Remove privileged: true and set allowPrivilegeEscalation: false in securityContext |
| Sharing the host namespaces is disallowed | Remove hostNetwork, hostPID, and hostIPC from the pod spec |
| NodePort services are not allowed | Change the Service type to ClusterIP and use the service mesh gateway for external access |
| Volume <name> has a disallowed volume type | Use only allowed volume types (configMap, csi, downwardAPI, emptyDir, ephemeral, persistentVolumeClaim, projected, secret) |
| Host ports are not allowed | Remove hostPort from container port definitions |
| Unauthorized container capabilities in securityContext.capabilities.add | Remove capabilities beyond NET_BIND_SERVICE from securityContext.capabilities.add |
| Unauthorized container DROP capabilities | Ensure securityContext.capabilities.drop includes ALL |
| Containers must not run as root | Set runAsNonRoot: true and runAsUser to a non-zero value in securityContext |
| hostPath volume '<name>' must be mounted as readOnly | Set readOnly: true on the volume mount |
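Several of the fixes above land in the same place. A minimal sketch of a container securityContext that satisfies the privilege, capability, and non-root policies at once (UID 1000 is an arbitrary non-zero example):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000                  # any non-zero UID
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
```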

If the fix isn’t possible, see Create UDS policy exemptions.
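Exemptions are declared as a UDS Exemption custom resource. The sketch below assumes the schema described in the UDS Core exemption docs — the policy names and the uds-policy-exemptions namespace are recalled from those docs, so verify them against your UDS Core version:

```yaml
apiVersion: uds.dev/v1alpha1
kind: Exemption
metadata:
  name: my-app-exemption            # hypothetical name
  namespace: uds-policy-exemptions  # exemptions must live in this namespace
spec:
  exemptions:
    - policies:
        - RequireNonRootUser        # policy to bypass for matching pods
      matcher:
        namespace: my-app           # hypothetical target namespace
        name: "^my-app-.*"          # regex matching pod names
```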

Cause 2: Mutation adjusted your workload

UDS Core applies three mutations to all pods:

| Mutation | What it does |
| --- | --- |
| Disallow Privilege Escalation | Sets allowPrivilegeEscalation to false unless the container is privileged or has CAP_SYS_ADMIN |
| Require Non-root User | Sets runAsNonRoot: true and defaults runAsUser/runAsGroup to 1000 if not specified |
| Drop All Capabilities | Sets capabilities.drop to ["ALL"] for all containers |
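Taken together, a container submitted with no securityContext at all should come back from admission looking roughly like this (derived from the table above; exact field ordering may differ in your cluster):

```yaml
securityContext:
  allowPrivilegeEscalation: false  # Disallow Privilege Escalation mutation
  runAsNonRoot: true               # Require Non-root User mutation
  runAsUser: 1000                  # defaulted when not specified
  runAsGroup: 1000                 # defaulted when not specified
  capabilities:
    drop: ["ALL"]                  # Drop All Capabilities mutation
```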
  1. Control user/group IDs via pod labels

    To set specific user/group IDs, add labels to the pod rather than fighting the mutation:

    ```yaml
    metadata:
      labels:
        uds/user: "65534"    # sets runAsUser
        uds/group: "65534"   # sets runAsGroup
        uds/fsgroup: "65534" # sets fsGroup
    ```
  2. Add specific capabilities when needed

    The DropAllCapabilities mutation drops all capabilities, but your workload may need specific ones. You can still add capabilities alongside the drop: ["ALL"] — for example, NET_BIND_SERVICE is allowed by default. If your workload needs additional capabilities beyond the allowed set, create an exemption for RestrictCapabilities.

  3. If the mutation is not acceptable, create an exemption

    See Create UDS policy exemptions to bypass specific mutations for your workload.
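Steps 1 and 2 above can be combined in one pod template. A sketch assuming a workload that binds a privileged port (hence NET_BIND_SERVICE) and needs a specific UID — the label value and container name are illustrative:

```yaml
metadata:
  labels:
    uds/user: "65534"               # overrides the default runAsUser of 1000
spec:
  containers:
    - name: app                     # hypothetical container name
      securityContext:
        capabilities:
          drop: ["ALL"]             # required by the Drop All Capabilities policy
          add: ["NET_BIND_SERVICE"] # allowed by default alongside drop: ["ALL"]
```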

After applying a fix or creating an exemption, confirm the issue is resolved:

```shell
# Verify pods are running
uds zarf tools kubectl get pods -n <namespace>

# Check that the security context matches expectations
uds zarf tools kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[0].securityContext}'
```

Success indicators:

  • All pods are Running and Ready
  • No denial events in uds monitor pepr denied -f output
  • Security context fields match expected values

If this runbook doesn’t resolve your issue:

  1. Collect relevant details from the steps above
  2. Check UDS Core GitHub Issues for known issues
  3. Open a new issue with your relevant details attached