# Configure log retention
## What you’ll accomplish

After completing this guide, Loki will automatically delete log data older than your configured retention period, reducing storage costs and helping meet data retention requirements.
## Prerequisites

- UDS CLI installed
- Access to a Kubernetes cluster with UDS Core deployed
- Loki connected to external object storage — see Configure HA logging for object storage setup
## Before you begin

By default, Loki retains logs indefinitely — no automatic deletion occurs unless you explicitly configure retention. Retention is handled by Loki’s compactor component, which runs on the backend tier and periodically marks expired log chunks for deletion from object storage.

Retention settings apply only to data stored in Loki. Logs already forwarded to external systems via Vector (see Forward logs to an external system) are not affected.
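The mark-then-delete behavior described above can be sketched in a few lines. This is an illustrative model, not Loki source code: the chunk structure, `sweep` function, and threshold values are assumptions chosen to mirror the settings configured in this guide.

```python
from datetime import datetime, timedelta, timezone

# Assumed values mirroring this guide's configuration; the real compactor
# reads these from loki.limits_config and loki.compactor settings.
RETENTION_PERIOD = timedelta(days=30)        # retention_period
RETENTION_DELETE_DELAY = timedelta(hours=2)  # retention_delete_delay

def sweep(chunks, now):
    """One compactor pass: mark chunks past retention, then delete
    chunks whose marker is older than the delete delay."""
    for chunk in chunks:
        if not chunk.get("marked_at") and now - chunk["newest_entry"] > RETENTION_PERIOD:
            chunk["marked_at"] = now  # marker written: chunk is pending deletion
        elif chunk.get("marked_at") and now - chunk["marked_at"] >= RETENTION_DELETE_DELAY:
            chunk["deleted"] = True   # past the safety delay: removed from storage

now = datetime.now(timezone.utc)
chunks = [
    {"newest_entry": now - timedelta(days=45)},  # expired: gets marked
    {"newest_entry": now - timedelta(days=5)},   # within retention: untouched
]
sweep(chunks, now)                       # first pass marks the expired chunk
sweep(chunks, now + timedelta(hours=3))  # later pass, past the delay, deletes it
```

The two-phase design is why the delete delay acts as a cancellation window: a marked chunk still exists in object storage until a later sweep runs after the delay.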
1. **Enable compactor retention and set a global retention period**

   Configure the compactor to enforce retention and set the default period for all log streams:

   ```yaml
   # uds-bundle.yaml
   packages:
     - name: core
       repository: registry.defenseunicorns.com/public/core
       ref: x.x.x-upstream
       overrides:
         loki:
           loki:
             values:
               # Enable retention enforcement in the compactor
               - path: loki.compactor.retention_enabled
                 value: true
               # Which object store holds delete request markers.
               # Must match your loki.storage.type (s3, gcs, azure, etc.)
               - path: loki.compactor.delete_request_store
                 value: "s3"
               # Directory for marker files that track chunks pending deletion.
               # Should be on persistent storage so deletes survive compactor restarts.
               - path: loki.compactor.working_directory
                 value: "/var/loki/compactor"
               # How often the compactor runs compaction and retention sweeps (Loki default: 10m)
               - path: loki.compactor.compaction_interval
                 value: "10m"
               # Safety delay before marked chunks are actually deleted from object storage.
               # Gives time to cancel accidental deletions. (Loki default: 2h)
               - path: loki.compactor.retention_delete_delay
                 value: "2h"
               # Number of parallel workers that delete expired chunks (Loki default: 150)
               - path: loki.compactor.retention_delete_worker_count
                 value: 150
               # Global retention period — logs older than this are deleted
               - path: loki.limits_config.retention_period
                 value: "30d"
   ```
2. **(Optional) Set per-stream retention rules**

   If different log streams need different retention periods, use `retention_stream` rules. For example, keep security-related logs longer while shortening retention for noisy infrastructure logs:

   ```yaml
   # uds-bundle.yaml
   packages:
     - name: core
       repository: registry.defenseunicorns.com/public/core
       ref: x.x.x-upstream
       overrides:
         loki:
           loki:
             values:
               - path: loki.compactor.retention_enabled
                 value: true
               - path: loki.compactor.delete_request_store
                 value: "s3"
               - path: loki.compactor.working_directory
                 value: "/var/loki/compactor"
               - path: loki.compactor.compaction_interval
                 value: "10m"
               - path: loki.compactor.retention_delete_delay
                 value: "2h"
               - path: loki.compactor.retention_delete_worker_count
                 value: 150
               - path: loki.limits_config.retention_period
                 value: "30d"
               - path: loki.limits_config.retention_stream
                 value:
                   - selector: '{namespace="keycloak"}'
                     priority: 1
                     period: "90d"
                   - selector: '{namespace="kube-system"}'
                     priority: 2
                     period: "7d"
   ```

   | Field | Purpose |
   | --- | --- |
   | `selector` | LogQL stream selector matching the logs to apply this rule to |
   | `priority` | Higher values take precedence when selectors overlap |
   | `period` | Retention period for matching streams (overrides the global default) |
3. **Create and deploy your bundle**

   ```bash
   uds create <path-to-bundle-dir>
   uds deploy uds-bundle-<name>-<arch>-<version>.tar.zst
   ```
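How the per-stream rules in step 2 interact can be sketched as a small resolution function. This is an illustrative model only, not Loki's implementation: selectors are simplified here to a single namespace-label match, and the rule and label values are the ones from the example bundle above.

```python
# Global default from loki.limits_config.retention_period
GLOBAL_RETENTION = "30d"

# retention_stream rules from step 2, with each LogQL selector reduced
# to a plain namespace label for this sketch
RULES = [
    {"namespace": "keycloak",    "priority": 1, "period": "90d"},
    {"namespace": "kube-system", "priority": 2, "period": "7d"},
]

def retention_for(stream_labels):
    """Highest-priority matching rule wins; no match falls back to global."""
    matches = [r for r in RULES if stream_labels.get("namespace") == r["namespace"]]
    if not matches:
        return GLOBAL_RETENTION
    return max(matches, key=lambda r: r["priority"])["period"]

print(retention_for({"namespace": "keycloak"}))     # 90d — security logs kept longer
print(retention_for({"namespace": "kube-system"}))  # 7d — noisy infra logs trimmed
print(retention_for({"namespace": "podinfo"}))      # 30d — global default applies
```

The key point: `priority` only matters when two selectors match the same stream, and any stream matched by no rule silently inherits the global `retention_period`.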
## Verification

Confirm retention is configured by inspecting the rendered Loki config:

```bash
uds zarf tools kubectl get secret -n loki loki -o jsonpath='{.data.config\.yaml}' | base64 -d | grep -A 10 compactor
```

You should see `retention_enabled: true` with your configured `delete_request_store`, `working_directory`, and other compactor settings.
After the retention period plus the `retention_delete_delay` has elapsed, verify that old chunks are being removed by monitoring your object storage bucket size over time.
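Putting the timing values together gives a rough worst-case estimate of when a log line becomes eligible for removal. A small sketch, assuming the values used in this guide (adjust to your configuration):

```python
from datetime import timedelta

retention_period       = timedelta(days=30)    # loki.limits_config.retention_period
compaction_interval    = timedelta(minutes=10) # worst case: a sweep just missed the chunk
retention_delete_delay = timedelta(hours=2)    # loki.compactor.retention_delete_delay

# Earliest removal relative to the log line's timestamp, roughly:
earliest_removal = retention_period + compaction_interval + retention_delete_delay
print(earliest_removal)  # 30 days, 2:10:00
```

In practice object storage size also lags behind deletions, so allow additional hours before concluding that retention is not working.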
## Troubleshooting

### Loki fails to start with “delete-request-store should be configured”

**Symptom:** Loki backend pods crash with: `invalid compactor config: compactor.delete-request-store should be configured when retention is enabled`.

**Solution:** Add the `loki.compactor.delete_request_store` override set to your storage backend type (e.g., `s3`, `gcs`, `azure`). This field is required whenever `retention_enabled` is true. See step 1 above.
### Logs not being deleted after retention period

**Symptom:** Object storage size continues to grow beyond the expected retention window.

**Solution:** Check the backend pod logs for compactor activity or errors:

```bash
uds zarf tools kubectl logs -n loki -l app.kubernetes.io/component=backend --tail=1000 | grep -i "compactor"
```

The compactor needs at least one full compaction cycle plus the `retention_delete_delay` (default: 2h) after deployment before chunks are actually removed. If storage size hasn’t decreased after several hours, check for errors related to object storage access in the output above.
## Related Documentation

- Grafana Loki: Retention — full compactor retention reference
- Grafana Loki: Limits Config — all `limits_config` fields including retention
- Configure HA logging — S3 storage setup and Loki scaling