Route alerts to notification channels
What you’ll accomplish
Configure Alertmanager to deliver alerts from Prometheus and Loki to notification channels like Slack, PagerDuty, or email. Centralizing alert routing through Alertmanager ensures your team receives consistent, actionable notifications from a single hub rather than managing alerts across multiple systems.
Prerequisites
- UDS CLI installed
- Access to a Kubernetes cluster with UDS Core deployed
- A webhook URL or credentials for your notification service (e.g., Slack incoming webhook)
Before you begin
Alertmanager is the central hub for all alerts in UDS Core. Both Prometheus metric alerts and Loki log alerts route through it, so configuring Alertmanager receivers is the single point of integration for all notification delivery.
The Alertmanager UI is not directly exposed in UDS Core because it lacks built-in authentication. Use the Grafana > Alerting section to view and manage alerts instead. If you need direct access to the Alertmanager UI, use:
```shell
uds zarf connect alertmanager
```
Configure Alertmanager receivers and routes
Define the notification receivers and routing rules that determine which alerts go where. The example below routes critical and warning alerts to a Slack channel while sending the always-firing `Watchdog` alert to an empty receiver to reduce noise.

uds-bundle.yaml

```yaml
packages:
  - name: core
    repository: registry.defenseunicorns.com/public/core
    ref: x.x.x-upstream
    overrides:
      kube-prometheus-stack:
        uds-prometheus-config:
          values:
            # Allow Alertmanager to reach your notification service
            - path: additionalNetworkAllow
              value:
                - direction: Egress
                  selector:
                    app.kubernetes.io/name: alertmanager
                  ports:
                    - 443
                  remoteHost: hooks.slack.com
                  remoteProtocol: TLS
                  description: "Allow egress Alertmanager to Slack"
        kube-prometheus-stack:
          values:
            # Setup Alertmanager receivers
            # See: https://prometheus.io/docs/alerting/latest/configuration/#general-receiver-related-settings
            - path: alertmanager.config.receivers
              value:
                - name: slack
                  slack_configs:
                    - channel: "#alerts"
                      send_resolved: true
                - name: empty
            # Setup Alertmanager routing
            # See: https://prometheus.io/docs/alerting/latest/configuration/#route-related-settings
            - path: alertmanager.config.route
              value:
                group_by: ["alertname", "job"]
                receiver: empty
                routes:
                  # Send always-firing Watchdog alerts to the empty receiver to avoid noise
                  - matchers:
                      - alertname = Watchdog
                    receiver: empty
                  # Send critical and warning alerts to Slack
                  - matchers:
                      - severity =~ "warning|critical"
                    receiver: slack
          variables:
            - name: ALERTMANAGER_SLACK_WEBHOOK_URL
              path: alertmanager.config.receivers[0].slack_configs[0].api_url
              sensitive: true
```

uds-config.yaml

```yaml
variables:
  core:
    ALERTMANAGER_SLACK_WEBHOOK_URL: "https://hooks.slack.com/services/XXX/YYY/ZZZ"
```
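Alertmanager supports many receiver types beyond Slack. As a sketch of how the routing tree extends — the PagerDuty receiver, its `routing_key` placeholder, and the per-severity split below are illustrative additions, not part of this guide — critical alerts could page while warnings stay in chat:

```yaml
# Hypothetical extension: add a PagerDuty receiver and split routing by severity.
# The routing_key is a placeholder; see the Prometheus Alertmanager receiver
# reference for the full pagerduty_configs schema.
- path: alertmanager.config.receivers
  value:
    - name: slack
      slack_configs:
        - channel: "#alerts"
          send_resolved: true
    - name: pagerduty
      pagerduty_configs:
        - routing_key: "<your-pagerduty-integration-key>"
          send_resolved: true
    - name: empty
- path: alertmanager.config.route
  value:
    group_by: ["alertname", "job"]
    receiver: empty
    routes:
      - matchers:
          - alertname = Watchdog
        receiver: empty
      # Routes are evaluated in order; the first match wins unless continue: true is set
      - matchers:
          - severity = critical
        receiver: pagerduty
      - matchers:
          - severity = warning
        receiver: slack
```

A PagerDuty receiver would also need its own `additionalNetworkAllow` entry so Alertmanager can reach the PagerDuty API.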
Create and deploy your bundle
```shell
uds create <path-to-bundle-dir>
uds deploy uds-bundle-<name>-<arch>-<version>.tar.zst
```
Silence alerts during maintenance
You can temporarily mute alerts during maintenance windows or investigations by creating a silence through the Grafana UI.
- Navigate to Alerting > Silences
- Ensure Choose Alertmanager is set to `Alertmanager` (not `Grafana`)
- Click New Silence
- Specify matchers for the alerts you want to silence, set a duration, and add a comment
Verification
Confirm alert routing is working:
```shell
# Check Alertmanager pods are running
uds zarf tools kubectl get pods -n monitoring -l app.kubernetes.io/name=alertmanager

# View Alertmanager logs for delivery status
uds zarf tools kubectl logs -n monitoring -l app.kubernetes.io/name=alertmanager --tail=50
```

Success criteria:
- Grafana > Alerting > Alert rules shows active alerts
- The `Watchdog` alert fires continuously by design. If routing is configured correctly, it should not appear in your notification channel (it routes to the `empty` receiver)
- Critical or warning alerts arrive in your configured notification channel, with `send_resolved` notifications when they clear
Troubleshooting
Alerts not arriving in notification channel
Symptom: Alert rules show as firing in Grafana, but no notifications appear in Slack (or your configured channel).
Solution: Verify that route matchers match the alert labels — a mismatch causes alerts to fall through to the default empty receiver. Check the receiver configuration (webhook URL, channel name). Review Alertmanager logs for delivery errors:
```shell
uds zarf tools kubectl logs -n monitoring -l app.kubernetes.io/name=alertmanager --tail=50
```

Alertmanager can’t reach external service
Symptom: Alertmanager logs show connection timeout or DNS resolution errors when sending notifications.
Solution: Verify the `additionalNetworkAllow` configuration includes the correct `remoteHost` and port for your notification service. Ensure the egress policy selector targets Alertmanager pods (`app.kubernetes.io/name: alertmanager`). See Configure network access for Core services for details on configuring egress policies.
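As an illustration of the shape such a policy takes — the host and description below are examples for a hypothetical PagerDuty integration, not part of this guide — an egress entry for another notification service follows the same pattern as the Slack rule above:

```yaml
# Hypothetical egress rule allowing Alertmanager to reach the PagerDuty Events API.
- path: additionalNetworkAllow
  value:
    - direction: Egress
      selector:
        app.kubernetes.io/name: alertmanager   # must match the Alertmanager pods
      ports:
        - 443
      remoteHost: events.pagerduty.com
      remoteProtocol: TLS
      description: "Allow egress from Alertmanager to PagerDuty"
```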
Related Documentation
- Prometheus: Alertmanager configuration — full receiver and route configuration reference
- Prometheus: Alertmanager integrations — supported notification channels (Slack, PagerDuty, OpsGenie, email, webhooks, etc.)
- Configure network access for Core services — egress policy configuration for notification services
Next steps
These guides may be useful to explore next: