# Forward logs to an external system
## What you'll accomplish

After completing this guide, Vector will forward logs to an external S3-compatible destination for SIEM ingestion or long-term archival, while continuing to send all logs to Loki.
## Prerequisites

- UDS CLI installed
- Access to a Kubernetes cluster with UDS Core deployed
- An S3-compatible bucket with write access (AWS S3, MinIO, or equivalent)
- For AWS: an IAM role for IRSA with `s3:PutObject` permission on the target bucket
## Before you begin

Vector ships all pod and node logs to Loki by default through two pre-configured sinks (`loki_pod` and `loki_host`). Adding a new sink sends logs to an additional destination; it does not replace Loki.
You can choose what to forward:
- **All pod logs** - reference the `pod_logs_labelled` transform in your sink's `inputs` field (includes all pods with Kubernetes metadata)
- **Specific namespaces only** - add a custom source with a namespace label selector
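For the first option, the sink override can point `inputs` directly at the built-in `pod_logs_labelled` transform, with no custom source needed. A minimal sketch of such an override (placed under `overrides.vector.vector.values` in your bundle; the sink name and remaining options mirror the filtered example later in this guide):

```yaml
# Sketch: forward ALL pod logs by reusing the built-in transform.
# Only `inputs` differs from the namespace-filtered example below.
- path: customConfig.sinks.siem_logs
  value:
    type: "aws_s3"
    inputs: ["pod_logs_labelled"] # built-in transform: all pods, with k8s metadata
    compression: "gzip"
    encoding:
      codec: "json"
```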
Vector supports many destination types beyond S3. This guide uses S3 as a concrete example. For other destinations (Elasticsearch, Splunk HEC, Kafka, etc.), see the Vector sinks reference and adapt the sink configuration accordingly.
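As an illustration of how little changes when you swap destinations, here is a hedged sketch of the same override pointed at Elasticsearch instead of S3. The endpoint URL and index pattern are placeholders, and the exact option names should be confirmed against the Vector sinks reference before use:

```yaml
# Sketch only: an Elasticsearch sink in place of aws_s3 (details illustrative).
- path: customConfig.sinks.siem_logs
  value:
    type: "elasticsearch"
    inputs: ["filtered_logs"]                         # source defined in Step 1
    endpoints: ["https://elastic.example.com:9200"]   # placeholder endpoint
    bulk:
      index: "uds-logs-%Y.%m.%d"                      # placeholder index pattern
```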
## Step 1: Add a Vector sink via bundle overrides

The example below forwards only Keycloak and Pepr logs to an S3 bucket. It adds a custom source that collects logs from just the `keycloak` and `pepr-system` namespaces, then ships them to S3 using IRSA authentication with GZIP compression.

**uds-bundle.yaml**

```yaml
packages:
  - name: core
    repository: registry.defenseunicorns.com/public/core
    ref: x.x.x-upstream
    overrides:
      vector:
        vector:
          values:
            # Add a separate log source that only collects from the keycloak and pepr-system namespaces.
            # This lets you forward just these logs to your external system instead of everything.
            # The "extra_namespace_label_selector" filters by Kubernetes namespace labels.
            - path: customConfig.sources.filtered_logs
              value:
                type: "kubernetes_logs"
                extra_namespace_label_selector: "kubernetes.io/metadata.name in (keycloak,pepr-system)"
                oldest_first: true
            # Static sink configuration: structure that stays the same across environments.
            # Only bucket, region, and credentials change per environment (set via variables below).
            - path: customConfig.sinks.siem_logs
              value:
                type: "aws_s3"
                inputs: ["filtered_logs"]
                compression: "gzip"
                encoding:
                  codec: "json"
                framing:
                  method: "newline_delimited"
                key_prefix: "vector_logs/{{ kubernetes.pod_namespace }}/"
                buffer:
                  type: "disk"
                  max_size: 1073741824 # 1 GiB
                acknowledgements:
                  enabled: false
          variables:
            # Environment-specific values: set in uds-config.yaml per deployment
            - path: customConfig.sinks.siem_logs.bucket
              name: VECTOR_S3_BUCKET
            - path: customConfig.sinks.siem_logs.region
              name: VECTOR_S3_REGION
            # IRSA role annotation for S3 access: allows Vector's service account
            # to assume an IAM role instead of using static credentials
            - path: serviceAccount.annotations.eks\.amazonaws\.com/role-arn
              name: VECTOR_IRSA_ROLE_ARN
              sensitive: true
```

**uds-config.yaml**

```yaml
variables:
  core:
    VECTOR_S3_BUCKET: "my-siem-logs-bucket"
    VECTOR_S3_REGION: "us-east-1"
    VECTOR_IRSA_ROLE_ARN: "arn:aws:iam::123456789012:role/vector-s3-role"
```
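If your destination is MinIO or another S3-compatible store rather than AWS, the `aws_s3` sink accepts a custom `endpoint`. A sketch of the extra override (the URL is a placeholder, and you would replace the IRSA variable with your store's own credential mechanism):

```yaml
# Sketch: additional override for an S3-compatible endpoint such as MinIO.
# The endpoint URL is a placeholder; remember to point the egress
# remoteHost in the next step at the same host.
- path: customConfig.sinks.siem_logs.endpoint
  value: "https://minio.example.com:9000"
```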
## Step 2: Allow network egress for Vector

Vector needs network access to reach your external endpoint. Add an egress allow rule to the same `uds-bundle.yaml`, under the existing `core` package overrides:

**uds-bundle.yaml**

```yaml
packages:
  - name: core
    repository: registry.defenseunicorns.com/public/core
    ref: x.x.x-upstream
    overrides:
      vector:
        uds-vector-config:
          values:
            - path: additionalNetworkAllow
              value:
                - direction: Egress
                  selector:
                    app.kubernetes.io/name: vector
                  remoteHost: s3.us-east-1.amazonaws.com
                  port: 443
                  description: "S3 Storage"
```

For the full set of egress control options, see Configure network access for Core services.
## Step 3: Create and deploy your bundle

```shell
uds create <path-to-bundle-dir>
uds deploy uds-bundle-<name>-<arch>-<version>.tar.zst
```
## Verification

Confirm Vector is running and the new sink is active:

```shell
# Check Vector pods for errors
uds zarf tools kubectl logs -n vector -l app.kubernetes.io/name=vector --tail=20
```

Verify data is arriving at your S3 bucket:

```shell
# AWS CLI example
aws s3 ls s3://my-siem-logs-bucket/vector_logs/ --recursive | head
```

## Troubleshooting

### S3 write failures

**Symptom:** Vector logs show PutObject errors or access denied messages.
**Solution:** Verify the IAM role has `s3:PutObject` permission on the target bucket and prefix. Confirm the IRSA annotation is correct and the service account is bound to the role:
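A minimal IAM policy granting just that permission might look like the following (the bucket name and prefix are placeholders matching this guide's example configuration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-siem-logs-bucket/vector_logs/*"
    }
  ]
}
```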
```shell
uds zarf tools kubectl get sa -n vector vector -o yaml | grep eks.amazonaws.com
```

### No logs arriving in S3
**Symptom:** Vector is running without errors but no objects appear in the bucket.

**Solution:** Confirm the `inputs` field references an existing source. If using a custom source like `filtered_logs`, verify the namespace label selector matches your target namespaces:

```shell
uds zarf tools kubectl get ns --show-labels | grep "kubernetes.io/metadata.name"
```

### Connection timeout
**Symptom:** Vector logs show connection timeout errors to the S3 endpoint.

**Solution:** Check that the network egress allow rule is deployed. Verify the `additionalNetworkAllow` value is under the `uds-vector-config` chart (not the `vector` chart):

```shell
uds zarf tools kubectl get netpol -n vector
```

## Related Documentation
- Vector sinks reference - full list of supported destinations
- Vector AWS S3 sink - all S3 sink configuration options
- Configure network access for Core services - network egress for Core components
- Logging Concepts - how the Vector → Loki → Grafana pipeline works