
Query application logs

After completing this guide you will be able to find, filter, and analyze logs from any workload in your cluster using Grafana’s Explore interface and LogQL — the query language for Loki.

Prerequisites

  • UDS CLI installed
  • Access to a Kubernetes cluster with UDS Core deployed (logging is enabled by default)
  • Access to the Grafana admin UI (https://grafana.<admin_domain>)

UDS Core’s Vector DaemonSet automatically collects stdout/stderr from every pod, along with node logs from /var/log/*. Vector enriches each log entry with Kubernetes metadata before shipping it to Loki. You can use these labels to filter and query logs:

| Label | Source | Example |
| --- | --- | --- |
| namespace | Pod namespace | kube-system |
| app | app.kubernetes.io/name label; falls back to the app pod label, then the pod owner, then the pod name | loki |
| component | app.kubernetes.io/component label; falls back to the component pod label | write |
| job | {namespace}/{app} | loki/loki |
| container | Container name | loki |
| host | Node name | node-1 |
| filename | Log file path | /var/log/pods/... |
| collector | Always vector | vector |
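These labels combine into LogQL stream selectors, the building block of every query in the steps below. As a small illustrative sketch (not part of UDS Core or Loki), a Python helper can assemble a selector string from a label map:

```python
def build_selector(labels: dict[str, str]) -> str:
    """Build a LogQL stream selector like {namespace="my-app", app="loki"}."""
    matchers = ", ".join(f'{key}="{value}"' for key, value in labels.items())
    return "{" + matchers + "}"

# Select logs written by Loki's write component
print(build_selector({"namespace": "loki", "component": "write"}))
# {namespace="loki", component="write"}
```

Any label from the table above can appear in a selector; more matchers mean fewer streams for Loki to scan.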
  1. Open Grafana Explore

    Navigate to Grafana (https://grafana.<admin_domain>), then select Explore from the left sidebar. In the datasource dropdown at the top, select Loki. Adjust the time range picker in the top-right corner to cover the period you want to search.

  2. Filter logs by label

    Start with a stream selector — a set of label matchers inside curly braces. This is the most efficient way to narrow results because Loki indexes labels, not log content. Switch to Code mode (toggle in the top-right of the query editor) to paste LogQL queries directly.

    # All logs from a specific namespace
    {namespace="my-app"}
    # Logs from a specific application
    {app="keycloak"}
    # Combine labels to narrow further
    {namespace="loki", component="write"}
  3. Search log content

    After selecting a stream, add line filters to search within log messages:

    # Lines containing "error" (case-sensitive)
    {namespace="my-app"} |= "error"
    # Exclude health checks
    {namespace="my-app"} != "healthcheck"
    # Regex match for multiple patterns
    {namespace="my-app"} |~ "timeout|deadline|connection refused"
    # Case-insensitive search
    {namespace="my-app"} |~ "(?i)error"

    You can chain multiple filters. Each filter narrows the results further:

    {namespace="my-app"} |= "error" != "healthcheck" != "metrics"
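The chaining behavior can be pictured as a pipeline: each stage only sees lines that survived the previous one. The Python sketch below is a hypothetical illustration of that semantics, not how Loki is implemented:

```python
def apply_filters(lines, contains=(), excludes=()):
    """Mimic chained LogQL line filters: |= keeps matching lines, != drops them."""
    for line in lines:
        if all(needle in line for needle in contains) and \
           not any(needle in line for needle in excludes):
            yield line

logs = [
    "GET /healthz 200",
    "error: connection refused",
    "error: healthcheck failed",
    "error while writing metrics",
]
# Equivalent to: |= "error" != "healthcheck" != "metrics"
print(list(apply_filters(logs, contains=("error",), excludes=("healthcheck", "metrics"))))
# ['error: connection refused']
```

Note that, like `|=`, the substring checks here are case-sensitive; use the `|~ "(?i)..."` form shown above for case-insensitive matches.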
  4. Parse and extract fields

    Use parser expressions to extract structured data from log lines:

    # Parse JSON logs and filter on extracted fields
    {namespace="my-app"} | json | status_code >= 500
    # Parse key=value formatted logs
    {namespace="my-app"} | logfmt | level="error"
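To see what `| logfmt` extracts from a line, here is a simplified Python sketch of key=value parsing (Loki's actual parser handles escaping and edge cases more thoroughly):

```python
import shlex

def parse_logfmt(line: str) -> dict[str, str]:
    """Simplified logfmt parsing: split on whitespace, honoring double quotes."""
    fields = {}
    for token in shlex.split(line):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

print(parse_logfmt('level=error msg="write failed" attempts=3'))
# {'level': 'error', 'msg': 'write failed', 'attempts': '3'}
```

After parsing, each extracted key becomes a label you can filter on, which is what `| logfmt | level="error"` does in the query above.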
  5. Aggregate with metric queries

    LogQL can compute metrics from log streams, useful for spotting patterns:

    # Per-app error rate within a namespace over 5-minute windows
    sum(rate({namespace="my-app"} |= "error" [5m])) by (app)
    # Count of log lines per application in the last hour
    sum(count_over_time({namespace="my-app"} [1h])) by (app)
    # Top 5 noisiest applications by log volume
    topk(5, sum(rate({namespace="my-app"} [5m])) by (app))
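These same queries can be run outside Grafana against Loki's HTTP API. The sketch below builds a `/loki/api/v1/query_range` request using only the Python standard library; the in-cluster service URL is an assumption and will differ per deployment:

```python
import time
from urllib.parse import urlencode
from urllib.request import urlopen  # used by the commented-out call below

# Assumed in-cluster address; adjust for your deployment
LOKI_URL = "http://loki-gateway.loki.svc.cluster.local"

def query_range_url(query: str, minutes: int = 60, step: str = "1m") -> str:
    """Build a Loki query_range URL covering the last `minutes` of data."""
    end = int(time.time())
    params = urlencode({
        "query": query,
        "start": end - minutes * 60,
        "end": end,
        "step": step,
    })
    return f"{LOKI_URL}/loki/api/v1/query_range?{params}"

url = query_range_url('sum(rate({namespace="my-app"} |= "error" [5m])) by (app)')
# To execute from inside the cluster:
# with urlopen(url) as resp:
#     print(resp.read())
```

This is handy for scripting checks that the Grafana UI steps above perform interactively.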
  6. Use live tail for real-time debugging

    In Grafana Explore, click the Live button in the top-right corner to stream logs in real time. This is useful when actively debugging a deployment or watching for specific events. Enter a stream selector and optional line filters, then click Start to begin tailing.

Confirm the queries above return log results in Grafana Explore. If you see log entries, the logging pipeline is working correctly.

Symptom: Loki does not appear in the datasource dropdown in Grafana Explore.

Solution: Navigate to Administration → Data sources in Grafana and confirm a Loki datasource exists. UDS Core provisions this automatically — if it’s missing, check that the Loki pods are running and the Grafana deployment has completed successfully:

uds zarf tools kubectl get pods -n loki
uds zarf tools kubectl get pods -n grafana

Symptom: Query returns empty results even for namespaces you know are active.

Solution: Check the time range selector in the top-right corner of Grafana Explore; the default range may be too narrow. Expand it to “Last 1 hour” or “Last 6 hours”. If results are still empty, confirm the Vector pods are running:

uds zarf tools kubectl get pods -n vector

Symptom: Grafana shows a “too many outstanding requests” error when running a query.

Solution: Narrow your query with more specific label selectors and a shorter time range. Avoid querying across all namespaces with broad time windows. Add label filters to reduce the number of streams Loki needs to scan.