UDS Core
What is UDS Core?
UDS Core is a collection of several individual applications combined into a single Zarf Package that establishes a secure baseline for cloud-native systems. It comes equipped with comprehensive compliance documentation and prioritizes seamless support for highly regulated and egress-limited environments. Building upon the achievements of Platform One, UDS Core enhances the security stance introduced by Big Bang and introduces advanced automation through the UDS Operator and UDS Policy Engine.
UDS Core enables your team to:
- Deploy full mission environments and applications efficiently and securely.
- Leverage specific functional applications to deliver a versatile platform that caters to diverse mission objectives.
- Enhance the efficiency, security, and success of software delivery and operations processes.
Accomplishing Mission Objectives with Functional Applications
UDS leverages functional applications that are well-suited to perform the specific tasks required. These tools are carefully selected to ensure optimal performance and compatibility within the UDS landscape. By integrating functional tools into the platform, UDS ensures that Mission Heroes have access to cutting-edge technologies and best-in-class solutions for their missions.
Leveraging UDS Applications
Mission Heroes can leverage UDS Core Applications to tailor their mission environment and meet their unique requirements. By selecting and integrating specific tools into their deployments, your team can achieve a streamlined and secure software delivery process. Ranging from setting up a DevSecOps pipeline, enforcing security policies, or managing user identities, UDS Applications provide the necessary tools to accomplish mission objectives effectively.
UDS Core Dependency
A UDS Core dependency refers to the specific prerequisites and external elements required for the smooth operation of bundled tools. While UDS Applications are designed to offer distinct functionalities, some may necessitate external resources, services, or configurations to seamlessly integrate within a particular environment. These dependencies can include a wide range of components such as databases, security services, and networking tools.
1 - Application Baseline
UDS Core provides a foundational set of applications that form the backbone of a secure and efficient mission environment. Each application addresses critical aspects of microservices communication, monitoring, logging, security, compliance, and data protection. These applications are essential for establishing a reliable runtime environment and ensuring that mission-critical applications operate seamlessly.
By leveraging these applications within UDS Core, users can confidently deploy and operate source packages that meet stringent security and performance standards. UDS Core provides the applications and flexibility required to achieve diverse mission objectives, whether in cloud, on-premises, or edge environments. UDS source packages cater to the specific needs of Mission Heroes and their mission-critical operations. Below are some of the key applications offered by UDS Core:
Note
For optimal deployment and operational efficiency, it is important to deliver a UDS Core Bundle before deploying any other optional bundle (UDS or Mission). Failure to meet this prerequisite can increase the complexity of the deployment process. To ensure a seamless experience and to leverage the full potential of UDS capabilities, prioritize the deployment of UDS Core as the foundational step.
Core Baseline
| Capability | Application |
|---|---|
| Service Mesh | Istio: A powerful service mesh tool that provides traffic management, load balancing, security, and observability features. |
| Monitoring | Prometheus Stack: Collects and stores time-series data for insights into application health and performance. Grafana: Provides visualization and alerting capabilities for monitoring metrics. Metrics Server: Offers resource utilization metrics for Kubernetes clusters, aiding in capacity planning and optimization. |
| Logging | Loki: A log aggregation system that allows users to store, search, and analyze logs across their applications. Promtail: A companion agent that efficiently gathers and sends log data to Loki, simplifying log monitoring, troubleshooting, and compliance auditing, enhancing the overall observability of the mission environment. |
| Security and Compliance | NeuVector: Offers container-native security, protecting applications against threats and vulnerabilities. Pepr: UDS policy engine and operator for enhanced security and compliance. |
| Identity and Access Management | Keycloak: A robust open-source Identity and Access Management solution, providing centralized authentication, authorization, and user management for enhanced security and control over access to mission-critical resources. |
| Backup and Restore | Velero: Provides backup and restore capabilities for Kubernetes clusters, ensuring data protection and disaster recovery. |
| Authorization | AuthService: Offers centralized authorization services, managing access control and permissions within the mission environment. |
2 - Deploying UDS Core
2.1 - Distribution Support
UDS Core is a versatile software baseline designed to operate effectively across a variety of Kubernetes distributions. While it is not specifically tailored to any single Kubernetes distribution, it is compatible with multiple environments. This documentation provides an overview of UDS Core’s compatibility with different distributions and the level of support provided.
Understanding Support Levels
Supported: The Kubernetes distributions listed under this category undergo testing and are officially supported by UDS Core. Users can expect a high level of reliability and compatibility when deploying UDS Core on these distributions.
Compatible: Kubernetes distributions listed under this category may not have undergone extensive testing in UDS Core’s CI environments. While UDS Core may be compatible on these distributions, users should exercise caution and be prepared for potential compatibility issues or limitations.
| Distribution | Category | Support Level |
|---|---|---|
| K3d, Amazon EKS | Tested | Supported: Kubernetes distributions undergoing testing in CI environments. |
| RKE2 | Tested | Supported: Kubernetes distribution tested in production environments other than CI. |
| Other | Untested/Unknown state | Compatible: Kubernetes distributions that are not explicitly tested, documented, or supported by UDS Core. |
2.2 - Deploy UDS Core
Prerequisites
Please ensure that the following prerequisites are on your machine prior to deploying UDS Core:
- Docker, or as an open source alternative, you can use Colima.
- If using Colima, please declare the following resources after installing:
colima start --cpu 6 --memory 14 --disk 50
UDS Bundles
UDS Core provides published bundles that serve multiple purposes: you can utilize them for experimenting with UDS Core or for UDS Package development when you only require specific components of UDS Core. These bundles leverage UDS K3d to establish a local k3d cluster.
UDS Bundles deployed for development and testing purposes use a shared configuration that equips users with essential tools, emulating a development environment for convenience. If deploying to a production environment, users can modify variables and configurations to best fit specific mission needs by creating their own bundle.
Note
These UDS Bundles are designed specifically for development and testing environments and are not intended for production use. Additionally, they serve as examples for creating customized bundles.
Quickstart: Development and Test Environments
Step 1: Install the UDS CLI
It is recommended to update to the latest version; all releases can be found in the UDS CLI GitHub repository.
brew tap defenseunicorns/tap && brew install uds
Step 2: Deploy the UDS Bundle
The UDS Bundle being deployed in this example is the k3d-core-demo bundle, which creates a local k3d cluster with UDS Core installed.
uds deploy k3d-core-demo:0.20.0
# deploy this bundle?
y
For additional information on UDS Bundles, please see the UDS Bundles documentation.
Optional:
Use the following command to visualize resources in the cluster via k9s:
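The command is elided above. The uds-core repository exposes its tooling through UDS tasks; the task name below is an assumption, so confirm the exact name in the repository's tasks files:

```shell
# Launch k9s against the local k3d cluster (task name "k9s" is an assumption)
uds run k9s
```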
Step 3: Clean Up
Use the following command to tear down the k3d cluster:
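The command is elided above. UDS K3d creates the local cluster with a default name; assuming the default cluster name `uds`, teardown looks like:

```shell
# Delete the local k3d cluster (cluster name "uds" is the UDS K3d default, an assumption)
k3d cluster delete uds
```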
If you opted to use Colima, use the following command to tear down the virtual machine that the cluster was running on:
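Colima's own CLI handles VM teardown; a minimal sketch:

```shell
# Stop the Colima virtual machine, then delete it and its disk
colima stop
colima delete
```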
UDS Bundle Development
In addition to the demo bundle, there is also a k3d-slim-dev bundle designed specifically for working with UDS Core with only Istio, Keycloak, and Pepr installed. To use it, execute the following command:
uds deploy k3d-core-slim-dev:0.20.0
Developing UDS Core
UDS Core development leverages the uds zarf dev deploy command. To simplify the setup process, a dedicated UDS Task is available. Please ensure you have NodeJS version 20 or later installed before proceeding.
Below is an example of the workflow for developing the metrics-server package:
# Create the dev environment
uds run dev
# If developing the Pepr module:
npx pepr dev
# If not developing the Pepr module (can be run multiple times):
npx pepr deploy
# Deploy the package (can be run multiple times)
uds run dev-deploy --set PKG=metrics-server
Testing UDS Core
You can perform a complete test of UDS Core by running the following command:
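The command is elided above; this task appears elsewhere in these docs as `test-uds-core`, so assuming that task name:

```shell
# Create a local k3d cluster, install UDS Core, and run the CI test suite
uds run test-uds-core
```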
This command initiates the creation of a local k3d cluster, installs UDS Core, and executes a set of tests identical to those performed in CI. If you wish to run tests targeting a specific package, you can utilize the UDS_PKG environment variable.
The example below runs tests against the metrics-server package:
UDS_PKG=metrics-server uds run test-single-package
Note
You can specify the --set FLAVOR=registry1 flag to test using Iron Bank images instead of the upstream images.
3 - Configure UDS Core
3.1 - Monitoring and Metrics
UDS Core leverages Pepr to handle setup of Prometheus scraping metrics endpoints, with the particular configuration necessary to work in a STRICT mTLS (Istio) environment. We handle this with both mutations of existing service monitors and generation of service monitors via the Package CR.
Mutations
All service monitors are mutated to set the scrape scheme to HTTPS and set the TLS Config to what is required for Istio mTLS scraping (see this doc for details). Beyond this, no other fields are mutated. Supporting existing service monitors is useful since some charts include service monitors by default with more advanced configurations, and it is in our best interest to enable those and use them where possible.
Assumptions are made about STRICT mTLS here for simplicity, based on the istio-injection namespace label. Without making these assumptions we would need to query PeerAuthentication resources or another resource to determine the exact workload mTLS posture.
Note: This mutation is the default behavior for all service monitors but can be skipped using the annotation key uds/skip-sm-mutate (with any value). Skipping this mutation should only be done if your service exposes metrics on a PERMISSIVE mTLS port.
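For illustration, a mutated scrape endpoint might look roughly like the sketch below; the certificate file paths are an assumption based on the common pattern of mounting Istio-provided certs for Prometheus scraping, not a guaranteed reflection of the exact mutation:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: example
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
    - port: metrics
      scheme: https            # mutated from the default http
      tlsConfig:               # mutated for Istio mTLS scraping (paths are illustrative)
        caFile: /etc/prom-certs/root-cert.pem
        certFile: /etc/prom-certs/cert-chain.pem
        keyFile: /etc/prom-certs/key.pem
        insecureSkipVerify: true
```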
Package CR monitor field
UDS Core also supports generating service monitors from the monitor list in the Package spec. Charts do not always support service monitors, so generating them can be useful. This also provides a simplified way for other users to create service monitors, similar to the way we handle VirtualServices today. A full example of this can be seen below:
...
spec:
monitor:
- selector: # Selector for the service to monitor
app: foobar
portName: metrics # Name of the port to monitor
targetPort: 1234 # Corresponding target port on the pod/container (for network policy)
# Optional properties depending on your application
description: "Metrics" # Add to customize the service monitor name
podSelector: # Add if pod labels are different than `selector` (for network policy)
app: barfoo
path: "/mymetrics" # Add if metrics are exposed on a different path than "/metrics"
This config is used to generate service monitors and corresponding network policies to set up scraping for your applications. The ServiceMonitors will go through the mutation process to add tlsConfig and scheme to work in an Istio environment.
This spec intentionally does not support all options available with a ServiceMonitor. While we may add additional fields in the future, we do not want to simply rebuild the ServiceMonitor spec since mutations are already available to handle Istio specifics. The current subset of spec options is based on the bare minimum necessary to craft resources.
NOTE: While this is a rather verbose spec, each of the above fields is strictly required to craft the necessary service monitor and network policy resources.
Notes on Alternative Approaches
In coming up with this feature a few alternative approaches were considered but not chosen due to issues with each one. The current spec provides the best balance of a simplified interface compared to the ServiceMonitor spec, and a faster/easier reconciliation loop.
Generation based on service lookup
An alternative spec option would use the service name instead of selectors/port name. The service name could then be used to lookup the corresponding service and get the necessary selectors/port name (based on numerical port). There are however 2 issues with this route:
- There is a timing issue if the Package CR is applied to the cluster before the app chart itself (which is the norm with our UDS Packages). The service would not exist at the time the Package is reconciled. We could lean into eventual consistency here, if we implemented a retry mechanism for the Package, which would mitigate this issue.
- We would need an "alert" mechanism (watch) to notify us when the service(s) are updated, to roll the corresponding updates to network policies and service monitors. While this is doable it feels like unnecessary complexity compared to other options.
Generation of service + monitor
Another alternative approach would be to use a pod selector and port only. We would then generate both a service and servicemonitor, giving us full control of the port names and selectors. This seems like a viable path, but does add an extra resource for us to generate and manage. There could be unknown side effects of generating services that could clash with other services (particularly with istio endpoints). This would otherwise be a relatively straightforward approach and is worth evaluating again if we want to simplify the spec later on.
3.2 - UDS Operator
The UDS Operator plays a pivotal role in managing the lifecycle of UDS Package Custom Resources (CRs) along with their associated resources like NetworkPolicies and Istio VirtualServices. Leveraging Pepr, the operator binds watch operations to the enqueue and reconciler, taking on several key responsibilities for UDS Packages and exemptions:
Package
- Enabling Istio Sidecar Injection:
- The operator facilitates the activation of Istio sidecar injection within namespaces where the CR is deployed.
- Establishing Default-Deny Ingress/Egress Network Policies:
- It sets up default-deny network policies for both ingress and egress, creating a foundational security posture.
- Implementing Layered Allow-List Approach:
- A layered allow-list approach is applied on top of default-deny network policies. This includes essential defaults like Istio requirements and DNS egress.
- Providing Targeted Remote Endpoints Network Policies:
- The operator creates targeted network policies for remote endpoints, such as KubeAPI and CloudMetadata. This approach aims to enhance policy management by reducing redundancy (DRY) and facilitating dynamic bindings in scenarios where static definitions are impractical.
- Creating Istio Virtual Services and Related Ingress Gateway Network Policies:
- In addition, the operator is responsible for generating Istio Virtual Services and the associated network policies for the ingress gateway.
Example UDS Package CR
apiVersion: uds.dev/v1alpha1
kind: Package
metadata:
name: grafana
namespace: grafana
spec:
network:
# Expose rules generate Istio VirtualServices and related network policies
expose:
- service: grafana
selector:
app.kubernetes.io/name: grafana
host: grafana
gateway: admin
port: 80
targetPort: 3000
# Allow rules generate NetworkPolicies
allow:
- direction: Egress
selector:
app.kubernetes.io/name: grafana
remoteGenerated: Anywhere
- direction: Egress
remoteNamespace: tempo
remoteSelector:
app.kubernetes.io/name: tempo
port: 9411
description: "Tempo"
# SSO allows for the creation of Keycloak clients with automatic secret generation
sso:
- name: Grafana Dashboard
clientId: uds-core-admin-grafana
redirectUris:
- "https://grafana.admin.uds.dev/login/generic_oauth"
Exemption
- Exemption Scope:
- Granting exemption for custom resources is restricted to the uds-policy-exemptions namespace by default, unless specifically configured to allow exemptions across all namespaces.
- Policy Updates:
- Updating the policies Pepr store with registered exemptions.
Example UDS Exemption CR
apiVersion: uds.dev/v1alpha1
kind: Exemption
metadata:
name: neuvector
namespace: uds-policy-exemptions
spec:
exemptions:
- policies:
- DisallowHostNamespaces
- DisallowPrivileged
- RequireNonRootUser
- DropAllCapabilities
- RestrictHostPathWrite
- RestrictVolumeTypes
matcher:
namespace: neuvector
name: "^neuvector-enforcer-pod.*"
- policies:
- DisallowPrivileged
- RequireNonRootUser
- DropAllCapabilities
- RestrictHostPathWrite
- RestrictVolumeTypes
matcher:
namespace: neuvector
name: "^neuvector-controller-pod.*"
- policies:
- DropAllCapabilities
matcher:
namespace: neuvector
name: "^neuvector-prometheus-exporter-pod.*"
Example UDS Package CR with SSO Templating
By default, UDS generates a secret for the Single Sign-On (SSO) client that encapsulates all client contents as an opaque secret. In this setup, each key within the secret corresponds to its own environment variable or file, based on the method used to mount the secret. If customization of the secret rendering is required, basic templating can be achieved using the secretTemplate property. Below are examples showing this functionality. To see how templating works, please see the Regex website.
apiVersion: uds.dev/v1alpha1
kind: Package
metadata:
name: grafana
namespace: grafana
spec:
sso:
- name: My Keycloak Client
clientId: demo-client
redirectUris:
- "https://demo.uds.dev/login"
# Customize the name of the generated secret
secretName: my-cool-auth-client
secretTemplate:
# Raw text examples
rawTextClientId: "clientField(clientId)"
rawTextClientSecret: "clientField(secret)"
# JSON example
auth.json: |
{
"client_id": "clientField(clientId)",
"client_secret": "clientField(secret)",
"defaultScopes": clientField(defaultClientScopes).json(),
"redirect_uri": "clientField(redirectUris)[0]",
"bearerOnly": clientField(bearerOnly),
}
# Properties example
auth.properties: |
client-id=clientField(clientId)
client-secret=clientField(secret)
default-scopes=clientField(defaultClientScopes)
redirect-uri=clientField(redirectUris)[0]
# YAML example (uses JSON for the defaultScopes array)
auth.yaml: |
client_id: clientField(clientId)
client_secret: clientField(secret)
default_scopes: clientField(defaultClientScopes).json()
redirect_uri: clientField(redirectUris)[0]
bearer_only: clientField(bearerOnly)
Configuring UDS Core Policy Exemptions
Default policy exemptions are confined to a singular namespace: uds-policy-exemptions. We find this to be an optimal approach for UDS due to the following reasons:
- Emphasis on Security Impact:
- An exemption has the potential to diminish the overall security stance of the cluster. By isolating these exemptions within a designated namespace, administrators can readily recognize and assess the security implications associated with each exemption.
- Simplified RBAC Maintenance:
- Adopting this pattern streamlines the management of Role-Based Access Control (RBAC) for overseeing exemptions. Placing all UDS exemptions within a dedicated namespace simplifies the task of configuring and maintaining RBAC policies, enhancing overall control and transparency.
- Mitigation of Configuration Risks:
- By restricting exemptions to a specific namespace, the risk of unintentional misconfigurations in RBAC is significantly reduced. This ensures that cluster exemptions are only granted intentionally and within the confines of the designated namespace, minimizing the potential for security vulnerabilities resulting from misconfigured permissions.
Allow All Namespaces
If you find that the default scoping is not the right approach for your cluster, you have the option to configure UDS-CORE at deploy time to allow exemption CRs in all namespaces:
zarf package deploy zarf-package-uds-core-*.zst --set ALLOW_ALL_NS_EXEMPTIONS=true
You can also achieve this through the uds-config.yaml:
options:
# options here
shared:
ALLOW_ALL_NS_EXEMPTIONS: "true"
variables:
# package specific variables here
Key Files and Folders
src/pepr/operator/
├── controllers # Core business logic called by the reconciler
│ ├── exemptions # Manages updating Pepr store with exemptions from UDS Exemption
│ ├── istio # Manages Istio VirtualServices and sidecar injection for UDS Packages/Namespace
│ ├── keycloak # Manages Keycloak client syncing
│ └── network # Manages default and generated NetworkPolicies for UDS Packages/Namespace
├── crd
│ ├── generated # Type files generated by `uds run -f src/pepr/tasks.yaml gen-crds`
│ ├── sources # CRD source files
│ ├── migrate.ts # Migrates older versions of UDS Package CRs to new version
│ ├── register.ts # Registers the UDS Package CRD with the Kubernetes API
│ └── validators # Validates Custom Resources with Pepr
├── index.ts # Entrypoint for the UDS Operator
└── reconcilers # Reconciles Custom Resources via the controllers
3.3 - Configuring Policy Exemptions
By default, policy exemptions (UDSExemptions) are only allowed in a single namespace, uds-policy-exemptions. We recognize this is not a conventional pattern in K8s, but believe it is ideal for UDS for the following reasons:
- highlights the fact that an exemption can reduce the overall security posture of the cluster
- makes maintaining RBAC for controlling exemptions more straightforward
- reduces the risk that an unintentional mis-configuration of RBAC allows a cluster exemption that would otherwise be denied
Allow All Namespaces
If you believe that the default scoping is not the right approach for your cluster, you can configure UDS-CORE at deploy time to allow exemption CRs in all namespaces.
zarf package deploy zarf-package-uds-core-*.zst --set ALLOW_ALL_NS_EXEMPTIONS=true
or via a uds bundle config:
uds-config.yaml
options:
# options here
shared:
ALLOW_ALL_NS_EXEMPTIONS: "true"
variables:
# package specific variables here
3.4 - User Groups
UDS Core deploys Keycloak, which has some preconfigured groups that applications inherit through their SSO and IDP configurations.
Applications
Grafana
Grafana maps the groups from Keycloak to its internal Admin and Viewer groups.
| Keycloak Group | Mapped Grafana Group |
|---|---|
| Admin | Admin |
| Auditor | Viewer |
If a user doesn’t belong to either of these Keycloak groups, the user will be unauthorized when accessing Grafana.
Neuvector
Neuvector maps the groups from Keycloak to its internal admin and reader groups.
| Keycloak Group | Mapped Neuvector Group |
|---|---|
| Admin | admin |
| Auditor | reader |
Keycloak
Note
All groups are under the Uds Core parent group. Frequently a group will be referred to as Uds Core/Admin or Uds Core/Auditor. In the Keycloak UI this requires an additional click to get down to the sub groups.
Identity Providers (IDP)
UDS Core ships with a templated Google SAML IDP; see the uds-identity-config documentation for configuring the realmInitEnv values. Alternatively, the realmInitEnv can be configured via bundle overrides, as in the k3d-standard-bundle.
Configuring your own IDP can be achieved via:
- Custom uds-identity-config with a templated realm.json
- Keycloak Admin UI and click ops
- Custom realm.json for direct import in Keycloak
4 - UDS Core Development
4.1 - Development Maintenance
UDS Bundle [name]
How to upgrade this bundle
[Description and steps for upgrading this UDS bundle. Include any historic problems to watch out for]
5 - UDS Identity Config
What is UDS Identity Config?
UDS Identity Config supplies the configuration for Keycloak to UDS Core.
5.1 - Customization
These docs demonstrate how to customize the uds-core Identity (Keycloak) deployment by updating or changing the config image.
Testing custom image in UDS Core
Build a new image
# create a dev image uds-core-config:keycloak
uds run dev-build
# optionally, retag and publish to temporary registry for testing
docker tag uds-core-config:keycloak ttl.sh/uds-core-config:keycloak
docker push ttl.sh/uds-core-config:keycloak
Update UDS Core references
The custom image reference will need to be updated in a few places in the uds-core repository:
Deploy UDS Core
# build and deploy uds-core
uds run test-uds-core
See UDS Core for further details
Customizations
Add additional jars
Adding additional jars to Keycloak’s deployment is as simple as adding that jar to the src/extra-jars directory.
Adding new jars will require building a new identity-config image for uds-core.
See Testing custom image in UDS Core for building, publishing, and using the new image with uds-core.
Once uds-core has successfully deployed with your new image, viewing the Keycloak pod can provide insight into whether the deployment succeeded. Describing the Keycloak pod should also show your new image being pulled, instead of the default image defined here, in the events section.
Customize Theme
Official Theming Docs
Changes can be made to the src/theme directory. At this time only Account and Login themes are included, but could be changed to include email, admin, and welcome themes as well.
Testing Changes
To test the identity-config theme changes, a local running Keycloak instance is required.
Don’t have a local Keycloak instance? The simplest testing path is utilizing uds-core, specifically the dev-identity task. This will create a k3d cluster with Istio, Pepr, Keycloak, and Authservice.
Once that cluster is up and healthy and after making theme changes:
Execute this command:
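The command is elided above; uds-identity-config drives this through a UDS task, and assuming it is named `dev-theme` (confirm in the repository's tasks.yaml):

```shell
# Rebuild the theme and push it into the running Keycloak (task name is an assumption)
uds run dev-theme
```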
View the changes in the browser
Customizing Realm
The UDS Identity realm is defined in the realm.json found in src/realm.json. This can be modified and will require a new uds-identity-config image for uds-core.
Note
Be aware that changing values in the realm may also require updates throughout the configuration of Keycloak and Authservice in uds-core. For example, changing the realm name will break a few different things within Keycloak unless those values are changed in uds-core as well.
See Testing custom image in UDS Core for building, publishing, and using the new image with uds-core.
Templated Realm Values
Keycloak supports using environment variables within the realm configuration, see docs.
These environment variables have default values set in the realm.json that uses the following syntax:
${REALM_GOOGLE_IDP_ENABLED:false}
In the uds-core keycloak values.yaml, the realmInitEnv defines a set of environment variables that can be used to configure the realm differently from the default values.
These environment variables are created with a REALM_ prefix to avoid collisions with Keycloak environment variables. Any additional template variables added within the realm.json must likewise be prefixed with REALM_.
For example, this bundle override contains all the available overrides:
overrides:
keycloak:
keycloak:
values:
path: realmInitEnv
value:
GOOGLE_IDP_ENABLED: true
GOOGLE_IDP_ID: <fill in value here>
GOOGLE_IDP_SIGNING_CERT: <fill in value here>
GOOGLE_IDP_NAME_ID_FORMAT: <fill in value here>
GOOGLE_IDP_CORE_ENTITY_ID: <fill in value here>
GOOGLE_IDP_ADMIN_GROUP: <fill in value here>
GOOGLE_IDP_AUDITOR_GROUP: <fill in value here>
EMAIL_VERIFICATION_ENABLED: true
OTP_ENABLED: true
TERMS_AND_CONDITIONS_ENABLED: true
PASSWORD_POLICY: <fill in value here>
X509_OCSP_FAIL_OPEN: true
These environment variables can be found in the realm.json identityProviders section.
Customize Truststore
The default truststore is configured in a script and executed in the Dockerfile. There are a few different ways the script could be customized.
Build test authorized_certs.zip
Utilizing the regenerate-test-pki task, you can create a test authorized_certs.zip to use for the truststore. To use the regenerate-test-pki task:
Create csr.conf
[req]
default_bits = 2048
default_keyfile = key.pem
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ext
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = US
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Colorado
localityName = Locality Name (eg, city)
localityName_default = Colorado Springs
organizationName = Organization Name (eg, company)
organizationName_default = Defense Unicorns
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = uds.dev
[req_ext]
subjectAltName = @alt_names
[v3_ext]
subjectAltName = @alt_names
[alt_names]
DNS.0 = *.uds.dev
# Generates new authorized_certs.zip
uds run regenerate-test-pki
Update Dockerfile and build image
Update CA_ZIP_URL in the Dockerfile to refer to the generated authorized_certs.zip:
ARG CA_ZIP_URL=authorized_certs.zip
Build config image
# build image
uds run dev-build
Note
If you’re getting errors from the ca-to-jks.sh script, verify your zip folder is in the correct directory.
# In `uds-core` create cacert from the new identity-config image
uds run -f src/keycloak/tasks.yaml cacert --set IMAGE_NAME=<identity config image> --set VERSION=<identity config image version>
# Update tenant and admin gateway with generated cacerts
uds run -f src/keycloak/tasks.yaml dev-cacert
Deploy UDS Core with new uds-identity-config
See Testing custom image in UDS Core
Verify Istio Gateway configuration
# Verify the "Acceptable client certificate CA names"
openssl s_client -connect sso.uds.dev:443
Custom Plugin
Note
This isn’t recommended; however, it can be achieved if necessary.
Note
Making these changes iteratively and importing into Keycloak to create a new realm can help to alleviate typos and misconfigurations. This is also the quickest solution for testing without having to create, build, and deploy new images each time.
The plugin provides the auth flows that keycloak uses for x509 (CAC) authentication as well as some of the surrounding registration flows.
One nuanced auth flow is the creation of a Mattermost ID attribute for users. CustomEventListener is responsible for generating the unique ID.
Note
When creating a user via ADMIN API or ADMIN UI, the ‘REGISTER’ event is not triggered, resulting in no Mattermost ID attribute generation. This will need to be done manually via click ops or the api. An example of how the attribute can be set via api can be seen here.
Developing
See PLUGIN.md.
Configuration
In addition, modify the realm for Keycloak; otherwise the realm will require plugin capabilities for registering and authenticating users. In the current realm.json there are a few sections specifically using the plugin capabilities. The following changes are necessary:
Remove all of the UDS ... authenticationFlows:
- UDS Authentication
- UDS Authentication Browser - Conditional OTP
- UDS Registration
- UDS Reset Credentials
- UDS registration form
Make changes to the authenticationExecutions from the browser authenticationFlow:
- Remove auth-cookie
- Remove auth-spnego
- Remove identity-provider-redirector
- Update the remaining authenticationFlow: "requirement": "REQUIRED", "flowAlias": "Authentication"

Remove the registration-profile-action authenticationExecution from the registration form authenticationFlow.
Update the realm flows:
"browserFlow": "browser"
"registrationFlow": "registration"
"resetCredentialsFlow": "reset credentials"
Disabling
If desired, the plugin can be removed from the identity-config image by commenting out these lines in the Dockerfile:
COPY plugin/pom.xml .
COPY plugin/src ../src
RUN mvn clean package
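After commenting them out, that section of the Dockerfile would simply read:

```dockerfile
# Plugin build disabled -- the plugin jar is no longer built into the image:
# COPY plugin/pom.xml .
# COPY plugin/src ../src
# RUN mvn clean package
```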
Building New Image with Updates
Once satisfied with the changes and having tested that they work, see Testing custom image in UDS Core for building, publishing, and using the new image with uds-core.
Transport Custom Image with Zarf
For convenience, a Zarf package definition has been included to simplify custom image transport and installation in air-gapped systems.
Build the Zarf package
Use the included UDS task to build the custom image and package it with Zarf:
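The included package definition is not reproduced here; as an illustrative sketch only (the package name, version, and component layout are assumptions), a minimal zarf.yaml carrying the custom image might look like:

```yaml
kind: ZarfPackageConfig
metadata:
  name: uds-identity-config-custom # hypothetical package name
  version: 0.1.0
components:
  - name: identity-config-image
    required: true
    images:
      # the locally built custom image from the steps above
      - uds-core-config:keycloak
```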
5.2 - Plugin
The plugin provides the auth flows that Keycloak uses for x509 (CAC) authentication, as well as some of the surrounding registration flows.
One nuanced auth flow is the creation of a Mattermost ID attribute for users; the CustomEventListener is responsible for generating this unique ID.
Note
When creating a user via the Admin API or Admin UI, the REGISTER event is not triggered, so no Mattermost ID attribute is generated. The attribute will need to be set manually via click ops or the API. An example of how the attribute can be set via the API can be seen here.
Requirements
Working on the plugin requires JDK17+ and Maven 3.5+.
# local java version
java -version
# local maven version
mvn -version
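If you want to sanity-check those minimums in a script, a minimal sketch (a helper of our own, using GNU `sort -V` for the comparison) might look like:

```shell
# Return success if version $1 >= version $2, comparing dot-separated
# numeric fields via version sort.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the documented requirements (JDK 17+, Maven 3.5+):
version_ge "17.0.2" "17" && echo "JDK 17.0.2 satisfies the 17+ requirement"
version_ge "3.9.6" "3.5" && echo "Maven 3.9.6 satisfies the 3.5+ requirement"
```

In practice you would feed it the versions parsed out of `java -version` and `mvn -version`.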
Plugin Testing with Keycloak
After making changes to the plugin code and verifying that unit tests are passing (and hopefully writing some more), test against Keycloak.
See the New uds-identity-config Image section in CUSTOMIZE.md for building, publishing, and using the new image with uds-core.
Plugin Unit Testing / Code Coverage
The maven surefire and jacoco plugins are configured in the pom.xml.
Note
mvn commands will need to be executed from inside the src/plugin directory.
Note
There is a uds-cli task for running the mvn clean verify command: uds run dev-plugin.
Some important commands that can be used when developing/testing on the plugin:
| Command | Description |
|---|---|
| mvn clean install | Cleans up build artifacts, then builds the project and installs it into the local maven repository. |
| mvn clean test | Cleans up build artifacts, then compiles the source code and runs all tests in the project. |
| mvn clean test -Dtest=com.defenseunicorns.uds.keycloak.plugin.X509ToolsTest | Same as mvn clean test, but runs only the tests in the designated file. |
| mvn surefire-report:report | Runs mvn clean test and then generates the surefire-report.html file in target/site. |
| mvn clean verify | Cleans the project, runs tests, and generates both surefire and jacoco reports. |
Viewing the Test Reports
# maven command from src/plugin directory
mvn clean verify
Open the src/plugin/target/site/surefire-report.html file in your browser to view the surefire test report.
Open the src/plugin/target/site/jacoco/index.html file in your browser to view the unit test coverage report generated by jacoco.
Both reports are rewritten in place each time they are regenerated, so there is no need to reopen them; just refresh the browser.
5.3 - Integration Testing For UDS Identity Config + UDS Core
Cypress Web Flow/Integration Testing Docs
Implemented Tests
Cypress Testing
Using the uds-cli task uds-core-integration-tests.
Task explanation:
- Cleanup an existing uds-core directory (mainly for local testing)
- Create a docker image that uses the new certs as well as a testing realm.json (has a defined user, no MFA, and no email verification)
- Clone uds-core, which is necessary for setting up the k3d cluster to test against
- Use that cacert in deploying the uds-core istio gateways
- Create a zarf package that combines uds-core and identity-config
- Setup a k3d cluster by utilizing uds-core (istio, keycloak, pepr, zarf)
- Deploy the zarf package that was created earlier
- Run cypress tests against the deployed cluster
Updating Cypress Certs
Cypress testing requires that a ca.cer be created and put into an authorized_certs.zip; this is done with the regenerate-test-pki uds task, and the zip is then utilized by the Dockerfile. Once a docker image has been created, the cacert uds task pulls that cacert from the image, and the uds-core-gateway-cacert uds task uses its value to configure uds-core's gateways. Eventually cypress will require a pfx cert for its CAC testing.
Our cypress testing utilizes static certs that are created and saved to limit the need for constantly rebuilding and importing those certs.
Follow these steps to update the certs for cypress:
- Run uds run regenerate-test-pki to regenerate the necessary certs and authorized_certs.zip
- Run docker build --build-arg CA_ZIP_URL="authorized_certs.zip" -t uds-core-config:keycloak --no-cache src to create the docker image
- Run uds run cacert to extract the cacert from the docker image for the tls_cacert.yaml file
- Copy the authorized_certs.zip, test.pfx, and tls_cacert.yaml into the certs directory:
  mv test.pfx tls_cacert.yaml src/authorized_certs.zip src/cypress/certs/