Not long ago, Nubus for Kubernetes saw its first release, version 1.0. Now version 1.5 is already available; alongside minor bug fixes and improvements, it primarily enables redundant operation of the identity store.
LDAP Directory Service – Scalable and Highly Available
With this version of Nubus for Kubernetes, the LDAP directory service can now be operated not only scaled out for high loads but also redundantly to protect against failures. The directory service runs in three layers, each with its own tasks; a sketch of how these layers can surface as Kubernetes Services follows the list:
- LDAP Primary
The primary LDAP nodes act as the “source of truth” for the data of the directory service. They store user and group information and accept updates for this data. From now on, two of these nodes can be operated in parallel. If one of these nodes fails, the other seamlessly takes over its tasks while the failed node is rebuilt or repaired. Both nodes continuously synchronize their data to ensure an uninterrupted handover without data loss.
- LDAP Secondary
The secondary LDAP nodes provide user and group information for read access by connected applications. They maintain a local copy of the data, obtained from the primary nodes. They answer read requests only and refer write requests to the primary nodes. Multiple secondary nodes can run in parallel to handle high loads, such as many simultaneously active users.
- LDAP Proxy
The proxy nodes serve as the gateway for access by connected applications. They receive requests and forward them to the secondary nodes. Write requests, or requests a secondary node cannot answer, are forwarded to a primary node instead. The response is then sent back to the requesting application.
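To make the layering concrete, the following sketch shows how the three layers might surface as Kubernetes Services, with applications connecting only to the proxy. All names, labels, and ports here are illustrative assumptions, not the objects the Nubus Helm chart actually creates:

```yaml
# Illustrative only: three Services, one per layer. Applications talk to the
# proxy; the secondary and primary Services are internal to the stack.
apiVersion: v1
kind: Service
metadata:
  name: ldap-proxy            # entry point for connected applications
spec:
  selector:
    app: ldap-proxy
  ports:
    - port: 389               # standard LDAP port
---
apiVersion: v1
kind: Service
metadata:
  name: ldap-secondary        # read-only replicas behind the proxy
spec:
  selector:
    app: ldap-secondary
  ports:
    - port: 389
---
apiVersion: v1
kind: Service
metadata:
  name: ldap-primary          # write path; referred requests land here
spec:
  selector:
    app: ldap-primary
  ports:
    - port: 389
```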
With the Nubus Helm chart, an operator can configure how many instances (“replicas”) Kubernetes should create for each of these layers. This makes it possible to establish, in a short time, a setup that complies with the requirements of the BSI IT-Grundschutz.
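As a minimal sketch of such a configuration, a Helm values excerpt could look like the following. The key names are hypothetical; the authoritative structure is the values.yaml shipped with the Nubus Helm chart:

```yaml
# Hypothetical values excerpt -- consult the chart's values.yaml for the
# actual keys and defaults.
ldapServer:
  primary:
    replicas: 2     # up to two mirrored primaries (new in Nubus 1.5)
  secondary:
    replicas: 3     # scale out read capacity for connected applications
  proxy:
    replicas: 2     # redundant gateways in front of the secondaries
```

Applied with a standard `helm upgrade --install … -f values.yaml`, Kubernetes then keeps the requested number of instances running in each layer.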
Highly Available Primary Nodes
With Nubus 1.5, it is possible to define up to two primary nodes that mirror each other. To ensure data consistency, exactly one node is always designated as “leader.” This node handles all (write) requests, processes them, and mirrors them to the other node. If the leader node fails, the other node is automatically declared the new leader. The system remains operational while Kubernetes replaces the failed node, for example, by restarting it on another Kubernetes “worker node.”
This is ensured technically through the use of “Kubernetes Leases”. A so-called “sidecar” container runs alongside each node; it continuously checks whether the LDAP service is reachable and applies for a “lease” from the Kubernetes orchestrator (comparable to a relay baton). Once a node holds the lease, the network service (Kubernetes Service) is configured to route all connections to that node. In the event of a problem, such as the node crashing or the LDAP service becoming unresponsive, the sidecar stops applying for the lease. Kubernetes then grants the lease to the other node, and the network service is reconfigured so that the new leader node takes over all requests.
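The moving parts can be pictured with two objects: the Lease the sidecars compete for, and a Service whose selector matches only the current leader. Both manifests below are illustrative assumptions about names and labels, not the objects the chart actually manages:

```yaml
# The lease the sidecars apply for; whoever holds it is the leader.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: ldap-primary-leader        # hypothetical name
  namespace: nubus                 # hypothetical namespace
spec:
  holderIdentity: ldap-primary-0   # pod currently acting as leader
  leaseDurationSeconds: 15         # must be renewed within this window
  renewTime: "2025-01-01T02:00:00.000000Z"
---
# A Service that routes only to the leaseholder, e.g. via a role label
# that the sidecar sets on the leader pod.
apiVersion: v1
kind: Service
metadata:
  name: ldap-primary-active        # hypothetical name
  namespace: nubus
spec:
  selector:
    app: ldap-primary
    role: leader                   # only the current leader carries this label
  ports:
    - port: 389                    # standard LDAP port
      targetPort: 389
```

If the leader stops renewing the Lease, it expires; the other node’s sidecar acquires it, the labels are updated, and the Service routes all connections to the new leader.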
No Substitute for Backups!
The capabilities described above keep operation as uninterrupted as possible. However, they cannot prevent data from being altered unintentionally, for example through user error. To maintain these properties, the system propagates every change as quickly as possible, including mistaken ones. In such cases, being able to restore the previous state from a backup is invaluable.
Events that affect all nodes simultaneously, such as a fire in the data center, have also occurred. In such situations, an (off-site) backup is worth its weight in gold. Therefore, please do not treat operational redundancy as a substitute for backups, which ensure the integrity of the data even at rest.
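One possible shape for such a backup, sketched here only as an illustration: a Kubernetes CronJob that performs a nightly logical export of the directory to storage kept elsewhere. The image, Secret, and volume names are assumptions and not part of Nubus:

```yaml
# Illustrative sketch: nightly LDIF export via the proxy to a backup volume.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ldap-offsite-backup
spec:
  schedule: "0 2 * * *"                     # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: export
              image: example.test/ldap-tools:latest   # hypothetical image with ldapsearch
              command: ["/bin/sh", "-c"]
              args:
                - >
                  ldapsearch -H ldap://ldap-proxy -x
                  -D "cn=admin,dc=example,dc=test" -w "$LDAP_PASSWORD"
                  -b "dc=example,dc=test"
                  > /backup/export-$(date +%F).ldif
              env:
                - name: LDAP_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: ldap-admin      # hypothetical Secret
                      key: password
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: offsite-backup   # e.g. backed by remote storage
```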
Available Now
Information on how to try out Nubus for Kubernetes 1.5 or update an existing installation can be found in the operation manual. All changes compared to the previous version are listed in the release notes.