With Nubus for Kubernetes, you now have an Open Source solution that makes identity and access management in Kubernetes environments simple and centralized. We’ll show you how Kubernetes keeps your containers on course—and who’s really at the helm of your cluster.
In the world of modern IT infrastructure, Kubernetes has taken the helm—especially when applications are no longer confined to a single server but split into fleets of lightweight containers. But who’s steering all these containers through the rough and choppy waters of cloud and data center environments? The answer: Kubernetes.
In this article, we’re inviting you aboard to explore how Kubernetes keeps your container fleet on course, who’s in charge of what in a cluster, and how it helps bring structure to even the most tangled setups. And in part two of this little series, we’ll actually launch something real: Nubus—our Open Source solution for handling identity and access management right inside the cluster.
All Hands on Deck: Understanding Containers in Modern IT
Kubernetes—often shortened to K8s—is an Open Source platform for orchestrating containers. The name comes from the Greek word for helmsman or pilot, which fits perfectly: Kubernetes acts like the navigator that guides your containerized applications through the complex currents of today’s IT environments.
And about that nickname: K8s is what’s called a numeronym—it shortens the word by replacing the eight letters between the K and the s with the number 8, just like i18n stands for internationalization. Originally developed by Google, Kubernetes is now maintained by the Cloud Native Computing Foundation (CNCF) and powers workloads in data centers all over the world.
Container Basics: The Engine Room of Modern Apps
Before we talk Kubernetes, let’s start with the basics. Containers are the lightweight engines powering today’s modern applications. Think of them as neatly packaged units of software—each one containing everything a process needs to run, including its own filesystem, network layer, and runtime environment. Unlike traditional virtual machines, containers don’t spin up full operating systems. Instead, they rely on the shared kernel of the host system, using Linux namespaces and control groups to stay isolated and efficient.
Here’s the neat trick: when a process inside a container accesses something like /etc, it’s only seeing its own view of the world. The kernel redirects the request to the container’s private environment. To the container, it feels like a full machine—but it’s much leaner, faster, and more portable.
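This isolation mechanism is visible on any ordinary Linux host, even without containers involved: every process belongs to a set of namespaces, which the kernel exposes under `/proc/<pid>/ns`. A quick sketch (assuming a Linux system with the usual `/proc` filesystem):

```shell
# Every process carries a set of namespace handles; a container simply
# gets its own private copies instead of sharing the host's.
ls /proc/self/ns            # e.g. mnt, net, pid, uts, ipc, cgroup, ...

# Each entry resolves to a namespace ID. Two processes in the same
# namespace see the same ID; a process inside a container sees a
# different one.
readlink /proc/self/ns/mnt  # e.g. mnt:[4026531841]

# Control groups (cgroups) handle the resource side: they limit and
# account CPU, memory, and I/O per group of processes.
cat /proc/self/cgroup
```

Container runtimes combine exactly these two kernel features: namespaces give each container its own view of `/etc`, the network, and the process table, while cgroups keep its resource usage in check.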
This lightweight nature is what makes containers so popular. It’s common to run each service—say, a web server, database, or cache—in its own container. But when you start deploying dozens or hundreds of these microservices, things get complicated fast. That’s where Kubernetes comes in.
Steering Your Containers: How Kubernetes Keeps Applications on Course
Sure, you can run containers without Kubernetes. For a small project or a handful of services, tools like Docker or Docker Compose are often enough. You spin up a few containers, stop them when needed, maybe use a GUI to check what’s running. But what happens when you’re not managing five containers—but 50, or 5,000?
That’s when container orchestration becomes essential. And that’s where Kubernetes shines. It takes care of starting your apps, keeping them running, scaling them up or down, and distributing them across multiple machines—automatically and reliably. It’s like having a seasoned crew below deck, managing the ship’s engines without you needing to pull every lever yourself.
Whether you’re deploying to the cloud or running your own data center, Kubernetes helps ensure your applications stay highly available, resilient, and ready to scale. Even if a node in your cluster goes down, Kubernetes keeps things afloat. And when it’s time to update an app, Kubernetes has your back—helping you deploy updates in a controlled way and roll them back safely if something goes wrong.
Setting the Course: Declarative Control with Kubernetes
When you work with Kubernetes, you’re not issuing direct commands like “Start container X” or “Stop container Y.” Instead, you define a desired state: “This application should run in three instances, accessible at this address, secured with a certificate.” Kubernetes takes it from there—making sure that state becomes reality and keeping it that way, no matter what happens. At the heart of it all is a powerful API that lets you configure and manage every component in the cluster. You’re not steering the ship manually—you’re setting the destination, and Kubernetes charts the course.
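That desired state is written down as a manifest and handed to the Kubernetes API. Here’s a minimal sketch of what “this application should run in three instances” looks like in practice—the name `my-app` and the image `nginx:1.27` are placeholders for illustration, not part of any real deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # desired state: three instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.27 # placeholder image
          ports:
            - containerPort: 80
```

You apply this file (for example with `kubectl apply -f deployment.yaml`), and from then on Kubernetes continuously reconciles reality against it: if one of the three pods goes down, a replacement is started automatically—no manual command needed.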
A Kubernetes cluster is built around two main components: the control plane and the compute nodes. The control plane keeps a bird’s-eye view of everything, knowing what the cluster should look like—for example, how many instances of a service should be running—and steering the system toward that goal.
Meanwhile, the real work happens on the compute nodes—physical or virtual machines that run your containers inside what Kubernetes calls pods. The nodes do the heavy lifting, following the control plane’s instructions.
This clean separation between steering and sailing—between control and workload—is what makes Kubernetes so robust, scalable, and ready for anything the seas of IT throw at it.
Meet the Crew: Roles and Responsibilities in a Kubernetes Cluster
For Kubernetes to run smoothly day-to-day, it takes a clear division of responsibilities. In practice, three main roles have emerged—and they’re distinct not just organizationally, but technically, too:
- Kubernetes administrators are responsible for the big picture—managing the cluster itself. They handle tasks like provisioning physical or virtual machines, configuring networks, managing storage, setting up load balancers, and assigning access rights. They define who can access which resources and make sure the entire cluster stays stable and seaworthy.
- Technical operators—often part of DevOps teams—work directly on top of this infrastructure. They deploy applications, scale them as needed, and monitor their health. Their main tools of the trade are the Kubernetes API and package managers like Helm.
- System administrators focus on the applications themselves. They configure apps, manage user accounts, and keep services running—usually through the web interfaces of the applications they manage. They don’t need to know how Kubernetes operates behind the scenes to get their job done.
Navigational Aids: Key Kubernetes Terms You Need to Know
To help you find your way around Kubernetes, here are three key concepts you’ll encounter regularly:
- Pod: The smallest deployable unit in Kubernetes. A pod usually contains a single container—like a web server or a database—though sometimes you’ll find “sidecar” containers riding along for tasks like logging or initialization. All containers in a pod share storage and networking.
- Namespace: A way to logically divide resources within a cluster. Namespaces help you separate environments—like development, testing, and production—or organize teams working on different projects without stepping on each other’s toes.
- Ingress: Your gateway from the outside world into the cluster. By default, containers inside Kubernetes aren’t reachable from outside—and that’s by design. An Ingress lets you define which applications are exposed externally. The Ingress controller, often powered by Nginx, acts as a reverse proxy, directing incoming requests to the right services inside the cluster—complete with domain routing, path matching, and TLS encryption.
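These concepts fit together in just a few lines of YAML. A hedged sketch—the hostname `app.example.com`, the namespace `demo`, and the service name `my-app` are made-up placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: demo              # lives inside the demo namespace
spec:
  rules:
    - host: app.example.com    # domain routing
      http:
        paths:
          - path: /            # path matching
            pathType: Prefix
            backend:
              service:
                name: my-app   # service inside the cluster
                port:
                  number: 80
  tls:                         # TLS termination at the Ingress
    - hosts:
        - app.example.com
      secretName: my-app-tls   # certificate stored as a Secret
```

The Ingress controller picks up this resource and starts routing requests for `app.example.com` to the `my-app` service, terminating TLS with the referenced certificate along the way.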
Next Port of Call: Getting Started with Kubernetes
With Kubernetes, you can stay on top of even the most complex IT infrastructures. The platform takes a lot of repetitive work off your hands, keeps operations stable, and lays the foundation for a modern, flexible container environment.
TIP: If you want to dive deeper, it’s worth checking out the official Kubernetes documentation.
In the next part of our series, we’ll show you how to set up a Kubernetes test cluster right on your notebook—including a step-by-step guide to installing Nubus for Kubernetes, our Open Source solution for centralized Identity & Access Management in your cluster.
Ready to set sail? Get Docker and kind installed—we’ll be launching your first Kubernetes cluster in the next article!