5 More Reasons to Love Kubernetes

In part one of this series, I covered my top five reasons to love Kubernetes, the open source container orchestration platform created by Google. Kubernetes was donated to the Cloud Native Computing Foundation in July of 2015, where it is now under development by dozens of companies including Canonical, CoreOS, Red Hat, and more.

My first five reasons were primarily about the project’s heritage, ease of use, and ramp-up. The next five get more technical. As I mentioned in part one, choosing a distributed system to perform tasks in a datacenter is much more complex than looking at a spreadsheet of features or performance. And, you should make your decision based on your own needs and team dynamics. However, this top 10 list will give you my perspective, as someone who has been using, testing, and developing systems for a while now.

#6 Rolling Updates and Rollbacks

Rolling updates and rollbacks are absolutely necessary features in application management, especially as we embrace extremely quick development cycles and continuous innovation. Having a system where these features are not only built in but also thought out as part of how the system works is huge.

Deployment resources are relatively new in Kubernetes; originally, Kubernetes had Replication Controllers (RC). These were resources that defined a declarative state for your Pods, that is, which containers you want running in your system and how many of them, at all times. Rolling updates were implemented with replication controllers, but they were driven client side, which was problematic when the client shut down mid-update.

Therefore, Kubernetes introduced the Deployment resource and replaced replication controllers with replica sets (part of a larger renaming of various resources). Every time a Deployment is modified, it creates a new replica set. Scaling the replica sets of that Deployment up and down gives you rolling updates and the ability to roll back. Indeed, old replica sets are not deleted but remain in the system as part of the history of a Deployment.
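
For instance, after editing the ghost Deployment once, listing the replica sets shows both the new one and the old one still around (the hashed suffixes below are purely illustrative):

```
$ kubectl get rs
NAME               DESIRED   CURRENT   AGE
ghost-1898407348   1         1         2m
ghost-3658892249   0         0         10m
```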

The scaling and deployment strategy of a Deployment can be tweaked in the Deployment manifest.
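
As a minimal sketch (the image tag and surge settings are just illustrative; at the time, Deployments lived under the extensions/v1beta1 API group), the strategy section of a Deployment manifest looks like this:

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ghost
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod during the update
      maxUnavailable: 0  # never drop below the desired replica count
  template:
    metadata:
      labels:
        run: ghost
    spec:
      containers:
      - name: ghost
        image: ghost:0.11
```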

And all of this, of course, is triggered by simple HTTP calls to a REST API.
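
For instance, with kubectl proxy running locally, the same rolling update can be triggered with a single PATCH request against the Deployment (the image tag here is again illustrative):

```
$ kubectl proxy &
$ curl -X PATCH \
    -H 'Content-Type: application/strategic-merge-patch+json' \
    -d '{"spec":{"template":{"spec":{"containers":[{"name":"ghost","image":"ghost:0.11"}]}}}}' \
    http://127.0.0.1:8001/apis/extensions/v1beta1/namespaces/default/deployments/ghost
```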

Let’s check the history of a simple deployment:

```
$ kubectl rollout history deployment ghost
deployments "ghost":
REVISION    CHANGE-CAUSE
1           kubectl run ghost --image=ghost --record
2           kubectl edit deployment/ghost
```

This shows that one update was made. We can roll back that change with rollout undo, which will add a new revision to the history.

```
$ kubectl rollout undo deployment ghost
deployment "ghost" rolled back

$ kubectl rollout history deployment ghost
deployments "ghost":
REVISION    CHANGE-CAUSE
2           kubectl edit deployment/ghost
3           kubectl run ghost --image=ghost --record
```

Bottom line: edit a Deployment and you get a rolling update; roll back with rollout undo. And, yes, you can roll back to a specific revision, as shown below.
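
For example, to jump back to the first revision of the ghost Deployment:

```
$ kubectl rollout undo deployment ghost --to-revision=1
deployment "ghost" rolled back
```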

#7 Quotas

In open source, there have been many business model variants over the past 15 years. One of them, still common today, is to make certain features commercial add-ons/plugins. Quotas that limit the use of resources in a system are sometimes such paid add-ons.

In Kubernetes, quotas are built in. They can be used to limit the number of API objects or to limit the use of physical resources like CPU and memory. As with Pods and Deployments, quotas are API resources in k8s. You define them in YAML or JSON files and create them in your cluster using kubectl.
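
For physical resources, a quota could look like this minimal sketch (the amounts are purely illustrative):

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: default
spec:
  hard:
    cpu: "4"
    memory: 8Gi
```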

For example, to limit the number of Pods to 1 in a specific namespace, you would define a ResourceQuota like this:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: default
spec:
  hard:
    pods: "1"
```

You will be able to see and modify the quota like any other resource with the kubectl get and kubectl edit commands:

```
$ kubectl get resourcequota
NAME            AGE
object-counts   15s
```
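
To see the current usage against the limits, kubectl describe works as well (output abbreviated and illustrative):

```
$ kubectl describe resourcequota object-counts
Name:       object-counts
Namespace:  default
Resource    Used    Hard
--------    ----    ----
pods        1       1
```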

With a single Pod running, if you try to create a new one, k8s will return an error saying that you are over quota:


```
$ kubectl create -f redis.yaml
Error from server: error when creating "redis.yaml": pods "redis" is forbidden: exceeded quota: object-counts, requested: pods=1, used: pods=1, limited: pods=1
```

Quotas are built in and a first-class citizen of the k8s API. Amazing!

#8 Third-Party Resources

This one is a bit more challenging to grasp as it is a new concept in most systems.

The philosophy of Kubernetes is that the core should consist only of the API resources needed to manage containerized applications, and the expectation is that, in the very short term, this core will stabilize and not need anything extra. Any additional API resource that users might need will not be added to the core; instead, users will be able to create those resources on the fly. Kubernetes will manage them, and clients will be able to use them dynamically. This technique is used at Pearson to manage databases.

The example I used in my LinuxCon talk is to create a new API object called pinguin. You define it via a ThirdPartyResource object. Like any other k8s resource, it has metadata, apiVersion, and kind, plus a set of versions. The metadata contains a name that defines the new resource group. Here it is:

```
metadata:
  name: pin-guin.k8s.linuxcon.com
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
description: "A crazy pinguin at Linuxcon"
versions:
- name: v1
```

Let’s create this new resource:

```
$ kubectl create -f pinguins.yml
$ kubectl get thirdpartyresources
NAME                        DESCRIPTION                   VERSION(S)
pin-guin.k8s.linuxcon.com   A crazy pinguin at Linuxcon   v1
```

With this in place, you are now free to create a pinguin (to keep with the LinuxCon theme):

```
$ cat pinguin.yml
apiVersion: k8s.linuxcon.com/v1
kind: PinGuin
metadata:
  name: crazy
  labels:
    linuxcon: rocks

$ kubectl create -f pinguin.yml
```

And, dynamically, kubectl is now aware of the pinguin you created. Note that this is only available in the latest version of k8s, v1.4.0.

```
$ kubectl get pinguin
NAME      LABELS           DATA
crazy     linuxcon=rocks   {"apiVersion":"k8s.linuxcon.com/v1","kind":"PinGui...
```

Now you will wonder what to do with that, and the answer is that you will need to write a controller: a piece of code that watches for pinguins and performs actions when they get created, deleted, or modified.
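
Under the hood, third-party resources are served under their own API group, so a controller simply watches that endpoint like any other resource. A minimal sketch, assuming kubectl proxy is running locally and the pin-guin.k8s.linuxcon.com resource from above exists:

```
$ kubectl proxy &
$ curl "http://127.0.0.1:8001/apis/k8s.linuxcon.com/v1/namespaces/default/pinguins?watch=true"
```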

This is purely awesome, as it means that the Kubernetes API server is now fully extensible by each user.

#9 Role-Based Access Control (RBAC)

In addition to quotas, role-based access control is a must-have in enterprise systems. Like quotas, it is quite often an afterthought in data-center solutions, when it is not a commercial add-on.

With Kubernetes, we now have fine-grained access control via roles (RBAC), and the best part is that, of course, it is 100% API driven. By that, I mean that roles and bindings are API resources that an administrator can write and create on the cluster like you would create Pods, deployments, etc.

It was first introduced in v1.3.0; it is an alpha API feature and still considered experimental, but give it a couple more releases and I fully expect it will become a stable API.

Roughly speaking, you create roles (API resources of kind Role) and define some rules for each of these roles:

```
kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
    nonResourceURLs: []
```

Then you associate users with these roles by creating binding resources of kind RoleBinding. A binding takes a list of users — aka subjects — and associates them with a role. Once you create that binding, the subjects listed in it inherit the access rules defined in the role.

```
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  kind: Role
  namespace: default
  name: pod-reader
  apiVersion: rbac.authorization.k8s.io/v1alpha1
```
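
As a sketch of putting it together (the file names below are hypothetical, and the cluster is assumed to have been started with --authorization-mode=RBAC and the alpha RBAC API group enabled):

```
$ kubectl create -f pod-reader-role.yaml
$ kubectl create -f read-pods-binding.yaml
$ kubectl get roles,rolebindings
```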

There is a great demo of this from Eric Chiang of CoreOS on YouTube.

Built-in RBAC, fully API driven. What a joy!

#10 Cluster Federation

Last but not least is cluster federation.

Going back to the Borg paper and our first reason to love Kubernetes, you probably noticed that a single k8s cluster is in fact the equivalent of a single Borg “cell” or availability zone. Kubernetes 1.4.0 adds the ability to federate multiple clusters through a single control plane. That means we now have a Borg lite.

It is a key reason to like Kubernetes, as it brings a hybrid cloud solution for containers. Imagine having a k8s cluster on premises and one in a public cloud (e.g., AWS, GKE, Azure). With this federated control plane, you will be able to launch microservices that span multiple clusters. Scaling will automatically balance replicas across clusters while still providing a single DNS endpoint, and load balancing is federated as well.
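
As a rough sketch (the federation context name, manifest, and output below are hypothetical; the federation control plane exposes its own API endpoint), launching a replicated workload across member clusters looks much like talking to a single cluster:

```
$ kubectl --context=federation create -f ghost-rs.yaml
$ kubectl --context=federation get rs
NAME      DESIRED   CURRENT   AGE
ghost     6         6         1m
```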

I find this super exciting, because we should be able to quickly migrate an app between on-prem and public cloud, and vice versa, at will. Yes, you heard that right, and this is all built in, not a commercial add-on.

To get started with federation, Kelsey Hightower wrote a very detailed walk-through that is worth a try.

[Figure: federation.png — a federated control plane spanning multiple Kubernetes clusters. Picture courtesy of CoreOS]

And that’s it: you have my top 10 reasons to like Kubernetes. I am sure others will have a different list, as there is so much to like about this system. I have been using, testing, and developing data-center solutions for a while now, and I feel that Kubernetes is a really well-thought-out system: extremely stable, extensible, and with key features built in that we have grown to expect as commercial add-ons. Truly a game changer.

So you’ve heard of Kubernetes but have no idea what it is or how it works? The Linux Foundation’s Kubernetes Fundamentals course will take you from zero to knowing how to deploy a containerized application and manipulate resources via the API. Sign up now!