Wednesday, May 16, 2018

Quick overview of Kubernetes


These are the notes that I took while learning more about Kubernetes.

High-level overview

Most software today runs as multiple processes across distributed systems, and keeping track of all of this is a challenge. At a high level, Kubernetes runs your software and processes on a cluster of computers as if the cluster were one entity. Kubernetes manages the processes and ensures they stay running.
Kubernetes (K8s) originated at Google and is inspired by Google's internal Borg system. Google had already built this kind of infrastructure, and released K8s as an open-source project.

Containers

K8s runs Docker as its primary container format (it supports others, less popular ones). A container gives the developer a hermetically sealed box for her processes. The context for these processes is always the same, which allows the package/container to run on different machines and always give the same result. K8s' role is to keep track of these processes, ensure they stay up, and help them find each other.
K8s can run in different environments: on any major cloud provider (GCP, Azure, AWS), but also on-premises or in a hybrid environment, with consistent behavior, since K8s is open-source software. So in theory there is no vendor lock-in, and K8s workloads can be moved (gradually or not) from one provider to another.

Setup

K8s is set up declaratively, in config files (e.g. the version of the software to run, the number of instances, the desired state). The number of processes is a dial or knob that can be turned for scaling purposes, simply by changing the config file.
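As a minimal sketch of what such a declarative config looks like (the app name and image below are made-up placeholders):

```yaml
# Hypothetical Deployment manifest; name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3          # the scaling "dial": change this number and re-apply
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0
```

Scaling up or down is then just a matter of editing replicas and re-applying the file (e.g. with kubectl apply).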

Schedulers

Scheduling in a distributed environment means running copies of an instance consistently, loading each service onto a machine that's not too busy. As a metaphor, this is like playing a multi-dimensional game of Tetris with resources: oddly shaped combinations of CPU / memory / disk requirements have to be packed onto the nodes you run your software on, for maximum efficiency.
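Those resource "shapes" come from per-container requests and limits declared in the config; a sketch (the values are purely illustrative):

```yaml
# Illustrative resource requests/limits; the scheduler places this pod
# on a node with enough unreserved CPU and memory to fit the requests.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
  - name: worker
    image: example/worker:1.0
    resources:
      requests:
        cpu: "500m"      # half a CPU core
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "512Mi"
```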

Rolling updates can be managed this way: the developer rolls out a new definition/version of her app and slowly scales it up, while dialing down the old version until it is fully replaced. So atomic upgrades are possible, as well as rollbacks, in a seamless way.
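On a Deployment, this dialing behavior is controlled by a couple of standard strategy fields (fragment only; the values here are illustrative):

```yaml
# Rolling-update knobs on a Deployment spec. Old pods are replaced
# gradually as new ones become ready.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
```

Rolling back is the same mechanism in reverse, driven by the previous config.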
Interestingly, pre-existing data infrastructures like Hadoop, which tried to do it all, including managing the infrastructure, now go through Docker, e.g. running a dockerized application on Apache Hadoop YARN.

Service discovery

However, K8s is more than a scheduler: it also performs service discovery. K8s intelligently routes traffic to services, which are selected via labels (e.g. app=backend, app=frontend) that you can target, which is a very powerful concept.
An example of this is a load balancer, which is also managed by K8s. The different parts of your system are usually given stable, static names, and can thus be addressed easily.
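A sketch of such a Service, tying a stable name to whatever pods carry a given label (names and ports are made up for illustration):

```yaml
# A Service giving pods labeled app=backend a stable name and virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: backend          # reachable in-cluster by this name
spec:
  selector:
    app: backend         # routes to any pod carrying this label
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer     # asks the cloud provider for an external load balancer
```

Pods come and go, but clients keep talking to "backend"; the label selector keeps the routing current.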

Storage

From its inception, Docker encouraged the design of stateless services.
Persistence and statefulness are an afterthought in the world of containers. This design works in favor of workload scalability and portability.
It is one of the reasons why containers are fueling cloud-native architectures, microservices, and web-scale deployments.
So, given that the host can abruptly terminate or the container itself can fail, state usually needs to be stored somewhere else, via a networked volume independent of the host or the container.

A pod is the logical unit of deployment in Kubernetes.
A K8s volume is a directory with data that is accessible to all the containers of the pod that encloses it. Data in the volume is preserved across container restarts, but a pod-scoped volume (such as an emptyDir) is deleted when the pod dies; data that must outlive the pod belongs in a persistent, networked volume.
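As a sketch of the pod-scoped case (container names and images are placeholders), two containers in one pod sharing an emptyDir volume:

```yaml
# A pod-scoped emptyDir volume shared by two containers; it survives
# container restarts but is deleted along with the pod.
apiVersion: v1
kind: Pod
metadata:
  name: shared-data
spec:
  containers:
  - name: writer
    image: example/writer:1.0
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: reader
    image: example/reader:1.0
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
```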

Google Container Engine


GKE is the hosted version of K8s managed by Google. Thus K8s upgrades and cluster management, for example, are handled for you. What you get out of a hosted environment is:
- Dynamic creation and removal of machines
- APIs for controlling the cluster and the network.

Autoscaling is a major feature of GKE (and is also offered by other providers).
