Sunday, 19 February 2023


EVERYTHING KUBERNETES: A PRACTICAL GUIDE


INTRODUCTION

KUBERNETES — BIRD’S EYE VIEW HIGH LEVEL ARCHITECTURE

KUBERNETES BUILDING BLOCKS 

THE BASIC BLOCKS

USING LABELS AND SELECTORS FOR FINE-GRAINED CONTROL 

SERVICE DISCOVERY 

3 STORAGE BUILDING BLOCKS

CHOOSING THE RIGHT BLOCK FOR THE JOB

IMPERATIVE VS. DECLARATIVE ORCHESTRATION

HANDS-ON: GETTING STARTED 

INSTALLATION 

LOGGING

MONITORING 

WORKING WITH MULTIPLE CLUSTERS

HANDS-ON: DEPLOYING AN APPLICATION

DIY CLUSTER CONSIDERATIONS

SUMMARY 

ABOUT STRATOSCALE

USING KUBECTL CLI


NODE

A Kubernetes cluster consists of one or more nodes managed by Kubernetes. The nodes are bare-metal servers, on-premises VMs, or VMs on a cloud provider. Every node contains a container runtime (for example, Docker Engine), kubelet (responsible for starting, stopping, and managing individual containers at the request of the Kubernetes control plane), and kube-proxy (responsible for networking and load balancing).

MASTER NODE

A Kubernetes cluster also contains one or more master nodes that run the Kubernetes control plane. The control plane consists of different processes, such as an API server (provides JSON over HTTP API), scheduler (selects nodes to run containers), controller manager (runs controllers, see below), and etcd (a globally available configuration store).

DASHBOARD AND CLI

A Kubernetes cluster can be managed via the Kubernetes Dashboard, a web UI running on the master node. The cluster can also be managed via the command-line tool kubectl, which can be installed on any machine able to access the API server running on the master node. This tool can be used to manage several Kubernetes clusters by specifying a context defined in a configuration file.


KUBERNETES BUILDING BLOCKS

Kubernetes provides basic mechanisms for the deployment, maintenance, and scaling of containerized applications. It uses declarative primitives, or building blocks, to maintain the state requested by the user, implementing the transition from the current observable state to the requested state.


A pod is the smallest deployable unit that can be managed by Kubernetes. A pod is a logical group of one or more containers that share the same IP address and port space. The main purpose of a pod is to support co-located processes, such as an application server and its local cache. Containers within a pod can find each other via localhost, and can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. In other words, a pod represents a “logical host”. Pods are not durable; they will not survive scheduling failures or node failures. If a node where the pod is running dies, the pod is deleted. It can then be replaced by an identical pod, with even the same name, but with a new unique identifier (UID). 
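As a minimal sketch (the names and images below are illustrative, not from the original guide), a pod with two co-located containers that share the same network namespace could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-cache
spec:
  containers:
  - name: app
    image: nginx:1.21          # example application server
  - name: cache
    image: redis:6-alpine      # local cache, reachable from "app" via localhost:6379

Both containers are scheduled together on the same node and share one IP address and port space.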


A label is a key/value pair that is attached to a Kubernetes resource, for example, a pod. Labels can be attached to resources at creation time, as well as added and modified at any later time.

A label selector can be used to organize Kubernetes resources that have labels. An equality-based selector defines a condition for selecting resources that have the specified label value. A set-based selector defines a condition for selecting resources that have a label value within the specified set of values.

A controller manages a set of pods and ensures that the cluster is in the specified state. Unlike manually created pods, the pods maintained by a replication controller are automatically replaced if they fail, get deleted, or are terminated. There are several controller types, such as replication controllers or deployment controllers.

A replication controller is responsible for running the specified number of pod copies (replicas) across the cluster.

A deployment defines a desired state for a logical group of pods and replica sets. It creates new resources or replaces existing resources when necessary. A deployment can be updated, rolled out, or rolled back. A practical use case for a deployment is to bring up a replica set and pods, then update the deployment to re-create the pods (for example, to use a new image). Later, the deployment can be rolled back to an earlier revision if the current deployment is not stable.
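For example, a rolling update and a rollback can be driven from the command line; the deployment, container, and image names below are placeholders:

$ kubectl set image deployment/my-deployment my-container=my-image:2.0
$ kubectl rollout status deployment/my-deployment
$ kubectl rollout undo deployment/my-deployment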


A service uses a selector to define a logical group of pods and defines a policy to access such logical groups. Because pods are not durable, the actual pods that are running may change. A client that uses one or more containers within a pod should not need to be aware of which specific pod it works with, especially if there are several pods (replicas). 

There are several types of services in Kubernetes, including ClusterIP, NodePort, and LoadBalancer. A ClusterIP service exposes pods to connections from inside the cluster. A NodePort service exposes pods to external traffic by forwarding traffic from a port on each node of the cluster to the container port. A LoadBalancer service also exposes pods to external traffic, as a NodePort service does, but additionally provisions a load balancer.


USING LABELS AND SELECTORS FOR FINE-GRAINED CONTROL

A Kubernetes controller, for example, uses a selector to define a set of managed pods so that pods in that set have the corresponding label. A label is just a key/value pair that is attached to Kubernetes resources such as pods. Labels can be attached to resources when they are created, or added and modified at any time. Each resource can have multiple labels. For example:

release: stable

environment: dev


environment = dev

environment != live

environment in (dev, test)

environment notin (live)

release = stable, environment = dev


The first two selectors have an equality-based requirement, and the third and fourth selectors have a set-based requirement. The last selector contains the comma separator, which acts as a logical “AND” operator, so the selector defines a set of resources where the label “release” equals “stable” and the label “environment” equals “dev”.
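These selectors can be used directly from kubectl; for instance, assuming pods carry the labels shown above:

$ kubectl get pods -l environment=dev
$ kubectl get pods -l 'environment in (dev, test)'
$ kubectl get pods -l release=stable,environment=dev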


ENVIRONMENT VARIABLES

Kubernetes injects a set of environment variables into pods for each active service. Such environment variables contain the service host and port, for example:

MYSQL_SERVICE_HOST=10.0.150.150

MYSQL_SERVICE_PORT=3306
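Inside a container, these variables can be read like any other environment variable, for example:

$ echo $MYSQL_SERVICE_HOST
10.0.150.150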


STORAGE BUILDING BLOCKS

VOLUME

A container file system is ephemeral: if a container crashes, the changes to its file system are lost. A volume is defined at the pod level, and is used to preserve data across container crashes. A volume can also be used to share data between containers in a pod. A volume has the same lifecycle as the pod that encloses it: when a pod is deleted, the volume is deleted as well. Kubernetes supports different volume types, which are implemented as plugins.
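A minimal sketch of a pod-level volume shared between two containers (the emptyDir type and all names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: shared-data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data
      name: shared-data
  volumes:
  - name: shared-data
    emptyDir: {}

If either container crashes and restarts, the data under /data survives; it is lost only when the pod itself is deleted.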


PERSISTENT VOLUME

A persistent volume represents a real networked storage unit in a cluster that has been provisioned by an administrator. Persistent storage has a lifecycle independent of any individual pod. It supports different access modes, such as mounting as read-write by a single node, mounting as read-only by many nodes, and mounting as read-write by many nodes. Kubernetes supports different persistent volume types, which are implemented as plugins. Examples of persistent volume types include AWS EBS, vSphere volume, Azure File, GCE Persistent Disk, CephFS, Ceph RBD, GlusterFS, iSCSI, NFS, and Host Path.


PERSISTENT VOLUME CLAIM

A persistent volume claim defines a specific amount of storage requested and specific access modes. Kubernetes finds a matching persistent volume and binds it with the persistent volume claim. If a matching volume does not exist, a persistent volume claim will remain unbound indefinitely. It will be bound as soon as a matching volume becomes available.


IMPERATIVE VS. DECLARATIVE ORCHESTRATION

Before getting to the practical steps of the Kubernetes deployment, it's important to understand the key approaches to orchestration.

The classic imperative approach for managing software involves several steps or tasks, some of which are manual. When working in a team, it is usually required that these steps be documented, and, in an ideal case, automated. Preparing good documentation for a classic imperative administrative procedure and automating these steps can be non-trivial tasks, even if each of the steps is simple.

A declarative approach for administrative tasks is intended to solve such challenges. With a declarative approach, an administrator defines a target state for a system (application, server, or cluster). Typically, a domain-specific language (DSL) is used to describe the target state. An administrative tool, such as Kubernetes, takes this definition as an input and takes care of how to achieve the target state from the current observable state.
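In Kubernetes terms, the contrast looks roughly like this (the object names are placeholders):

# Imperative: tell the cluster what to do, step by step
$ kubectl run my-app --image=nginx:1.21
$ kubectl scale deployment my-app --replicas=3

# Declarative: describe the desired state in a manifest and let Kubernetes reconcile it
$ kubectl apply -f my-app-deployment.yaml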


HANDS-ON: GETTING STARTED

Minikube is an ideal tool for getting started with Kubernetes on a single computer. It enables running of a single-node Kubernetes cluster in a virtual machine. 

It can be used on GNU/Linux or OS X and requires VirtualBox, KVM (for Linux), xhyve (OS X), or VMware Fusion (OS X) to be installed on your computer. Minikube creates a new virtual machine with GNU/Linux, installs and configures Docker and Kubernetes, and finally runs a Kubernetes cluster.


INSTALLATION

1. Install the kubectl command line tool locally:

$ curl -Lo kubectl \
http://storage.googleapis.com/kubernetes-release/\
release/v1.3.0/bin/linux/amd64/kubectl \
&& chmod +x kubectl \
&& sudo mv kubectl /usr/local/bin/


2. Next, install the KVM driver:

$ sudo curl -L \
https://github.com/dhiltgen/docker-machine-kvm/\
releases/download/v0.7.0/docker-machine-driver-kvm \
-o /usr/local/bin/docker-machine-driver-kvm
$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm


3. Install Minikube:

$ curl -Lo minikube \
https://storage.googleapis.com/minikube/\
releases/v0.6.0/minikube-linux-amd64 \
&& chmod +x minikube \
&& sudo mv minikube /usr/local/bin


4. Start the Minikube cluster:

$ minikube start --vm-driver=kvm

Starting local Kubernetes cluster...

Kubernetes is available at https://192.168.1.113:8443.

Kubectl is now configured to use the cluster
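To confirm that kubectl can reach the new cluster, you can list the nodes; the single Minikube node should be reported as Ready:

$ kubectl get nodes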


The Kubernetes cluster is up and running. Let's start with a simple deployment using an existing image:


$ kubectl run hello-minikube \
--image=gcr.io/google_containers/echoserver:1.4 \
--port=8080
deployment "hello-minikube" created

$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed


5. Check that the pod is up and running:

$ kubectl get pod
NAME                               READY     STATUS    RESTARTS   AGE
hello-minikube-24335344028-ouxw8   1/1       Running   0          5m

“Running” should appear in the STATUS field. If “ContainerCreating” appears instead, wait a few moments, then repeat the last command.


6. Check that the service works:


$ curl $(minikube service hello-minikube --url)

CLIENT VALUES:

client_address=172.17.0.1

command=GET

real path=/

query=nil

request_version=1.1

request_uri=http://192.168.42.213:8080/

SERVER VALUES:

server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:

accept=*/*

host=192.168.42.213:31759

user-agent=curl/7.35.0

BODY:

-no body in request

7. Execute the following command to open the Kubernetes Dashboard in your web browser:

$ minikube dashboard


8. To stop the cluster (shut down the virtual machine and preserve its state), execute the following command:


$ minikube stop

Stopping local Kubernetes cluster...

Stopping “minikubeVM”......


9. To start the cluster again and restore it to the previous state, execute the following command:

$ minikube start --vm-driver=kvm


10. To delete the cluster (delete the virtual machine and its state), execute the following command:


$ minikube delete


Note: 

There are other open-source tools, such as kubeadm, that simplify installation of a Kubernetes cluster in public clouds, on on-premises virtual machines, and on bare-metal servers. However, many things remain out of scope: you still need reliable distributed block or file storage, and you still need to think about high availability, scalability, networking, and security. Sometimes it is simpler to use Kubernetes as a Service.


LOGGING

Basic logging in Kubernetes behaves much like logging in Docker. The kubectl logs command shows everything written to stdout and stderr by a given container in a pod. If a pod has only one container, there is no need to specify it explicitly; when a pod has several containers, we need to add -c <container> to the command. As with Docker, we can follow logs, limit output to the most recent lines with --tail, and filter entries by time. Unlike Docker, Kubernetes also lets us check the logs of a container that crashed, using the --previous option.
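For example (the pod and container names below are placeholders):

$ kubectl logs my-pod                              # single-container pod
$ kubectl logs my-pod -c my-container              # specific container in a multi-container pod
$ kubectl logs my-pod --tail=20 -f                 # follow, starting from the last 20 lines
$ kubectl logs my-pod -c my-container --previous   # logs of the previous, crashed instance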


DEDICATED AGENT RUNNING ON EVERY NODE

In this approach, a logging agent runs on every node, preferably deployed as a DaemonSet. A popular choice in this space is fluentd. It can be configured with various backends, among them Google Cloud Platform and Elasticsearch.
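A rough sketch of such a DaemonSet (the image, namespace, and mounted paths are assumptions and would need to match your fluentd configuration and backend):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-logging
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-logging
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:latest     # example image; pick a build matching your backend
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Because it is a DaemonSet, Kubernetes schedules exactly one fluentd pod on every node, so each node's logs are collected locally.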


DEDICATED CONTAINER INSIDE A POD

In this approach, the pod runs one or more additional containers dedicated to handling logs, for example one streaming the access log and another the error log. We can then check them separately with kubectl logs $podname -c access-log and kubectl logs $podname -c error-log.


DIRECT LOGGING FROM AN APPLICATION

Here the application ships its logs directly to a logging backend instead of writing to stdout and stderr, so they are not visible using kubectl logs. In most cases, this approach is discouraged.

 

 MONITORING

As Kubernetes containers are actually Linux processes, we can use our favourite tools to monitor cluster performance. Basic tools, such as top or kubectl top, will behave as expected. It is also possible to use solutions dedicated to Kubernetes. One such solution is Heapster. Heapster aggregates events and data from across the cluster. It runs as a pod in the kube-system namespace. Discovery and querying of resources is done automatically, with data coming from the kubelet on each node.
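For a quick look at resource usage (these commands rely on the metrics pipeline, e.g. Heapster, being available):

$ kubectl top nodes
$ kubectl top pods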

 

NOTE:
Indentation in every YAML file must be set manually and consistently; otherwise Kubernetes will reject the file with a parsing error.

WORKING WITH MULTIPLE CLUSTERS

So far, we have used kubectl to connect to only one cluster, created by Minikube. But kubectl can be configured to use multiple clusters and multiple contexts to connect to them. To check available contexts, we use kubectl config get-contexts. We can confirm that only one context and only one cluster are defined with kubectl config view. It should look like this:


apiVersion: v1
clusters:
- cluster:
    certificate-authority: $HOME/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: $HOME/.minikube/apiserver.crt
    client-key: $HOME/.minikube/apiserver.key

 

The config file used by kubectl is stored at ~/.kube/config. We can edit it with a text editor and add another cluster, context, and user. When ready, kubectl config get-contexts should show our newly added context without marking it as current. This is the desired state:


apiVersion: v1
clusters:
- cluster:
    certificate-authority: $HOME/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
- cluster:
    certificate-authority: $HOME/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: secondkube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
- context:
    cluster: secondkube
    user: secondkube
  name: secondkube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: $HOME/.minikube/apiserver.crt
    client-key: $HOME/.minikube/apiserver.key
- name: secondkube
  user:
    client-certificate: $HOME/.minikube/apiserver.crt
    client-key: $HOME/.minikube/apiserver.key
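Instead of editing ~/.kube/config by hand, the same entries can be added with kubectl config subcommands. A sketch using the names above (the certificate paths simply reuse the Minikube files for illustration):

$ kubectl config set-cluster secondkube --server=https://192.168.99.100:8443 \
--certificate-authority=$HOME/.minikube/ca.crt
$ kubectl config set-credentials secondkube \
--client-certificate=$HOME/.minikube/apiserver.crt \
--client-key=$HOME/.minikube/apiserver.key
$ kubectl config set-context secondkube --cluster=secondkube --user=secondkube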


To switch contexts, we use kubectl config use-context secondkube. We can verify that the switch was successful, again with kubectl config get-contexts: the marker for the current context should have moved to the new context. All kubectl commands from now on will be executed in the selected context (which in our example points to exactly the same cluster as the first one).


HANDS-ON: DEPLOYING AN APPLICATION

We deploy the WordPress content management system with a MySQL backend. It is a classic two-tier application, where the first tier is the application server (WordPress) that uses the second tier (MySQL) for data persistence.


STEP 1. CREATE A KUBERNETES SECRET

As discussed above, Kubernetes secrets allow users to pass sensitive information, such as passwords and database credentials, to containers. In the first step, we need to define a Kubernetes secret that contains a database name, user, and password. It should also contain the root password for MySQL.

Before creating a secret, we need to encode such information in Base64 format. Let's assume we want to use the following values in our application:

• “app-db” as a database name

• “app-user” as a database user name

• “app-pass” as a database password

• “app-rootpass” as a database root password

Note that we need to provide a database root password to allow WordPress to create the required database. To encode these values to Base64 format, we can use the standard base64 utility that is available in almost all Linux distributions:


$ echo -n "app-db" | base64
YXBwLWRi
$ echo -n "app-user" | base64
YXBwLXVzZXI=
$ echo -n "app-pass" | base64
YXBwLXBhc3M=
$ echo -n "app-rootpass" | base64
YXBwLXJvb3RwYXNz


We use the "-n" option to make sure that the newline symbol ("\n") is not included in the encoded value.

To define a new Kubernetes secret, create a new file, app-secret.yaml, with the following content:


apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  dbname: YXBwLWRi
  dbuser: YXBwLXVzZXI=
  dbpass: YXBwLXBhc3M=
  dbrootpass: YXBwLXJvb3RwYXNz

 

In the app-secret.yaml file, we specified the required Kubernetes API version, the kind, and the type to let Kubernetes know that we are defining a secret. In addition, the file defines four keys (dbname, dbuser, dbpass, dbrootpass) with the corresponding values we encoded above. Now we can create our Kubernetes secret using its definition in the app-secret.yaml file:

$ kubectl create -f app-secret.yaml
secret "app-secret" created


$ kubectl get secrets
NAME         TYPE      DATA      AGE
app-secret   Opaque    4         2m


STEP 2. CREATE A PERSISTENT VOLUME

Next, we will create a Kubernetes persistent volume to provide the underlying storage for our MySQL database. To define a new persistent volume, create a new file, app-pv.yaml, with the following content:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-pv
  labels:
    vol: mysql
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/app

 

Create the host directory that backs the volume, then create the persistent volume:

$ sudo mkdir -p /data/app
$ kubectl create -f app-pv.yaml
persistentvolume "app-pv" created


Let's verify that the persistent volume is available:

$ kubectl describe pv/app-pv


STEP 3. CLAIM A PERSISTENT VOLUME

For MySQL, we need to claim our previously created persistent volume. Create a new file, app-pvc.yaml, with the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      vol: mysql

 

$ kubectl create -f app-pvc.yaml
persistentvolumeclaim "app-pvc" created


STEP 4. DEPLOY MYSQL

Now we will create a new deployment for MySQL using the existing Docker image. Create a new file, mysql-deployment.yaml, with the following content:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: "mysql:5.6"
        ports:
        - containerPort: 3306
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-pd
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbrootpass
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbuser
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbpass
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbname
      volumes:
      - name: mysql-pd
        persistentVolumeClaim:
          claimName: app-pvc

 

$ kubectl create -f mysql-deployment.yaml
deployment "mysql-deployment" created


$ kubectl get pv
NAME     CAPACITY   ACCESSMODES   STATUS    CLAIM             REASON    AGE
app-pv   1Gi        RWO           Bound     default/app-pvc             10m


$ kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
app-pvc   Bound     app-pv    0                        5m


$ kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql-deployment   1         1         1            1           3m


STEP 5. CREATE A SERVICE FOR MYSQL

Now we will create a service that exposes the MySQL deployment to other pods in the cluster under a stable name. Create a new file, mysql-service.yaml, with the following content:


apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql

 

$ kubectl create -f mysql-service.yaml
service "mysql-service" created


Let's verify that the service is created and correctly mapped:


$ kubectl describe svc/mysql-service

Name: mysql-service

Namespace: default

Labels: <none>

Selector: app=mysql

Type: ClusterIP

IP: ...

Port: <unset> 3306/TCP

Endpoints: 172.17.0.2:3306

Session Affinity: None

No events.


$ kubectl get pods -o wide
NAME                   READY     STATUS    RESTARTS   AGE       IP
mysql-deployment-...   1/1       Running   0          30m       172.17.0.2


STEP 6. DEPLOY WORDPRESS

Now we will create a deployment for the WordPress application itself. Create a new file, wordpress-deployment.yaml, with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: "wordpress"
        image: "wordpress:4.5-apache"
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql-service
        - name: WORDPRESS_DB_USER
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbuser
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbpass
        - name: WORDPRESS_DB_NAME
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: dbname

 

$ kubectl create -f wordpress-deployment.yaml
deployment "wordpress-deployment" created


$ kubectl get deployments
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql-deployment       1         1         1            1           45m
wordpress-deployment   2         2         2            2           5m


$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
mysql-deployment-...       1/1       Running   0          45m
wordpress-deployment-...   1/1       Running   0          6m
wordpress-deployment-...   1/1       Running   0          8m


Let's open a shell inside one of the WordPress pods:

$ kubectl exec -it wordpress-deployment-... bash


Let’s check that the MySQL service can be resolved within the pod using the service’s name:

root@wordpress# getent hosts mysql-service

10.0.0.219 mysql-service.default.svc.cluster.local


The above output verifies that mysql-service can be resolved through DNS to the ClusterIP address that was assigned to the MySQL service (your IP address may be different). Now let's verify that WordPress is properly configured:


root@wordpress# grep -i db /var/www/html/wp-config.php
define('DB_NAME', 'app-db');
define('DB_USER', 'app-user');
define('DB_PASSWORD', 'app-pass');
define('DB_HOST', 'mysql-service');
...


STEP 7. CREATE A SERVICE FOR WORDPRESS

Finally, we will expose WordPress to external traffic through a NodePort service. Create a new file, wordpress-service.yaml, with the following content:


apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: wordpress

 

$ kubectl create -f wordpress-service.yaml
service "wordpress-service" created


$ kubectl describe svc/wordpress-service

Name: wordpress-service

Namespace: default

Labels: app=wordpress

Selector: app=wordpress

Type: NodePort

IP: ...

Port: <unset> 80/TCP

NodePort: <unset> 30080/TCP

Endpoints: ...:80,...:80,...:80

Session Affinity: None

No events


STEP 8. TEST THE WORDPRESS APPLICATION

Open your browser and navigate to http://<node-address>:30080, where <node-address> is the address of your Kubernetes cluster node (for Minikube, the address reported by minikube ip). You can follow the installation wizard to get WordPress up and running through the browser. Congratulations!

The following diagram shows all of the Kubernetes building blocks we defined and created for our application.



NOTE:
Indentation in every YAML file must be set manually and consistently; otherwise Kubernetes will reject the file with a parsing error.


