Kubernetes

Basics of Kubernetes

Kubernetes is an open source platform for managing containerized workloads and services. It provides a framework for running distributed systems resiliently.

Here I will show you a short tutorial on making a cluster with Kubernetes. A Kubernetes cluster consists of two types of resources:

  • Master : coordinates the cluster
  • Nodes : run the applications

Now let's go to the Minikube website to do an interactive tutorial.

Minikube website

First, click the start scenario button to begin. After you click start, the page will change as shown in the picture. Click the minikube start button to activate Minikube.

Then enter the commands below to get information about the hostname and the cluster.

$ hostname
$ kubectl cluster-info

The output will look like the picture.

Now let's create a deployment named kubernetes-bootcamp2 by using this command.

$ kubectl run kubernetes-bootcamp2 --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=8080

The output will look like this.

After that, let's check that the deployment was created by using these commands.

$ kubectl get deployment
$ kubectl get pods -o wide

If the commands are correct, the output will look like this.

Next, we will expose the deployment through a NodePort service by using this command.

$ kubectl expose deployments/kubernetes-bootcamp2 --type="NodePort" --port 8080

If it's correct, the output will look like this.

Then let's scale the deployment to three replicas by typing this command.

$ kubectl scale deployments/kubernetes-bootcamp2 --replicas=3

If it succeeds, it will show you something like this picture.

And after that, let's check the pods.

$ kubectl get pods -o wide

The last picture shows that kubernetes-bootcamp2 has been replicated into 3 pods.
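The imperative kubectl commands above can also be expressed declaratively. Below is a sketch of a Deployment manifest equivalent to the run and scale steps (the label app: kubernetes-bootcamp2 is my own choice for illustration, not from the tutorial); you would save it to a file and create it with kubectl apply -f.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-bootcamp2
spec:
  replicas: 3                      # same as --replicas=3 in the scale step
  selector:
    matchLabels:
      app: kubernetes-bootcamp2
  template:
    metadata:
      labels:
        app: kubernetes-bootcamp2
    spec:
      containers:
      - name: kubernetes-bootcamp2
        image: docker.io/jocatalin/kubernetes-bootcamp:v1
        ports:
        - containerPort: 8080      # same as --port=8080
```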

Docker Swarm

The latest versions of Docker include swarm mode for managing a cluster of Docker Engines. Creating a swarm, deploying application services to it, and managing its behavior can all be done with the Docker CLI.

I will show you an example of using Docker swarm. Here I use two machines, and I will make those two machines able to communicate with each other.

First, let's change the hostname on both machines using these commands.

# hostnamectl set-hostname vm1 #for machine 1
# hostnamectl set-hostname vm2 #for machine 2
# systemctl start sshd

Next, let's map both hostnames to their IP addresses in /etc/hosts.

# vim /etc/hosts

# edit the file so it contains the lines below
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.56.101 vm1
192.168.56.103 vm2

After that, let's generate an SSH key pair.

# ssh-keygen

Next, you can log in as root to vm2 from vm1 by using this command.

# ssh root@vm2

Now let's remove all files in the .ssh directory on both vm1 and vm2, to prepare for setting up Docker swarm.

After that, run this command again on vm1.

# ssh-keygen

Then copy the public key to vm2 by using this command.

# ssh-copy-id root@vm2

Check the hosts file of your machine with this command.

# cat /etc/hosts

Send a copy of that file to vm2 with this command.

# scp /etc/hosts vm2:/etc/hosts

Init and join docker swarm

The first step is to pull the httpd:alpine image from the registry on both machines, and then choose which machine will be the master to initialize the swarm. You can initialize it by using this command.

# docker swarm init --advertise-addr 192.168.56.101

After executing the first step, a join token will be shown like this.

Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \
    192.168.56.101:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

You have to copy the code from --token up to the port :2377 and put it into the command below on vm2 to join the two machines together.

# docker swarm join --token SWMTKN-1-5rsz8se2esukuualwaiqvkjsay1cnosh3fhz3p5ushoj0xpoak-bod0kql5xtajmfe2sex83lvtr 192.168.56.101:2377

Finally, you have made a cluster with Docker swarm, with vm1 as the master and vm2 as a worker. To make it easier to identify the differences, you can use dockersamples/visualizer to get a better visualization.

# docker run -itd -p 8888:8080 -e HOST=192.168.56.101 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock --name visualizer dockersamples/visualizer

Open your browser and go to 192.168.56.101:8888. There you can see the Docker swarm visualizer.
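Since httpd:alpine was pulled on both machines at the start of this section, you can also try deploying it as a replicated service to see the swarm in action. This is a sketch; the service name web and the published port 8081 are arbitrary choices of mine, not part of the tutorial.

```shell
# On vm1 (the manager): start two replicas of httpd:alpine
docker service create --name web --replicas 2 --publish 8081:80 httpd:alpine

# Check the service and where its tasks landed; with two nodes,
# one task is normally scheduled on each machine
docker service ls
docker service ps web
```

The visualizer should then show one httpd container on each node.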

Docker Live Streaming

In this section I will share a tutorial about how to do video live streaming with Docker on CentOS 7.

First you need to install the VLC Media Player application on your computer. If you don't have it, you can download it here.

Okay, if everything is ready, open your terminal and start Docker with these commands.

$ sudo su
# systemctl start docker

Next, to do live streaming with Docker, we have to disable SELinux and the firewall. To disable SELinux, edit its configuration file with this command.

# vim /etc/selinux/config

And change the file as shown below.

# This file controls the state of SELinux on the system.
# SELinux= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection
SELINUXTYPE=targeted

After changing the configuration file, the next step is to stop and disable firewalld by using these commands.

# systemctl stop firewalld
# systemctl disable firewalld

Then reboot your system.

# reboot

Do the steps from editing the configuration file through rebooting on the other machine too.

After rebooting the system, open the terminal again and install the openssh package. You can use this command to install it.

# sudo yum install openssh-server -y

Start sshd and check its status using these commands.

# systemctl start sshd
# systemctl status sshd

Install the packages needed to build Nginx from source. You can install them with this command.

# sudo yum install pcre pcre-devel openssl openssl-devel zlib zlib-devel -y

After that, make a temporary directory to hold the downloaded Nginx and Nginx-RTMP files. Enter the new directory and download the files.

# mkdir temp
# cd temp
# wget http://nginx.org/download/nginx-1.9.9.tar.gz
# wget https://github.com/arut/nginx-rtmp-module/archive/master.zip

Install the unzip package using the command below.

# sudo yum install unzip

Then extract the Nginx and Nginx-RTMP archives that were downloaded earlier.

# tar -xvf nginx-1.9.9.tar.gz
# unzip master.zip

Next, enter the Nginx source directory.

# cd nginx-1.9.9
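From inside the source directory, the build usually continues with ./configure --add-module=../nginx-rtmp-module-master followed by make and make install. Once Nginx is built, a minimal RTMP block in nginx.conf for receiving a live stream looks roughly like this (a sketch based on the nginx-rtmp-module documentation; the application name live is arbitrary):

```nginx
rtmp {
    server {
        listen 1935;          # default RTMP port
        chunk_size 4096;

        application live {    # publish to rtmp://<server-ip>/live/<stream-key>
            live on;          # accept live streams
            record off;       # do not save streams to disk
        }
    }
}
```

You could then publish to this endpoint and play it back with VLC's "Open Network Stream" feature.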

Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, all the services can be created and started.

Compose works in all environments: production, staging, development, testing, and CI workflows. Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

This is what a docker-compose.yml file looks like:

version: '3'
services:
  web:
    build: .
    ports:
    - "5000:5000"
    volumes:
    - .:/code
    - logvolume01:/var/log
    links:
    - redis
  redis:
    image: redis
volumes:
  logvolume01: {}

Installing Compose in CentOS 7

First, run the command below to download the latest version of Docker Compose.

# sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Next you have to apply executable permissions to the binary.

# sudo chmod +x /usr/local/bin/docker-compose
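You can then verify the installation; this is a sketch of the usual post-install check (the exact version string will depend on the release you downloaded):

```shell
docker-compose --version
```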

Make an application using docker compose

To do this you should already have both Docker Engine and Docker Compose installed. The first thing we have to do is define the application dependencies. Let's create a directory for the project by using the commands below.

$ mkdir composetest
$ cd composetest

Then create a file called app.py in your project directory and paste this in.

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)


def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

In this example, redis is the hostname of the Redis container on the application's network, and 6379 is the default Redis port.
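The retry loop in get_hit_count() is worth understanding on its own: Redis may not be ready when the web container starts, so the function retries before giving up. Below is a self-contained sketch of the same logic, with a hypothetical FlakyCache standing in for Redis so it runs without any server; the retry flow mirrors the tutorial code.

```python
import time

class FlakyCache:
    """Hypothetical stand-in for Redis: fails a few times, then works."""
    def __init__(self, failures):
        self.failures = failures  # how many incr() calls fail first
        self.hits = 0

    def incr(self, key):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("cache not ready")
        self.hits += 1
        return self.hits

def get_hit_count(cache, retries=5):
    # Same shape as the tutorial's get_hit_count(): retry on connection
    # errors, re-raise once the retry budget is exhausted.
    while True:
        try:
            return cache.incr('hits')
        except ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.01)  # shortened from 0.5s for the sketch

cache = FlakyCache(failures=2)
print(get_hit_count(cache))  # survives two failures, then returns 1
```

With failures=2 and a budget of 5 retries the call succeeds; with more failures than retries, the ConnectionError propagates to the caller.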

Then create a requirements.txt file and paste this into it.

flask
redis

After that, we have to create a Docker file named Dockerfile and paste the following content into it.

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

This tells Docker to:

  • Build an image starting with the Python 3.7 image.
  • Set the working directory to /code.
  • Set environment variables used by the flask command.
  • Install gcc so Python packages such as MarkupSafe and SQLAlchemy can compile speedups.
  • Copy requirements.txt and install the Python dependencies.
  • Copy the current directory in the project to the workdir in the image.
  • Set the default command for the container to flask run.

The next step is to create a docker-compose.yml file in your project directory and write the content below into it.

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"

After all the steps above are done, the next thing is to build and run the app with Compose. From your project directory, start up your application by running this command.

# docker-compose up

Then, open your browser and go to http://localhost:5000/ or http://127.0.0.1:5000/ to see if the application is running. The webpage should show something like the picture.

Next, refresh the page and check whether the number changes. If it is correct, the number should increment like in the picture.

After that, open another terminal and check that the redis and web images are in the list. You can check the images by typing this command.

$ docker image ls

Next, let's add a bind mount for the web service by editing the docker-compose.yml file.

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: development
  redis:
    image: "redis:alpine"

Then we have to re-build and run the app with the updated Compose file.

# docker-compose up -d

Because the application code is now bind-mounted into the container, you can edit app.py on the host and see the change without rebuilding the image. For example, change the return line in hello() to:

return 'Hello from Docker! I have been seen {} times.\n'.format(count)

And finally, refresh the app in your browser. It should show something like the picture, with the number still incrementing.

Docker MariaDB

MariaDB is a community-developed fork of the MySQL relational database management system, intended to remain free under the GNU GPL. As a fork of a leading open source software system, it is notable for being led by the original developers of MySQL, who forked it due to concerns over its acquisition by Oracle. Contributors are required to share their copyright with the MariaDB Foundation.

The intent is also to maintain high compatibility with MySQL, ensuring a “drop-in” replacement capability with library binary equivalency and exact matching with MySQL APIs and commands. It includes the XtraDB storage engine for replacing InnoDB, as well as a new storage engine, Aria, that intends to be both a transactional and non-transactional engine perhaps even included in future versions of MySQL.

In this section I will show you how to connect MariaDB with Docker.

First, search for the MariaDB image in the Docker Hub repositories with this command.

# docker search mariadb

Once you have found an image that you want to use, you can download it via Docker. Some layers containing necessary dependencies will be downloaded too. Note that once a layer has been downloaded for one image, Docker will not need to download it again for another image. You can download the image by using the command below.

# docker pull mariadb 

Check if the image has been downloaded using this command.

# docker images

Next we have to run the image, but an image cannot run without a container. The command to create a container from the official MariaDB image is written below.

# docker run -d --name mariadb -e MYSQL_ROOT_PASSWORD=123456 --restart unless-stopped mariadb

To make sure the container has been created, we can check by running this command.

# docker ps

If mariadb appears in the IMAGE column, it means the container has been created.

After that, let's access the container via Bash using this command.

# docker exec -it mariadb bash

Log in to MySQL as the root user.

# mysql -uroot -p

Try to create the database with the name of database is test.

# create database test;

Check if the database has been created using this command.

# show databases;

After that, let's connect the MariaDB database with PHP.

First you have to make a directory as shown in the command below.

# mkdir -p /opt/www-data

After making the directory, you should download the PHP image by using these commands.

# sudo docker search php
# sudo docker pull php:7.2-apache

Next, start a PHP container linked to the MariaDB container, then enter it and install the mysqli extension with the commands below.

# docker run -d --name apache --restart unless-stopped -p 80:80 -v /opt/www-data:/var/www/html --link mariadb:mariadb php:7.2-apache
# docker exec -it apache sh
# docker-php-ext-install mysqli

Because the PHP container is linked to the mariadb container, we can connect to MariaDB directly by its hostname. Write the file index.php using this command.

# sudo nano /opt/www-data/index.php

<?php
  $db = new mysqli('mariadb', 'root', '123456', 'test');
  if (mysqli_connect_errno()) {
     echo '<p>' . 'Connect DB error';
     exit;
  }else{
     echo "Connection success";
  }
?> 

Restart Apache.

# apachectl restart

Check the IDs of the running containers using this command below.

# docker ps

Save the containers as new images using these commands (use the corresponding container ID for each).

# docker commit <container id> myphp:7.2-apache
# docker commit <container id> mymariadb:latest

Introducing Docker Hub

Docker Hub is a platform used for storing and sharing container images from an array of content sources. The content sources provided on Docker Hub include container community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers. Docker Hub can also host private repositories.

These are some of the features Docker Hub provides:

  • Private repositories, where you can push and pull your container images privately.
  • Automated builds, which automatically build container images from source code in GitHub or Bitbucket and push them to Docker Hub.
  • Teams and Organizations, which let you manage access to shared and private repositories.
  • Official Images, pull and use high quality container images provided by docker.
  • Publisher images, pull and use high quality container images provided by external vendors. Certified images also include support and guarantee compatibility with Docker Enterprise.
  • Webhooks, which trigger actions after a successful push to a repository, so you can integrate Docker Hub with other services.

After this introduction to Docker Hub, I will show you how to use it. Let's pull and push some images. First of all, you need to make an account on the Docker Hub website. Click here to go to the website.

After you have finished registering, you can log in by filling in your username and password with this command.

# docker login

*) If you see a note like “Login did not succeed, error: Cannot connect to Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?”, it means Docker isn't running. You can type this command to run it.
# systemctl start docker

Once you are logged in, pull an image from another repository and change its tag. The commands below show how to pull an httpd image and change its tag from “httpd:latest” to “tfdhiba13/httpd:v1”.

# docker pull httpd
# docker tag httpd:latest tfdhiba13/httpd:v1

After that, you can check if the tag has been changed with this command.

# docker images

If the Docker image is ready, you can push it to your Docker Hub repository account by using this command.

# docker push tfdhiba13/httpd:v1

Then check your Docker Hub account to see whether the image has been uploaded. If it has, it will show something like the picture.

The final step is to check whether your Docker image runs or not, using this command below.

# docker run -itd --name mywebtest -p 8080:80 -v /mydata:/usr/local/apache2/htdocs tfdhiba13/httpd:v1

Finish and Good Luck !!

Reference
https://www.docker.com/products/docker-hub

Create a docker file and build an Apache web server docker image

In this section I will show you how to create a Dockerfile and build an Apache web server using a Docker image. Let's follow the steps below:

  1. Set up your VirtualBox network like the picture below.
  2. After that, start CentOS 7 in your VirtualBox and open the terminal.
  3. Change the user to super user.
    $ sudo su
  4. Make a directory and enter the directory you made.
    # mkdir /mydata
    # cd /mydata
  5. Start Docker with this command.
    # systemctl start docker
  6. Get the httpd image from the library.
    # docker pull httpd
  7. Check if the image has been downloaded. You can check with this command.
    # docker images
  8. Create a file like this.
    # echo "example of docker apache" > apache.htm
  9. Now run an Apache container that serves the directory we made. Run the command below.
    # docker run -itd --name mywebserver3 -p 8080:80 -v /mydata:/usr/local/apache2/htdocs httpd
  10. Once it runs successfully, open the URL 127.0.0.1:8080/apache.htm to check whether it is running. If it is running correctly, the webpage will show something like the picture.

Installation & Setting Up Docker on CentOS 7

In this section I will tell you how to install Docker on CentOS 7. Your computer or virtual machine must already have CentOS 7 installed and have access to the internet. To install Docker you must access the system as the super user. The steps are:

  1. First, open your Terminal.
  2. Change your user to super user with the command below.
$ sudo su

After you become super user, you can install Docker. But first, you must check whether Docker is already installed on your machine. You can check using the command below.

# rpm -qa | grep docker

If the result is empty, you can follow the steps below to install Docker.

  1. Install the required packages and add the Docker repository.
    # yum install -y yum-utils device-mapper-persistent-data lvm2
    # yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    yum-utils provides the yum-config-manager utility, while device-mapper-persistent-data and lvm2 are required by the devicemapper storage driver.
  2. Enable the edge and test repositories. These repositories are included in the docker.repo file above but are disabled by default. You can enable them alongside the stable repository if you want.
    # yum-config-manager --enable docker-ce-edge
    # yum-config-manager --enable docker-ce-test
  3. Now you can list the total available packages with the command below.
    # yum repolist
  4. After going through the above configured repositories, now let’s install the docker by using command below.
    # yum install docker-ce
  5. Once docker has been installed, you can check it by using this command.
    # rpm -qa | grep docker
    or
    # docker --version
  6. Docker has been installed; now we have to start and enable it using these commands.
    # systemctl start docker
    # systemctl enable docker
  7. Check if the docker has been running using this command.
    # systemctl status docker
  8. Test docker by running the most common hello-world image.
    # docker run hello-world

Congratulations, Docker has finally been installed on your computer !!

Docker Introduction

What is Docker?

Docker is a popular software platform that creates and manages containers for application development. Using containers to deploy an application is called containerization. There are many things that make containerization popular among developers, even though containers have been used for a long time.

These are the benefits provided by using containers:

  • Flexibility : Even the most complex applications can be containerized.
  • Lightweight : Containers leverage and share the host kernel, making them much more efficient in terms of system resources than virtual machines.
  • Portable : You can build locally, deploy to cloud, and run anywhere.
  • Loosely coupled : Containers are highly self-sufficient and encapsulated, allowing you to replace or upgrade one without disrupting others.
  • Scalable : You can increase and automatically distribute container replicas across a datacenter.
  • Secure : Containers apply aggressive constraints and isolations to processes without any configuration required on the part of the user.

What is a Docker image?

A Docker image is a read-only template from which containers are created. Docker containers are created from Docker images. An image is built in multiple layers that can only be read. At the bottom, we might have bootfs and an OS base image such as Debian. Higher layers could have custom software or libraries such as Emacs or Apache. This layering mechanism makes it easy to build new images on top of existing ones. When an image gets instantiated into a running container, a thin writable layer is added on top, called the container layer. This means that all changes go into the topmost layer. The underlying file system doesn't make a copy of the lower layers, since they are read-only. This helps us bring up containers quickly.
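The copy-on-write behavior described above can be sketched in plain Python: collections.ChainMap searches a list of dicts top-down and writes only to the first one, much like a container's writable layer sitting on read-only image layers. The file names and contents below are invented for illustration.

```python
from collections import ChainMap

base_os   = {"/bin/sh": "busybox shell"}    # read-only OS base layer
app_layer = {"/app/server.py": "v1"}        # read-only application layer
container = {}                              # thin writable container layer

# Lookups search container -> app_layer -> base_os; writes go to container.
fs = ChainMap(container, app_layer, base_os)

fs["/app/server.py"] = "v2"   # "modify" a file: the change lands in the
                              # container layer; the image layer keeps v1

print(fs["/app/server.py"])         # v2  (container layer wins)
print(app_layer["/app/server.py"])  # v1  (image layer untouched)
```

Discarding the container dict and rebuilding the ChainMap is the analogue of removing a container: the image layers come back unchanged.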
