All About Docker Using Virtualization

KUSHAGRA BANSAL
12 min read · Nov 24, 2020


Prerequisite: a VM running Ubuntu 16.04 or later.

1. Playing with Vagrant

● Installing Vagrant

To install Vagrant on your Ubuntu system, follow these steps:

❖ We need to install VirtualBox first, which will act as the hypervisor
$ sudo apt install virtualbox

❖ Now, we will install vagrant
$ sudo apt install vagrant

❖ Verify the vagrant installation

$ vagrant --version

For Windows, download the installer from https://www.vagrantup.com/downloads.html

● Creating basic vagrant box using VirtualBox virtualization

❖ Create the project directory and switch to it:

$ mkdir ~/first-vagrant

$ cd ~/first-vagrant

❖ The next step is to initialize a new Vagrantfile using the vagrant init command:

$ vagrant init

❖ You can open the Vagrantfile, read the comments and make adjustments according to your needs.

❖ Now we can bring the machine up:

$ vagrant up

❖ To ssh into the virtual machine, simply run:
$ vagrant ssh

❖ You can stop the virtual machine with the following command:
$ vagrant halt

❖ To destroy the vagrant machine, run the command below:

$ vagrant destroy
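
For reference, after vagrant init the only change you usually need before vagrant up is choosing a box. A minimal Vagrantfile, assuming the publicly available ubuntu/xenial64 box, looks like this:

$ cat Vagrantfile

Vagrant.configure("2") do |config|

  config.vm.box = "ubuntu/xenial64"

end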

2. Understanding the Vagrantfile

● Configuring your sandbox

❖ Configuring CPU resources in Vagrant
To customise the CPU resources used by your VM, you need to update a few parameters in the Vagrantfile:

config.vm.provider "virtualbox" do |v|

  v.cpus = 2

end

❖ Configuring network resources in Vagrant
To customise the network settings of your VM, you need to update a few parameters in the Vagrantfile:

Vagrant.configure("2") do |config|

  config.vm.network "private_network", ip: "192.168.50.4",
    virtualbox__intnet: true

end

❖ Configuring memory resources in Vagrant
To customise the memory allocated to your VM, you need to update a few parameters in the Vagrantfile:

config.vm.provider "virtualbox" do |v|

  v.memory = 1024

end
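
These provider settings can live together in one block; a minimal sketch combining the CPU and memory examples above:

Vagrant.configure("2") do |config|

  config.vm.provider "virtualbox" do |v|

    v.cpus = 2      # number of virtual CPUs

    v.memory = 1024 # RAM in MB

  end

end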

❖ Provisioning a shell script in Vagrant
In this scenario, we will set up an Apache server on boot using a shell script named provision.sh, which will look something like this:
#!/usr/bin/env bash

echo "Installing Apache and setting it up..."

apt-get update >/dev/null 2>&1

apt-get install -y apache2 >/dev/null 2>&1

rm -rf /var/www

ln -fs /vagrant /var/www

With the shell script created, the next step is to configure Vagrant to use it. Update the Vagrantfile so that it contains the box, a port forward, and the provisioner:

Vagrant.configure("2") do |config|

  config.vm.box = "precise64"

  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provision "shell", path: "provision.sh"

end
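
With this in place, provision.sh runs automatically on the first vagrant up. If the machine is already running, you can re-apply the script without recreating the VM:

$ vagrant provision

or reload and provision in one step:

$ vagrant reload --provision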

3. Docker Machine

● Installation of Docker

❖ First, in order to ensure the downloads are valid, add the GPG key for the official Docker repository to your system:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

❖ Add the Docker repository to APT sources:

$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

❖ Next, update the package database with the Docker packages from the newly added repo:

$ sudo apt-get update

❖ Make sure you are about to install from the Docker repo instead of the default Ubuntu 16.04 repo:

$ apt-cache policy docker-ce

❖ Finally, install Docker:

$ sudo apt-get install -y docker-ce

❖ Check that it’s running:

$ sudo systemctl status docker

● Configuration

❖ Executing the Docker Command Without Sudo

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

$ sudo usermod -aG docker ${USER}

❖ To apply the new group membership, you can log out of the server and back in, or you can type the following:

$ su - ${USER}

You will be prompted to enter your user’s password to continue. Afterwards, you can confirm that your user is now added to the docker group by typing:
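
$ id -nG

(id -nG lists the groups for the current user; you should see docker among them.)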

❖ If you need to add a user to the docker group that you’re not logged in as, declare that username explicitly using:

$ sudo usermod -aG docker username

4. Docker

● Working with Docker images

❖ To check whether you can access and download images from Docker Hub, type:

$ docker run hello-world

❖ You can search for images available on Docker Hub:

$ docker search apache

❖ You can pull images from Docker Hub (the official Apache image is named httpd):

$ docker pull httpd

❖ To run the Apache container, run the command below:

$ docker run -it -d httpd

❖ To list the images present in the local system

$ docker images

● Running Docker containers

❖ Run a docker container from the "hello-world" image (it has no shell; it just prints a message and exits).

$ docker run hello-world

❖ Pull “alpine” image from docker registry and see if image is available in your local image list.

$ docker pull alpine

$ docker images

❖ Pull a specific version of the "alpine" image from the docker registry.

$ docker pull alpine:3.7

❖ Run a docker container from local image “alpine” and run an inline command “ls -l” while running container.

$ docker run -it alpine ls -l

❖ Log in to a container created from the "alpine" image.

$ docker run -it alpine /bin/sh

❖ Detach yourself from the container without exiting or killing it by pressing:

Ctrl+P followed by Ctrl+Q
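
If you need to get back into the detached container later, you can reattach to it (using the ID from docker ps):

$ docker attach {container_id}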

❖ Check running containers; add the -a flag to also see stopped containers.

$ docker ps

$ docker ps -a

❖ Stop running container.

$ docker stop {container_id}

❖ Start container that was stopped earlier.

$ docker start {container_id}

❖ Try to remove “alpine” image from your local system.

$ docker rmi alpine
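
Note that the removal fails if containers created from the image still exist (even stopped ones); remove those containers first and then retry:

$ docker rm {container_id}

$ docker rmi alpine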

5. Dockerfile

● Containerizing application

❖ Create a dummy Node.js app: create a file named index.js with the content below.

$ cat index.js

var os = require("os");

var hostname = os.hostname();

console.log("hello from " + hostname);

❖ Create a file named Dockerfile and write code as per the steps mentioned.

Use alpine image.

Add the author/maintainer name in the Dockerfile.

Run the commands apk update and apk add nodejs.

Copy index.js to /app.

change your working directory to /app

specify the default command to be run upon container creation as mentioned below.

node index.js

Now your Dockerfile will look like this.

$ cat Dockerfile

FROM alpine

MAINTAINER SUDIPT

RUN apk update

RUN apk add nodejs

RUN mkdir /app

COPY index.js /app

WORKDIR /app

CMD ["node", "index.js"]

This way you have dockerized your node application.

● Building Images

❖ Now that we have dockerized the app, we will build an image from the Dockerfile.

$ docker build -f Dockerfile .

● Tagging

❖ Tag image with name “hello:v0.1”

$ docker build -t hello:v0.1 .
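
You can now run a container from the tagged image; with the index.js above, it should print a line like "hello from <container-id>":

$ docker run hello:v0.1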

6. Docker Extras

● Docker port binding

❖ Pull nginx image from dockerhub.

$ docker pull nginx

❖ Run a container from nginx image and map container port 80 to system port 80.

$ docker run -it -d -p 80:80 nginx

❖ Display all mapped ports of the nginx container.

$ docker container port {container_id}
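
To confirm the mapping works, request the default nginx page from the host:

$ curl http://localhost:80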

❖ Run a docker container named "containerexpose" from the nginx image and expose port 80 of the container without mapping it to any host port.

$ docker run -it -d --expose=80 --name containerexpose nginx

● Docker volumes

❖ Create docker volume named “dbvol”

$ docker volume create --name dbvol

$ docker volume ls

❖ Run docker container from wordpress image and mount “dbvol” to /var/lib/mysql

$ docker run -it -v dbvol:/var/lib/mysql wordpress bash

❖ Display all docker volumes.

$ docker volume ls
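
To see where Docker keeps a volume's data on the host (its Mountpoint), inspect it:

$ docker volume inspect dbvol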

❖ Create another docker volume named “testvol”

$ docker volume create --name testvol

❖ Remove docker volume “testvol”

$ docker volume rm testvol

● Docker linking

❖ Run a container in detached mode with name “db” from image “training/postgres”

$ docker run -it -d --name db training/postgres

❖ Run another container in detached mode with name “web” from image “training/webapp”, link container “db” with alias “mydb” to this container and finally pass an inline command “python app.py” while running container.

$ docker run -it -d --name web --link db:mydb training/webapp python app.py

❖ Open a bash shell in the "web" container and test the link by pinging "mydb".

$ docker exec -it web bash

And then run
# ping mydb
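
The alias resolves because legacy links add a host entry for "mydb" to the web container's /etc/hosts; you can verify this from the same bash session:

# cat /etc/hosts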

● Monitoring (Docker stats)

❖ Run a container from nginx image and map container port 80 to system port 80.

$ docker run -it -d -p 80:80 nginx

❖ Run the command below to see and monitor the stats of the running containers:

$ docker stats

❖ To see the logs of the container

$ docker logs {container_id}
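
To stream the logs continuously (similar to tail -f), add the -f flag:

$ docker logs -f {container_id}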

7. DTR

● Docker hub

❖ Create an account on docker hub with a username.

❖ Pull an image named “centos” from dockerhub.

$ docker pull centos:7

● Private registries

❖ Create a private repository named mycentos on Docker Hub.

❖ Tag the "centos" image as "mycentos" in your repository with version 1.1

$ docker image tag centos:7 username/mycentos:v1.1

❖ Login to your Docker Hub account

$ docker login

● Publishing images

❖ Push the image to your "mycentos" repository on Docker Hub.

$ docker push username/mycentos:v1.1

❖ Do command line logout on docker hub.

$ docker logout

8. Docker Compose

● Installation

❖ Install docker-compose on your machine, if not already installed.

$ sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

$ sudo chmod +x /usr/local/bin/docker-compose

❖ Check docker-compose version.

$ docker-compose --version

● Creating compose files

❖ Create a directory named nginx in your root.

$ sudo mkdir nginx

❖ Switch to that directory and create a file named docker-compose.yml

$ cd nginx

$ sudo vi docker-compose.yml

❖ Use compose file format version 3 to create the docker-compose.yml file.

Create a service named “databases”. Use image named “mysql”

Map container port 3306 to host machine port 3307.

Add environment variables named “MYSQL_ROOT_PASSWORD”, “MYSQL_DATABASE”, “MYSQL_USER” and “MYSQL_PASSWORD” along with corresponding values for all.

$ cat evs.env

MYSQL_ROOT_PASSWORD=redhat08

MYSQL_DATABASE=nginxdb

MYSQL_USER=nginxuser

MYSQL_PASSWORD=redhat08

(The mysql image rejects MYSQL_USER=root, since the root user is already configured via MYSQL_ROOT_PASSWORD, so a regular user is used here.)

Add another service named “web”

Use image “nginx”

$ cat docker-compose.yml

version: '3'

services:
  databases:
    image: mysql
    ports:
      - "3307:3306"
    env_file:
      - evs.env
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - databases

● Running images using docker-compose

❖ Save the docker-compose.yml file and bring the stack up in detached mode.

$ docker-compose up -d

❖ Verify the nginx service is up and accessible on the machine.

$ curl localhost:80
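
You can also list the services started by the compose file, along with their state and port mappings:

$ docker-compose ps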

❖ Stop and remove your docker container using docker-compose.

$ docker-compose down

9. Docker Swarm

Prerequisite: Here, we will spin up three virtual machines (Vagrant boxes in our case) and set up a swarm cluster with one manager and two worker nodes.

● Create Swarm

❖ Run the following command to create a new swarm:

$ docker swarm init --advertise-addr <MANAGER-IP (the IP of the vagrant machine that will act as the manager)>

For example, the following command creates a swarm on the manager machine:

$ docker swarm init --advertise-addr 192.168.99.100

OUTPUT:

Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

❖ To add a worker to this swarm, run the following command:

$ docker swarm join \

--token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c \

192.168.99.100:2377

❖ Run docker info to view the current state of the swarm:

$ docker info

Containers: 2

Running: 0

Paused: 0

Stopped: 2

…snip…

Swarm: active

NodeID: dxn1zf6l61qsb1josjja83ngz

Is Manager: true

Managers: 1

Nodes: 1

…snip…

❖ Run the docker node ls command to view information about nodes:

$ docker node ls

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

dxn1zf6l61qsb1josjja83ngz * manager1 Ready Active Leader

❖ Add nodes to the cluster

Run the command produced by the docker swarm init output in the "Create Swarm" step above to join a worker node to the existing swarm:

$ docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2e7c 192.168.99.100:2377

This node joined a swarm as a worker.

NOTE: Repeat the above step on every node that has to be part of the swarm cluster.

❖ Now, from the manager node, run the docker node ls command to check the status of the joined nodes:

$ docker node ls

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

03g1y59jwfg7cf99w4lt0f662 worker2 Ready Active

9j68exjopxe7wfl6yuxml7a7j worker1 Ready Active

dxn1zf6l61qsb1josjja83ngz * manager1 Ready Active Leader

The MANAGER column identifies the manager nodes in the swarm. The empty status in this column for worker1 and worker2 identifies them as worker nodes.

Swarm management commands like docker node ls only work on manager nodes.

● Deploy service

❖ Open a terminal and ssh into the machine where your manager node runs; in this example, a machine named manager1.

Run the following command

$ docker service create --replicas 1 --name helloworld alpine ping docker.com

The docker service create command creates the service.

The --name flag names the service helloworld.

The --replicas flag specifies the desired state of 1 running instance.

The arguments alpine ping docker.com define the service as an Alpine Linux container that executes the command ping docker.com.

❖ Run docker service ls to see the list of running services:

$ docker service ls

ID NAME SCALE IMAGE COMMAND

9uk4639qpg7n helloworld 1/1 alpine ping docker.com

● Inspect the service

❖ Run docker service inspect --pretty <SERVICE-ID> to display the details about a service in an easily readable format.

To see the details on the helloworld service:

$ docker service inspect --pretty helloworld

ID: 9uk4639qpg7npwf3fn2aasksr

Name: helloworld

Service Mode: REPLICATED

Replicas: 1

Placement:

UpdateConfig:

Parallelism: 1

ContainerSpec:

Image: alpine

Args: ping docker.com

Resources:

Endpoint Mode: vip

❖ Run docker service ps <SERVICE-ID> to see which nodes are running the service:

$ docker service ps helloworld

NAME IMAGE NODE DESIRED STATE LAST STATE

helloworld.1.8p1vev3fq5zm0mi8g0as41w35 alpine worker2 Running Running 3 minutes

In this case, the one instance of the hello world service is running on the worker2 node. You may see the service running on your manager node. By default, manager nodes in a swarm can execute tasks just like worker nodes.

Swarm also shows you the DESIRED STATE and LAST STATE of the service task, so you can see if tasks are running according to the service definition.

Run docker ps on the node where the task is running to see details about the container for the task.

$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

e609dde94e47 alpine:latest “ping docker.com” 3 minutes ago Up 3 minutes helloworld.1.8p1vev3fq5zm0mi8g0as41w35

● Scale the service in the swarm

❖ Run the following command to change the desired state of the service running in the swarm:

$ docker service scale <SERVICE-ID>=<NUMBER-OF-TASKS>

For example:

$ docker service scale helloworld=5

helloworld scaled to 5

❖ Run docker service ps <SERVICE-ID> to see the updated task list:

$ docker service ps helloworld

NAME IMAGE NODE DESIRED STATE CURRENT STATE

helloworld.1.8p1vev3fq5zm0mi8g0as41w35 alpine worker2 Running Running 7 minutes

helloworld.2.c7a7tcdq5s0uk3qr88mf8xco6 alpine worker1 Running Running 24 seconds

helloworld.3.6crl09vdcalvtfehfh69ogfb1 alpine worker1 Running Running 24 seconds

helloworld.4.auky6trawmdlcne8ad8phb0f1 alpine manager1 Running Running 24 seconds

helloworld.5.ba19kca06l18zujfwxyc5lkyn alpine worker2 Running Running 24 seconds

You can see that swarm has created 4 new tasks to scale to a total of 5 running instances of Alpine Linux. The tasks are distributed between the three nodes of the swarm. One is running on manager1.

❖ Run docker ps to see the containers running on the node where you’re connected. The following example shows the tasks running on manager1:

$ docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

528d68040f95 alpine:latest “ping docker.com” About a minute ago Up About a minute helloworld.4.auky6trawmdlcne8ad8phb0f1

● Delete a service

❖ Run docker service rm helloworld to remove the helloworld service.

$ docker service rm helloworld

helloworld

❖ Run docker service inspect <SERVICE-ID> to verify that the swarm manager removed the service. The CLI returns a message that the service is not found:

$ docker service inspect helloworld

[]

Error: no such service: helloworld

Even though the service no longer exists, the task containers take a few seconds to clean up. You can use docker ps to verify when they are gone.

10. Kubernetes: Minikube

Prerequisite: You'll need Ubuntu 16.04 on your local (physical) machine; this setup won't work inside a VM unless nested virtualization is enabled, since Minikube itself boots a VirtualBox VM.

● Installation

❖ Install minikube

Step 1: Update system

Run the following commands to update all system packages to the latest release:

$ sudo apt-get update

$ sudo apt-get install apt-transport-https

$ sudo apt-get upgrade

Step 2: Install KVM or VirtualBox Hypervisor

$ sudo apt install virtualbox virtualbox-ext-pack

Step 3: Download minikube

You need to download the minikube binary and put it under the /usr/local/bin directory, since that directory is in your $PATH.

$ wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

$ chmod +x minikube-linux-amd64

$ sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Confirm version installed

$ minikube version

minikube version: v0.28.0

Step 4: Install kubectl on Ubuntu 16.04

We need kubectl, a command-line tool used to deploy and manage applications on Kubernetes. First, add the Google Cloud apt key:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Add Kubernetes apt repository:

$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Update the apt index and install kubectl:

$ sudo apt update

$ sudo apt -y install kubectl

Step 5: Starting minikube

Now that the components are installed, you can start Minikube. The VM image will be downloaded and configured as a single-node Kubernetes cluster.

$ minikube start
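
Once the start completes, confirm that the single-node cluster is up and kubectl can reach it:

$ minikube status

$ kubectl get nodes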

11. Deploying Pods and Services on Minikube

● Create a Deployment

❖ Create a Deployment

A Kubernetes Pod is a group of one or more Containers, tied together for the purposes of administration and networking. The Pod in this tutorial has only one Container. A Kubernetes Deployment checks on the health of your Pod and restarts the Pod’s Container if it terminates. Deployments are the recommended way to manage the creation and scaling of Pods.

Use the kubectl create command to create a Deployment that manages a Pod. The Pod runs a Container based on the provided Docker image.

$ kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

❖ View the Deployment:

$ kubectl get deployments

NAME READY UP-TO-DATE AVAILABLE AGE

hello-node 1/1 1 1 1m

❖ View the Pod:

$ kubectl get pods

NAME READY STATUS RESTARTS AGE

hello-node-5f76cf6ccf-br9b5 1/1 Running 0 1m

❖ View cluster events:

$ kubectl get events

❖ View the kubectl configuration:

$ kubectl config view

● Create a service

By default, the Pod is only accessible by its internal IP address within the Kubernetes cluster. To make the hello-node Container accessible from outside the Kubernetes virtual network, you have to expose the Pod as a Kubernetes Service.

❖ Expose the Pod to the public internet using the kubectl expose command:

$ kubectl expose deployment hello-node --type=LoadBalancer --port=8080

The --type=LoadBalancer flag indicates that you want to expose your Service outside of the cluster.

❖ View the Service you just created:

$ kubectl get services

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

hello-node LoadBalancer 10.108.144.78 <pending> 8080:30369/TCP 21s

kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23m

On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.

❖ Run the following command:

$ minikube service hello-node
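
If you only want the URL printed instead of having a browser window opened, pass the --url flag:

$ minikube service hello-node --url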

● Delete deployment and services

❖ Now you can clean up the resources you created in your cluster:

$ kubectl delete service hello-node

$ kubectl delete deployment hello-node

❖ Optionally, stop the Minikube virtual machine (VM):

$ minikube stop

❖ Optionally, delete the Minikube VM:

$ minikube delete
