
Tuesday, May 9, 2017

Docker as a Container Cluster (Docker Swarm)

Introduction
In this article I plan to use Docker as a container virtualization platform to create "tear-down" environments for Continuous Integration (CI)/Continuous Delivery (CD).
Containerization vs Virtualization
Containerization on Linux builds on the kernel's container isolation capabilities, commonly surfaced through LXC (Linux Containers) as shown below. LXC containers share the host OS kernel and achieve isolation using kernel features such as namespaces, cgroups and chroot (with management tooling such as libvirt layered on top). The benefit of containerization is that it is possible to stand up multiple isolated containers on a machine/Virtual Machine (VM)/host without the heavy burden of running a full OS inside each individual container.
With containerization it is possible to share a VM or a host across multiple projects and spread cost across those projects. What is more appealing, though, is to group containers into a logical cluster that provides high availability for typical server roles like web servers and application servers. I intentionally left the database server out of this clustering role list because the same limitation that exists with a typical DB cluster exists with its containerized equivalent: most database software does not handle multiple-write database servers well, so you have to orchestrate a solution relying on database options like mirroring, replication or active-passive standby. In a nutshell, clustering works well with stateless servers/containers and requires additional planning when it comes to stateful servers/containers. However, these stateful planning considerations are something you would have to factor in even without a containerized solution. What I am saying is that clusters in general are a good fit for stateless architectures; stateful architectures, especially traditional RDBMS databases, require more detailed planning but are still doable.
In the following sections I plan to explain how to configure Docker Swarm, a fairly new feature of Docker that allows Docker containers installed across hosts/VMs to communicate and be managed as one cluster.

Architectural Overview
Overview of the various elements depicted in the Architectural Diagram
Docker Swarm: Allows you to deploy services in a cluster and execute them as tasks. The number of tasks created depends on the number of replicas requested during service creation.
Manager Node: The manager node coordinates the installation of Docker services and the provisioning of containers for those services on worker nodes. A manager node can also participate as a worker node, but for performance reasons it is better to have no containers provisioned on the manager node so it can be used exclusively for cluster coordination. For high availability and fail-over it is better to create multiple Docker manager nodes.
Worker Node: A worker node executes the services that are deployed in the Docker swarm by spawning Docker containers and running tasks in them. In my example I have three services defined in my Docker swarm: one for the Apache web server acting as a reverse proxy, one for Tomcat to host my Java web application and one for the MySQL database server to contain my database objects (tables, users etc.).
Docker Containers: Docker containers run the Docker images as the tasks executed for the services. In my example there are three custom images: one built with Apache as the base image, configured to run as a reverse proxy for the application server; one built with Tomcat as the base image, with my Java web application deployed in it; and one built from the MySQL base image, with my application tables and data contained in the database.
Subnet: The overlay networks that Docker allows me to create to separate the three tiers via subnets.
Host: I used VMware Workstation for two hosts, each also referred to as a "node". One host acts as the manager node and the other acts as a worker node in the Docker swarm.

Service, Tasks and Container relationship
It's important to understand the relationship between services, tasks and containers. The diagram below shows that association.
In a nutshell, a service is what you define when you specify what to deploy in a Docker swarm. The Docker swarm then uses tasks to execute that service in Docker containers.
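Once a stack is deployed, this service/task/container relationship can be observed from the manager node. The commands below are standard Docker CLI; the service name shown assumes the stack name "mystack1" used later in this post:

```shell
# List the services defined in the swarm
docker service ls

# List the tasks (and the containers/nodes running them) for one service
docker service ps mystack1_webserver
```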
Development Environment
  1. For my two hosts in the architectural diagram I used VMware Workstation with Ubuntu installed in each VM
  2. For containers I used Docker (there are other container technologies out there, like rkt from CoreOS)
  3. For my Java web application I used Spring MVC for the front end and Hibernate as the ORM layer, with MySQL as the database (I am not going to spend much time explaining this, as it is not the focal point of this blog)

Docker Swarm installation steps
Step1
Let's install Docker on both of my host VMs (I am using Ubuntu as the OS for my VMs). The installation steps are pretty simple and are as follows:
Step1a
Install packages to allow apt to use a repository over HTTPS:
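The commands themselves are not reproduced in the post; the following is a sketch based on Docker's Ubuntu installation instructions of this era:

```shell
sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common
```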
Step1b
Add Dockers official GPG key:
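Again the command is not shown; per Docker's Ubuntu instructions it would be:

```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```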
Step1c
Set up stable repository to download Dockers from:
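A sketch of the repository setup command, following Docker's Ubuntu instructions (`lsb_release -cs` resolves to the Ubuntu codename of the VM):

```shell
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
```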
Step1d
Use apt-get to update and install Docker community edition
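The update-and-install step would look like this (installing the Community Edition package, `docker-ce`):

```shell
sudo apt-get update
sudo apt-get install -y docker-ce
```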
At this point the Docker engine is installed on the first host VM; since we have two hosts, repeat steps 1a through 1d on the other machine.

Step 2
Let's now create the Docker swarm.

Step 2a
Pick one node as manager node and run the following command
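The command is not reproduced in the post; a typical swarm initialization on the manager node looks like this (substitute the manager's own IP address for the placeholder):

```shell
docker swarm init --advertise-addr <MANAGER-IP>
```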

The output of the above command provides the join token and the manager address that you will need in step 2b to add the worker node to the Docker swarm.
Step 2b
Let’s add the worker node to the Docker swarm created in step 2a.
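The join command itself is not shown in the post; run on the worker node, it takes the token and manager address printed by `docker swarm init` (2377 is the default swarm management port):

```shell
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
```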
If you later need to retrieve the token required to join a worker node to the Docker swarm, run the following command on the manager node:
docker swarm join-token -q worker

At this point my two node Docker swarm is created.
Step 3
Step 3a
To deploy services into the Docker swarm I plan to use Docker Compose (v3), which lets me define the services I need to deploy in my Docker swarm in YAML format.
The lines below provide the Docker Compose file ("mystack.yml") that I will be using to create three Docker services, described below:
  • Service 1 (tagged as – webserver:) – this service configuration section represents my apache web server acting as a reverse proxy. I am asking to create two replicas of the web server
  • Service 2 (tagged as – appserver:) – this service configuration section represents my application server and contains my Java web application. I am asking to create two replicas of the application server
  • Service 3 (tagged as – mysql:) – this service configuration section represents my database server with my custom tables and users. I am asking to create one replica of the database server. NOTE: Although the architectural diagram shows two replicas, I am asking for only one replica here to simplify my deployment steps.
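The compose file itself is not reproduced in the text; the sketch below is a hypothetical reconstruction consistent with the description (the three image names and the MySQL root password are placeholders I made up, not the author's actual values):

```yaml
version: "3"
services:
  webserver:
    image: my-apache-revproxy   # placeholder: custom Apache reverse-proxy image
    ports:
      - "80:80"
    networks:
      - mywebnw
      - myappnw
    deploy:
      replicas: 2
  appserver:
    image: my-tomcat-app        # placeholder: custom Tomcat image with the Java web app
    networks:
      - myappnw
      - mydbnw
    deploy:
      replicas: 2
  mysql:
    image: my-mysql-db          # placeholder: custom MySQL image with tables and data
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder value
    networks:
      - mydbnw
    deploy:
      replicas: 1
networks:
  mywebnw:
    driver: overlay
  myappnw:
    driver: overlay
  mydbnw:
    driver: overlay
```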
A few other things that I describe in the Docker compose file
  • I define three overlay networks – “mywebnw”, “myappnw” and “mydbnw”. Each for the three application tiers – web, application and database respectively.
  • I am relying on stack name to be defined as “mystack1” so that the web server can reference my application server as “mystack1_appserver” and my application server can reference the mysql database as “mystack1_mysql”.
  • "mystack1_appserver" and "mystack1_mysql" are logical names within the Docker swarm. Docker swarm provides an internal load balancer to route requests to the multiple replicas behind them, and these logical names are associated with a Virtual IP (VIP) in Docker's internal DNS server.
  • If you have created external overlay networks and wish to reference them in your Docker Compose file ("mystack.yml") above, you can do so using the following YAML code
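The YAML snippet referenced above is not reproduced in the text; under Compose v3 syntax a networks section pointing at pre-created external networks would look roughly like this (the external network names are the ones given in the text):

```yaml
networks:
  mywebnw:
    external:
      name: my-web-network
  myappnw:
    external:
      name: my-app-network
  mydbnw:
    external:
      name: my-db-network
```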
Everything else in the Docker Compose file ("mystack.yml") stays the same except the above code snippet. Here I am referencing three externally created overlay networks, "my-web-network", "my-app-network" and "my-db-network", representing the same subnet layers as before; the difference is that instead of the networks being created as part of the stack deployment, the file now references external overlay networks. I created those three overlay networks using the following commands
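The commands are not shown in the post; creating swarm-scoped overlay networks from the manager node would look like this:

```shell
docker network create --driver overlay my-web-network
docker network create --driver overlay my-app-network
docker network create --driver overlay my-db-network
```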
Step 3b
Now that I have the Docker Compose file all I need to do next is deploy the Docker services using the following command on manager node
docker stack deploy -c "<full path to my compose file (mystack.yml)>" mystack1

Docker UI
If you need to use a web User Interface to manage Docker there are a few options - Rancher, Portainer and Shipyard. I am going to show the steps to install Portainer next.
Step 1
Portainer can run as a service in the Docker swarm; use the following command to install it from the manager node.
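The command is not reproduced in the post; the Portainer documentation of this era suggested deploying it as a swarm service pinned to a manager node, with the Docker socket bind-mounted so Portainer can talk to the engine:

```shell
docker service create \
  --name portainer \
  --publish 9000:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer
```

The UI is then reachable on port 9000 of the manager node.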

Conclusion
Using Docker as a containerized solution helps you save cost by utilizing existing servers/infrastructure for multiple projects, thereby spreading the cost across those projects. It also helps your Continuous Integration (CI)/Continuous Delivery (CD) SDLC process: creating containers with Docker Compose becomes an Infrastructure as Code (IaC) capability which, when coupled with application code deployment and version control software like Git, allows you to be more Agile. Another benefit of the container approach is that you can create prebuilt Docker containers that developers can use on their laptops for development; the same container image, along with the developer code, can then be used by the testing team for acceptance testing, thereby eliminating the age-old problem: "IT ONLY WORKS ON THE DEVELOPER'S MACHINE". I hope this blog is helpful to you. Enjoy.