Docker From Scratch Session 1

What is Docker & why use Docker? (What and Why)

Docker makes it really easy to install and run software without worrying about setup and dependencies.

Case Study:
1. Try installing Redis on our local machine through a terminal command.

Cmd: docker run -it redis
After running this command we have an instance of Redis up and running.

When we ran the above command, the Docker CLI reached out to Docker Hub via the Docker server and downloaded a single file called an image. An image contains all the dependencies and configuration required to run a specific program (here, Redis).

A container is an instance of an image: a running program with its own isolated set of hardware resources (memory, networking, and hard drive space).

Installation for MacOS

When we install Docker on our local system, it comes with two major tools:
1. Docker Client: Also called the Docker CLI, this is the program we interact with from our terminal. We enter commands in the terminal and issue them to the Docker client, which in turn communicates with another tool called the "Docker Server".

2. Docker Server: The tool responsible for downloading images, creating and running containers, etc.

Steps to install docker for Mac

USING DOCKER CLIENT

1. Our very first command: 'docker run hello-world'.
After running this command we get the following output: 'Hello from Docker!'

The above output shows that your installation appears to be working correctly.

When we ran docker run hello-world, it started up the Docker CLI. The Docker CLI is in charge of taking commands from us, doing a little bit of processing, and then communicating with the Docker server.

Running docker run hello-world meant that we wanted to start up a new container using the image named hello-world. The hello-world image contains a program whose whole purpose is to print the message 'Hello from Docker!'.

When the Docker server saw that we wanted to start a container using this image, it checked whether the image was already present locally (in the image cache). Since it was not, the Docker server contacted Docker Hub, a repository of free public images that we can download and run.

The Docker server downloaded the image from the hub and stored it in the image cache, so that in the future the server won't need to contact Docker Hub; it will load the image directly from the cache.

After downloading the image, the server created the container. As we know, a container is an instance of an image whose sole purpose is to run a specific program, and as a result we get the output.
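The whole flow can be replayed from the terminal. The output comments below are abridged and may differ slightly across Docker versions:

```shell
# First run: the image is not in the local cache, so it is pulled from Docker Hub
docker run hello-world
#   Unable to find image 'hello-world:latest' locally
#   latest: Pulling from library/hello-world
#   ...
#   Hello from Docker!

# Second run: the image is served from the local cache, so nothing is downloaded
docker run hello-world
#   Hello from Docker!

# The cached image is now listed locally
docker images hello-world
```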

What is a Container?

Before understanding the container we need to have a high level overview of how the OS works on our computer.

Most operating systems have something called a 'kernel'. A kernel is a running software process that governs access between all the programs running on our computer and all the physical hardware connected to it.

Suppose we want to write a file to our hard disk via Node.js. Node does not communicate directly with the physical device; instead, Node.js talks to the kernel via a system call, and the kernel persists the file to the hard disk.

Minor Case Study

Suppose we have two software programs, Chrome and Node.js, and both require Python, but with different version numbers. On our hard disk we have only Python v2, and in this hypothetical scenario we cannot have two versions of Python installed on the system.

As a result, only Chrome runs successfully, since Python v2 is the version installed on our hard disk.

How To Solve Such an Issue? (Imp Point)

One way is to use an OS feature (Linux only) known as namespacing, which creates a separate segment of the hard disk dedicated to Python v3. The kernel looks at each incoming system call, figures out which process it is coming from, and directs the call to the appropriate segment.

NOTE: This minor case study has no relation to a real-world scenario, as neither Chrome nor Node requires Python as a major dependency.

A container is really a running process, or set of processes, that has a grouping of resources specifically assigned to it.

How are an image and a container linked to each other?

The following things happen when we turn an image into a container:

1. The kernel isolates a section of the hard drive and makes it available only to the container.
2. The file-system snapshot (e.g. Python and Chrome) is taken from the image and placed in that segment of the hard drive, where the programs get installed.
3. The startup command is executed (e.g. run Chrome), so a new instance of the process gets created with its own isolated set of resources on the hard disk.

Summary

We can say that a container is a running process along with a subset of the physical resources (hard disk, RAM, etc.) on our computer that are allocated to that specific process only.

NameSpace & Control Group explained

Namespacing: the process of segmenting resources such as the hard disk based on the process asking for them. With namespacing we can isolate resources per process; we can also namespace a process to restrict its area of the hard disk, its network devices, or its ability to talk to other processes.

Control Grouping: through control groups we can restrict the amount of resources a particular process can use, such as the amount of memory, the amount of CPU, and the amount of network bandwidth.

Together, these two features let us isolate a single process and limit the resources and bandwidth it can make use of.

These features are not included in all operating systems; they are specific to Linux.
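Docker exposes these kernel features through flags on docker run. As a quick sketch (the limit values here are arbitrary):

```shell
# Control groups in action: cap the container at 256 MB of memory
# and half a CPU core, then run a short-lived command inside it
docker run --memory=256m --cpus=0.5 busybox echo "resource-limited container"
```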

When we install Docker on our machine (on macOS or Windows), we actually install a Linux virtual machine, so as long as Docker is running, the Linux VM is running as well. All our containers are created inside this Linux VM.

Inside the Linux VM we have the Linux kernel, which hosts the running processes.

Kernel will be in charge of limiting/isolating access to different hardware resources on our computer.

NOTE: When we run the docker version command in our terminal, we see a config that lists OS/Arch as linux.

*******************************************************************************************************

SECTION 2: Manipulating Containers with the Docker Client

# Overriding Default Commands.

In this section we will learn how to override the default command that gets executed inside a container. We will run docker run <image name> followed by an alternate command to be executed inside the container after it starts up.

The default command included inside the image will then not be executed.

Case Study: I want to see the files/folders inside a given directory/container.

cmd: docker run busybox ls. Here "ls" is the <override cmd>, which prints all the files and folders inside the given directory. These printed folders are present only inside the container, not on our machine.

The same "ls" command won't work with the hello-world image, because the "ls" program is present in the busybox file system image but not in the hello-world file system image.
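A few override examples, sketched with busybox (the echo command is just an arbitrary illustration):

```shell
# Whatever follows the image name replaces the image's default command
docker run busybox ls        # lists the root of the container's file system
docker run busybox echo hi   # runs echo instead of the default command

# This fails: the hello-world image has no "ls" executable in its file system
docker run hello-world ls
```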

# Listing all running containers

Cmd1: docker ps. The <ps> command lists all running containers.

Case Study: We will see the list of containers running on the docker.

Let's run the command "docker run busybox ping google.com". This command attempts to ping Google's server and keeps running, so meanwhile we can open another terminal and run "docker ps".

In the output we see the container ID, image, command, created (when it started), status (how long it has been up), ports, and name (a randomly generated name).

Press ctrl + C on keyboard to stop the process.

Cmd2: docker ps --all, which lists all the containers we have ever created.
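The case study above can be sketched as two terminals side by side:

```shell
# Terminal 1: start a long-running container
docker run busybox ping google.com

# Terminal 2: while the ping is running, list the running containers
docker ps

# Terminal 2: list every container we have ever created, running or exited
docker ps --all
```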

# Container Life Cycle

Creating a container and running it are two different processes. The docker run command is a combination of the docker create and docker start commands.

The docker create command simply creates a container out of an image, and docker start simply starts the container.

Creating a container means putting the FS snapshot (dependencies & config) into the hard disk segment dedicated to that specific process.

Starting the container means executing the startup command, which starts the process inside the container.

docker start -a <id of the container> executes the primary startup command inside the container.
The "-a" flag makes Docker watch for output from the container and print it to our terminal.
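A minimal create/start sequence (the container ID printed by docker create will differ on every machine):

```shell
# Create the container only: the FS snapshot is set up, but nothing runs yet.
# This prints the new container's ID.
docker create busybox echo hi there

# Start it: the startup command (echo hi there) is executed.
# -a attaches to the container's output, so "hi there" appears in our terminal.
docker start -a <container id>
```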

** Difference between docker run and docker start.

There is a very small difference between docker run and docker start: by default, docker run shows all the logs and information coming out of the container, whereas docker start does not.

Restarting stopped container

If a container has exited, we can still start it back up; just because a container has stopped doesn't mean it is dead or cannot be used.

To start the container back up, take its container ID and execute the docker start -a <container ID> command.

What actually happened in the background?
Initially, we ran the docker run (or docker create) command, which took the FS snapshot out of the image and placed a reference to it inside the container.
We then provided the override command <echo hi there>; that was the primary startup command run inside the container, after which the container exited successfully.
After it exited, we ran the docker start command a second time, and the same command <echo hi there> was re-issued inside the container.
Once a container has been created, we cannot replace its default command; it is fixed from the moment the container first starts up with it.

# Removing Stopped Container

Stopped containers still take up space on the computer, so it is to our advantage to delete them rather than keep them in the stopped state.

Below is the cmd to delete all the containers

1. cmd: docker system prune
This command removes not only the stopped containers but also the build cache, which holds the images we fetched from Docker Hub. So, after running this command, we will need to re-download images from Docker Hub.

After executing the above command, we see the IDs of the deleted containers and how much space was reclaimed by deleting them.

# Retrieving Log Output

<image to be added>

The command docker logs <container ID> fetches all the log information associated with the specified container.

By running the docker logs command we are not restarting or recreating the container; we are only getting the logs that the container has emitted.
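A sketch of when docker logs is useful: a container started without -a prints nothing, but its output is still recorded:

```shell
# Create and start a container without attaching to its output
docker create busybox echo hi there   # note the printed container ID
docker start <container id>           # only echoes the ID back, not "hi there"

# Retrieve everything the container emitted, without re-running it
docker logs <container id>            # prints: hi there
```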

*. Stopping the container

There are two ways to stop a container: stopping it or killing it. Below are the commands for these actions:

When we issue a docker stop command, a signal (SIGTERM) is sent to the process running inside the container. This signal tells the process to shut down on its own time, which means we give the process time to shut itself down.
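The two commands side by side (by default, docker stop falls back to SIGKILL if the process has not exited after a 10-second grace period):

```shell
# Graceful: send SIGTERM and give the process time to clean up on its own
docker stop <container id>

# Immediate: send SIGKILL, terminating the process right away
docker kill <container id>
```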

In this section, we will try to run multiple processes inside a single container.

Small Case Study w.r.t Redis Server :

We will start up an instance of the Redis server on our machine using a Docker command:
Step 1: docker run redis

If we try to run the "redis-cli" command in another terminal to communicate with the Redis server process running in the container, it throws the error "Could not connect to the redis server at …".

Here we were trying to run the "redis-cli" command outside the container while our Redis server was up and running inside the container, so we need to do something in order to execute the "redis-cli" command inside the container.

Below is a diagrammatic representation of both processes being executed inside the container.

In order to execute a second process inside the container, there is a specific command:
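The command referred to here (shown only as an image in the original) is docker exec:

```shell
# Start a second process (redis-cli) inside the already-running container
docker exec -it <container id> redis-cli
```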

After running that command, we are able to start another process inside the container.

*. What’s the purpose of the “-it” flag ?

Quick Note: When we run Docker on our system, every single container runs inside a virtual machine running Linux, so all these processes are running inside the Linux world.

Every process we create inside a container has three communication channels attached to it, referred to as standard in, standard out, and standard error.

These channels are used to communicate information either into the process (STDIN) or out of the process (STDOUT and STDERR).

When we type a command into the terminal, it gets directed into the running process's STDIN channel. The STDOUT channel attached to a given process conveys the information coming out of the process; that information might end up redirected to the running terminal. If an error occurs inside redis-cli, it is channeled out through STDERR to the terminal screen.
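These channels are easy to see with plain shell, independent of Docker:

```shell
# STDOUT and STDERR are separate channels attached to the same process
echo "to stdout"        # written to STDOUT
echo "to stderr" 1>&2   # written to STDERR

# Because they are separate, they can be redirected independently:
# here STDOUT is discarded, so only the STDERR line reaches the terminal
{ echo "out"; echo "err" 1>&2; } 1>/dev/null
```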

**. How are all these things related to the "-it" flag?

The "-it" here is really two different flags; in reality it is "-i" and "-t", but for short we write "-it".
Adding "-i" ensures that what we type gets directed to the STDIN channel of the redis-cli process.
The "-t" flag is responsible for the nicely formatted output on the terminal screen.

*. Getting a cmd prompt in a Container
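The usual way to get a prompt is docker exec with the sh shell (sh is a safe default inside most images; some also ship bash):

```shell
# Open an interactive shell inside a running container
docker exec -it <container id> sh
# From this prompt, ordinary shell commands work: ls, cd, touch, etc.
```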

To exit the container's command prompt, either enter "exit" or press "Ctrl + D".

*. Two containers do not share their file system

Hands-On Practice:
Step 1:

Open your command prompt and run the command below:
"docker run -it busybox sh", which opens a terminal inside the container.

Now type the "ls" command to list the default directory contents inside the container.

Step 2:
Repeat step 1 in another command prompt on our system; after running it a second time, we will have a second instance of the container running.

We can check the status of the containers by simply typing the "docker ps" command in our terminal.

Step 3:
Now, in the terminal of the second container, run the command "touch newFile". This creates a new file inside the file system of the second container, which is totally isolated from the file system of container 1.

Learning: There is no sharing of file systems between containers unless we make a specific connection between them.
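The experiment above, condensed into a two-terminal sketch:

```shell
# Terminal 1
docker run -it busybox sh
ls            # the default directory listing of container 1

# Terminal 2
docker run -it busybox sh
touch newFile # created only in container 2's file system
ls            # newFile shows up here...

# Back in Terminal 1
ls            # ...but not here: the two file systems are isolated
```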
