FrankTheDevop


Docker Volumes


Hey everyone,

Today I want to explain a bit about Docker volumes: when and where to use them, and their benefits.

Remember that in the Docker Introduction I said a Docker container is stateless? For the contained data that is mostly true.
There are two exceptions:

  1. Docker Volumes
  2. External Connections

A completely stateless container would of course be useless for us. We couldn't store e.g. a grocery list or appointments.
And we couldn't store files like pictures from a wedding.

So the idea is to have a Docker container that is stateless, but to add a volume if you want to store files, and to configure external connections,
e.g. to issue an order.

Depending on the type of data (grocery list entry or wedding picture) we want to store it differently. The grocery entry belongs in a database,
the picture in a volume.

Both can be configured through the stack configuration. The external connection, e.g. to a database, is normally a simple environment variable containing the URL,
while the volume is a mounted filesystem.

An example Stack configuration for a single service can look like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port

I will explain the stack configuration file in another post, so here we focus only on the environment key.
Under it is an environment variable MYSQL_URL that can be read inside the container. So your API can read MYSQL_URL and connect to that address.
This way you can use the same Docker container and just point the configuration at your own computer for local testing, while in the production environment it points to e.g. an Amazon RDS MySQL instance.
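Inside the container, reading the variable takes only a few lines. A minimal Node.js sketch (the fallback URL and the use of Node's built-in URL class are my own illustration, not part of the stack file above):

```javascript
// Read the connection string that the stack configuration injected.
// Falls back to a local MySQL URL for testing on your own computer.
const mysqlUrl = process.env.MYSQL_URL || 'mysql://user:pass@localhost:3306/mydb';

// Node's built-in WHATWG URL class can split it into its parts.
const parsed = new URL(mysqlUrl);
console.log(parsed.hostname); // your local machine, or an RDS endpoint in production
console.log(parsed.port);
```

The same image then runs unchanged in every environment; only the injected URL differs.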

The Volume is somewhat different:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - youtube_publish_data:$mounted_path
      
volumes:
  youtube_publish_data:
    external: true

As you can see, we mount a volume under the service's volumes key. We give it a unique name, followed by the path inside the container where it should be mounted.
Additionally we add a root-level volumes key; under it the name of the volume, and under that we set external to true, so that the data lives outside the container of the logic, in
a separate data volume. (An external volume has to exist already; you can create it with "docker volume create".)
This means we have separated the logic from the data files. Depending on the volume driver, the data may be limited to the local machine (the default driver we use stores it on the local machine).

Still, this gives us e.g. the possibility to have a generic downloading container that is started by a message. This container uses an external data volume for the downloaded data.
A second container processes the downloads. For that we need a config like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - $volume_name:$mounted_path 
        
volumes:
  $volume_name:
    external:
      name: $Stackname_$volume_name

Now we can mount the same volume in two or more containers and access the data there. That is quite cool and saves us resources for transferring the data back and forth.
Depending on your setup you might want to have it this way and just transfer the final production file out, e.g. to Amazon S3, to save time and money on bandwidth.

I hope that helped you get a step further into distributed system architecture.
I'm myself a life-long student of it, and every day I learn a new possibility that helps to simplify my projects.
So if you have another idea how to do things, please contact me in the comments or via mail. I appreciate it.

Yours sincerely,
Frank

Sources:
(1) Docker Tip #12: A Much Better Development Experience With Volumes
(2) Using Docker Compose for NodeJS Development
(3) Understanding Volumes in Docker
(4) Docker and Continuous Integration: Code, Build, Test, Deploy
(5) Dive Into Docker – The Complete Docker Course for Developers

What's a Dockerfile?


Hey everyone,

after the introduction of Docker I thought I'd show you the recipe to create such a container. The recipe is like a recipe for your favorite lasagne and is called a Dockerfile.
It looks similar to this:
FROM node:8.15.0-alpine

# Add application folder
RUN mkdir /app
WORKDIR /app

# Add package.json and install deps
ADD package.json /app/package.json
RUN npm install

COPY . .

# Expose the listening port
EXPOSE 1235

# Start the server
CMD ["pm2-docker", "start", "process.yml"]

As you see it is quite short and nothing to be afraid of. So let's go through it:
FROM node:8.15.0-alpine
As mentioned in the introduction, Docker uses multiple layers to form a container. With this line we tell it which base image, in which version, to use as our starting point.

# Add application folder
Lines starting with a # are comments. Use them to explain anything that isn't clear to you.

RUN mkdir /app
RUN executes a custom command. In this case we create a directory app in the root of the container filesystem.

WORKDIR /app
WORKDIR is similar to the command-line command cd. It changes the current working directory for the following commands.
Here we change into the newly created directory /app.

ADD package.json /app/package.json
ADD copies a file into the container at the given path (/app/package.json).

RUN npm install
Again we run a custom command, here npm install, to install all dependencies of the Node.js project.

COPY . .
COPY copies all content of the current folder recursively into the given path in the container (the target . resolves relative to the last WORKDIR, so we copy to /app).
COPY respects a .dockerignore file, with which you can filter out files you don't want to copy over (e.g. log files).
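A minimal .dockerignore could look like this (which entries you need depends on your project; these are just common examples):

```
node_modules
*.log
.git
```

Excluding node_modules is especially useful here, since npm install inside the container rebuilds it anyway.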

CMD ["pm2-docker", "start", "process.yml"]
Docker needs a starting point, i.e. a piece of software to run. If you want to keep the container running, that software needs to keep running.
With CMD we define which software we want to use for this.
Here we use pm2-docker from the pm2 package to keep (Node.js) projects running (restarting them when necessary, etc.).

And that's it. Basically you can now already package your own project into a Docker container. Just execute "docker build -t $yourname/$your_container_name:$your_version ." and it will be built. An example would be "docker build -t frankthedevop/mytodolistapi:v0.0.1 ." (repository names must be lowercase).
You can then run it with "docker run $yourname/$your_container_name:$your_version", or in my case "docker run frankthedevop/mytodolistapi:v0.0.1". More details about the commands in a later post.

Just remember that we are locked inside the container, which means that all paths are inside the container too. If you have e.g. a configuration file in your personal user directory, make sure to copy it to the correct place inside the container, otherwise it doesn't exist there.
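For example, if your service reads a configuration file, copy it into the image in the Dockerfile (config.json is a placeholder name for illustration):

```
# Copy a config file from the build context into the image,
# so the path /app/config.json exists at runtime.
COPY config.json /app/config.json
```

Note that COPY can only read from the build context (the directory you run docker build in), so the file has to be there first.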

This is just the beginning, but I wanted you to see how short and easy such a recipe can be.
You find further links in the sources, and I will create a more extensive post about it soon, too.

Feel free to contact me about how to create a Dockerfile for your project. I will do my best to help you :).

Yours sincerely,
Frank

Sources:
(1) Docker Dockerfile Documentation
(2) Docker 101: Fundamentals & The Dockerfile
(3) Inspiration how to use Dockerfiles

What is Docker? Or: Why choose Docker?


Hey everyone,

today it's about one of the virtualisation solutions out there: Docker.
I want to help you understand quickly how it works.

What is Docker?
As mentioned, Docker is a software virtualisation solution. There are others out
there which work differently. The important point is that Docker uses container virtualisation.
That means with Docker you don't have to install a whole virtual machine if you want to
e.g. set up a webserver. You package the webserver into a container and Docker reuses the
operating system it runs on. This makes a difference in size, both in storage and in main memory.

In short it looks like this:
Host OS -> Docker -> Docker Container
                  -> Docker Container
                  -> Docker Container
                  -> ...

When to use it?
I personally use Docker for most of my projects with custom-developed, online-hosted software.
You have probably heard of the MEAN (MongoDB, Express, Angular, Node.js) stack. Docker is
often used to host those.

What's the benefit?
When you create software whose fundamentals work similarly (e.g. the same Node.js framework),
you write the recipe to package the software (the Dockerfile) once and can copy it over with
little changes.

Docker containers are stateless. You can rest assured that you can start a software from the same
configuration every time. You change the configuration of the data storage, but otherwise it works the same everywhere.
No more "but it works on my computer".

Containers are easy to combine into systems (stacks). You take multiple containers with different software and combine them
to achieve your goal. You can reuse the same containers: just point the configuration to e.g. another database and it works.
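As a sketch, such a stack could combine an API container with a database container (the image and service names here are placeholders, not a real project):

```
version: '3'

services:
  myapi:
    image: $yourname/myapi:v0.0.1
    environment:
      # Point the API at the mydb service below. Swap this URL to reuse
      # the same container with e.g. an external managed database.
      - MYSQL_URL=mysql://user:password@mydb:3306/mydb
  mydb:
    image: mysql:5.7
```

Within a stack, services can reach each other by service name (here "mydb"), which is what makes this reuse-by-configuration possible.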

Docker works like a charm with the concept of microservices. With Docker you can set up microservices more easily. I have e.g. one container
that handles a generic job system for distributed workflows. I need it in a new stack? No problem, point to it and it is basically available.
The new system creates too much load? Start another instance of that container for load balancing.

Online hosting is available and affordable. It is not as cheap as a $3.95/month web hosting plan, but it doesn't need to cost $1000/month either.
The exact cost will of course depend on the size of your project.

What are the negatives?
Docker does virtualisation, and virtualisation itself takes up extra resources.

Docker images are built from layers. If you have too many layers, things slow down.

Sometimes it takes more time to find the correct base image and to install software in the correct way.

Conclusion
Like everything, Docker has its pros and cons. Every user has to weigh them and draw his/her own conclusion.
Personally I use a microservice approach, and Docker is a great help for that once you have figured a few things out.
I love that I can package my software into a stateless container, configure it externally with connections to data and systems, and know that it works the same on my laptop as on my server.
If it doesn't work on the server, then I know 99.9% of the time that it is a configuration problem. That alone already saves a headache or two ;).
But I am not at the point of an enterprise that uses thousands of containers yet. That might be a bigger challenge with Docker. But even then, the knowledge from Docker will make a transition easier.

I hope I was able to give you a quick overview of Docker, as a basis for the following posts and for your own decision whether to use it or not.

Yours Sincerely,
Frank

Sources:
(1) Docker Curriculum.com
(2) Docker from the Beginning I