
Docker Volumes


Hey everyone,

Today I want to explain a bit about Docker Volumes: when and where to use them, and what their benefits are.

Remember that in the Docker introduction I said a Docker container is stateless? For the data inside the container that is mostly true.
There are two exceptions:

  1. Docker Volumes
  2. External Connections

A completely stateless container would of course be useless for us. We couldn't store a grocery list or appointments,
and we couldn't store files like pictures from a wedding.

So the idea is to keep the Docker container itself stateless, but to add a volume if you want to store files, and to configure external connections
if you want to e.g. issue an order.

Depending on the type of data (a grocery list entry or wedding pictures) we want to store it differently. The grocery entry belongs in a database,
the picture in a volume.

Both can be configured through the stack configuration. The external connection to e.g. a database is normally a simple environment variable containing the URL,
while the volume is a mounted filesystem.

An example Stack configuration for a single service can look like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port

I will explain the stack configuration file in another post, so here we focus only on the environment key.
Under it is an environment variable MYSQL_URL that can be read inside the container. So your API can read MYSQL_URL and connect to that address.
This way you can use the same Docker container everywhere: locally for testing you point it at your computer, while in the production environment it points to e.g. an Amazon RDS MySQL instance.
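As a minimal sketch (the image name and credentials are placeholders, not from a real project), this could look like the following on the command line:

# Local testing: point the container at a MySQL instance on your own machine
# (host.docker.internal resolves to your machine on Docker Desktop)
docker run -e MYSQL_URL=mysql://dev:devpass@host.docker.internal:3306/mydb myrepo/myservice:latest

# Production: the exact same image, only the environment variable changes
docker run -e MYSQL_URL=mysql://app:secret@mydb.abc123.eu-west-1.rds.amazonaws.com:3306/mydb myrepo/myservice:latest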

The Volume is somewhat different:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - youtube_publish_data:$mounted_path
      
volumes:
  youtube_publish_data:
    external: true

As you can see, we describe a volume under the service's volumes key. We give it a unique name, followed by the path inside the container where it should be mounted.
Additionally we add a root-level volumes key; under it goes the name of the volume, and under that we set external to true, so that the data lives outside of the container that runs the logic, in a separately managed data volume.
This means we have separated the logic from the data files. Depending on the volume driver the data may be limited to the local machine (the default driver we use stores it on the local machine).
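Note that an external volume has to exist before the stack is deployed. Assuming the volume name from the example above, it can be created and inspected like this:

docker volume create youtube_publish_data
docker volume inspect youtube_publish_data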

Still, this gives us e.g. the possibility to have a generic downloading container that is started by a message. This container uses an external data volume for the downloaded data.
A second container processes the downloads. For that we need a config like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - $volume_name:$mounted_path 
        
volumes:
  $volume_name:
    external:
      name: $Stackname_$volume_name

Now we can mount the same volume in two or more containers and access the data there. That is quite cool and saves us resources for transferring the data back and forth.
Depending on your setup you might want to have it this way and just transfer the final production file out to e.g. Amazon S3, to save time and money on bandwidth.
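Here is a minimal sketch of that downloader/processor idea (the service names, image names and volume name are made up for illustration):

version: '3'

services:
  downloader:
    image: myrepo/downloader:latest   # hypothetical image that writes downloads to /data
    volumes:
      - download_data:/data
  processor:
    image: myrepo/processor:latest    # hypothetical image that reads the same files from /data
    volumes:
      - download_data:/data

volumes:
  download_data:
    external: true                    # created beforehand with: docker volume create download_data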

I hope that helped you to get a step further into distributed system architecture.
I am myself a life-long student of it, and every day I learn a new possibility that helps to simplify my projects.
So if you have another idea of how to do things, please contact me in the comments or via mail. I appreciate it.

Yours sincerely,
Frank

Sources:
(1) Docker Tip #12: A Much Better Development Experience With Volumes
(2) Using Docker Compose for NodeJS Development
(3) Understanding Volumes in Docker
(4) Docker and Continuous Integration: Code, Build, Test, Deploy
(5) Dive Into Docker – The Complete Docker Course for Developers

What's a Dockerfile?


Hey everyone,

after the introduction to Docker I thought I would show you the recipe used to create such a container. The recipe is like a recipe for your favorite lasagne and is called a Dockerfile.
It looks similar to this:
FROM node:8.15.0-alpine

# Add application folder
RUN mkdir /app
WORKDIR /app

# Add package.json and install deps
ADD package.json /app/package.json
RUN npm install

COPY . .

# Expose the listening port
EXPOSE 1235

# Start the server
CMD ["pm2-docker", "start", "process.yml"]

As you see it is quite short and nothing to be afraid of. So let's go through it:
FROM node:8.15.0-alpine
As mentioned in the introduction, Docker uses multiple layers to form a container. With this line we tell it which base image, in which version, to use as our starting point.

# Add application folder
Lines starting with a # are comments. Use them to explain anything that might not be obvious later.

RUN mkdir /app
RUN executes a custom command. In this case it creates the directory /app in the root of the container filesystem.

WORKDIR /app
WORKDIR is similar to the command-line command cd. It changes the current working directory for the following commands.
Here we change into the newly created directory /app.

ADD package.json /app/package.json
ADD copies a file into the container at the given path (/app/package.json).

RUN npm install
Again we run a custom command, here npm install, to install all dependencies of the Node.js project.

COPY . .
COPY copies all content of the current folder recursively into the given path of the container (the second . refers to the current WORKDIR, so we copy to /app).
COPY respects a .dockerignore file, with which you can filter out files you don't want to copy over (e.g. log files).
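A small example of what such a .dockerignore could contain (just a suggestion, adjust it to your project):

node_modules
npm-debug.log
*.log
.git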

CMD ["pm2-docker", "start", "process.yml"]
Docker needs a starting point. If you want the container to keep running, the software inside it needs to keep running.
With CMD we define which command should be started for this.
Here we use pm2-docker from the pm2 package to keep (Node.js) projects running (restarting them when necessary, etc.).

And that's it. Basically you can now already package your own project into a Docker container. Just execute "docker build -t $yourname/$your_container_name:$your_version ." and it will be built. An example would be "docker build -t frankthedevop/my-todolist-api:v0.0.1 ." (note that image names have to be lowercase).
You can then run it with "docker run $yourname/$your_container_name:$your_version", or in my case "docker run frankthedevop/my-todolist-api:v0.0.1". More details about these commands in a later post.

Just remember that we are locked inside the container, which means that all paths are paths inside the container too. If you have e.g. a configuration file in your personal user directory, make sure to copy it to the correct place inside the container, otherwise it doesn't exist there.
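As a small, hedged example (the paths are made up), copying such a config file is one more line in the Dockerfile. Keep in mind that Docker can only copy files from the build context, i.e. the folder you pass to docker build, so the file has to be placed there first:

# Copy a config file from the build context into the image
COPY config/settings.json /app/config/settings.json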

This is just the beginning, but I wanted you to see how short and easy such a recipe can be.
You will find further links in the sources, and I will write a more extensive post about it soon, too.

Feel free to contact me about how to create a Dockerfile for your project. I will do my best to help you :).

Yours sincerely,
Frank

Sources:
(1) Docker Dockerfile Documentation
(2) Docker 101: Fundamentals & The Dockerfile
(3) Inspiration how to use Dockerfiles

What is Docker? Or: Why choose Docker?


Hey everyone,

today it's about one of the virtualisation solutions out there: Docker.
I want to help you understand quickly how it works.

What is Docker?
As mentioned, Docker is a software virtualisation solution. There are others out
there which work differently. The important point is that Docker uses container virtualisation.
That means with Docker you don't have to install a whole virtual machine if you want to
e.g. set up a webserver. You package the webserver into a container, and Docker reuses the
kernel of the operating system it runs on. This makes a big difference in size, both in storage and in main memory.

In short it looks like this:

Host OS -> Docker -> Docker Container
                  -> Docker Container
                  -> Docker Container
                  -> ...

When to use it?
I personally use Docker for most of my projects involving custom-developed, online-hosted software.
You have probably heard of the MEAN (MongoDB, Express, Angular, Node.js) stack. Docker is often
used to host those.

What's the benefit?
When you create software whose fundamentals work similarly (e.g. the same Node.js framework),
you write the recipe to package the software (the Dockerfile) once and can copy it over with
little changes.

Docker containers are stateless. You can rest assured that you start the software from the same
configuration every time. You change the configuration of the data storage, but otherwise it works the same everywhere.
No more "but it works on my computer".

Containers are easy to combine into systems (stacks). You take multiple containers with different software and combine them
to achieve your goal. You can reuse the same containers; just point the configuration to e.g. another database and it works.

Docker works like a charm with the concept of microservices and makes them easier to set up. I have e.g. one container
that handles a generic job system for distributed workflows. I need it in a new stack? No problem, point to it and it is basically available.
The new system creates too much load? Start another instance of that container to do load balancing.
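In swarm mode that is a single command. A sketch, assuming the service is called mystack_jobsystem:

docker service scale mystack_jobsystem=3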

Online hosting is available and affordable. It is not as cheap as a $3.95/month web hosting plan, but it doesn't need to cost $1000/month either.
The exact cost will of course depend on the size of your project.

What are the negatives?
Docker does virtualisation, and virtualisation itself adds some resource overhead.

Docker builds containers from layers. If you have too many layers, things slow down.

Sometimes it takes more time to find the correct base image layer and to install software the correct way.

Conclusion
Like everything, Docker has its pros and cons. Every user has to weigh them and draw their own conclusion.
Personally I use a microservice approach, and Docker is a great help for that once you have figured a few things out.
I love that I can package my software into a stateless container, configure it externally with connections to data and systems, and know that it works the same on my laptop as well as on my server.
If it doesn't work on the server, then I know 99.9% of the time that it is a configuration problem. That alone already saves a headache or two ;).
But I am not at the point of an enterprise that runs thousands of containers yet. That might be a bigger challenge with Docker, but the Docker knowledge will make even that transition easier.

I hope I was able to give you a quick overview of Docker as a basis for the following posts and for your own decision on whether to use it or not.

Yours Sincerely,
Frank

Sources:
(1) Docker Curriculum.com
(2) Docker from the Beginning I

Docker Stack for Apache, PHP FPM & MySQL


Hey everyone,

it has been quite some time since my last post, I know. Today I came across a new problem and wanted to share my solution (which stands on the shoulders of giants) with you.

I have a development environment for Apache & PHP on my MacBook, but with all the different installation methods and sources changing over time, I wanted a reliable solution that I can replicate.

So I decided to put everything into Docker containers and combine them as a stack.

The idea was to have a stack where I can manipulate the vhost definitions and the Apache config from the outside, while the projects are mapped in from a local source folder.

The Requirements

– Docker installed and working
– Access to the internet
– A directory where the Apache config (httpd.conf) is located
– A directory where the vhost configs for Apache are hosted
– A directory where the source code is reachable
– A cloned copy of https://github.com/FrankTheDevop/php-apache-mysql-containerized

Okay, let's start:

Start

After you have cloned the git repository you will find this structure:

root folder
  README.md
  apache
    Dockerfile
    conf
      httpd.conf
    vhosts
      demo.apache.conf
  docker-compose.yml
  php
    Dockerfile
  public_html

I use this as my base folder and leave the configs here. You can decide to put them elsewhere, just remember to change your paths later.

Step 1

Open a console and change into the root folder of the git clone. Then change into the php folder.

Edit the Dockerfile and put the PHP version you want to use into PHP_VERSION="$YOUR_VERSION_NAME".

TIP

Depending on the PHP version you use, you might want to change the following line:
RUN docker-php-ext-install mysqli
This statement installs the mysqli extension for PHP. From PHP 5.5 onwards you should use it. But if you have software that still uses the old mysql extension and PHP <= 5.6, change it to:

RUN docker-php-ext-install mysql

Finally execute:

docker build -t $your-image-prefix/$your-image-name:$version .

This could look like this: docker build -t frankthedevop/php:v5.6 .
Don't forget the "." at the end, that tells Docker to use the current folder as the build context.

This takes a bit of time, depending on the speed of your internet connection and your computer.
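For orientation, here is a rough sketch of what such a PHP-FPM Dockerfile can look like (this is not the exact file from the repository, and the version is only an example):

ARG PHP_VERSION="5.6"
FROM php:${PHP_VERSION}-fpm

# Install the mysqli extension so PHP can talk to MySQL
RUN docker-php-ext-install mysqli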

Step 2:

Change into the apache folder and edit the Dockerfile.
Put the version you want into APACHE_VERSION="$yourversion", e.g. APACHE_VERSION="2.4.25".

Execute:
docker build -t $your-image-prefix/$your-image-name:$version .
This could look like this: docker build -t frankthedevop/apache:v2.4.25 .
Don't forget the "." at the end, that tells Docker to use the current folder as the build context.

Again this might take a while.

Step 3

After you have built the images for Apache and PHP you can start your stack. Here you have to be a bit careful: the procedure is slightly different depending on whether you use plain docker-compose or a Docker Swarm installation.

If you do not use a Docker Swarm installation you could go into the root folder and execute:
docker-compose up

It will start the Apache & PHP containers, pull a MySQL image and make everything available.
You can already use it, but it doesn't point to the folder of your sources yet. For that, skip to Step 4.

If you use a Docker Swarm Installation you need to use the docker-compose.stack.yml file as config.
Please edit the file and change the image names to those you chose earlier. For MySQL please replace:
– $YourRootPW with the root password,
– $YourDatabaseName with the database name,
– $YourUser with the username and
– $YourPW with the password.

The MySQL container will use those variables to automatically create a MySQL instance for you. You can then use $YourUser and $YourPW for the connection in your code.

For PHP and Apache please remove the volumes keys and everything below them for now.

Now you can execute:
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName
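To check that everything came up, you can for example list the services and tasks of the stack:

docker stack services $YourStackName
docker stack ps $YourStackName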

Interim Result

Finally you have a running stack of Apache, PHP & MySQL. If you visit localhost:8080 you should reach a page telling you: "It works".

Step 4

Whether you use a Docker Swarm installation or not, you are still missing the ability to edit the PHP files and see the results.

For this we have to set the volumes key accordingly.

For PHP we add:
volumes:
  - $YourPathToTheSourceCode:/var/www/html/

Please replace $YourPathToTheSourceCode with the path where your source code is located. I typically put the parent folder of my projects here, e.g. /Users/username/projects/php/.

For Apache we start the same way:
volumes:
  - $YourPathToTheSourceCode:/var/www/html/
After that we add:
  - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
  - $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

Replace $YourPathToTheVhostsFolder with the folder where you keep your virtual host definitions, and $YourPathTOTheHttpd.confFolder with the path to your httpd.conf (optional; if you want to use the default one, remove this line).

This is how it should look:

version: '3'

services:
  php:
    image: fdsmedia/php:5.6
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
  apache:
    image: fdsmedia/apache:2.4.25
    depends_on:
      - php
      - mysql
    ports:
      - "8080:8080"
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
      - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
      - $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf
  mysql:
    image: mysql:${MYSQL_VERSION:-latest}
    ports:
      - "3306:3306"
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$YourRootPW
      - MYSQL_DATABASE=$YourDatabaseName
      - MYSQL_USER=$YourUser
      - MYSQL_PASSWORD=$YourPW
volumes:
    data:

Now you can restart the services for PHP & Apache, go to localhost:8080 and find your project. If you change the URL or port in the virtual host configuration, please change it here too.
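How you restart depends on your setup. With a swarm stack, redeploying the stack or forcing a service update are two common ways (a sketch, assuming the stack name from above and the service names php and apache):

docker stack deploy --compose-file docker-compose.stack.yml $YourStackName
# or force a restart of the individual services:
docker service update --force ${YourStackName}_php
docker service update --force ${YourStackName}_apache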

Result

You finally have a working Docker-based stack where you can develop your PHP-based applications and keep your database content (as long as you don't remove the data volume).

If you have a new application, it is as easy as creating a new folder under your project folder, adding a vhost configuration, and you are good to go.
If you want to change the PHP version in use, you just need to build another PHP image with the other version, add an entry for it to the stack, and change the URL in the vhost configuration to point to it. That's all. Isn't that amazing?
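For orientation, a vhost definition for such a project could look roughly like this (a sketch only; the server name, paths and the php service name are assumptions, not taken from the repository):

<VirtualHost *:8080>
    ServerName myproject.localhost
    DocumentRoot /var/www/html/myproject
    # Forward PHP requests to the php-fpm service of the stack (port 9000).
    # To switch PHP versions, point this at a differently named php service instead.
    ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/var/www/html/myproject/$1
</VirtualHost>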

If you have any questions, feel free to post them in the comments or send me an email.

Yours sincerely,

Frank

P.S.: If you are looking for a hosting solution, have a look at Digital Ocean (digitalocean.com*). They let you set up Docker hosting easily and quickly.

* Affiliate Link

Sources
(1) https://www.cloudreach.com/blog/containerize-this-php-apache-mysql-within-docker-containers/
(2) https://github.com/mzazon/php-apache-mysql-containerized
(3) https://dev.to/chiefoleka/how-to-setup-nginx-and-php71-with-fpm-on-mac-os-x-without-crying-4m8 (they do it with nginx)
(4) http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/ (nginx too)
(5) https://getgrav.org/blog/macos-mojave-apache-multiple-php-versions (One of the sources that should work but sadly not for me)
(6) https://www.pascallandau.com/blog/php-php-fpm-and-nginx-on-docker-in-windows-10/#setup-php-fpm (Inspiration for the development setup)
(7) https://easyengine.io/tutorials/php/directly-connect-php-fpm (Debug PHP FPM)

SSL Termination Stack Setup: Let's Encrypt, HAProxy, Your Stack


Hi everyone,

for a setup at work I needed a quick and easy way to terminate an SSL connection without hassle. After a short research I found it feasible to use Let's Encrypt for free SSL certificates. But it looked like a lot of work to create the certificate, so I searched for a quicker and hassle-free approach. I found one, but it still took me a few hours to figure out how to use it correctly, and I want to save you that time.
My setup looked like this:
– Domain hosted at GoDaddy.com
– Server hosted at Digital Ocean (digitalocean.com*)
– Docker in Swarm Mode
– Portainer as UI
The expected outcome is:
– 1 Stack to (re-)generate the certificate
– 1-x Worker Stacks
If your domain is hosted somewhere other than GoDaddy, that is no problem as long as you find your provider in this list: https://github.com/Neilpang/acme.sh/blob/master/dnsapi/README.md.
Let’s dive into the work:
1. Create API credentials for GoDaddy / your supported provider. How you do this depends on the provider; refer to this list: https://github.com/Neilpang/acme.sh/blob/master/dnsapi/README.md.
Remember to create production keys; GoDaddy, for example, also lets you create sandbox keys, and those won't work.
2. Deploy this stack config for the generation stack:
version: '3.5'
services:
  acme:
    command: daemon
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        reservations:
          cpus: '0.01'
          memory: 50M
    environment:
      DEPLOY_HAPROXY_PEM_PATH: /haproxy
      DEPLOY_HAPROXY_RELOAD: for task in $$(docker service ps SSL_system_haproxy -f desired-state=running -q); do docker run --rm -v /var/run/docker.sock:/var/run/docker.sock datagridsys/skopos-plugin-swarm-exec task-exec $$task /reload.sh; done
      
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    image: interaction/acme.sh
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - acme-data:/acme.sh
      - nginx-data:/www
      - system_haproxy-data:/haproxy
  nginx:
    deploy:
      resources:
        reservations:
          cpus: '0.01'
          memory: 20M
    healthcheck:
      test: curl -f http://localhost || exit 1
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    image: interaction/acme.sh-nginx
    ports:
      - 80:80
    volumes:
      - nginx-data:/www

volumes:
  acme-data:
    driver: local
    name: 'acme-data'
  nginx-data:
    driver: local
    name: 'nginx-data'
  system_haproxy-data:
    external: true


3. Go into your acme Container, either by docker exec -it $containerhash /bin/sh or via your UI.
4. For GoDaddy issue the commands:
4.1 export GD_Key=$yourkey
4.2 export GD_Secret=$yoursecret
4.3 acme.sh --issue -d "$yourFQDN" --dns dns_gd --dnssleep 15
4.4 acme.sh --deploy -d "$yourFQDN" --deploy-hook haproxy
The first two commands, 4.1 and 4.2, set the required environment variables for the acme.sh script. In 4.3, replace $yourFQDN with the (sub-)domain you want the certificate to be issued for, e.g. web.stack.example.com.
With the last command, 4.4, you deploy the certificate and let the script reload your HAProxy.
Let's look at your worker stack. Here is my definition:
version: '3.5'
services:
  system_haproxy:
    image: 'dockercloud/haproxy:1.6.6'
    depends_on:
      - web
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    environment:
      - CERT_FOLDER=/haproxy
      - DOCKER_HOST=127.0.0.1
      - 'EXTRA_GLOBAL_SETTINGS="debug"'
      - 'STATS_AUTH=admin:$password'
      - 'STATS_PORT=1936'
      - DOCKER_TLS_VERIFY
      - DOCKER_HOST
      - DOCKER_CERT_PATH
    volumes:
      - system_haproxy-data:/haproxy
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '443:443'
      - '1936:1936'
      
  web:
    image: 'dockercloud/hello-world:v1.0.0-frank'
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    environment:
      - SERVICE_PORTS=$yourport
      - FORCE_SSL=yes
      - SSL_CERT=/haproxy/$certificatename.pem
      - 'VIRTUAL_HOST=https://$yourFQDN'

volumes:
  system_haproxy-data:
    external: true


So what do we have here? A HAProxy container with my default configuration plus the SSL ports, and the environment variable CERT_FOLDER pointing to the folder where the certificate(s) are located. That is needed at startup so that HAProxy recognises that you want SSL termination; it supports several ways of configuring this (for more details see https://github.com/docker/dockercloud-haproxy/tree/master#ssl-termination).
The second entry is a test container you can find on Docker Hub; I just changed it to another port to reflect my own requirements. Normally the image is 'dockercloud/hello-world'.
The important things here are the environment variables. VIRTUAL_HOST is probably already known to you: you can set the scheme to https instead of http and HAProxy recognises it. You also need SERVICE_PORTS set to the ports you want to use on this container.
What is probably new to you are FORCE_SSL and SSL_CERT. FORCE_SSL enforces that every access to this container is done securely via HTTPS. SSL_CERT points to the location of the certificate we generated earlier; it lives on the external volume that is shared with the acme container.
After you have deployed both stacks and issued the four commands in the acme container, you are ready to go. When you open https://$yourFQDN in your browser you should see the hello-world demo page, served over HTTPS.
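You can also verify the certificate from the command line; a quick check, assuming curl is available and using the example domain from above:

curl -vI https://web.stack.example.com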
Congratulations! You now have an SSL-terminated stack that is easy to develop against and has no SSL dependencies inside your worker stack(s). I hope I could save you quite some time so you can enjoy the benefits!
You can find the two stack definitions here: https://github.com/FrankTheDevop/ssl-termination-stack.
Feel free to use them :).
Kind Regards,
Frank
* Affiliate Link