FrankTheDevop


What is Docker? Or: Why choose Docker?

150 150 Frank

Hey everyone,

today it's about one of the virtualisation solutions out there: Docker.
I want to help you quickly understand how it works.

What is Docker?
As mentioned, Docker is a software virtualisation solution. There are others out
there that work differently. The important point is that Docker uses container virtualisation.
That means with Docker you don't have to install a whole virtual machine if you want to,
e.g., set up a webserver. You package the webserver into a container and Docker reuses the
operating system it runs on. This makes a big difference in size, both in storage and in main memory.

In short it looks like this:
Host OS -> Docker -> Docker Container
                  -> Docker Container
                  -> Docker Container
When to use it?
I personally use Docker for most of my custom-developed, online-hosted software projects.
You have probably heard of the MEAN (Mongo, Express, Angular, Node.js) stack. Docker is
often used to host it.

What's the benefit?
When you create software whose fundamentals work similarly (e.g. the same Node.js framework),
you write the recipe to package the software (the Dockerfile) once and can copy it over with
little changes.
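To illustrate what such a recipe looks like, here is a minimal sketch of a Dockerfile for a Node.js service. All names (base image, port, start script) are assumptions for illustration, not from a real project:

```dockerfile
# Hypothetical example: package a Node.js app into a container.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --omit=dev

# Copy the application code
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

For a similar project you copy this file over and usually change only the exposed port and the start script.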

Docker containers are stateless. You can rest assured that you can start a piece of software from the same
configuration every time. You change the configuration of the data storage, but otherwise it works the same everywhere.
No more "but it works on my computer".

Containers are easy to combine into systems (stacks). You take multiple containers with different software and combine them
to achieve your goal. You can reuse the same containers: just point the configuration to, e.g., another database and it works.
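As a sketch (service and image names are hypothetical), such a stack can be declared in a docker-compose.yml, and pointing the same container at another database is just a configuration change:

```yaml
# Hypothetical sketch of a two-container stack.
version: '3'
services:
  web:
    image: myprefix/mywebapp:latest   # reused unchanged across stacks
    environment:
      - DB_HOST=db                    # point the same container at another database here
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
```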

Docker works like a charm with the concept of microservices and makes them easier to set up. I have, for example, one container
that handles a generic job system for distributed workflows. I need it in a new stack? No problem: point to it and it is basically available.
The new system creates too much load? Start another instance of that container for load balancing.

Online hosting is available and affordable. It is not as cheap as a $3.95/month web hosting plan, but it doesn't need to cost $1000/month either.
The exact cost will of course depend on the size of your project.

What are the negatives?
Docker does virtualisation, and virtualisation itself takes up extra resources.

Docker images consist of layers. If you have too many layers, things slow down.

Sometimes it takes extra time to find the correct base image and to install software the correct way.

Like everything, Docker has its pros and cons. Every user has to weigh them and draw his/her own conclusion.
Personally I use a microservice approach, and Docker is a great help for that once you have figured a few things out.
I love that I can package my software into a stateless container, configure it externally with connections to data and systems, and know that it works the same on my laptop as on my server.
If it doesn't work on the server, then I know 99.9% of the time that it is a configuration problem. That alone already saves a headache or two ;).
But I am not at the point of an enterprise that uses thousands of containers yet. That might be a bigger challenge with Docker. But the knowledge gained from Docker will make even that transition easier.

I hope I was able to give you a quick overview of Docker as a basis for the following posts and for your own decision on whether to use it or not.

Yours sincerely,


Auto Remove RabbitMQ Orphan Queues from Loopback MQ Connector


Hi everyone,

Who doesn't know this: you're working on multiple projects, some containers still run in Docker, others are already terminated. But your RabbitMQ gets slower and slower. You check the queue overview and see numerous queues with names like 1234568.node /app/server/server.js.54.response.queue.
These are leftover queues, sometimes from crashed containers, sometimes from a container that didn't close its connection correctly.
These queues stay open until you reset RabbitMQ, remove them, or they expire. Waiting for the expiration can take quite some time depending on your configuration.
I will show you a quick and easy way to get rid of them using the UI and a regex pattern.

If you want to quickly and easily remove them, then you can do this:

Go to the Admin -> Policies site
Expand Add / update a policy
Configure it as follows
Name: Whatever you wish, doesn’t matter
Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response\.queue
Apply to: Select Queues 
Priority: Leave empty
Definition: In the first left field write expires, in the field to the right write 1 and in the drop down select number
Click Add policy

That's it. Your queues are already deleted, or if you have too many it takes a few seconds. But then they are all gone.
Don’t forget to delete the policy again.

I hope this helps you.

Your Frank

Regex Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response\.queue



How to promisify with bluebird


Hey everyone,

this will be a quick tip / reference. Sometimes I get asked about the syntax to promisify only one function with the bluebird Promise Library. I will show you an example to promisify the readdir method of the fs package:

'use strict'
const fs = require('fs')
const Promise = require('bluebird')

const readdirAsync = Promise.promisify(fs.readdir)

That's it already. Before, your code looked somewhat like this:

fs.readdir(myPath, (err, files) => {
  // Handle the files
})

If you needed to do further asynchronous operations, you ended up in callback hell (1).

Promisifying it makes the syntax clearer and more elegant:

readdirAsync(myPath)
  .then(files => {
    // Handle the files
  })

You see, now you are in the promise chain, which makes elegant and readable code easier to achieve.
And you don't have to be extra careful about mixing promises and callbacks.

Yours sincerely,

(1) //

Docker Stack for Apache, PHP FPM & MySQL


Hey everyone,

it has been quite some time since my last post, I know. Today I came across a new problem and wanted to share my solution (which stands on the shoulders of some giants) with you.

I have a development environment for Apache & PHP on my MacBook, but with all the different setups and constant changes to which source works, I wanted a reliable solution that I can replicate.

So I decided to put everything into Docker containers and combine them into a stack.

The idea was to have a stack where I can manipulate the vhost definition and the Apache config from the outside, while the projects are mapped from a local source folder.

The Requirements

– Docker installed and working
– Access to the internet
– A directory where the Apache config (httpd.conf) is located
– A directory where vhost configs for apache are hosted
– A directory where the sourcecode is reachable
– A cloned copy of

Okay, let's start:


After you cloned the git repository you find this structure:
– root folder

I use this as my base folder and leave the configs here. You can decide to put them elsewhere; just remember to change your paths later.

Step 1

Open a console and change into the root folder of the git clone. Then change into the php folder.

Edit the Dockerfile and put the PHP version you want to use into PHP_VERSION="$YOUR_VERSION_NAME".


Depending on the PHP version you use, you might want to change the following line:

RUN docker-php-ext-install mysqli

This statement installs the mysqli extension for PHP. From PHP 5.5 on you should use mysqli. But if you have software that still uses the old mysql extension and runs on PHP <= 5.6, change it to:

RUN docker-php-ext-install mysql

Finally execute:

docker build -t $your-image-prefix/$your-image-name:$version .

This could look like this: docker build -t frankthedevop/php:v5.6 .

Don't forget the "." at the end; that tells Docker to use the current folder. The build takes a bit of time, depending on the speed of your internet connection and your computer.

Step 2

Change into the apache folder and edit the Dockerfile.
Put the version you want into APACHE_VERSION="$yourversion", e.g. APACHE_VERSION="2.4.25".

docker build -t $your-image-prefix/$your-image-name:$version .
This could look like this: docker build -t frankthedevop/apache:v2.4.25 .
Don't forget the "." at the end; that tells Docker to use the current folder.

Again this might take a while.

Step 3

After you have built the images for Apache and PHP you can start your stack. Here you have to be a bit careful: it is slightly different depending on whether you use docker-compose or a Docker Swarm installation.

If you do not use a Docker Swarm installation you could go into the root folder and execute:
docker-compose up

It will start the Apache & PHP containers, retrieve a MySQL container and make everything available.
You can already use it, but it doesn't point to the folder of your sources yet. For that, skip to Step 4.

If you use a Docker Swarm Installation you need to use the docker-compose.stack.yml file as config.
Please edit the file and change the image names to those you chose earlier. For MySQL please replace:
– $YourRootPW for the Root Password,
– $YourDatabaseName for the Database Name,
– $YourUser for the username and
– $YourPW for the password

The MySQL Container will use those variables to automatically create a MySQL Instance for you. You can use $YourUser and $YourPW for the connection in your code.

For PHP and Apache please remove the volume keys and what is below them for now.

Now you can execute:
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName

Interim Result

Finally you have a running stack of Apache, PHP & MySQL. If you visit localhost:8080 you should reach a site telling you: "It works!".

Step 4

Whether or not you use a Docker Swarm installation, you still lack the ability to edit the PHP files and see the results.

For this we have to set the Volume key accordingly.

For PHP we add:
– $YourPathToTheSourceCode:/var/www/html/

Please replace $YourPathToTheSourceCode with the path where your source code is located. I typically put the parent folder of my projects here, e.g. /Users/username/projects/php/.

For apache we start the same way:
– $YourPathToTheSourceCode:/var/www/html/
After that we add:
– $YourPathToTheVhostsFolder:/usr/local/apache2/conf/
– $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

Replace $YourPathToTheVhostsFolder with the folder where you want to put your virtual host definitions, and $YourPathTOTheHttpd.confFolder with the folder where you put your httpd.conf (this one is optional; if you want to use the default one, remove the line).

This is how it should look:

version: '3'

services:
  php:
    image: fdsmedia/php:5.6
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/

  apache:
    image: fdsmedia/apache:2.4.25
    depends_on:
      - php
      - mysql
    ports:
      - "8080:8080"
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
      - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
      - $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

  mysql:
    image: mysql:${MYSQL_VERSION:-latest}
    ports:
      - "3306:3306"
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$YourRootPW
      - MYSQL_DATABASE=$YourDatabaseName
      - MYSQL_USER=$YourUser
      - MYSQL_PASSWORD=$YourPW

volumes:
  data:

Now you can just restart the services for PHP & Apache, go to localhost:8080 and find your project. If you change the URL or port of the virtual host, change it here too.


You finally have a working Docker-based stack where you can develop your PHP-based applications and keep your database content (as long as you don't remove the data volume).

If you have a new application, it is as easy as creating a new folder under your project folder and adding a vhost configuration, and you are good to go.
If you want to change the PHP version used, you just need to create another container with the other version, add an entry to the stack and change the URL in the vhost configuration. That's all. Isn't that amazing?
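To illustrate, such a vhost file could look like this. All names are hypothetical, and it assumes Apache's mod_proxy_fcgi is enabled and the PHP-FPM container is reachable as php on port 9000:

```apacheconf
# Hypothetical vhost for a project in $YourPathToTheSourceCode/myproject
<VirtualHost *:8080>
    ServerName myproject.localhost
    DocumentRoot /var/www/html/myproject

    # Hand .php files to the PHP-FPM container
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://php:9000"
    </FilesMatch>

    <Directory /var/www/html/myproject>
        Require all granted
    </Directory>
</VirtualHost>
```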

If you have any questions, feel free to post them in the comments or to send me an email.

Yours sincerely,


P.S.: If you are looking for a hosting solution, have a look at Digital Ocean (*). They let you set up Docker hosting easily & quickly.

* Affiliate Link


SSL Termination Stack Setup: Let's Encrypt, HAProxy, Your Stack

admin

Hi everyone,

for a setup at work I needed a quick and easy way to terminate an SSL connection without hassle. After a short research I found it feasible to use Let's Encrypt for free SSL certificates. But it looked like a lot of work to create the certificate, so I searched for a quicker, hassle-free approach. I found one, but it still took me a few hours to figure out how to use it correctly. I want to save you that time.
My setup looked like this:
– Domain hosted at
– Server hosted at Digital Ocean (*)
– Docker in Swarm Mode
– Portainer as UI
The expected outcome is:
– 1 Stack to (re-)generate the certificate
– 1-x Worker Stacks
If your domain is hosted somewhere other than GoDaddy, that is no problem as long as you find your provider in this list:
Let’s dive into the work:
1. Create API Credentials for GoDaddy / your supported Provider. How you do it depends on the provider, refer to this list:
Remember to create production keys; GoDaddy, for example, also offers sandbox keys, and those won't work.
2. Deploy this stack config for the generation stack:
version: '3.5'

services:
  acme:
    image: interaction/
    command: daemon
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          cpus: '0.01'
          memory: 50M
    environment:
      DEPLOY_HAPROXY_RELOAD: for task in $$(docker service ps SSL_system_haproxy -f desired-state=running -q); do docker run --rm -v /var/run/docker.sock:/var/run/docker.sock datagridsys/skopos-plugin-swarm-exec task-exec $$task /; done
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - acme-data:/
      - nginx-data:/www
      - system_haproxy-data:/haproxy

  nginx:
    image: interaction/
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    deploy:
      resources:
        limits:
          cpus: '0.01'
          memory: 20M
    healthcheck:
      test: curl -f http://localhost || exit 1
    ports:
      - 80:80
    volumes:
      - nginx-data:/www

volumes:
  acme-data:
    driver: local
    name: 'acme-data'
  nginx-data:
    driver: local
    name: 'nginx-data'
  system_haproxy-data:
    external: true

3. Go into your acme Container, either by docker exec -it $containerhash /bin/sh or via your UI.
4. For GoDaddy issue the commands:
4.1 export GD_Key=$yourkey
4.2 export GD_Secret=$yoursecret
4.3 --issue -d "$yourFQDN" --dns dns_gd --dnssleep 15
4.4 --deploy -d "$yourFQDN" --deploy-hook haproxy
The first two commands, 4.1 and 4.2, set the required environment variables for the script. In 4.3 replace $yourFQDN with the (sub-)domain you want the certificate to be created for.
With the last command, 4.4, you deploy the certificate and let the script restart your HAProxy.
Let's look at your worker stack. Here is my definition:
version: '3.5'

services:
  haproxy:
    image: 'dockercloud/haproxy:1.6.6'
    depends_on:
      - web
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    environment:
      - CERT_FOLDER=/haproxy
      - DOCKER_HOST=
      - 'EXTRA_GLOBAL_SETTINGS="debug"'
      - 'STATS_AUTH=admin:$password'
      - 'STATS_PORT=1936'
    volumes:
      - system_haproxy-data:/haproxy
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '443:443'
      - '1936:1936'

  web:
    image: 'dockercloud/hello-world:v1.0.0-frank'
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    environment:
      - SERVICE_PORTS=$yourport
      - FORCE_SSL=yes
      - SSL_CERT=/haproxy/$certificatename.pem
      - 'VIRTUAL_HOST=https://$yourFQDN'

volumes:
  system_haproxy-data:
    external: true

So what do we have here? A HAProxy container with my default configuration plus the SSL ports and the env var CERT_FOLDER pointing to the folder where the certificate(s) are located. That is needed at startup: HAProxy recognises that you want SSL termination, which requires the certificate to be provided in one of multiple ways (for more details see
The second entry is a test container you can find on Docker Hub; I just changed it to another port to reflect my own requirements. Normally the image is 'dockercloud/hello-world'.
The important things here are the environment variables. VIRTUAL_HOST is probably already known to you. You can set the scheme to https instead of http and HAProxy recognises it. You also need SERVICE_PORTS set to the ports you want to use on this container.
What is probably new to you is FORCE_SSL and SSL_CERT. FORCE_SSL enforces that every access to this container is done securely via HTTPS. And SSL_CERT points to the location of the certificate we generated earlier; it is the mount point of the external volume that is shared with the acme container.
After you have deployed both stacks and issued the four commands in the acme container, you are ready to go. When you open $yourFQDN in your browser you should see something similar to this picture:
Congratulations! You now have an SSL-terminated stack that you can develop against easily, with no SSL dependencies inside your worker stack(s). I hope I could save you quite some time so you can enjoy the benefits!
You can find the two stack definitions here:
Feel free to use them :).
Kind Regards,
* Affiliate Link

Node.js Tooling I – Processmanager PM2


The purpose of this post is to help you get started with tools for Node.js in general and Loopback specifically, to ease your life as a developer and operator.


You need a Node.js-based API to follow along with this article (PM2 supports other languages too).

PM2 Processmanager

The PM2 process manager is a mighty one with many options and various integrations, and it even supports online monitoring.

In this article I present the basic usage to get you started fast. In a later article I will help you migrate to Docker and add online monitoring of your processes.


The installation is as easy as npm install pm2 -g. It is important that you install it globally.

How to use it

You can manually start an API by issuing pm2 start app.js, but then you have to specify all parameters on every start. That may be fast for a quick test, but I recommend writing a small configuration file for it, named process.yml.

Configuration Syntax

PM2 offers multiple syntax variants for the configuration file: currently JavaScript, JSON and YAML. I prefer YAML, so I will present it in this syntax. For the others please have a look at the Process Configuration File Syntax (1).


Most often my configuration looks like this:

apps:
  - script: path_to_startup_script
    name: api_name
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1

apps is the root element.

Each script entry refers to an app or api that should be started. Here you define the path to the file that starts your api.

You can either define one entry per app/API, or define a whole stack that you want to start using multiple entries.

Personally I use one configuration file per api in development and one configuration for a stack in the staging environment before I deploy to docker.

The name defines the name you will see in the process manager when you list the running processes.

With exec_mode and the following instances, things get interesting. If you define the mode fork, you can only start one instance of this app/API. But if you define cluster, you are able to scale the app/API with a single command and PM2 will load-balance it for you!

instances defines how many concurrent instances of this app/API you want to launch at startup. I normally set this to 1 and adjust on the fly according to the needs.
This way I already get a first idea of the need to scale.

With env you can specify environment variables you want to set.
DEBUG_FD: 1 tells Node.js to change the output stream to process.stdout.
DEBUG_COLORS: 1 adds colors to the pm2 log output. This is handy because you see at first glance whether a log message is an error or not.
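With both variables set, the env block would look like this:

```yaml
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1
```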

These are not all possible attributes for the configuration file. If you want tighter control, have a look at the Configuration File Attributes (2).

After this explanation you will find my configurations for an Express-based API and a Loopback-based API.

Express Example

apps:
  - script: bin/www
    name: api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      NODE_ENV: staging

Loopback Example

apps:
  - script: server/server.js
    name: loopback_api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1


After we have defined the configuration, we need to interact with the process manager to start and stop our APIs and check the log output.

Listing currently running Processes

To view all currently running processes, issue pm2 list at a console.

Start an API with a process.x

Starting your API with a configuration file is as easy as the command pm2 start process.x (where x is config.js, json or yml).
After this command PM2 starts your API with the specified configuration and outputs its list of currently running processes.

Stopping an API

You can stop your API with pm2 stop process.x.
Important to know: PM2 just stops your API; it won't remove it from the prepared-to-run process list. If you want to remove it cleanly and make sure you have a clean slate on the next start, you have to delete it.

Destroying a prepared-to-run API entry

To remove an API entry from the prepared-to-run list, issue pm2 delete process.x.

Check the Logs

To check all logs without filtering to one API, issue pm2 logs.

If you want to filter the logs to one specific API, you can add its name to the command like this: pm2 logs name_of_your_api.

If you have any questions, post them in the comments, or feel free to send me an email.

Yours sincerely,