
Docker Volumes


Hey everyone,

Today I want to explain a bit about Docker volumes: when and where to use them, and what their benefits are.

Remember that in the Docker Introduction I said a Docker container is stateless? For the contained data that is mostly true.
There are two exceptions:

  1. Docker Volumes
  2. External Connections

A completely stateless container would of course be useless for us. We couldn't store e.g. a grocery list or appointments.
And we couldn't store files like pictures from a wedding.

So the idea is to keep the Docker container itself stateless, but to add a volume if you want to store files, and to configure external connections
to e.g. issue an order.

Depending on the type of data (grocery list entry or wedding pictures) we want to store it differently: the grocery entry belongs in a database,
the picture in a volume.

Both can be configured through the stack configuration. The external connection to e.g. a database is normally a simple environment variable with the URL,
while the volume is a mounted filesystem.

An example Stack configuration for a single service can look like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port

I will explain the stack configuration file in another post, so here we focus only on the environment key.
Under it is an environment variable MYSQL_URL that can be read inside the container, so your API can read MYSQL_URL and connect to that address.
This way you can use the same Docker container everywhere: locally you configure it for testing against your computer, while in the production environment it points to e.g. an Amazon RDS MySQL instance.
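Inside the container this is plain environment variable access. A minimal Node.js sketch (the connect call is hypothetical; use the MySQL client of your choice):

const mysqlUrl = process.env.MYSQL_URL

// Fail fast if the configuration is missing
if (!mysqlUrl) {
  throw new Error('MYSQL_URL is not set')
}

// Hand the url to your MySQL client, e.g. (hypothetical):
// connectToDatabase(mysqlUrl)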

The Volume is somewhat different:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - $volume_name:$mounted_path

volumes:
  $volume_name:
    external: true

As you can see we describe a volume under the volumes key of the service: a unique name, followed by the path inside the container where it should be mounted.
Additionally we add a root-level volumes key; under it goes the name of the volume, and under that we set external to true, so that the data lives outside the Docker container
with the logic, inside a data volume.
This means we have separated the logic from the data files. Depending on the driver the volume is limited to the local machine (the default driver we use is local-only).
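Note that with external set to true, Docker expects the volume to already exist before you deploy the stack. Assuming the default local driver, you create it once on the host:

docker volume create youtube_publish_data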

Still, this gives us e.g. the possibility to have a generic downloading container that is started by a message. This container uses an external data volume for the downloaded data,
and a second container processes the downloads (a two-service sketch follows below). For that we need a config like this:

version: '3'

services:
  myservice:
    image: image
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 32M
    environment:
      - MYSQL_URL=mysql://$username:$password@$hostname:$port/$path
    ports:
      - $port
    volumes:
      - $volume_name:$mounted_path 
        
volumes:
  $volume_name:
    external:
      name: $Stackname_$volume_name
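
For the second, processing container the services section simply gains another entry that mounts the same volume. A sketch with hypothetical image names (memory limits omitted for brevity):

version: '3'

services:
  downloader:
    image: $downloader_image
    volumes:
      - $volume_name:$mounted_path
  processor:
    image: $processor_image
    volumes:
      - $volume_name:$mounted_path

volumes:
  $volume_name:
    external:
      name: $Stackname_$volume_name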

Now we can mount the same volume in two or more containers and access the data there. That is quite cool and saves us resources for transferring the data back and forth.
Depending on your setup you might want to keep it this way and only transfer the final production file out to e.g. Amazon S3, to save time and money on bandwidth.

I hope that helped you get a step further into distributed system architecture.
I am a life-long student of it myself, and every day I learn a new possibility that helps to simplify my projects.
So if you have another idea how to do things, please contact me in the comments or via mail. I appreciate it.

Yours sincerely,
Frank

Sources:
(1) Docker Tip #12: A Much Better Development Experience With Volumes
(2) Using Docker Compose for NodeJS Development
(3) Understanding Volumes in Docker
(4) Docker and Continuous Integration: Code, Build, Test, Deploy
(5) Dive Into Docker – The Complete Docker Course for Developers

What's a Dockerfile?


Hey everyone,

after the introduction to Docker I thought I'd show you the recipe used to create such a container. It is like a recipe for your favorite lasagne and is called a Dockerfile.
It looks similar to this:
FROM node:8.15.0-alpine

# Add application folder
RUN mkdir /app
WORKDIR /app

# Add package.json and install deps
ADD package.json /app/package.json
RUN npm install

COPY . .

# Expose the listening port
EXPOSE 1235

# Start the server
CMD ["pm2-docker", "start", "process.yml"]

As you see it is quite short and nothing to be afraid of. So let's go through it:
FROM node:8.15.0-alpine
As mentioned in the introduction, Docker uses multiple layers to form a container. With FROM we tell it which image, in which version, to use as our starting point.

# Add application folder
Lines starting with a # are comments. Use them to explain anything that isn't obvious.

RUN mkdir /app
RUN executes a custom command. In this case it creates the directory app in the root of the container filesystem.

WORKDIR /app
WORKDIR is similar to the commandline command cd. It changes the current working directory for the following commands.
Here we change into the newly created directory /app.

ADD package.json /app/package.json
ADD copies a file into the container at the mentioned path (/app/package.json).

RUN npm install
Again we run a custom command, here npm install, to install all dependencies of the Node.js project.

COPY . .
COPY copies everything in the current folder recursively into the given path inside the container (the second . is relative to the current WORKDIR, so we copy to /app).
COPY respects a .dockerignore file, with which you can filter out files you don't want to copy over (e.g. log files).
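A minimal .dockerignore for a Node.js project might look like this (adjust it to your project):

node_modules
npm-debug.log
*.log
.git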

EXPOSE 1235
EXPOSE documents which port the application inside the container listens on (here 1235).

CMD ["pm2-docker", "start", "process.yml"]
Docker needs a starting point. If you want the container to keep running, the software inside it needs to keep running.
With CMD we define which software to start for this.
Here we use pm2-docker from the pm2 package to keep (Node.js) projects running (restart when necessary, etc.).

And that's it. Basically you can now already package your own project into a Docker container. Just execute "docker build -t $yourname/$your_container_name:$your_version ." and it will be built. An example could be "docker build -t frankthedevop/my-todolist-api:v0.0.1 ." (note that repository names have to be lowercase).
You can then run it with "docker run $yourname/$your_container_name:$your_version", or in my case "docker run frankthedevop/my-todolist-api:v0.0.1". More details about the commands in a later post.

Just remember that we are locked inside the container, which means that all paths are inside the container too. If you have e.g. a configuration file in your personal user directory, make sure to copy it to the correct place inside the container, otherwise it doesn't exist there.
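For example, if your application reads a config file, add a line like this to the Dockerfile (filenames are hypothetical):

COPY config.json /app/config.json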

This is just the beginning, but I wanted you to see how short and easy such a recipe can be.
You find further links in the sources, and I will write a more extensive post about it soon.

Feel free to contact me about how to create a Dockerfile for your project. I will do my best to help you :).

Yours sincerely,
Frank

Sources:
(1) Docker Dockerfile Documentation
(2) Docker 101: Fundamentals & The Dockerfile
(3) Inspiration how to use Dockerfiles

What is Docker? Or: Why choose Docker?


Hey everyone,

today it's about one of the virtualisation solutions out there: Docker.
I want to help you understand quickly how it works.

What is Docker?
As mentioned, Docker is a software virtualisation solution. There are others out
there which work differently; the important point is that Docker uses container virtualisation.
That means with Docker you don't have to install a whole virtual machine if you want to
e.g. set up a webserver. You package the webserver into a container and Docker reuses the
operating system it runs on. This makes a big difference in size, both in storage and in main memory.

In short it looks like this:
Host OS -> Docker -> Docker Container
                  -> Docker Container
                  -> Docker Container
                  -> ...

When to use it?
I personally use Docker for most of my projects involving custom-developed, online-hosted software.
You have probably heard of the MEAN (MongoDB, Express, Angular, Node.js) stack. Docker is
often used to host those.

What's the benefit?
When you create software whose fundamentals work similarly (e.g. the same Node.js framework),
you write the recipe to package the software (the Dockerfile) once and can copy it over with
little changes.

Docker containers are stateless. You can rest assured that you start a software from the same
configuration every time. You change the configuration of the data storage, but otherwise it works the same everywhere.
No more "but it works on my computer".

Containers are easy to combine into systems (stacks). You take multiple containers with different software and combine them
to achieve your goal. You can reuse the same containers; just point the configuration to e.g. another database and it works.
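As a sketch of that idea (image names and the URL are placeholders), a stack that combines an API container with a database container could look like this:

version: '3'

services:
  api:
    image: $yourname/$your_api_image
    environment:
      - MONGO_URL=mongodb://mongo:27017/mydb
  mongo:
    image: mongo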

Docker works like a charm with the concept of microservices. With Docker you can set up microservices more easily. I have e.g. one container
that handles a generic job system for distributed workflows. I need it in a new stack? No problem, point to it and it is basically available.
The new system creates too much load? Start another instance of that container for load balancing.

Online hosting is available and affordable. It is not as cheap as a $3.95/month web hosting, but it doesn't need to cost $1000/month either.
The exact cost will of course depend on the size of your project.

What are the negatives?
Docker does virtualisation, and virtualisation itself takes up additional resources.

Docker images consist of layers. If you have too many layers, things slow down.

Sometimes it takes more time to find the correct base image and install software in the correct way.

Conclusion
Like everything, Docker has its pros and cons. Every user has to weigh them and draw his or her own conclusion.
Personally I use a microservice approach, and Docker is a great help for that once you have figured a few things out.
I love that I can package my software into a stateless container, configure it externally with connections to data and systems, and know that it works the same on my laptop as on my server.
If it doesn't work on the server, I know 99.9% of the time that it is a configuration problem. That alone already saves a headache or two ;).
But I am not at the point of an enterprise that runs thousands of containers yet. That might be a bigger challenge with Docker. But even then, the Docker knowledge will make a transition easier.

I hope I was able to give you a quick overview of Docker, as a base for the following posts and for your own decision whether to use it or not.

Yours Sincerely,
Frank

Sources:
(1) Docker Curriculum.com
(2) Docker from the Beginning I

Script in Node.js to iterate a directory and extract information from its files


Hi everyone,

after we did the template last time, I want to show you how to put the individual pieces together.
Based on a task at hand I chose the example of iterating over and working with the files in a directory.
The exact task was:
– Iterate a directory
– find all JSON files in it
– read them
– extract all objects in them
– extract the property email from them
– extract the unique domains of the email addresses
– count how often each domain occurs
– write this information to a summary file for further processing / display

'use strict'

const Promise = require('bluebird')
const fs = require('fs');
const path = require('path');
const util = require('util');

// Promisify only readdir and writeFile as we don't need more
const readdirAsync = Promise.promisify(fs.readdir);
const writeFileAsync = Promise.promisify(fs.writeFile);

// Commandline handling
const optionDefinitions = [
  { name: 'folder', alias: 'f', type: String }
]
const commandLineArgs = require('command-line-args')
const options = commandLineArgs(optionDefinitions)

// Add the path to your files
const folder = options.folder
// e.g. '/Users/$Yourusername/Downloads/customerdata';

// This will hold all entries from all files
//  Not unique
const all = []

// Read all files in our directory
return readdirAsync(folder)
  .then(files => {
    files
      .map(entry => {
        if(entry.indexOf('.') > 0 && path.extname(entry) === '.json') {
          // In case there are .json files in the folder that are not in JSON format
          try {
            const temp = require(path.join(folder, entry))

            // I know for sure that all entries have a filled email property so I can just split here
            // and extract the domain name without checking
            temp.map(record => all.push(record.email.split('@')[1]))
          } catch (e) {}
          return null
        }
      })

    console.log(all)
  })
  .then(() => {
    // Create a unique array
    // Use the new Set feature of ES 6
   return Promise.resolve([...new Set(all)])
  })
  .then(allUnique => {
    // Get a list of unique entries with the number of times it appears
    let newList = []

    allUnique.map(entry => {
      const t = all.filter(innerEntry => innerEntry === entry)
      newList.push({name: entry, count: t.length})
    })

    return Promise.resolve(newList)
  })
  .then(allUnique => {
    allUnique.sort((a,b) => b.count - a.count)
    return Promise.resolve(allUnique)
   })
  .then(allUnique => {
    let content = allUnique.reduce((a, b) => a + `${b.name};${b.count}\n`, '')
    return writeFileAsync(path.join(folder, 'all.txt'), content)
  })
  .then(data => {
    console.log('Wrote file successfully')
  })
  .catch(err => {
    console.log('An error occurred:', err)
  })

You find the repository for it here.

If you are looking for the explanation, continue reading. Otherwise be happy with the template and change it to your heart's desire ;).

This one is a bit longer, but stay with me, we will go through it together.
At first we have the standard block where we import all required libraries in Line 3-6.

Then we convert the async, callback-based functions for readdir and writeFile into promises (we promisify them) for easier
and more elegant handling in Line 9-10.

Next comes the handling of command line (CLI) parameters, as we did before, in Line 12-21.
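Assuming the script is saved as index.js, you would invoke it like this:

node index.js --folder /Users/$Yourusername/Downloads/customerdata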

We define an array all, which will receive all email domains from the read files (not unique), in Line 25.

Now we have everything together to start:
We read the directory content in Line 28.
In Line 29 it returns an array of all found files.
With Line 30-31 we start iterating over all elements of the array, and with Line 32 we make sure that only files ending in .json are accepted; all others are ignored.
Line 34-41 is a bit of cheating: Node.js is able to require a JSON file. So instead of reading the file, parsing it and handling it all myself, I use the functionality of require.
In case there is a JSON file that cannot be parsed, I wrap the require in a try/catch block, so that processing continues on an error.
Line 39 does a few things at once:
– With .map I iterate over all entries in the file
– I know each object contains an email property, therefore I act on it without checking
– An email address has the form username@domainname.domainextension. I need the domain name and extension, so I split the email property at the @ and take the second half, which is the domain part
– Each of these domain parts is pushed into the array all for further processing

After all the processing I make a debug output in Line 45.

JavaScript ES6 introduced some nice new features: one is the Set (an "array" of unique values), another is the spread operator. In Line 50 I return a new array that is created by spreading the Set,
so in short: in one line I get a unique array of domains.
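A quick illustration of that one-liner:

const all = ['gmail.com', 'gmx.de', 'gmail.com']
console.log([...new Set(all)]) // [ 'gmail.com', 'gmx.de' ]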

In the next function we create a new array of objects with the domain name and the number of occurrences. For that we iterate over each entry of the unique array in Line 56 and
use the filter method of the non-unique array with all entries in Line 57. The filter method returns an array, so I can create the JSON object with the number of occurrences easily by using array.length in Line 58.

After I have the array with the number of occurrences I want to see it sorted. The sort function allows us to provide a comparator function that defines how to sort, and thanks to (5) & (6) I found a short way to do it, as you see
in Line 64.

In the last function I use array.reduce to create a string from the JSON objects. You can see this post-processing step in Line 68.

All that is left is to write the data to a file as you see in Line 69.

This is followed by a simple message to signal that the script finished successfully (Line 72), or the output of the error if one occurred (Line 75).

I hope I could help you save time again in your race against the clock and you found the explanations useful.

Yours sincerely,
Frank

Sources:
(1) How to escape Callback Hell
(2) Explanation of Node.js CLI Argument handling
(3) Explanation of Node.js CLI Argument handling II
(4) My own short example of an template for Node.js CLI Argument handling
(5) Sorting an array
(6) Sorting an array of objects by their property
(7) How to write to a file in Node.js
(8) How to avoid making mistakes with Promises
(9) Repository for the script
(10) Escape Callback Hell with Promises
(11) My article about how to convert (promisify) an async function with callback to a Promise based one

Auto Remove RabbitMQ Orphan Queues from Loopback MQ Connector


Hi everyone,

who doesn’t know this: you’re working on multiple projects, some containers still run in Docker, others are already terminated. But your RabbitMQ gets slower and slower. You check the queue overview and you see numerous queues with names like 1234568.node /app/server/server.js.54.response.queue.
These are leftover queues, sometimes from crashed containers, sometimes from a container that didn’t end its connection correctly.
Basically these queues stay open until you reset RabbitMQ, remove them, or they expire. Waiting for the expiration can take quite some time, depending on your configuration.
I will show you a quick and easy way to get rid of them using the UI and a regex pattern.

If you want to quickly and easily remove them, then you can do this:

Go to the Admin -> Policies site
Expand Add / update a policy
Configure it as follows
Name: Whatever you wish, doesn’t matter
Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue 
Apply to: Select Queues 
Priority: Leave empty
Definition: In the first left field write expires, in the field to the right write 1 and in the drop down select number
Click Add policy
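
If you prefer the command line over the UI, the same temporary policy can be set with rabbitmqctl (a sketch, assuming the default vhost):

rabbitmqctl set_policy --apply-to queues orphan-cleanup "[a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue" '{"expires":1}'
rabbitmqctl clear_policy orphan-cleanup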

That’s it. Your queues are already deleted, or if you have too many it takes a few seconds. But then they are all gone.
Don’t forget to delete the policy again afterwards.

I hope this helps you.

Regards,
Your Frank

Regex Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue

Sources:
(1) https://www.cloudamqp.com/blog/2016-06-21-how-to-delete-queues-in-rabbitmq.html

Useful tools:
(1) https://regex101.com/

How to promisify with bluebird


Hey everyone,

this will be a quick tip / reference. Sometimes I get asked about the syntax for promisifying only one function with the bluebird promise library. I will show you an example that promisifies the readdir method of the fs package:

'use strict'
const fs = require('fs')
const Promise = require('bluebird')

const readdirAsync = Promise.promisify(fs.readdir)

That's it already. Before, your code looked somewhat like this:

fs.readdir(myPath, (err, files) => {
  // Handle the files
})

If you needed to do further asynchronous operations you ended up in callback hell (1).

Promisifying it makes the syntax clearer and more elegant:

readdirAsync(myPath)
.then(files => {
  // Handle the files
})

You see, now you are in the promise chain, which makes elegant and readable code easier to achieve.
And you don't have to be extra careful about using promises and callbacks at the same time.
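To illustrate, here is a small sketch that also promisifies fs.stat and lists all file sizes in one flat chain instead of nested callbacks (myPath is a placeholder):

'use strict'
const fs = require('fs')
const path = require('path')
const Promise = require('bluebird')

const readdirAsync = Promise.promisify(fs.readdir)
const statAsync = Promise.promisify(fs.stat)

const myPath = '.'

// Read the directory, stat every entry, print the sizes
readdirAsync(myPath)
  .then(files => Promise.all(files.map(f => statAsync(path.join(myPath, f)))))
  .then(stats => stats.forEach(s => console.log(s.size)))
  .catch(err => console.log('An error occurred:', err))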

Yours sincerely,
Frank

(1) https://blog.syntonic.io/2017/07/07/escaping-callback-hell-util-promisify/

Angular Tips I: Fix Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.


Hi everyone,

this will be a quick one. I guess everyone has had this error and was so used to it just working that they forgot how to solve it. Because I just had that experience, I am writing this post.

What you tried to achieve

Add a routerLink entry to an a html element in the Angular template.

Error

Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.

Solution

Remember to import the RouterModule in every module where you want to use routerLink.
To do that, add import { RouterModule } from '@angular/router'; at the top of the corresponding module. And don't forget to add it to the imports array:

@NgModule({
  imports: [ CommonModule, RouterModule ],
  ...
})
export class ...

If you put that into a shared module, don't forget the exports array too:

exports: [
  ...,
  ...,
  RouterModule
]
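
Put together, a minimal shared module could look like this (SharedModule is a placeholder name):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { RouterModule } from '@angular/router';

@NgModule({
  imports: [ CommonModule, RouterModule ],
  exports: [ RouterModule ]
})
export class SharedModule {}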

I hope this helps you avoid a headache when you see the message “Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.” again.

Yours sincerely,
Frank

Sources:
(1) https://blog.ng-book.com/basic-routing-in-angular-2/
(2) https://coryrylan.com/blog/introduction-to-angular-routing
(3) https://toddmotto.com/angular-component-router
(4) https://malcoded.com/posts/angular-fundamentals-routing

Docker Stack for Apache, PHP FPM & MySQL


Hey everyone,

it has been quite some time since my last post, I know. Today I came across a new problem and wanted to share my solution (which stands on the shoulders of giants) with you.

I had a development environment for Apache & PHP on my MacBook, but with all the different installation methods and sources that keep changing whether they work, I wanted a reliable solution that I can replicate.

So I decided to put everything into Docker containers and combine them as a stack.

The idea was to have a stack where I can manipulate the vhost definitions and the Apache config from the outside, while the projects are mapped from a local source folder.

The Requirements

– Docker installed and working
– Access to the internet
– A directory where the Apache config (httpd.conf) is located
– A directory where vhost configs for apache are hosted
– A directory where the source code is reachable
– A cloned copy of https://github.com/FrankTheDevop/php-apache-mysql-containerized

Okay, let's start:

Start

After you cloned the git repository you find this structure:

root folder
  README.md
  apache
    Dockerfile
    conf
      httpd.conf
    vhosts
      demo.apache.conf
  docker-compose.yml
  php
    Dockerfile
  public_html

I use this as my base folder and leave the configs here. You can decide to put them elsewhere, just remember to change your paths later.

Step 1

Open a console and change into the root folder of the git clone. Then change into the php folder.

Edit the Dockerfile and put the PHP version you want to use into PHP_VERSION="$YOUR_VERSION_NAME".

TIP

Depending on the PHP version you use, you might want to change the following line:

RUN docker-php-ext-install mysqli

This statement installs the mysqli extension for PHP. From PHP 5.5 on you should use it. But if you have software that still uses the old mysql extension and PHP <= 5.6, change it to:

RUN docker-php-ext-install mysql

Finally execute:

docker build -t $your-image-prefix/$your-image-name:$version .

This could look like this: docker build -t frankthedevop/php:v5.6 .
Don't forget the "." at the end, that tells Docker to use the current folder.
The build takes a bit of time, depending on the speed of your internet connection and your computer.

Step 2:

Change into the apache folder. Edit the Dockerfile.
Put the version you want into APACHE_VERSION="$yourversion", e.g. APACHE_VERSION="2.4.25".

Execute:
docker build -t $your-image-prefix/$your-image-name:$version .
This could look like this: docker build -t frankthedevop/apache:v2.4.25 .
Don´t forget the “.” at the end, that tells Docker to use the current folder.

Again this might take a while.

Step 3

After you have built the containers for Apache and PHP you can start your stack. Here you have to be a bit careful; it is slightly different depending on whether you use docker-compose or a Docker Swarm installation.

If you do not use a Docker Swarm installation you can go into the root folder and execute:
docker-compose up

It will start the Apache & PHP containers, retrieve a MySQL container and make everything available.
You can already use it, but it doesn't point to the folder of your sources yet. For that skip to Step 4.

If you use a Docker Swarm installation you need to use the docker-compose.stack.yml file as config.
Please edit the file and change the image names to those you chose earlier. For MySQL please replace:
– $YourRootPW with the root password,
– $YourDatabaseName with the database name,
– $YourUser with the username and
– $YourPW with the password

The MySQL container will use those variables to automatically create a MySQL instance for you. You can then use $YourUser and $YourPW for the connection in your code.

For PHP and Apache please remove the volume keys and what is below them for now.

Now you can execute:
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName

Interim Result

Finally you have a running stack of Apache, PHP & MySQL. If you visit localhost:8080 you should reach a site telling you: "It works".

Step 4

Whether you use the Docker Swarm installation or not, you are still missing the ability to edit the PHP files and see the results.

For this we have to set the volumes key accordingly.

For PHP we add:

volumes:
  - $YourPathToTheSourceCode:/var/www/html/

Please replace $YourPathToTheSourceCode with the path where your source code is located. I typically put the parent folder of my projects here, e.g. /Users/username/projects/php/.

For Apache we start the same way:

volumes:
  - $YourPathToTheSourceCode:/var/www/html/

After that we add:

  - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
  - $YourPathToTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

Replace $YourPathToTheVhostsFolder with the folder where you want to put your virtual host definitions, and $YourPathToTheHttpd.confFolder with the folder where you put your httpd.conf (optional; if you want to use the default one, remove this line).

This is how it should look:

version: '3'

services:
  php:
    image: frankthedevop/php:v5.6
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
  apache:
    image: frankthedevop/apache:v2.4.25
    depends_on:
      - php
      - mysql
    ports:
      - "8080:8080"
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
      - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
      - $YourPathToTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf
  mysql:
    image: mysql:${MYSQL_VERSION:-latest}
    ports:
      - "3306:3306"
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$YourRootPW
      - MYSQL_DATABASE=$YourDatabaseName
      - MYSQL_USER=$YourUser
      - MYSQL_PASSWORD=$YourPW
volumes:
    data:

Now you can restart the services for PHP & Apache (see the commands below), go to localhost:8080 and find your project. If you change the URL or port for the virtual host, change that here too.
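How you restart depends on your setup; a sketch for both variants (stack and file names as used above):

# docker-compose setup: recreate the services with the new volume mappings
docker-compose up -d

# Docker Swarm setup: redeploy the stack with the updated file
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName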

Result

You finally have a working Docker-based stack where you can develop your PHP-based applications and maintain your database content (as long as you don't remove the data volume).

If you have a new application it is as easy as creating a new folder under your project folder and adding a vhost configuration, and you are good to go.
If you want to change the PHP version in use, you just need to create another container with the other version, add an entry to the stack and change the URL in the vhost configuration. That's all. Isn't that amazing?

If you have any questions feel free to post them in the comments, or send me an email.

Yours sincerely,

Frank

P.S.: If you are looking for a hosting solution, have a look at Digital Ocean (digitalocean.com*). They let you set up Docker hosting easily & quickly.

* Affiliate Link

Sources
(1) https://www.cloudreach.com/blog/containerize-this-php-apache-mysql-within-docker-containers/
(2) https://github.com/mzazon/php-apache-mysql-containerized
(3) https://dev.to/chiefoleka/how-to-setup-nginx-and-php71-with-fpm-on-mac-os-x-without-crying-4m8 (they do it with nginx)
(4) http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/ (nginx too)
(5) https://getgrav.org/blog/macos-mojave-apache-multiple-php-versions (One of the sources that should work but sadly not for me)
(6) https://www.pascallandau.com/blog/php-php-fpm-and-nginx-on-docker-in-windows-10/#setup-php-fpm (Inspiration for the development setup)
(7) https://easyengine.io/tutorials/php/directly-connect-php-fpm (Debug PHP FPM)

Node.js Tooling I – Processmanager PM2


The purpose of this post is to help you get started with tools for Node.js in general and Loopback specifically, to ease your life as a developer and operator.

Prerequisite

You need a Node.js based API to follow this article (PM2 supports other languages too).

PM2 Processmanager

The PM2 process manager is a mighty one: it has many different options, various integrations, and even supports online monitoring.

In this article I present the basic usage to get you started fast. In a later article I will help you migrate to Docker and add online monitoring of your processes.

Installation

The installation is as easy as npm install pm2 -g. It is important that you install it globally.

How to use it

You can manually start an API by issuing pm2 start app.js, but then you have to specify all parameters on every start. That may be fast for a quick test, but I recommend writing a small configuration file for it named process.yml.

Configuration Syntax

PM2 offers multiple syntax variants for the configuration file. Currently JavaScript, JSON and YAML formats are supported. I prefer to write this configuration in YAML, so I will present it in this syntax. For the others please have a look at the Process Configuration File Syntax (1).

process.yml

Most often my configuration looks like this:

apps:
  - script: path_to_startup_script
    name: api_name
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1

 
apps is the root element.

Each script entry refers to an app or API that should be started. Here you define the path to the file that starts your API.

You can either define one configuration file per app/API, or you can define a whole stack you want to start with multiple entries.

Personally I use one configuration file per API in development, and one configuration for a stack in the staging environment before I deploy to Docker.

The name defines the name you will see in the process manager when you list the running processes.

With exec_mode and the following instances things get interesting. If you define the mode fork, you can only start one instance of this app/API. But if you define cluster, then you are able to scale the app/API with one single command, and PM2 will load-balance it for you!

instances defines how many concurrent instances of this app/API you want to launch at startup. I normally set this to 1 and adjust on the fly according to the needs.
This way I already get a first idea of the need to scale.
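For example, once the API runs in cluster mode, scaling to four instances is a single command (api_name is the name from the configuration file):

pm2 scale api_name 4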

With env you can specify environment variables you want to set.
DEBUG_FD: 1 tells Node.js to change the output stream to process.stdout.
DEBUG_COLORS: 1 adds colors to the pm2 log output. This is handy because you see at first glance whether a log message is an error or not.

These are not all possible attributes for the configuration file. If you want tighter control, have a look at the Configuration File Attributes (2).

After this explanation you will find my configurations for an Express-based API and a Loopback-based API.

Express Example

apps:
  - script: bin/www
    name: api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1
      NODE_ENV: staging

Loopback Example

apps:
  - script: server/server.js
    name: loopback_api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1

Commands

After we defined the configuration we need to interact with the Processmanager to start and stop our APIs and check the log output.

Listing currently running Processes

To view all currently running Processes issue pm2 list at a console.

Start an API with a process.x file

Starting your API with a configuration file is as easy as the command pm2 start process.x (where x is config.js, json or yml).
After this command PM2 starts your API with the specified configuration and outputs its list of currently running processes.

Stopping an API

You can stop your API with pm2 stop process.x.
Important to know is that PM2 then just stops your API, but won't remove it from the prepared-to-run process list. If you want to remove it cleanly and make sure you have a clean slate on the next start, you have to destroy it.

Destroying a prepared-to-run API entry

To remove an API entry from the prepared-to-run list you issue pm2 delete process.x.

Check the Logs

To check all logs without filtering to one API you issue pm2 logs.

If you want to filter the logs to one specific API you can add its name to the command like this: pm2 logs name_of_your_api.

If you have any questions post them in the comments. Or feel free to send me an email.

Yours sincerely,

Frank

Sources:
(1) http://pm2.keymetrics.io/docs/usage/application-declaration/
(2) http://pm2.keymetrics.io/docs/usage/application-declaration/#attributes-available

MLAND Series – Tips III – Loopback RabbitMQ Usage


The purpose of this post is to help you get up and running with RabbitMQ integrated into your Loopback API.

Prerequisite

You have a working Loopback API Project.

RabbitMQ

Install the Loopback component

Inside your Loopback project folder run:

npm install loopback-component-mq --save

This will install the component and all its dependencies, as you are used to.

Configure the RabbitMQ component

Register it

Loopback by default only loads components from known folders. External components live in different locations and need to be registered.
You do so by adding "../node_modules/loopback-component-mq/lib/mixins" to the mixins array in model-config.json.

Component configuration

Loopback checks component-config.json for a component's configuration.
Therefore add the following template to component-config.json:

{
"loopback-component-mq": {
    "path": "loopback-component-mq",
    "options": {
      "restPort": 15672,
      "acls": [
        {
          "accessType": "*",
          "principalType": "ROLE",
          "principalId": "$unauthenticated",
          "permission": "DENY"
        }
      ]
    },
    "topology": {
      "connection": {
        "uri": "amqp://$username:$password@$host:$port/$vhost", (1)
        "timeout": 30000
      },
      "exchanges": [
        {
          "name": "my_first_exhange",
          "type": "topic",
          "persistent": true
        }
      ],
      "queues": [
        {
          "name": "my_first_queue",
          "subscribe": true,
          "limit": 1
        },
        {
          "name": "my_second_queue",
          "limit": 1
        }
      ],
      "bindings": [
        {
          "exchange": "my_first_exchange",
          "target": "my_first_queue",
          "keys": [
            "my_first_queue"
          ]
        },
        {
          "exchange": "my_first_exchange",
          "target": "my_second_queue",
          "keys": [
            "my_second_queue"
          ]
        }
      ],
      "logging": {
        "adapters": {
          "stdOut": {
            "level": 5,
            "bailIfDebug": true
          }
        }
      }
    }
  }
}

Explanation:
The path entry reflects the component name.
Via the options entry you can configure where the management interface is located (restPort) and the ACLs you like.

In topology we define the exchanges and queues we want to use inside this API.

  1. Inside connection you define with the uri (1) where your RabbitMQ service is located, and with the timeout how long the system should wait until a connect call fails.

    I strongly suggest defining the timeout, especially if you connect over the internet. Otherwise you instantly receive a connection error.

  2. In exchanges you define the exchanges you want to use, with their name, their type and whether you want them to persist after the API disconnects (key persistent). To learn more about
    the types have a look at (2).
  3. queues contains all queues you want to use, with the name, whether you want to subscribe, and a limit for the number of concurrently processed messages.
    If you subscribe to a queue you will retrieve all messages inside it, so make sure you handle them all. Otherwise you will wonder why your number of messages grows and the API outputs errors. Subscribe if you implement a consumer for all messages you send to this queue (I show you later how it's done).

    I strongly advise you to use limits, otherwise the API might just stop working when too many messages fight for resources. Additionally this impacts the performance of your API.

  4. bindings model the connection between exchanges and queues and define which routing keys (key keys) you use.
    exchange is self-explanatory, target is the queue you want to connect. You can add keywords or themes in the keys array; I just put in the queue name I'd like to use.
  5. You can configure the logging you want inside logging. I added the configuration I use myself for debugging.

Mixin configuration

The RabbitMQ component uses a mixin. This way you can configure the consumers and producers per model.

Add a structure similar to this to your model.json file:

    "mixins": {
        "MessageQueue": {
          "consumers": {
            "consumerMessage": {
              "queue": "my_first_queue",
              "type": "$company.$type.$subtype"
            }
          },
          "producers": {
            "producerGreet": {
              "exchange": "my_first_exchange",
              "options": {
                "routingKey": "my_first_queue",
                "type": "$company.$type.$subtype"
                "contentType": "application/json"
              }
            },
          }
        }
      },
    

If you already have a mixins key in the model.json, just add the inner structure, beginning at "MessageQueue".

Inside "MessageQueue" you can define a consumers and a producers object, in which you define the consumers and producers respectively.

A consumer has a name (in the example consumerMessage) and needs to know from which queue it should get its messages (key queue) and which message type it is responsible for (key type). If only one message type will occur in the queue you need only one consumer, otherwise you need more. The name of the consumer (consumerMessage in the example) is the name of the method you have to implement for this model. I come to this in a bit.

A producer has a name too, is connected to an exchange (key exchange) and has some options. Here the keys from the component-config.json come into play, where I said I use the queue name I want to target: they need to match the routingKey. At last we set the contentType, which for me is normally application/json.
You don't need to implement a producer, the component does it for you. In a few moments I will show you how you can call it.

Usage

Consumer

As mentioned, you need to implement a consumer yourself. The syntax is (ES6 syntax):

$model.consumerMessage = (payload) => {
  // If your message comes from another source than a loopback-component-mq based API
  const { message } = JSON.parse(payload)

  // Otherwise you can simplify it to
  // const { message } = payload

  // Do something
  // ...

  if (error) {
    // Depending on your architecture you might want to reject the message if an error occurs.
    // This will not acknowledge the message and it will be re-delivered to you.
    // Use this if you have a temporary problem but the message is important.
    return Promise.reject(error)
  } else {
    // If everything is alright, acknowledge the message
    return Promise.resolve()
  }
}

You tell the queue that you handled a message by returning Promise.resolve(). If you want the message to be re-delivered you return Promise.reject().

Producer

You can use a producer anywhere inside the scope of your model like this:

$model.greet = (name) => {
  return $model.producerGreet({ greeting: 'Hi', name: name })
}

That's it already. The producer sends this message as a JSON payload to the defined exchange with the defined routingKey.
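
Calling it is then an ordinary method call on your model that returns a promise; a small sketch:

$model.greet('Frank')
  .then(() => console.log('Message published'))
  .catch(err => console.log('Publishing failed:', err))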

Conclusion / Lessons learned

That's it for today. In this tip you learned:

• How to install the Loopback RabbitMQ component
• How to register the Loopback RabbitMQ component
• How to configure the Loopback RabbitMQ component
• How to configure consumers and producers for the Loopback RabbitMQ component
• How to implement a consumer
• How to use a producer

If you have any questions post them in the comments. Or feel free to send me an email.

Yours sincerely,

Frank

Sources
(1) https://www.rabbitmq.com/uri-spec.html
(2) https://www.rabbitmq.com/getstarted.html
