
Script in Node.js to iterate a directory and extract information from its files


Hi everyone,

after we built the template last time, I want to show you how to put the single pieces together.
Based on a task at hand, I chose the example of iterating over and working with the files in a directory.
The exact task was:
– Iterate a directory
– find all JSON files in it
– read them
– extract all objects in them
– extract the property email from them
– extract the unique domains of the email addresses
– count how often each domain occurs
– write this information to a summary file for further processing / display
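
To make the steps concrete, here is a minimal example of what one of the input files could contain (illustrative only; any JSON file holding an array of objects with an email property works):

[
  { "name": "Jane Doe", "email": "jane@example.com" },
  { "name": "John Doe", "email": "john@gmail.com" },
  { "name": "Max Mustermann", "email": "max@gmail.com" }
]

And here is the complete script: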

'use strict'

const Promise = require('bluebird')
const fs = require('fs');
const path = require('path');
const util = require('util');

// Promisify only readdir and writeFile as we don't need more
const readdirAsync = Promise.promisify(fs.readdir);
const writeFileAsync = Promise.promisify(fs.writeFile);

// Commandline handling
const optionDefinitions = [
  { name: 'folder', alias: 'f', type: String }
]
const commandLineArgs = require('command-line-args')
const options = commandLineArgs(optionDefinitions)

// Add the path to your files
const folder = options.folder
// e.g. '/Users/$Yourusername/Downloads/customerdata';

// This will hold all entries from all files
//  Not unique
const all = []

// Read all files in our directory
readdirAsync(folder)
  .then(files => {
    files
      .map(entry => {
        if(entry.indexOf('.') > 0 && path.extname(entry) === '.json') {
          // In case there are .json files in the folder that are not in JSON format
          try {
            const temp = require(path.join(folder, entry))

            // I know for sure that every entry has a filled email property, so I can just split here
            // and extract the domain name without checking
            temp.map(record => all.push(record.email.split('@')[1]))
          } catch (e) {}
          return null
        }
      })

      console.log(all)
  })
  .then(() => {
    // Create a unique array
    // Use the new Set feature of ES 6
    return Promise.resolve([...new Set(all)])
  })
  .then(allUnique => {
    // Get a list of unique entries with the number of times it appears
    let newList = []

    allUnique.map(entry => {
      const t = all.filter(innerEntry => innerEntry === entry)
      newList.push({name: entry, count: t.length})
    })

    return Promise.resolve(newList)
  })
  .then(allUnique => {
    allUnique.sort((a,b) => b.count - a.count)
    return Promise.resolve(allUnique)
  })
  .then(allUnique => {
    let content = allUnique.reduce((a, b) => a + `${b.name};${b.count}\n`, '')
    return writeFileAsync(path.join(folder, 'all.txt'), content)
  })
  .then(data => {
    console.log('Wrote file successfully')
  })
  .catch(err => {
    console.log('An error occurred:', err)
  })
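
Run against the example data from above, the resulting all.txt would look like this (domain and count, separated by a semicolon):

gmail.com;2
example.com;1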

You can find the repository for it here.

If you are looking for the explanation, continue to read. Otherwise be happy with the template and change it to your heart's desire ;).

This one is a bit longer, but stay with me, we will go through it together.
At first we have the standard block where we import all required libraries in Lines 3-6.

Then we convert the async, callback-based functions for readdir and writeFile into Promises (we promisify them) for easier
and more elegant handling in Lines 9-10.

Next comes the handling of command line (CLI) parameters, as we did before, in Lines 12-21.

We define an array all, which will receive all email domains from the read files (not unique), in Line 25.

Now we have everything together to start:
We read the directory content in Line 28.
In Line 29 it returns an array of all found files.
With Lines 30-31 we start iterating over all elements of the array, and with Line 32 we make sure that only files ending with .json are accepted; all others are ignored.
Lines 34-41 are a bit of cheating: Node.js is able to require a JSON file. So instead of reading the file, parsing it and having to handle it all myself, I use the functionality of require.
In case there is a JSON file that cannot be parsed, I wrap the require into a try/catch block, so that the script continues on an error.
Line 39 does a few things at once:
– With .map I iterate over all entries in the file
– I know each object contains an email property, therefore I act on it without checking
– An email address has the form username@domainname.domainextension. I need the domain name and extension, so I split the email property and take the second half, which is the domain part
– Each of these domain parts is pushed into the array for further processing

After all the processing I print a debug output in Line 45.

JavaScript ES6 introduced some nice new features; one is the Set (an "array" of unique values), another is the spread operator. In Line 50 I return a new array that is created by spreading the Set into it,
so in short: in one line I get a unique array of domains.
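
As a quick standalone illustration of that line (with made-up values):

const domains = ['gmail.com', 'gmx.de', 'gmail.com']
const unique = [...new Set(domains)]
console.log(unique) // [ 'gmail.com', 'gmx.de' ]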

In the next function we create a new array of objects with the domain name and the number of occurrences. For that we iterate over each entry of the unique array in Line 56
and use the filter method of the non-unique array with all entries in Line 57. The filter method returns an array, so I can create the JSON object with the number of occurrences easily by using array.length in Line 58.

After I have the array with the number of occurrences I want to see it sorted. The sort function allows us to provide a comparator function that defines how to sort, and thanks to (5) & (6) I found a short way to do it, as you see
in Line 64.

In the last function I use array.reduce to create a string from the JSON objects. You can see this post-processing step in Line 68.

All that is left is to write the data to a file as you see in Line 69.

This is followed by a simple message to signal that the script has finished successfully (Line 72), or the output of the error if one occurred (Line 75).

I hope I could help you save time again in your race against the clock, and that you found the explanations useful.

Yours sincerely,
Frank

Sources:
(1) How to escape Callback Hell
(2) Explanation of Node.js CLI Argument handling
(3) Explanation of Node.js CLI Argument handling II
(4) My own short example of an template for Node.js CLI Argument handling
(5) Sorting an array
(6) Sorting an array of objects by their property
(7) How to write to a file in Node.js
(8) How to avoid making mistakes with Promises
(9) Repository for the script
(10) Escape Callback Hell with Promises
(11) My article about how to convert (promisify) an async function with callback to a Promise based one

Commandline tools with Node.js


Hi everyone,

sometimes you need a small tool, but you might be working for an extended period of time in Node.js, so you don't want to
switch languages and lose time and momentum. You want to do it quickly, but correctly, to be able to reuse it in one way or another at a later point.
This is what today is about.

I will quickly show you how to build a template for commandline arguments and handle them comfortably, so that you have this off your plate.

Here is the code for it:

'use strict'

const commandLineArgs = require('command-line-args')

// Commandline handling
const optionDefinitions = [
  { name: 'folder', alias: 'f', type: String }
]

const options = commandLineArgs(optionDefinitions)

// Add the path to your files
const folder = options.folder
// '/Users/$YourUsername/Downloads/customerdata';
console.log(`Given folder:${folder}`)

Explanation:

I use the npm package command-line-args to be able to handle commandline arguments easily.
Line 3: At first we import the command-line-args package.
Line 6: Then we define the options we want to be able to use. I chose an option folder with the type String.

Line 10: After we defined them, we feed them to commandLineArgs; it parses them for us and returns a JSON document with the result.

Line 13: In that result we have properties with the names we defined in our options, and we can extract them like we are used to.

If you save it as template.js, the following syntax is supported on the commandline:

node template.js --folder $YourFolder
node template.js --folder=$YourFolder
node template.js -f $YourFolder

As you can see, we have defined both a long and a short form of the required parameter.
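
If you need more options later, the same pattern extends naturally. A small sketch (defaultValue and multiple are documented features of command-line-args; the option names here are made up):

const optionDefinitions = [
  { name: 'folder', alias: 'f', type: String },
  { name: 'verbose', alias: 'v', type: Boolean, defaultValue: false },
  { name: 'files', type: String, multiple: true }
]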

You can find the file on https://github.com/FrankTheDevop/cli-template too.
You can find the npm package on https://www.npmjs.com/package/command-line-args and its repo on https://github.com/75lb/command-line-args.

I hope I could help you save some time in research and trial & error with this short nugget.

Yours Sincerely,
Frank

Sources:
(1) https://flaviocopes.com/node-cli-args/
(2) https://code-maven.com/argv-raw-command-line-arguments-in-nodejs
(3) https://codeburst.io/need-for-promises-and-rookie-mistakes-to-avoid-when-using-promises-9cabba215e04

Auto Remove RabbitMQ Orphan Queues from Loopback MQ Connector


Hi everyone,

who doesn't know this: you're working on multiple projects, some containers still run on Docker, others are already terminated. But your RabbitMQ gets slower and slower. You check the queue overview and you see numerous queues with names like 1234568.node /app/server/server.js.54.response.queue.
These are queues that are left over, sometimes from crashed containers, sometimes because a container didn't end its connection correctly.
Basically these queues stay open until you reset RabbitMQ, remove them, or they expire. Waiting for the expiration can take quite some time depending on your configuration.
I will show you a quick and easy way to get rid of them using the UI and a regex pattern.

If you want to quickly and easily remove them, then you can do this:

Go to the Admin -> Policies site
Expand Add / update a policy
Configure it as follows
Name: Whatever you wish, doesn’t matter
Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue 
Apply to: Select Queues 
Priority: Leave empty
Definition: In the first left field write expires, in the field to the right write 1 and in the drop down select number
Click Add policy

That's it. Your queues are already deleted, or if you have too many it takes a few seconds. But then they are all gone.
Don't forget to delete the policy again afterwards.
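
If you prefer the command line over the UI, the same temporary policy can be set and removed with rabbitmqctl (a sketch, assuming you can run rabbitmqctl against the broker; the policy name is arbitrary):

rabbitmqctl set_policy --apply-to queues remove-orphans "[a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue" '{"expires":1}'
rabbitmqctl clear_policy remove-orphans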

I hope this helps you.

Regards,
Your Frank

Regex Pattern: [a-z0-9]*\.node \/app\/server\/server\.js\.[0-9]*\.response.queue

Sources:
(1) https://www.cloudamqp.com/blog/2016-06-21-how-to-delete-queues-in-rabbitmq.html

Useful tools:
(1) https://regex101.com/

How to promisify with bluebird


Hey everyone,

this will be a quick tip / reference. Sometimes I get asked about the syntax to promisify only one function with the bluebird Promise library. I will show you an example that promisifies the readdir method of the fs package:

'use strict'
const fs = require('fs')
const Promise = require('bluebird')

const readdirAsync = Promise.promisify(fs.readdir)

That's it already. Before, your code looked somewhat like this:

fs.readdir(myPath, (err, files) => {
  // Handle the files
})

If you needed to do further asynchronous operations you ended up in callback hell (1).

Promisifying it makes the syntax clearer and more elegant:

readdirAsync(myPath)
  .then(files => {
    // Handle the files
  })

You see, now you are in the promise chain, which makes elegant and readable code easier to achieve.
And you don't have to be extra careful about mixing Promises and callbacks at the same time.
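
A side note: if you want Promise versions of a whole module, bluebird also offers promisifyAll, which adds an Async twin for every callback-based function (a minimal sketch):

'use strict'
const Promise = require('bluebird')
const fs = Promise.promisifyAll(require('fs'))

// fs.readdir is now also available as fs.readdirAsync
fs.readdirAsync(myPath)
  .then(files => {
    // Handle the files
  })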

Yours sincerely,
Frank

(1) https://blog.syntonic.io/2017/07/07/escaping-callback-hell-util-promisify/

Angular Tips I: Fix Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.


Hi everyone,

this will be a quickie. I guess everyone has had this error and was so used to it just working that you forgot how to solve it. Because I just had that experience, I am writing this post.

What you tried to achieve

Add a routerLink entry to an a element in an Angular template.

Error

Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.

Solution

Remember to import the RouterModule in every module where you want to use routerLink.
To do that, add import { RouterModule } from '@angular/router'; at the top of the corresponding module file. And don't forget to add it to the imports array:

@NgModule({
  imports: [ CommonModule, RouterModule ],
  ...
})
export class ...

If you put that into a shared module, don't forget the exports array too:

exports: [
  ...,
  ...,
  RouterModule
]
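
Put together, a shared module could look like this minimal sketch (SharedModule is an illustrative name; add your own declarations as needed):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { RouterModule } from '@angular/router';

@NgModule({
  imports: [ CommonModule, RouterModule ],
  exports: [ CommonModule, RouterModule ]
})
export class SharedModule { }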

I hope this helps you avoid a headache when you see the message “Can’t bind to ‘routerLink’ since it isn’t a known property of ‘a’.” again.

Yours sincerely,
Frank

Sources:
(1) https://blog.ng-book.com/basic-routing-in-angular-2/
(2) https://coryrylan.com/blog/introduction-to-angular-routing
(3) https://toddmotto.com/angular-component-router
(4) https://malcoded.com/posts/angular-fundamentals-routing

Docker Stack for Apache, PHP FPM & MySQL


Hey everyone,

it has been quite some time since my last post, I know. Today I came across a new problem and wanted to share my solution (which stands on the shoulders of giants) with you.

I have a development environment for Apache & PHP on my MacBook, but with all the different setup approaches and changes to which source works, I wanted a reliable solution that I can replicate.

So I decided to put everything into Docker containers and put them together as a stack.

The idea was to have a stack where I can manipulate the vhost definitions and the Apache config from the outside, while the projects are mapped from a local source folder.

The Requirements

– Docker installed and working
– Access to the internet
– A directory where the Apache config (httpd.conf) is located
– A directory where vhost configs for apache are hosted
– A directory where the sourcecode is reachable
– A cloned copy of https://github.com/FrankTheDevop/php-apache-mysql-containerized

Okay, let's start:

Start

After you cloned the git repository you find this structure:

root folder
  README.md
  apache
    Dockerfile
    conf
      httpd.conf
    vhosts
      demo.apache.conf
  docker-compose.yml
  php
    Dockerfile
  public_html

I use this as my base folder and leave the configs here. You can decide to put them elsewhere, just remember to change your paths later.

Step 1

Open a console and change into the root folder of the git clone. Change into the php folder.

Edit the Dockerfile and put the PHP version you want to use into PHP_VERSION="$YOUR_VERSION_NAME".

TIP

Depending on the PHP version you use, you might want to change the following:

RUN docker-php-ext-install mysqli

This statement installs the mysqli extension for PHP. From PHP 5.5 onward you should use it. But if you have software that still uses the mysql extension and PHP <= 5.6, change it to:

RUN docker-php-ext-install mysql

Finally execute:

docker build -t $your-image-prefix/$your-image-name:$version .

This could look like this: docker build -t frankthedevop/php:v5.6 .

Don't forget the "." at the end; that tells Docker to use the current folder.

Building takes a bit of time, depending on the speed of your internet connection and your computer.

Step 2:

Change into the apache folder. Edit the Dockerfile.
Put the version you want into APACHE_VERSION="$yourversion", e.g. APACHE_VERSION="2.4.25".

Execute:
docker build -t $your-image-prefix/$your-image-name:$version .
This could look like this: docker build -t frankthedevop/apache:v2.4.25 .
Don't forget the "." at the end; that tells Docker to use the current folder.

Again this might take a while.

Step 3

After you have the containers for Apache and PHP you can start your stack. Here you have to be a bit careful: it is slightly different depending on whether you use docker-compose or have a Docker Swarm installation.

If you do not use a Docker Swarm installation you could go into the root folder and execute:
docker-compose up

It will start the Apache & PHP containers, retrieve a MySQL container and make everything available.
You can already use it, but it doesn't point to the folder of your sources yet. For that, skip to Step 4.

If you use a Docker Swarm installation you need to use the docker-compose.stack.yml file as config.
Please edit the file and change the image names to those you chose earlier. For MySQL please replace:
– $YourRootPW for the Root Password,
– $YourDatabaseName for the Database Name,
– $YourUser for the username and
– $YourPW for the password

The MySQL Container will use those variables to automatically create a MySQL Instance for you. You can use $YourUser and $YourPW for the connection in your code.

For PHP and Apache please remove the volume keys and what is below them for now.

Now you can execute:
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName

Interim Result

Finally you have a running stack of Apache, PHP & MySQL. If you visit localhost:8080 you should reach a site telling you: "It works".

Step 4

Whether you use the Docker Swarm installation or not, you are still missing the ability to edit the PHP files and see the results.

For this we have to set the Volume key accordingly.

For PHP we add:
volumes:
  - $YourPathToTheSourceCode:/var/www/html/

Please replace $YourPathToTheSourceCode with the path to where your sourcecode is located. I typically put the parent folder of my projects here, e.g. /Users/username/projects/php/.

For Apache we start the same way:

volumes:
  - $YourPathToTheSourceCode:/var/www/html/

After that we add:

  - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
  - $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

Replace $YourPathToTheVhostsFolder with the folder where you put your virtual host definitions, and $YourPathTOTheHttpd.confFolder with the folder where you put your httpd.conf (optional; if you want to use the default one, remove this line).

This is how it should look:

version: '3'

services:
  php:
    image: fdsmedia/php:5.6
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
  apache:
    image: fdsmedia/apache:2.4.25
    depends_on:
      - php
      - mysql
    ports:
      - "8080:8080"
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
      - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
      - $YourPathTOTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf
  mysql:
    image: mysql:${MYSQL_VERSION:-latest}
    ports:
      - "3306:3306"
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$YourRootPW
      - MYSQL_DATABASE=$YourDatabaseName
      - MYSQL_USER=$YourUser
      - MYSQL_PASSWORD=$YourPW
volumes:
    data:

Now you can just restart the services for PHP & Apache, go to localhost:8080 and find your project. If you change the url or port for the virtual host, remember to change it here too.

Result

You finally have a working Docker-based stack where you can develop your PHP-based applications and maintain your database content (as long as you don't remove the data volume).

If you have a new application it is as easy as creating a new folder under your project folder and adding a vhost configuration (see the sketch below), and you are good to go.
If you want to change the PHP version used you just need to create another container with the other version, add an entry to the stack and change the url in the vhost configuration. That's all. Isn't that amazing?
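
For reference, a vhost definition for such a project could look roughly like this (a sketch only; the repository ships a demo.apache.conf you can adapt, the php hostname assumes the service name from the stack file, and the port has to match your httpd.conf):

<VirtualHost *:8080>
    ServerName myproject.localhost
    DocumentRoot /var/www/html/myproject

    # Hand .php files to the PHP-FPM container
    <FilesMatch \.php$>
        SetHandler "proxy:fcgi://php:9000"
    </FilesMatch>
</VirtualHost>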

If you have any questions, feel free to post them in the comments or to send me an email.

Yours sincerely,

Frank

P.S.: If you are looking for a hosting solution have a look at Digital Ocean (digitalocean.com*). They let you set up Docker hosting easily & quickly.

* Affiliate Link

Sources
(1) https://www.cloudreach.com/blog/containerize-this-php-apache-mysql-within-docker-containers/
(2) https://github.com/mzazon/php-apache-mysql-containerized
(3) https://dev.to/chiefoleka/how-to-setup-nginx-and-php71-with-fpm-on-mac-os-x-without-crying-4m8 (they do it with nginx)
(4) http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/ (nginx too)
(5) https://getgrav.org/blog/macos-mojave-apache-multiple-php-versions (One of the sources that should work but sadly not for me)
(6) https://www.pascallandau.com/blog/php-php-fpm-and-nginx-on-docker-in-windows-10/#setup-php-fpm (Inspiration for the development setup)
(7) https://easyengine.io/tutorials/php/directly-connect-php-fpm (Debug PHP FPM)

SSL Termination Stack Setup: Let's Encrypt, HAProxy, Your Stack


Hi everyone,

for a setup at work I needed a quick and easy way to terminate an SSL connection without hassle. After a short research I found it feasible to use Let's Encrypt for free SSL certificates. But it looked like a lot of work to create the certificate, so I searched for a quicker and hassle-free approach. I found one, but it still took me a few hours to figure out how to use it correctly. And I want to save you that time.
My setup looked like this:
– Domain hosted at GoDaddy.com
– Server hosted at Digital Ocean (digitalocean.com*)
– Docker in Swarm Mode
– Portainer as UI
The expected outcome is:
– 1 Stack to (re-)generate the certificate
– 1-x Worker Stacks
If your domain is hosted somewhere other than GoDaddy that is no problem, as long as you find your provider in this list: https://github.com/Neilpang/acme.sh/blob/master/dnsapi/README.md.
Let’s dive into the work:
1. Create API Credentials for GoDaddy / your supported Provider. How you do it depends on the provider, refer to this list: https://github.com/Neilpang/acme.sh/blob/master/dnsapi/README.md.
Remember to create production keys; GoDaddy, for example, allows you to create sandbox keys, but those won't work.
2. Deploy this stack config for the generation stack:
version: '3.5'
services:
  acme:
    command: daemon
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        reservations:
          cpus: '0.01'
          memory: 50M
    environment:
      DEPLOY_HAPROXY_PEM_PATH: /haproxy
      DEPLOY_HAPROXY_RELOAD: for task in $$(docker service ps SSL_system_haproxy -f desired-state=running -q); do docker run --rm -v /var/run/docker.sock:/var/run/docker.sock datagridsys/skopos-plugin-swarm-exec task-exec $$task /reload.sh; done
      
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    image: interaction/acme.sh
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - acme-data:/acme.sh
      - nginx-data:/www
      - system_haproxy-data:/haproxy
  nginx:
    deploy:
      resources:
        reservations:
          cpus: '0.01'
          memory: 20M
    healthcheck:
      test: curl -f http://localhost || exit 1
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    image: interaction/acme.sh-nginx
    ports:
      - 80:80
    volumes:
      - nginx-data:/www

volumes:
  acme-data:
    driver: local
    name: 'acme-data'
  nginx-data:
    driver: local
    name: 'nginx-data'
  system_haproxy-data:
    external: true

 

3. Go into your acme Container, either by docker exec -it $containerhash /bin/sh or via your UI.
4. For GoDaddy issue the commands:
4.1 export GD_Key=$yourkey
4.2 export GD_Secret=$yoursecret
4.3 acme.sh --issue -d "$yourFQDN" --dns dns_gd --dnssleep 15
4.4 acme.sh --deploy -d "$yourFQDN" --deploy-hook haproxy
The first two commands, 4.1 and 4.2, set the required environment variables for the acme.sh script. In 4.3, replace $yourFQDN with the (sub-)domain you want the certificate to be created for, e.g. web.stack.example.com.
With the last command, 4.4, you deploy the certificate and let the script restart your HAProxy.
Let's look at your worker stack. Here is my definition:
version: '3.5'
services:
  system_haproxy:
    image: 'dockercloud/haproxy:1.6.6'
    depends_on:
      - web
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
    environment:
      - CERT_FOLDER=/haproxy
      - DOCKER_HOST=127.0.0.1
      - 'EXTRA_GLOBAL_SETTINGS="debug"'
      - 'STATS_AUTH=admin:$password'
      - 'STATS_PORT=1936'
      - DOCKER_TLS_VERIFY
      - DOCKER_HOST
      - DOCKER_CERT_PATH
    volumes:
      - system_haproxy-data:/haproxy
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - '443:443'
      - '1936:1936'
      
  web:
    image: 'dockercloud/hello-world:v1.0.0-frank'
    hostname: '{{ .Service.Name }}-{{ .Task.Slot }}'
    environment:
      - SERVICE_PORTS=$yourport
      - FORCE_SSL=yes
      - SSL_CERT=/haproxy/$certificatename.pem
      - 'VIRTUAL_HOST=https://$yourFQDN'

volumes:
  system_haproxy-data:
    external: true

 

So what do we have here? A HAProxy container with my default configuration plus SSL ports, and the ENV var CERT_FOLDER pointing to the folder where the certificate(s) are located. That is needed at start-up, as HAProxy recognises that you want SSL termination; it supports multiple ways to provide the certificates (for more details see https://github.com/docker/dockercloud-haproxy/tree/master#ssl-termination).
The second entry is a test container you can find on Docker Hub; I just changed it to another port to reflect my own requirements. Normally the image is 'dockercloud/hello-world'.
The important things here are the environment variables. VIRTUAL_HOST is probably already known to you. You can set the scheme to https instead of http and HAProxy recognises it. You also need SERVICE_PORTS set to the ports you want to use on this container.
What is probably new to you is FORCE_SSL and SSL_CERT. FORCE_SSL enforces that every access to this container is done securely via HTTPS. And SSL_CERT points to the location of the certificate we generated earlier: the mount point of the external volume that is shared with the acme container.
After you have deployed both stacks and issued the four commands in the acme container you are ready to go. When you open $yourFQDN in your browser you should see the demo page of the hello-world container.
Congratulations! You now have an SSL-terminated stack that is easy to develop against and has no certificate dependencies inside your worker stack(s). I hope I could save you quite some time so you can enjoy the benefits!
You can find the two stack definitions here: https://github.com/FrankTheDevop/ssl-termination-stack.
Feel free to use them :).
Kind Regards,
Frank
* Affiliate Link

MLAND Series – Tips III – Loopback RabbitMQ Usage


The purpose of this post is to help you get up and running with RabbitMQ integrated into your Loopback API.

Prerequisite

You have a working Loopback API Project.

RabbitMQ

Install the Loopback component

Inside your Loopback project folder run:

npm install loopback-component-mq --save

This will install the component and all its dependencies, as you are used to.

Configure the RabbitMQ component

Register it

Loopback loads components by default only if they are inside known folders. External components live in different locations and need to be registered.
You do so by adding "../node_modules/loopback-component-mq/lib/mixins" to the mixins array in model-config.json, as sketched below.
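
A minimal sketch of the relevant part of model-config.json (assuming the usual LoopBack _meta block; the loopback entry is the default mixin source):

{
  "_meta": {
    "mixins": [
      "loopback/common/mixins",
      "../node_modules/loopback-component-mq/lib/mixins"
    ]
  }
}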

Component configuration

Loopback checks whether a configuration is available for a component inside component-config.json.
Therefore add the following template to component-config.json:

{
"loopback-component-mq": {
    "path": "loopback-component-mq",
    "options": {
      "restPort": 15672,
      "acls": [
        {
          "accessType": "*",
          "principalType": "ROLE",
          "principalId": "$unauthenticated",
          "permission": "DENY"
        }
      ]
    },
    "topology": {
      "connection": {
        "uri": "amqp://$username:$password@$host:$port/$vhost", (1)
        "timeout": 30000
      },
      "exchanges": [
        {
          "name": "my_first_exhange",
          "type": "topic",
          "persistent": true
        }
      ],
      "queues": [
        {
          "name": "my_first_queue",
          "subscribe": true,
          "limit": 1
        },
        {
          "name": "my_second_queue",
          "limit": 1
        }
      ],
      "bindings": [
        {
          "exchange": "my_first_exchange",
          "target": "my_first_queue",
          "keys": [
            "my_first_queue"
          ]
        },
        {
          "exchange": "my_first_exchange",
          "target": "my_second_queue",
          "keys": [
            "my_second_queue"
          ]
        }
      ],
      "logging": {
        "adapters": {
          "stdOut": {
            "level": 5,
            "bailIfDebug": true
          }
        }
      }
    }
  }
}

Explanation:
The path entry reflects the component name.
You can configure where the management interface is located (restPort) and the acls you like via the options entry.

In topology we define the exchanges and queues we want to use inside this API.

  1. Inside connection you define with the uri where your RabbitMQ service is located (see (1) for the URI format) and with the timeout how long the system should wait until a connect call fails.

    I strongly suggest defining it, especially if you connect over the internet. Otherwise you instantly receive a connection error.

  2. In exchanges you define the exchanges you want to use, with their name, their type and whether you want them to persist after the api disconnects (key persistent). To learn more about
    the types have a look at (2).
  3. queues contains all queues you want to use, with the name, whether you want to subscribe and whether you want to limit the number of concurrently processed messages.
    If you subscribe to a queue you will retrieve all messages inside it, so make sure you handle them all. Otherwise you will wonder why your number of messages grows and the api outputs errors. Use this if you implement a consumer for all messages you send to this queue (I show you later how it's done).

    I strongly advise you to use limits, otherwise the api might just stop working when too many messages fight for resources. Additionally this impacts the performance of your API.

  4. bindings model the connection between exchanges and queues while defining which routing keys (key keys) you use.
    exchange is self-explanatory; target is the queue you want to connect. You can add keywords or themes in the keys array, I just put in the queue name I'd like to use.
  5. You can configure the logging you want to have inside logging. I added the configuration I use myself for debugging.

Mixin configuration

The RabbitMQ component uses a mixin. This way you can configure the consumers and producers per model.

Add a similar structure like this to your model.json file:

    "mixins": {
        "MessageQueue": {
          "consumers": {
            "consumerMessage": {
              "queue": "my_first_queue",
              "type": "$company.$type.$subtype"
            }
          },
          "producers": {
            "producerGreet": {
              "exchange": "my_first_exchange",
              "options": {
                "routingKey": "my_first_queue",
                "type": "$company.$type.$subtype"
                "contentType": "application/json"
              }
            },
          }
        }
      },
    

If you already have a mixins key in the model.json, just add the inner structure, beginning at "MessageQueue".

Inside "MessageQueue" you can define a consumers and a producers object, in which you define the consumers and producers respectively.

A consumer has a name (in the example consumerMessage) and needs to know from which queue it should get its messages (key queue) and which message type it is responsible for (key type). If only one message type occurs in the queue you need only one consumer, otherwise you need more. The name of the consumer (consumerMessage in the example) is the name of the method you have to implement for this model. I come to this in a bit.

A producer has a name too, is connected to an exchange (key exchange) and has some options. Here the keys from the component-config.json come into play, where I said I use the queue name I want to target: they need to match the routingKey. At last we set the contentType, which for me is normally application/json.
You don't need to implement a producer, the component does it for you. In a moment I will show you how you can call it.

Usage

Consumer

As mentioned, you need to implement a consumer yourself. The syntax is (ES6 syntax):

$model.consumerMessage = (payload) => {
  // If your message comes from another source than a loopback-component-mq based API
  const { message } = JSON.parse(payload)

  // Otherwise you can simplify it to:
  // const { message } = payload

  // Do something
  ...

  if (error) {
    // Depending on your architecture you might want to reject the message if an error occurs.
    // This will not acknowledge the message and it will be re-delivered to you.
    // Use this if you have a temporary problem but the message is important.
    return Promise.reject(error)
  }

  // If everything is alright, acknowledge this message
  return Promise.resolve()
}

You tell the queue that you handled a message by returning a Promise.resolve(). If you want the message to be re-delivered, you return a Promise.reject().

Producer

You can use a producer anywhere inside the scope of your model this way:

$model.greet = (name) => {
  return $model.producerGreet({ greeting: 'Hi', name: name })
}

That's it already. The producer sends this message with a JSON payload to the defined exchange with the defined routingKey.

Conclusion / Lessons learned

That's it for today. In this tip you learned:

• How to install the Loopback RabbitMQ component
• How to register the Loopback RabbitMQ component
• How to configure the Loopback RabbitMQ component
• How to configure consumers and producers for the Loopback RabbitMQ component
• How to implement a consumer
• How to use a producer

If you have any questions, post them in the comments. Or feel free to send me an email.

Yours sincerely,

Frank

Sources
(1) https://www.rabbitmq.com/uri-spec.html
(2) https://www.rabbitmq.com/getstarted.html

MLAND Series – Tips II – Loopback AthenaPDF


Purpose

When developing an infrastructure for a client project I faced the situation that I needed to support 50 concurrent requests to generate a PDF. The purpose of this post is to show you how I did this using AthenaPDF and RabbitMQ.

Setup

The setup used for this tip is a machine on Digital Ocean*, created and managed through Docker Cloud, with a Loopback API container, an AthenaPDF container and RabbitMQ hosted on CloudAMQP.

 

Mixin Definition

You can use a mixin definition similar to this:

"mixins": {
    "MessageQueue": {
      "producers": {
        "producerSendGeneratePDF": {
          "exchange": "generate.pdf",
          "options": {
            "routingKey": "pdf",
            "type": "company.project.messages.generatepdf",
            "contentType": "application/json"
          }
        },
        "producerPdfGenerated": {
          "exchange": "generate.pdf",
          "options": {
            "routingKey": "pdf",
            "type": "company.project.messages.pdfgenerated",
            "contentType": "application/json"
          }
        }
      },
      "consumers": {
        "consumerGeneratePDF": {
          "queue": "client.generate.pdf",
            "type": "company.project.messages.generatepdf",
        },
        "consumerPDFGenerated": {
          "queue": "client.generate.pdf",
            "type": "company.project.messages.pdfgenerated",
        }
      }
    }
  }

 

With this configuration you can execute Model.producerSendGeneratePDF({data}) and send a message of the following type:

"company.project.messages.generatepdf"

 

The message will call Model.consumerGeneratePDF, where you can prepare the HTML document you want to be converted and call the AthenaPDF container to create it. After the work is done in Model.consumerGeneratePDF you can call producerPdfGenerated({data}) to start the next step of your workflow.

If you go with my config, you implement Model.consumerPDFGenerated. There you can execute whatever next step you want: send an e-mail to your customer, upload to Amazon S3, or whatever comes next in your workflow.
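
To make the flow concrete, here is a rough sketch of such a consumer (callAthenaPdf is a hypothetical helper standing in for the HTTP call to your AthenaPDF container; check the AthenaPDF documentation for the actual conversion endpoint):

Model.consumerGeneratePDF = (payload) => {
  const { html } = payload // assuming the message carries the prepared HTML

  // callAthenaPdf is hypothetical: send the HTML to the AthenaPDF
  // container and resolve with the generated PDF
  return callAthenaPdf(html)
    .then(pdf => Model.producerPdfGenerated({ pdf }))
}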

 

How to limit the AthenaPDF Container Resource consumption

To limit the memory usage of the AthenaPDF container, add this to the Docker stackfile definition of the container: mem_limit: Xm.

X is the number of MB you allow the container to use. 256 MB is a good starting point for your measurements.
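
In the stackfile that could look like this (a sketch; the image name is illustrative, use whatever AthenaPDF image you deploy):

athenapdf:
  image: arachnysdocker/athenapdf-service
  mem_limit: 256m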

How to fix Xlib: extension “RANDR” missing on display “:99”

If you are getting this error it means the shared memory area /dev/shm isn't shared between the host and the container. To fix this add these two lines to the PDF container definition of the stackfile:

 

volumes:
  - '/dev/shm:/dev/shm'

 

How to not overload the AthenaPDF Container

Especially when you limit the amount of memory the PDF container can use, you want to limit the number of concurrent PDFs being generated.

Referring to the RabbitMQ tip, you can limit the number of concurrent tasks that the API endpoint processes at any given moment:

Check the section server/component-config.json below, in the loopback-component-mq definition:

{  
   "loopback-component-mq":{  
      "path":"loopback-component-mq",
      "options":{  
         "restPort":15672,
         "acls":[  
            {  
               "accessType":"",
               "principalType":"ROLE",
               "principalId":"$unauthenticated",
               "permission":"DENY"
            }
         ]
      }
   },
   "topology":{  
      "connection":{  
         "uri":"amqp://user:password@host/vhost",
         "timeout": timeoutinmilliseconds
      },
      "exchanges":[  
         {  
            "name":"exchangename",
            "type":"exchangetype",
            "persistent":persistent
         }
      ],
      "queues":[  
         {  
            "name":"queuename",
            "subscribe":true,
            "limit":concurrentnonacknoledgedtasknumber
         }
      ],
      "bindings":[  
         {  
            "exchange":"exchangename",
            "target":"queuename",
            "keys":[  

            ]
         }
      ],
      "logging":{  
         "adapters":{  
            "stdOut":{  
               "level":5,
               "bailIfDebug":true
            }
         }
      }
   }
}

Experiment with the limit in the queues definition and see how many concurrent tasks your infrastructure can handle with your configuration.

 

Scaling

Vertically

The great point about this way of setting up your infrastructure is that you can easily scale vertically: just give the PDF container more memory, get a bigger machine and increase the limit in the queue definition.

Horizontally

If you want to scale horizontally you need to add a haproxy container and change your configuration a bit. I will show you how to do it in the next tip.

 

Sources

AthenaPDF

Digital Ocean* (Affiliate Link)

Docker Cloud

RabbitMQ

CloudAMQP

MLAND Series – Tips I – Loopback RabbitMQ Mistakes


Purpose

The purpose of this post is to highlight the most common mistakes I experienced with Loopback and RabbitMQ myself.

Mistakes I made and you might have made too

  1. Receiving [rabbit.unhandled] Message
  2. Messages keep getting redelivered
  3. Getting undefined from payload

Fix Receiving [rabbit.unhandled]

This error can result from 4 different mistakes:

  • Forgotten to load mixin in model-config.json
  • Forgotten to configure component-config.json
  • Messed up the Entry in model.json
  • Connecting to Queue while not handling all Message Types

Fix Forgotten to load mixin in model-config.json

Check how the entry for model-config.json should look below.

Fix Forgotten to configure component-config.json

Check how the entry for component-config.json should look below.

Fix Messed up the Entry in model.json

Check how the entry for model.json should look below.

Connecting to Queue while not handling all Message Types

If you get [rabbit.unhandled] Message of x on queue y, connection 'default' was not processed by any registered handlers and the messages only get re-queued and redelivered, you have probably connected an API to a queue where you receive messages you don't handle.

loopback-component-mq picks the next message available on the queues it is connected to, even if you haven't implemented handling for these messages.

So make sure you connect each API only to queues where you handle all of the message types. Otherwise one API will eventually block the queue because it gets messages it can't process.

Fix Message keep getting redelivered

Check the function you implemented and make sure you return a Promise.resolve() if your function succeeded and a Promise.reject() if it failed.

Fix Getting undefined from payload

If the origin of your message is not Loopback, it might mean you need to use JSON.parse to retrieve the data in the message. Check how the code should look below.

Files to Look at

  1. server/model-config.json
  2. server/component-config.json
  3. common/models/model.json / server/models/model.json
  4. common/models/model.js / server/models/model.js

server/model-config.json

This file holds the information about which mixins to load for this project.

This happens inside the “mixins”: [] Block

Make sure you load the loopback-component-mq with an entry like this:

"../node_modules/loopback-component-mq/lib/mixins"

server/component-config.json

This file holds the general configuration for components that need it.

Make sure you configure loopback-component-mq with an entry like this:

{  
   "loopback-component-mq":{  
      "path":"loopback-component-mq",
      "options":{  
         "restPort":15672,
         "acls":[  
            {  
               "accessType":"",
               "principalType":"ROLE",
               "principalId":"$unauthenticated",
               "permission":"DENY"
            }
         ]
      }
   },
   "topology":{  
      "connection":{  
         "uri":"amqp://user:password@host/vhost",
         "timeout": timeoutinmilliseconds
      },
      "exchanges":[  
         {  
            "name":"exchangename",
            "type":"exchangetype",
            "persistent":persistent
         }
      ],
      "queues":[  
         {  
            "name":"queuename",
            "subscribe":true,
            "limit":concurrentnonacknoledgedtasknumber
         }
      ],
      "bindings":[  
         {  
            "exchange":"exchangename",
            "target":"queuename",
            "keys":[  

            ]
         }
      ],
      "logging":{  
         "adapters":{  
            "stdOut":{  
               "level":5,
               "bailIfDebug":true
            }
         }
      }
   }
}

 

 

common/models/model.json / server/models/model.json

In the model.json you configure the consumers and producers that this model implements, together with which queue they operate on and which type the message has.

Make sure you configure loopback-component-mq with an entry like this:

{  
   "mixins":{  
      "MessageQueue":{  
         "consumers":{  
            "nameof_function_to_imlement":{  
               "queue":"queue_name_from_component_config",
               "type":"name_of_message_type"
            }
         },
         "producers":{  
            "name_of_function_to_imlement":{  
               "exchange":"exchange_name_from_component_config",
               "options":{  
                  "routingKey":"routing_key_name",
                  "type":"name_of_message_type",
                  "contentType":"content_type_you_wish"
               }
            }
         }
      }
   }
}

 

 

common/models/model.js / server/models/model.js

In the model.js you implement the consumer and producer functions you declared in the model.json file.

Make sure you implement it with an entry like this:

Model.name_of_function_to_implement = (payload) => {
  // If sent from Loopback
  const { variable_name } = payload

  // If sent from another source like python, parse it first:
  // const { variable_name } = JSON.parse(payload)
}

If your queue needs acknowledgements make sure to return a Promise.resolve().

If something went wrong, return a Promise.reject(). RabbitMQ will then redeliver the message.