
Docker Stack for Apache, PHP FPM & MySQL


Hey everyone,

It has been quite some time since my last post, I know. Today I came across a new problem and want to share my solution (which stands on the shoulders of some giants) with you.

I have a development environment for Apache & PHP on my MacBook, but with all the different setup methods and ever-changing instructions about which source works, I wanted a reliable solution that I can replicate.

So I decided to put everything into Docker containers and combine them as a stack.

The idea was to have a stack where I can manipulate the vhost definitions and the Apache config from the outside, while the projects are mapped from a local source folder.

The Requirements

– Docker installed and working
– Access to the internet
– A directory where the Apache config (httpd.conf) is located
– A directory where the vhost configs for Apache are hosted
– A directory where the source code is reachable
– A cloned copy of https://github.com/FrankTheDevop/php-apache-mysql-containerized

Okay, let's start:

Start

After you cloned the git repository you find this structure:
– root folder
  – README.md
  – apache
    – Dockerfile
    – conf
      – httpd.conf
    – vhosts
      – demo.apache.conf
  – docker-compose.yml
  – php
    – Dockerfile
  – public_html

I use this as my base folder and leave the configs here. You can decide to put them elsewhere, just remember to change your paths later.

Step 1

Open a console and change into the root folder of the git clone. Then change into the php folder.

Edit the Dockerfile and put the PHP version you want to use into PHP_VERSION="$YOUR_VERSION_NAME".
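
For orientation, here is a minimal sketch of what php/Dockerfile could look like; the base image tag and the exact variable handling are assumptions, so check the file in your cloned repo:

# Hypothetical sketch of php/Dockerfile - the repo's actual file may differ
ARG PHP_VERSION="7.2"
# Official PHP-FPM image for the chosen version
FROM php:${PHP_VERSION}-fpm
# Install the mysqli extension (see the TIP below)
RUN docker-php-ext-install mysqli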

TIP

Depending on the PHP Version you use you might want to change the following:
RUN docker-php-ext-install mysqli
This statement installs the mysqli extension for PHP. From PHP 5.5 onwards you should use it. But if you have software that still uses the old mysql extension and PHP <= 5.6, change it to:

RUN docker-php-ext-install mysql

Finally execute:

docker build -t $your-image-prefix/$your-image-name:$version .

This could look like this: docker build -t frankthedevop/php:v5.6 .

Don't forget the "." at the end; it tells Docker to use the current folder as the build context.

The build takes a bit of time, depending on the speed of your internet connection and your computer.

Step 2:

Change into the apache folder. Edit the Dockerfile.
Put the version you want into APACHE_VERSION="$yourversion", e.g. APACHE_VERSION="2.4.25".

Execute:
docker build -t $your-image-prefix/$your-image-name:$version .
This could look like this: docker build -t frankthedevop/apache:v2.4.25 .
Don't forget the "." at the end; it tells Docker to use the current folder as the build context.

Again this might take a while.

Step 3

After you have the containers for Apache and PHP you can start your stack. Here you have to be a bit careful: the procedure differs slightly depending on whether you use docker-compose or a Docker Swarm installation.

If you do not use a Docker Swarm installation you could go into the root folder and execute:
docker-compose up

It will start the Apache & PHP containers, pull a MySQL container and make everything available.
You can already use it, but it doesn't point to your source folder yet. For that, skip to Step 4.

If you use a Docker Swarm installation you need to use the docker-compose.stack.yml file as config.
Please edit the file and change the image names to those you chose earlier. For MySQL please replace:
– $YourRootPW with the root password,
– $YourDatabaseName with the database name,
– $YourUser with the username and
– $YourPW with the password

The MySQL Container will use those variables to automatically create a MySQL Instance for you. You can use $YourUser and $YourPW for the connection in your code.

For PHP and Apache please remove the volume keys and what is below them for now.

Now you can execute:
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName

Interim Result

Finally you have a running stack of Apache, PHP & MySQL. If you visit localhost:8080 you should reach a site telling you: "It works".

Step 4

Whether you use a Docker Swarm installation or not, you are still missing the ability to edit the PHP files and see the results.

For this we have to set the volumes key accordingly.

For PHP we add:
volumes:
– $YourPathToTheSourceCode:/var/www/html/

Please replace $YourPathToTheSourceCode with the path where your source code is located. I typically put the parent folder of my projects here, e.g. /Users/username/projects/php/.

For Apache we start the same way:
volumes:
- $YourPathToTheSourceCode:/var/www/html/
After that we add:
- $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
- $YourPathToTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf

Replace $YourPathToTheVhostsFolder with the folder where you want to put your virtual host definitions, and $YourPathToTheHttpd.confFolder with the path to your httpd.conf (the latter is optional; if you want to use the default config just remove that line).
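
Before we look at the full file, here is a rough sketch of what a vhost definition in $YourPathToTheVhostsFolder could look like. It is modeled on the demo.apache.conf from the repo; the service name php, the FPM port 9000 and the listen port 8080 are assumptions that have to match your stack:

<VirtualHost *:8080>
    ServerName localhost
    DocumentRoot /var/www/html/
    # Hand every .php request over to the PHP-FPM container ("php" service, port 9000)
    ProxyPassMatch ^/(.*\.php(/.*)?)$ fcgi://php:9000/var/www/html/$1
</VirtualHost>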

The resulting docker-compose file should look like this:

version: '3'

services:
  php:
    image: fdsmedia/php:5.6
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
  apache:
    image: fdsmedia/apache:2.4.25
    depends_on:
      - php
      - mysql
    ports:
      - "8080:8080"
    volumes:
      - $YourPathToTheSourceCode:/var/www/html/
      - $YourPathToTheVhostsFolder:/usr/local/apache2/conf/vhosts
      - $YourPathToTheHttpd.confFolder:/usr/local/apache2/conf/httpd.conf
  mysql:
    image: mysql:${MYSQL_VERSION:-latest}
    ports:
      - "3306:3306"
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$YourRootPW
      - MYSQL_DATABASE=$YourDatabaseName
      - MYSQL_USER=$YourUser
      - MYSQL_PASSWORD=$YourPW
volumes:
    data:

Now you can just restart the services for PHP & Apache, go to localhost:8080 and find your project. If you change the URL or port in a virtual host definition, remember to change it here too.
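
Assuming the file names used above, the restart could look like this:

# docker-compose setup: recreate the services with the new volume mappings
docker-compose up -d

# Docker Swarm setup: re-deploying the stack picks up the changed compose file
docker stack deploy --compose-file docker-compose.stack.yml $YourStackName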

Result

You finally have a working Docker-based stack where you can develop your PHP-based applications and maintain your database content (as long as you don't remove the data volume).

If you have a new application it is as easy as creating a new folder under your project folder and adding a vhost configuration, and you are good to go.
If you want to change the PHP version you just need to build another container image with the other version, add an entry to the stack and change the URL in the vhost configuration. That's all. Isn't that amazing?

If you have any questions feel free to post them in the comments, or send me an email.

Yours sincerely,

Frank

P.S.: If you are looking for a hosting solution have a look at Digital Ocean (digitalocean.com*). They let you set up Docker hosting easily & quickly.

* Affiliate Link

Sources
(1) https://www.cloudreach.com/blog/containerize-this-php-apache-mysql-within-docker-containers/
(2) https://github.com/mzazon/php-apache-mysql-containerized
(3) https://dev.to/chiefoleka/how-to-setup-nginx-and-php71-with-fpm-on-mac-os-x-without-crying-4m8 (they do it with nginx)
(4) http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/ (nginx too)
(5) https://getgrav.org/blog/macos-mojave-apache-multiple-php-versions (One of the sources that should work but sadly not for me)
(6) https://www.pascallandau.com/blog/php-php-fpm-and-nginx-on-docker-in-windows-10/#setup-php-fpm (Inspiration for the development setup)
(7) https://easyengine.io/tutorials/php/directly-connect-php-fpm (Debug PHP FPM)

Node.js Tooling I – Processmanager PM2


The purpose of this post is to help you get started with tools for Node.js in general and Loopback specifically, to ease your life as a developer and operator.

Prerequisite

You need a Node.js based API to follow along with this article (PM2 supports other languages too).

PM2 Processmanager

The PM2 process manager is a mighty one with many different options, various integrations and even online monitoring.

In this article I present the basic usage so you can get started fast. In a later article I will help you migrate to Docker and add online monitoring of your processes.

Installation

The Installation is as easy as npm install pm2 -g. It is important that you install it globally.

How to use it

You can manually start an API by issuing pm2 start app.js, but then you have to specify all parameters on every start. That is fine for a quick test, but I recommend writing a small configuration file for it, named process.yml.

Configuration Syntax

PM2 offers multiple syntax variants for the configuration file. Currently they support JavaScript, JSON and YAML. I prefer to create this configuration in YAML, so I will present it in this syntax. For the others please have a look at the Process Configuration File Syntax (1).

process.yml

Most often my configuration looks like this:

apps:
  - script: path_to_startup_script
    name: api_name
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1

 
apps is the root element.

Each script entry refers to an app or API that should be started. Here you define the path to the file that starts your API.

You can either define one configuration file per app/API, or define a whole stack you want to start with multiple entries.

Personally I use one configuration file per api in development and one configuration for a stack in the staging environment before I deploy to docker.

The name entry defines the name you will see in the process manager when you list the running processes.

With exec_mode and the following instances things get interesting. If you define the mode fork you can only start one instance of this app/API. But if you define cluster you are able to scale the app/API with a single command (shown below) and PM2 will load balance it for you!

instances defines how many concurrent instances of this app/API you want to launch at startup. I normally set this to 1 and adjust on the fly according to need.
This way I already get a first idea of whether I need to scale.
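
For example, assuming the app from the process.yml above is named api, scaling on the fly looks like this:

pm2 scale api 4    # run exactly 4 instances of "api"
pm2 scale api +2   # add 2 more instances on top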

With the env you can specify environment variables you want to set.
DEBUG_FD: 1 tells Node.js to change the output stream to process.stdout.
DEBUG_COLORS: 1 will add colors to the PM2 log output. This is handy because you see at first glance whether a log message is an error or not.

These are not all possible attributes for the configuration file. If you want tighter control, have a look at the Configuration File Attributes (2).

After this explanation you will find my configurations for an Express based API and a Loopback based API below.

Express Example

apps:
  - script: bin/www
    name: api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1
      NODE_ENV: staging

Loopback Example

apps:
  - script: server/server.js
    name: loopback_api
    exec_mode: cluster
    instances: 1
    env:
      DEBUG_FD: 1
      DEBUG_COLORS: 1

Commands

After defining the configuration we need to interact with the process manager to start and stop our APIs and check the log output.

Listing currently running Processes

To view all currently running Processes issue pm2 list at a console.

Start an API with a process.x file

Starting your API with a configuration file is as easy as the command pm2 start process.x (where x is config.js, json or yml).
After this command PM2 starts your API with the specified configuration and outputs its list of currently running processes.

Stopping an API

You can stop your API with pm2 stop process.x.
It is important to know that PM2 just stops your API then, but won't remove it from the prepared-to-run process list. If you want to remove it cleanly and make sure you have a clean slate on the next start, you have to destroy it.

Destroying a prepared-to-run API entry

To remove an API entry from the prepared-to-run list, issue pm2 delete process.x.

Check the Logs

To check all logs without filtering to one API you issue pm2 logs.

If you want to filter the logs to one specific API, add its name to the command like this: pm2 logs name_of_your_api.
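
Putting it all together, a typical session with the process.yml from above could look like this (assuming the app is named api):

pm2 start process.yml     # start the API(s) defined in the config
pm2 list                  # check that they are online
pm2 logs api              # follow the logs of the app named "api"
pm2 stop process.yml      # stop them again
pm2 delete process.yml    # remove them from the process list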

If you have any questions post them in the comments, or feel free to send me an email.

Yours sincerely,

Frank

Sources:
(1) http://pm2.keymetrics.io/docs/usage/application-declaration/
(2) http://pm2.keymetrics.io/docs/usage/application-declaration/#attributes-available

MLAND Series – Tips III – Loopback RabbitMQ Usage


The purpose of this post is to help you get up and running with RabbitMQ integrated into your Loopback API.

Prerequisite

You have a working Loopback API Project.

RabbitMQ

Install the Loopback component

Inside your Loopback project folder run:

npm install loopback-component-mq --save

This will install the component and all its dependencies, as you are used to.

Configure the RabbitMQ component

Register it

Loopback loads mixins by default only from known folders. The mixins of external components live in different locations and need to be registered.
You do so by adding "../node_modules/loopback-component-mq/lib/mixins" to the mixins array in model-config.json.
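
In a freshly generated LoopBack 3 project the relevant part of server/model-config.json would then look roughly like this (the first four entries are the generator's defaults):

"_meta": {
  "mixins": [
    "loopback/common/mixins",
    "loopback/server/mixins",
    "../common/mixins",
    "./mixins",
    "../node_modules/loopback-component-mq/lib/mixins"
  ]
}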

Component configuration

Loopback checks if a configuration is available for a component inside component-config.json.
Therefore add the following template to component-config.json:

{
"loopback-component-mq": {
    "path": "loopback-component-mq",
    "options": {
      "restPort": 15672,
      "acls": [
        {
          "accessType": "*",
          "principalType": "ROLE",
          "principalId": "$unauthenticated",
          "permission": "DENY"
        }
      ]
    },
    "topology": {
      "connection": {
        "uri": "amqp://$username:$password@$host:$port/$vhost", (1)
        "timeout": 30000
      },
      "exchanges": [
        {
          "name": "my_first_exhange",
          "type": "topic",
          "persistent": true
        }
      ],
      "queues": [
        {
          "name": "my_first_queue",
          "subscribe": true,
          "limit": 1
        },
        {
          "name": "my_second_queue",
          "limit": 1
        }
      ],
      "bindings": [
        {
          "exchange": "my_first_exchange",
          "target": "my_first_queue",
          "keys": [
            "my_first_queue"
          ]
        },
        {
          "exchange": "my_first_exchange",
          "target": "my_second_queue",
          "keys": [
            "my_second_queue"
          ]
        }
      ],
      "logging": {
        "adapters": {
          "stdOut": {
            "level": 5,
            "bailIfDebug": true
          }
        }
      }
    }
  }
}

Explanation:
The path entry reflects the component name.
You can configure where the management interface is located (restPort) and the acls you like via the options entry.

In topology we define the exchanges and queues we want to use inside this API.

  1. Inside connection you define with the uri where your RabbitMQ service is located, and with the timeout how long the system should wait before a connect call fails.

    I strongly suggest defining the timeout, especially if you connect over the internet; otherwise you may instantly receive a connection error.

  2. In exchanges you define the exchanges you want to use, with their name, their type and whether you want them to persist after the API disconnects (key persistent). To learn more about
    the types have a look at (2).
  3. queues contains all queues you want to use, with the name, whether you want to subscribe and whether you want to limit the number of concurrently processed messages.
    If you subscribe to a queue you will retrieve all messages inside it, so make sure you handle them all. Otherwise you will wonder why your number of messages grows and the API outputs errors. Use this if you implement a consumer for all messages you send to this queue (I show you later how it's done).

    I strongly advise you to use limits, otherwise the API might just stop working when too many messages fight for resources. Additionally this impacts the performance of your API.

  4. bindings model the connection between exchanges and queues while defining which routing key (key keys) you use.
    exchange is self-explanatory, target is the queue you want to connect to. You can add keywords or themes in the keys array; I just put in the queue name I'd like to use.
  5. You can configure the logging you want inside logging. I added the configuration I use myself for debugging.

Mixin configuration

The RabbitMQ component uses a mixin. This way you can configure the consumers and producers per model.

Add a structure similar to this to your model.json file:

    "mixins": {
        "MessageQueue": {
          "consumers": {
            "consumerMessage": {
              "queue": "my_first_queue",
              "type": "$company.$type.$subtype"
            }
          },
          "producers": {
            "producerGreet": {
              "exchange": "my_first_exchange",
              "options": {
                "routingKey": "my_first_queue",
                "type": "$company.$type.$subtype"
                "contentType": "application/json"
              }
            },
          }
        }
      },
    

If you already have a mixins key in your model.json just add the inner structure, beginning at "MessageQueue".

Inside "MessageQueue" you can define a consumers and a producers object, in which you define the consumers and producers respectively.

A consumer has a name (in the example consumerMessage) and needs to know from which queue it should get its messages (key queue) and which message type it is responsible for (key type). If only one message type occurs in the queue you need only one consumer, otherwise you need more. The name of the consumer (consumerMessage in the example) is the name of the method you have to implement for this model. I come to this in a bit.

A producer has a name too, is connected to an exchange (key exchange) and has some options. Here the keys from the component-config.json come into play, where I said I use the name of the queue I want to target: they need to match the routingKey. At last we set the contentType, which for me is normally JSON.
You don't need to implement a producer, the component does that for you. In a few moments I will show you how to call it.

Usage

Consumer

As mentioned, you need to implement a consumer yourself. The syntax is (ES6 syntax):

$model.consumerName = (payload) => {
  // If your message comes from a source other than a loopback-component-mq based API:
  const { message } = JSON.parse(payload)

  // Otherwise you can simplify it to:
  // const { message } = payload

  // Do something
  // ...

  if (error) {
    // Depending on your architecture you might want to reject the message if an error occurs.
    // This will not acknowledge the message and it will be re-delivered to you,
    // so you can use this if you have a temporary problem but the message is important.
    return Promise.reject(error)
  } else {
    // If everything is alright, acknowledge this message
    return Promise.resolve()
  }
}
    

You tell the queue that you handled a message by returning Promise.resolve(). If you want the message to be re-delivered you return Promise.reject().

Producer

You can use a producer anywhere inside the scope of your model like this:

$model.greet = (name) => {
  return $model.producerGreet({ greeting: 'Hi', name: name })
}

That's it already. The producer sends this message with a JSON payload to the defined exchange with the defined routingKey.
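
Calling it is then just a normal method call, e.g. from a remote method or a boot script (Task is a hypothetical model name here):

Task.greet('Alice')
  .then(() => console.log('greeting published'))
  .catch((err) => console.error('could not publish message', err))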

Conclusion / Lessons learned

That's it for today. In this tip you learned:

• How to install the Loopback RabbitMQ component
• How to register the Loopback RabbitMQ component
• How to configure the Loopback RabbitMQ component
• How to configure consumers and producers for the Loopback RabbitMQ component
• How to implement a consumer
• How to use a producer

If you have any questions post them in the comments, or feel free to send me an email.

Yours sincerely,

Frank

Sources
(1) https://www.rabbitmq.com/uri-spec.html
(2) https://www.rabbitmq.com/getstarted.html

MLAND Series – Part I Simple Todo List App Api



The purpose of this post is to familiarize you with the Loopback framework and Angular 4. You will create a very basic todo list app with a backend API and a frontend UI.

Familiarizing with Loopback

Install Loopback

I expect you have already installed StrongLoop Loopback on your development system. Loopback offers installation instructions and they worked perfectly for me. If you have problems there, contact me and I will try my best to help you.

 

Create Loopback project

First you need to create a Loopback project in some folder. I call it todolistapp, not very creative I know.

After you installed Loopback on your system, open a console, change into the directory you want, type lb and hit enter. Now you should see this:

? What's the name of your application? (todolistapp)

 

The Loopback generator asks you what you want to call your project. It defaults to the name of the folder you are in. Choose a name and hit enter.

 ? Which version of LoopBack would you like to use? (Use arrow keys)
  2.x (long term support)
❯ 3.x (current)

 

Now it asks you which Loopback version you want to use for this project. If you don't have a good reason to use the LTS version 2.x, choose the current version 3.x and hit enter.

? What kind of application do you have in mind? (Use arrow keys)

❯ api-server (A LoopBack API server with local User auth)
empty-server (An empty LoopBack API, without any configured models or datasources)
hello-world (A project containing a controller, including a single vanilla Message and a single remote method)
notes (A project containing a basic working example, including a memory database)

 

Finally it asks what type of application should be generated for you. Here you get some magic: choose api-server and Loopback will configure local user authentication for you.

Depending on your internet connection you can get a coffee now; Loopback downloads the necessary packages and configures the project according to our selections.

Generating .yo-rc.json


I'm all done. Running npm install for you to install the required dependencies. If this fails, try running the command yourself.


   create .editorconfig
   create .eslintignore
   create .eslintrc
   create server/boot/root.js
   create server/middleware.development.json
   create server/middleware.json
   create server/server.js
   create README.md
   create server/boot/authentication.js
   create .gitignore
   create client/README.md

 

 

Next steps:

  Create a model in your app
    $ lb model

  Run the app
    $ node .

The API Connect team at IBM happily continues to develop,
support and maintain LoopBack, which is at the core of
API Connect. When your APIs need robust management and
security options, please check out http://ibm.biz/tryAPIC

 

Type node . into the console and the API starts.

 

Experiment with the API Explorer

Fire up your browser and open  http://localhost:3000/explorer. You should see this:

Did you notice the User Entry? Click on it and you see this:

 

As I mentioned earlier Loopback supports you with automatic generation of common models if you choose the api-server in the creation dialogue. You can see the CRUD Routes that you can already use, just because you made the right decision.

 

Add a model to the API

Let us create the task model now. Open the console again and cancel the running server with Ctrl+C. Type in lb model and hit enter.

You should see:

? Enter the model name:

 

Enter the name Task for the model and hit enter.

? Select the datasource to attach task to: (Use arrow keys)

❯ db (memory)
(no datasource)

 

Choose db (memory) as the datasource. This datasource was automatically added by Loopback and keeps all data in the computer's main memory. This means the data vanishes when the server stops.

 

? Select model's base class (Use arrow keys)

Model
❯ PersistedModel
ACL
AccessToken
Application
Change
Checkpoint

 

Choose PersistedModel. This is a base model from the Loopback framework and, as the name indicates, it allows persisting the model's data.

Next Loopback wants to know if this is a model you want to expose, so it can be accessed e.g. from our frontend. Hit enter to accept the default Yes.

? Expose task via the REST API? (Y/n)

 

 

The model gets exposed in the plural form; Loopback adds an s to the model name by default. Normally you can accept the default here by hitting enter.

 

? Custom plural form (used to build REST URL):

 

You can save the model inside the server folder of Loopback or above it. Choose the default common.

 

 

? Common model or server only? (Use arrow keys)

❯ common
server

 

 

Now we add properties to the Loopback model. For the first property choose the name id, type string, required yes. When Loopback asks for a default value just hit enter.

Enter an empty property name when done.

? Property name: id
invoke loopback:property
? Property type: string
? Required? Yes
? Default value leave blank for none :

 

We repeat this for the property description. Choose type string, required yes, hit enter on default value.

Let's add another task property.
Enter an empty property name when done.
? Property name: description
invoke loopback:property
? Property type: string
? Required? Yes
? Default value leave blank for none:

 

At last we need a property to mark a task done. Choose type boolean, required yes and default value false.

? Property name: done
invoke loopback:property
? Property type: boolean
? Required? Yes
? Default value leave blank for none:

At the new prompt for a property name just hit enter, and we are finished creating the Task model.

 

That's it. You have the API for a very basic todo list app. We make only two small changes so that Loopback generates the id for us. Open api/common/models/task.json; it should look like this:

 

 

{
  "name": "Task",
  "base": "PersistedModel",
  "idInjection": true,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "id": {
      "type": "string",
      "required": true
    },
    "description": {
      "type": "string",
      "required": true
    },
    "done": {
      "type": "boolean",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}

 

Change the value "idInjection": true to "idInjection": false

and add the following to the id property:

"id": true,
"defaultFn": "uuidv4"

"id": true marks the property as the primary key, and the defaultFn "uuidv4" tells Loopback to generate a UUID for it automatically.

 

task.json should now look like this:

{
  "name": "Task",
  "base": "PersistedModel",
  "idInjection": false,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "id": {
      "type": "string",
      "id": true,
      "required": true,
      "defaultFn": "uuidv4"
    },
    "description": {
      "type": "string",
      "required": true
    },
    "done": {
      "type": "boolean",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": {}
}

 

Next we will have a look at the new model and its routes.

 

Experiment with the API Explorer II

Start the API again like before with node . and open http://localhost:3000/explorer. Admire your newly created data model and the default CRUD routes Loopback gave you:
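
You can also exercise the generated CRUD routes from the command line. With LoopBack 3's default REST root /api and the plural form Tasks, creating and listing tasks could look like this:

# create a task (the id is generated for us via uuidv4)
curl -X POST http://localhost:3000/api/Tasks \
  -H "Content-Type: application/json" \
  -d '{"description":"Write blog post","done":false}'

# list all tasks
curl http://localhost:3000/api/Tasks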

 

Getting familiar with Loopback files

You will get familiar with more and more Loopback files over the course of this series. I will show you the new areas relevant to the content of each part.

 

/common/models/*.json

These are the definition files of your custom models. These models are shared with a client in /client.

/common/models/*.js

These are the files for the custom methods (Loopback calls them remote methods) of your models.

/server/models/*.json, /server/models/*.js

These are the same types of files as in /common. The difference is that these models are not (!) shared with /client. You can use this to separate the models of the client and the server.

Usually I put my models into /common.

Conclusion / Lessons learned

In this part of the series you learned:

  • How to create a Loopback Project
  • How to use the API Explorer
  • How to Add a model to the Project
  • Where are model definition files placed

Yours sincerely,

Frank

MLAND Series – Tips II – Loopback AthenaPDF


Purpose

When developing an infrastructure for a client project I faced the situation that I needed to support 50 concurrent requests to generate PDFs. The purpose of this post is to show you how I did this using AthenaPDF and RabbitMQ.

Setup

The setup used for this tip is a machine on Digital Ocean*, created and managed through Docker Cloud, with a Loopback API container, an AthenaPDF container and RabbitMQ hosted on CloudAMQP.

 

Mixin Definition

You can use a mixin definition similar to this:

"mixins": {
    "MessageQueue": {
      "producers": {
        "producerSendGeneratePDF": {
          "exchange": "generate.pdf",
          "options": {
            "routingKey": "pdf",
            "type": "company.project.messages.generatepdf",
            "contentType": "application/json"
          }
        },
        "producerPdfGenerated": {
          "exchange": "generate.pdf",
          "options": {
            "routingKey": "pdf",
            "type": "company.project.messages.pdfgenerated",
            "contentType": "application/json"
          }
        }
      },
      "consumers": {
        "consumerGeneratePDF": {
          "queue": "client.generate.pdf",
            "type": "company.project.messages.generatepdf",
        },
        "consumerPDFGenerated": {
          "queue": "client.generate.pdf",
            "type": "company.project.messages.pdfgenerated",
        }
      }
    }
  }

 

With this configuration you can execute  Model.producerSendGeneratePDF({data}) and send a message of the following type:

"company.project.messages.generatepdf"

 

The message will trigger a call to Model.consumerGeneratePDF, where you can prepare the HTML document you want converted and call the AthenaPDF container to create it. After the work is done in Model.consumerGeneratePDF you can call producerPdfGenerated({data}) to start the next step of your workflow.

If you go with my config you also implement Model.consumerPDFGenerated. There you can execute whatever comes next: send an e-mail to your customer, upload to Amazon S3, or whatever your next step is.
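
To make this concrete, here is a rough sketch of what Model.consumerGeneratePDF could look like. Everything specific in it - the athenapdf hostname, the /convert endpoint with its auth key, the documentUrl message field and the request library - is an assumption about your setup, not something prescribed by this stack:

const request = require('request') // any HTTP client works

Model.consumerGeneratePDF = (payload) => {
  const { documentUrl } = payload // hypothetical message field
  return new Promise((resolve, reject) => {
    request.get({
      url: 'http://athenapdf:8080/convert',              // assumed service name & port
      qs: { auth: 'arachnys-weaver', url: documentUrl }, // assumed auth key
      encoding: null,                                    // we want the raw PDF bytes
    }, (err, res, body) => {
      if (err || res.statusCode !== 200) {
        // rejecting leaves the message unacknowledged, so it gets redelivered
        return reject(err || new Error('PDF generation failed'))
      }
      // hand the PDF to the next step of the workflow
      Model.producerPdfGenerated({ pdf: body.toString('base64') })
        .then(resolve)
        .catch(reject)
    })
  })
}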

 

How to limit the AthenaPDF Container Resource consumption

To limit the memory usage of the AthenaPDF container, add this to the Docker Stackfile definition of the container: mem_limit: Xm.

X is the number of MB you allow the container to use. 256 MB is a good starting point for your measurements.
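
In the Stackfile this could look like the following (the image name is an assumption; use whatever AthenaPDF image you actually deploy):

athenapdf:
  image: arachnysdocker/athenapdf-service
  mem_limit: 256m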

How to fix Xlib: extension "RANDR" missing on display ":99"

If you get this error it means the shared memory area /dev/shm isn't shared between the host and the container. To fix this add these two lines to the PDF container definition of the Stackfile:

 

volumes:
  - '/dev/shm:/dev/shm'

 

How to not overload the AthenaPDF Container

Especially when you limit the amount of memory the PDF container can use, you want to limit the number of concurrent PDFs being generated.

Referring to the RabbitMQ tip, you can limit the number of concurrent tasks that the API endpoint processes at any given moment.

Check the queues section in the loopback-component-mq definition of server/component-config.json:

{  
   "loopback-component-mq":{  
      "path":"loopback-component-mq",
      "options":{  
         "restPort":15672,
         "acls":[  
            {  
               "accessType":"",
               "principalType":"ROLE",
               "principalId":"$unauthenticated",
               "permission":"DENY"
            }
         ]
      }
   },
   "topology":{  
      "connection":{  
         "uri":"amqp://user:password@host/vhost",
         "timeout": timeoutinmilliseconds
      },
      "exchanges":[  
         {  
            "name":"exchangename",
            "type":"exchangetype",
            "persistent":persistent
         }
      ],
      "queues":[  
         {  
            "name":"queuename",
            "subscribe":true,
            "limit":concurrentnonacknoledgedtasknumber
         }
      ],
      "bindings":[  
         {  
            "exchange":"exchangename",
            "target":"queuename",
            "keys":[  

            ]
         }
      ],
      "logging":{  
         "adapters":{  
            "stdOut":{  
               "level":5,
               "bailIfDebug":true
            }
         }
      }
   }
}

Experiment with the limit in the queues definition and see how many concurrent tasks your infrastructure can handle with your configuration.

 

Scaling

Vertically

The great point about this way of setting up your infrastructure is that you can easily scale vertically: just give the PDF container more memory, get a bigger machine and increase the limit in the queue definition.

Horizontally

If you want to scale horizontally you need to add a haproxy container and change your configuration a bit. I will show you how to do it in the next tip.

 

Sources

AthenaPDF

Digital Ocean* (Affiliate Link)

Docker Cloud

RabbitMQ

CloudAMQP

MLAND Series – Tips I – Loopback RabbitMQ Mistakes


Purpose

The purpose of this post is to highlight the most common mistakes I experienced with Loopback and RabbitMQ myself.

Mistakes I made and you might have made too

  1. Receiving a [rabbit.unhandled] Message
  2. Messages keep getting redelivered
  3. Getting undefined from payload

Fix Receiving [rabbit.unhandled]

This error can result from 4 different mistakes:

  • Forgotten to load mixin in model-config.json
  • Forgotten to configure component-config.json
  • Messed up the Entry in model.json
  • Connecting to Queue while not handling all Message Types

Fix Forgotten to load mixin in model-config.json

Check below how the entry in model-config.json should look.

Fix Forgotten to configure component-config.json

Check below how the entry in component-config.json should look.

Fix Messed up the Entry in model.json

Check below how the entry in model.json should look.

Connecting to Queue while not handling all Message Types

If you get [rabbit.unhandled] Message of x on queue y, connection 'default' was not processed by any registered handlers and the messages only get re-queued and redelivered, you have probably connected an API to a queue where you receive messages you don't handle.

loopback-component-mq picks up the next message available on the queues it is connected to, even if you haven't implemented handling for these messages.

So make sure you connect each API only to queues whose message types it handles completely. Otherwise one API will eventually block the queue because it gets messages it can't process.

Fix Message keep getting redelivered

Check the function you implemented and make sure you return a Promise.resolve() if your function succeeded and a Promise.reject() if it failed.

Fix Getting undefined from payload

If the origin of your message is not Loopback, you might need to use JSON.parse to retrieve the data from the message. Check below how the code should look.

Files to Look at

  1. server/model-config.json
  2. server/component-config.json
  3. common/models/model.json / server/models/model.json
  4. common/models/model.js / server/models/model.js

server/model-config.json

This file holds the information about which mixins to load for this project.

This happens inside the "mixins": [] block.

Make sure you load the loopback-component-mq with an entry like this:

"../node_modules/loopback-component-mq/lib/mixins"

server/component-config.json

This file holds the general configuration for components that need it.

Make sure you configure loopback-component-mq with an entry like this:

{  
   "loopback-component-mq":{  
      "path":"loopback-component-mq",
      "options":{  
         "restPort":15672,
         "acls":[  
            {  
               "accessType":"",
               "principalType":"ROLE",
               "principalId":"$unauthenticated",
               "permission":"DENY"
            }
         ]
      }
   },
   "topology":{  
      "connection":{  
         "uri":"amqp://user:password@host/vhost",
         "timeout": timeoutinmilliseconds
      },
      "exchanges":[  
         {  
            "name":"exchangename",
            "type":"exchangetype",
            "persistent":persistent
         }
      ],
      "queues":[  
         {  
            "name":"queuename",
            "subscribe":true,
            "limit":concurrentnonacknoledgedtasknumber
         }
      ],
      "bindings":[  
         {  
            "exchange":"exchangename",
            "target":"queuename",
            "keys":[  

            ]
         }
      ],
      "logging":{  
         "adapters":{  
            "stdOut":{  
               "level":5,
               "bailIfDebug":true
            }
         }
      }
   }
}

 

 

common/models/model.json / server/models/model.json

In the model.json you configure the consumers and producers that this model implements, together with which queue they operate on and which type the message has.

Make sure you configure loopback-component-mq with an entry like this:

{  
   "mixins":{  
      "MessageQueue":{  
         "consumers":{  
            "nameof_function_to_imlement":{  
               "queue":"queue_name_from_component_config",
               "type":"name_of_message_type"
            }
         },
         "producers":{  
            "name_of_function_to_imlement":{  
               "exchange":"exchange_name_from_component_config",
               "options":{  
                  "routingKey":"routing_key_name",
                  "type":"name_of_message_type",
                  "contentType":"content_type_you_wish"
               }
            }
         }
      }
   }
}

 

 

common/models/model.js / server/models/model.js

In the model.js you implement the consumer and producer functions you declared in the model.json file.

Make sure you implement it with an entry like this:

Model.name_of_function_to_implement = (payload) => {
  // If the message was sent from Loopback:
  const { variable_name } = payload

  // If it was sent from another source, like Python:
  // const { variable_name } = JSON.parse(payload)
}

If your queue needs acknowledgements make sure to return a Promise.resolve().

If something went wrong return a Promise.reject(). RabbitMQ will then redeliver the message.