How to create a Node.js app with Docker, Part 5: Services

In the previous posts, we learned how to manage every aspect of Node.js application development using Docker. But we also often need external services, such as a database, to implement an app's features. Docker provides a good tool to manage and scale these services: Docker Compose.

In this post we'll see how to add a service to a containerized application and, as an example, we'll implement a visitor counter by using Redis.

Prerequisites

  • The Dockerfile, index.js and package.json files from the previous posts.

Docker Compose

Docker Compose is a tool to deploy and orchestrate services, each running in a container or, if we use Docker's swarm mode, in more than one. Among its features we find the ability to create several instances of a service, allocate the right amount of resources to each of them, plan a recovery strategy, map ports, mount volumes, configure a load balancer for web applications, and so on.

Just as Docker manages the configuration of a container through a Dockerfile, Docker Compose manages the configuration and deployment of services through a YAML file called docker-compose.yml. Let's create this file in our working directory, next to the Dockerfile, and paste in the following configuration

version: '3.4'
services:
  app:
    environment:
      PORT: '3000'
    build:
      context: ./
      target: development
    entrypoint: bash -c "npm install && npm start"
    ports:
      - '4000:3000'
    volumes:
      - './:/app'
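
Before examining each field, we can check that the file is well-formed: the docker-compose config command parses docker-compose.yml and prints the resolved configuration, reporting any syntax error.

$ docker-compose config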

Let's examine each field:

  • version: it defines the version of the Compose file format to use.

  • services: it contains the list of services to deploy. Service names are arbitrary; for our application we have chosen app.

  • environment: here we can define the environment variables for each service.

  • build: it manages the settings for building the image for the service's container. It has the same function as the docker build command.

    • context: it defines the path to the folder containing the Dockerfile used to build the image.

    • target (optional, available since version 3.4 of the Compose file format): if our Dockerfile contains a multi-stage build, here we can choose which stage to build the image from. The development stage we've chosen is the one created in the second post of this series.

  • entrypoint (optional): it defines the command with which the container is started. Since the dependencies are installed in the same folder where a volume will be mounted, we have to install them after the volume has been mounted, and we can do so through the entrypoint (we covered this topic in depth in the second post of this series). Note: if you installed Docker on Windows, you may need to replace the npm install command with yarn install --no-bin-links because of the different handling of symlinks between the host (Windows) and the container (Linux).

  • ports (optional): it defines the ports to map. On the left is the host's port, on the right the container's.

  • volumes (optional): it defines the volumes to mount in the container. On the left is the path on the host, on the right the path in the container.

As you can see, the entrypoint, ports and volumes fields are equivalent to instructions in the Dockerfile or options of the docker run command. In fact, like the options of docker run, these and other fields configured in docker-compose.yml can override the corresponding instructions in the Dockerfile or add new ones.
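
For example, leaving the entrypoint aside, the configuration above corresponds roughly to the following docker build and docker run commands (the node-app tag is just an illustrative name, not something defined in the previous posts):

$ docker build --target development -t node-app .
$ docker run -d -e PORT=3000 -p 4000:3000 -v "$PWD":/app node-app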

Let's start our application with the following command (the -d option starts the containers in the background)

$ docker-compose up -d

As always, you can see your application in the browser at localhost:4000 or, if you installed Docker through Docker Toolbox, at <DOCKER MACHINE IP>:4000.
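
If you don't remember your Docker machine's IP address, you can print it with

$ docker-machine ip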

Even for a single service, Docker Compose is very useful because it lets us define all the deployment settings in a configuration file instead of passing CLI options as with the docker run command. Another nice thing is that we can stop and remove all the started containers with a single command, instead of running docker stop <CONTAINER ID> and docker rm <CONTAINER ID> for each of them

$ docker-compose down
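
A few other Compose commands come in handy while working with services: for example, we can list the running containers, follow the logs of a service, or even start more than one instance of it (for the latter, the fixed host port 4000 would first have to be removed from the configuration, otherwise only one instance can bind it):

$ docker-compose ps
$ docker-compose logs -f app
$ docker-compose up -d --scale app=3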

Adding Redis

To implement our visitor counter we have to add a database to our application and install a client for it. Let's edit the docker-compose.yml file and add Redis as a service

version: '3.4'
services:
  app:
    environment:
      PORT: '3000'
      REDIS_HOST: redis
      REDIS_PORT: '6379'
    build:
      context: ./
      target: development
    # entrypoint: bash -c "npm install && npm start"
    ports:
      - '4000:3000'
    volumes:
      - './:/app'
    stdin_open: true
    tty: true
    networks:
      webnet:
  redis:
    image: 'redis:alpine'
    networks:
      webnet:
        aliases:
          - redis
networks:
  webnet:

These are the changes we made:

  • With the networks fields we defined the networks that the services can use to communicate, in our case webnet. To allow a service to communicate on a network, we have to declare that network in its configuration.

  • We added a service called redis.

    • With the image field we specified the image from which the service's container will be created.

    • We allowed the service to communicate on the webnet network and assigned the redis alias to it. In this way other services on the network can use this alias to connect to Redis.

  • We added the app service to the webnet network and defined the information needed to connect to Redis as environment variables. We temporarily disabled the entrypoint and enabled the stdin_open and tty options because we'll need them to log into the service's container from the command line and install the Node.js client; once that's done, you can restore the entrypoint command and remove the two added fields.

To update the configuration of our services, let's run the docker-compose up -d command again. If some services are already running, you don't have to stop them before launching the command, because Docker Compose will update their configuration anyway.
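
To check that both services are up and that the redis alias is reachable from the app service, we can use docker-compose ps and, for example, resolve the redis hostname with Node's dns module from inside the app container:

$ docker-compose ps
$ docker-compose exec app node -e "require('dns').lookup('redis', (err, address) => console.log(err || address))"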

Now let's log into the app service's container

$ docker-compose exec app bash

and install the Redis client with one of the following two commands (use the second one if you are using Docker on Windows)

root@34975c58a577:/app# npm install ioredis
root@34975c58a577:/app# yarn add ioredis --no-bin-links
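
Once the installation has finished, you can leave the container's shell and get back to the host with

root@34975c58a577:/app# exit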

At this point let's implement the visitor counter. Edit the index.js file as follows

var express = require('express')
var redis = require('ioredis')
var app = express()
var { PORT, REDIS_HOST, REDIS_PORT } = process.env

var redisClient = new redis(REDIS_PORT, REDIS_HOST)

app.get('/', async function (req, res) {
  // Read the current count, increment it and store it back in Redis
  var count = parseInt(await redisClient.get('visits')) || 0
  var updatedCount = count + 1
  await redisClient.set('visits', updatedCount)
  res.send(`Hello World! This page was visited ${updatedCount} times.`)
})

if (process.env.NODE_ENV === 'test') {
  module.exports = app
} else {
  app.listen(PORT)
}

and restart the application (remember to restore the entrypoint command before doing so)

$ docker-compose down && docker-compose up -d

Now if you visit your application in the browser, you will notice that every time you refresh the page the visitor counter is updated correctly.
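
You can also inspect the stored value directly with redis-cli inside the redis service's container:

$ docker-compose exec redis redis-cli get visits

As a side note, the read-increment-write sequence in index.js is not atomic: two simultaneous requests could read the same value and lose an increment. For a counter like this, Redis offers the atomic INCR command, which ioredis exposes as well, so the handler could also be written as in the following sketch (same redisClient as above):

app.get('/', async function (req, res) {
  // INCR creates the key if it doesn't exist and returns the updated value atomically
  var updatedCount = await redisClient.incr('visits')
  res.send(`Hello World! This page was visited ${updatedCount} times.`)
})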

Conclusions

Congratulations, now you know how to deploy from 1 to 1000 containers at a time!

As you have seen, Docker Compose is a very handy and powerful tool for deploying dockerized applications, whether they are composed of one service or several. With only a couple of commands you can start and stop several services at the same time, without dealing every time with things like building images, mounting volumes, mapping ports and so on. Furthermore, many of its commands are similar to Docker's (see docker help and docker-compose help) but designed to manage two or more containers, so once you have learned Docker you will quickly master Docker Compose as well.