How to create a Node.js application with Docker, Part 1: Deployment

In this article we'll see how to use Docker to run our Node.js app without long, error-prone setups.

How many times have you set up your PC for the development of a new application that required not only installing a programming language and its dependencies, but also configuring one or more services, such as a database or a debugger?

Working on an application that runs in very different environments can be the source of unpleasant experiences, ranging from a difficult and sometimes impossible setup (imagine having two or more projects on the same PC that use different versions of the same dependencies) to bugs that cannot be easily reproduced on other machines.

Docker is a tool created to deal with these and other problems. The idea behind it is simple: create a portable, self-contained environment with only the dependencies needed to run our application.

In this series of articles we'll see, step by step, how to use Docker's features to manage the whole life cycle of a Node.js web application: development, testing, debugging, and deployment alongside other services. We'll use Express as the framework to build the app, but the procedures described here apply to any application of this kind.

In this first article we'll cover how to create an image for executing the app.

Requirements

Before proceeding, make sure Docker is installed correctly on your machine. You can check from the terminal:

$ docker --version
Docker version 18.06.1-ce, build e69vc7a

The Dockerfile

Let's create a new folder for the project and, inside it, a file called Dockerfile:

$ mkdir nodejs-app
$ cd nodejs-app/
$ touch Dockerfile

Docker uses the instructions in this file to create an image. Think of an image as a complete, portable OS containing only the files and programs needed to run our application, usable on any machine on which Docker is installed.

Paste this in the newly created Dockerfile:

# Use the official Node.js image as parent image
FROM node:10.15-slim

# Set the working directory. If it doesn't exist, it'll be created
WORKDIR /app

# Define the env variable `PORT`
ENV PORT 3000

# Expose the port 3000
EXPOSE ${PORT}

# Copy the file `package.json` from current folder
# inside our image in the folder `/app`
COPY ./package.json /app/package.json

# Install the dependencies
RUN npm install

# Copy all files from current folder
# inside our image in the folder `/app`
COPY . /app

# Start the app
ENTRYPOINT ["npm", "start"]

Let's examine each of these instructions.

The base image

Consider the first instruction:

# Use the official Node.js image as parent image
FROM node:10.15-slim

An image is typically built on top of another one, adding files and programs as needed. With the FROM node:10.15-slim instruction we select a base image that already has Node.js and npm installed.

I suggest choosing a base image that already includes the dependencies you need, in order to keep the Dockerfile concise: fewer instructions mean easier maintenance and less time spent, and also let you take full advantage of Docker's caching system (which we'll see in a moment).

An image is identified by a tag, typically composed of a name (e.g. node), a version number (e.g. 10.15), and a variant (e.g. -slim). Refer to the documentation of the registry you pull the image from to understand which version best suits your use case. When the tag is omitted, the latest tag is implied, which points to the most recent version of the image.

When Docker starts an image, it creates a container: an instance of the image that runs in a context isolated from the host machine.

Set the working directory

WORKDIR is used to set the default directory in which subsequent instructions are executed. We can change it multiple times in the same Dockerfile.

# Set the working directory. If it does not exist, it will be created
WORKDIR /app

Expose a port

A Docker container is an environment isolated from our machine, so if we want to reach our web application from outside, we have to expose a port:

# Expose the port 3000
EXPOSE 3000

We can also define an environment variable, to easily reuse this value in our app:

# Define the env variable `PORT`
ENV PORT 3000

# Expose the port 3000
EXPOSE ${PORT}
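On the application side, the PORT variable defined here can be read at runtime via process.env. A minimal sketch (the helper name resolvePort is ours, not part of any API):

```javascript
// Hypothetical helper: resolve the listening port, preferring the
// PORT environment variable and falling back to 3000
function resolvePort(env) {
  const parsed = parseInt(env.PORT, 10);
  return Number.isInteger(parsed) ? parsed : 3000;
}

console.log(resolvePort(process.env));      // 3000 unless PORT is set
console.log(resolvePort({ PORT: '8080' })); // → 8080
```

Centralizing the fallback this way keeps the Dockerfile and the app code in agreement about the default port.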

Installing the dependencies

Now that we have an environment with all the needed software, let's take care of the app's dependencies. The folder created at the beginning will contain not only the Docker-related files but also the application code we'll work on.

In the same folder let's create a package.json file like the following:

{
  "name": "nodejs-app",
  "dependencies": {
    "express": "^4.17.0"
  },
  "scripts": {
    "start": "node index.js"
  }
}

We'll include this file in our image and install the dependencies using the COPY and RUN instructions. The former copies a file or directory from our local folder into the Docker image, while the latter allows us to execute shell commands inside the image.

# Copy the file `package.json` from current folder
# inside our image in the folder `/app`
COPY ./package.json /app/package.json

# Install the dependencies
RUN npm install
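A side note, assuming your project also commits a package-lock.json: since npm 5.7 you can use npm ci, which installs the exact locked versions and fails fast if the lockfile and package.json disagree. A possible variant of the two instructions above:

```dockerfile
# Copy the lockfile too, then install exact locked versions
COPY ./package.json ./package-lock.json /app/
RUN npm ci
```

This makes builds more reproducible, at the cost of requiring the lockfile to be present and up to date.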

Docker's caching system

In some tutorials you'll find the following COPY instruction:

COPY . /app

RUN npm install

It copies the whole content of the current directory into the image at /app and then installs the dependencies. What does that entail?

When Docker executes an instruction from the Dockerfile (FROM, COPY, RUN, etc.), it creates a new layer. At the end, all layers are merged to create the final image. Before creating a new layer, Docker checks whether it already has a copy of that layer in the cache and, if so, reuses it instead of creating a new one. Conversely, when it encounters an instruction whose layer isn't cached, it executes it and creates new layers for it and for all subsequent instructions.

For every instruction, Docker uses a different strategy to check the cache.

Consider the COPY instruction. Before creating a new layer, Docker compares, via checksum, the files to be copied with those copied in a previously cached layer for the same instruction. If it finds a match with a cached layer, it reuses that layer instead of creating a new one.

The COPY instruction seen just before

COPY . /app

RUN npm install

copies the whole working directory, where files are edited often. During a build it is therefore unlikely that Docker finds a cached layer for this instruction, so it creates new layers for it and for all subsequent instructions, which can be time consuming: RUN npm install, for example, reinstalls all the dependencies of our JavaScript app.

If instead we use

COPY ./package.json /app/package.json

RUN npm install

Docker can retrieve the layers for these two instructions from the cache, as long as the package.json file hasn't changed. Since the app's dependencies are defined in this file, if it doesn't change we can safely reuse the cached layers instead of refetching and reinstalling everything.

To check whether Docker uses the cache for the layers we care about, we can examine the logs during the image build:

Step 5/8 : COPY ./package.json /app/package.json
 ---> Using cache
 ---> b6bbe99bb004
Step 6/8 : RUN npm install
 ---> Using cache
 ---> 8f3ecd93131b
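A related tip: the build context sent to Docker, and therefore what COPY . /app picks up, can be trimmed with a .dockerignore file in the project root. This keeps a locally installed node_modules folder (and other clutter) from invalidating the cache or ending up in the image. A minimal example:

```
node_modules
npm-debug.log
.git
```

The syntax is similar to .gitignore: one pattern per line, matched against paths in the build context.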

Adding the source code

Now let's take care of the web app's source code: create a simple index.js file containing the following code. When the app starts, it will listen on the port we used just before with the EXPOSE instruction, and it will show a page with a "Hello World" text.

const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello World!');
});

// Log the chosen port so it shows up in `docker logs`
app.listen(port, () => console.log(`Listening on port ${port}`));

Let's copy this file, together with the whole directory, into the image

# Copy all files from current folder
# inside our image in the folder `/app`
COPY . /app

and add the command the container will execute when it starts

# Start the app
ENTRYPOINT ["npm", "start"]
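One caveat with this choice: when npm is PID 1 in the container, it may not forward termination signals to the node process, so docker stop can hang until the timeout expires before killing the app. If you run into this, an alternative is to invoke node directly:

```dockerfile
# Alternative: run node directly so it receives SIGTERM from `docker stop`
ENTRYPOINT ["node", "index.js"]
```

The exec form (JSON array) matters here too: the shell form would wrap the command in /bin/sh, which again intercepts the signals.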

Build and run the image

All that's left is to run our application. First we have to build our image with the command

$ docker build -t nodejs-app:1.0.0 .

The -t parameter tells Docker the tag we want to give our image, in this case nodejs-app:1.0.0. With the . argument we specify the current folder as the build context for the instructions in the Dockerfile.

Once the image is built, we can find it in the list of images saved on our machine. To see the list, use the command:

$ docker image ls
REPOSITORY   TAG     IMAGE ID       CREATED         SIZE
nodejs-app   1.0.0   1182ac9acd7b   2 minutes ago   146MB

Now start the application with:

$ docker container run -p 4000:3000 -d nodejs-app:1.0.0

With the -p parameter we tell Docker to map the container's port 3000 to port 4000 on the host machine, so we can use the latter to reach the app from outside the container. The optional -d parameter launches the container as a background process.

If you now open the browser and navigate to localhost:4000, you'll see that the app is running and shows the "Hello World!" page.

If you installed Docker on Windows with Docker Toolbox, you may not be able to reach the app at localhost. In that case you should use the Docker Machine IP, for example http://192.168.99.100:4000/. To find it, run:

$ docker-machine ip
192.168.99.100

Useful commands

Listed below are some common commands for basic Docker usage:

# Show running containers
docker ps

# Stop a running container
# You can get the id from previous command
docker stop <CONTAINER ID>

# Remove a container
docker rm <CONTAINER ID>

# Restart a container
docker restart <CONTAINER ID>

# See the logs generated by a container
docker logs <CONTAINER ID>

# See the available options for a Docker command
docker help
docker image --help
docker image ls --help

Conclusions

Now we know how to create and run an image for executing our web app on every machine where Docker is installed. In the next articles we'll learn how to manage the other aspects of its development.

How to create a Node.js application with Docker, Part 2: Development

Docker is a great tool not only for deploying applications but also for the other steps of the software lifecycle. In this second part we'll see how to use it while developing our Node.js application.

How to create a Node.js application with Docker, Part 3: Debugging

When developing a complex application, debugging is fundamental to implementing new features, improving the code, and fixing bugs. Let's see how to use the tools provided by Docker and Node.js to tackle this important task.

How to create a Node.js application with Docker, Part 4: Testing

In the fourth part of this series we'll see how to easily create a complete testing environment for our Node.js application with Docker, Mocha and Chai.