How to create a Node.js application with Docker, Part 2: Development

Docker is a great tool not only for deploying applications but also for the other stages of the software lifecycle. In this second part we'll see how to use it for the development of our Node.js application.

In the first article of this series we learned how to create and run a Docker container that executes a Node.js web application, reducing several issues related to environment configuration.

Now we'll create a development environment for our application which, in addition to the benefits seen before, will allow us to edit files easily, to see changes immediately thanks to live reloading, and to use programs or scripts both inside and outside the container.

Prerequisites

Since we'll use the files created in the previous article, I suggest you retrieve them before proceeding. Furthermore, some of the terms and concepts we'll mention were covered in the previous article, so take a look at it if you don't know Docker yet.

Volumes

The first concern is editing the application's files both from outside the container (for example, with an IDE installed on our PC) and from inside it (e.g. with scripts or task runners). We want the changes to persist after the container is destroyed and to be synced between the host system and the container instantly and automatically.

To achieve this we can use a type of volume called a bind mount, that is, a directory shared between the host system and the container.

Let's start by editing the Dockerfile we used in the previous article to launch the application

# Use the Node.js image for development
FROM node:10.15

# Set the working directory. If it doesn't exist, it'll be created
WORKDIR /app

# Define the env variable `PORT`
ENV PORT 3000

# Expose the port 3000
EXPOSE ${PORT}

As you can see, we removed the instructions that install the dependencies and run the app. When a volume is mounted on a container folder, the latter is obscured and replaced by the mounted volume's content. Since we'll use the container's /app folder as the mount point, every file already present there will be inaccessible, so we'd better leave it empty.

Another edit, although optional, is changing the instruction from FROM node:10.15-slim to FROM node:10.15. That's because for a development environment we prefer to work with the complete Node.js image rather than a lightweight version that is more suited to a production environment.

Let's build the image

$ docker build -t nodejs-app-dev:1.0.0 .

If we want to keep both Dockerfiles, one for deployment and one for development, we can give them different names and build one of them by adding the --file flag (or simply -f). There is a better approach to achieve this, but we'll see it at the end of the article.

$ docker build -t nodejs-app-dev:1.0.0 -f ./Dockerfile.dev .

Since the environment built from the image is agnostic and independent of the OS, it cannot know the file system on which the container will be created. So we can mount the volume only when the container starts, with

$ docker run -p 4000:3000 -d \
  -it \
  --mount type=bind,source="$(pwd)",target=/app \
  --name myapp \
  nodejs-app-dev:1.0.0

Let's examine the command:

  • We have already seen the docker run -p 4000:3000 -d command: it creates a new container, maps the container port to a port reachable from the host and launches it as a background process.
  • The -it flag allows us to use, from the host, a terminal running in the container.
  • The --mount flag is used to mount a volume. It accepts multiple key-value pairs, separated by commas, to define its settings. With type=bind we choose the type of volume, with source="$(pwd)" we set the current working directory as the volume's content and with target=/app we set the container's /app folder as the mount point.
  • The optional --name flag assigns a name to the container, which we can use in place of the CONTAINER ID in some Docker commands to simplify their syntax.
  • nodejs-app-dev:1.0.0 is the name of the image from which to create the container.
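For instance, with the name assigned above we can manage the container without ever looking up its ID (these are standard Docker CLI commands, shown here as a sketch):

```shell
$ docker logs myapp    # print the application's output
$ docker stop myapp    # stop the container
$ docker start myapp   # start it again with the same mounts and port mappings
```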

To check whether the volume has been mounted correctly, look at the Mounts section in the output generated by

$ docker inspect myapp
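If you want just that section, docker inspect also accepts a Go template via the --format flag (a handy filter, not a requirement):

```shell
$ docker inspect --format '{{ json .Mounts }}' myapp
```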

When the volume is mounted, we can access the container's terminal by using

$ docker exec -it myapp bash
root@13f480d37103:/app#

Install the dependencies and launch the application with

root@13f480d37103:/app# npm install && npm start

Note: If you use Docker on Windows, you could run into some trouble due to differences between the container's file system (Linux) and the host's (Windows). In that case you can try to use

root@13f480d37103:/app# yarn install --no-bin-links && yarn start

When the application has been launched, you can see it in the browser at localhost:4000 (if you are on Windows, remember that you may have to use the IP of the Docker machine instead of localhost).

If you take a look at the working directory, you will see that a node_modules folder has been created. This means that changes made inside the container were propagated to the folder on your host.
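Since node_modules now exists on the host as well, it's worth making sure that the deployment Dockerfile's COPY . /app instruction doesn't copy it into the image (npm install already runs there). A .dockerignore file in the project root does the trick; this is a suggested addition, not one of the files from the first article:

```
node_modules
npm-debug.log
```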

Now try the opposite. Open the index.js file from the host and replace the "Hello World!" text with "Hello Docker!". Then, again using the container's shell, kill the process started by npm start and relaunch it. Refreshing the page opened earlier in the browser, you'll see the updated message.

Live reloading

Now, every time we change a file in our app we no longer need to build a new image, but we still have to restart the application manually.

We can address this downside by using nodemon, a Node.js utility that automatically restarts a web application when its files change.

Install it in the container

root@13f480d37103:/app# npm install nodemon -D

Add a script to our package.json to launch the app in development mode with nodemon, then run it with npm run dev

{
  "name": "nodejs-app",
  "dependencies": {
    "express": "^4.17.0"
  },
  "scripts": {
    "start": "node index.js",
    "dev": "npx nodemon index.js"
  },
  "devDependencies": {
    "nodemon": "^1.19.1"
  }
}

Note: If you use Docker on Windows and the command doesn't seem to restart the application, use npx nodemon -L index.js. As mentioned before, when we use a volume shared between a Windows host and a container, we can run into compatibility issues due to the different file systems. To solve this one, we call nodemon with the -L flag to make it use a different file-watching strategy (the polling mode of the Chokidar library).
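Instead of passing the flag on every run, nodemon can also read its settings from a nodemon.json file in the project root; the legacyWatch option below is the config-file equivalent of -L (the watch and ext values are just illustrative):

```json
{
  "legacyWatch": true,
  "watch": ["."],
  "ext": "js,json"
}
```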

Now we can see the changes made to our application directly in the browser without restarting it manually.

Multi-stage build

We have one final concern to address: the different Dockerfiles. We have seen the problems caused by using a single Dockerfile for both deployment and development. As a temporary fix we gave the two Dockerfiles different names, but this means we have to keep them in sync manually, which doesn't scale. We could think about a script for this purpose, but if in the future we want another Dockerfile to build a testing environment, this option could become a mess as well. We need a more convenient solution.

Since Docker version 17.05, we can use multi-stage builds. With a single Dockerfile we can define more than one build stage and compose the final image by choosing which of them to use.

Let's see an example for our use case

# Use the Node.js image for development
FROM node:10.15 AS development

# Set the working directory. If it doesn't exist, it'll be created
WORKDIR /app

# Define the env variable `PORT`
ENV PORT 3000

# Expose the port 3000
EXPOSE ${PORT}

# Use the Node.js image for deployment
FROM node:10.15-slim AS deploy

# Set the working directory. If it doesn't exist, it'll be created
WORKDIR /app

# Copy the file `package.json` from current folder
# inside our image in the folder `/app`
COPY ./package.json /app/package.json

# Install the dependencies
RUN npm install

# Copy all files from current folder
# inside our image in the folder `/app`
COPY . /app

# Start the app
ENTRYPOINT ["npm", "start"]

As you can see, we now have multiple FROM instructions. Each of them defines a stage with a name (e.g. AS deploy). During the creation of the image, Docker runs a build for every stage following their order in the file, but only the files from the last stage built are included in the final image. We can use the optional --target flag to choose the last stage to build

$ docker build --target development -t nodejs-app-dev:1.0.0 .
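Conversely, since deploy is the last stage in the file, building without --target goes through to the final deploy stage, producing the production image from the very same Dockerfile:

```shell
$ docker build -t nodejs-app:1.0.0 .
```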

It's also possible to copy files created in one stage into another by using COPY --from=<STAGE-NAME> (e.g. COPY --from=deploy) in the Dockerfile.
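For example, a hypothetical test stage (not part of the Dockerfile above) could reuse the dependencies already installed in the deploy stage instead of running npm install a second time:

```dockerfile
# Hypothetical extra stage appended to the multi-stage Dockerfile
FROM node:10.15 AS test
WORKDIR /app

# Reuse the node_modules built in the `deploy` stage
COPY --from=deploy /app/node_modules /app/node_modules
COPY . /app

CMD ["npm", "test"]
```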

Conclusions

At this point we know how to develop and deploy a Node.js web application with Docker. In my opinion, one of the best benefits of this approach is being able to use, without particular effort, programs and scripts both inside and outside the container, while at the same time keeping our work machine clean and ready to host several projects with all kinds of dependencies and environment settings. In the next articles we'll focus on debugging, testing and orchestrating other services.

How to create a Node.js application with Docker, Part 3: Debugging

When developing a complex application, debugging is fundamental to implement new features, improve the code and fix bugs. Let's see how to use the tools provided by Docker and Node.js to tackle this important task.

How to create a Node.js application with Docker, Part 4: Testing

In the fourth part of this series we'll see how to easily create a complete testing environment for our Node.js application with Docker, Mocha and Chai.

How to create a Node.js application with Docker, Part 5: Services

In the fifth part of this series, we'll learn how to use Docker Compose to manage a containerized app and add services to it.