Marc Denning

Getting Started with Docker Part 2: Building Images and Docker Compose

This blog post was originally published 22 April 2017. I have recently updated the content to reflect changes in syntax and context, as well as compatibility with a Docker alternative: Podman.

This post is part 2 in a series on getting up and running with Docker. It would be best to walk through part 1 before continuing with this post, but if you are already familiar with Docker and running containers, feel free to pick up here. In this post, we will cover the following topics:

- Building a custom image with a Dockerfile
- Basic container orchestration with Docker Compose
- Architecting an application with Compose

Feel free to open your command prompt and favorite text editor before we get going. All three sections walk you through some hands-on activities. These exercises assume some familiarity with command line tools and some basic knowledge of Linux system administration. You can get by without much of either.

Building an Image

Now that we know a little bit about running containers and different options available to us, let's try building our own image that we can run, distribute, and deploy.

Building custom images is a great way to share exact environments and build tools with fellow developers as well as distribute applications in a consistent way. If we publish that image to a public container registry like Docker Hub, other individuals can pull our image down and run it on their machines. Those containers will be exact replicas of the operating system, programs, configuration, and application files that we built into the image. If someone has a compatible container host, they can run our image wherever that environment lives.

If you are planning to use containers in a public environment (say a shared testing environment or production), custom images are probably the best way to deploy your application. The image can be dropped into any environment easily from an individual's machine running the Docker CLI or via a continuous integration/continuous delivery pipeline tool without having to worry about application build steps, dependencies, or system configuration changes.
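
For example, once we have built an image (we will do that below), publishing it to Docker Hub only takes a few commands. This is just a sketch: your-username is a placeholder for a real Docker Hub account, and my-nginx is the image we build later in this post.

docker login
docker tag my-nginx your-username/my-nginx:latest
docker push your-username/my-nginx:latest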

We create container images by writing a kind of script called a "Dockerfile" (or "Containerfile" if you're using a Docker alternative). Dockerfiles have a relatively short list of commands at their disposal and allow us to specify how to build one image from another. There are several base images available on Docker Hub and other public registries. We can also derive our image from an application-specific image like nginx if we want. We will start from the Ubuntu image and create our own nginx image to use for the purpose of this exercise.

Let's create a new directory on our desktop or wherever you keep code projects. Now, we add a new file to this directory: Dockerfile. There is no file extension for Dockerfiles. Let's open this file in our favorite editor, and add the following lines:

FROM ubuntu:jammy
RUN apt update && apt install -y nginx
EXPOSE 80
COPY . /var/www/html
CMD ["nginx", "-g", "daemon off;"]

Let's break down the Dockerfile line-by-line:

- FROM ubuntu:jammy starts our image from the official Ubuntu 22.04 (Jammy) base image.
- RUN apt update && apt install -y nginx executes a command inside the image while it is being built; here it refreshes the package index and installs nginx.
- EXPOSE 80 documents that the container listens on port 80 so a host port can be mapped to it.
- COPY . /var/www/html copies the contents of the build context (everything next to the Dockerfile) into nginx's default web root.
- CMD ["nginx", "-g", "daemon off;"] sets the command the container runs on startup: nginx in the foreground so the container stays alive.

Before we build our image, let's add a default HTML page for the COPY command to incorporate. Create an index.html file in the same directory as Dockerfile and add the following content:

<html>
<head>
<title>Docker Hello World</title>
</head>
<body>
<h1>Hello World from the nginx Docker Container!</h1>
</body>
</html>

We will know that our container launched successfully when we run it and can see this page in our browser. Now, we build the image by running docker build -t my-nginx . inside the directory with Dockerfile. The -t flag provides a "tag" or name to use for your image within your Docker host, and the final argument "." specifies the build context: the directory whose files are available to COPY and where Docker looks for the Dockerfile by default. In our command prompt, we should see a lot of output from the build process including output from the apt command (which is quite verbose). Look for the following lines indicating the steps in our Dockerfile that Docker is executing:

Step 1/5 : FROM ubuntu:jammy
 ...
Step 2/5 : RUN apt update && apt install -y nginx
 ...
Step 3/5 : EXPOSE 80
 ...
Step 4/5 : COPY . /var/www/html
 ...
Step 5/5 : CMD nginx -g daemon off;
Successfully built 96047713afe8

This process may take a few seconds to a few minutes. When Docker is done building your image, we should see a line starting with "Successfully built," and Docker will return us to the command line. Now, we have our own container image named my-nginx stored in our Docker Engine's cache of images. Let's run our new image with the command docker run --rm -d -p 8080:80 --name my-nginx my-nginx. Just like before, this tells Docker to throw away the container when it stops, run it in the background, and map port 8080 on the host to port 80 on the container. The name of the container is my-nginx just like the image so that we can easily find it. Run docker ps to check on our new container:

CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS                  NAMES
3f35e760ae10   my-nginx   "nginx -g 'daemon ..."   3 seconds ago   Up 3 seconds   0.0.0.0:8080->80/tcp   my-nginx

If we open our browser, navigate to http://localhost:8080, and refresh the page, we should see the message we wrote earlier in index.html: "Hello World from the nginx Docker Container!" When we are done testing the container, we can stop it with docker stop my-nginx.
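
If you prefer to verify things from the terminal, the whole exercise can be replayed with a few commands; this is just a recap of the steps above, with curl standing in for the browser (assuming curl is installed on your machine):

docker build -t my-nginx .
docker run --rm -d -p 8080:80 --name my-nginx my-nginx
curl http://localhost:8080
docker stop my-nginx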

Basic Container Orchestration with Docker Compose

A common discussion point when talking about containers is "containerizing" applications. Containerizing essentially refers to the practice of taking a non-container-based application (i.e. one running on a traditional VM or directly on a native platform) and re-working its components to run inside containers.

Almost all web applications consist of at least a database, an application runtime, and a web server. Take the LAMP stack, for example:

- Linux as the operating system
- Apache as the web server
- MySQL as the database
- PHP as the application runtime

Porting this across to containers might look like the following:

- An Apache container serving static assets and forwarding application requests
- A PHP-FPM container running the application code
- A MySQL container holding the database

With this structure, each major component of the application runs in its own isolated environment with explicit connections to dependent containers. This allows components to be scaled independently as needed and upgraded on their own. For example, to deploy a new version of the PHP application, you just need to re-build and re-deploy the PHP-FPM image, not the whole stack. Similarly, Apache could be upgraded to address a security vulnerability without impacting the database or application configuration.

There are many open source container orchestration tools out there. Docker Desktop comes bundled with a Docker-native tool called Docker Compose. Compose allows you to describe an application's set of containers declaratively via a YAML file. Invoking Compose commands on the command line instructs Docker on how to manage the application's containers including respecting dependencies between containers, mapping ports, and recreating containers when their images or configuration change.

A Note for Podman Users

If you're using an alternative container development tool like Podman, don't fret - there is a Compose alternative called Podman Compose! The Compose spec has been open-sourced as its own project since I originally authored this post. I have had some trouble with the networking and container dependency parts of Compose, but hopefully that will not plague you.
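
If you want to try it, podman-compose is distributed as a Python package and reads the same docker-compose.yml files we write below, with commands that mirror Docker Compose's. A rough sketch (installation methods vary by platform, so treat the pip route as one option):

pip3 install podman-compose
podman-compose up -d
podman-compose ps
podman-compose down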

Architecting an Application with Compose

Let's try writing our own Docker Compose file and fitting two system components together with it. The different containers that Compose runs as part of your application are called "services". We will set up two simple services to demonstrate some basics of Compose.

Note: If you want to skip ahead or have a reference to compare your work against, check out the Docker 101 Compose repository on GitHub. All the files we write in the blog post are included in the repository, so you can always refer to it if you get stuck. It is configured to reference a full Express app vs. a static Node app, so the api directory and nginx.conf files are a bit different.

We start by creating a new directory to work in, calling it docker-101-compose. Inside that directory, create an api directory and a webserver directory. Now, we add a docker-compose.yml file in the root directory. Our file structure should look like this:

docker-101-compose
|   docker-compose.yml
+-- api
+-- webserver
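
If you prefer to set this up from the shell, the same layout can be created with a couple of commands (any method works):

mkdir -p docker-101-compose/api docker-101-compose/webserver
cd docker-101-compose
touch docker-compose.yml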

The docker-compose.yml file will describe how our two services should be configured and relate. We will come back to this, but first, we should set up the two images we will use. Inside the webserver directory, let's create a new Dockerfile and a nginx.conf file. Our Dockerfile will be very simple but will allow us to provide custom configuration to nginx. Add the following lines to Dockerfile:

FROM docker.io/nginx:1.25
COPY nginx.conf /etc/nginx/nginx.conf

All we are doing is building our image from the nginx base image, and overriding the default configuration file with our own. Add the following to nginx.conf:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    gzip       on;
    gzip_types *;

    server {
        listen 80;

        location /api/ {
            proxy_pass http://api:8888/api/;
        }
    }
}

Almost all of this is taken from the default nginx.conf file inside the nginx image. We just modify the server block at the end to listen on port 80 (the default), and provide a proxy rule to direct all HTTP requests beginning with /api/ to http://api:8888/api/. This is the other service that we will build and deploy with Compose.

Our api service will start as a simple Node app that just serves a JSON file. We could evolve this to be a full Express-based application, but we just want to show two different images working together. Let's start by adding a package.json file to the api directory with the following contents:

{
  "name": "docker-101-compose",
  "version": "1.0.0",
  "description": "Sample Node JSON API server app.",
  "scripts": {
    "start": "static . --host-address \"0.0.0.0\" --port 8888"
  },
  "author": "your.email@address.com",
  "license": "MIT",
  "dependencies": {
    "node-static": "^0.7.9"
  }
}

This pulls in the node-static package to allow us to serve our JSON file and defines a start script for our image to run with npm. Download the movies.json file and add it to a new subdirectory called api inside this api directory; node-static serves files relative to the app root, so it will be available at the /api/movies.json path. These two files are enough for our Node API to run. Now we need to build the Docker image, but before we do, let's add the following to a .dockerignore file in the api directory:

node_modules
npm-debug.log

This prevents the node_modules directory from being copied into the image if we install dependencies locally during development. Modules will instead be installed inside the container when the Docker image is built. Add the following to a Dockerfile in the api directory:

FROM docker.io/node:18-buster

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

# Update permissions of app source files
RUN chown -R node /usr/src/app

ENV PORT=8888

EXPOSE 8888

USER node

CMD [ "npm", "start" ]

We do not need to get into the details of these instructions, especially if you are not a Node developer, but the file follows a convention put forth by the Node.js team in their guide Dockerizing a Node.js web app. Essentially, the build file installs the npm packages inside the container, then copies over the application source. We expose the port we need for our app and run npm start to kick things off.
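
If you have Node.js and npm installed locally, you can sanity-check the api service before containerizing it. This step is optional and simply mirrors what the container will do (it assumes movies.json is already in place under api/api):

cd api
npm install
npm start
# in a second terminal:
curl http://localhost:8888/api/movies.json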

Our full directory structure should now look like this:

docker-101-compose
|   docker-compose.yml
+-- api
|   |   .dockerignore
|   |   Dockerfile
|   |   package.json
|   +-- api
|       |   movies.json
+-- webserver
    |   Dockerfile
    |   nginx.conf

Now that we have Dockerfiles for both images set up, let's get back to the docker-compose.yml file. This YAML file serves as a manifest or blueprint for all of the containers in our application. These containers do not all have to have custom images. In our case, docker-compose.yml looks like this:

version: "3"
services:
  api:
    build: ./api
    environment:
      NODE_ENV: production
  webserver:
    build: ./webserver
    ports:
      - "80:80"
    links:
      - api

We will review just a couple of features, but you can also review the full Compose file reference if you need to. The version property identifies which version of the Compose specification the file was written for (newer releases of Compose treat this field as optional).

The main property of interest is services. The services property describes the containers you want Docker Compose to run for you. The next property down gives each container a name. Incidentally, this is the same name we use to address one container from another within the network Docker creates. Within the definition of a service, we define the following properties:

- build points to the directory containing the Dockerfile for that service so Compose can build its image for us.
- environment sets environment variables inside the container; here we set NODE_ENV for the api service.
- ports maps host ports to container ports, just like the -p flag on docker run; here we map host port 8080 to port 80 on the webserver container.
- links declares a dependency on another service so that it starts first and is reachable by name; services on the same Compose network can already reach each other by name, so this mainly expresses start-up ordering.
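
Before starting anything, it can be worth having Compose validate the file. The docker-compose config command parses docker-compose.yml and prints the fully resolved configuration, or an error if something is malformed:

docker-compose config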

Now, let's navigate to the root directory and run docker-compose up -d. This command performs several steps for us:

- Builds the images for any services with a build property that have not been built yet
- Creates a network for the application's containers to communicate over
- Creates and starts a container for each service, respecting the dependencies between them
- Detaches and leaves the containers running in the background (the -d flag)

Both of our images will probably be built when we run this command. We may see a lot of output from the build process, and when it finishes, Compose will return us to the command line. We can check the status of our services with docker-compose ps.

Name                             Command                State   Ports
--------------------------------------------------------------------------------------
docker-101-compose_api_1         npm start              Up      8888/tcp
docker-101-compose_webserver_1   nginx -g daemon off;   Up      0.0.0.0:8080->80/tcp

We can see in the console output that each container has a specific name, and like docker ps, the exposed and mapped ports are shown. In our browser, we should now be able to navigate to http://localhost:8080/api/movies.json and view our movies.json file being served by Node through nginx.
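
The same check works from the terminal, and docker-compose logs is handy if the request does not come back as expected (the -f flag follows the log output):

curl http://localhost:8080/api/movies.json
docker-compose logs -f webserver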

We can run docker-compose stop to halt all of the services in our docker-compose.yml file. The docker-compose rm command will remove them from our Docker host, just like the docker rm command. There is also a shorthand command docker-compose down that stops and removes everything created by docker-compose up.

Container orchestration can be a lot more elaborate, and you can quickly outgrow the functionality of Docker Compose by itself. At this point, you are equipped with the basics and can tackle more complex tools as you need them.

Next Steps

Now that you know more about containers and have some hands-on experience with Docker, you may want to try incorporating it into your workflow on a new or existing project. Here are some ideas to get you going:

- Write a Dockerfile for one of your own applications and run it locally
- Use Docker Compose to stand up supporting services, like a database or cache, for local development instead of installing them directly on your machine
- Publish an image to a registry such as Docker Hub and pull it down onto another machine

We have only begun to scratch the surface of container technology. Hopefully you now have enough base knowledge to begin working with containers and exploring ways they can benefit your development workflow. Keep an eye out for more posts on containers in the future, and if you have any questions or feedback, feel free to reach out on LinkedIn.