Getting Started with Docker Part 2: Building Images and Docker Compose
This blog post was originally published 22 April 2017. I have recently updated the content to reflect changes to syntax and context, as well as compatibility with a Docker alternative: Podman.
This post is part 2 in a series on getting up and running with Docker. It would be best to walk through part 1 before continuing with this post, but if you are already familiar with Docker and running containers, feel free to pick up here. In this post, we will cover the following topics:
- Building an image
- Basic container orchestration with Docker Compose
- Architecting an application with Compose
Feel free to open your command prompt and favorite text editor before we get going. All three sections walk you through some hands-on activities. These exercises assume some familiarity with command-line tools and some basic knowledge of Linux system administration, but you can get by without much of either.
Building an Image
Now that we know a little bit about running containers and different options available to us, let's try building our own image that we can run, distribute, and deploy.
Building custom images is a great way to share exact environments and build tools with fellow developers, as well as distribute applications in a consistent way. If we publish an image to a public container registry like Docker Hub, other individuals can pull our image down and run it on their machines. Those containers will contain an exact replica of the operating system, programs, configuration, and application files that we built into the image. Anyone with a compatible container host can run our image, wherever that environment lives.
If you are planning to use containers in a public environment (say a shared testing environment or production), custom images are probably the best way to deploy your application. The image can be dropped into any environment easily from an individual's machine running the Docker CLI or via a continuous integration/continuous delivery pipeline tool without having to worry about application build steps, dependencies, or system configuration changes.
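For a sense of how lightweight that workflow is, publishing an image to a registry like Docker Hub is only a couple of CLI commands once the image is built. The account, image, and repository names below are placeholders, not something we build in this post:

# Log in, tag the local image against your repository, and push it
docker login
docker tag my-app docker.io/your-username/my-app:1.0
docker push docker.io/your-username/my-app:1.0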
We create container images by writing a kind of script called a "Dockerfile" (or "Containerfile" if you're using a Docker alternative). Dockerfiles have a relatively short list of commands at their disposal and allow us to specify how to build one image from another. There are several base images available on Docker Hub and other public registries. We can also derive our image from an application-specific image like nginx if we want. We will start from the Ubuntu image and create our own nginx image to use for the purpose of this exercise.
Let's create a new directory on our desktop or wherever you keep code projects.
Now, we add a new file to this directory: Dockerfile. There is no file extension for Dockerfiles.
Let's open this file in our favorite editor, and add the following lines:
FROM ubuntu:jammy
RUN apt update && apt install -y nginx
EXPOSE 80
COPY . /var/www/html
CMD ["nginx", "-g", "daemon off;"]
Let's break down the Dockerfile line-by-line:
- FROM: tells the host what image to start with. We start from the ubuntu image, specifying the jammy tag after the colon. The tag is a version identifier, and in this case, it specifies the Ubuntu 22.04 release.
- RUN: allows us to execute commands and programs inside the container. This is frequently used to install packages like our file does, and can be used to do a myriad of other tasks (e.g. create and change permissions of directories, download external files, run other setup scripts). Notice that we use the && operator to run the apt commands sequentially. This helps ensure that the apt install command succeeds in getting the latest version of the package, and helps us manage our image size because of the way Docker executes RUN commands. We do not need to get into the "why" here, but you can read up on the subject in the Docker docs on Dockerfile best practices.
- EXPOSE: lets the host know which ports on the container should be exposed to internal networks and available for mapping to the host. In our case, we just want to expose port 80 so that nginx's default configuration will work. Note that exposing a port does not mean it is mapped to the host and accessible.
- COPY: copies files from your local directory to the specified path in the container. Make sure to use an absolute path for the container target. For our image, /var/www/html is the default path that nginx uses to serve files.
- CMD: instructs the host to treat the specified command and subsequent arguments as the primary process in the container. When this process terminates, the host stops the container. Our instructions tell the host to start nginx in the foreground and let it govern the lifecycle of the container.
Before we build our image, let's add a default HTML page for the COPY command to incorporate. Create an index.html file in the same directory as Dockerfile and add the following content:
<html>
<head>
<title>Docker Hello World</title>
</head>
<body>
<h1>Hello World from the nginx Docker Container!</h1>
</body>
</html>
We will know that our container launched successfully when we run it and can see this page in our browser.
Now, we build the image by running docker build -t my-nginx . inside the directory with Dockerfile. The -t flag provides a "tag" or name to use for your image within your Docker host, and the final argument "." specifies the directory containing the Dockerfile we want Docker to build.
In our command prompt, we should see a lot of output from the build process, including output from the apt command (which is quite verbose). Look for the following lines indicating the steps in our Dockerfile that Docker is executing:
Step 1/5 : FROM ubuntu:jammy
...
Step 2/5 : RUN apt update && apt install -y nginx
...
Step 3/5 : EXPOSE 80
...
Step 4/5 : COPY . /var/www/html
...
Step 5/5 : CMD nginx -g daemon off;
Successfully built 96047713afe8
This process may take a few seconds to a few minutes.
When Docker is done building your image, we should see a line starting with "Successfully built," and Docker will return us to the command line.
Now, we have our own container image named my-nginx stored in our Docker Engine's cache of images.
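If we want to confirm the image is there, we can list it by name (the image ID and size will vary on your machine):

docker images my-nginx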
Let's run our new image with the command docker run --rm -d -p 8080:80 --name my-nginx my-nginx.
Just like before, this tells Docker to throw away the container when it stops, run it in the background, and map port 8080 on the host to port 80 on the container.
The name of the container is my-nginx, just like the image, so that we can easily find it. Run docker ps to check on our new container:
CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS         PORTS                  NAMES
3f35e760ae10   my-nginx   "nginx -g 'daemon ..."   3 seconds ago   Up 3 seconds   0.0.0.0:8080->80/tcp   my-nginx
If we open our browser and navigate to http://localhost:8080, we should see the message we wrote earlier in index.html: "Hello World from the nginx Docker Container!"
When we are done testing the container, we can stop it with docker stop my-nginx.
Basic Container Orchestration with Docker Compose
A common discussion point when talking about containers is "containerizing" applications. Containerizing refers to the practice of taking an application built for a non-container environment (i.e. a traditional VM or native platform) and re-working its components to run inside containers.
Almost all web applications consist of at least a database, an application runtime, and a web server. Take the LAMP stack, for example:
- Linux host
- Apache web server
- MySQL database
- PHP process that executes the application
Porting this across to containers might look like the following:
- A MySQL container to run the database
- A PHP-FPM container that includes the application source and connects to the MySQL container
- An Apache container that runs the web server and connects to the PHP-FPM container
With this structure, each major component of the application runs in its own isolated environment with explicit connections to dependent containers. This allows components to be scaled independently as needed and upgraded on their own. For example, to deploy a new version of the PHP application, you just need to re-build and re-deploy the PHP-FPM image, not the whole stack. Similarly, Apache could be upgraded to address a security vulnerability without impacting the database or application configuration.
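As a rough sketch of what that looks like without any orchestration tooling, we could wire the pieces together by hand with a user-defined network and a few docker run commands. The image tags, container names, and password below are purely illustrative:

# Create a shared network so the containers can reach each other by name
docker network create lamp-net
# Database, application runtime, and web server as separate containers
docker run -d --name db --network lamp-net -e MYSQL_ROOT_PASSWORD=example docker.io/mysql:8
docker run -d --name app --network lamp-net docker.io/php:fpm
docker run -d --name web --network lamp-net -p 8080:80 docker.io/httpd:2.4

Managing even three containers by hand like this gets tedious quickly, which is exactly the problem orchestration tools solve.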
There are many open source container orchestration tools out there. Docker Desktop comes bundled with a Docker-native tool called Docker Compose. Compose allows you to describe an application's set of containers declaratively via a YAML file. Invoking Compose commands on the command line instructs Docker on how to manage the application's containers including respecting dependencies between containers, mapping ports, and negotiating rolling upgrades.
A Note for Podman Users
If you're using an alternative container development tool like Podman, don't fret - there is a Compose alternative called Podman Compose! The Compose spec has been open-sourced as its own project since I originally authored this post. I have had some trouble with the networking and container dependency parts of Compose, but hopefully that will not plague you.
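If you want to try it, podman-compose is a separate install (commonly distributed via pip), and its commands mirror the docker-compose commands used throughout the rest of this post:

pip install podman-compose
podman-compose up -d
podman-compose down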
Architecting an Application with Compose
Let's try writing our own Docker Compose file and fitting two system components together with it. The different containers that Compose runs as part of your application are called "services". We will set up two simple services to demonstrate some basics of Compose.
Note: If you want to skip ahead or have a reference to compare your work against, check out the Docker 101 Compose repository on GitHub.
All the files we write in the blog post are included in the repository, so you can always refer to it if you get stuck.
It is configured to reference a full Express app vs. a static Node app, so the api directory and nginx.conf files are a bit different.
We start by creating a new directory to work in, calling it docker-101-compose.
Inside that directory, create an api directory and a webserver directory.
Now, we add a docker-compose.yml file in the root directory.
Our file structure should look like this:
docker-101-compose
|   docker-compose.yml
+-- api
+-- webserver
The docker-compose.yml file will describe how our two services should be configured and relate.
We will come back to this, but first, we should set up the two images we will use.
Inside the webserver directory, let's create a new Dockerfile and an nginx.conf file. Our Dockerfile will be very simple, but it will allow us to provide custom configuration to nginx. Add the following lines to Dockerfile:
FROM docker.io/nginx:1.25
COPY nginx.conf /etc/nginx/nginx.conf
All we are doing is building our image from the nginx base image and overriding the default configuration file with our own. Add the following to nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    gzip on;
    gzip_types *;

    server {
        listen 80;

        location /api/ {
            proxy_pass http://api:8888/api/;
        }
    }
}
Almost all of this is taken from the default nginx.conf file inside the nginx image. We just modify the server block at the end to listen on port 80 (the default) and provide a proxy rule to direct all HTTP requests beginning with /api/ to http://api:8888/api/. The api in that address is the other service that we will build and deploy with Compose.
Our api service will start as a simple Node app that just serves a JSON file. We could evolve this to be a full Express-based application, but we just want to show two different images working together. Let's start by adding a package.json file to the api directory with the following contents:
{
"name": "docker-101-compose",
"version": "1.0.0",
"description": "Sample Node JSON API server app.",
"scripts": {
"start": "static . --host-address \"0.0.0.0\" --port 8888"
},
"author": "your.email@address.com",
"license": "MIT",
"dependencies": {
"node-static": "^0.7.9"
}
}
This pulls in the node-static package to allow us to serve our JSON file and defines a start script for our image to run with npm. Download the movies.json file and add it to another subdirectory called api, so the file ends up at api/api/movies.json.
These two files are enough for our Node API to run.
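If you have Node and npm installed locally, you can optionally smoke-test the start script before Docker enters the picture; this is purely a sanity check and not required:

cd api
npm install
npm start
# then browse to http://localhost:8888/api/movies.json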
Now we need to build the Docker image, but before we do, let's add the following to a .dockerignore file in the api directory:
node_modules
npm-debug.log
This prevents the node_modules directory from being copied into the image if we install the modules locally during development. Modules will be installed natively in the container when the Docker image is built. Add the following to a Dockerfile in the api directory:
FROM docker.io/node:18-buster
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Update permissions of app source files
RUN chown -R node /usr/src/app
ENV PORT=8888
EXPOSE 8888
USER node
CMD [ "npm", "start" ]
We do not need to get into the details of these instructions, especially if you are not a Node developer, but the file follows a convention put forth by the Node group in their blog post Dockerizing a Node.js web app.
Essentially, the build file installs the npm packages natively in the container, then copies over application source.
We expose the port we need for our app and run npm start to kick things off.
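Before wiring everything together with Compose, we can optionally build and smoke-test the api image on its own from the project root; the docker-101-api tag below is just a throwaway name:

docker build -t docker-101-api ./api
docker run --rm -p 8888:8888 docker-101-api
# in a second terminal: curl http://localhost:8888/api/movies.json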
Our full directory structure should now look like this:
docker-101-compose
|   docker-compose.yml
+-- api
|   +-- api
|   |   |   movies.json
|   |   .dockerignore
|   |   Dockerfile
|   |   package.json
+-- webserver
|   |   Dockerfile
|   |   nginx.conf
Now that we have Dockerfiles for both images set up, let's get back to the docker-compose.yml file. This YAML file serves as a manifest or blueprint for all of the containers in our application. These containers do not all have to have custom images. In our case, docker-compose.yml looks like this:
version: "3"
services:
  api:
    build: ./api
    environment:
      NODE_ENV: production
  webserver:
    build: ./webserver
    ports:
      - "8080:80"
    depends_on:
      - api
We will review just a couple of features, but you can also review the full Compose file reference if you need to. The version property identifies which version of the Compose specification the file was written for.
The main property of interest is services. The services property describes the containers you want Docker Compose to run for you.
The next property down gives each container a name.
Incidentally, this is the same name we use to address one container from another within the network Docker creates.
Within the definition of a service, we define the following properties:
- build: tells the host where the Dockerfile is for our image. If we were using an out-of-the-box image, we would instead specify image: docker.io/mongo. We can provide the same format for the image name as we do for the FROM directive in a Dockerfile.
- environment: specifies environment variables that will be set for the container when it runs.
- ports: identifies port mappings between the container and the host. This is just like the -p flag we pass to docker run.
- depends_on: specifies other services that this service relies on, so Compose starts them first. For us, we want the Node app running before nginx, but only nginx is exposed outside the host. Back in our nginx.conf file, we reference the api service with http://api:8888/; Compose places all of the services in the file on a shared network where each one can be reached by its service name, so the api service is easy to address.
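Before starting anything, it can be handy to have Compose validate the file and print the fully resolved configuration; run this from the root directory next to docker-compose.yml, and any YAML or syntax mistakes will surface immediately:

docker-compose config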
Now, let's navigate to the root directory and run docker-compose up -d.
This command performs several steps for us:
- Check if any of the services we declare need to be built or re-built.
- Build custom images for services as needed.
- Set up internal networks.
- Namespace all of the resources for our application in our host.
- Start up all of our services in the background.
Both of our images will probably be built when we run this command.
We may see a lot of output from the build process, and when it finishes, Compose will return us to the command line.
We can check the status of our services with docker-compose ps.
Name                           Command                State   Ports
---------------------------------------------------------------------------------------------
docker101jsonapi_api_1         npm start              Up      8888/tcp
docker101jsonapi_webserver_1   nginx -g daemon off;   Up      443/tcp, 0.0.0.0:8080->80/tcp
We can see in the console output that each container has a specific name, and like docker ps, the exposed and mapped ports are shown. In our browser, we should now be able to navigate to http://localhost:8080/api/movies.json and view our movies.json file being served by Node through nginx.
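If we want to see the service-name addressing in action, we can make a request to the api service from a throwaway container attached to the application's network. The network name below assumes Compose's default naming for our docker-101-compose directory, and the curlimages/curl image is just a convenient way to get curl; run docker network ls if the name differs on your machine:

# Hit the api service by name from inside the Compose network
docker run --rm --network docker-101-compose_default docker.io/curlimages/curl http://api:8888/api/movies.json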
We can run docker-compose stop to halt all of the services in our docker-compose.yml file. The docker-compose rm command will remove them from our Docker host, just like the docker rm command. There is also a shorthand command, docker-compose down, that stops and removes everything created by docker-compose up.
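One more handy variation: when application code or a Dockerfile changes, we can rebuild and restart the affected services in a single step rather than tearing everything down first:

docker-compose up -d --build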
Container orchestration can be a lot more elaborate, and you can quickly outgrow the functionality of Docker Compose by itself. At this point, you are equipped with the basics and can tackle more complex tools as you need them.
Next Steps
Now that you know more about containers and have some hands-on experience with Docker, you may want to try incorporating it into your workflow on a new or existing project. Here are some ideas to get you going:
- Try containerizing different components of your application. Can you build a Docker image that hosts your application? Do you need multiple containers for different services? Maybe you can use Docker Compose to orchestrate a few containers to support your application.
- Level up your knowledge of container orchestration by learning about the leading orchestration platform: Kubernetes. Alternative orchestrators Docker Swarm mode and Apache Mesos are much less common, but you may still find them in use. These tools take the principles behind Docker Compose and apply them to managing production-grade systems that automatically scale, restart, and distribute multiple instances of your containers across many host nodes.
We have only begun to scratch the surface of container technology. Hopefully you now have enough base knowledge to begin working with containers and exploring ways that it can benefit your development workflow. Keep an eye out for more posts on containers in the future, and if you have any questions or feedback, feel free to reach out on LinkedIn.