Docker 101

abd · Sep 17, 2024


In my last blog post, we compared virtualization and containerization. As promised at the end of that post, this one covers Docker's architecture and some of the most common commands for containerizing your applications.
Let’s dive in.

Docker is a containerization tool. Under the hood, Docker uses Linux kernel features, namespaces and cgroups, to create isolated processes on your host OS without having to install VMs. Docker manages these low-level details for you.
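If you are curious, you can actually see those namespaces on a Linux host. A quick sketch (the container name demo is just an example):

docker run -d --name demo alpine sleep 300
PID=$(docker inspect --format '{{.State.Pid}}' demo)
sudo ls -l /proc/$PID/ns   # each entry (pid, net, mnt, ...) is a separate namespace
docker rm -f demo          # clean up

Each file under /proc/PID/ns is one of the namespaces isolating that container's process from the rest of the host.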

Like many well-known applications, Docker uses a client-server architecture that communicates over a REST API. You might wonder why Docker needs a client-server architecture at all. Does it send requests to some remote server? It actually does send requests, but not in the way you might think.

The Docker architecture consists of a client (Docker Desktop or the docker CLI) and a server (the Docker daemon).

Docker client-server architecture diagram: https://media.geeksforgeeks.org/wp-content/uploads/20240208031606/Screenshot-2024-02-08-031526.png

The Docker daemon executes commands issued by the client, translating them into operations within the Docker environment. The daemon also manages containers and images.

You might have heard about “Docker Engine” at least once in your career; it is essentially the Docker daemon together with the API and CLI around it. You can easily install Docker Desktop and start your Docker engine. I am not going to cover those steps in this blog post. Docker Desktop acts as both the client and the server at this point: the engine runs in the background and waits for commands from the client, the Docker Desktop app.
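Because the client and the daemon talk over a REST API, you can even bypass the docker CLI and query the daemon directly. A small sketch, assuming the default Unix socket on Linux/macOS:

curl --unix-socket /var/run/docker.sock http://localhost/version

The docker version command makes essentially the same request under the hood and prints both the client and server versions.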

How does Docker manage containers and images?

Here is a simple roadmap if you would like to dockerize your application:

Create a Dockerfile in your application’s repo:

touch Dockerfile

Then build the image from this Dockerfile:

docker build .

This command builds a Docker image based on your Dockerfile.
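In practice, you will almost always tag the image so you can refer to it by name later (myapp:1.0 here is just an example name):

docker build -t myapp:1.0 .
docker images   # verify the image was created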

Now you can create and run a container that contains your application with:

docker run <image_name>
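For a web application you will typically also name the container and publish a port. A sketch assuming the example myapp:1.0 tag from above:

docker run -d --name myapp-container -p 8080:80 myapp:1.0

Here -d runs the container in the background, and -p 8080:80 maps port 8080 on your host to port 80 inside the container.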

Basically, the instructions in the Dockerfile describe how the image will be built. If you open the Docker Desktop application, you can see your running containers, or you can list them on the CLI with:

docker ps

Here is an example Dockerfile:

# Use the Node.js 18 image as the base image for building the application
FROM node:18 AS build
# Set the working directory inside the container to /app
WORKDIR /app
# Copy the package.json and package-lock.json files to the working directory
COPY package*.json ./
# Install the project dependencies listed in package.json
RUN npm install
# Copy the rest of the project files to the working directory
COPY . .
# Build the React application for production
RUN npm run build
# Use a lightweight Nginx image to serve the built application
FROM nginx:alpine
# Copy the built application from the build stage to Nginx's default directory for serving HTML
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80 to make the app accessible over HTTP
EXPOSE 80
# Start Nginx in the foreground (without daemonizing)
CMD ["nginx", "-g", "daemon off;"]

What is Docker Hub?

Docker Hub is a container registry built for developers and open-source contributors to find, use, and share container images. You can publish your own images or pull (download) images created by other developers. In this respect, I compare Docker Hub to GitHub.

You can download Docker images from the hub with:

docker pull <image_name>
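Image names can include a tag to pin a specific version; without one, Docker assumes latest:

docker pull node:18   # a specific version
docker pull node      # same as node:latest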

You can also push your own images to the hub. First, you have to log in:

docker login

Then:

docker push myusername/myapp:latest
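Note that the image must be tagged with your Docker Hub username before pushing. Assuming the example names used so far:

docker tag myapp:1.0 myusername/myapp:latest
docker push myusername/myapp:latest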

What is Docker Compose?

Docker Compose helps us run multiple containers with a single command: docker-compose up to start them and docker-compose down to tear them down. To use Docker Compose, you create a docker-compose.yml file. Its most important feature is that it automatically creates a network between the defined containers.

Let’s say you have dockerized both your application and its database. You can run both containers with docker-compose up, and you don't have to create a separate network for them because Docker Compose does it for you.

Here is an example docker-compose.yml file:

version: '3'
services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "80:80"
    restart: always
  backend:
    image: node:18
    working_dir: /app
    volumes:
      - ./backend:/app
    command: npm start
    ports:
      - "5000:5000"
    environment:
      - NODE_ENV=production
    restart: always
  # Example of using a database like PostgreSQL
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydatabase
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: always
volumes:
  db_data:
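Because Compose puts all three services on a shared network, containers reach each other by service name. For example, the backend could connect to the Postgres container with a connection string like this (credentials taken from the example file above):

postgres://user:password@db:5432/mydatabase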

You can start all of the services defined in this docker-compose.yml (frontend, backend, and db) with:

docker-compose up
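A few related Compose commands you will likely use:

docker-compose up -d   # start all services in the background
docker-compose ps      # list the containers Compose started
docker-compose down    # stop and remove the containers and the network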

Some other Docker commands that you might use:

docker run -it ubuntu

It runs the Ubuntu image and starts an interactive shell (-i keeps STDIN open, -t allocates a terminal) so you can use the Ubuntu CLI.
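A quick session might look like this:

docker run -it ubuntu bash
cat /etc/os-release   # run inside the container
exit                  # leave (and stop) the container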

docker run -d postgres

It runs the Postgres image in the background (detached), so it keeps running until you stop it. Databases are the most common use case for this flag.
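Note that the official postgres image refuses to start without a password, so in practice you pass one in (the name and credentials here are placeholders):

docker run -d --name mydb -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres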

docker ps

It lists running containers along with their IDs, images, and status; add -a to include stopped containers.

docker logs -f <container_id>

It streams (follows) the logs of the given container; press Ctrl+C to stop following.
