Docker Demystified

May 25, 2023
  1. What is Docker?

Docker is a platform that allows you to automate the deployment of applications inside lightweight, portable containers. Containers have everything an application needs to run, including the code, runtime, libraries, and system tools. This makes it easy to move applications between different environments.
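
A quick way to see this in practice, assuming Docker is already installed, is the official hello-world image:

```shell
# Pull (if needed) and run the official hello-world image in a new container
docker run hello-world
```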

  2. Install Docker

Follow the official installation guide for your platform in the Docker docs (docs.docker.com).

  3. Key Docker Concepts
  • Image: A read-only template with instructions for creating a container. Images are stored in a registry such as Docker Hub or a private registry.
  • Container: A runnable instance of an image. Multiple containers can be created from a single image.
  • Dockerfile: A text file with a set of instructions to build an image.
  • Docker Hub: A repository where Docker images are stored.
  4. Basic Docker Commands

Here is a comprehensive list of basic Docker commands:

  5. Docker Version and System Info

Check Docker version:

docker --version

Check system-wide information:

docker info

  6. Docker Image Commands

Pull an image from Docker Hub:

docker pull <image-name>

Example:

docker pull ubuntu

List all images:

docker images

Remove an image:

docker rmi <image-id>

Build an image from a Dockerfile:

docker build -t <image-name> .

Example: docker build -t myapp .

Tag an image:

docker tag <image-id> <repository>:<tag>

Push an image to Docker Hub:

docker push <repository>:<tag>
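
Putting these image commands together, a typical build, tag, and push workflow might look like the following sketch (the myuser/myapp repository name is a placeholder for your own Docker Hub account):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Tag it for a Docker Hub repository (<user>/<repo>:<tag>)
docker tag myapp myuser/myapp:1.0

# Log in and push (requires a Docker Hub account)
docker login
docker push myuser/myapp:1.0
```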

  7. Docker Container Commands

Run a container from an image:

docker run <image-name>

Example: docker run ubuntu

Run a container interactively with a terminal:

docker run -it <image-name>

Example: docker run -it ubuntu

Run a container in detached mode (in the background):

docker run -d <image-name>

List running containers:

docker ps

List all containers (including stopped):

docker ps -a

Stop a running container:

docker stop <container-id>

Start a stopped container:

docker start <container-id>

Restart a container:

docker restart <container-id>

Remove a stopped container:

docker rm <container-id>

Remove a running container (force):

docker rm -f <container-id>
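
As an illustrative lifecycle, the container commands above fit together like this (the nginx image and the container name web are just examples):

```shell
# Run nginx in the background, publishing host port 8080 to container port 80
docker run -d --name web -p 8080:80 nginx

docker ps          # the container shows up as "web"
docker stop web    # stop it
docker start web   # start it again
docker rm -f web   # force-remove it, even while running
```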

  8. Docker Network Commands

List Docker networks:

docker network ls

Create a new network:

docker network create <network-name>

Connect a container to a network:

docker network connect <network-name> <container-id>

Disconnect a container from a network:

docker network disconnect <network-name> <container-id>
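
For example, an already-running container can be attached to and detached from a second network without restarting it (names here are placeholders):

```shell
docker network create mynet

docker run -d --name web nginx       # starts on the default bridge network
docker network connect mynet web     # additionally attach it to mynet
docker network inspect mynet         # "web" now appears under "Containers"
docker network disconnect mynet web  # detach it again
```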

  9. Docker Volume Commands

Create a volume:

docker volume create <volume-name>

List Docker volumes:

docker volume ls

Inspect a volume:

docker volume inspect <volume-name>

Remove a volume:

docker volume rm <volume-name>
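
A short volume lifecycle tying these commands together (the volume name mydata is a placeholder):

```shell
docker volume create mydata
docker volume ls               # mydata appears in the list
docker volume inspect mydata   # shows the volume's mountpoint on the host
docker volume rm mydata        # fails if a container is still using it
```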

  10. Docker Compose Commands

Start services defined in a docker-compose.yml (on newer Docker installations, Compose also ships as the docker compose plugin; the commands below work the same either way):

docker-compose up

Stop and remove containers defined in docker-compose.yml:

docker-compose down

Build services defined in docker-compose.yml:

docker-compose build

Run a one-off command in a container from a docker-compose.yml:

docker-compose run <service-name> <command>
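
For instance, with a service named web defined in docker-compose.yml (the service name is an assumption), a typical session might be:

```shell
docker-compose up -d       # start all services in the background
docker-compose run web sh  # run a one-off shell in a fresh "web" container
docker-compose down        # stop and remove everything again
```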

  11. Docker Logs and Stats

View container logs:

docker logs <container-id>

Follow container logs in real-time:

docker logs -f <container-id>

View resource usage (CPU, memory, etc.) of running containers:

docker stats
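
A couple of useful variations of these commands:

```shell
# Show only the last 100 lines, then keep following new output
docker logs --tail 100 -f <container-id>

# Print a single stats snapshot instead of a live stream
docker stats --no-stream
```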

  12. Docker Exec (Running Commands in a Running Container)

Run a command inside a running container:

docker exec <container-id> <command>

Example:

docker exec -it <container-id> /bin/bash

  13. Docker System Management

Prune unused Docker objects (stopped containers, dangling images, and unused networks; pass --volumes to include unused volumes as well):

docker system prune

Prune unused images:

docker image prune

Prune unused volumes:

docker volume prune

These commands should cover most of your basic Docker tasks.

  14. Creating Your Own Docker Image

To create your own image, you need to write a Dockerfile.

Sample Dockerfile:

# Use an official Python runtime as a base image
FROM python:3.8-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

docker build -t my-python-app .                # the trailing dot is the build context (the directory containing the Dockerfile)

docker run -p 4000:80 my-python-app
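
With host port 4000 published to container port 80, the app should now be reachable locally; a quick sanity check, assuming the app serves HTTP:

```shell
curl http://localhost:4000
```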

  15. Docker Compose

If you have multiple containers (for example, a web app container and a database container), you can use Docker Compose to manage them.

For installation, see the official Docker Compose docs.

Example docker-compose.yml:

version: '3'

services:
  web:
    image: my-python-app
    ports:
      - "5000:80"
  redis:
    image: "redis:alpine"

Run the application:

docker-compose up

  • Pushing Images to Docker Hub
  • Docker Volumes (Persistent Data)

Docker containers are ephemeral by default: data written inside a container’s writable layer is lost when the container is removed. To keep data persistent, you use volumes.

Create a volume:

docker volume create <volume-name>

Attach the volume to a container:

docker run -v <volume-name>:/data <image-name>

Example:

docker run -v myvolume:/data ubuntu
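
To see the persistence in action, one container can write into the volume and a completely separate container can read it back (file and volume names are placeholders):

```shell
# Write a file into the volume from a throwaway container...
docker run --rm -v myvolume:/data ubuntu sh -c 'echo hello > /data/greeting.txt'

# ...and read it back from a different container
docker run --rm -v myvolume:/data ubuntu cat /data/greeting.txt
```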

  • Docker Networking

Docker provides networking capabilities to enable communication between containers.

List Docker networks:

docker network ls

Create a new network:

docker network create <network-name>

Connect a container to a network:

docker network connect <network-name> <container-name>

  1. Bridge Network (Default)

Use Case: Default network for standalone containers (i.e., not using Docker Compose or Swarm).

  2. Host Network

Use Case: When you want the container to share the host machine’s network stack directly.

Description: The container bypasses Docker’s virtual network, and directly uses the host’s network interface, effectively making the container use the same IP as the host.

Key Features:

No network isolation between host and container.

Reduces the overhead of network translation.

Useful for performance-critical applications that require high network throughput.

Command:

docker run --network host …

  3. None Network

Use Case: When you want a completely isolated container with no networking.

  4. Overlay Network

Use Case: For Docker Swarm services or multi-host Docker setups.

Description: Allows containers running on different Docker hosts to communicate securely over an encrypted overlay network.

Key Features:

Useful for distributed systems across multiple hosts.

Can span across different physical or cloud environments.

Works well with Docker Swarm and Kubernetes.

Command:

docker network create -d overlay my-overlay-network

docker service create --network=my-overlay-network …

  5. Macvlan Network

Use Case: When you want containers to appear as physical devices on the network, with their own unique MAC address.

Description: Assigns a MAC address to each container, and they appear as separate physical devices on the network.

Key Features:

Useful for scenarios where you need containers to have their own IP addresses within the same network as the host.

Ideal for legacy systems that expect each device to have its own MAC and IP address.

Command:

docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan-network

docker run --network=my-macvlan-network …

  6. Custom User-Defined Networks

Use Case: Customized network setups for specific use cases.

Description: Besides the default bridge and overlay networks, you can create your own network using bridge or overlay drivers.

Key Features:

Allows easier communication between containers via container names (instead of IP addresses).

More control over network rules, like subnet and gateway configuration.

Command:

docker network create my-custom-network

docker run --network=my-custom-network …
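
Name-based communication on a user-defined network can be sketched like this (container names and images are placeholders):

```shell
docker network create my-custom-network

docker run -d --name db --network=my-custom-network redis:alpine
docker run -d --name app --network=my-custom-network alpine sleep 3600

# "db" resolves via Docker's embedded DNS on user-defined networks
docker exec app ping -c 1 db
```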

Multi-stage Docker build

A multi-stage Docker build is an optimization technique that allows you to use multiple FROM statements in your Dockerfile. Each FROM statement starts a new stage, and you can selectively copy artifacts from one stage to another, minimizing the final image size. This is particularly useful for large builds, such as building a Go, Node.js, or Java application where you don’t want to include all the build dependencies in the final image.

Here’s an example of a multi-stage Dockerfile for a Django application. The first stage installs the dependencies, and the second stage creates a lean, production-ready image with only the necessary components to run the application.

Multi-Stage Dockerfile for Django

# Stage 1: Build environment

FROM python:3.9-slim AS builder

# Set environment variables to avoid creating .pyc files and to use unbuffered output

ENV PYTHONDONTWRITEBYTECODE 1

ENV PYTHONUNBUFFERED 1

# Set working directory

WORKDIR /app

# Install system dependencies

RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Install Python dependencies

COPY requirements.txt .

RUN pip install --user --no-warn-script-location -r requirements.txt

# Copy the entire Django project into the build environment

COPY . .

# Stage 2: Production environment

FROM python:3.9-alpine

# Set environment variables

ENV PYTHONDONTWRITEBYTECODE 1

ENV PYTHONUNBUFFERED 1

# Install runtime system dependencies in the production image
# (build tools are not needed here, since the Python packages were
# already compiled and installed in the builder stage)
RUN apk add --no-cache postgresql-libs

# Set working directory

WORKDIR /app

# Copy only the installed dependencies from the build stage

COPY --from=builder /root/.local /root/.local

# Copy the project files

COPY . .

# Set the environment path to include installed dependencies

ENV PATH=/root/.local/bin:$PATH

# Expose the port used by Django

EXPOSE 8000

# Collect static files and apply migrations at build time (note: running
# migrate during the build assumes the database is reachable then; in most
# deployments it is run at container start instead)
RUN python manage.py collectstatic --noinput && python manage.py migrate

# Command to run the application

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]

Explanation:

  • Stage 1: Build environment
      • Base image: python:3.9-slim is used to create the build environment.
      • System dependencies: We install necessary packages like gcc and libpq-dev for compiling dependencies and interacting with PostgreSQL.
      • Install Python dependencies: The requirements.txt file is used to install dependencies using pip.
      • Copy Django project: The entire project is copied over so that dependencies and the application code can be built.
  • Stage 2: Production environment
    • Base image: python:3.9-alpine is used to create a smaller, optimized image for production.
    • System dependencies: Necessary packages like postgresql-libs are installed to connect with PostgreSQL.
    • Copy from build stage: Only the necessary components (installed Python dependencies and project files) are copied from the builder stage.
    • Run Django management commands: collectstatic is used to collect static files, and migrate applies database migrations.
    • Gunicorn: The Django app is served using gunicorn for better performance in production.

Benefits of this setup:

  • Smaller final image: Since the final image only includes the necessary runtime dependencies, the image is smaller and more secure.
  • Isolation of build dependencies: The build dependencies (like gcc) are kept out of the final production image, which helps to reduce the attack surface.
  • Efficient build caching: By copying the requirements.txt first, Docker caches the dependency installation, reducing build times for subsequent builds if only the application code changes.

Make sure to replace myproject.wsgi:application with the appropriate WSGI module for your Django project.
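
Building and running the multi-stage image works like any other build; comparing image sizes is an easy way to see the benefit (the image tag is a placeholder):

```shell
docker build -t mydjangoapp .
docker images mydjangoapp     # compare the size against a single-stage build

docker run -p 8000:8000 mydjangoapp
```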
