How to Use Docker Compose

Nov 10, 2025 - 11:36

Docker Compose is a powerful orchestration tool that simplifies the management of multi-container Docker applications. While Docker allows you to run individual containers, Docker Compose enables you to define and manage complex applications composed of multiple interconnected services, such as web servers, databases, message queues, and caching layers, all through a single YAML configuration file. This makes it indispensable for developers, DevOps engineers, and system administrators aiming to replicate production environments locally, streamline deployment workflows, and accelerate development cycles.

Before Docker Compose, managing multi-service applications required writing shell scripts to start, stop, and link containers manually. This approach was error-prone, difficult to version control, and inconsistent across environments. Docker Compose eliminates these pain points by offering a declarative, repeatable, and portable method to define application stacks. Whether you're building a simple LAMP stack or a microservices architecture with Redis, PostgreSQL, and Node.js, Docker Compose provides the structure and automation needed to make your workflow efficient and scalable.

In this comprehensive guide, we'll walk you through everything you need to know to use Docker Compose effectively: from installation and basic syntax to advanced configurations, real-world examples, and industry best practices. By the end of this tutorial, you'll be equipped to design, deploy, and maintain robust containerized applications with confidence.

Step-by-Step Guide

Prerequisites

Before diving into Docker Compose, ensure your system meets the following requirements:

  • Docker Engine installed (version 17.06.0 or later)
  • Basic familiarity with the command line
  • A text editor (e.g., VS Code, Sublime Text, or Nano)

You can verify Docker is installed by running:

docker --version

If Docker is not installed, visit Docker's official documentation to install it for your operating system. Docker Compose is included by default in Docker Desktop for Windows and macOS. On Linux, you may need to install it separately.

Installing Docker Compose

On Linux systems, Docker Compose is not always bundled with Docker Engine (newer Docker releases ship Compose V2 as a CLI plugin, invoked as docker compose). To install the standalone binary, execute the following commands:

sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version

On Windows and macOS, Docker Compose is automatically installed with Docker Desktop. No additional steps are required.

Understanding the docker-compose.yml File

The heart of Docker Compose is the docker-compose.yml file. This YAML-formatted file defines the services, networks, and volumes that make up your application. Each service corresponds to a container, and you can specify the image, environment variables, ports, dependencies, and more.

Here's a minimal example:

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password

Let's break this down:

  • version: Specifies the Compose file format version. Use 3.8 for modern Docker environments (newer Compose releases treat this key as optional).
  • services: A top-level key defining all containers in the stack.
  • web: A service name; you can choose any descriptive name.
  • image: The Docker image to use (e.g., nginx:latest).
  • ports: Maps host port 80 to container port 80.
  • db: Another service using PostgreSQL with environment variables for database configuration.

Save this as docker-compose.yml in your project directory.

Starting Your Application

Once your docker-compose.yml file is ready, navigate to the directory containing it in your terminal and run:

docker-compose up

This command downloads the specified images (if not already present), creates containers for each service, and starts them. By default, it runs in the foreground and logs output from all containers. To run in detached mode (in the background), use:

docker-compose up -d

You can verify that your containers are running with:

docker-compose ps

This lists all services, their current state, ports, and container IDs.

Stopping and Removing Services

To stop the running containers without removing them:

docker-compose stop

To stop and remove containers, networks, and volumes defined in the file:

docker-compose down

Use docker-compose down -v to also remove named volumes declared in the volumes section of the Compose file. This is useful for cleaning up persistent data between development cycles.

Building Custom Images

While using pre-built images from Docker Hub is convenient, you'll often need to build custom images for your application code. To do this, replace the image key with build:

version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: myapp

In this example, build: . tells Docker Compose to build an image from the Dockerfile located in the current directory. The volumes section mounts your local code into the container, enabling live reloads during development. The depends_on key ensures the database starts before the application, though note that it does not wait for the database to be ready, only for the container to start.

Using Environment Variables

To manage configuration across environments (development, staging, production), use environment variables. Define them in a .env file in the same directory as your docker-compose.yml:

DB_HOST=db
DB_PORT=5432
DB_NAME=myapp
DB_USER=admin
DB_PASS=secret123

Then reference them in your Compose file:

version: '3.8'

services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASS}
    ports:
      - "${DB_PORT}:5432"

Docker Compose automatically loads variables from the .env file. You can also override them at runtime by exporting them in your shell:

export DB_PASS=anothersecret
docker-compose up
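Compose's variable substitution also supports inline defaults and required-variable checks, which keeps the file usable even when a variable is unset. A short sketch, reusing the variable names from the .env example above:

```yaml
services:
  db:
    image: postgres:15
    environment:
      # abort with an error message if DB_PASS is not set anywhere
      POSTGRES_PASSWORD: ${DB_PASS:?DB_PASS must be set}
    ports:
      # fall back to 5432 when DB_PORT is unset or empty
      - "${DB_PORT:-5432}:5432"
```

Defaults are handy for optional settings; the required form catches missing secrets at startup instead of at first connection.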

Networks and Volumes

By default, Docker Compose creates a default network for your services so they can communicate with each other using service names as hostnames. You can customize this behavior:

version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    networks:
      - frontend
  app:
    image: myapp:latest
    networks:
      - frontend
      - backend
  db:
    image: postgres:15
    networks:
      - backend
    volumes:
      - db_data:/var/lib/postgresql/data

networks:
  frontend:
  backend:

volumes:
  db_data:

In this example:

  • frontend network connects the web server and app service.
  • backend network isolates the app and database for security.
  • db_data is a named volume that persists PostgreSQL data even after containers are removed.

Named volumes are preferred over bind mounts for production data because they are managed by Docker and are portable across systems.

Scaling Services

Docker Compose allows you to scale services horizontally. For example, to run three instances of your web server:

docker-compose up --scale web=3

Each instance is assigned a unique name (e.g., web-1, web-2, web-3 in Compose V2; older releases used underscores, like web_1). Note that scaling only works with services that don't publish a fixed host port or that use unique ports per instance. For services like databases, scaling is not recommended unless you're using clustering or replication features.
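One way around the host-port conflict is to publish a range of host ports, so that each replica binds a different free port from the range. A sketch, assuming an nginx-based web service (in production, a reverse proxy or load balancer in front of the replicas is the more common pattern):

```yaml
services:
  web:
    image: nginx:1.25
    ports:
      # each scaled replica binds one available host port from this range
      - "8080-8082:80"
```

With this mapping, docker-compose up -d --scale web=3 can start all three replicas without a port collision.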

Best Practices

Use Specific Image Tags

Avoid using latest in production. Tags like nginx:1.25 or node:20-alpine ensure reproducibility. The latest tag can change unexpectedly, leading to untested or incompatible versions being deployed. Pinning versions is a cornerstone of reliable infrastructure.

Organize Projects with Separate Compose Files

For complex applications, split your configuration into multiple files:

  • docker-compose.yml: Base configuration
  • docker-compose.dev.yml: Development overrides (e.g., volume mounts, debug ports)
  • docker-compose.prod.yml: Production settings (e.g., environment variables, resource limits)

Then combine them using:

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

This modular approach improves maintainability and allows environment-specific customization without duplicating configuration.
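As an illustration, a hypothetical docker-compose.dev.yml might contain only the development-specific additions; Compose merges it over the base file, so the service name ("app" here is an assumption) must match the base definition:

```yaml
# docker-compose.dev.yml -- example override (service name "app" assumed)
services:
  app:
    volumes:
      - .:/app           # mount local source for live reloads
    environment:
      - NODE_ENV=development
    ports:
      - "9229:9229"      # expose a debugger port only in development
```

The base file stays clean, and each environment adds or overrides only what it needs.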

Minimize Container Size

Use lightweight base images like alpine or distroless where possible. For example:

FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Smaller images reduce download times, improve security (fewer packages = fewer vulnerabilities), and optimize storage.

Use Health Checks

Define health checks to ensure services are truly ready before dependent services start:

version: '3.8'

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 40s

While depends_on only waits for container startup, health checks wait for the service to be responsive. This prevents application crashes due to premature connections.
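To make a dependent service actually wait on that health check, combine it with the long form of depends_on. A minimal sketch (the service and image names here are illustrative):

```yaml
services:
  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy  # wait until db's healthcheck passes
```

The short list form of depends_on only orders container startup; the condition form gates startup on readiness.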

Limit Resource Usage

Prevent one service from consuming all system resources by setting CPU and memory limits:

services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.2'
          memory: 256M

Historically, deploy settings were honored only when running Compose against Docker Swarm mode; recent Compose (V2) releases also apply resource limits in standalone use. Older standalone setups used mem_limit and cpu_shares instead, though they're deprecated in newer file formats.

Secure Sensitive Data

Never hardcode passwords, API keys, or secrets in your docker-compose.yml. Use Docker secrets (in Swarm mode) or external secret management tools like HashiCorp Vault. For local development, use environment files (.env) and add them to .gitignore to prevent accidental commits.
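As a file-based sketch of this idea (the official postgres image reads *_FILE variables; the secret file path is an assumption for illustration):

```yaml
services:
  db:
    image: postgres:15
    environment:
      # the official postgres image reads the password from this file
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt  # keep this file out of version control
```

In Swarm mode, the file: source would be replaced by an external secret managed by the cluster.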

Enable Logging and Monitoring

Configure logging drivers to centralize logs:

services:
  web:
    image: nginx:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Use tools like docker-compose logs -f to monitor output, or integrate with ELK stack or Loki for production-grade log aggregation.

Version Control Your Configuration

Treat your docker-compose.yml and related files as code. Commit them to Git alongside your application source. This ensures:

  • Reproducible environments across teams
  • Change tracking and rollback capability
  • Integration with CI/CD pipelines

Always include a README.md explaining how to start the application and any prerequisites.

Tools and Resources

Visual Tools for Docker Compose

While the CLI is powerful, visual interfaces can enhance productivity:

  • Docker Desktop: Offers a GUI to view containers, logs, and resource usage. Ideal for beginners and macOS/Windows users.
  • Portainer: A lightweight web UI for managing Docker environments. Supports Compose stacks and allows you to deploy and monitor services visually.
  • Lazydocker: A terminal-based UI built with Go. Provides real-time logs, service status, and quick actions with keyboard shortcuts.

To install Portainer:

docker run -d -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer-ce

Access it at http://localhost:9000 and connect to your local Docker daemon.

Linting and Validation

Validate your docker-compose.yml syntax before deployment:

  • docker-compose config: Checks for syntax errors and resolves variables. Run it to see the final merged configuration.
  • YAML Linters: Use tools like yamllint or VS Code extensions to catch indentation and structure issues.
  • Checkov: A static analysis tool that scans for security misconfigurations in IaC files, including Docker Compose.

Template Repositories

Start with proven templates:

  • Awesome Compose: an official Docker repository with over 50 real-world examples (Node.js + Redis, Django + PostgreSQL, etc.)
  • 12factor.net: guidelines for building cloud-native apps, many of which align with Docker Compose best practices.
  • Dockerize: a utility to wait for services to be ready before starting your app.

CI/CD Integration

Integrate Docker Compose into your CI/CD pipeline:

  • Use GitHub Actions or GitLab CI to run tests in a Compose environment before deployment.
  • Build and push images to a registry (e.g., Docker Hub, GitHub Packages) as part of the pipeline.
  • Use docker-compose pull and docker-compose up -d to deploy to staging or production servers.

Example GitHub Actions workflow:

name: Deploy with Docker Compose

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Docker
        uses: docker/setup-docker-action@v3
      - name: Deploy
        run: |
          docker-compose up -d

Documentation and Learning

Keep these official resources handy:

  • Docker Compose documentation (docs.docker.com/compose): commands, the Compose file format, and tutorials
  • Compose Specification (compose-spec.io): the current definition of the Compose file format
  • Docker CLI reference (docs.docker.com): details for docker and docker compose subcommands

Real Examples

Example 1: WordPress with MySQL

WordPress is a common use case for Docker Compose. Here's a complete configuration with persistent storage, health checks, and environment isolation:

version: '3.8'

services:
  db:
    image: mysql:8.0
    container_name: wordpress_db
    restart: unless-stopped
    env_file: .env
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - wordpress
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p$MYSQL_PASSWORD", "--silent"]
      retries: 3
      timeout: 5s

  wordpress:
    image: wordpress:latest
    container_name: wordpress
    restart: unless-stopped
    ports:
      - "8000:80"
    env_file: .env
    depends_on:
      db:
        condition: service_healthy
    volumes:
      - ./wp-content:/var/www/html/wp-content
    networks:
      - wordpress
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: $MYSQL_USER
      WORDPRESS_DB_PASSWORD: $MYSQL_PASSWORD
      WORDPRESS_DB_NAME: $MYSQL_DATABASE

volumes:
  db_data:

networks:
  wordpress:

And the corresponding .env file:

MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wpsecurepass
MYSQL_ROOT_PASSWORD=rootpass

Run with:

docker-compose up -d

Access WordPress at http://localhost:8000. This setup includes persistent storage, health checks, and environment isolation.

Example 2: Node.js + Redis + PostgreSQL Microservice

A modern API stack using Express.js, Redis for caching, and PostgreSQL for data persistence:

version: '3.8'

services:
  api:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    depends_on:
      redis:
        condition: service_healthy
      db:
        condition: service_healthy
    environment:
      - REDIS_HOST=redis
      - DB_HOST=db
      - NODE_ENV=development
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  db:
    image: postgres:15
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: myapi
      POSTGRES_USER: apiuser
      POSTGRES_PASSWORD: apipass
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U apiuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - app-network

volumes:
  pg_data:

networks:
  app-network:

This example demonstrates:

  • Custom image build from local code
  • Health checks for dependency readiness
  • Network isolation
  • Volume persistence for database

Example 3: Multi-Service E-Commerce App

A scalable e-commerce platform with:

  • Frontend (React)
  • Backend API (Python/FastAPI)
  • Database (PostgreSQL)
  • Message Queue (RabbitMQ)
  • Cache (Redis)

version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    volumes:
      - ./frontend:/app
    depends_on:
      - api
    networks:
      - frontend

  api:
    build: ./api
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/ecommerce
      - REDIS_URL=redis://redis:6379
      - RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy
    networks:
      - frontend   # reachable from the frontend service
      - backend

  db:
    image: postgres:15
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ecommerce
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

  redis:
    image: redis:7-alpine
    networks:
      - backend
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    healthcheck:
      # required because api depends on this service with condition: service_healthy
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

volumes:
  db_data:

networks:
  frontend:
  backend:

This example showcases how Docker Compose scales to enterprise-level architectures while remaining readable and maintainable.

FAQs

What is the difference between Docker and Docker Compose?

Docker is the platform that allows you to create, run, and manage individual containers. Docker Compose is a tool built on top of Docker that lets you define and run multi-container applications using a single configuration file. Think of Docker as the engine and Docker Compose as the dashboard that controls multiple engines together.

Can Docker Compose be used in production?

Yes, but with caveats. Docker Compose is excellent for local development, testing, and small-scale deployments. For large-scale, high-availability production environments, consider using Docker Swarm or Kubernetes for orchestration, service discovery, auto-scaling, and rolling updates. However, many teams use Docker Compose for staging and CI/CD pipelines without issue.

Why isnt my service starting even though I used depends_on?

depends_on only waits for the container to start, not for the service inside to be ready. For example, PostgreSQL may be running but still initializing its database. Use healthcheck to ensure the service is truly available before depending on it.

How do I update my application after making code changes?

If you're using a volume mount (e.g., - .:/app), changes are reflected immediately. If you're building a custom image, rebuild it with:

docker-compose build

docker-compose up -d

Or use docker-compose up --build -d to rebuild and restart in one command.

How do I access logs from a specific service?

Use:

docker-compose logs web

To follow logs in real time:

docker-compose logs -f web

Can I use Docker Compose with non-Docker applications?

No. Docker Compose is designed exclusively for orchestrating Docker containers. However, you can use tools like Ansible, Terraform, or systemd alongside Docker Compose to manage non-containerized services in the same environment.

Is Docker Compose compatible with ARM64 (Apple Silicon)?

Yes. Docker Desktop for Mac supports Apple Silicon natively. Ensure your images are available for the arm64 architecture. Use multi-platform images (e.g., node:20 or postgres:15) which are built for multiple architectures.

What happens if I delete a volume?

Deleting a volume using docker-compose down -v permanently removes all data stored in it. Use this with caution, especially for databases. Always back up critical data before performing destructive operations.

How do I run a one-off command in a service?

Use docker-compose run to execute a command in a new container based on the service definition:

docker-compose run web python manage.py migrate

This is useful for running database migrations, console shells, or cleanup scripts without affecting running containers. To run a command inside an already-running container instead, use docker-compose exec.

Conclusion

Docker Compose is not just a convenience tool; it's a fundamental component of modern software development. By enabling developers to define complex, multi-service applications in a single, version-controlled file, it bridges the gap between local development and production deployment. Whether you're building a personal project, a startup MVP, or a corporate microservice architecture, Docker Compose provides the speed, consistency, and clarity needed to succeed.

In this guide, we've covered everything from installation and basic syntax to advanced configurations, security best practices, real-world examples, and integration with CI/CD pipelines. You now understand how to structure your applications, manage dependencies, persist data, and scale services efficiently.

As you continue to use Docker Compose, remember the core principles: reproducibility, isolation, and automation. Avoid hardcoding secrets, always use specific image tags, and leverage health checks to ensure reliability. Combine these practices with tools like Portainer, GitHub Actions, and linting utilities to create a robust, professional workflow.

The future of application deployment is containerized, and Docker Compose is your gateway into that world. Master it, and you'll not only streamline your own workflow, you'll also empower your entire team to deliver software faster, safer, and with greater confidence.