How to Run Containers


Running containers has become a foundational skill for modern software development, DevOps engineering, and cloud infrastructure management. Containers provide a lightweight, portable, and consistent way to package applications along with their dependencies, ensuring they run reliably across different computing environments, from a developer's laptop to production servers in the cloud. Unlike traditional virtual machines, containers share the host operating system's kernel, making them faster to start, more resource-efficient, and easier to scale. Whether you're deploying a simple web application, a microservice architecture, or a machine learning model, understanding how to run containers is no longer optional; it's essential.

This guide offers a comprehensive, step-by-step walkthrough on how to run containers effectively. You'll learn the core concepts, practical execution methods, industry best practices, essential tools, real-world examples, and answers to frequently asked questions. By the end of this tutorial, you'll have the knowledge and confidence to containerize, run, and manage applications using industry-standard tools like Docker and Podman.

Step-by-Step Guide

Understanding Container Fundamentals

Before running your first container, it's critical to understand what a container is and how it differs from other deployment methods. A container is a standardized unit of software that packages code, runtime, system tools, libraries, and settings into a single, isolated environment. This isolation ensures that the application behaves consistently regardless of where it is deployed.

Containers rely on operating system-level virtualization. They use kernel features such as namespaces (for isolation) and cgroups (for resource limiting) to create lightweight, portable environments. Unlike virtual machines, which emulate entire operating systems and require significant overhead, containers share the host OS kernel, making them far more efficient in terms of memory usage and startup time.
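
You can see both mechanisms in action once Docker is installed (the commands below are a quick illustration; the cgroup path shown assumes a cgroup v2 host):

# PID namespace: the container sees only its own processes, not the host's
docker run --rm alpine ps aux

# cgroups: a memory limit set at run time is visible from inside the container
docker run --rm --memory=256m alpine cat /sys/fs/cgroup/memory.max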

Popular container runtimes include Docker, Podman, and containerd. While Docker is the most widely adopted, alternatives like Podman offer rootless operation and better integration with modern Linux security models. For this guide, we'll focus on Docker as the primary tool, but we'll note where Podman commands differ.
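
Podman's CLI deliberately mirrors Docker's, so most commands in this guide translate directly. A quick illustration (Podman may require fully qualified image names, depending on its registry configuration):

podman run -d -p 8080:80 --name my-nginx docker.io/library/nginx

Many teams simply set alias docker=podman and reuse their existing muscle memory.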

Prerequisites

Before you begin, ensure your system meets the following requirements:

  • A modern operating system: Linux (Ubuntu 20.04+, CentOS 8+, Debian 11+), macOS (10.15+), or Windows 10/11 Pro/Enterprise (with WSL2 enabled)
  • At least 4GB of RAM (8GB recommended for complex workloads)
  • Internet connection to pull container images from registries

On Linux, ensure your user is part of the docker group to avoid using sudo for every command:

sudo usermod -aG docker $USER
newgrp docker

On macOS and Windows, Docker Desktop provides a seamless installation experience with built-in Kubernetes and resource management.

Installing Docker

Docker Engine is the core component that runs containers. Installation varies slightly by platform.

On Ubuntu/Debian:

sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

On CentOS/RHEL:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker

On macOS: Download and install Docker Desktop from docker.com/products/docker-desktop.

On Windows: Install Docker Desktop with WSL2 backend. Enable WSL2 via PowerShell as Administrator:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Then download Docker Desktop and restart your system.

Verifying the Installation

After installation, verify Docker is working correctly:

docker --version
docker run hello-world

If you see a message like "Hello from Docker!", your installation is successful. This command pulls the hello-world image from Docker Hub and runs it in a temporary container.

Running Your First Container

Containers are launched from images. An image is a read-only template that includes everything needed to run an application. To run a container, use the docker run command.

For example, to run an Nginx web server:

docker run -d -p 8080:80 --name my-nginx nginx

Lets break down this command:

  • -d: Run the container in detached mode (in the background)
  • -p 8080:80: Map port 8080 on the host to port 80 inside the container
  • --name my-nginx: Assign a custom name to the container
  • nginx: The image name to use

Once running, open your browser and navigate to http://localhost:8080. You should see the default Nginx welcome page.
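
You can also verify from the command line (assuming curl is installed):

curl -I http://localhost:8080   # expect an HTTP 200 response from Nginx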

Managing Running Containers

Docker provides several commands to inspect and manage containers:

docker ps                # List running containers
docker ps -a             # List all containers (including stopped ones)
docker logs my-nginx     # View container logs
docker stop my-nginx     # Stop a running container
docker start my-nginx    # Restart a stopped container
docker rm my-nginx       # Remove a stopped container
docker rmi nginx         # Remove the image

To access a running container's shell (e.g., for debugging), use:

docker exec -it my-nginx /bin/bash

This opens an interactive bash session inside the container. You can inspect files, test configurations, or troubleshoot issues directly.
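
Note that minimal images (Alpine-based ones, for example) often ship without bash; in that case, fall back to sh:

docker exec -it my-nginx sh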

Building a Custom Container Image

While pre-built images from Docker Hub are convenient, you'll often need to create custom images tailored to your application. This is done using a Dockerfile.

Create a directory for your project:

mkdir my-app
cd my-app

Create a file named Dockerfile with the following content:

FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]

Next, create a simple Python app (app.py):

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from a custom container!"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Create a requirements.txt:

Flask==2.3.3

Now build the image:

docker build -t my-python-app .

Run it:

docker run -d -p 5000:5000 --name my-app my-python-app

Visit http://localhost:5000 to see your application live.

Using Docker Compose for Multi-Container Applications

Most applications require multiple services, like a web server, a database, and a cache. Docker Compose simplifies managing multi-container applications using a YAML file.

Create a docker-compose.yml file:

version: '3.8'

services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: redis:alpine

Run the entire stack with:

docker-compose up -d

Check status:

docker-compose ps

Stop and remove everything:

docker-compose down

Docker Compose is ideal for local development, testing, and even small-scale production deployments.

Best Practices

Use Minimal Base Images

Always prefer slim or Alpine-based images (e.g., python:3.10-slim or node:18-alpine). Smaller images reduce attack surface, speed up downloads, and minimize storage usage. Avoid the :latest tag in production; pin to specific versions to ensure reproducibility.
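
For example, using the nginx image (the digest form is shown with a placeholder, not a real value):

docker pull nginx:latest          # floating tag: contents can change between pulls
docker pull nginx:1.25            # pinned tag: reproducible across environments
# Strongest guarantee: pin by immutable digest
# docker pull nginx@sha256:<digest>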

Implement Multi-Stage Builds

Multi-stage builds allow you to use multiple FROM statements in a single Dockerfile. This enables you to compile your application in one stage and copy only the necessary artifacts into a minimal runtime image.

Example:

FROM golang:1.20 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/myapp .
CMD ["./myapp"]

This results in a final image under 10MB instead of hundreds of MBs.

Never Run Containers as Root

By default, containers run as the root user, which poses a security risk if compromised. Create a non-root user inside the container:

FROM python:3.10-slim
RUN groupadd --gid 1001 appuser && useradd --uid 1001 --gid appuser --create-home appuser
USER appuser
WORKDIR /home/appuser
COPY --chown=appuser:appuser . .
CMD ["python", "app.py"]

This prevents privilege escalation attacks.

Use .dockerignore Files

Just as you use .gitignore, create a .dockerignore file to exclude unnecessary files from the build context:

.git
node_modules
.env
README.md
Dockerfile
.dockerignore

This reduces build time and prevents sensitive files from being included in the image.

Limit Resource Usage

Containers can consume excessive CPU or memory if left unbounded. Use resource constraints:

docker run -d \
  --name my-app \
  --memory="512m" \
  --cpus="1.0" \
  my-python-app

In Docker Compose:

services:
  web:
    image: my-python-app
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'

Secure Your Images

Scan images for vulnerabilities using tools like Docker Scout, Trivy, or Clair:

trivy image my-python-app

Regularly update base images and re-build your containers. Automate this process with CI/CD pipelines.
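
For example, Trivy can gate a pipeline by failing the build when serious findings are present:

# Fail (non-zero exit code) if HIGH or CRITICAL vulnerabilities are found
trivy image --severity HIGH,CRITICAL --exit-code 1 my-python-app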

Log Management and Monitoring

Use structured logging (JSON) instead of plain text. Forward logs to centralized systems like ELK Stack, Loki, or Datadog. Avoid writing logs to the container's filesystem; use stdout/stderr instead.

Enable Docker's built-in logging drivers:

docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  my-app

Use Environment Variables for Configuration

Never hardcode secrets or configuration values in images. Use environment variables:

docker run -d \
  -e DATABASE_URL=postgresql://user:pass@db:5432/mydb \
  -e API_KEY=your-key-here \
  my-app

In Docker Compose:

environment:
  - DATABASE_URL=postgresql://user:pass@db:5432/mydb
  - API_KEY=${API_KEY}

Use .env files to manage secrets securely and avoid committing them to version control.
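
A minimal sketch of that pattern (the values are placeholders):

# .env - keep this file out of version control (add it to .gitignore)
API_KEY=replace-with-a-real-key

Docker Compose automatically reads a .env file in the project directory and substitutes ${API_KEY} in docker-compose.yml.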

Tools and Resources

Core Tools

  • Docker: The most popular container runtime and toolchain for building, running, and managing containers.
  • Podman: A Docker-compatible alternative that runs without a daemon and supports rootless containers. Ideal for security-conscious environments.
  • Docker Compose: Orchestrate multi-container applications with a single YAML file.
  • BuildKit: A modern backend for Docker builds offering faster, more secure, and parallelized builds. Enable it with DOCKER_BUILDKIT=1 (see the example after this list).
  • Kubernetes: The industry standard for orchestrating containers at scale. Use Minikube or Kind for local development.
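
As noted in the BuildKit entry above, enabling it is a one-line change (recent Docker releases use BuildKit by default):

DOCKER_BUILDKIT=1 docker build -t my-python-app .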

Image Registries

  • Docker Hub: Public registry with millions of images. Free tier available.
  • GitHub Container Registry (GHCR): Integrated with GitHub repositories. Ideal for CI/CD workflows.
  • Amazon ECR: Secure, scalable registry for AWS users.
  • Google Container Registry (GCR): Native registry for Google Cloud Platform.
  • GitLab Container Registry: Built into GitLab CI/CD pipelines.

Security and Monitoring Tools

  • Trivy: Open-source vulnerability scanner for containers and infrastructure.
  • Docker Scout: Docker's official image scanning and policy enforcement tool.
  • Clair: Static analysis tool for identifying vulnerabilities in container images.
  • Prometheus + Grafana: Monitor container metrics like CPU, memory, and network usage.
  • Logstash + Elasticsearch + Kibana (ELK): Centralized log aggregation and visualization.

Command-Line Utilities

Enhance your workflow with these helpful utilities:

  • docker-slim: Minifies Docker images by analyzing runtime behavior.
  • docker system df: Shows disk usage per image, container, and volume.
  • docker-gen: Generates configuration files from templates using container metadata.
  • docker compose ls: Lists running Compose projects at a glance.

Real Examples

Example 1: Running a WordPress Site with MySQL

WordPress requires a web server and a database. Heres how to run it with Docker Compose:

version: '3.8'

services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: rootpassword
    volumes:
      - db_data:/var/lib/mysql
    restart: always

  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
    restart: always

volumes:
  db_data:
  wp_data:

Run with docker-compose up -d. Access WordPress at http://localhost:8000. This setup is perfect for local development or staging environments.

Example 2: Containerized Node.js API with Redis Cache

Build a REST API that uses Redis for caching:

version: '3.8'

services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      REDIS_HOST: redis
    depends_on:
      - redis

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

The Node.js app connects to Redis using redis://redis:6379 as the connection string. This architecture is scalable and reusable across environments.
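
Before wiring up application code, you can confirm the cache is reachable over the Compose network:

docker-compose exec redis redis-cli ping   # expect: PONG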

Example 3: Machine Learning Inference with TensorFlow

Deploy a pre-trained model as a containerized API:

FROM tensorflow/tensorflow:2.13.0-jupyter
WORKDIR /app
COPY model.h5 .
COPY app.py .
RUN pip install flask numpy
EXPOSE 5000
CMD ["python", "app.py"]

The app.py file loads the model and exposes a /predict endpoint. This allows data scientists to share models without requiring users to install Python dependencies.
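
To try it locally, build and run the image, then call the endpoint (the image tag and JSON payload shape here are illustrative; the real payload depends on what app.py expects):

docker build -t tf-inference .
docker run -d -p 5000:5000 --name tf-inference tf-inference
curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"inputs": [[1.0, 2.0, 3.0]]}'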

Example 4: CI/CD Pipeline with GitHub Actions

Automate container builds and pushes:

name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

This pipeline automatically builds and pushes a new image whenever code is pushed to the main branch, enabling continuous delivery.

FAQs

What is the difference between a container and a virtual machine?

Containers share the host operating system's kernel and isolate processes using namespaces and cgroups. Virtual machines emulate entire operating systems, including their own kernel, using a hypervisor. Containers are lighter, faster to start, and more resource-efficient. VMs offer stronger isolation and can run different OSes, making them better suited for legacy applications or multi-tenant environments requiring strict separation.

Can I run Windows containers on Linux?

No. Containers rely on the host OS kernel. Linux containers run on Linux hosts, and Windows containers run on Windows hosts. However, Docker Desktop on Windows can switch between Linux and Windows container modes using a toggle in the UI, while Docker Desktop on macOS runs Linux containers inside a lightweight virtual machine. This allows developers to test Linux containers anywhere and Windows containers on Windows.

Why should I avoid using the :latest tag in production?

The :latest tag is mutable; it can point to different image versions over time. This makes deployments non-reproducible. If a new version of the image is pushed, your production container may suddenly start running untested code. Always pin to a specific version (e.g., nginx:1.25) to ensure stability and auditability.

How do I update a running container?

You cannot update a running container in place. Instead, stop and remove the old container, then run a new one from the updated image:

docker stop my-app
docker rm my-app
docker pull my-image:latest
docker run -d --name my-app my-image:latest

In production, use orchestration tools like Kubernetes to perform rolling updates without downtime.

How much disk space do containers use?

Container images are stored in layers. Multiple containers using the same base image share those layers, reducing overall disk usage. A typical small application image is 100 to 500 MB. However, logs, volumes, and build caches can accumulate. Use docker system prune to clean unused objects regularly.
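
Two commands cover most housekeeping:

docker system df       # summarize space used by images, containers, and volumes
docker system prune    # remove stopped containers, dangling images, unused networks
# Add -a to also remove unused (not just dangling) images; use --volumes with care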

Are containers secure?

Containers are secure when configured properly. Key practices include running as non-root, scanning for vulnerabilities, limiting resource access, and using read-only filesystems where possible. However, misconfigurations (e.g., exposing internal ports, using privileged mode) can introduce risks. Treat containers like any other service: apply the principle of least privilege and monitor for anomalies.
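
For example, a hardened run combining several of these practices (flags per the standard Docker CLI):

docker run -d --read-only --tmpfs /tmp --cap-drop ALL my-app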

Can I run containers without Docker?

Yes. Alternatives include Podman (a drop-in replacement), containerd (used by Kubernetes), and CRI-O. Podman is daemonless, supports rootless operation, and is compatible with Docker CLI commands, which makes it particularly popular in enterprise environments; containerd and CRI-O are lower-level runtimes typically driven by an orchestrator rather than invoked directly.

What's the best way to persist data in containers?

Use Docker volumes or bind mounts. Volumes are managed by Docker and are the preferred method for data persistence:

docker run -v mydata:/app/data my-app

Bind mounts link a host directory to a container path:

docker run -v /host/path:/container/path my-app

For databases and stateful applications, always use volumes to avoid data loss when containers are removed.
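
Volumes can also be created and inspected independently of any container:

docker volume create mydata
docker volume inspect mydata   # shows the volume's mountpoint on the host
docker volume ls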

How do containers help with microservices architecture?

Containers enable independent deployment, scaling, and management of individual microservices. Each service can be built, tested, and deployed separately using its own container image. This promotes modularity, fault isolation, and technology diversity; each service can use a different language or framework. Orchestration platforms like Kubernetes automate scaling, service discovery, and load balancing across containerized microservices.

Is containerization suitable for legacy applications?

Yes, but with caveats. Monolithic applications designed for traditional OS environments may require refactoring to function properly in containers. However, lift-and-shift containerization, wrapping legacy apps in containers without code changes, is a common first step toward modernization. It provides benefits like consistent deployment and easier migration to the cloud, even before full refactoring.

Conclusion

Running containers is a transformative capability that bridges the gap between development and operations. By encapsulating applications in standardized, portable units, containers eliminate the "it works on my machine" problem and empower teams to deploy faster, scale smarter, and operate more reliably. This guide has walked you through the full lifecycle: from installing Docker and running your first container, to building custom images, orchestrating multi-service applications, and applying enterprise-grade best practices.

As you continue your journey, remember that containerization is not just a technical tool; it's a cultural shift toward automation, reproducibility, and resilience. Embrace the principles of immutable infrastructure, declarative configuration, and continuous delivery. Use the tools and examples provided here as a foundation, and expand your knowledge by exploring Kubernetes, service meshes, and infrastructure-as-code.

Whether you're a developer, DevOps engineer, or systems administrator, mastering how to run containers opens doors to modern cloud-native architectures. Start small, experiment often, and build confidence through practice. The future of software delivery is containerized, and you're now equipped to lead the way.