Complete Guide to Docker and Docker Compose: Commands and Best Practices
Fundamental Docker Concepts
Docker is an open-source system that facilitates application development, deployment, and execution through containerization. It solves compatibility problems by packaging an application and its dependencies into self-contained, portable units that run consistently across any environment.
Key Components
- Images: Lightweight, self-contained, executable software packages that contain everything needed to run an application (code, runtime, libraries, configurations). They serve as the template from which containers are created.
- Containers: Runtime instances of a Docker image. They isolate software from its environment, allowing many containers to run simultaneously on a single host.
- Docker Hub: A service provided by Docker for finding and sharing container images.
Docker CLI Commands (Basic)
| Task | CLI Command | Description |
|---|---|---|
| Images | | |
| Build an Image | docker build -t <image_name> . | Builds an image from a Dockerfile in current directory. |
| Build without cache | docker build -t <image_name> . --no-cache | Forces rebuilding without using cache. |
| List Images | docker images | Shows all local images. |
| Remove an Image | docker rmi <image_name> | Removes a local image. |
| Remove Unused Images | docker image prune | Removes dangling images (add -a to remove all unused images). |
| Publish to Docker Hub | docker push <user>/<image_name> | Uploads an image to Docker Hub. |
| Containers | | |
| Run Container (with name) | docker run --name <container_name> <image_name> | Creates and runs a container from an image. |
| Run in Background | docker run -d <image_name> | Runs container in detached mode (background). |
| Run and Publish Ports | docker run -p <host_port>:<container_port> <image_name> | Maps a container port to host. |
| List Containers (Running) | docker ps | Shows currently running containers. |
| List All (Running and Stopped) | docker ps --all | Shows all containers, regardless of their state. |
| Start/Stop/Restart | docker start\|stop\|restart <container_name> | Starts, stops, or restarts an existing container. docker stop sends a SIGTERM signal for graceful shutdown. |
| Remove a Container | docker rm <container_name> | Removes a stopped container. |
| Enter Container | docker exec -it <container_name> sh | Opens an interactive shell inside a running container. |
| View Logs | docker logs -f <container_name> | Shows and follows container logs. |
| Inspect Container | docker inspect <container_name> | Shows container details, typically in JSON format. |
| Monitor Resources | docker stats | Shows runtime resource usage statistics. |
Advanced Docker: Best Practices for Images and Security
Multi-stage Builds
Multi-stage builds optimize Dockerfiles while keeping them readable. They allow creating a small, secure production image that contains only necessary binaries or artifacts, without including development tools.
- Usage: Use multiple `FROM` instructions, each starting a new stage.
- Selective Copying: Use `COPY --from=<stage>` to copy artifacts from a previous stage.
- Stage Naming: It's better to name stages with `FROM <image> AS <name>` so they can be referenced more easily and the copy doesn't break if instructions are reordered: `COPY --from=build /bin/hello /bin/hello`.
- Debugging: You can build only up to a specific stage using `docker build --target <stage> -t hello .` (see the sketch after this list).
- BuildKit: This is the recommended builder, as it only processes the stages the target depends on, unlike the legacy builder, which processes all stages up to the target.
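A minimal sketch of a multi-stage Dockerfile, assuming a small Go program in the build context (image tags and paths are illustrative, not prescribed by this guide):

```dockerfile
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/hello .

# Stage 2: ship only the compiled artifact, without build tools
FROM scratch
COPY --from=build /bin/hello /bin/hello
ENTRYPOINT ["/bin/hello"]
```

Here `docker build --target build -t hello-build .` would stop after the first stage, which is handy for debugging.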
Image and Container Security
| Practice | Details |
|---|---|
| Base Images | Use official or verified images from reliable sources. Start with a minimal base image and include only essential dependencies to minimize vulnerabilities. |
| Scanning | Regularly scan images and containers for known vulnerabilities (e.g., using tools like Trivy or Docker Scout). |
| Privileges | Follow the principle of least privilege. Run containers as non-root users. |
| File System | Use read-only file systems whenever possible. |
| Content Signing | Enable Docker Content Trust (DCT) to ensure you only use signed images. |
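A minimal sketch of some of these practices applied in a Compose service definition (the image name and UID:GID are illustrative assumptions):

```yaml
services:
  web:
    image: my-app:latest      # start from a minimal, verified base when building this image
    user: "1000:1000"         # run as a non-root UID:GID (least privilege)
    read_only: true           # mount the container's root filesystem read-only
    tmpfs:
      - /tmp                  # writable scratch space only where the app needs it
```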
Docker Compose (Multi-Container Application Management)
Docker Compose is a tool that simplifies management of multi-container applications. It allows defining and orchestrating multiple containers (services, networks, volumes) in a single YAML file, typically docker-compose.yml.
- Key difference with Docker: Docker manages individual containers, while Docker Compose coordinates multiple containers that work together.
- Ideal Use: It's especially useful for development and testing environments where quick and fluid configuration of multiple services is required (e.g., web application, database, and frontend).
- Version V2: It's recommended to use the latest version. V1 (with hyphen: `docker-compose`) has stopped receiving updates; V2 uses `docker compose` (without hyphen).
Compose Structure and Basic Commands
The docker-compose.yml file defines the structure:
```yaml
# Note: in recent Docker Compose versions, the top-level "version" key is no longer required
# version: '3.8'   # optional in modern versions
services:
  service1:        # definition of containers/services
    # service configuration
networks:
  network1:        # custom network configuration
    # network configuration
volumes:
  volume1:         # named volume definition
    # volume configuration
```
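For example, a hypothetical stack with a web service and a database might look like this (image names, ports, and credentials are placeholders):

```yaml
services:
  web:
    build: .                       # build from the Dockerfile in the current directory
    ports:
      - "8080:80"                  # host:container port mapping
    depends_on:
      - db
  db:
    image: postgres:16             # any database image follows the same pattern
    environment:
      POSTGRES_PASSWORD: example   # for local development only
    volumes:
      - db_data:/var/lib/postgresql/data
volumes:
  db_data:                         # named volume declared at the top level
```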
| Task | CLI Command (V2) | Description |
|---|---|---|
| Build and Deploy | docker compose up | Builds and runs containers defined in the yml. |
| Deploy in Detached | docker compose up -d | Deploys and leaves the application running in the background. |
| Stop Application | docker compose stop | Stops the containers. |
| Stop and Remove | docker compose down | Stops and removes services and networks. Does not remove named volumes by default. |
| Remove with Volumes | docker compose down --volumes or -v | Forces removal of services, networks, and named volumes. |
| Scale Service | docker compose up --scale service=n | Increases the number of containers (n) for a specific service. |
| Service Status | docker compose ps | Shows the current status of containers. |
| Logs | docker compose logs | Shows logs of defined containers. |
Persistent Data Management (Volumes)
Volumes are essential for persisting data between containers and the host, ensuring that data is not lost when containers are stopped or restarted.
- Named Volumes: These have a user-defined name, making them easier to identify and manage. They are defined in the top-level `volumes` section of the `yml` and then mounted in the service (see the example below).
- Host Mounts (Bind Mounts): These share host directories with the container to make data easier to access and edit. The syntax is `/path/on/host:/path/in/container`.
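A minimal sketch showing both mount types in a single service (the service and path names are illustrative):

```yaml
services:
  app:
    image: my-app:latest
    volumes:
      - app_data:/var/lib/app      # named volume, managed by Docker
      - ./config:/etc/app:ro       # bind mount: /path/on/host:/path/in/container (read-only)
volumes:
  app_data:                        # declared here so Compose creates and tracks it
```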
| Task | CLI Command (Docker, applies to Compose volumes) |
|---|---|
| List Volumes | docker volume ls |
| Inspect Volume | docker volume inspect <volume_name> |
| Remove Volume | docker volume rm <volume_name> |
| Clean Unused Volumes | docker volume prune |
Advanced Production Best Practices with Compose
Secrets and Environment Variables Management
Separating key configurations (such as database credentials and API keys) from application code allows for more flexible and secure deployment.
- Risk: Never hardcode secrets (passwords, tokens) in Dockerfiles or accidentally expose them in image layers.
- Environment File (`env_file`): It's recommended to store environment variables in a `.env` file and reference it in `docker-compose.yml` using the `env_file: .env` directive (see the example below).
- Security: This `.env` file must be excluded from the code repository (e.g., using `.gitignore`), allowing secrets to be loaded at runtime without being exposed in version control.
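A minimal sketch of this pattern (variable names and values are placeholders):

```yaml
# .env (kept out of version control via .gitignore)
#   DB_USER=app
#   DB_PASSWORD=change-me

services:
  app:
    image: my-app:latest
    env_file: .env        # loads the variables above into the container's environment
```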
Networks and Access Security
- Internal Connectivity: By default, containers defined in Compose have connectivity with each other internally.
- Database Protection: It's a good security practice to remove the database's port mapping in the `yml` (no port binding to the host). The database process still listens on its port inside the container, but it is not bound to the host, protecting it from public exposure (see the example below). To access it externally, you can tunnel a connection via SSH.
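A minimal sketch of this setup (service names and ports are illustrative):

```yaml
services:
  web:
    image: my-app:latest
    ports:
      - "443:443"          # only the public-facing service publishes a port
  db:
    image: postgres:16
    # no "ports:" section: the database is reachable from "web" by its service
    # name over the internal Compose network, but not from the host or the internet
```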
Resource Limits and Restarts
For shared environments, it's vital to limit the resources an application can consume to avoid impacting others.
- Resource Limitation (`deploy: resources`): Allows specifying usage limits for memory, CPU, or GPU for the service (see the sketch below).
  - Limits (`limits`): The maximum the application may use (e.g., `cpus: 1`, `memory: 1GB`).
  - Reservations (`reservations`): RAM and CPU that the container reserves exclusively for its operation.
- Restart Policy (`restart_policy`): It's crucial to use `restart: always` to ensure the application starts automatically if the server restarts.
  - More detailed configurations allow defining conditions (e.g., `on-failure`), delays (`delay`), a maximum number of attempts (`max_attempts`), and a time window (`window`) for restarts.
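A minimal sketch of these options in a service definition (values are illustrative; note that `restart:` is honored by `docker compose up`, while `deploy.restart_policy` is mainly applied in Swarm deployments):

```yaml
services:
  app:
    image: my-app:latest
    restart: always            # bring the service back up after a daemon or server restart
    deploy:
      resources:
        limits:
          cpus: "1"            # at most one CPU
          memory: 1G           # at most 1 GB of RAM
        reservations:
          cpus: "0.25"         # share reserved for the container
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 60s
```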
Health Checks
Health checks define a recurring test that runs against the container to verify it is actually working; a failing check marks the container as unhealthy so it can be restarted or replaced (automatically in orchestrated setups).
- Configuration: You can define the `interval` (how often the test runs), `retries` (number of failures before the container is considered unhealthy), `timeout` (maximum wait time for each check), and `start_period` (grace period at startup while the application starts). See the example below.
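A minimal sketch of a health check in Compose (it assumes the image contains curl and serves a /health endpoint on port 8080):

```yaml
services:
  app:
    image: my-app:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s       # how often the test runs
      timeout: 5s         # maximum wait for each check
      retries: 3          # failures before the container is marked unhealthy
      start_period: 15s   # grace period while the application starts
```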
Host Security and General Best Practices
To improve the security of the Linux server where Docker Engine runs:
- Limited Root Access: Limit root access for SSH login (often disabled by default on Ubuntu Server).
- RSA Keys: It's recommended to use SSH keys (e.g., RSA) instead of passwords for login (see the sketch after this list).
  - You can create an SSH key with `ssh-keygen`.
  - You can copy the public key to the server with `ssh-copy-id <user>@<server>`.
- Brute Force Attack Prevention: Install Fail2Ban (`apt install fail2ban`). It uses firewall rules (such as iptables) to block IPs that make too many failed attempts against the SSH port.
- Maintenance: Update Docker Engine and its dependencies frequently to mitigate vulnerabilities.
- Monitoring: Implement a monitoring system to record application logs and automatically detect crashes or resource shortages on the server, in addition to using `docker stats` to monitor resources.
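A minimal sketch of the SSH-hardening steps above (the user and server are placeholders; adjust to your setup):

```sh
ssh-keygen -t rsa -b 4096              # generate a key pair on your local machine
ssh-copy-id <user>@<server>            # install the public key on the server
sudo apt install fail2ban              # block IPs after repeated failed SSH attempts
sudo apt update && sudo apt upgrade    # keep Docker Engine and its dependencies patched
```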
Conclusion
Docker and Docker Compose are fundamental tools in modern software development. Mastering their commands and best practices not only improves development efficiency but also ensures more secure and maintainable deployments in production. Proper implementation of these practices allows creating scalable, portable, and resilient applications.
To continue your learning journey in containerization, I recommend exploring Kubernetes orchestration strategies or diving deeper into image optimization techniques to further reduce the size and startup time of your applications.