Container Logging
LEVEL 0
The Problem
Your container is running. Something’s wrong. You need to know what happened.
Where do you look?
Applications write logs. But containers are ephemeral. If the container stops, where did the logs go? If you’re running 10 replicas of a service, how do you view all their logs together? How do you search logs? How do you prevent logs from filling up disk space?
Docker logging solves these problems, but you need to understand how it works.
LEVEL 1
The Concept — The Flight Recorder
Imagine an airplane’s black box flight recorder.
Everything that happens gets logged:
- Pilot communications
- System events
- Sensor readings
- Errors and warnings
The black box stores this data. After a flight, investigators can review the logs to understand what happened.
Container logs are the flight recorder for your application.
Everything your application writes to stdout (standard output) and stderr (standard error) gets captured by Docker and stored.
LEVEL 2
The Mechanics — How Docker Captures Logs
When your containerized application does this:
print("Starting application")
Or this:
import sys
sys.stderr.write("Error occurred\n")
Docker captures it.
Key concept: Containers should log to stdout/stderr, not to files.
Why? Because Docker captures stdout/stderr automatically. If your app logs to files inside the container, those logs sit in the container's writable layer and are lost when the container is removed (unless you mount a volume).
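In Python, for example, the standard logging module can be pointed at stdout instead of a log file. A minimal sketch (the format string and messages are just examples):

import logging
import sys

# Route all log records to stdout so Docker can capture them
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("Starting application")  # visible via `docker logs`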
Viewing logs:
docker logs <container-name>
Common options:
docker logs my-container # Show all logs
docker logs -f my-container # Follow logs (like tail -f)
docker logs --tail 50 my-container # Last 50 lines
docker logs --since 30m my-container # Logs from last 30 minutes
docker logs --timestamps my-container # Show timestamps
With Compose:
docker compose logs # All services
docker compose logs -f # Follow all
docker compose logs app # One service
docker compose logs -f --tail=100 app # Follow last 100 lines of app
LEVEL 3
Logging Drivers
Docker uses logging drivers to handle logs. The default is json-file, which stores logs as JSON files on the host.
Available drivers:
- json-file (default) — Logs stored in JSON files
- journald — Send to systemd journal
- syslog — Send to syslog
- gelf — Send to Graylog Extended Log Format endpoints
- fluentd — Send to Fluentd
- awslogs — Send to Amazon CloudWatch
- gcplogs — Send to Google Cloud Logging
- local — Optimized file-based logging with automatic rotation
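A driver can also be chosen per container at run time with the --log-driver and --log-opt flags. A quick sketch (the image name is a placeholder):

docker run -d \
  --log-driver local \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp

One caveat: docker logs traditionally only reads from the json-file, local, and journald drivers. Docker Engine 20.10+ keeps a local copy ("dual logging") so it also works with remote drivers, unless that cache has been disabled.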
Configure in docker-compose.yml:
services:
  app:
    image: myapp
    logging:
      driver: json-file
      options:
        max-size: "10m"   # Rotate after 10MB
        max-file: "3"     # Keep 3 rotated files
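To verify what a running container actually got, you can inspect its log configuration (my-container is a placeholder name):

docker inspect --format '{{json .HostConfig.LogConfig}}' my-container
# prints something like: {"Type":"json-file","Config":{"max-file":"3","max-size":"10m"}}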
Or globally in /etc/docker/daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
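Daemon-level changes require restarting Docker and only apply to containers created afterwards; existing containers keep the driver they started with. You can check the daemon's current default with:

docker info --format '{{.LoggingDriver}}'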
LEVEL 4
Log Rotation
Without rotation, log files grow indefinitely and fill up disk space.
Configure rotation:
services:
  app:
    image: myapp
    logging:
      driver: json-file
      options:
        max-size: "50m"    # Max size per file
        max-file: "5"      # Number of files to keep
        compress: "true"   # Compress rotated files
This keeps up to 5 files of 50MB each, capping each container at roughly 250MB of logs (less in practice, since rotated files are compressed).
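With the json-file driver, the files themselves live under the Docker data root on the host, so you can check how much space each container's logs are really using. A sketch, assuming the default /var/lib/docker location:

sudo du -sh /var/lib/docker/containers/*/*-json.log*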
LEVEL 5
Centralized Logging
In production, you typically send logs to a centralized logging system:
Fluentd:
services:
  app:
    image: myapp
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: myapp.logs
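To smoke-test this locally: the stock fluent/fluentd image listens for the forward protocol on 24224 and, with its default config, echoes every record to its own stdout. A sketch, assuming the v1.16-1 tag (check Docker Hub for a current one):

docker run -d --name fluentd -p 24224:24224 fluent/fluentd:v1.16-1
docker compose up -d
docker logs -f fluentd    # your app's log records should appear here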
AWS CloudWatch:
services:
  app:
    image: myapp
    logging:
      driver: awslogs
      options:
        awslogs-region: us-west-2
        awslogs-group: my-app-logs
        awslogs-stream: app
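The awslogs driver authenticates as the Docker daemon, not as your container, so the daemon's environment (or the instance's IAM role) must provide AWS credentials; there is also an awslogs-create-group option if the log group doesn't exist yet. Once records are flowing, you can tail them with the AWS CLI v2, reusing the group name from the config above:

aws logs tail my-app-logs --follow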
ELK Stack (Elasticsearch, Logstash, Kibana):
version: '3.9'
services:
  app:
    image: myapp
    logging:
      driver: gelf
      options:
        # The gelf driver runs inside the Docker daemon, which resolves this
        # address on the host, not on the Compose network. Publish Logstash's
        # UDP port and point the driver at localhost.
        gelf-address: "udp://localhost:12201"
  logstash:
    image: logstash:8.0.0
    # Logstash configuration to forward to Elasticsearch
    ports:
      - "12201:12201/udp"
  elasticsearch:
    image: elasticsearch:8.0.0
    # Store logs
  kibana:
    image: kibana:8.0.0
    # Visualize logs
    ports:
      - "5601:5601"