Monitoring log file growth for self-hosted step runners deployed on Docker can prevent disk space issues. Left unchecked, these issues can lead to cascading problems, such as orphaned containers and unrecognized failed steps.
This guide provides practical steps to diagnose and address disk space problems, ensuring reliable and seamless operations in Docker environments.
1. Check Disk Utilization
Overall Disk Usage: Run
df -h
to display total disk space usage.
Directory Sizes: Use
du --max-depth=2 | sort -n -r | head
to list the top directories by size (limited to two levels deep), then investigate further to identify large files.
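Docker Data Root: As a follow-up, and assuming Docker's default data root of /var/lib/docker, you can point du directly at that path:
sudo du -h --max-depth=1 /var/lib/docker | sort -rh | head
to see which subdirectory (for example containers, overlay2, or volumes) accounts for most of the space. The sudo prefix and the path are assumptions; adjust them if your daemon uses a custom data-root.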
2. Inspect the Docker Environment
View Containers:
Running and Stopped Containers:
docker ps -a
Active Containers:
docker container ls
View Images: Use
docker image ls
to list all Docker images on the system.
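For a quick per-category summary, Docker can also report how much space images, containers, and local volumes consume:
docker system df
Adding the -v flag prints a per-image and per-container breakdown, which helps confirm whether images, container logs, or volumes are the main consumer.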
3. Remove Unused Resources
Stopped Containers: Run
docker container prune
to remove stopped containers. Add an optional filter such as
--filter "until=168h"
to target only containers older than a given time frame, such as 7 days (168 hours).
Unused Images: Run
docker image prune
with the
-a
flag to remove all unused images, not just dangling ones.
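A minimal cleanup sketch combining both commands, assuming anything unused and older than a week is safe to remove:
docker container prune --filter "until=168h"
docker image prune -a --filter "until=168h"
Both commands ask for confirmation; the -f flag skips the prompt for unattended runs.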
Example: Container Log File Cleanup
The following walks through troubleshooting a disk space issue caused by an oversized container log:
Identify Directories Using Excessive Disk Space: Use the
du
command to check whether any directories are consuming too much disk space. For example, the
/var/lib/docker/containers/
directory stores the log files for each container and can grow significantly.
Locate the Problematic Container Log: Each log lives in a directory named after its container's long ID. Use that ID to identify the container behind the oversized log.
docker ps -a | grep <container_id>
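If you do not yet know which container is responsible, you can also list the largest log files directly, assuming the default json-file logging driver and data root:
sudo find /var/lib/docker/containers -name "*-json.log" -exec du -h {} + | sort -rh | head
The directory component of each path is the long container ID to use above.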
Truncate the Oversized Log File: To resolve the issue quickly, truncate the problematic log file; truncating it (rather than deleting it) frees the space immediately while keeping the file handle held by the Docker daemon valid.
truncate -s 0 /var/lib/docker/containers/<container_id>/*-json.log
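Truncation fixes the symptom rather than the cause. To keep the same log from filling the disk again, one option (assuming the default json-file logging driver) is to cap log size and rotation when starting the container:
docker run --log-opt max-size=10m --log-opt max-file=3 <image>
The 10m size cap and three-file rotation count are illustrative values, not recommendations; the same limits can also be set daemon-wide via log-opts in /etc/docker/daemon.json.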