I run some nightly test scripts on a Red Hat server. Occasionally, when I check
on it in the morning, the machine is really slow. It turns out it's out of disk
space. I `du -h` my directories, and they are huge. But all the FILES in these
directories are small.
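
A quick way to see what is actually eating the space is to ask for the largest files directly, rather than eyeballing `du` per directory. This is just a sketch using standard GNU find/sort flags; the path and the 500M threshold are arbitrary and worth adjusting:

```sh
# List the 20 largest files under the current directory, biggest first.
# -xdev keeps find on this one filesystem; +500M is an arbitrary cutoff.
find . -xdev -type f -size +500M -exec du -h {} + | sort -rh | head -n 20
```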
After some digging it turned out that some log files had exceeded 2 GB. When you
remove a file *that* big on this filesystem (or this kernel, or something), the
disk space is not reclaimed to an extent that would let you actually create new
files in that space. It is only reclaimed to an extent that lets you get it back
after an fsck run, which costs a sysadmin intervention and some downtime while
the disk is mounted read-only.
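
Since the trouble started with log files creeping past 2 GB, one workaround is to have the nightly job flag them before they get that big. A minimal sketch, assuming a hypothetical log directory and a 1.5 GB warning threshold:

```sh
#!/bin/sh
# Nightly check: list log files approaching the 2 GB mark so they can be
# rotated or trimmed before they cause trouble.
# LOGDIR is an assumption -- point it at wherever your test scripts write logs.
LOGDIR=/var/log/nightly-tests
find "$LOGDIR" -type f -name '*.log' -size +1500M -exec ls -lh {} +
```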
Want to clean up your disk? Look for small files, the ones you can remove.