[please don't top-post on technical lists]

On 02/21/2012 01:55 PM, George R Goffe wrote:
> Eric,
>
> I use this command in filesystems where the usage is at 100% or nearly so
> and I need to find possible offending files that are BIG. The syntax works
> great for non-/ filesystems where there are no other partitions mounted.
>
> I had assumed that -x would prohibit descending into directories in OTHER
> filesystems or partitions. I suppose I could exclude mount points manually,
> but I'd have to remember to update the exclude file whenever I mount other
> partitions. That tactic would also fail when no partition is mounted but
> the mount point itself is the culprit, as when a user gets root (not
> uncommon in the environments I work in) and goofs by copying data to a
> mount point without having mounted a partition there first. My purpose
> with this command is to recursively find directories with large files in
> them so I can deal with them appropriately.
-x _does_ prohibit descending into directories of other filesystems, but on
a per-argument basis. That is, if you call 'du -x /path/to/dir1
/path/to/dir2' and dir1 and dir2 are mounted on different devices, then you
have searched two devices at once: the device of /path/to/dir1 and the
device of /path/to/dir2. That is true whether you literally spelled it
'du -x /path/to/dir1 /path/to/dir2', used a glob and spelled it
'du -x /path/to/dir?', or even did 'cd /path/to; du -x dir?'.

So, if you want to find the usage from the current directory downwards
without crossing device boundaries, and without regard to whether you are
starting in / or in some other directory, use 'du -x .', and _not_
'du -x * .??* .[!.]'. In other words, give du '.' as the single starting
point and let it recurse from the current directory itself, rather than
handing it a glob that expands to every file in the current directory but
may span multiple mount points.

Also, if you are interested in finding large files, rather than directories
whose aggregate usage is large, you may want find(1) instead of du(1). For
example, 'find . -xdev -size +$((1024*1024*10))c' finds all files larger
than 10 MiB (the trailing 'c' makes find count bytes; without it, -size
counts 512-byte blocks). But remember that lots of small files in a single
directory can add up to large usage even if no single file is huge by
itself.

-- 
Eric Blake   ebl...@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
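To make the per-argument behavior concrete, here is a minimal sketch. It
assumes a hypothetical layout in which /home is a separate partition
mounted under /; adjust the paths for your own system:

  # The glob hands du one starting point per top-level directory, so
  # /home becomes an argument of its own and is fully traversed even
  # though it lives on a different device:
  cd / && du -shx *

  # Recursing from '.' instead, du notices that ./home sits on a
  # different device and skips it, so only the root device is counted:
  cd / && du -shx .

  # Largest directories on the current device, biggest last:
  du -xk . | sort -n | tail -n 20

  # Individual files larger than 10 MiB on the current device ('c'
  # counts bytes; GNU find also accepts the shorthand -size +10M):
  find . -xdev -type f -size +$((1024*1024*10))c

The first two commands report different totals precisely because the glob
smuggles the extra mount point in as its own argument, which is the trap
described above.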