> I'm having a problem with a volume that seems to occupy more disk space
> than it should. I have just one volume in an 830 MB partition; the
> volume shows 290021 K on-line, but if I check the partition, the volume
> seems to be occupying about 500 MB.
As Bob Oesterlin and Mark Giuffrida point out, this is probably due to the
disk's clustering factor: every file is rounded up to a whole number of
clusters, so a volume with many small files costs noticeably more partition
space than its byte count suggests. The following Perl script reports how
much disk space a volume is really costing, assuming a 4 K cluster size.
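To see the effect, here's a minimal standalone sketch of the arithmetic
(not part of the script below; the 4096-byte cluster size is the same
assumption):

    #!/usr/bin/perl
    # Sketch only: shows how rounding each file up to whole clusters
    # inflates partition usage.  4096 is the assumed cluster size.
    $cluster = 4096;
    foreach $size (1, 1024, 4097, 290021 * 1024) {
        $clusters = int(($size + $cluster - 1) / $cluster);
        printf("%12d bytes -> %7d clusters = %12d bytes on disk\n",
               $size, $clusters, $clusters * $cluster);
    }

A 1-byte file still costs a full 4096-byte cluster, which is where the
missing space goes.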
Dan
Dan Bloch Transarc Corporation, Pittsburgh [EMAIL PROTECTED]
----------------------------------------------------------------------
#!/usr/bin/perl
#
# ndu - like du -s but doesn't cross mountpoints, and calculates real size
# taken up on partition as well as size according to du or fs lq.
#
# usage: ndu [<directory> ...]
#
# limitations:
# - assumes a 4K cluster size (correct on RS/6000 servers)
# - fooled by hard links (will add size once for each link)
#
# Dan Bloch <[EMAIL PROTECTED]>
#
@ARGV = (".") if !@ARGV;            # default to the current directory
$T1 = $T2 = 0;                      # grand totals: 1 K blocks and 4 K clusters
foreach $dir (@ARGV) {
    $t1 = $t2 = 0;                  # per-argument totals
    &sum($dir);                     # stat the argument itself
    &dodir($dir) if -d _;           # recurse if it's a directory or mountpoint
    printf("%7d / %-7d %s\n", $t1, $t2*4, $dir);
    $T1 += $t1;
    $T2 += $t2;
}
printf(" ------ ------\n%7d / %-7d\n", $T1, $T2*4) if @ARGV > 1;
sub dodir {
    local($dir) = @_;
    local(@filenames, $file);
    opendir(DIR, $dir) || die "Can't open $dir ($!)\n";
    @filenames = sort(readdir(DIR));
    closedir(DIR);
    foreach $file (@filenames) {
        next if $file eq '.' || $file eq '..';
        &sum("$dir/$file");
        # recurse into plain directories only; the odd-inode test is the
        # AFS convention that keeps us from crossing volume mountpoints
        &dodir("$dir/$file") if -d _ && $inode & 1;
    }
}
#
# sum - add a file's size to the running totals.
# Leaves the stat fields in globals ($inode) and primes the "_" filehandle,
# so the callers' -d and mountpoint tests don't need a second stat.
#
sub sum {
    local($filename) = @_;
    ($inode, $size) = (lstat($filename))[1, 7];
    $t1 += ($size + 1023) >> 10;    # round up to whole 1 K blocks
    $t2 += ($size + 4095) >> 12;    # round up to whole 4 K clusters
}