On Jan 9, 2006, at 8:35 PM, Justin Mazzola Paluska wrote:

Good evening,

I'm running PVFS with two I/O nodes and a single metadata node.  I
think I've come across some weirdness in the values returned by
statfs.  My I/O nodes are mounting directories on RAID arrays that are
3.2TB (2.8TB free) and 2.0TB (1.4TB free).  For the PVFS client mount
point, df displays something sane for this configuration:

    $ df
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             9.7G  6.9G  2.3G  76% /
    /dev/hda6              45G  668M   44G   2% /home
    /dev/sda              3.2T  461G  2.8T  15% /RAIDS/RAID_1
    tcp://cluster-1:3334/pvfs2-fs
                          3.9T  1.2T  2.7T  30% /RAIDS/RAID_2

However, many apps, like Nautilus, report that the mount point has
far less free space -- 688MB.  They report this figure no matter how
many files we add to the PVFS2 filesystem.

If I make a statfs function call, I get the smaller amounts.  To wit:

    $ python
    Python 2.4 (#2, Feb 12 2005, 00:29:46)
    [GCC 3.4.3 (Mandrakelinux 10.2 3.4.3-3mdk)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os
    >>> os.statvfs('/RAIDS/RAID_2').f_blocks
    1001258L
    >>> os.statvfs('/RAIDS/RAID_2').f_bfree
    705068L

(Using strace, the system call that Python's os.statvfs actually
makes is:

    statfs64("/RAIDS/RAID_2", 84, {f_type=0x20030528, f_bsize=4194304,
    f_blocks=1001258, f_bfree=705068, f_bavail=705068, f_files=4294967292,
    f_ffree=4294967277, f_fsid={2016975803, 0}, f_namelen=255,
    f_frsize=1024}) = 0
)

705,068 1KB blocks would give me the 688MB free.


But that's using f_frsize as the block size, which is actually the
fragment size.  If you use f_bsize as the block size, the f_bavail
value looks correct.  For the 2.4 kernel, we just set f_frsize to 1024,
which is why you're seeing those results.

It looks like the 2.6 kernel doesn't have the f_frsize field in the
statfs struct, so you'll probably have more luck if you're able to
update to the 2.6 kernel.  I'm curious why Nautilus and other tools
don't use the f_bsize field (seems like a bug)...if you're able to
strace the Nautilus process, that might be enlightening.
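For what it's worth, the mismatch is easy to reproduce from the strace
output above.  A quick Python sketch (the field values are copied
straight from the trace; the variable names are just for illustration):

```python
# Field values from the statfs64() trace on the PVFS2 mount (2.4 kernel):
f_bsize = 4194304   # 4 MiB blocks, as reported by the server
f_frsize = 1024     # fragment size hardcoded to 1024 by the 2.4 client
f_bavail = 705068   # blocks available to unprivileged users

# Tools that multiply f_bavail by f_frsize see only ~688 MiB free:
free_via_frsize = f_bavail * f_frsize

# Multiplying by f_bsize instead recovers the ~2.7 TiB that df shows:
free_via_bsize = f_bavail * f_bsize

print("via f_frsize: %d MiB" % (free_via_frsize // 2**20))
print("via f_bsize:  %.1f TiB" % (float(free_via_bsize) / 2**40))
```

So the two interpretations differ by a factor of f_bsize/f_frsize =
4096, which matches the df-versus-Nautilus discrepancy exactly.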

-sam

We noticed this problem when we ran one of our Windows applications
that accesses PVFS2 via Samba.  It allocates a big sparse file and
then fills it.  Whenever it reached the 688MB mark, it would die and
complain that there was no space left on the device.

Any ideas what's up?  Thanks,
    --Justin
_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users

