Good evening,

I'm running PVFS with two I/O nodes and a single metadata node.  I
think I've come across some weirdness in the values returned by
statfs.  My I/O nodes are mounting directories on RAID arrays that are
3.2TB (2.8TB free) and 2.0TB (1.4TB free).  For the PVFS client mount
point, df displays something sane for this configuration:

    $ df
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             9.7G  6.9G  2.3G  76% /
    /dev/hda6              45G  668M   44G   2% /home
    /dev/sda              3.2T  461G  2.8T  15% /RAIDS/RAID_1
    tcp://cluster-1:3334/pvfs2-fs
                          3.9T  1.2T  2.7T  30% /RAIDS/RAID_2

However, many apps, like Nautilus, report that the mount point has
far less free space -- 688MB.  They report this no matter how many
files we add to the PVFS2 filesystem.

If I make a statfs function call, I get the smaller amounts.  To wit:

    $ python
    Python 2.4 (#2, Feb 12 2005, 00:29:46)
    [GCC 3.4.3 (Mandrakelinux 10.2 3.4.3-3mdk)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import os
    >>> os.statvfs('/RAIDS/RAID_2').f_blocks
    1001258L
    >>> os.statvfs('/RAIDS/RAID_2').f_bfree
    705068L

(Using strace, I see that the system call Python's os.statvfs
actually makes is:

    statfs64("/RAIDS/RAID_2", 84, {f_type=0x20030528, f_bsize=4194304,
    f_blocks=1001258, f_bfree=705068, f_bavail=705068, f_files=4294967292,
    f_ffree=4294967277, f_fsid={2016975803, 0}, f_namelen=255,
    f_frsize=1024}) = 0
)

705,068 1KB blocks works out to the 688MB free that these apps report.
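If I'm reading the trace right, the discrepancy seems to come down to
which block-size field gets multiplied: f_bfree times f_frsize (1KB)
gives the small number, while f_bfree times f_bsize (4MB) gives
something close to df's 2.7T.  A quick sanity check with the values
pasted from the trace above (just my arithmetic, not anything
PVFS-specific):

```python
# Values copied from the statfs64 trace above.
f_bsize = 4194304   # 4 MB, presumably PVFS2's preferred transfer size
f_frsize = 1024     # 1 KB fundamental block size
f_bfree = 705068

# Apps that multiply by f_frsize (statvfs-style) see:
free_frsize = f_bfree * f_frsize
print(free_frsize / 2**20, "MiB")   # ~688 MiB -- the Nautilus number

# Whereas multiplying by f_bsize gives roughly df's figure:
free_bsize = f_bfree * f_bsize
print(free_bsize / 2**40, "TiB")    # ~2.7 TiB -- the df number
```

So it looks like df and the statvfs-based apps agree on the block
counts but disagree on which block size to scale them by.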

We noticed this problem when we ran one of our Windows applications
that is accessing PVFS2 via Samba.  It allocates a big sparse file
then fills it.  Whenever it reached the 688MB mark, it would die and
complain that there's no space left.

Any ideas what's up?  Thanks,
    --Justin
_______________________________________________
PVFS2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users