Masthan,


dudekula mastan <[EMAIL PROTECTED]> wrote:


    Hi All,
    In my test setup, I have one zpool of size 1000 MB.
Is this the size given by zfs list? Or is it the amount of raw disk space that you had? The reason I ask is that ZFS/zpool takes up some amount of space for its housekeeping, so if you add 1 GB worth of disk space to the pool, the effective space available is a little less (by a few MB) than 1 GB.
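If you want to see what the filesystem actually exposes, a minimal sketch like the one below (in C; the /testpool mount point is hypothetical, not taken from your setup) reports the available bytes as seen by statvfs():

/* Hedged sketch: report the space the filesystem actually exposes,
 * which on ZFS is slightly less than the raw pool size because of
 * metadata/housekeeping overhead. The mount point is hypothetical. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    if (statvfs("/testpool", &vfs) != 0) {   /* hypothetical mount point */
        perror("statvfs");
        return 1;
    }
    unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    printf("available bytes: %llu\n", avail);
    return 0;
}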

    On this zpool, my application writes 100 files, each of size 10 MB.
    The first 96 files were written successfully without any problem.

Here you are filling the FS to the brim. This is a border case and the copy-on-write nature of ZFS
could lead to the behaviour that you are seeing.
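Just to make the scenario concrete, here is a minimal sketch (my assumption of what the workload looks like, not your actual code; the /testpool path is hypothetical) of the kind of loop that would hit this border case:

/* Hedged sketch of the described workload: write 100 files of 10 MB
 * each into a ~1000 MB pool and watch for short writes near the end.
 * Paths and buffer handling are assumptions, not the original code. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define FILE_SIZE (10 * 1024 * 1024)

int main(void)
{
    char *buf = calloc(1, FILE_SIZE);
    if (buf == NULL)
        return 1;

    for (int i = 1; i <= 100; i++) {
        char path[64];
        snprintf(path, sizeof(path), "/testpool/file%03d", i);  /* hypothetical path */

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) {
            perror("open");
            break;
        }
        ssize_t n = write(fd, buf, FILE_SIZE);
        if (n < FILE_SIZE)   /* short write: the pool is nearly full */
            fprintf(stderr, "file %d: short write, only %zd bytes\n", i, n);
        close(fd);
    }
    free(buf);
    return 0;
}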

    But the 97th file is not written successfully; only 5 MB is written
    (the return value of the write() call). Since it is a short write,
    my application tried to truncate it to 5 MB, but ftruncate is
    failing with an error message saying there is no space on the device.

This is expected because of the copy-on-write nature of ZFS. During the truncate it tries to allocate
new disk blocks, probably to write the new metadata, and fails to find them.
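One way to cope with this border case, assuming the partial file can simply be discarded (the names fd, path and written below come from my sketch above, not from your application), is to check for ENOSPC and release space instead of keeping the oversized file:

/* Hedged sketch: handle ENOSPC from ftruncate() on a full copy-on-write
 * filesystem by discarding the partial file instead of keeping it. */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

static int shrink_or_discard(int fd, const char *path, off_t written)
{
    if (ftruncate(fd, written) == 0)
        return 0;                      /* truncated to the bytes actually written */

    if (errno == ENOSPC) {
        /* ZFS could not allocate blocks for the new metadata; drop the
         * partial file to release space rather than leave it oversized. */
        fprintf(stderr, "ftruncate: no space, unlinking %s\n", path);
        close(fd);
        unlink(path);
        return -1;
    }
    perror("ftruncate");
    return -1;
}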

    Have you ever seen this kind of error message?

Yes, there are others who have seen these errors.

    After the ftruncate failure I checked the size of the 97th file, and it
    is strange: the size is 7 MB, but the expected size is only 5 MB.


Is there any particular reason you are pushing the filesystem to the brim? Is this part of some test? Please help us understand what you are trying to test.

Thanks and regards,
Sanjeev.

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 +91 80 669 27521
