Re: [zfs-discuss] Meta data corruptions on ZFS.
> This is expected because of the copy-on-write nature of ZFS. During
> truncate it is trying to allocate new disk blocks, probably to write
> the new metadata, and fails to find them.

I realize there is a fundamental issue with copy-on-write, but does this mean ZFS does not maintain some kind of reservation to guarantee you can always remove data? If so, I would consider this a major issue for general-purpose use, and if nothing else it should most definitely be clearly documented. Accidentally filling up space is not at *all* uncommon in many situations, be it home use or medium-sized business use. Yes, you should avoid it, but shit (always) happens.

--
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] Meta data corruptions on ZFS.
Hi All,

No one has any idea on this?

-Masthan

dudekula mastan [EMAIL PROTECTED] wrote:

> Hi All,
>
> In my test setup, I have one zpool of size 1000 MB. On this zpool, my
> application writes 100 files, each of size 10 MB. The first 96 files
> were written successfully without any problem. But the 97th file was
> not written successfully; only 5 MB were written (the return value of
> the write() call). Since it was a short write, my application tried to
> truncate the file to 5 MB. But ftruncate failed with an error message
> saying there is no space on the device.
>
> Have you people ever seen this kind of error message?
>
> After the ftruncate failure I checked the size of the 97th file; it is
> strange. The size is 7 MB, but the expected size is only 5 MB.
>
> Your help is appreciated.
>
> Thanks & Regards,
> Mastan
Re: [zfs-discuss] Meta data corruptions on ZFS.
Masthan,

dudekula mastan [EMAIL PROTECTED] wrote:

> Hi All,
>
> In my test setup, I have one zpool of size 1000 MB.

Is this the size given by zfs list? Or is it the amount of disk space that you had? The reason I ask is that ZFS/zpool takes up some amount of space for its housekeeping. So, if you add 1 GB worth of disk space to the pool, the effective space available is a little less (a few MB) than 1 GB.

> On this zpool, my application writes 100 files, each of size 10 MB.
> The first 96 files were written successfully without any problem.

Here you are filling the FS to the brim. This is a border case, and the copy-on-write nature of ZFS could lead to the behaviour that you are seeing.

> But the 97th file was not written successfully; only 5 MB were written
> (the return value of the write() call). Since it was a short write, my
> application tried to truncate it to 5 MB. But ftruncate failed with an
> error message saying there is no space on the device.

This is expected because of the copy-on-write nature of ZFS. During truncate it is trying to allocate new disk blocks, probably to write the new metadata, and fails to find them.

> Have you people ever seen this kind of error message?

Yes, there are others who have seen these errors.

> After the ftruncate failure I checked the size of the 97th file; it is
> strange. The size is 7 MB, but the expected size is only 5 MB.

Is there any particular reason that you are pushing the filesystem to the brim? Is this part of some test? Please help us understand what you are trying to test.

Thanks and regards,
Sanjeev.

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 +91 80 669 27521