Re: [zfs-discuss] [raidz] file not removed: No space left on device
On Tue, Jul 04, 2006 at 09:10:11AM +0200, Constantin Gonzalez wrote:
> Hi Eric,
>
> Eric Schrock wrote:
> > You don't need to grow the pool. You should always be able to truncate
> > the file without consuming more space, provided you don't have
> > snapshots. Mark has a set of fixes in testing which do a much better
> > job of estimating space, allowing us to always unlink files in full
> > pools (provided there are no snapshots, of course). This provides much
> > more logical behavior by reserving some extra slop.
>
> Is this a planned but not yet implemented feature, or why did Tatjana
> see the "not able to rm" behaviour?

As I mentioned, Mark has a set of fixes in testing. They should be
available sometime in the near future. In the meantime, you can truncate
large files to free up space instead - because truncation doesn't involve
rewriting the parent directory pointers, it should always work.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
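Eric's truncate-instead-of-unlink workaround can be illustrated generically. This is a minimal sketch on an ordinary POSIX filesystem rather than a full pool, and the file name `biglog` is hypothetical:

```shell
# Sketch of the truncate-instead-of-unlink workaround: truncation
# rewrites only the file's own block pointers, not the parent
# directory, so on a copy-on-write filesystem it needs less new
# space than rm does. Run in a scratch directory.
dd if=/dev/zero of=biglog bs=1024 count=64 2>/dev/null  # create a file
ls -l biglog                  # 65536 bytes
: > biglog                    # truncate to zero length, freeing its blocks
ls -l biglog                  # now 0 bytes
rm biglog                     # the directory update is all that remains
```

On a genuinely full pool, the `: >` step is the one that frees space; the final `rm` then only has to rewrite the (now cheap) directory entry.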
Re: [zfs-discuss] [raidz] file not removed: No space left on device
Hi Eric,

Eric Schrock wrote:
> You don't need to grow the pool. You should always be able to truncate
> the file without consuming more space, provided you don't have
> snapshots. Mark has a set of fixes in testing which do a much better
> job of estimating space, allowing us to always unlink files in full
> pools (provided there are no snapshots, of course). This provides much
> more logical behavior by reserving some extra slop.

Is this a planned but not yet implemented feature, or why did Tatjana
see the "not able to rm" behaviour? Or should she use unlink(1M) in
these cases?

Best regards,
   Constantin

> - Eric
>
> On Mon, Jul 03, 2006 at 02:23:06PM +0200, Constantin Gonzalez wrote:
>> Hi,
>>
>> Of course, the reason for this is the copy-on-write approach: ZFS has
>> to write new blocks first before the modification of the FS structure
>> can reflect the state with the deleted blocks removed.
>>
>> The only way out of this is of course to grow the pool. Once ZFS
>> learns how to free up vdevs, this may become a better solution because
>> you can then shrink the pool again after the rm.
>>
>> I expect many customers to run into similar problems and I've already
>> gotten a number of "what if the pool is full" questions. My answer has
>> always been "no file system should be more than 90% full, for a number
>> of reasons", but in practice this is hard to ensure.
>>
>> Perhaps this is a good opportunity for an RFE: ZFS should reserve
>> enough blocks in a pool to always be able to rm and destroy stuff.
>>
>> Best regards,
>>    Constantin
>>
>> P.S.: Most US Sun employees are on vacation this week, so don't be
>> alarmed if the really good answers take some time :).
> --
> Eric Schrock, Solaris Kernel Development     http://blogs.sun.com/eschrock

--
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions            http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                   http://blogs.sun.com/constantin/
Re: [zfs-discuss] [raidz] file not removed: No space left on device
That's excellent news. Given how often customer applications go feral and
write a whole heap of crap (or nobody watches closely enough while the
pool gradually fills), we will forever be getting calls if this
functionality is *anything* but transparent... Most explorers I see have
"filesystem 100% full" messages in them.

It will be interesting to see how the current S10_u2 bits go. :)

Nathan.

On Tue, 2006-07-04 at 02:19, Eric Schrock wrote:
> You don't need to grow the pool. You should always be able to truncate
> the file without consuming more space, provided you don't have
> snapshots. Mark has a set of fixes in testing which do a much better
> job of estimating space, allowing us to always unlink files in full
> pools (provided there are no snapshots, of course). This provides much
> more logical behavior by reserving some extra slop.
>
> - Eric
>
> On Mon, Jul 03, 2006 at 02:23:06PM +0200, Constantin Gonzalez wrote:
> > Hi,
> >
> > Of course, the reason for this is the copy-on-write approach: ZFS has
> > to write new blocks first before the modification of the FS structure
> > can reflect the state with the deleted blocks removed.
> >
> > The only way out of this is of course to grow the pool. Once ZFS
> > learns how to free up vdevs, this may become a better solution
> > because you can then shrink the pool again after the rm.
> >
> > I expect many customers to run into similar problems and I've already
> > gotten a number of "what if the pool is full" questions. My answer
> > has always been "no file system should be more than 90% full, for a
> > number of reasons", but in practice this is hard to ensure.
> >
> > Perhaps this is a good opportunity for an RFE: ZFS should reserve
> > enough blocks in a pool to always be able to rm and destroy stuff.
> >
> > Best regards,
> >    Constantin
> >
> > P.S.: Most US Sun employees are on vacation this week, so don't be
> > alarmed if the really good answers take some time :).
> --
> Eric Schrock, Solaris Kernel Development     http://blogs.sun.com/eschrock
Re: [zfs-discuss] [raidz] file not removed: No space left on device
You don't need to grow the pool. You should always be able to truncate
the file without consuming more space, provided you don't have snapshots.
Mark has a set of fixes in testing which do a much better job of
estimating space, allowing us to always unlink files in full pools
(provided there are no snapshots, of course). This provides much more
logical behavior by reserving some extra slop.

- Eric

On Mon, Jul 03, 2006 at 02:23:06PM +0200, Constantin Gonzalez wrote:
> Hi,
>
> Of course, the reason for this is the copy-on-write approach: ZFS has
> to write new blocks first before the modification of the FS structure
> can reflect the state with the deleted blocks removed.
>
> The only way out of this is of course to grow the pool. Once ZFS learns
> how to free up vdevs, this may become a better solution because you can
> then shrink the pool again after the rm.
>
> I expect many customers to run into similar problems and I've already
> gotten a number of "what if the pool is full" questions. My answer has
> always been "no file system should be more than 90% full, for a number
> of reasons", but in practice this is hard to ensure.
>
> Perhaps this is a good opportunity for an RFE: ZFS should reserve
> enough blocks in a pool to always be able to rm and destroy stuff.
>
> Best regards,
>    Constantin
>
> P.S.: Most US Sun employees are on vacation this week, so don't be
> alarmed if the really good answers take some time :).

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Re: [zfs-discuss] [raidz] file not removed: No space left on device
Hi,

Of course, the reason for this is the copy-on-write approach: ZFS has to
write new blocks first before the modification of the FS structure can
reflect the state with the deleted blocks removed.

The only way out of this is of course to grow the pool. Once ZFS learns
how to free up vdevs, this may become a better solution because you can
then shrink the pool again after the rm.

I expect many customers to run into similar problems and I've already
gotten a number of "what if the pool is full" questions. My answer has
always been "no file system should be more than 90% full, for a number
of reasons", but in practice this is hard to ensure.

Perhaps this is a good opportunity for an RFE: ZFS should reserve enough
blocks in a pool to always be able to rm and destroy stuff.

Best regards,
   Constantin

P.S.: Most US Sun employees are on vacation this week, so don't be
alarmed if the really good answers take some time :).

Tatjana S Heuser wrote:
> On a system still running nv_30, I've a small RaidZ filled to the brim:
>
> 2 3 [EMAIL PROTECTED] pts/9 ~ 78# uname -a
> SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP
>
> 0 3 [EMAIL PROTECTED] pts/9 ~ 50# zfs list
> NAME               USED  AVAIL  REFER  MOUNTPOINT
> mirpool1          33.6G      0   137K  /mirpool1
> mirpool1/home     12.3G      0  12.3G  /export/home
> mirpool1/install  12.9G      0  12.9G  /export/install
> mirpool1/local    1.86G      0  1.86G  /usr/local
> mirpool1/opt      4.76G      0  4.76G  /opt
> mirpool1/sfw       752M      0   752M  /usr/sfw
>
> Trying to free some space is meeting a lot of reluctance, though:
> 0 3 [EMAIL PROTECTED] pts/9 ~ 51# rm debug.log
> rm: debug.log not removed: No space left on device
> 0 3 [EMAIL PROTECTED] pts/9 ~ 55# rm -f debug.log
> 2 3 [EMAIL PROTECTED] pts/9 ~ 56# ls -l debug.log
> -rw-r--r--   1 th1224   2027048 Jun 29 23:24 debug.log
> 0 3 [EMAIL PROTECTED] pts/9 ~ 58# :> debug.log
> debug.log: No space left on device.
> 0 3 [EMAIL PROTECTED] pts/9 ~ 63# ls -l debug.log
> -rw-r--r--   1 th1224   2027048 Jun 29 23:24 debug.log
>
> There are no snapshots, so removing/clearing the files /should/
> be a way to free some space there.
>
> Of course this is the same filesystem where zdb dumps core - see:
>
> *Synopsis*: zdb dumps core - bad checksum
> http://bt2ws.central.sun.com/CrPrint?id=6437157
> *Change Request ID*: 6437157
>
> (zpool reports the RaidZ pool as healthy while
> zdb crashes with a 'bad checksum' message.)

--
Constantin Gonzalez                          Sun Microsystems GmbH, Germany
Platform Technology Group, Client Solutions            http://www.sun.de/
Tel.: +49 89/4 60 08-25 91                   http://blogs.sun.com/constantin/
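Constantin's RFE - keep enough slop in the pool so that rm and destroy always succeed - can already be approximated by an administrator with a ZFS reservation. A hedged sketch, not something from this thread: the dataset name `mirpool1/slop` and the 100 MB figure are hypothetical, and the commands assume a live system with the `mirpool1` pool from the report above, so this is illustration rather than output that was actually run:

```shell
# Approximate the "reserved slop" RFE by parking a reservation on an
# empty dataset: other datasets can then never consume the last 100 MB
# of the pool. Dataset name and size are hypothetical.
zfs create mirpool1/slop
zfs set reservation=100m mirpool1/slop    # hold back 100 MB of slop
# If the pool fills up anyway, drop the reservation to free headroom
# for rm/destroy, then restore it afterwards:
zfs set reservation=none mirpool1/slop
```

The same effect is what Eric describes the in-testing fixes providing automatically, without the administrator having to set anything up.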
[zfs-discuss] [raidz] file not removed: No space left on device
On a system still running nv_30, I've a small RaidZ filled to the brim:

2 3 [EMAIL PROTECTED] pts/9 ~ 78# uname -a
SunOS mir 5.11 snv_30 sun4u sparc SUNW,UltraAX-MP

0 3 [EMAIL PROTECTED] pts/9 ~ 50# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
mirpool1          33.6G      0   137K  /mirpool1
mirpool1/home     12.3G      0  12.3G  /export/home
mirpool1/install  12.9G      0  12.9G  /export/install
mirpool1/local    1.86G      0  1.86G  /usr/local
mirpool1/opt      4.76G      0  4.76G  /opt
mirpool1/sfw       752M      0   752M  /usr/sfw

Trying to free some space is meeting a lot of reluctance, though:
0 3 [EMAIL PROTECTED] pts/9 ~ 51# rm debug.log
rm: debug.log not removed: No space left on device
0 3 [EMAIL PROTECTED] pts/9 ~ 55# rm -f debug.log
2 3 [EMAIL PROTECTED] pts/9 ~ 56# ls -l debug.log
-rw-r--r--   1 th1224   2027048 Jun 29 23:24 debug.log
0 3 [EMAIL PROTECTED] pts/9 ~ 58# :> debug.log
debug.log: No space left on device.
0 3 [EMAIL PROTECTED] pts/9 ~ 63# ls -l debug.log
-rw-r--r--   1 th1224   2027048 Jun 29 23:24 debug.log

There are no snapshots, so removing/clearing the files /should/
be a way to free some space there.

Of course this is the same filesystem where zdb dumps core - see:

*Synopsis*: zdb dumps core - bad checksum
http://bt2ws.central.sun.com/CrPrint?id=6437157
*Change Request ID*: 6437157

(zpool reports the RaidZ pool as healthy while
zdb crashes with a 'bad checksum' message.)

This message posted from opensolaris.org