[zfs-discuss] Re: where has all my space gone? (with zfs, mountroot + b38)

2006-07-04 Thread James C. McPherson


Hi folks,
here's what I hope is the final entry in this saga.

At last, success.

I bfu'd to 2nd July 2006 bits, added

[ $fstype = zfs ] && mntopts=${mntopts},rw

to /lib/svc/method/fs-usr @ line 51, and on the reboot
I waited for the delete queue to flush.

And waited. And waited some more. And after about 10 hours
and 8 panics variously due to my disk ignoring me and
assertion failures because we'd filled up various caches
I was back to using a minuscule __7%__ of my disk.

Just what I'd wanted :)
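For anyone following along at home, a quick way to confirm the space
really has come back (a minimal sketch using stock commands, not part
of the original post; the pool/filesystem names are the ones used
elsewhere in this thread):

# the root fs must be writable for the delete queue to run at all
zfs get readonly root_pool/root_filesystem
# watch the usage drop as the queue drains
df -k /
zpool list root_pool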


Note to self - always make sure you've got bootable media
with the current ZFS module in the miniroot!

Thanks to Team ZFS for all the help in getting over this
annoying issue.


cheers,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-06-26 Thread Tabriz

James C. McPherson wrote:

James C. McPherson wrote:

Jeff Bonwick wrote:

6420204 root filesystem's delete queue is not running

The workaround for this bug is to issue the following command...
# zfs set readonly=off pool/fs_name
This will cause the delete queue to start up and should flush your queue.
Thanks for the update.  James, please let us know if this solves 
your problem.

yes, I've tried that several times and it didn't work for me at all.
One thing that worked a *little* bit was to set readonly=on, then
go in with mdb -kw and set the drained flag on root_pool to 0 and
then re-set readonly=off. But that only freed up about 2Gb.


Here's the next installment in the saga. I bfu'd to include Mark's
recent putback, rebooted, re-ran the set readonly=off op on the
root pool and root filesystem, and waited. Nothing. Nada. Not a
sausage.

Here's my root filesystem delete head:


> ::fsinfo ! head -2
VFSP FS  MOUNT
fbcaa4e0 zfs /
> fbcaa4e0::print struct vfs vfs_data |::print struct zfsvfs z_delete_head

{
z_delete_head.z_mutex = {
_opaque = [ 0 ]
}
z_delete_head.z_cv = {
_opaque = 0
}
z_delete_head.z_quiesce_cv = {
_opaque = 0
}
z_delete_head.z_drained = 0x1
z_delete_head.z_draining = 0
z_delete_head.z_thread_target = 0
z_delete_head.z_thread_count = 0
z_delete_head.z_znode_count = 0x5ce4
z_delete_head.z_znodes = {
list_size = 0xc0
list_offset = 0x10
list_head = {
list_next = 0x9232ded0
list_prev = 0xfe820d2c16b0
}
}
}




I also went in with mdb -kw and set z_drained to 0, then re-set the
readonly flag... still nothing. Pool usage is now up to ~93%, and a
zdb run shows lots of leaked space too:

[snip bazillions of entries re leakage]

block traversal size 273838116352 != alloc 274123164672 (leaked 285048320)

bp count: 5392224
bp logical:454964635136  avg:  84374
bp physical:   272756334592  avg:  50583compression:   1.67
bp allocated:  273838116352  avg:  50783compression:   1.66
SPA allocated: 274123164672 used: 91.83%

Blocks  LSIZE   PSIZE   ASIZE avgcomp   %Total  Type
 3  48.0K  8K   24.0K  8K6.00 0.00  L1 deferred free
 5  44.0K   14.5K   37.0K   7.40K3.03 0.00  L0 deferred free
 8  92.0K   22.5K   61.0K   7.62K4.09 0.00  deferred free
 1512 512  1K  1K1.00 0.00  object directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  object array
 116K   1.50K   3.00K   3.00K   10.67 0.00  packed nvlist
 -  -   -   -   -   --  packed nvlist size
 116K  1K   3.00K   3.00K   16.00 0.00  L1 bplist
 116K 16K 32K 32K1.00 0.00  L0 bplist
 232K   17.0K   35.0K   17.5K1.88 0.00  bplist
 -  -   -   -   -   --  bplist header
 -  -   -   -   -   --  SPA space map header
   140  2.19M364K   1.06M   7.79K6.16 0.00  L1 SPA space map
 5.01K  20.1M   15.4M   30.7M   6.13K1.31 0.01  L0 SPA space map
 5.15K  22.2M   15.7M   31.8M   6.17K1.42 0.01  SPA space map
 1  28.0K   28.0K   28.0K   28.0K1.00 0.00  ZIL intent log
 232K  2K   6.00K   3.00K   16.00 0.00  L6 DMU dnode
 232K  2K   6.00K   3.00K   16.00 0.00  L5 DMU dnode
 232K  2K   6.00K   3.00K   16.00 0.00  L4 DMU dnode
 232K   2.50K   7.50K   3.75K   12.80 0.00  L3 DMU dnode
15   240K   50.5K152K   10.1K4.75 0.00  L2 DMU dnode
   594  9.28M   3.88M   11.6M   20.1K2.39 0.00  L1 DMU dnode
 68.7K  1.07G274M549M   7.99K4.00 0.21  L0 DMU dnode
 69.3K  1.08G278M561M   8.09K3.98 0.21  DMU dnode
 3  3.00K   1.50K   4.50K   1.50K2.00 0.00  DMU objset
 -  -   -   -   -   --  DSL directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  DSL directory child map
 2 1K  1K  2K  1K1.00 0.00  DSL dataset snap map
 5  64.5K   7.50K   15.0K   3.00K8.60 0.00  DSL props
 -  -   -   -   -   --  DSL dataset
 -  -   -   -   -   --  ZFS znode
 -  -   -   -   -   --  ZFS ACL
 2.82K  45.1M   2.93M   5.85M   2.08K   15.41 0.00  L2 ZFS plain file
  564K  8.81G612M   1.19G   2.17K   14.76 0.47  L1 ZFS plain file
 4.40M   414G253G253G   57.5K1.6399.21  L0 ZFS plain file
 4.95M   422G254G254G   51.4K1.6799.68  ZFS plain file
 116K  1K   3.00K   3.00K   16.00 0.00  L2 ZFS directory

Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-06-23 Thread James C. McPherson

Mark Shellenbaum wrote:
...
So we have a bunch of stuff in the in-core delete queue, but no threads 
to process them.  The fact that we don't have the threads is related to 
the bug that Tabriz is working on.


Hi Mark,
after installing your fixes from three days ago and (cough!) ensuring
that my boot archive contained them, I then spent the next 7 or so
hours waiting for the delete queue to be flushed.

In that time my root disk (a Maxtor) decided it didn't like me much (I
was asking it to do too much I/O), so zfs panicked... then a few single-
user boots later (where each time the boot process was stuck in the
fs-usr service, flushing the queue) I'm finally back to having the
disk space that I think I should have.

My one remaining concern is that I'm not sure I've got all my zfs
bits totally sync'd with my kernel, so I'll be bfu'ing again tomorrow
just to make sure.


Thanks for your help with this, I really, really appreciate it.


best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems


Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-06-22 Thread James C. McPherson

James C. McPherson wrote:

Jeff Bonwick wrote:

6420204 root filesystem's delete queue is not running

The workaround for this bug is to issue the following command...
# zfs set readonly=off pool/fs_name
This will cause the delete queue to start up and should flush your queue.
Thanks for the update.  James, please let us know if this solves your 
problem.

yes, I've tried that several times and it didn't work for me at all.
One thing that worked a *little* bit was to set readonly=on, then
go in with mdb -kw and set the drained flag on root_pool to 0 and
then re-set readonly=off. But that only freed up about 2Gb.


Here's the next installment in the saga. I bfu'd to include Mark's
recent putback, rebooted, re-ran the set readonly=off op on the
root pool and root filesystem, and waited. Nothing. Nada. Not a
sausage.

Here's my root filesystem delete head:


> ::fsinfo ! head -2
VFSP FS  MOUNT
fbcaa4e0 zfs /
> fbcaa4e0::print struct vfs vfs_data |::print struct zfsvfs z_delete_head

{
z_delete_head.z_mutex = {
_opaque = [ 0 ]
}
z_delete_head.z_cv = {
_opaque = 0
}
z_delete_head.z_quiesce_cv = {
_opaque = 0
}
z_delete_head.z_drained = 0x1
z_delete_head.z_draining = 0
z_delete_head.z_thread_target = 0
z_delete_head.z_thread_count = 0
z_delete_head.z_znode_count = 0x5ce4
z_delete_head.z_znodes = {
list_size = 0xc0
list_offset = 0x10
list_head = {
list_next = 0x9232ded0
list_prev = 0xfe820d2c16b0
}
}
}
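For the record, the by-hand poke described in the next paragraph looks
roughly like this (a sketch only, not a captured transcript: fbcaa4e0
is the vfs address from the ::fsinfo output above, ADDR stands for
whatever address ::print -a reports on your system, and the 4-byte
write assumes z_drained is a boolean_t):

# mdb -kw
> fbcaa4e0::print struct vfs vfs_data |::print -a struct zfsvfs z_delete_head.z_drained
> ADDR/W 0
> $q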




I also went in with mdb -kw and set z_drained to 0, then re-set the
readonly flag... still nothing. Pool usage is now up to ~93%, and a
zdb run shows lots of leaked space too:

[snip bazillions of entries re leakage]

block traversal size 273838116352 != alloc 274123164672 (leaked 285048320)

bp count: 5392224
bp logical:454964635136  avg:  84374
bp physical:   272756334592  avg:  50583compression:   1.67
bp allocated:  273838116352  avg:  50783compression:   1.66
SPA allocated: 274123164672 used: 91.83%

Blocks  LSIZE   PSIZE   ASIZE avgcomp   %Total  Type
 3  48.0K  8K   24.0K  8K6.00 0.00  L1 deferred free
 5  44.0K   14.5K   37.0K   7.40K3.03 0.00  L0 deferred free
 8  92.0K   22.5K   61.0K   7.62K4.09 0.00  deferred free
 1512 512  1K  1K1.00 0.00  object directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  object array
 116K   1.50K   3.00K   3.00K   10.67 0.00  packed nvlist
 -  -   -   -   -   --  packed nvlist size
 116K  1K   3.00K   3.00K   16.00 0.00  L1 bplist
 116K 16K 32K 32K1.00 0.00  L0 bplist
 232K   17.0K   35.0K   17.5K1.88 0.00  bplist
 -  -   -   -   -   --  bplist header
 -  -   -   -   -   --  SPA space map header
   140  2.19M364K   1.06M   7.79K6.16 0.00  L1 SPA space map
 5.01K  20.1M   15.4M   30.7M   6.13K1.31 0.01  L0 SPA space map
 5.15K  22.2M   15.7M   31.8M   6.17K1.42 0.01  SPA space map
 1  28.0K   28.0K   28.0K   28.0K1.00 0.00  ZIL intent log
 232K  2K   6.00K   3.00K   16.00 0.00  L6 DMU dnode
 232K  2K   6.00K   3.00K   16.00 0.00  L5 DMU dnode
 232K  2K   6.00K   3.00K   16.00 0.00  L4 DMU dnode
 232K   2.50K   7.50K   3.75K   12.80 0.00  L3 DMU dnode
15   240K   50.5K152K   10.1K4.75 0.00  L2 DMU dnode
   594  9.28M   3.88M   11.6M   20.1K2.39 0.00  L1 DMU dnode
 68.7K  1.07G274M549M   7.99K4.00 0.21  L0 DMU dnode
 69.3K  1.08G278M561M   8.09K3.98 0.21  DMU dnode
 3  3.00K   1.50K   4.50K   1.50K2.00 0.00  DMU objset
 -  -   -   -   -   --  DSL directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  DSL directory child map
 2 1K  1K  2K  1K1.00 0.00  DSL dataset snap map
 5  64.5K   7.50K   15.0K   3.00K8.60 0.00  DSL props
 -  -   -   -   -   --  DSL dataset
 -  -   -   -   -   --  ZFS znode
 -  -   -   -   -   --  ZFS ACL
 2.82K  45.1M   2.93M   5.85M   2.08K   15.41 0.00  L2 ZFS plain file
  564K  8.81G612M   1.19G   2.17K   14.76 0.47  L1 ZFS plain file
 4.40M   414G253G253G   57.5K1.6399.21  L0 ZFS plain file
 4.95M   422G254G254G   51.4K1.6799.68  ZFS plain file
 116K  1K   3.00K   3.00K   16.00 0.00  L2 ZFS directory
   261  4.08M280K839K   3.21K   14.94 

Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-06-22 Thread Mark Shellenbaum

James C. McPherson wrote:

James C. McPherson wrote:

Jeff Bonwick wrote:

6420204 root filesystem's delete queue is not running

The workaround for this bug is to issue the following command...
# zfs set readonly=off pool/fs_name
This will cause the delete queue to start up and should flush your queue.
Thanks for the update.  James, please let us know if this solves your 
problem.

yes, I've tried that several times and it didn't work for me at all.
One thing that worked a *little* bit was to set readonly=on, then
go in with mdb -kw and set the drained flag on root_pool to 0 and
then re-set readonly=off. But that only freed up about 2Gb.


Here's the next installment in the saga. I bfu'd to include Mark's
recent putback, rebooted, re-ran the set readonly=off op on the
root pool and root filesystem, and waited. Nothing. Nada. Not a
sausage.

Here's my root filesystem delete head:


> ::fsinfo ! head -2
VFSP FS  MOUNT
fbcaa4e0 zfs /
> fbcaa4e0::print struct vfs vfs_data |::print struct zfsvfs z_delete_head

{
z_delete_head.z_mutex = {
_opaque = [ 0 ]
}
z_delete_head.z_cv = {
_opaque = 0
}
z_delete_head.z_quiesce_cv = {
_opaque = 0
}
z_delete_head.z_drained = 0x1
z_delete_head.z_draining = 0
z_delete_head.z_thread_target = 0
z_delete_head.z_thread_count = 0
z_delete_head.z_znode_count = 0x5ce4


So we have a bunch of stuff in the in-core delete queue, but no threads 
to process them.  The fact that we don't have the threads is related to 
the bug that Tabriz is working on.
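Putting a decimal figure on that (my arithmetic, using mdb itself as
the calculator): z_znode_count = 0x5ce4 is 23780 znodes sitting on the
delete queue, while z_thread_target and z_thread_count are both zero,
so nothing will ever drain it.

> 0x5ce4=D
                23780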


  -Mark






Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-22 Thread James C. McPherson

Hi Darren


Darren J Moffat wrote:

James C. McPherson wrote:

...

I know that zdb is private and totally unstable, but could we get
the manpage for it to at least say what the LSIZE, PSIZE and ASIZE
columns mean please?

See Section 2.6 of the ZFS On-Disk Specification:
-- BEGIN QUOTE --
lsize: Logical size.  The size of the data without compression, raidz or
gang overhead.


psize: Physical size of the block on disk after compression.

asize: Allocated size, total size of all blocks allocated to hold this 
data including any gang headers or raidz parity information.


If compression is turned off and ZFS is not on raidz storage, lsize, 
asize and psize will all be equal.


All sizes are stored as the number of 512 byte sectors (minus one) 
needed to represent the size of this block.

-- END QUOTE --
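A quick worked example of that last encoding (my own illustration, not
from the spec): a 128K block with no compression needs 256 sectors of
512 bytes, so the stored size field is 255.

# "number of 512-byte sectors, minus one", for a 128K block
echo $(( 128 * 1024 / 512 - 1 ))        # prints 255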


Thank you for the pointer.

/me slaps self on side of head for research failure

I'll go and do some more reading :)

cheers,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems


Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-22 Thread Tabriz Leman



Jeff Bonwick wrote:

I've had a zdb -bv root_pool running for about 30 minutes now.. it
just finished and of course told me that everything adds up:



This is definitely the delete queue problem:

  

Blocks  LSIZE   PSIZE   ASIZE avgcomp   %Total  Type
  4.18M   357G222G223G   53.2K1.6199.68  ZFS plain file
  8.07K   129M   47.2M   94.9M   11.8K2.74 0.04  ZFS delete queue



Under normal circumstances, the delete queue should be empty.
Here, the delete queue *itself* is 129M, which means that it's
probably describing many GB of data to be deleted.

The problem is described here:

6420204 root filesystem's delete queue is not running

Tabriz, any update on this?

  
Well, unfortunately, the update is no update.  I have not had a chance 
to fix this yet.  I will try to get to it this week or early next week.  
The workaround for this bug is to issue the following command...


# zfs set readonly=off pool/fs_name

This will cause the delete queue to start up and should flush your queue.
I apologize for the inconvenience.
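Spelled out against the names used elsewhere in this thread (a sketch,
not part of the original message):

# apply the workaround to the root filesystem
zfs set readonly=off root_pool/root_filesystem
# the "ZFS delete queue" line in zdb's type breakdown should shrink
# back towards empty once the queue is being processed (zdb -bv can
# take a long time on a large pool)
zdb -bv root_pool | grep 'delete queue'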

Tabriz

Jeff




[zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-21 Thread James C. McPherson

James C. McPherson wrote:

Hi all,
I got a new 300Gb SATA disk last Friday for my u20. After figuring
out that the current u20 BIOS barfs on EFI labels, and after creating
a whole-disk-sized slice0, I used TimF's script to implement zfsroot.
All seemed well and good until this morning when I did a df:
pieces: $ df -k /
Filesystem            kbytes    used   avail capacity  Mounted on
root_pool/root_filesystem
                     286949376 183288496 103652571    64%    /
Now try as I might, I'm unable to account for more than about 26Gb
on this disk.
Where has all my space gone?



Ok, I thought things were going ok until today when I realised that
not only has the space in use not gone down, but even after acquiring
more disks and moving 22gb of data *off* root_pool, my in use count
on root_pool has increased.

I've bfu'd to 20/May archives and done update_nonON to build 40 as
well (though I noticed this yesterday before I updated).


Using du -sk on the local directories under /:

root_pool/root_filesystem shows

(kb)
919 bfu.child
99  bfu.conflicts
796 bfu.parent
0   bin
53330   boot
5   Desktop
445 dev
217 devices
85831   etc
1697108 export
56977   kernel
578545  opt/netbeans
4633opt/onbld
336935  opt/openoffice.org2.0
1788opt/OSOLopengrok
4983opt/PM
6018opt/schily
7   opt/SUNWits
49  opt/SUNWmlib
441595  opt/SUNWspro
37954   platform
1489sbin
3149821 usr
3202887 var

for a grand total of ~ 9.2gb
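(For what it's worth, a one-liner that produces a total like that, as
a sketch; on Solaris, du's -d flag keeps it from wandering into the
other mounted filesystems:)

du -dsk /* 2>/dev/null | awk '{ kb += $1 } END { printf("%.1f GB\n", kb / 1048576) }'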


whereas my new pool, inout, has these filesystems:

  18G   /scratch
 739M   /opt/local
 945M   /opt/csw
 2.4G   /opt/hometools


pieces: # df -k
Filesystem            kbytes    used   avail capacity  Mounted on
root_pool/root_filesystem
                     286949376 234046924 52865843    82%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 8097836     768 8097068     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/usr/lib/libc/libc_hwcap2.so.1
                     286912767 234046924 52865843    82%    /lib/libc.so.1
fd                         0       0       0     0%    /dev/fd
swap                 8097148      80 8097068     1%    /tmp
swap                 8097104      40 8097064     1%    /var/run
inout/scratch        132120576 18536683 109397087    15%    /scratch
/dev/dsk/c1d0s0      8266719 6941900 1242152    85%    /ufsroot
inout/optlocal       132120576  752264 109397087     1%    /opt/local
inout/csw            132120576  959117 109397087     1%    /opt/csw
inout/hometools      132120576 2470511 109397087     3%    /opt/hometools
inout                132120576      24 109397087     1%    /inout
root_pool            286949376      24 52865843     1%    /root_pool
/export/home/jmcp    286912767 234046924 52865843    82%    /home/jmcp
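Another quick cross-check at this point is ZFS's own per-dataset and
per-pool accounting (a sketch, not output captured from the system in
question):

zfs get used,available,referenced root_pool/root_filesystem
zpool list root_pool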


I've had a zdb -bv root_pool running for about 30 minutes now.. it
just finished and of course told me that everything adds up:



Traversing all blocks to verify nothing leaked ...



No leaks (block sum matches space maps exactly)

bp count: 4575397
bp logical:384796715008  avg:  84101
bp physical:   238705154048  avg:  52171compression:   1.61
bp allocated:  239698446336  avg:  52388compression:   1.61
SPA allocated: 239698446336 used: 80.30%

Blocks  LSIZE   PSIZE   ASIZE avgcomp   %Total  Type
12   132K   32.5K   89.0K   7.42K4.06 0.00  deferred free
 1512 512  1K  1K1.00 0.00  object directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  object array
 116K   1.50K   3.00K   3.00K   10.67 0.00  packed nvlist
 -  -   -   -   -   --  packed nvlist size
 232K   17.0K   35.0K   17.5K1.88 0.00  bplist
 -  -   -   -   -   --  bplist header
 -  -   -   -   -   --  SPA space map header
 5.78K  24.8M   17.6M   35.6M   6.16K1.41 0.02  SPA space map
 116K 16K 16K 16K1.00 0.00  ZIL intent log
 59.5K   952M237M477M   8.01K4.02 0.21  DMU dnode
 3  3.00K   1.50K   4.50K   1.50K2.00 0.00  DMU objset
 -  -   -   -   -   --  DSL directory
 3  1.50K   1.50K   3.00K  1K1.00 0.00  DSL directory child map
 2 1K  1K  2K  1K1.00 0.00  DSL dataset snap map
 5  64.5K   7.50K   15.0K   3.00K8.60 0.00  DSL props
 -  -   -   -   -   --  DSL dataset
 -  -   -   -   -   --  ZFS znode
 -  -   -   -   -   --  ZFS ACL
 4.18M   357G222G223G   53.2K  

Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-21 Thread Mike Gerdts

On 5/21/06, James C. McPherson [EMAIL PROTECTED] wrote:

Using du -sk on the local directories under /:


I assume that you have also looked through /proc/*/fd/* to be sure
that there aren't any big files with a link count of zero (file has
been rm'd but process still has it open).  The following commands may
be helpful:

Find the largest files that are open:

# du -k /proc/*/fd/* | sort -n

or

Find all open files that have a link count of zero (have been removed)

# ls -l /proc/*/fd/* \
   | nawk 'BEGIN { print "Size_MB File" }
   $2 == 0 && $1 ~ /^-/ {
  printf "%7d %s\n", $5/1024/1024, $NF
   }'

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-21 Thread James C. McPherson

Mike Gerdts wrote:

On 5/21/06, James C. McPherson [EMAIL PROTECTED] wrote:

Using du -sk on the local directories under /:


I assume that you have also looked through /proc/*/fd/* to be sure
that there aren't any big files with a link count of zero (file has
been rm'd but process still has it open).  The following commands may
be helpful:

Find the largest files that are open:

# du -k /proc/*/fd/* | sort -n

or

Find all open files that have a link count of zero (have been removed)

# ls -l /proc/*/fd/* \
   | nawk 'BEGIN { print "Size_MB File" }
   $2 == 0 && $1 ~ /^-/ {
  printf "%7d %s\n", $5/1024/1024, $NF
   }'


Hi Mike,
Thanks for the script, but no, it produced only this:


Size_MB File
/proc/102286/fd/3: No such file or directory
/proc/102286/fd/63: No such file or directory


The disk usage has persisted across several reboots.

--
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems


Re: [zfs-discuss] Re: where has all my space gone? (with zfs mountroot + b38)

2006-05-21 Thread Jeff Bonwick
 I've had a zdb -bv root_pool running for about 30 minutes now.. it
 just finished and of course told me that everything adds up:

This is definitely the delete queue problem:

 Blocks  LSIZE   PSIZE   ASIZE avgcomp   %Total  Type
   4.18M   357G222G223G   53.2K1.6199.68  ZFS plain file
   8.07K   129M   47.2M   94.9M   11.8K2.74 0.04  ZFS delete queue

Under normal circumstances, the delete queue should be empty.
Here, the delete queue *itself* is 129M, which means that it's
probably describing many GB of data to be deleted.
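For a rough sense of scale (a back-of-envelope added here, assuming on
the order of 64 bytes of ZAP data per queued entry, which is an
assumption rather than a measurement): 129M of queue metadata works
out to a couple of million queued files, and at even modest file sizes
that is tens to hundreds of GB waiting to be freed.

# toy arithmetic only: queue size / assumed bytes-per-entry
echo $(( 129 * 1024 * 1024 / 64 ))      # ~2.1 million entries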

The problem is described here:

6420204 root filesystem's delete queue is not running

Tabriz, any update on this?

Jeff
