> Since you're commenting on good things in B56, I'll add the following server
> observations (on an X4100):
>
> - Boot seems faster
> - There isn't any "first boot lag" any more... Solaris has always been a dog
> when dealing with devices after an install finishes. This is often seen when
> you have installed a system and run "format" for the first time... hang hang
> hang. No more!
> - Similar to the above, creating zpools on the first boot is significantly
> faster.
>
> In general B56 just feels peppier than previous releases, even ones as recent
> as B54. That big bug hunt would seem to have paid dividends! Great
> work everyone!
>
yes indeed ... I have been waiting a looong time to mirror my zpool like so:
bash-3.2# zpool status zfs0
  pool: zfs0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zfs0        ONLINE       0     0     0
          c1t9d0    ONLINE       0     0     0
          c1t10d0   ONLINE       0     0     0
          c1t11d0   ONLINE       0     0     0
          c1t12d0   ONLINE       0     0     0
          c1t13d0   ONLINE       0     0     0
          c1t14d0   ONLINE       0     0     0

errors: No known data errors
bash-3.2#
bash-3.2# zpool attach zfs0 c1t9d0 c0t9d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t9d0s0 contains a ufs filesystem.
/dev/dsk/c0t9d0s2 contains a ufs filesystem.
kiss them goodbye! bye bye ufs ... thanks for all the bits
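As an aside: before letting `-f` clobber a disk that still carries data, it can be worth confirming what is actually sitting on each slice. A minimal sketch, assuming the device names from the session above and the stock Solaris `fstyp` utility (run as root; this only reads the devices, it changes nothing):

```shell
# Ask fstyp which filesystem signature (if any) each slice carries,
# before overwriting the disk with "zpool attach -f".
for slice in /dev/dsk/c0t9d0s0 /dev/dsk/c0t9d0s2; do
    printf '%s: ' "$slice"
    fstyp "$slice" 2>/dev/null || echo "no recognized filesystem"
done
```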
bash-3.2# zpool attach -f zfs0 c1t9d0 c0t9d0
bash-3.2# zpool status zfs0
  pool: zfs0
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.05% done, 4h34m to go
config:

        NAME         STATE     READ WRITE CKSUM
        zfs0         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
          c1t10d0    ONLINE       0     0     0
          c1t11d0    ONLINE       0     0     0
          c1t12d0    ONLINE       0     0     0
          c1t13d0    ONLINE       0     0     0
          c1t14d0    ONLINE       0     0     0

errors: No known data errors
then I can check it:
bash-3.2# zpool status zfs0
  pool: zfs0
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 3.41% done, 0h47m to go
config:

        NAME         STATE     READ WRITE CKSUM
        zfs0         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t9d0   ONLINE       0     0     0
            c0t9d0   ONLINE       0     0     0
          c1t10d0    ONLINE       0     0     0
          c1t11d0    ONLINE       0     0     0
          c1t12d0    ONLINE       0     0     0
          c1t13d0    ONLINE       0     0     0
          c1t14d0    ONLINE       0     0     0

errors: No known data errors
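If you'd rather not re-run `zpool status` by hand until it finishes, the wait can be scripted; a rough sketch, assuming the pool name from above (the 60-second polling interval is arbitrary):

```shell
# Poll the pool status until "resilver in progress" no longer
# appears in the scrub line, then report completion.
while zpool status zfs0 | grep -q 'resilver in progress'; do
    sleep 60
done
echo "resilver of zfs0 complete"
```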
did the iostat option always allow an interval? is this new?
bash-3.2# zpool iostat -v zfs0 15
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         195G  7.14G      0      2  58.9K   109K
  mirror    32.6G  1.19G    114     44  12.6M   199K
    c1t9d0      -      -      0      0  50.3K  22.5K
    c0t9d0      -      -      0    135      0  12.8M
  c1t10d0   32.6G  1.19G      0      0  3.05K  21.8K
  c1t11d0   32.6G  1.19G      0      0  1.32K  21.7K
  c1t12d0   32.6G  1.19G      0      0  1.33K  21.8K
  c1t13d0   32.6G  1.19G      0      0  1.40K  21.7K
  c1t14d0   32.6G  1.19G      0      0  2.97K  21.7K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         195G  7.14G    125      2  14.9M  19.3K
  mirror    32.6G  1.19G    119      2  14.9M  19.3K
    c1t9d0      -      -    119      1  15.0M  20.3K
    c0t9d0      -      -      0    121      0  14.9M
  c1t10d0   32.6G  1.19G      3      0  45.3K      0
  c1t11d0   32.6G  1.19G      0      0  3.09K      0
  c1t12d0   32.6G  1.19G      0      0  3.44K      0
  c1t13d0   32.6G  1.19G      0      0  4.47K      0
  c1t14d0   32.6G  1.19G      1      0  17.0K      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         195G  7.14G    127      3  15.1M  22.9K
  mirror    32.6G  1.19G    120      3  15.0M  22.9K
    c1t9d0      -      -    120      1  15.0M  22.0K
    c0t9d0      -      -      0    122      0  15.0M
  c1t10d0   32.6G  1.19G      1      0  20.7K      0
  c1t11d0   32.6G  1.19G      0      0  4.43K      0
  c1t12d0   32.6G  1.19G      0      0  5.14K      0
  c1t13d0   32.6G  1.19G      0      0  2.85K      0
  c1t14d0   32.6G  1.19G      3      0  50.3K      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs0         195G  7.14G    127      3  15.2M  21.8K
  mirror    32.6G  1.19G    121      3  15.1M  21.8K
    c1t9d0      -      -    120      1  15.1M  21.8K
    c0t9d0      -      -      0    123      0  15.2M
  c1t10d0   32.6G  1.19G      2      0  36.6K      0
  c1t11d0   32.6G  1.19G      1      0  7.58K      0
  c1t12d0   32.6G  1.19G      0      0  5.71K      0
  c1t13d0   32.6G  1.19G      0      0  1.58K      0
  c1t14d0   32.6G  1.19G      1      0  17.5K      0
----------  -----  -----  -----  -----  -----  -----
^C
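For what it's worth, I believe `zpool iostat` follows the usual iostat convention of an optional interval and count, so a bounded sample doesn't need the ^C; a quick sketch with the pool from above:

```shell
# Sample pool I/O every 15 seconds, 4 times, then exit on its own
# (interval and count trail the pool name, iostat-style).
zpool iostat -v zfs0 15 4
```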
there were previous issues with memory usage in snv_52 or thereabouts,
and I'm happy to have the latest running here. On SPARC, by the way.
Dennis Clarke