Measure the I/O performance with iostat. You should see output roughly
like this (from iostat -zxCn 10):
                    extended device statistics
   r/s    w/s     kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device
5948.9  349.3  40322.3  5238.1   0.1  16.7     0.0     2.7   0  330
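As a quick sanity check on numbers like those, you can back out the average read size (kr/s divided by r/s). A minimal sketch, using only the figures from the sample line above:

```python
# Figures taken from the sample iostat line above.
reads_per_sec = 5948.9     # r/s
read_kb_per_sec = 40322.3  # kr/s

avg_read_kb = read_kb_per_sec / reads_per_sec
print(f"average read size: {avg_read_kb:.1f} KB")  # roughly 6.8 KB per read
```

If the average works out to only a few KB, the workload is mostly small reads, which would help explain a very long resilver.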
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote:
On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
Hello,
I'm working with someone who replaced a failed 1TB drive (50% utilized),
on an X4540 running OS build 134, and I think something must be wrong.
Last Tuesday afternoon, zpool status reported:
scrub: resilver in progress for 306h0m, 63.87% done, 173h7m to go
(306 hours is nearly two weeks, a week being 168 hours.)
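For what it's worth, the 173h7m estimate is consistent with simple linear extrapolation from the progress figures in the status line. A sketch of that arithmetic (zpool's internal estimate may differ slightly, since it works from exact counters rather than the rounded percentage):

```python
# Progress figures from the zpool status line above.
elapsed_h = 306.0   # resilver in progress for 306h0m
done = 0.6387       # 63.87% done

total_h = elapsed_h / done          # projected total resilver time
remaining_h = total_h - elapsed_h   # projected time to go
h, m = int(remaining_h), round((remaining_h % 1) * 60)
print(f"estimated remaining: {h}h{m}m")  # close to the reported 173h7m
```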
On 11/ 2/10 11:55 AM, Ross Walker wrote:
On Nov 1, 2010, at 3:33 PM, Mark Sandrock mark.sandr...@oracle.com wrote: