> root@host:~# fmadm faulty
> --------------- ------------------------------------  -------------- ---------
> TIME            EVENT-ID                              MSG-ID         SEVERITY
> --------------- ------------------------------------  -------------- ---------
> Jan 05 08:21:09 7af1ab3c-83c2-602d-d4b9-f9040db6944a ZFS-8000-HC Major
> Host : host
> Platform : PowerEdge-R810
> Product_sn :
> Fault class : fault.fs.zfs.io_failure_wait
> Affects : zfs://pool=test
> faulted but still in service
> Problem in : zfs://pool=test
> faulted but still in service
> Description : The ZFS pool has experienced currently unrecoverable I/O
> failures. Refer to http://illumos.org/msg/ZFS-8000-HC for
> more information.
> Response : No automated response will be taken.
> Impact : Read and write I/Os cannot be serviced.
> Action : Make sure the affected devices are connected, then run
> 'zpool clear'.
> --
The pool looks healthy to me, but it isn't very well balanced. Have you been 
adding new VDEVs over time to grow it? Check whether some of the VDEVs are 
fuller than others. I don't have an OI/illumos system available ATM, but IIRC 
this can be done with 'zpool iostat -v'. Older versions of ZFS striped across 
all VDEVs regardless of fill, which slowed down write speeds rather horribly 
if some VDEVs were nearly full (>90%). This shouldn't be the case with OmniOS, 
but it *may* be the case with an old zpool version; I don't know. I'd check 
the fill rate of the VDEVs first, then perhaps try to upgrade the zpool, 
unless you need to be able to mount it on an older zpool version (on S10 or 
similar).

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
r...@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy, it is essential that the curriculum be presented 
intelligibly. It is an elementary imperative for all pedagogues to avoid 
excessive use of idioms of xenotypic etymology. In most cases, adequate and 
relevant synonyms exist in
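[Editorial note appended to the archived message: the fill check described 
above can be sketched in shell. The per-vdev alloc/free figures below are 
invented for illustration; in practice they would come from the ALLOC and 
FREE columns of 'zpool iostat -v'.]

```shell
# Per-vdev capacity figures, shaped like the ALLOC/FREE columns that
# 'zpool iostat -v' prints. The pool layout and numbers here are made up
# purely for illustration.
cat > /tmp/vdev_caps.txt <<'EOF'
raidz1-0 2.61T 1.02T
raidz1-1 3.55T 76.5G
EOF

# Fill rate per vdev = alloc / (alloc + free); flag anything over 90%.
report=$(awk '
function bytes(s,  n, u) {          # "76.5G" -> bytes (handles G and T)
    n = s + 0
    u = substr(s, length(s), 1)
    if (u == "G") n *= 1024 ^ 3
    if (u == "T") n *= 1024 ^ 4
    return n
}
{
    a = bytes($2); f = bytes($3)
    pct = 100 * a / (a + f)
    printf "%s %.0f%%%s\n", $1, pct, (pct > 90 ? "  <-- nearly full" : "")
}' /tmp/vdev_caps.txt)
printf '%s\n' "$report"
```

If one vdev sits well above 90% while another is mostly empty, that imbalance 
is a plausible cause of the slow writes; rewriting old data or growing the 
full vdev evens it out.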
zfs-discuss mailing list
