On a v40z running snv_51, I'm doing a "zpool replace z c1t4d0 c1t5d0".
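(For anyone following along: this is just the stock replace workflow from zpool(1M); the sketch below is generic, nothing special on my end:)

    # attach c1t5d0 alongside the suspect c1t4d0 under a temporary
    # "replacing" vdev; c1t4d0 is detached automatically once the
    # resilver finishes
    zpool replace z c1t4d0 c1t5d0

    # watch resilver progress; the "replacing" vdev shows up under
    # the affected raidz1 group until it completes
    zpool status z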
(Why the replace? The outgoing disk has been reporting read errors, sporadically at first but with increasing frequency over time.)

zpool iostat -v shows writes going to the old (outgoing) disk as well as to the replacement disk. Is this intentional? It seems counterintuitive, since I'd think you'd want to touch a suspect disk as little as possible, and as nondestructively as possible.

A representative snapshot from "zpool iostat -v":

                   capacity     operations    bandwidth
pool             used  avail   read  write   read  write
-------------   -----  -----  -----  -----  -----  -----
z                306G   714G  1.43K    658  23.5M  1.11M
  raidz1         109G   231G  1.08K    392  22.3M   497K
    replacing       -      -      0   1012      0  5.72M
      c1t4d0        -      -      0    753      0  5.73M
      c1t5d0        -      -      0    790      0  5.72M
    c2t12d0         -      -    339    177  9.46M   149K
    c2t13d0         -      -    317    177  9.08M   149K
    c3t12d0         -      -    330    181  9.27M   147K
    c3t13d0         -      -    352    180  9.45M   146K
  raidz1         100G   240G    117    101   373K   225K
    c1t3d0          -      -     65     33  3.99M  64.1K
    c2t10d0         -      -     60     44  3.77M  63.2K
    c2t11d0         -      -     62     42  3.87M  63.4K
    c3t10d0         -      -     63     42  3.88M  62.3K
    c3t11d0         -      -     65     35  4.06M  61.8K
  raidz1        96.2G   244G    234    164   768K   415K
    c1t2d0          -      -    129     49  7.85M   112K
    c2t8d0          -      -    133     54  8.05M   112K
    c2t9d0          -      -    132     56  8.08M   113K
    c3t8d0          -      -    132     52  8.01M   113K
    c3t9d0          -      -    132     49  8.16M   112K

- Bill
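P.S. A way to cross-check this outside of ZFS's own accounting would be plain iostat(1M); per-device stats straight from the driver should show whether the outgoing disk is really seeing writes (device names here are just the ones from my box):

    # extended per-device statistics at 5-second intervals,
    # filtered down to the header line and the two disks in
    # the replacing vdev
    iostat -xn 5 | egrep 'device|c1t4d0|c1t5d0'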