Re: [zfs-discuss] Why replacing a drive generates writes to other disks?
Robert Milkowski wrote:
> Hello zfs-discuss,
>
> Subject says it all. I first checked - no I/O activity at all to the pool named thumper-2. So I started replacing one drive with 'zpool replace thumper-2 c7t7d0 c4t1d0'. Now the question is: why am I seeing writes to disks other than c7t7d0?

Are you *sure* that nothing else is going on? Not even atime updates? Do 'zfs umount -a' and see if there are still writes to other disks. There may be some small amount of writes to update some metadata with the resilvering status. I just did this yesterday on a raidz2 pool and didn't see writes to the other disks. Maybe the code has changed since you tried?

> Also, why don't we just copy disk-to-disk when replacing a disk? It would be MUCH faster here. Probably because we're traversing metadata? But perhaps it could be done in a clever way so we end up just copying from one disk to another. Checking parity or checksums is not necessary here - that's what scrub is for. What we want in most cases is to replace the drive as fast as possible.

In some cases it would be faster, but in others not. For example, if the pool is not very full, it would be slower. Also, if the disk you're replacing is not available, a straight disk-to-disk copy would not be possible.

> On another thumper I have a failing drive (port resets, etc.), so over a week ago I issued a drive replacement. Well, it still hasn't completed even 4% in a week! The pool config is the same. It's just way too slow and, in the long term, risky.

Are you taking snapshots? They cause scrubbing / resilvering to restart (this is bug 6343667).

--matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
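The check suggested above can be sketched as a short command sequence. This is a minimal sketch, assuming a Solaris/OpenSolaris host with the pool name thumper-2 from the thread:

```shell
# Watch per-device I/O on the pool at 5-second intervals; writes
# unrelated to the resilver should show up here.
zpool iostat -v thumper-2 5

# atime updates alone can generate writes; check whether they are
# enabled on the pool's datasets.
zfs get -r atime thumper-2

# Rule out filesystem activity entirely by unmounting everything,
# then watch iostat again - any remaining writes to other disks
# would be resilver-status metadata updates.
zfs umount -a
zpool status thumper-2   # also shows resilver progress
```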
[zfs-discuss] Re: ZFS party - PANIC collection
Gino Ruopolo wrote:
>> Conclusion: after a day of tests we are starting to think that ZFS doesn't work well with MPxIO.
>
> What kind of array is this? If it is not a Sun array, then how are you configuring mpxio to recognize the array?

We are facing the same problems with a JBOD (EMC DAE2), a StorageWorks EVA and an old StorageWorks EMA.

Gino
Re: [zfs-discuss] nv59 + HA-ZFS
David Anderson wrote:
> Hi,
>
> I'm attempting to build a ZFS SAN with iSCSI+IPMP transport. I have two ZFS nodes that access iSCSI disks on the storage network, and the ZFS nodes then share ZVOLs via iSCSI to my front-end Linux boxes. My throughput from one Linux box is about 170+ MB/s with nv59 (earlier builds were about 60 MB/s), so I am pleased with the performance so far. My next step is to configure HA-ZFS for failover between the two ZFS nodes. Does Sun Cluster 3.2 work with SXCE? If so, are there any caveats for my situation?

I thought Sun Cluster's support for iSCSI was not ready. You could perhaps check with the Sun Cluster group.

Regards,
Manoj
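The ZVOL-over-iSCSI setup described above can be sketched roughly as follows. The pool name (tank), volume name (vol1), and size are examples, not from the thread; the shareiscsi property is assumed to be available on the SXCE build in use:

```shell
# Create a 100 GB ZVOL to export over iSCSI (names are illustrative).
zfs create -V 100g tank/vol1

# Export the ZVOL as an iSCSI target; on SXCE-era builds the
# shareiscsi property hands the volume to the iSCSI target daemon.
zfs set shareiscsi=on tank/vol1

# Verify the target exists and note its IQN for the Linux initiators.
iscsitadm list target -v
```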
Re: [zfs-discuss] Re: ZFS party - PANIC collection
Gino Ruopolo wrote:
>>> Conclusion: after a day of tests we are starting to think that ZFS doesn't work well with MPxIO.
>>
>> What kind of array is this? If it is not a Sun array, then how are you configuring mpxio to recognize the array?
>
> We are facing the same problems with a JBOD (EMC DAE2), a StorageWorks EVA and an old StorageWorks EMA.

What makes you think that these arrays work with mpxio? Not every array works automatically.
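One way to check whether scsi_vhci (MPxIO) has actually claimed a third-party array's LUNs is sketched below. The vendor/product strings in the scsi_vhci.conf fragment are illustrative assumptions and must match the array's actual SCSI INQUIRY data:

```shell
# List devices and whether they are under scsi_vhci (MPxIO) control.
stmsboot -L

# Show multipathed logical units and their path states.
mpathadm list lu

# For a non-Sun array, scsi_vhci typically has to be told about the
# device explicitly in /kernel/drv/scsi_vhci.conf, e.g.:
#
#   device-type-scsi-options-list =
#       "EMC     SYMMETRIX", "symmetric-option";
#   symmetric-option = 0x1000000;
#
# The vendor/product string (8 + 16 characters, space padded) must
# match the array's SCSI INQUIRY data exactly; the values above are
# examples only. A reboot or reconfigure is needed after editing.
```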