Heh, it might have been me who suggested that. I'm testing the idea out at the moment, but as I'm new to Solaris it's taking some time.
So far I've confirmed that you can import iSCSI volumes into ZFS fine, but you need to use static discovery. If you use sendtargets, it breaks when devices go offline (it hangs iSCSI and ZFS, and then Solaris won't boot).

I've also got a basic cluster running with HA-ZFS mirroring a pair of iSCSI disks, and HA-NFS running on top of that. That appears to work fine too, and is pretty reliable.

In terms of recovery time after one half of the mirror goes down, I thought ZFS already had that feature - it was one of the things I read that gave me this idea in the first place. Have a look at page 15 of this presentation; it specifically says "a 5 second outage takes 5 seconds to repair":

http://opensolaris.org/os/community/zfs/docs/zfs_last.pdf

I read that to mean that if the iSCSI server breaks but is repairable, you will only need to re-sync the data that has changed. Of course, if the whole thing dies you have rather a lot of data to shift around, but if you're running ZFS with dual-parity RAID on the x4500s, the chances are you'll only need to do that when hell freezes over :)

I'm doing my level best to kill our setup at the moment. I've been pulling the (virtual) power on the iSCSI servers, resilvering ZFS, and swapping ZFS between the two cluster nodes. So far I've had a few teething problems, but it's always come back online and I've never lost any data. Even swapping active nodes in the cluster while iSCSI devices are offline isn't a problem, though I do have a lot more stress testing to do.

The latest trick is that I've now got 5 Solaris boxes running under VMware (2x iSCSI servers, 2x cluster nodes, 1x client), and I'm about to test:

VMware -> Solaris -> ZFS pool -> iSCSI -> Solaris Cluster -> HA-ZFS -> HA-NFS -> VMware

Yes, VMware is quite happy accessing an NFS store hosted within itself, although I've yet to test how it handles a cluster node failure.
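For anyone wanting to reproduce the static-discovery-plus-mirror part, this is roughly the sequence of Solaris commands involved - a sketch only, with hypothetical target IQNs, IP addresses, and device names standing in for whatever your own setup reports:

```shell
# Disable sendtargets discovery - with it enabled, an offline target
# can hang iSCSI/ZFS and prevent Solaris from booting.
iscsiadm modify discovery --sendtargets disable

# Statically configure each iSCSI target (IQNs and IPs are placeholders).
iscsiadm add static-config iqn.1986-03.com.sun:02:target-a,192.168.0.10:3260
iscsiadm add static-config iqn.1986-03.com.sun:02:target-b,192.168.0.11:3260
iscsiadm modify discovery --static enable

# Mirror the two iSCSI LUNs in a ZFS pool (device names are placeholders;
# use the cXtYdZ names that format/zpool reports for your LUNs).
zpool create tank mirror c2t1d0 c3t1d0

# After a target outage, bring the device back; ZFS resilvers only the
# blocks written while it was away, per the "5 second outage" claim.
zpool online tank c3t1d0
zpool status tank
```

The key point is that with static discovery the initiator never depends on an online target just to enumerate devices, which is what makes the offline-target case survivable.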
I'm going to test that, and then host an XP desktop on the NFS share and see how performance compares to a desktop on native storage. I figure that will give me a reasonable idea of how much overhead this is adding :)

One of the main reasons I'm testing with VMware is that I plan to access the iSCSI storage on the Thumpers via a Solaris machine hosted under VMware. That way I can connect directly to it from other virtual servers and take advantage of the 64Gbps speed and low latency of the virtual network. It means mirroring the Thumpers shouldn't add any noticeable latency to the traffic.

That's about the extent of my progress so far. I'd love to hear your feedback if you're testing this too.

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss