On Thu, Aug 21, 2008 at 11:46:47AM +0100, Robert Milkowski wrote:
> > Wednesday, August 20, 2008, 7:11:01 PM, you wrote:
> >
> > I'm currently working out details on an upgrade from UFS/SDS on DAS to
> > ZFS on a SAN fabric.  I'm interested in hearing how ZFS has behaved in
> > more traditional SAN environments using gear that scales vertically,
> > like EMC CLARiiON/HDS AMS/3PAR etc.  Do you experience issues with
> > zpool integrity because of MPxIO events?  Has the zpool been reliable
> > over your fabric?  Has performance been where you would have expected
> > it to be?
>
> Yes, it works fine.
We have a 2-TB ZFS pool on a T2000 server with storage on our iSCSI SAN.
The disk devices are four LUNs from our NetApp file server, with multiple
IP paths between the two.  We've tested this by disconnecting and
reconnecting the ethernet cables in turn.  Failover and failback worked
as expected, with no interruption to data flow.  It looks like this:

  $ zpool status
    pool: space
   state: ONLINE
   scrub: none requested
  config:

          NAME                                     STATE     READ WRITE CKSUM
          space                                    ONLINE       0     0     0
            c4t60A98000433469764E4A2D456A644A74d0  ONLINE       0     0     0
            c4t60A98000433469764E4A2D456A696579d0  ONLINE       0     0     0
            c4t60A98000433469764E4A476D2F6B385Ad0  ONLINE       0     0     0
            c4t60A98000433469764E4A476D2F664E4Fd0  ONLINE       0     0     0

  errors: No known data errors

> The only issue there is, with some disk arrays, a cache-flush issue -
> you can disable cache flushing on the disk array or in ZFS.
>
> Then, if you want to leverage ZFS's self-healing properties, make sure
> you have some kind of redundancy at the ZFS level, regardless of your
> redundancy on the array.

-- 
-Gary Mills-    -Unix Support-    -U of M Academic Computing and Networking-
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
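
The two suggestions quoted above can be sketched concretely.  On Solaris,
cache flushing can be disabled globally with the zfs_nocacheflush tunable
in /etc/system, and ZFS-level redundancy means building the pool from
mirror (or raidz) vdevs rather than a plain stripe.  A sketch only, not
tested against this hardware; the device names are simply reused from the
zpool status output earlier in the thread:

```
# /etc/system fragment: stop ZFS from issuing cache-flush commands.
# Only safe when the array or filer has battery-backed (NVRAM) cache,
# as a NetApp filer does; takes effect after a reboot.
set zfs:zfs_nocacheflush = 1

# ZFS-level redundancy: pair the four LUNs into two mirror vdevs so ZFS
# can self-heal a block that fails its checksum from the other side of
# the mirror.  Contrast with the striped pool shown above, where ZFS can
# detect corruption but has no second copy to repair from.
zpool create space \
    mirror c4t60A98000433469764E4A2D456A644A74d0 \
           c4t60A98000433469764E4A2D456A696579d0 \
    mirror c4t60A98000433469764E4A476D2F6B385Ad0 \
           c4t60A98000433469764E4A476D2F664E4Fd0
```

The trade-off is capacity: the mirrored layout yields half the usable
space of the four-way stripe, on top of whatever RAID the filer already
does underneath.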