On Sun, May 16, 2010 at 01:14:24PM -0700, Charles Hedrick wrote:
> We use this configuration. It works fine. However I don't know
> enough about the details to answer all of your questions.
>
> The disks are accessible from both systems at the same time. Of
> course with ZFS you had better not actually use them from both
> systems.
That's what I wanted to know. I'm not familiar with SAS fabrics, so
it's good to know that they operate similarly to multi-initiator SCSI
in a cluster.

> Actually, let me be clear about what we do. We have two J4200's and
> one J4400. One J4200 uses SAS disks, the others SATA. The two with
> SATA disks are used in Sun Cluster configurations as NFS
> servers. They fail over just fine, losing no state. The one with SAS
> is not used with Sun Cluster. Rather, it's a MySQL server with two
> systems, one of them as a hot spare. (It also acts as a MySQL slave
> server, but it uses different storage for that.) That means that our
> actual failover experience is with the SATA configuration. I will
> say from experience that in the SAS configuration both systems see
> the disks at the same time. I even managed to get ZFS to mount the
> same pool from both systems, which shouldn't be possible. Behavior
> was very strange until we realized what was going on.

Our situation is that we only need a small amount of shared storage in
the cluster. It's intended for high availability of core services,
such as DNS and NIS, rather than as a NAS server.

> I get the impression that they have special hardware in the SATA
> version that simulates SAS dual-interface drives. That's what lets
> you use SATA drives in a two-node configuration. There's also some
> additional software setup for that configuration.

That would be the SATA interposer that does that.

-- 
-Gary Mills-        -Unix Group-        -Computer and Network Services-

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss