> 
> 
> The SE also told me that Sun Cluster requires
> hardware raid, which
> conflicts with the general recommendation to feed ZFS
> raw disk. It seems
> such a configuration would either require configuring
> zdevs directly on the
> raid LUNs, losing ZFS self-healing and checksum
> correction features, or
> losing space to not only the hardware raid level, but
> a partially redundant
> ZFS level as well. What is the general consensus on
> the best way to deploy
> ZFS under a cluster using hardware raid?

I have a pair of 3510FC units, each exporting 2 RAID-5 (5-disk) LUNs.

On the T2000 I map a LUN from each array into a mirror set, then add the 2nd 
set the same way to the ZFS pool.   I guess it's RAID-5+1+0.  Yes, we have a 
multipath SAN setup too.

e.g.

{cyrus1:vf5:133} zpool status -v
  pool: ms1
 state: ONLINE
 scrub: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        ms1                                        ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c4t600C0FF0000000000A73D97F16461700d0  ONLINE       0     0     0
            c4t600C0FF0000000000A719D7C1126E500d0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c4t600C0FF0000000000A73D94517C4A900d0  ONLINE       0     0     0
            c4t600C0FF0000000000A719D38B93FD200d0  ONLINE       0     0     0

errors: No known data errors
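
For reference, a pool like that is just a mirrored create followed by a 
mirrored add, something along these lines (reconstructed from the status 
output above, so treat it as a sketch rather than the exact command history):

        # first mirror: one RAID-5 LUN from each 3510FC
        zpool create ms1 mirror c4t600C0FF0000000000A73D97F16461700d0 \
                                c4t600C0FF0000000000A719D7C1126E500d0

        # second mirror: again one LUN from each array
        zpool add ms1 mirror c4t600C0FF0000000000A73D94517C4A900d0 \
                             c4t600C0FF0000000000A719D38B93FD200d0

That way each top-level mirror straddles both arrays, which is what lets an 
entire 3510FC disappear without the pool losing data or redundancy.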

Works great.  Nothing beats having an entire 3510FC down and never having users 
notice there is a problem.  I was replacing a controller in the 2nd array and 
goofed up my cabling, taking the entire array offline.  Not a hiccup in service, 
although I could see the problem in zpool status.  I sorted everything out, 
plugged it up right, and everything was fine.

I like very much that the 3510 knows it has a global spare that is used for 
that array, and having that level of things handled locally.  In ZFS, AFAICT, 
there is no way to specify what affinity a spare has, so if a spare from one 
array went hot to replace a disk in the other array, it would become an 
undesirable cross-array dependency.
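
In other words, the most you can do is add a pool-wide spare, something like 
the line below (the device name is just a placeholder for whatever spare LUN 
you carve out), and ZFS will pull it in for whichever mirror needs it, with 
no regard for which array it lives on:

        # pool-wide hot spare; no way to pin it to a particular vdev or array
        zpool add ms1 spare c4t<spare-LUN>d0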
 
 