For the moment, Solaris Cluster 3.2 does not support using AVS replication
within a cluster for failover of storage. We do support storage-based
replication for failover data with high-end Hitachi storage.
Also, at this point Solaris Cluster does not ship with support for zfs send.
You could probably write your own agent for zfs send using our
agent builder tool. However, integrating it with the HA-NFS agent
that ships with Solaris Cluster will require that you be familiar with
all of the failures you may hit and what recovery action you want to
take.
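
For reference, the mechanics such an agent would have to drive are just
snapshot shipping. A minimal, untested sketch, with the pool, filesystem and
host names made up purely for illustration:

   # one-time full copy of the filesystem to the standby node
   zfs snapshot tank/export@base
   zfs send tank/export@base | ssh nodeB zfs receive tank/export

   # repeated incremental updates (the part an agent or cron job would drive;
   # -F rolls the target back if it has changed since the last snapshot)
   zfs snapshot tank/export@now
   zfs send -i tank/export@base tank/export@now | ssh nodeB zfs receive -F tank/export

The commands are the easy part; the hard part, as noted above, is deciding
what the agent should do when a send fails partway through or the cluster
fails over mid-cycle.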

-Charles

a habman wrote:
Hello all,  I am interested in setting up an HA NFS server with ZFS as
the storage filesystem on Solaris 10 + Sun Cluster 3.2. This is an HPC
environment with a 70-node cluster attached. File sizes are 1-200 MB
or so, with an average around 10 MB.

I have two servers, and due to changing specs over time I have
ended up with heterogeneous storage.  They are physically close to each
other, so there are no offsite replication needs.

Server A has an Areca 12-port RAID card attached to 12 x 400 GB drives.
Server B has an onboard RAID controller with 6 available slots, which I
plan to populate with either 750 GB or 1 TB drives.

With AVS 4.0 (which I have running on a test volume pair) I am able to
mirror the zpools at the block level, but I am forced to have an equal
number of LUNs for it to work on (AVS mirrors the block devices that ZFS
sits on top of).  If I carve up each RAID set into 4 volumes, replicate
those with AVS (plus bitmap volumes), and then stripe ZFS over them,
theoretically I am in business, although this has a couple of
downsides.
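
To make the carve-up concrete, one of the four volume pairs would look
roughly like this. It is only a sketch; the SNDR syntax is from the AVS docs
as I remember them, and all of the device, bitmap and host names are made up:

   # on Server A: enable replication for one data slice plus its bitmap slice
   # (repeat for each of the four carved-out volumes)
   sndradm -e serverA /dev/rdsk/c2t0d0s0 /dev/rdsk/c2t0d0s1 \
              serverB /dev/rdsk/c3t0d0s0 /dev/rdsk/c3t0d0s1 ip sync

   # then build the striped pool over the four replicated slices
   zpool create tank c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0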

If I want to maximize performance while keeping a margin of safety in
this replicated environment, how can I best use my storage?

Option one:

  AVS + hardware RAID 5 on each side.  Make 4 LUNs and stripe ZFS on
top.  Hardware RAID takes care of drive failure, and AVS ensures that the
whole storage pool is replicated to Server B at all times. This method
does not take advantage of the disk caching ZFS can do, nor the additional
I/O scheduling ZFS would like to manage at the drive level.
Also unknown is how the SC3.2 HA ZFS module will work on an AVS-replicated
zpool, as I believe it was designed for a Fibre Channel shared set
of disks. On the plus side, this method gives block-level
replication, so the filesystems stay close to instantaneously in sync.
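
If we go this route I would at least want to confirm that the secondary is
fully synchronized before trusting a failover. From what I can see in the
AVS docs, something like the following should report the state of the sets,
though I have not verified the exact output:

   # list the configured SNDR sets and their sync state
   sndradm -P

   # watch replication I/O statistics as the sets sync
   dsstat -m sndr 5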

Option two:
  Full ZFS pools on both sides, using zfs send + zfs receive for the
replication.  This has benefits because my pools can be different
sizes and can grow, and that's OK. The pool could also be mounted on
Server B as well (most of the time).  The downside is that I have to hack
together a zfs send + receive script and cron job, which is likely not as
bombproof as the tried and tested AVS?
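
What I am imagining is something along these lines: a rough, untested sketch
with made-up pool, host and snapshot names, not a hardened solution (it
assumes an initial full send of tank/export@prev has already been done):

   #!/bin/sh
   # replicate.sh - ship the latest changes to the standby node
   POOL=tank/export
   REMOTE=serverB

   zfs snapshot $POOL@new || exit 1
   zfs send -i $POOL@prev $POOL@new | ssh $REMOTE zfs receive -F $POOL || exit 1
   # rotate snapshots on both sides so the next run has a base to send from
   zfs destroy $POOL@prev
   zfs rename $POOL@new $POOL@prev
   ssh $REMOTE zfs destroy $POOL@prev
   ssh $REMOTE zfs rename $POOL@new $POOL@prev

driven by a crontab entry along the lines of:

   0,15,30,45 * * * * /usr/local/bin/replicate.sh >/dev/null 2>&1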

So... basically, how are you all doing replication between two
different disk topologies using zfs?

I am a Solaris newbie, attracted by the smell of ZFS, so pardon my
lack of in-depth knowledge of these issues.

Thank you in advance.

Ahab
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
