This is accomplished with the Nexenta HA Cluster plugin.  The plugin is
based on RSF-1 from High-Availability.Com, and you can read more about it here:

You can do either option 1 or option 2 that you put forth.  There is some
failover time, but the latest version of Nexenta (3.1.1) includes
additional tweaks that bring the failover time down significantly.
Depending on pool configuration and load, failover can complete in under 10
seconds, based on some of my internal testing.

-Matt Breitbach

-----Original Message-----
On Behalf Of Jim Klimov
Sent: Tuesday, November 08, 2011 5:53 PM
Subject: Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

Hello all,

   A couple of months ago I wrote up some ideas about clustered
ZFS with shared storage, but the idea was generally disregarded
as not something to be done in the near term due to technological
limitations.

   Recently I stumbled upon a Nexenta+Supermicro report [1] about
a cluster-in-a-box with shared storage, boasting an "active-active
cluster" with "transparent failover". Now, I am not certain how
these two phrases fit in the same sentence, and maybe it is some
marketing mixup, but I see a few possible explanations:

1) The shared storage (all 16 disks are accessible to both
    motherboards) is split into two ZFS pools, each mounted
    by one node normally. If a node fails, another imports
    the pool and continues serving it.

2) All disks are aggregated into one pool, and one node
    serves it while another is in hot standby.

    Ideas (1) and (2) may well contradict the claim that
    the failover is seamless and transparent to clients.
    A pool import usually takes some time, possibly long if
    fixups are needed, and TCP sessions are likely to get
    broken. Still, maybe the clusterware solves this...

3) Nexenta did implement a shared ZFS pool with both nodes
    accessing all of the data instantly and cleanly.
    Can this be true? ;)
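For concreteness, the failover described in options (1) and (2) amounts to a pool migration between the two heads. A minimal manual sketch is below; the pool name "tank" is hypothetical, and HA clusterware such as RSF-1 automates these steps (plus fencing, so both nodes never import the pool at once):

```shell
# Manual ZFS pool migration between two heads that see the same disks.
# Pool name "tank" is hypothetical; clusterware automates this sequence.

# On the retiring node, if it is still alive:
zpool export tank

# On the surviving node (the shared disks are visible to both heads):
zpool import -f tank    # -f forces import if the old node died uncleanly
zpool status tank       # verify pool health after takeover
```

The import step is where the failover time goes, and services such as NFS or iSCSI must then come up on the new node, which is why client TCP sessions can break unless the clusterware masks the transition.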

If this is not a deeply-kept trade secret, can the Nexenta
people elaborate in technical terms how this cluster works?


//Jim Klimov
zfs-discuss mailing list
