On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:

> On 4/25/12 6:57 PM, Paul Kraus wrote:
>> On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams <n...@cryptonector.com> wrote:
>>> On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
>>> <richard.ell...@gmail.com> wrote:
> 
>>> 
>>> Nothing's changed.  Automounter + data migration ->  rebooting clients
>>> (or close enough to rebooting).  I.e., outage.
>> 
>>     Uhhh, not if you design your automounter architecture correctly
>> and (as Richard said) have NFS clients that are not lame, to which I'll
>> add automounters that actually work as advertised. I was designing
> 
> And applications that don't pin the mount points, and can be idled during the 
> migration. If your migration is due to a dead server, and you have pending 
> writes, you have no choice but to reboot the client(s) (and accept the data 
> loss, of course).

A reboot requirement is the mark of a lame client implementation.
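
For reference, the design Paul is describing usually comes down to replicated
entries in an indirect automounter map. A minimal sketch, with hypothetical
map, server, and path names:

    # /etc/auto_tools -- indirect map, referenced from auto_master as:
    #   /tools   auto_tools   -ro
    #
    # Multiple replicas for one key: the client picks a responding
    # server at mount time and, for read-only mounts, can fail over
    # to another replica without a remount.
    docs    -ro    nfs1,nfs2,nfs3:/export/docs
    man     -ro    nfs1:/export/man   nfs2:/export/man

Read-only failover is the easy case; read-write data with pinned mounts, as
Carson notes above, is the hard one.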

> Which is why we use AFS for RO replicated data, and NetApp clusters with 
> SnapMirror and VIPs for RW data.
> 
> To bring this back to ZFS, sadly ZFS doesn't support NFS HA without shared / 
> replicated storage, as ZFS send / recv can't preserve the data necessary to 
> have the same NFS filehandle, so failing over to a replica causes stale NFS 
> filehandles on the clients. Which frustrates me, because the technology to do 
> NFS shadow copy (which is possible in Solaris - not sure about the open 
> source forks) is a superset of that needed to do HA, but can't be used for HA.

You are correct: a ZFS send/receive will result in different file handles on
the receiver, just like rsync, tar, ufsdump+ufsrestore, etc.
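
A sketch of the failure mode, with hypothetical pool, dataset, and host names,
assuming the standby is made live behind the same service address the client
originally mounted from:

    # serverA: replicate the dataset to a standby
    zfs snapshot tank/data@mig
    zfs send tank/data@mig | ssh serverB zfs receive tank/data
    ssh serverB zfs set sharenfs=on tank/data

    # fail the service IP over to serverB, then look at a client that
    # already had the export mounted over NFSv3:
    ls /mnt/data
    # ls: /mnt/data: Stale NFS file handle
    #
    # The handles the client cached were minted against the original
    # dataset; the received copy has a different identity, so those
    # handles no longer resolve on serverB.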

Do you mean the Sun ZFS Storage 7000 Shadow Migration feature? That is not an
HA feature; it is an interposition architecture.
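
For context, on Solaris 11 that interposition is driven by the shadow dataset
property; a sketch with hypothetical names, assuming the shadow property and
the shadowstat(1M) monitor that ship with Solaris 11:

    # serverB: create a dataset that interposes in front of the old
    # NFS share; files not yet migrated are fetched from the source
    # on first access while a background worker copies the rest
    zfs create -o shadow=nfs://serverA/export/data tank/data

    # watch migration progress
    shadowstat

Clients point at the new dataset from day one, which is exactly why it is a
migration architecture rather than a failover one.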

It is possible to preserve NFSv[23] file handles in a ZFS environment using
lower-level replication such as TrueCopy, SRDF, AVS, etc., but those have
other architectural issues (aka suckage). I am open to looking at what it
would take to make a ZFS-friendly replicator that would do this, but I need
to know the business case [1].
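
The reason the low-level approach works: the target imports a bit-for-bit copy
of the pool, so object numbers, generation counts, and the rest of the
file-handle material come along for the ride. As a crude one-shot stand-in for
what TrueCopy/SRDF/AVS do continuously (hypothetical device and pool names; an
illustration, not a procedure):

    # serverA: quiesce the pool so the device image is consistent
    zpool export tank

    # copy the backing device verbatim to the standby's device
    dd if=/dev/rdsk/c0t0d0s0 bs=1024k |
        ssh serverB 'dd of=/dev/rdsk/c0t0d0s0 bs=1024k'

    # serverB: import the identical pool and take over the service IP;
    # NFSv[23] handles minted by serverA keep resolving because the
    # clients see the same filesystem identity at the same address
    zpool import tank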

The beauty of AFS and others is that the file handle equivalent is not a
number. NFSv4 also has this feature. So I have a little heartburn when people
say, "NFS sux because it has a feature I won't use, because I won't upgrade
to NFSv4 even though it was released 10 years ago."
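
NFSv4 gets this right with its handle model plus the fs_locations attribute,
which lets a server point clients at a replica without a remount. Checking
that a client actually negotiated v4 is a one-liner (hypothetical names):

    # ask for version 4 explicitly instead of taking the default
    mount -o vers=4 nfs1:/export/data /mnt/data

    # confirm what the client really mounted with
    nfsstat -m /mnt/data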

As Nico points out, there are cases where you really need a Lustre, Ceph, 
Gluster, or other 
parallel file system. That is not the design point for ZFS's ZPL or volume 
interfaces.

[1] FWIW, you can build a metropolitan-area, ZFS-based, shared-storage cluster
today for about 1/4 the cost of the NetApp Stretch Metro software license.
There is more than one way to skin a cat :-) So if the idea is to get even
lower than 1/4 the NetApp cost, it feels like a race to the bottom.

 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422