Let me jump into this thread with a different alternative: an IP-based
block device. I have seen a few successful deployments of HAST + UCARP +
ZFS on FreeBSD. If zfsonlinux is robust enough, trying DRBD + Pacemaker +
ZFS on Linux is definitely worth a look; a rough sketch follows.
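
For reference, the DRBD half of such a stack might look roughly like
this (a minimal sketch only -- node names, addresses, and disks are
placeholders, and a real setup still needs Pacemaker resource agents
on top for failover):

    # /etc/drbd.d/r0.res
    resource r0 {
        protocol C;                  # synchronous replication
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;     # local backing device
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

    # then, on whichever node is DRBD primary:
    zpool create tank /dev/drbd0

The point is that ZFS sits on top of the replicated block device, so on
failover the standby node just promotes DRBD and imports the pool.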

Thanks.


Fred

> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
> Sent: Thursday, April 26, 2012 14:00
> To: Richard Elling
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] cluster vs nfs
> 
> On Thu, Apr 26, 2012 at 12:10 AM, Richard Elling
> <richard.ell...@gmail.com> wrote:
> > On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
> > Reboot requirement is a lame client implementation.
> 
> And lame protocol design.  You could possibly migrate read-write NFSv3
> on the fly by preserving FHs and somehow updating the clients to go to
> the new server (with a hiccup in between, no doubt), but only entire
> shares at a time -- you could not migrate only part of a volume with
> NFSv3.
> 
> Of course, having migration support in the protocol does not equate to
> getting it in the implementation, but it's certainly a good step in
> that direction.
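> 
> For concreteness, this is the shape of the referral/migration hook
> that v4 does have: the server returns NFS4ERR_MOVED and the client
> fetches the fs_locations attribute to find the filesystem's new home
> (XDR roughly as in RFC 3530, trimmed here to a sketch):
> 
>     struct fs_location4 {
>         utf8str_cis   server<>;    /* servers holding the fs */
>         pathname4     rootpath;    /* its path on those servers */
>     };
> 
>     struct fs_locations4 {
>         pathname4     fs_root;     /* path on the current server */
>         fs_location4  locations<>;
>     };
> 
> NFSv3 has no equivalent, which is why v3 "migration" means
> out-of-band tricks like IP takeover plus FH preservation.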
> 
> > You are correct, a ZFS send/receive will result in different file
> > handles on the receiver, just like rsync, tar, ufsdump+ufsrestore, etc.
> 
> That's understandable for NFSv2 and v3, but for v4 there's no reason
> that an NFSv4 server stack and ZFS could not arrange to preserve FHs
> (if, perhaps, at the price of making the v4 FHs rather large).
> Although even for v3 it should be possible for servers in a cluster to
> arrange to preserve devids...
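> 
> As a sketch of what "arranging to preserve FHs" could mean: the
> server controls the opaque FH bytes (up to 128 of them in v4), so a
> cluster could stamp every handle with identifiers that stay stable
> across replicas, something like the following (field names invented
> purely for illustration):
> 
>     struct cluster_fh {
>         uint64_t fsid;        /* cluster-wide stable fs id */
>         uint64_t object;      /* file object number */
>         uint64_t generation;  /* guards against object reuse */
>         uint64_t origin;      /* dataset lineage, so a received
>                                  snapshot can honor old handles */
>     };
> 
> Any node that can map (fsid, object) to a local file can then honor
> a handle minted by another node.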
> 
> Bottom line: live migration needs to be built right into the protocol.
> 
> For me one of the exciting things about Lustre was/is the idea that
> you could just have a single volume where all new data (and metadata)
> is distributed evenly as you go.  Need more storage?  Plug it in,
> either to an existing head or via a new head, then flip a switch and
> there it is.  No need to manage allocation.  Migration may still be
> needed, both within a cluster and between clusters, but that's much
> more manageable when you have a protocol where data locations can be
> all over the place in a completely transparent manner.
> 
> Nico

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
