On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet <d...@dd-b.net> wrote:
>
> On Tue, April 5, 2011 14:38, Joe Auty wrote:

>> Also, more generally, is ZFS send/receive mature enough that when you do
>> data migrations you don't stress about this? Piece of cake? The
>> difficulty of this whole undertaking will influence my decision and the
>> whole timing of all of this.
>
> A full send / receive has been reliable for a long time.  With a real
> (large) data set it's often a long run.  It's often done over a network,
> and any network outage breaks the run; at that point you start over,
> which can be annoying.  If the servers themselves can't stay up for
> 10 or 20 hours, you presumably aren't ready to put them into production
> anyway :-).

    At my employer we have about 20 TB of data in one city and a zfs
replicated copy of it in another city. The data is spread across 15
pools and over 200 datasets. The initial full replication of the
larger datasets took days; the largest (3 TB) took close to two
weeks. The incremental send/recv sessions are much quicker, taking
time roughly proportional to how much data has changed; we run the
replication script every 4 hours and it usually completes before the
next scheduled run. Once we got past a few bugs, both in my script
and in the older zfs code (we started all this at zpool version 10
and are at zpool 22 / zfs 4 now), the replications have been flawless.
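
    For anyone who has not set this up before, each run of the script
essentially boils down to a snapshot-and-send pair like the sketch
below. The pool, dataset, snapshot, and host names are made up for
illustration; the real script also adds locking, logging, and error
handling:

    # Initial full replication of a dataset (made-up names):
    zfs snapshot tank/data@repl-2011040600
    zfs send tank/data@repl-2011040600 | \
        ssh backuphost zfs recv -F backup/data

    # Later runs only ship blocks changed since the last common snapshot:
    zfs snapshot tank/data@repl-2011040604
    zfs send -i tank/data@repl-2011040600 tank/data@repl-2011040604 | \
        ssh backuphost zfs recv backup/data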

>> I'm also thinking that a ZFS VM guest might be a nice way to maintain a
>> remote backup of this data, if I can install the VM image on a
>> drive/partition large enough to house my data. This seems like it would
>> be a little less taxing than rsync cronjobs?
>
> I'm a big fan of rsync, in cronjobs or wherever.  What it won't do,
> though, is properly preserve ZFS ACLs or ZFS snapshots.  I moved from
> using rsync to using zfs send/receive for my backup scheme at home, and
> had considerable trouble getting it all working (using incremental
> send/receive when there are dozens of new snapshots since last time).
> But I did eventually get onto recent enough code that it's working
> reliably now.

    We went with zfs send/recv over rsync for two big reasons: an
incremental zfs send is much, much faster than rsync when you have
lots of files (our 20 TB of data consists of 200 million files), and
we are leveraging zfs ACLs and need them preserved on the copy.
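
    To David's point about having dozens of new snapshots since the
last run: a send with -I carries every intermediate snapshot along, so
the remote copy keeps the same snapshot history, and since the ACLs
live in the dataset itself they come across with no extra work.
Roughly (made-up names again):

    # Ship all snapshots between the last replicated one and the
    # newest one, preserving the snapshot history on the copy:
    zfs send -I tank/data@repl-old tank/data@repl-new | \
        ssh backuphost zfs recv backup/data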

    I have not tried zfs on a VM guest.

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players