The problem with ZFS snapshots is that you have to be very careful about
organizing the data. You are taking a snapshot of a live filesystem, but
what does it depend on? And when you send it to the other machine, is that
dependency met? What happens if you miss sending a snapshot in the chain?
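To make the dependency problem concrete, here is roughly what the snapshot
chain looks like; the dataset tank/backuppc and the host backuphost are
made-up names:

    # one-time full send of the initial snapshot
    zfs snapshot tank/backuppc@day1
    zfs send tank/backuppc@day1 | ssh backuphost zfs receive tank/backuppc

    # afterwards, send only the delta between consecutive snapshots; this
    # fails unless @day1 already exists on the receiver, so skipping one
    # snapshot breaks every incremental send after it
    zfs snapshot tank/backuppc@day2
    zfs send -i tank/backuppc@day1 tank/backuppc@day2 \
        | ssh backuphost zfs receive -F tank/backuppc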
The problem here is automating the process. I have toyed with the idea of a
script that would compute an md5 checksum and a timestamp for each file,
write those to a log, do the same on the remote system, then compare the two
lists and run rsync on each differing item. It is something like what rsync
already does, except it would use a file on the local filesystem to hold the
transfer list and spawn a separate process for each file to be transferred,
running them sequentially.
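A rough sketch of that idea; /data and remotehost are placeholders, the
"while read" loop assumes no newlines in filenames, and stat -c is the
GNU form:

    #!/bin/sh
    # emit "md5 mtime path" for every file, sorted so the two manifests
    # can be compared line by line
    list='find /data -type f | while read -r f; do
            printf "%s %s %s\n" "$(md5sum < "$f" | cut -d" " -f1)" \
                                "$(stat -c %Y "$f")" "$f"
          done | sort'

    sh -c "$list"          > /tmp/local.list
    ssh remotehost "$list" > /tmp/remote.list

    # lines present locally but not remotely = new or changed files
    comm -23 /tmp/local.list /tmp/remote.list | cut -d" " -f3- > /tmp/xfer.list

    # the transfer list lives in a file; one rsync process per file,
    # run sequentially (assumes the directory tree already exists remotely)
    while read -r f; do
        rsync -a "$f" "remotehost:$f"
    done < /tmp/xfer.list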
You could also mix in a little in-line bzip2 or 7z compression to improve
throughput on slow links, and even allow a certain number of processes to
run in parallel to utilize multiple CPUs.
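Something like the following would do it, though I have substituted rsync's
built-in -z compression for bzip2/7z to keep the sketch short (and xargs
-d/-P are GNU extensions):

    # run four rsync workers in parallel over the transfer list from above,
    # each compressing its own stream with -z; tune -P to the CPU count
    xargs -d '\n' -I{} -P4 rsync -az "{}" "remotehost:{}" < /tmp/xfer.list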
I am currently syncing over 240GB with rsync without issues. The sync takes
about 1 to 1.5 hours, with most of that time spent building the file list.
It does work well; you just have to be patient and resist the temptation to
kill rsync because you think it has hung.
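For reference, the whole-tree sync is along these lines (the pool path is
just my guess at a typical BackupPC layout; substitute your own):

    # -a preserves permissions/times, -H preserves hardlinks (essential for
    # a BackupPC pool, and the reason the file list is so slow and memory
    # hungry), --delete mirrors removals, --numeric-ids avoids uid remapping
    rsync -aH --delete --numeric-ids /var/lib/backuppc/ remotehost:/var/lib/backuppc/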
On Fri, Jul 25, 2008 at 12:03 PM, Les Mikesell <[EMAIL PROTECTED]>
wrote:
> dan wrote:
>
>> I have tested the send and receive functionality of ZFS on OpenBSD. The
>> problem is that it sends the entire fileset, block by block. This is not
>> going to work for remote replication within a reasonable timeframe.
>>
>
> You have to do that once, but I thought you could do one snapshot, send it
> for the initial snapshot copy, then subsequently do other snapshots and
> incremental sends to be received into the remote snapshot copy. The
> question is, how efficient are those incremental sends when backuppc has
> made a run that didn't add a lot of new data but made lots of new
> hardlinks?
>
>> rsync is about the only option. I know it does not scale well with file
>> count, but it still does the best job of syncing up filesystems remotely.
>> The best solution here is to get with the rsync team and see what can be
>> done about the memory usage: maybe an option to write the file list to a
>> temp file, or to compress the file list in memory.
>>
>> I do remote rsyncs for a large fileset with millions of files. It does
>> work, reliably even, though I have 4GB of RAM available on both computers.
>>
>
> I haven't seen this work even locally with several hundred gigs in the
> archive filesystem.
>
>> I have done massive amounts of testing and trial and error and have found
>> that this is the best setup for my needs, and perhaps many people's needs.
>>
>
> Did you measure the size of the ZFS incremental send when done from a
> snapshot where a previous snapshot had already been sent? So far I haven't
> been able to boot any OpenSolaris-based system on the boxes where I'd like
> to test. I might eventually try it with FreeBSD.
>
> Maybe a ZFS snapshot copied to a local external drive, or sent to an
> external drive on another machine on the LAN and then carried offsite,
> would work if the incremental send is not efficient.
>
> --
> Les Mikesell
> [EMAIL PROTECTED]
>
>