On Tue, Feb 16, 2010 at 10:43 PM, Edward Ned Harvey <[email protected]> wrote:

>  Does nobody back up sparse files?  I can’t believe there’s no good way
> to do it.  Of particular interest, I would like to back up:
>
> - TrueCrypt sparse files in Windows (TrueCrypt calls this “Dynamic”)
>
> - VirtualBox or VMware Workstation sparse (“expanding”) virtual disks
> in Windows
>
> - VMware Fusion or Parallels sparse virtual disks on the Mac
>
>
>
> I would like to back these up frequently, and efficiently.  If I have a 50G
> container file that occupies 200M on disk, the backup should be close to
> 200M, and when I modify 1M in the middle of the file and then save, I don’t
> want the incremental backup trying to send the whole 50G again.
>
>
>
> On the Mac, the sparse bundle concept solves this problem.  It’s just
> like a TrueCrypt image, but it’s broken up into a whole bunch of little
> 8M chunks.  So when I modify 1M in the middle of the volume and save, my
> next backup will send one updated 8M chunk.  A little bit of waste, but
> well within reason.
>
>
>
> I currently have virtual machines and TrueCrypt images excluded from the
> regular Time Machine and Acronis True Image backups of people’s laptops.
> But I’m not comfortable simply neglecting the VMs and TrueCrypt volumes,
> as if they’re not important.
>
>
>
> I haven’t found anything satisfactory yet.  The closest I’ve found so
> far is CrashPlan.  It does “byte pattern differential” and “continuous
> real-time backup,” which means it can detect blocks changing in the
> middle of a file, and only send the changed blocks of a sparse file
> during incrementals, instead of sending the whole 50G again.
> Unfortunately, CrashPlan can’t restore a sparse file.  D’oh!!!   :-(
> Actually, that’s a fib.  It can restore sparse files, but they won’t be
> sparse anymore.  So … IMHO … that’s not useful.
>
>
>
> I’ve also tried rsync.  People all over the place say it should do well,
> but in practice, I found that a single incremental takes 2x longer than
> sending the whole image.  So again, IMHO, not useful.  Unless I am simply
> using it wrong.  But I put plenty of effort into making sure I was using
> it right, so I’m really pretty sure I didn’t.
>
>
>
> Is anybody doing something they’re happy with to back up sparse files
> quickly, efficiently, and frequently?
>
>
>
> Thanks…
>
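
The sparse bundle behavior you describe is the property you want from any
backup-friendly container: a write only dirties the 8M band files it
touches.  A rough Python sketch of the offset-to-band arithmetic (my own
illustration, not Apple's actual on-disk format):

BAND_SIZE = 8 * 1024 * 1024  # 8M bands, as in the sparse bundle example

def bands_touched(offset, length, band_size=BAND_SIZE):
    """Return the band indexes a write of `length` bytes at `offset` dirties."""
    first = offset // band_size
    last = (offset + length - 1) // band_size
    return list(range(first, last + 1))

# A 1M write in the middle of a 50G image dirties one band (two at worst,
# if it straddles a boundary), so the next incremental ships ~8M, not 50G.
print(bands_touched(offset=25 * 1024**3, length=1024**2))  # -> [3200]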

rsync can handle sparse files via the -S (--sparse) option.  However, it
can take longer, because it does extra processing instead of blindly
sending everything over the wire: you are trading bandwidth for CPU.  On a
local network, that tradeoff may not be worth it.  Personally I use rsync
on the local network anyway, since it also gives me the ability to resume,
preserve ownership, etc.
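
For what it's worth, the trick -S uses at the receiving end is simple:
rather than writing out runs of zeros, seek past them and let the
filesystem leave a hole.  A rough Python illustration of the general
technique (not rsync's actual code):

import os

def copy_sparse(src_path, dst_path, block=64 * 1024):
    """Copy a file, turning all-zero blocks into holes at the destination."""
    zeros = b"\0" * block
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block)
            if not chunk:
                break
            if chunk == zeros[:len(chunk)]:
                # Seek instead of write: the filesystem leaves a hole here.
                dst.seek(len(chunk), os.SEEK_CUR)
            else:
                dst.write(chunk)
        # Set the final length in case the file ends in a hole.
        dst.truncate()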

Have you looked into piping your backups through gzip?  The holes in a
sparse file read back as runs of zeros, which compress down to almost
nothing, though it might take a little more time.
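
To put a number on it, here's a quick Python check of how well the
all-zero regions compress (sizes are illustrative):

import gzip

# 50M of zeros stands in for the unallocated holes in a sparse image.
holes = b"\0" * (50 * 1024 * 1024)
packed = gzip.compress(holes)
print(len(packed))  # ~50K, i.e. roughly 1000:1 on the zero-filled regions

The catch is the same one CrashPlan has: decompressing on restore writes
all those zeros back out, so the file lands fully allocated unless the
restore step re-punches the holes (e.g., with something like the
copy_sparse sketch above).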
