Hi,

Bowie Bailey wrote on 2014-04-24 16:08:49 -0400 [Re: [BackupPC-users] Only Incremental Backups]:
> On 4/24/2014 3:52 PM, backu...@kosowsky.org wrote:
> > Bowie Bailey wrote at about 14:38:26 -0400 on Thursday, April 24, 2014:
> > > Think of incremental backups this way...
> > >
> > > Pros:
> > > 1) They finish running quicker.
> > >
> > > Cons:
> > > 1) They can miss backing up files in certain circumstances
> > > 2) They cause the same data to be transferred multiple times.
> > >    - Each incremental will transfer all the same data as the
> > >      previous one plus any new changes
> > >    - The next full backup will also need to transfer all of that
> > >      data again.
> >
> > Not quite.
> >
> > Only data changed/added since the previous level incremental needs
> > to be transferred.

Not quite ;-). My turn to be pedantic. For incremental backups, that is
true. For *full* backups, the reference is the previous backup *of any
level*, so a full backup will *not* retransmit any changes the preceding
backup has caught.

> True, but I was trying to keep things fairly simple. Besides, I don't
> think I've seen an example of a situation where multiple levels of
> incrementals were needed in BackupPC.

That probably depends on how you define "need". As you explained,
incremental backups are an invention meant to speed up backups at the
cost of accuracy. Back in the days of tape backups, that was inevitable
due to storage capacity constraints. Nowadays, you might argue that you
should *never* trade accuracy for speed when doing backups (why do
backups at all if they're not reliable?). Conversely, you might argue
that you still need the speed gain (so as not to disrupt the service you
are trying to back up), and that you cannot afford the bandwidth penalty
the growing deltas incur. There you have the "need" for incremental
levels.
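In BackupPC, that is what $Conf{IncrLevels} is for. A quick sketch of a
per-host config (BackupPC 3.x syntax; "myhost" and the schedule are made
up for illustration, not a recommendation):

    # pc/myhost.pl -- hypothetical host, illustrative schedule only
    $Conf{FullPeriod} = 6.97;    # one full backup per week
    $Conf{IncrPeriod} = 0.97;    # one incremental per day in between
    $Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];
    # Monday's level-1 incremental is relative to Sunday's full,
    # Tuesday's level-2 incremental is relative to Monday's, and so on,
    # so each night only that day's changes are transferred. With the
    # default of [1], every incremental would be relative to Sunday's
    # full, and the nightly transfer would grow over the week.

The flip side is exactly what you describe below: the backups become
chained, so a restore from, say, the level-4 backup also depends on the
full and the level-1 through level-3 backups beneath it.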
Just to be clear: for small backups with only a few tens of GB, this is
a non-issue. But in the TB range, you might have to be more selective
about when full backups are feasible.

> [...]
> And if the incrementals were dependent on each other, you would have to
> keep *all* of those incrementals.

This is a good point. A consequence is also that corruption *anywhere*
in your backup history could corrupt your most recent backup, and that
corruption would never be detected and fixed until the corrupted file
changes again.

> [...]
> > The bottom line remains the same. If your bottleneck is bandwidth,
> > then do all fulls.
>
> Exactly.

Almost. Again being pedantic: an incremental following a full should use
marginally less bandwidth than a second full would, because it skips
even the checksum exchange for unchanged files. For a huge number of
small files, that might make a difference, though it would more
typically be unnoticeable.

Concerning *file data* transferred, alternating full and incremental
backups (i.e. one full, one incremental, one full, one incremental, and
so on) should be exactly equivalent to only full backups, since each
backup then transfers exactly the changes since the one before it
(unless the incrementals miss changes, in which case they will (wrongly)
transfer less).

I would want to rephrase the bottom line: if your bottleneck is
bandwidth, you need frequent fulls. "Only fulls" is fine (optimal
accuracy), if you can do that. If you can't, at least get as close to
that as you can.

Regards,
Holger

P.S.: If your bandwidth is not sufficient for the changes in your data,
no backup plan is going to magically fix that. Get more bandwidth or
back up less data.