On Sun, Jun 17, 2018, at 20:05, Warren Young wrote:
> However, I’ll also give a counterargument to the whole idea: you 
> probably aren’t saving anything in the end.  An intelligent deconstruct 
> + backup probably saves no net I/O over just re-copying the Fossil repo 
> DB to the destination unless the destination is *much* slower than the 
> machine being backed up.
> 
> (rsync was created for the common case where networks are much slower 
> than the computers they connect.  rsync within a single computer is 
> generally no faster than cp -r, and sometimes slower, unless you use 
> the mtime optimization mentioned above.)
> 
> The VM/ZFS + snapshots case has a similar argument against it: if you’re 
> using snapshots to back up a Fossil repo, deconstruction isn’t helpful.  
> The snapshot/CoW mechanism will only clone the changed disk blocks in 
> the repo.
> 
> So, what problem are you solving?  If it isn’t the slow-networks 
> problem, I suspect you’ve got an instance of the premature optimization 
> problem here.  If you go ahead and implement it, measure before 
> committing the change, and if you measure a meaningful difference, 
> document the conditions to help guide expectations.

I want my approximately daily backups to be small.

I currently version the Fossil SQLite files in borg, and I am considering 
versioning the artefact dumps instead. I figure the dumps will change less 
between backups than the SQLite files do, and that they will also be smaller 
because they lack caches.
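
Roughly what I have in mind, with placeholder paths (a real script would also 
need to clean up the scratch directory afterwards):

    # Dump each artifact to a loose file, then archive that tree.
    mkdir -p /tmp/project-artifacts
    fossil deconstruct -R /path/to/project.fossil /tmp/project-artifacts
    borg create /path/to/borg-repo::project-{now} /tmp/project-artifacts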

But the backups are already very small.

I suppose I could test this.
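
borg info reports per-archive sizes, including the deduplicated size, so the 
comparison could be as simple as this (the archive names are made up):

    # The deduplicated size is what actually grows the borg repo.
    borg info /path/to/borg-repo::sqlite-backup
    borg info /path/to/borg-repo::artefact-backup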
