On Sun, Jun 17, 2018, at 20:05, Warren Young wrote:
> However, I’ll also give a counterargument to the whole idea: you
> probably aren’t saving anything in the end. An intelligent deconstruct
> + backup probably saves no net I/O over just re-copying the Fossil repo
> DB to the destination.
On 6/17/18, Thomas Levine <_...@thomaslevine.com> wrote:
> As content is added to a fossil repository, files in the corresponding
> deconstructed repository never change; they are only added. Most backup
> software will track changes to the deconstructed repository with great
> efficiency.
On Jun 17, 2018, at 2:05 PM, Warren Young wrote:
>
> If you’re willing to gamble that, if the first test returns true, the
> second will also return true, it buys you a big increase in speed. The
> gamble is worth taking as long as the files’ modification timestamps are
> trustworthy.
On Jun 17, 2018, at 12:16 PM, Thomas Levine <_...@thomaslevine.com> wrote:
>
> One inconvenience I noted is that the deconstruct command always writes
> artefacts to the filesystem, even if a file of the appropriate name and
> size and contents already exists.
You might want to split that
As content is added to a fossil repository, files in the corresponding
deconstructed repository never change; they are only added. Most backup
software will track changes to the deconstructed repository with great
efficiency.
I should thus take my backups of the deconstructed repositories, yes?