[BackupPC-users] attrib_0 files?

2020-06-08 Thread
I have some empty attrib files, labeled attrib_0.
Note that the directory each one represents has no subdirectories, so I would
have thought that no attrib file was present/necessary -- which seems to be
the case in most of my empty directories.

So what is the difference (and the rationale) between an attrib_0 file and no
attrib file? Does that have to do with a prior/subsequent file deletion?


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
I pushed a commit that implements nightly pool checking on a configurable
portion of the pool files.  It needs the latest version of backuppc-xs, 0.61.

Craig

On Mon, Jun 8, 2020 at 4:22 PM Michael Huntley  wrote:

> I’m fine with both action items.
>
> I back up millions of emails and so far the restores I’ve performed have
> never been an issue.
>
> mph
>
>
>
> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
> 
> Jeff & Guillermo,
>
> Agreed - it's better to scan small subsets of the pool.  I'll add that
> to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
> unused files and update stats).
>
> Craig
>
> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix
>> any errors.
>>  >
>>  > Sure. Actually, I've put off interpreting the error and identifying the file
>>  > involved until I find an actual error (so I hope never to need that
>>  > information! :) )
>>  >
>>  > > So it's probably best to have rsync-bpc implement the old
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
>> skipping the --checksum short-circuit during a full.  For that fraction of
>> files, it would do a full rsync check and update, which would update the
>> pool file if they are not identical.
>>  >
>>  > That would be a good compromise. It makes the fulls a bit slower on
>>  > servers with a poor network and slow disks, but it's clearer what to
>>  > do in case of error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differ for
>>  > those files?
>>  >
>>
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>>
>> That is why I suggested a nightly trawl through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>>
>>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Michael Huntley
I’m fine with both action items.

I back up millions of emails and so far the restores I’ve performed have never 
been an issue.

mph



> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users 
>  wrote:
> 
> 
> Jeff & Guillermo,
> 
> Agreed - it's better to scan small subsets of the pool.  I'll add that to 
> BackupPC_refCountUpdate (which does the nightly pool scanning to delete 
> unused files and update stats).
> 
> Craig
> 
>> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix any 
>> errors.
>>  > 
>>  > Sure. Actually, I've put off interpreting the error and identifying the file
>>  > involved until I find an actual error (so I hope never to need that
>>  > information! :) )
>>  > 
>>  > > So it's probably best to have rsync-bpc implement the old 
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly 
>> skipping the --checksum short-circuit during a full.  For that fraction of 
>> files, it would do a full rsync check and update, which would update the 
>> pool file if they are not identical.
>>  > 
>>  > That would be a good compromise. It makes the fulls a bit slower on
>>  > servers with a poor network and slow disks, but it's clearer what to
>>  > do in case of error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differ for
>>  > those files?
>>  > 
>> 
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>> 
>> That is why I suggested a nightly trawl through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>> 
>> 


[BackupPC-users] Automated script to restore and compare backups

2020-06-08 Thread
I wrote the attached script to automate testing the complete fidelity
of BackupPC backups... so that I could test the round-trip of dump &
restore as broadly, easily, and accurately as possible

In my case, since I use btrfs snapshots for the source of my backups,
I am able to ensure that the source is unchanged allowing for full
compare of restores against the original source.

Working with Craig, this has uncovered several bugs in ACLs and
xattributes... but now I am able to get 100% accurate backups against
'rsync -niacXAH --delete' -- which is about as close as it gets :)

The attached BASH script automates the restore and compare process
including retrieving the appropriate shares, merges, compression
levels, etc. needed to execute the compare. I should have written it
in perl but it started as just a series of CLI commands that kept
growing until they became a program...

This script should allow others to validate their own rsync backups
both for their own sake as well as to identify other bugs that may
still persist.

Enjoy and please report back any errors in either this script or
BackupPC...



BackupPC_restoreTest
Description: Binary data

Best,
Jeff


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Jeff & Guillermo,

Agreed - it's better to scan small subsets of the pool.  I'll add that
to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
unused files and update stats).

Craig

On Mon, Jun 8, 2020 at 2:35 PM  wrote:

> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>  > > While it's helpful to check the pool, it isn't obvious how to fix any
> errors.
>  >
>  > Sure. Actually, I've put off interpreting the error and identifying the file
>  > involved until I find an actual error (so I hope never to need that
>  > information! :) )
>  >
>  > > So it's probably best to have rsync-bpc implement the old
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
> skipping the --checksum short-circuit during a full.  For that fraction of
> files, it would do a full rsync check and update, which would update the
> pool file if they are not identical.
>  >
>  > That would be a good compromise. It makes the fulls a bit slower on
>  > servers with a poor network and slow disks, but it's clearer what to
>  > do in case of error. Maybe also add a "warning of possible pool
>  > corruption" if the stored checksum and the new checksum differ for
>  > those files?
>  >
>
> The only problem with this approach is that it never revisits pool
> files that aren't part of new backups.
>
> That is why I suggested a nightly trawl through the cpool/pool to
> check md5sums going sequentially through X% each night...
>
>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread
Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
 > > While it's helpful to check the pool, it isn't obvious how to fix any 
 > > errors.
 > 
 > Sure. Actually, I've put off interpreting the error and identifying the file
 > involved until I find an actual error (so I hope never to need that
 > information! :) )
 > 
 > > So it's probably best to have rsync-bpc implement the old 
 > > $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly 
 > > skipping the --checksum short-circuit during a full.  For that fraction of 
 > > files, it would do a full rsync check and update, which would update the 
 > > pool file if they are not identical.
 > 
 > That would be a good compromise. It makes the fulls a bit slower on
 > servers with a poor network and slow disks, but it's clearer what to
 > do in case of error. Maybe also add a "warning of possible pool
 > corruption" if the stored checksum and the new checksum differ for
 > those files?
 > 

The only problem with this approach is that it never revisits pool
files that aren't part of new backups.

That is why I suggested a nightly trawl through the cpool/pool to
check md5sums going sequentially through X% each night...




Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
> While it's helpful to check the pool, it isn't obvious how to fix any errors.

Sure. Actually, I've put off interpreting the error and identifying the file
involved until I find an actual error (so I hope never to need that
information! :) )

> So it's probably best to have rsync-bpc implement the old 
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly 
> skipping the --checksum short-circuit during a full.  For that fraction of 
> files, it would do a full rsync check and update, which would update the pool 
> file if they are not identical.

That would be a good compromise. It makes the fulls a bit slower on
servers with a poor network and slow disks, but it's clearer what to
do in case of error. Maybe also add a "warning of possible pool
corruption" if the stored checksum and the new checksum differ for
those files?

Regards,
Guillermo




Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Guillermo,

Yes, that's an excellent point.  Actually v3 suffers from this too since,
with cached block and full-file checksums, it doesn't recheck the file
contents either.  However, v3 had a
parameter $Conf{RsyncCsumCacheVerifyProb} (default 0.01 == 1%) that caused
rsync to verify that random fraction of the file contents.  Other xfer
methods (eg, tar and smb) always do a full-file compare during a full, so
there shouldn't be any undetected server-side corruption with those XferMethods.

Thanks for the script.  While it's helpful to check the pool, it isn't
obvious how to fix any errors.  So it's probably best to have rsync-bpc
implement the old $Conf{RsyncCsumCacheVerifyProb} setting.  It could do
that by randomly skipping the --checksum short-circuit during a full.  For
that fraction of files, it would do a full rsync check and update, which
would update the pool file if they are not identical.
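The proposed behavior can be sketched in bash (illustrative only — rsync-bpc is C code, and the variable and function names here are made up):

```shell
#!/bin/bash
# Sketch: for a random fraction of files during a full, skip the cached-checksum
# short-circuit and force a full content compare, which would also rewrite the
# pool file if the stored copy no longer matches.
verify_percent=1   # analogous to the old $Conf{RsyncCsumCacheVerifyProb} = 0.01

should_full_verify() {
    # true for roughly $1 percent of calls
    [ $((RANDOM % 100)) -lt "$1" ]
}

checked=0
verified=0
for f in fileA fileB fileC fileD fileE; do
    checked=$((checked + 1))
    if should_full_verify "$verify_percent"; then
        # here the transfer would re-read and checksum the full file contents
        verified=$((verified + 1))
    fi
done
echo "checked=$checked full-verified=$verified"
```

With the percentage at 1, each full pass re-verifies about 1% of otherwise-skipped files, so long-lived pool files are eventually re-read across many fulls.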

If folks agree with that approach, that's what I'll implement.

Craig

On Mon, Jun 8, 2020 at 10:16 AM Guillermo Rozas 
wrote:

> I've attached the script I'm using. It's very rough, so use at your own
> risk!
>
> I run it daily checking 4 folders of the pool per day, sequentially,
> so it takes 32 days to check them all. You can modify the external
> loop to change this. The last checked folder is saved in an auxiliary
> file.
>
> The checksum is done uncompressing the files in the pool using
> zlib-flate (line 25), but it can be changed to pigz or BackupPC_zcat.
> On my severely CPU-limited server (Banana Pi) both pigz and zlib-flate
> are much faster than BackupPC_zcat, they take around a quarter of the
> time to check the files (pigz is marginally faster than zlib-flate).
> On the other hand, BackupPC_zcat puts the lowest load on the CPU,
>> zlib-flate's load is 30-35% higher, and pigz's is a whopping 80-100%
> higher.
>
> However, as BackupPC_zcat produces slightly modified gzip files, there
> is a (very) small chance that a BackupPC_zcat compressed file is not
> properly uncompressed by the other two (line 28 in the script). If
> that happens, you need to re-check every zlib-flate or pigz failure
> with BackupPC_zcat before calling it a real error. I think this gets
> the best balance between load on the system and time spent checking
> the pool (at least for my server and pool...).
>
> Best regards,
> Guillermo
>
>
> On Mon, Jun 8, 2020 at 1:28 PM  wrote:
> >
> > Good point...
> > Craig - would it make sense to add a parameter to BackupPC_nightly
> > that would check a user-settable percentage of the files each night,
> > say NightlyChecksumPercent. So if set to 3%, the pool would be checked
> > (sequentially) over the period of ~1 month
> >
> > Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
> >  > Yes, I wouldn't worry about collisions by chance.
> >  >
> >  > However, there is a second aspect that is not covered here: if you
> >  > rely only on saved checksums in the server, it will not check again
> >  > unmodified pool files. This risks missing file system corruption
> >  > or bit rot in the backup files that were previously caught by the V3
> >  > behaviour (which periodically checksummed the pool files).
> >  >
> >  > Two solutions:
> >  > - put the pool in a file system with checksum verification included
> >  > - use a script to periodically traverse the pool and checksum the files
> >  >
> >  > Best regards,
> >  > Guillermo
> >  >
> >  >
> >  >
> >  > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
> >  >  wrote:
> >  > >
> >  > > Hi there,
> >  > >
> >  > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
> >  > >
> >  > > > ... presumably a very rare event ...
> >  > >
> >  > > That's putting it a little mildly.
> >  > >
> >  > > If it's really all truly random, then if you tried random
> collisions a
> >  > > million times per picosecond you would (probably) need of the order
> of
> >  > > ten trillion years to have a good chance of finding one...
> >  > >
> >  > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
> >  > > 10.79
> >  > >
> >  > > I think it's safe to say that it's not going to happen by chance.
> >  > >
> >  > > If it's truly random.
> >  > >
> >  > > --
> >  > >
> >  > > 73,
> >  > > Ged.
> >  > >
> >  > >

Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
I've attached the script I'm using. It's very rough, so use at your own risk!

I run it daily checking 4 folders of the pool per day, sequentially,
so it takes 32 days to check them all. You can modify the external
loop to change this. The last checked folder is saved in an auxiliary
file.

The checksum is done uncompressing the files in the pool using
zlib-flate (line 25), but it can be changed to pigz or BackupPC_zcat.
On my severely CPU-limited server (Banana Pi) both pigz and zlib-flate
are much faster than BackupPC_zcat, they take around a quarter of the
time to check the files (pigz is marginally faster than zlib-flate).
On the other hand, BackupPC_zcat puts the lowest load on the CPU,
zlib-flate's load is 30-35% higher, and pigz's is a whopping 80-100%
higher.

However, as BackupPC_zcat produces slightly modified gzip files, there
is a (very) small chance that a BackupPC_zcat compressed file is not
properly uncompressed by the other two (line 28 in the script). If
that happens, you need to re-check every zlib-flate or pigz failure
with BackupPC_zcat before calling it a real error. I think this gets
the best balance between load on the system and time spent checking
the pool (at least for my server and pool...).
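The core check — that each pool file's name matches the MD5 digest of its uncompressed contents — can be sketched as below. The demo builds a tiny fake pool of uncompressed files, since the real decompression step (zlib-flate, pigz, or BackupPC_zcat, as discussed above) depends on what's installed:

```shell
#!/bin/bash
# Sketch: verify pool files against their names. BackupPC v4 names each pool
# file after the MD5 digest of its uncompressed contents; real cpool files
# would need a decompression step (e.g. zlib-flate -uncompress) before hashing.
set -eu

pool=$(mktemp -d)

# build one well-formed "pool file": content stored under its own md5 name
printf 'pool file contents\n' > "$pool/tmp"
name=$(md5sum "$pool/tmp" | cut -d' ' -f1)
mv "$pool/tmp" "$pool/$name"

errors=0
for f in "$pool"/*; do
    want=$(basename "$f")
    got=$(md5sum "$f" | cut -d' ' -f1)
    if [ "$want" != "$got" ]; then
        echo "MISMATCH: $f"
        errors=$((errors + 1))
    fi
done
echo "errors=$errors"

rm -rf "$pool"
```

Any mismatch means either decompression failed (the BackupPC_zcat caveat above) or the pool file is genuinely corrupt.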

Best regards,
Guillermo


On Mon, Jun 8, 2020 at 1:28 PM  wrote:
>
> Good point...
> Craig - would it make sense to add a parameter to BackupPC_nightly
> that would check a user-settable percentage of the files each night,
> say NightlyChecksumPercent. So if set to 3%, the pool would be checked
> (sequentially) over the period of ~1 month
>
> Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
>  > Yes, I wouldn't worry about collisions by chance.
>  >
>  > However, there is a second aspect that is not covered here: if you
>  > rely only on saved checksums in the server, it will not check again
>  > unmodified pool files. This risks missing file system corruption
>  > or bit rot in the backup files that were previously caught by the V3
>  > behaviour (which periodically checksummed the pool files).
>  >
>  > Two solutions:
>  > - put the pool in a file system with checksum verification included
>  > - use a script to periodically traverse the pool and checksum the files
>  >
>  > Best regards,
>  > Guillermo
>  >
>  >
>  >
>  > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
>  >  wrote:
>  > >
>  > > Hi there,
>  > >
>  > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
>  > >
>  > > > ... presumably a very rare event ...
>  > >
>  > > That's putting it a little mildly.
>  > >
>  > > If it's really all truly random, then if you tried random collisions a
>  > > million times per picosecond you would (probably) need of the order of
>  > > ten trillion years to have a good chance of finding one...
>  > >
>  > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
>  > > 10.79
>  > >
>  > > I think it's safe to say that it's not going to happen by chance.
>  > >
>  > > If it's truly random.
>  > >
>  > > --
>  > >
>  > > 73,
>  > > Ged.
>  > >
>  > >


check_pool
Description: Binary data


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread backuppc
Good point...
Craig - would it make sense to add a parameter to BackupPC_nightly
that would check a user-settable percentage of the files each night,
say NightlyChecksumPercent. So if set to 3%, the pool would be checked
(sequentially) over the period of ~1 month
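That rotation can be roughed out in bash; the name NightlyChecksumPercent is only the proposal above, the 128-directory figure comes from Guillermo's 4-folders-over-32-days scheme, and the state-file path here is just a placeholder:

```shell
#!/bin/bash
# Sketch: check a fixed slice of the pool's top-level directories per night,
# remembering where the last run stopped, so the whole pool is covered in
# total_dirs/dirs_per_night nights (e.g. 128/4 = 32).
total_dirs=128
dirs_per_night=4
state_file=$(mktemp)   # a real version would persist this under the data dir

echo 0 > "$state_file"

night_run() {
    local start end
    start=$(cat "$state_file")
    end=$(( (start + dirs_per_night) % total_dirs ))
    echo "tonight: directories $start..$((start + dirs_per_night - 1))"
    # ... md5-check every pool file in those directories here ...
    echo "$end" > "$state_file"
}

night_run   # prints: tonight: directories 0..3
night_run   # prints: tonight: directories 4..7
```

The modulo wrap restarts the sweep at directory 0 once the whole pool has been covered.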

Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
 > Yes, I wouldn't worry about collisions by chance.
 > 
 > However, there is a second aspect that is not covered here: if you
 > rely only on saved checksums in the server, it will not check again
 > unmodified pool files. This risks missing file system corruption
 > or bit rot in the backup files that were previously caught by the V3
 > behaviour (which periodically checksummed the pool files).
 > 
 > Two solutions:
 > - put the pool in a file system with checksum verification included
 > - use a script to periodically traverse the pool and checksum the files
 > 
 > Best regards,
 > Guillermo
 > 
 > 
 > 
 > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
 >  wrote:
 > >
 > > Hi there,
 > >
 > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
 > >
 > > > ... presumably a very rare event ...
 > >
 > > That's putting it a little mildly.
 > >
 > > If it's really all truly random, then if you tried random collisions a
 > > million times per picosecond you would (probably) need of the order of
 > > ten trillion years to have a good chance of finding one...
 > >
 > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
 > > 10.79
 > >
 > > I think it's safe to say that it's not going to happen by chance.
 > >
 > > If it's truly random.
 > >
 > > --
 > >
 > > 73,
 > > Ged.
 > >
 > >


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
Yes, I wouldn't worry about collisions by chance.

However, there is a second aspect that is not covered here: if you
rely only on saved checksums in the server, it will not re-check
unmodified pool files. This risks missing file system corruption
or bit rot in the backup files that were previously caught by the V3
behaviour (which periodically checksummed the pool files).

Two solutions:
- put the pool in a file system with checksum verification included
- use a script to periodically traverse the pool and checksum the files

Best regards,
Guillermo



On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
 wrote:
>
> Hi there,
>
> On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
>
> > ... presumably a very rare event ...
>
> That's putting it a little mildly.
>
> If it's really all truly random, then if you tried random collisions a
> million times per picosecond you would (probably) need of the order of
> ten trillion years to have a good chance of finding one...
>
> $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
> 10.79
>
> I think it's safe to say that it's not going to happen by chance.
>
> If it's truly random.
>
> --
>
> 73,
> Ged.
>
>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 8 Jun 2020, Jeff Kosowsky wrote:


... presumably a very rare event ...


That's putting it a little mildly.

If it's really all truly random, then if you tried random collisions a
million times per picosecond you would (probably) need of the order of
ten trillion years to have a good chance of finding one...

$ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
10.79

I think it's safe to say that it's not going to happen by chance.

If it's truly random.

--

73,
Ged.

