Re: [BackupPC-users] Outdated Files in Backup?

2023-11-15 Thread Guillermo Rozas
>
> For testing I did a new "full" backup.
> It has the same old files in it :\
>

If you check the full log of that backup, does it show any error or
anything suspicious? I've had that kind of thing happen sporadically when
there is a read error on the client side, and rsync disables deletions
for safety. The "read error" in my case was usually just a symlink whose
target file was missing.
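If it turns out to be something similar in your case, a quick way to list
broken symlinks on the client (a generic GNU find one-liner; adjust the
path to your share) is:

    find /path/to/share -xtype l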

Best regards,
Guillermo

>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How to really/immediately exclude stuff from next backup

2023-10-19 Thread Guillermo Rozas
It should be more or less as slow as doing the new backup without removing
them first, as it's doing essentially the same thing. The difference is that
you could do it outside of "backup hours", or when the server has more free
resources.

On Thu, Oct 19, 2023 at 11:49 AM marki  wrote:

> Alright. In any case removing files from the current backup (because
> they don't exist anymore / have been excluded / ...) is painfully slow.
>
> On 2023-10-19 16:04, Guillermo Rozas wrote:
> >> It's not a capacity problem.
> >> It's a performance problem.
> >
> > Sorry, you implied that the problem was the capacity when you said
> >
> >>> Also in the original example the disk isn't large enough, so we're not
> >>> even making it to that stage.
> >
> >> But again, that was not the question.
> >> The question is, why is that new backup set being populated with
> >> stuff
> >> just to be deleted again after that (the exclusion list).
> >
> > It's by design, in V4 the backup is stored as reverse deltas, and the
> > latest backup starts as an identical copy of the previous one and it's
> > modified to match the current status. Check the manual for the
> > reasons:
> > https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0,
> > specially the "more detailed discussion".
> >
> > Perhaps a solution for you would be to delete the ignored folder from
> > the last valid backup, so the new backup doesn't have it at the start.
> > Check the BackupPC_backupDelete command
> > (
> https://backuppc.github.io/backuppc/BackupPC.html#Other-Command-Line-Utilities
> ),
> > but be careful as it could be pretty destructive.
> >
> > Regards,
> > Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How to really/immediately exclude stuff from next backup

2023-10-19 Thread Guillermo Rozas
>
> It's not a capacity problem.
> It's a performance problem.
>

Sorry, you implied that the problem was the capacity when you said

>> Also in the original example the disk isn't large enough, so we're
>> not
>> even making it to that stage.

> But again, that was not the question.
> The question is, why is that new backup set being populated with stuff
> just to be deleted again after that (the exclusion list).
>

It's by design: in V4 backups are stored as reverse deltas, so the latest
backup starts as an identical copy of the previous one and is then modified
to match the current state of the client. Check the manual for the reasons:
https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-4.0, especially
the "more detailed discussion".

Perhaps a solution for you would be to delete the ignored folder from the
last valid backup, so the new backup doesn't have it at the start. Check
the BackupPC_backupDelete command (
https://backuppc.github.io/backuppc/BackupPC.html#Other-Command-Line-Utilities),
but be careful as it could be pretty destructive.
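As a rough sketch (hypothetical host name and backup number, and the
install path varies by distro; check the command's usage output, especially
if you only want to remove a single folder instead of a whole backup):

    sudo -u backuppc /usr/share/backuppc/bin/BackupPC_backupDelete -h myhost -n 2845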

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How to really/immediately exclude stuff from next backup

2023-10-19 Thread Guillermo Rozas
Are you using V3 or V4? According to my understanding, this step

However 2846 seems to be populated with the content from 2845 (including
> the stuff I have excluded).
> It looks like it's first copying all the stuff from 2845 (even the
> excluded path) and then later tries to remove it again from 2846.
> Which is also taking forever in the original example as it's a directory
> tree with millions of files.
> Also in the original example the disk isn't large enough, so we're not
> even making it to that stage.
>

should require barely any extra space. In V3 the "copy" is hard-linking,
and in V4 it's just new entries in metadata files; in neither case is an
actual copy of the pool files made (because of deduplication). Maybe you're
running out of inodes?
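You can check that with something like (any path on the BackupPC filesystem
works; /var/lib/backuppc is just the usual location):

    df -i /var/lib/backuppc

If IUse% is at or near 100%, that's the problem.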

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Different strategies on same host

2023-06-30 Thread Guillermo Rozas
On Fri, Jun 30, 2023, 04:18 Dave Sherohman via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote

> *I* don't do it, simply because there's little practical difference
>> between rsync'ing directories that don't change and not rsync'ing them.
>>
>
> Just to give reason *I* use it:
> - the cost/benefit of doing full backups for different folders is
> different (fulls take considerably longer than incrementals, and some
> folders like "Downloads" are not worth that extra time)
>
> What's your transfer method?  While retention policy is a good point, the
> post you replied to is correct that there is no extra time required for
> fulls when using rsync.
>
> [...]
>
> Fulls and incrementals take roughly the same amount of time when using an
> rsync transfer method, because rsync only sends changed data over the wire
> either way.
>
I use rsync. But although the *transfer time* is the same, the *checking
time* is not. I use -c on the fulls, so checksums are calculated for
everything on the host during full backups (I don't remember if this is the
default on BackupPC 4), while only metadata is checked on the incrementals.
For a modest 600GB host that means a 32-minute full versus a 2-minute
incremental.
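For reference, in V4 the extra checksum flag for fulls goes in
$Conf{RsyncFullArgsExtra}; a minimal snippet (check your own config.pl for
the exact default on your version) is:

    $Conf{RsyncFullArgsExtra} = ['--checksum'];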

Best regards,
Guillermo

>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Different strategies on same host

2023-06-29 Thread Guillermo Rozas
>
> *I* don't do it, simply because there's little practical difference
> between rsync'ing directories that don't change and not rsync'ing them.
>

Just to give the reasons *I* use it:
- retention policy for different folders is different (I keep my photos
backup much longer than my system configuration)
- the cost/benefit of doing full backups for different folders is different
(fulls take considerably longer than incrementals, and some folders like
"Downloads" are not worth that extra time)

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Different strategies on same host

2023-06-29 Thread Guillermo Rozas
Hi,
what I do is define two different hosts in the server configuration (myPC
and myPC_photo), and both hosts have "ClientNameAlias" set to the same real
address (IP or host name; the help says DHCP must be disabled for the
host). That way I can define two completely separate backup strategies for
two sets of folders on the same machine, and it works like a charm.
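For example, a sketch with hypothetical names and paths (the per-host file
usually lives in the pc/ config directory):

    # hosts file: two entries pointing at the same machine
    #   mypc        0   backupuser
    #   mypc_photo  0   backupuser

    # pc/mypc_photo.pl
    $Conf{ClientNameAlias} = 'mypc.example.com';   # real address of the client
    $Conf{RsyncShareName}  = ['/home/user/Photos'];
    $Conf{FullPeriod}      = 89.5;                 # rare fulls for the photo set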
Best regards,
Guillermo

On Wed, Jun 28, 2023 at 10:00 PM Norman J. Goldstein 
wrote:

> I want to have a different strategy for backing up photos than for
> backing up documents that I am editing.  Basically, it would be a rare
> event to do a full backup of my photos.   I assume that the host name,
> say myPC, must correspond to myPC.pl in the pc folder. Does it sound
> reasonable to make myPC_photo an alias host name for myPC, and then
> myPC_photo.pl would list its own folders and strategy for backing up?
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:https://github.com/backuppc/backuppc/wiki
> Project: https://backuppc.github.io/backuppc/
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Storing Identical Files?

2023-02-12 Thread Guillermo Rozas
On Sun, Feb 12, 2023, 04:36 Christian Völker  wrote:

> Meanwhile I realized I had a different issue. The share was backed up one
> day, on the other day (for different reasons) the clientB.pl was
> overwritten by a previous version and the share did not get backed up.
> Howeverm, in graphs I notice a shrink of the red line:
> [image: PoolUsage]
>
> So it is doing deduplication and I am just not patient enough?
>

When you say "I see an increase in pool usage that doesn't come down", do
you refer to the green area or to the red line? The latter, as stated in the
legend, is "Prior to pooling and compression": it's not affected by
deduplication or compression, it's the size the pool would have if both
features were disabled. The real pool size is the green area, which should
be similar to running "du -s" on the pool folder.

Best regards,
Guillermo

>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Some hosts ignored on wake up

2023-01-25 Thread Guillermo Rozas
Hi,
in this particular case, the check was at 11:00am, the last good backup was
at 6:12am, and according to the config incrementals are only done if at
least 0.54 days (~13 hours) have passed, so the server does nothing for that
machine.
Regards,
Guillermo

On Wed, Jan 25, 2023, 06:04 Giannis Economou  wrote:

> Hi,
>
> A backuppc server backs up about 85 hosts.
> Everyday about 10 hosts are not backed up. They are like ignored.
> Those hosts seems like they are not checked for backup at all (during
> several wakeups).
>
> Long time user of backuppc, for years, I cannot find out the reason.
>
> My first wakeup is at 13.00 (to run the "nightly" jobs) and then I have
> several wakeups after midnight.
> All our pings are always successful (replaced ping with /bin/echo).
>
> The hosts that are ignored mention that they have been last checked at
> 13.00 (no backup to be done at 13:00 since 13:00 is only for nightly runs),
> but then they seem completely ignored at several wakeups after midnight
> (where backup should be made).
>
> Here is an example from status.pl on an ignored host:
> (now it is GMT: Wednesday, January 25, 2023 8:58:48 AM)
>
>   "samplehost1" => {
> "startTime" => 1674558002, # GMT:  Tuesday, January 24, 2023 11:00:02
> AM
> "activeJob" => 0,
> "aliveCnt" => 32,
> "state" => "Status_idle",
> "userReq" => undef,
> "reason" => "Reason_nothing_to_do",
> "deadCnt" => 0,
> "type" => "incr",
> "endTime" => "",
> "lastGoodBackupTime" => 1674540749 #GMT: Tuesday, January 24, 2023
> 6:12:29 AM
>   },
>
>
> Those ignored hosts are also not mentioned at all in the LOG file.
>
> Not the same hosts are ignored every day.
>
>
> Some related settings below:
> WakeupSchedule: 13, 0.25, 0.75, 1.25, 1.75, 2.75, 3.75, 4.75, 5.25, 5.75,
> 6.75, 7.25, 7.75
>
> MaxBackups: 10
> MaxUserBackups: 4
> MaxPendingCmds: 120
>
>
> FullPeriod: 13.97
> FillCycle: 7
> FullKeepCnt: 2
> FullKeepCntMin: 1
> FullAgeMax: 180
>
>
> IncrPeriod: 0.54
> IncrKeepCnt: 14
> IncrKeepCntMin: 1
> IncrAgeMax: 30
>
>
> BackupsDisable: 0
> BlackoutBadPingLimit: 3
> BlackoutGoodCnt: 5
> BlackoutPeriods:
> hourBegin: 8.5
> hourEnd: 23.5
> weekDays: 0, 1, 2, 3, 4, 5, 6
>
>
> Thank you in advance.
>
>
> Giannis
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] rsync backups: full xferlogs missing

2022-11-03 Thread Guillermo Rozas
There was a change in the rsync flags used between V3 and V4, due to the new
way of calculating the digests (I can't check which flags right now). As a
result, in V4 only the files that have really changed are logged (before,
every file was logged even if it didn't change). Maybe it's that?
Best regards,
Guillermo

On Thu, Nov 3, 2022, 14:26 Ludovic Drolez  wrote:

> Hi!
>
> I've noticed, maybe after a migration to v4, that I don't see the full list
> of transfered files for rsync based backups. Here is what I see:
>
> Running: /usr/libexec/backuppc-rsync/rsync_bpc --bpc-top-dir
> /var/lib/backuppc --bpc-host-name myhost.com --bpc-share-name
> /var/log/apache2 --bpc-bkup-num 2 --bpc-bkup-comp 3 --bpc-bkup-prevnum 1
> --bpc-bkup-prevcomp 3 --bpc-bkup-inode0 62956 --bpc-log-level 1
> --bpc-attrib-new -e /usr/bin/ssh\ -l\ toor --rsync-path=/usr/bin/rsync
> --numeric-ids --perms --owner --group --devices --links --times
> --block-size=2048 --recursive --timeout=7200 --include=/\*.log
> --include=/\*.log.1 --include=/\*.log.2 --exclude=/\* 
> myhost.com:/var/log/apache2/
> /
> incr backup started for directory /var/log/apache2
> Xfer PIDs are now 551492
> This is the rsync child about to exec /usr/libexec/backuppc-rsync/rsync_bpc
> Xfer PIDs are now 551492,551494
> xferPids 551492,551494
> Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal,
> 0 sizeTotal, 6 filesNew, 7971811 sizeNew, 827816 sizeNewComp, 62957 inode
> DoneGen: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 8
> filesTotal, 7972495 sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 62956
> inode
>
>
>
> Maybe I have some bad settings from my v3 install?
> Do you know which settings I should check?
>
> Thanks,
>
> --
> Ludovic.
>
> https://drolez.com/blog/   - Music and Tech Blog
> https://palmopensource.com - Open Source Tech Blog
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Incomplete Backup with rsync ssh

2022-09-16 Thread Guillermo Rozas
I'm not at the computer to check now, but there is an option in rsync that
controls whether or not it crosses filesystem boundaries
(--one-file-system, I think). I believe this is enabled by default in V4. If
the folders you're missing are mounted as sub-directories of a folder you
back up, they won't be backed up unless you disable that option or
explicitly add them to the backup list.
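If you go the explicit route, a sketch (the mount point names are just
placeholders for your /boot and /dev/sda2 data mounts) would be:

    $Conf{RsyncShareName} = ['/', '/boot', '/mnt/data'];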
Best regards,
Guillermo

On Fri, Sep 16, 2022, 07:20 Frédéric Mathy  wrote:

> Thanks.
> I changed the exclude directories but the problem is the same.
> Backup is empty for [boot] directory on /dev/mmcblk0p2 and incomplete for
> data on /dev/sda2.
> The backup is correct for /dev/sda1.
>
> I don't understand why. All was ok on backuppc 3...
>
> Fred
>
>
> Hello, in my experience I avoid globbing sub-directories using "*" with
> Backuppc 4, if I want to exclude "/var/foo" I just use:
> '/var/foo'
> in the config, not '/var/foo/*'. this way I don't end up with empty
> directories and many strange errors disappeared.
> Also I noticed that if a sub-directory is explicitly setup in
> "$Conf{BackupFilesOnly}", the other sub-directories won't be backed up.
> i.e. if '/var/log' and '/var/mail' are setup in "$Conf{BackupFilesOnly}",
> then no other sub-directories under /var/ will be backed up, even though
> they are not explicitly excluded in "$Conf{BackupFilesExclude}" .
>
> Hope it helps.
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc pool integrity check

2022-08-18 Thread Guillermo Rozas
I'm lucky to never have faced that problem :)

I'm not sure a simple "touch" will force a new transfer. By default an
incremental backup will only check (and transfer) the new metadata, but it
will not transfer the file. A full will calculate the checksum, but again
it won't transfer the file because it will be the same checksum as the one
already in the cpool.

However, the documentation mentions that for pooling files the
checksum is used only to search for "candidates", but then a "complete file
comparison" is done to verify that they're actually the same file. So maybe
a name change or move to another folder would be enough to force a full
file comparison and transfer of the "new" file.

Regards,
Guillermo

On Tue, Aug 16, 2022, 05:23 Ludovic Drolez  wrote:

> Hi!
>
> So I found a few dozen corrupt files, and made some scripts to find the
> original
> files from the hash.
> Not sure how to fix the backups... Touch the original file and delete the
> one in the
> cpool?
>
> Regards,
>
> Ludo
>
> On Sun, Jul 31, 2022 at 07:22:11PM +0200, Ludovic Drolez wrote:
> > Hi!
> >
> > So I've tried it and I have a few v3 pool files still in the cpool.
> > I should try to modify your script to ignore them, because they give
> errors.
> >
> > Thanks!
> >
> > Ludo
> >
> >
> > On Thu, Jul 28, 2022 at 04:34:40PM -0300, Guillermo Rozas wrote:
> > >You may try (and modify)
> > >this: [1]https://github.com/guille2306/backuppc_check_pool
> > >It's a script I made to periodically check the V4 cpool in my
> server.
> > >Best regards,
> > >Guillermo
> >
> > --
> > Ludo
> >
> > https://drolez.com/blog/   - Music and Tech Blog
>
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Import pool or clients

2022-08-03 Thread Guillermo Rozas
I think there is a V3-to-V4 conversion tool (look for it in the manual),
but it works on data that BackupPC already knows about. Maybe you could try
to create a V3 pool for a fictitious client, add the old data by hand into
the folder structure, and then convert it to V4, but it could be risky for
your current pool.
Regards,
Guillermo

On Wed, Aug 3, 2022 at 2:31 PM backuppc--- via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Hi,
>
> I pulled a V3 pool off an old hard disk that I had wrongly assumed was
> broken. Now I would like to import as much data as possible into my current
> V4 installation.
>
> Merging or importing pools does not seem to be an option, so I am
> wondering what could be done with the existing tools? I am thinking that I
> could create a config.pl that would allow me to run at least
> BackupPC_archiveHost/BackupPC_archiveStart so that I could extract backups
> into an archive. But then there is no facility to import archives, or did I
> overlook anything? Any other suggestions?
>
> Fabian
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Backuppc pool integrity check

2022-07-28 Thread Guillermo Rozas
You may try (and modify) this:
https://github.com/guille2306/backuppc_check_pool
It's a script I made to periodically check the V4 cpool in my server.
Best regards,
Guillermo

On Thu, Jul 28, 2022 at 4:26 PM Ludovic Drolez  wrote:

> Hi there!
>
> I had to copy a Backuppc pool disk which was failing. DDrescue had to skip
> a few bad blocks,
> and now I'd like to check the integrity of the pool.
>
> Is there is script to run and check which files do not have a matching MD5
> checksum?
>
> Best regards,
>
> --
> Ludo
>
> https://drolez.com/blog/   - Music and Tech Blog
> https://isabelleantoine.be - Coaching and NLP
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Some directories stay empty (The directory ... is empty).

2022-03-21 Thread Guillermo Rozas
On Mon, Mar 21, 2022 at 11:40 PM  wrote:

>  > I would consider that configuration (SHH for root with a passwordless
> key)
>  > a VERY VERY dangerous configuration.
> The poster is not saying no password, I think he/she is just saying
> use an unencrypted private ssh key...
>

I understood the same; my point is that a passwordless private key for root
is still very dangerous unless that key is locked down in hell.


> There are some things you can do to *partially* harden the situation,
> While this might be particularly dangerous, but if you are going to backup
> a machine fully then you will need at least root-like read access to all
> the
> files on that machine.
>

Agreed. But I would add these questions to the would-be backupper: do you
really need to fully back up the machine, or is it just the easy way? Can
you (reasonably) break the backup into chunks that each need access from a
single user?

Would be good to hear what others do here...


I only back up user files (and only for a few users/machines), so maybe it
is a corner case, but here you go:

- I define a "host" for each user, using ClientNameAlias to connect
multiple times to the same machine if needed.
- I then modify the RsyncSshArgs for each of these hosts to connect using
the corresponding username ($sshPath -l user).
- Finally, on the clients the authorized_keys file has a "command=" part
for the backup key that only allows executing a modified version of the
rrsync script, with read-only access restricted to the root of the user's
folder to back up.
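Roughly, the per-host piece looks like this (hypothetical names and paths):

    # pc/user1_desktop.pl
    $Conf{ClientNameAlias} = 'desktop.example.com';
    $Conf{RsyncSshArgs}    = ['-e', '$sshPath -l user1'];

and on the client, in ~user1/.ssh/authorized_keys:

    command="/home/user1/.bin/rrsync -ro /home/user1",no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding ssh-ed25519 AAAA... backuppc@server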

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Some directories stay empty (The directory ... is empty).

2022-03-21 Thread Guillermo Rozas
>
> > I don't think backuppc uses sudo. You should enable root access
> > "without-password" in your sshd.conf, then set up backuppc to use a key
> > to SSH in as root.
>

I would consider that configuration (SSH for root with a passwordless key)
a VERY VERY dangerous configuration.

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-16 Thread Guillermo Rozas
>
> Unless I am missing something...
>

No, you're not missing anything. Effectively THIS is the hard part:


> Even if you did somehow pre-compress files and pipe them onto the
> client side rsync, you would probably break the ability for rsync to
> tell changed files based on stat'ing the file size...
>

One can probably feed rsync a pre-processed version of the filesystem, and
that's what will be backed up. But this will inevitably end up
re-compressing everything on every rsync session. There is probably no way
out if one wants to do it client side, as the OP wanted.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-16 Thread Guillermo Rozas
>
> > Certainly, this would be more bandwidth-friendly if it was possible to
>  > do this compression before transferring to server, but I can't figure
>  > out how I could accomplish this.
> Presumably harder as it would require host-side code to do things
> such as running a patched host version of rsync (or other transfer
> method executable)
>

I think you can achieve this by running a wrapper script on the client and
tying its execution to the BackupPC key. The steps would be:

- generate a key to be used by the rsync/ssh connection from BackupPC and
nothing else (you should do this for security reasons anyway)
- in the client's authorized_keys file, use the "command" option to execute
a script every time a connection with this key is made. This command will
run instead of the rsync command BackupPC wants to run, and the latter will
be saved in the SSH_ORIGINAL_COMMAND variable
- in this script, pipe your compression process in some way into the rsync
command originally sent by BackupPC

This is the method the rrsync [1] script uses to restrict rsync to a
certain folder and options (I'm actually using a modified version of it
with BackupPC). The hard part is to figure out the third point above...
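As a skeleton for the first two points only (a sketch with hypothetical
paths, and far less careful than rrsync's real argument parsing):

    #!/bin/sh
    # installed on the client and tied to the BackupPC key via
    #   command="/home/user/.bin/bpc_wrapper",... in authorized_keys
    case "$SSH_ORIGINAL_COMMAND" in
        *"rsync --server"*)
            # point 3 would hook in here, e.g. by pointing rsync at a
            # pre-compressed view of the filesystem before exec'ing it
            exec $SSH_ORIGINAL_COMMAND
            ;;
        *)
            echo "rejected: $SSH_ORIGINAL_COMMAND" >&2
            exit 1
            ;;
    esac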

Regards,
Guillermo

[1] http://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/

>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Corrupted poolUsage.rrd file???

2022-01-05 Thread Guillermo Rozas
>
> - Regardless, hope this helps others who may be encountering this error
>

Many thanks! I had an error with the graphs not showing in the GUI for
several months, but never had the time to track down the reason. In my case
the error was " '/var/log/BackupPC/poolUsage.rrd' is not an RRD file ". The
same solution worked, taking into account the different file location. I
also have no idea why it got corrupted; probably a power cut at the wrong
time.

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Validating pool file checksums

2021-11-11 Thread Guillermo Rozas
Some time ago I wrote a script to periodically check the cpool of a running
BackupPC V4 installation (before the check option appeared in V4.4). You
may take it and modify it a bit to do what you want:
https://github.com/guille2306/backuppc_check_pool

Best regards,
Guillermo

On Thu, Nov 11, 2021, 00:38 Kenneth Porter  wrote:

> Does a script exist to run the whole pool to validate checksums? It's my
> understanding that the filename is the file's checksum, so in principle a
> simple utility could be used to look for potential disk problems.
>
> I'm rsync'ing my external drive to a new one after running out of space,
> and this occurred to me as I was watching the pool files scroll by. I'd
> like to know that the files are all healthy.
>
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] V4 Infinite Incrementals

2021-09-23 Thread Guillermo Rozas
>
> Lost data is obviously a no-go, but if the reverse-delta process is
> reliable, I would prefer to minimize the full backups to a few times a year.
>

One point you should consider in planning the schedule is that full backups
thoroughly check the source for changes (using checksums, for rsync). So,
even if the reverse-delta mechanism for incrementals is reliable, a change
that was not picked up because it did not alter the metadata (size and
mtime for rsync) will be missed until the next full.

The checksum also has the added benefit of detecting corruption in
"unchanged" files.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Move current data dir to another location and start from scratch.

2021-08-12 Thread Guillermo Rozas
>
> How do you archive your backups?
>

I use the Archive function from BackupPC:
https://backuppc.github.io/backuppc/BackupPC.html#Archive-functions

I save the archive to an external drive, which I then move off-site.

WHat I was thinking was simply moving/renaming the current backup set and
> making the dir immutable to prevent ransomware from getting to it.
> Then start from scratch again, but leave that one for 6 months then do the
> same thing again but deleting the older immutable copy.
>

What I don't get is why you want to start from scratch. Even if you don't
use the Archive function (there are good reasons not to) and want to get a
snapshot of the full BackupPC installation and data, you can just copy the
relevant folders and leave the BackupPC installation untouched. Somebody
else also proposed using a filesystem with snapshot capabilities, although
I don't know if that is immune to ransomware. In any case, there is no need
for moving files and a clean start.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Move current data dir to another location and start from scratch.

2021-08-12 Thread Guillermo Rozas
Hi,

I have a full week's backup of all my servers, i want to make an immutable
> copy of them, so I was thinking of just "moving" the current data dir, and
> whatever else I might need to another partition, and start the backup from
> scratch again.
>

Why "move and start from scratch"? Why don't you just copy the folder and
let the backup keep working?

I want to do that every 6 months...
> Suggestions?
>

Do you really need the full BackupPC data structure, including the
different snapshots, or is one copy of the data enough? In the latter case
maybe the "Archive" function is better, as you get the files with the
original file structure and names. However, you also lose the
deduplication.

I do a full Archive of my backups every 2 months to an external drive, but
I have little overlap of files so losing deduplication is not an issue.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Conf{PoolNightlyDigestCheckPercent} on btrfs and zfs

2021-05-04 Thread Guillermo Rozas
>
> One ensures against file system bit rot, the other ensures backup file
> consistency.
>

I would say $Conf{PoolNightlyDigestCheckPercent} = 1 is also a check for
bit rot, as the only thing it does is read the file, re-calculate the MD5
checksum, and compare it with its name (which is the MD5 calculated at the
time of writing). It actually says so in the help: "This is check if there
has been any server file system corruption."

What controls the consistency of the backups are
$Conf{PoolSizeNightlyUpdatePeriod} and especially $Conf{RefCntFsck}.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Guillermo Rozas
Sorry, it seems the base version of rrsync I used is quite old compared to
the current one, and I can't pinpoint which modifications are mine and
which are 'upstream'. All I can say is that:

- the attached version works with BackupPC 4.3.2, using the following
command line in the client's authorized_keys:

command="$HOME/.bin/rrsync_bkpc -ro
/path/to/backup",from="192.168.1.2",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding

- I vaguely remember having added and/or modified the allowed options for
rsync in the rrsync script (lines 55 to 139). If there is more than that, I
don't remember

I hope this helps. Best regards,
Guillermo

PS: I don't use sudo or root; BackupPC connects as the (only) non-root user
of the client using rrsync, restricted to the folder I want to back up
(basically the user's home, so there is no need for root access). Even if
rrsync is compromised, the remote has no way to gain root access, as there
is no passwordless sudo configured on the machines.

On Wed, Feb 10, 2021 at 10:51 AM Felix Wolters 
wrote:

> Thank you, Guillermo in advance, this will be great!
>
> Felix
>
>
> Am 10.02.21 um 14:01 schrieb Guillermo Rozas:
>
> Hi,
> I use rrsync in a couple of clients. You need to slightly modify the
> rrsync script to correctly parse BackupPC rsync call, and after that it
> will work without problems. Later in the week I'll look into my systems and
> post my rrsync modified version.
> Regards,
> Guillermo
>
> On Tue, Feb 9, 2021, 13:11 Felix Wolters  wrote:
>
>> Hello!
>>
>> Let me first thank you for providing BackupPC as open source software. I
>> appreciate it a lot and consider it to be one of the most useful backup
>> systems out there!
>>
>> I’d like to use it with restricted access to the client, so a
>> potentially corrupted BackupPC server wouldn’t be able to damage the
>> client machine and data. Using rsync for transfer with a Linux client,
>> rrsync (restricted rsync – as part of the rsync package) would be a
>> straigt forward solution to restrict an incoming ssh connection to only
>> rsync and only a given folder which I will set read only – which would
>> perfectly do the trick. Unfortunately, this doesn’t seem to work with
>> BackupPC over rsync, as far as I can see. I’m positive rrsync generally
>> works on the client as I use it successfully with plain rsync over ssh
>> on the same machine.
>>
>> I’ve seen rare information on the internet about this, and it wouldn’t
>> help me so far.
>>
>> Thank you for some help or instruction!
>>
>> Best
>>
>>
>>


rrsync_bkpc_mod
Description: Binary data
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread Guillermo Rozas
Hi,
I use rrsync on a couple of clients. You need to slightly modify the rrsync
script to correctly parse BackupPC's rsync call, and after that it will
work without problems. Later in the week I'll look into my systems and post
my modified rrsync version.
Regards,
Guillermo

On Tue, Feb 9, 2021, 13:11 Felix Wolters  wrote:

> Hello!
>
> Let me first thank you for providing BackupPC as open source software. I
> appreciate it a lot and consider it to be one of the most useful backup
> systems out there!
>
> I’d like to use it with restricted access to the client, so a
> potentially corrupted BackupPC server wouldn’t be able to damage the
> client machine and data. Using rsync for transfer with a Linux client,
> rrsync (restricted rsync – as part of the rsync package) would be a
> straightforward solution to restrict an incoming ssh connection to only
> rsync and only a given folder which I will set read only – which would
> perfectly do the trick. Unfortunately, this doesn’t seem to work with
> BackupPC over rsync, as far as I can see. I’m positive rrsync generally
> works on the client as I use it successfully with plain rsync over ssh
> on the same machine.
>
> I’ve seen rare information on the internet about this, and it wouldn’t
> help me so far.
>
> Thank you for some help or instruction!
>
> Best
>
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC 4.x on Raspberry PI?

2021-01-07 Thread Guillermo Rozas
>
> Interesting... how does Banana Pi Pro compare to Pi4? e.g.,
> performance, interfaces/ports, price, support...
>

I would say it's inferior in every aspect except for the direct SATA
connector (I think it's roughly equivalent to a Pi2 or Pi3 in raw
specifications). If it weren't for the reports about the Pi4 overheating /
needing an active cooler, I would switch to a Pi4.

However, there are infinite other options out there...

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] BackupPC 4.x on Raspberry PI?

2021-01-04 Thread Guillermo Rozas
On Sun, Jan 3, 2021 at 1:39 PM  wrote:

> I am considering either:
> 1. (old) Pi 3 - Quad core  ARM Cortex-A53, 1.2GHz.
> 1GB memory (with 256MB used for video)
> USB 2.0
>
> 2. (new) Pi 4 - Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
> 8GB memory
> USB 3.0
>

One point you should take into account: the Pi3's LAN connectivity goes
through the USB2 interface, and the bandwidth is shared with it. With its
independent Gigabit LAN and USB3 drive connection, the Pi4 should be much
faster for data transfer and file checksumming. That's the main reason I
moved from a Pi2 to a Banana Pi Pro (gigabit LAN, SATA drive).

How does 4.x compare with 3.x in terms of CPU and memory usage?
>

In terms of CPU, 4.x should be lighter, as full backups no longer checksum
the files on the server (in the default configuration).
Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Verify full integrity of pool and backups

2020-11-01 Thread Guillermo Rozas
>
> > Most of
> > the time the "ZLIB/PIGZ ERROR" above is related to this, so the script
> > re-checks them by uncompressing them using BackupPC_zcat. If the error
> > persists, then you will see an extra "CHECKSUM ERROR", which means the
> file
> > is actually corrupted.
>
> Is there some compelling reason not to use BackupPC_zcat in the first
> place?
>

Yes. At least on my server (Banana Pi), BackupPC_zcat is four times slower
than either zlib or pigz. As the number of errors of this kind is usually
really small, it's still faster to re-check the errors than to use
BackupPC_zcat for all the files. But one should test it on their own server
and select the best option, taking into account the balance between CPU,
memory and disk.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Verify full integrity of pool and backups

2020-10-31 Thread Guillermo Rozas
Hi,


> We have one question : what to do if/when the script detects a problem
> on a pooled file ? Can we just delete the file (based on the -maybe
> wrong- assumption it will be copied back on the next run) ?
>

Good question. Honestly, I don't know. I've put off that question until the
moment I actually find a problem, and I hope that moment never comes ;)

I don't think deleting the file would make any difference, for the same
reason a corrupted file in the pool is not picked up by BackupPC. Probably
the best approach would be to "touch" the original file (on the client) in
some way and wait for BackupPC to copy it again. This won't fix the
corrupted file in previous backups, though.

One extra point: given the MD5 checksum (the name of the pool file), it's
not trivial to find the original location and name of the file:
https://github.com/backuppc/backuppc/wiki/How-to-find-which-backups-reference-a-particular-pool-file
.


> An example of what we sometimes get :
>
> Sat 31 Oct 03:47:01 CET 2020
> /mnt/BackupPC/cpool/30
> pigz: warning:
> /mnt/BackupPC/cpool/30/24/3024a19df8965e84a1f5866ccd1927c8: trailing
> junk was ignored
> ZLIB/PIGZ ERROR: 3024a19df8965e84a1f5866ccd1927c8 ->
> 0af97176caed7c0152d8a5be1dd631e3
>

I should clarify this in the README, thanks for pointing this out. This is
not an actual error, it means that either zlib or pigz uncompressed the
file and the checksum didn't match the expected value (in the example, the
uncompressed version of 3024a19df8965e84a1f5866ccd1927c8 has an MD5
checksum of 0af97176caed7c0152d8a5be1dd631e3).

As explained in the README, the compression algorithm that BackupPC uses is
slightly different from the one used by zlib/pigz. On some rare occasions
this results in an apparently corrupted file when uncompressed by
zlib/pigz. Most of the time the "ZLIB/PIGZ ERROR" above is related to this,
so the script re-checks those files by uncompressing them with
BackupPC_zcat. If the error persists, then you will see an extra "CHECKSUM
ERROR", which means the file is actually corrupted.
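For a one-off re-check of that particular file (adjust the BackupPC install
path to yours):

    /usr/share/backuppc/bin/BackupPC_zcat /mnt/BackupPC/cpool/30/24/3024a19df8965e84a1f5866ccd1927c8 | md5sum

If the printed digest matches the file name (3024a19d...), the pool file is
fine and the earlier warning was only a zlib/pigz decompression quirk.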

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Verify full integrity of pool and backups

2020-10-25 Thread Guillermo Rozas
Hi,

it took me a while to clean it up and turn it into a more user-friendly
version, but here it is: https://github.com/guille2306/backuppc_check_pool.

The script in the repo works by systematically comparing the full-file MD5
checksum of the cpool files to their filename, flagging any difference.
Check the README for a bit more detail, usage options, and
advantages/disadvantages with respect to the built-in check option. Feel
free to raise issues and contribute!
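The core of the check is essentially this (a simplified and much slower
sketch using BackupPC_zcat; paths assume a typical install under
/var/lib/backuppc):

    cd /var/lib/backuppc/cpool
    find . -type f | while read -r f; do
        name=$(basename "$f")
        # V4 pool files are named by the full-file MD5 of the uncompressed content
        case "$name" in
            *[!0-9a-f]*) continue ;;   # skip anything that isn't a hex digest
        esac
        digest=$(/usr/share/backuppc/bin/BackupPC_zcat "$f" | md5sum | awk '{print $1}')
        [ "$digest" = "$name" ] || echo "CHECKSUM ERROR: $f ($digest)"
    done

The actual script adds scheduling across days and the faster
zlib-flate/pigz decompression discussed elsewhere in the thread.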

Best regards,
Guillermo



On Mon, Aug 24, 2020 at 3:31 PM Raoul Bhatia  wrote:

> Hi Guillermo, this is a nice script indeed.  How about hosting this
> somewhere were people can contribute?
>
> Raoul
>
> On 2020-08-23 15:43, Guillermo Rozas wrote:
>
> Hi Raoul,
>
> are you using BackupPC v4? If yes, you can use a modification of the
> script I posted here:
> https://sourceforge.net/p/backuppc/mailman/message/37032497/
>
> In the latest version (4.4.0) you also have the config option
> $Conf{PoolNightlyDigestCheckPercent}, which checks the md5 digest of this
> fraction of the pool files each night. You can probably set it to 1 and
> wait a night for it to run.
>
> Regards,
> Guillermo
>
> On Sun, Aug 23, 2020 at 5:38 AM Raoul Bhatia  wrote:
>
> Hi,
>
> related to my previous email, it seems that the cause of my issues was a
> file system corruption after a "power cut".
>
> I managed to recover (most of?) the data and would now like to do a
> thorough check of the data.
>
> Is there any way to "fully verify" the integrity of my backuppc
> installation, ideally in a nondestructive way ;-)
>
> Thanks,
> Raoul
>
> PS. My backuppc process is stopped.
> --
> DI (FH) Raoul Bhatia MSc
> E-Mail. ra...@bhatia.at
> Tel. +43 699 10132530
>
>
>
> --
> DI (FH) Raoul Bhatia MSc
> E-Mail. ra...@bhatia.at
> Tel. +43 699 10132530
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Can BackupPC_tarCreate restore large files?

2020-10-13 Thread Guillermo Rozas
Same here on a Banana Pi Pro running Armbian:

Distributor ID: Debian
Description:Debian GNU/Linux 9.13 (stretch)
Release:9.13
Codename:   stretch

Perl5 (revision 5 version 24 subversion 1)
ivtype='long long'
ivsize=8

Regards,
Guillermo

On Tue, Oct 13, 2020 at 1:21 PM Doug Lytle  wrote:

> >>> Ok, so that's the issue - your version of perl on arm only uses 32 bit
> IVs. I'm surprised it is built that way - does anyone know if that is
> default for perl on arm generally?
>
> I have a Raspberry Pi 4 running
>
> lsb_release -a
>
> No LSB modules are available.
> Distributor ID: Raspbian
> Description:Raspbian GNU/Linux 10 (buster)
> Release:10
> Codename:   buster
>
> And Perl shows:
>
> This is perl 5, version 28, subversion 1 (v5.28.1) built for
> arm-linux-gnueabihf-thread-multi-64int
>
> ivtype='long long'
> ivsize=8
>
> So, probably not the default for all ARM installs.
>
> Doug
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] set/shift preferred backup execution time?

2020-10-13 Thread Guillermo Rozas
On Mon, Oct 12, 2020, 22:29  wrote:

> Guillermo Rozas wrote at about 21:54:13 -0300 on Monday, October 12, 2020:
>
>  > Isn't exactly this how blackout periods work? The only difference would
> be
>  > that instead of defining the "forbidden" times, you would define the
>  > "allowed" times?
>  >
>
> The key point would be to do this keyed by host -- so that you could
> effectively define the times you want each host backed up... vs. the
> blackout period which defines when you don't want the server doing backups.
>

Can't you define blackouts per host? At least for me (V4.2.1) in the GUI it
appears in the Schedule section of the host configuration, with the
corresponding button to override the general option.

Best regards,
Guillermo

>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] set/shift preferred backup execution time?

2020-10-12 Thread Guillermo Rozas
> Rather than using cron, would it make sense to define a hash keyed by
> the host that would allow one to specify a desire time or time range
> for backups -- sort of like the inverse of blackouts but settable by
> host.
>
> Also, I would make this more of a "soft" preference in that if a
> backup can't be accomplished at the given preferred time, then a
> backup would be attempted outside of the preferred periods.


Isn't this exactly how blackout periods work? The only difference would be
that instead of defining the "forbidden" times, you would define the
"allowed" times?

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] set/shift preferred backup execution time?

2020-10-12 Thread Guillermo Rozas
> and, iiuc, BlackoutPeriods prevent any/all backups during their range,
> correct?
>

Yes and no. Under normal operation no backup will occur during
BlackoutPeriods. But if a client consistently fails to back up in the
allowed time periods, it will also be tried during blackout periods. What
"fails to back up consistently" means is controlled by the
BlackoutBadPingLimit option.

> I _do_ want to have backups _available_ -- i.e., _not_ blacked out at any
> time -- so they can run if/as needed.

> Good reminder re the 'risk' of eventual time-shift back ... I simply wanted
> to shift the _preference_ to a user-selected time, and will deal with it as
> it occurs.


But that's exactly the problem the BlackoutPeriods option tries to solve!
Suppose a configuration like this (copied from my server):

$Conf{WakeupSchedule} = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 1, 2];
$Conf{BlackoutBadPingLimit}   = 14;
$Conf{BlackoutGoodCnt}= 1;
$Conf{BlackoutPeriods} = [
    {
        hourBegin => 7.5,
        hourEnd   => 0.5,
        weekDays  => [0, 1, 2, 3, 4, 5, 6],
    },
];

Under normal operation the server will try to back up this host during the
1, 2, 3, 4, 5, 6 and 7am wakeup times, once per wakeup. No backup attempt
will be made between 7:30am and 0:30am on any day of the week.

However, if the host fails 14 consecutive pre-backup pings (in this case,
two full nights), then the server will try to contact the host at any of
the wakeup times, including those defined as blackout periods. Once a ping
works, the blackout is reinstated.

This way I make sure my host is backed up during the night, when I'm not
using it. However, if I forget to leave my notebook on for a couple of
nights in a row, I'm safe, as the server will perform the backup the next
time it sees the notebook, regardless of the time. And once it has made
that backup, it will go back to the normal night schedule.

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] set/shift preferred backup execution time?

2020-10-12 Thread Guillermo Rozas
You could also use $Conf{BlackoutPeriods} to shift the backups to the early
morning. This would also prevent the backup time from eventually shifting
to daytime again if the client is sometimes not available at 3am.
Best regards,
Guillermo

On Mon, Oct 12, 2020, 13:39 PGNet Dev  wrote:

> On 10/12/20 9:34 AM, Craig Barratt wrote:
> >   * in the web interface, assuming no backup is running, select
> stop/dequeue backup and enter the number of hours until a bit before 3am.
> That will delay the next backup until then.
>
> aha, perfect!
>
> thx!
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Deleting old backups

2020-10-09 Thread Guillermo Rozas
>
> No delete button here, is that a version 4 thing?  I am still on 3.
>

Yes, probably. I'm on V4, and this version introduced some changes that
made the backups more independent of each other. If I remember correctly,
in V3 you need to keep the base backup of a series, so you can't simply
remove anything you like.

I would say that in V3 the safest thing to do is to temporarily reduce the
number of backups to keep (both full and incremental), and let BackupPC do
its thing for a couple of days.

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Deleting old backups

2020-10-09 Thread Guillermo Rozas
>
> a) a sensible plan?
>
> b) safer than manually removing the backups by number from the
> corresponding pc directory, removing the files from the 'backups' index
> (and then still letting backuppc_nightly do its thing to tidy the pool,
> recovering the space)?
>

Why not:
c) Press the "Delete" button for those backups on the Backup Summary page
for the host? I would guess it's similar to option b), but safer.

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Verify full integrity of pool and backups

2020-08-23 Thread Guillermo Rozas
Hi Raoul,

are you using BackupPC v4? If yes, you can use a modification of the script
I posted here: https://sourceforge.net/p/backuppc/mailman/message/37032497/

In the latest version (4.4.0) you also have the config option
$Conf{PoolNightlyDigestCheckPercent}, which checks the md5 digest of this
fraction of the pool files each night. You can probably set it to 1 and
wait a night for it to run.

Regards,
Guillermo

On Sun, Aug 23, 2020 at 5:38 AM Raoul Bhatia  wrote:

> Hi,
>
> related to my previous email, it seems that the cause of my issues was a
> file system corruption after a "power cut".
>
> I managed to recover (most of?) the data and would now like to do a
> thorough check of the data.
>
> Is there any way to "fully verify" the integrity of my backuppc
> installation, ideally in a nondestructive way ;-)
>
> Thanks,
> Raoul
>
> PS. My backuppc process is stopped.
> --
> DI (FH) Raoul Bhatia MSc
> E-Mail. ra...@bhatia.at
> Tel. +43 699 10132530
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
> While it's helpful to check the pool, it isn't obvious how to fix any errors.

Sure. Actually, I've put off working out how to interpret the error and
identify the file involved until I find an actual error (so I hope to never
need that information! :) )

> So it's probably best to have rsync-bpc implement the old 
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly 
> skipping the --checksum short-circuit during a full.  For that fraction of 
> files, it would do a full rsync check and update, which would update the pool 
> file if they are not identical.

That would be a good compromise. It makes the fulls a bit slower on
servers with poor network and slow disks, but it's clearer what to
do in case of error. Maybe also add a "warning of possible pool
corruption" if the stored checksum and the new checksum differ for
those files?

Regards,
Guillermo


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
I've attached the script I'm using. It's very rough, so use at your own risk!

I run it daily checking 4 folders of the pool per day, sequentially,
so it takes 32 days to check them all. You can modify the external
loop to change this. The last checked folder is saved in an auxiliary
file.

The checksum is computed by uncompressing the files in the pool with
zlib-flate (line 25), but that can be changed to pigz or BackupPC_zcat.
On my severely CPU-limited server (Banana Pi) both pigz and zlib-flate
are much faster than BackupPC_zcat: they take around a quarter of the
time to check the files (pigz is marginally faster than zlib-flate).
On the other hand, BackupPC_zcat puts the lowest load on the CPU,
zlib-flate's load is 30-35% higher, and pigz's is a whopping 80-100%
higher.

However, as BackupPC produces slightly modified gzip files, there
is a (very) small chance that a BackupPC-compressed file is not properly
uncompressed by the other two tools (line 28 in the script). If
that happens, you need to re-check every zlib-flate or pigz failure
with BackupPC_zcat before calling it a real error. I think this gets
the best balance between load on the system and time spent checking
the pool (at least for my server and pool...).
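
For those who don't want to read the attachment, the core of the check is
roughly the following (a simplified sketch, not the attached script: the cpool
path is an assumption, adjust it to your TopDir, BackupPC_zcat is assumed to
be on the PATH, and it ignores the rare pool files with a collision suffix):

    #!/bin/bash
    # Verify that each compressed V4 pool file still matches the MD5 digest
    # encoded in its file name.
    cpool=/var/lib/backuppc/cpool            # assumed TopDir location
    for f in "$cpool"/??/??/*; do
        [ -f "$f" ] || continue
        name=$(basename "$f")
        sum=$(zlib-flate -uncompress < "$f" | md5sum | cut -d' ' -f1)
        if [ "$sum" != "$name" ]; then
            # zlib-flate fails on a few BackupPC-compressed files, so re-check
            # any mismatch with the slower but authoritative BackupPC_zcat
            sum=$(BackupPC_zcat "$f" | md5sum | cut -d' ' -f1)
            [ "$sum" != "$name" ] && echo "MISMATCH: $f"
        fi
    done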

Best regards,
Guillermo


On Mon, Jun 8, 2020 at 1:28 PM  wrote:
>
> Good point...
> Craig - would it make sense to add a parameter to BackupPC_nightly
> that would check a user-settable percentage of the files each night,
> say NightlyChecksumPercent. So if set to 3%, the pool would be checked
> (sequentially) over the period of ~1 month
>
> Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
>  > Yes, I wouldn't worry about collisions by chance.
>  >
>  > However, there is a second aspect that is not covered here: if you
>  > rely only on saved checksums in the server, it will not check again
>  > unmodified pool files. This risks you missing file system corruption
>  > or bit rot in the backup files that were previously caught by the V3
>  > behaviour (which periodically checksummed the pool files).
>  >
>  > Two solutions:
>  > - put the pool in a file system with checksum verification included
>  > - use a script to periodically traverse the pool and checksum the files
>  >
>  > Best regards,
>  > Guillermo
>  >
>  >
>  >
>  > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
>  >  wrote:
>  > >
>  > > Hi there,
>  > >
>  > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
>  > >
>  > > > ... presumably a very rare event ...
>  > >
>  > > That's putting it a little mildly.
>  > >
>  > > If it's really all truly random, then if you tried random collisions a
>  > > million times per picosecond you would (probably) need of the order of
>  > > ten trillion years to have a good chance of finding one...
>  > >
>  > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
>  > > 10.79
>  > >
>  > > I think it's safe to say that it's not going to happen by chance.
>  > >
>  > > If it's truly random.
>  > >
>  > > --
>  > >
>  > > 73,
>  > > Ged.
>  > >
>  > >
>  > > ___
>  > > BackupPC-users mailing list
>  > > BackupPC-users@lists.sourceforge.net
>  > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > > Wiki:http://backuppc.wiki.sourceforge.net
>  > > Project: http://backuppc.sourceforge.net/
>  >
>  >
>  > ___
>  > BackupPC-users mailing list
>  > BackupPC-users@lists.sourceforge.net
>  > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
>  > Wiki:http://backuppc.wiki.sourceforge.net
>  > Project: http://backuppc.sourceforge.net/
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


(Attachment: check_pool script)
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Guillermo Rozas
Yes, I wouldn't worry about collisions by chance.

However, there is a second aspect that is not covered here: if you
rely only on the checksums saved on the server, it will never re-check
unmodified pool files. This risks missing file system corruption
or bit rot in the backup files, which would previously have been caught
by the V3 behaviour (it periodically checksummed the pool files).

Two solutions:
- put the pool in a file system with checksum verification included
- use a script to periodically traverse the pool and checksum the files

Best regards,
Guillermo



On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
 wrote:
>
> Hi there,
>
> On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
>
> > ... presumably a very rare event ...
>
> That's putting it a little mildly.
>
> If it's really all truly random, then if you tried random collisions a
> million times per picosecond you would (probably) need of the order of
> ten trillion years to have a good chance of finding one...
>
> $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
> 10.79
>
> I think it's safe to say that it's not going to happen by chance.
>
> If it's truly random.
>
> --
>
> 73,
> Ged.
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backups file structure

2020-04-10 Thread Guillermo Rozas
It's in the docs:

https://backuppc.github.io/backuppc/BackupPC.html#BackupPC-Design

(you should probably first read the introduction section on the same document)

Regards,
Guillermo


On Fri, Apr 10, 2020 at 4:27 PM R.C.  wrote:
>
> Hi all
>
> where can I find a description of "backups" file structure?
>
> thank you
>
> Raf
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] keeping permanent backups

2020-03-02 Thread Guillermo Rozas
Not the solutions that you're looking for, but maybe half-way there:

- for new backups, you could create a "new" host with the same
connection configuration (but a different name), and then configure it
to do only manual backups ($Conf{BackupsDisable} = 1) with no
expiration policy set for them ($Conf{FullKeepCnt} = 1000,
$Conf{FullKeepCntMin} = 1000). It's not automatic, but at least all
the data is already on the server so there is no extra space required.

- for old backups, you could copy the full configuration and backup
tree for the host, rename it so you have a duplicate (both in the
configuration files and the root tree for the backup), and delete the
extra backups you don't need from the web interface. Probably only
valid on V4, where the "backup tree" only contains a bunch of files
with references to the actual files. Check
https://backuppc.github.io/backuppc/BackupPC.html#Storage-layout to
track what you would need to copy, and of course test it first ;)
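
In rough shell terms the "old backups" option would be something like this (a
sketch only; host names and paths are assumptions for a typical Debian-style
install, and as said above, test it first):

    # duplicate the host's config and its backup tree under a new name
    cp -a /etc/backuppc/pc/myhost.pl  /etc/backuppc/pc/myhost-frozen.pl
    cp -a /var/lib/backuppc/pc/myhost /var/lib/backuppc/pc/myhost-frozen
    # register the clone and let BackupPC re-read its host list
    echo "myhost-frozen 0 backuppc" >> /etc/backuppc/hosts
    systemctl reload backuppc        # or send the BackupPC server a SIGHUP
    # finally, delete the unwanted backups of myhost-frozen from the web interface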

Regards,
Guillermo

On Mon, Mar 2, 2020 at 12:54 PM Graham Seaman  wrote:
>
> Is the idea that incompatible with the way it works? I haven't tried looking 
> at the code yet, maybe I should. I was hoping there was already an option to 
> mark one backup out as exceptional.
>
> Several people have suggested making a backup elsewhere, but I would like to 
> keep everything on the  same system I'm backing up to - I guess I could add 
> another drive for historical backups, but its also handy to have the backuppc 
> frontend to use to look for files.
>
> Some have also suggested increasing the number of backups per host, but I 
> don't have the space to do that - the oldest backup I have is 2013, so that 
> would be quite a long history.
>
> Graham
>
> On 02/03/2020 14:20, Richard Shaw wrote:
>>
>> On Mon, Mar 2, 2020 at 5:08 AM Graham Seaman  wrote:
>>>
>>> I'd like to be able to mark some older full backups as 'frozen', not to
>>> be deleted. Is that possible? I can't find any mention of this in the docs.
>
>
> I don't think it's possible to mark one specific backup to keep, it's not 
> really part of the way BackupPC is designed to work.
>
> I would just archive the backup you want to keep to another media.
>
> Thanks,
> Richard
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Incrementals and Full backups on v4

2020-02-13 Thread Guillermo Rozas
Yes. In your setup all the incremental-unfilled are deltas with
respect to the last incremental-filled (see the level column). And if
you delete the last incremental (which is filled, level 0), the system
will fill the previous one and refer all the others to it, so you
never end up with a broken chain.

From the documentation
(https://backuppc.github.io/backuppc/BackupPC.html#Overview):
- Backups are stored as "reverse deltas" - the most recent backup is
always filled and older backups are reconstituted by merging all the
deltas starting with the nearest future filled backup and working
backwards.
- Since the most recent backup is filled, viewing/restoring that
backup (which is the most common backup used) doesn't require merging
any deltas from other backups.
- The concepts of incr/full backups and unfilled/filled storage are
decoupled. The most recent backup is always filled. By default, for
the remaining backups, full backups are filled and incremental backups
are unfilled, but that is configurable.
- Any backup can be deleted (deltas are merged into next older backup
if it is not filled).
- The reverse deltas allow "infinite incrementals" - no need for a
full backup if you are willing to trade speed for the risk that a file
change will not be detected if the metadata (eg, mtime or size)
doesn't change.

Regards,
Guillermo

On Thu, Feb 13, 2020 at 1:14 PM Gandalf Corvotempesta
 wrote:
>
> So, this is a perfectly working system:
> https://postimg.cc/PCZgN634
>
> with *ALL* backups available totally ?
> I'll be able to restore *any* file from *any* backup , in example,
> even from the #18 ?
>
> Il giorno gio 13 feb 2020 alle ore 16:07 Michael Huntley
>  ha scritto:
> >
> > Hi Gandalf,
> >
> > Not with v4.  V4 uses reverse deltas, so your most recent backup is a 
> > filled, or complete backup.
> >
> > V4 calculates the difference between today and yesterday, and so on 
> > backwards.  Just think of it as incrementals going  back in time and 
> > carrying your full with you each day.   You have a full basket of goodies 
> > each day and leave a trail behind you.
> >
> > You can also have older filled backups to reduce restore time as it lessens 
> > the calculations BackupPC must perform.
> >
> > So, if you have a complete trail of incrementals going back two weeks there 
> > is no data loss in that time period.
> >
> > If I am incorrect in any way in my analogy I am sure the list will correct 
> > me and we will both learn.
> >
> > Kind regards,
> >
> > mph
> >
> > > On Feb 13, 2020, at 3:49 AM, Gandalf Corvotempesta 
> > >  wrote:
> > >
> > > Just a confirm:
> > >
> > > if I have a full backup done on 2020-01-14 (doing 1 full each month)
> > > and daily incrementals, keeping up to 14 incrementals, I have data
> > > loss ?
> > >
> > > In example, the incremental done yesterday (2020-02-12), is relative
> > > to the incremental done on 2020-01-14 ?
> > >
> > > How does it work, exactly ? I have some "broken chain" ?
> > >
> > > With bacula, in example, I need at least 1 full backup and then each
> > > incremental after it, to restore from yesterday. When using
> > > differentials, I need 1 full, all differential, and all incrementals
> > >
> > >
> > > ___
> > > BackupPC-users mailing list
> > > BackupPC-users@lists.sourceforge.net
> > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > > Wiki:http://backuppc.wiki.sourceforge.net
> > > Project: http://backuppc.sourceforge.net/
> > >
> >
> >
> >
> > ___
> > BackupPC-users mailing list
> > BackupPC-users@lists.sourceforge.net
> > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki:http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Laptop backups

2018-12-30 Thread Guillermo Rozas
> However, the main question: The backups over the cable are obviously
> faster,


How much faster? In my experience, backups are limited more by disk
speed and CPU power than by the network. But I'm connected to the server via
either AC wifi or wired gigabit, so YMMV.

Keep in mind that the FIRST backup is always slow, because the full backup is
transferred over the network, but afterwards network use should be minimal for
a slowly changing file set.

> but for incrementals, it is not that big issue.


Are you sure? On V4 using rsync (recommended on Linux) the only difference
between incrementals and fulls is the way file changes are detected.
There shouldn't be big transfer differences unless you have big files that
change checksum but not size or modification time (which is the usual case
for encrypted containers).

Cheers,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Reading multi-volume tar archive

2018-11-29 Thread Guillermo Rozas
Don't worry, it seems all three (!!) volumes I transferred to my computer
to test them were corrupted on transfer. Tar on the server correctly shows
the (partial) list of files on each volume, as expected.
Regards,
Guillermo

On Thu, Nov 29, 2018 at 9:14 PM Guillermo Rozas 
wrote:

> Hi,
> I'm testing the archive capabilities of BackupPC (V4). Is there a way to
> read (list files, extract some of them) a single volume of a split Archive
> without having to cat together all the volumes first?
> Regards,
> Guillermo
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Reading multi-volume tar archive

2018-11-29 Thread Guillermo Rozas
Hi,
I'm testing the archive capabilities of BackupPC (V4). Is there a way to
read (list files, extract some of them) a single volume of a split Archive
without having to cat together all the volumes first?
Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Find file given digest (and a decompression error)

2018-11-27 Thread Guillermo Rozas
>
> > Of course, you're right :) (although pigz failed only in 2 files out of
> > several thousands).
>
> oh well, I was wondering about that. I've yet to see such a file (and
> probably never will, because I disabled pool compression for good and
> now use btrfs' lzop filesystem-based compression), but...
>

I've found a third; all of them are 6GB+ ISOs. I'm starting to see a pattern :P

> BackupPC_zcat decompresses both files correctly and their checksums are
> > correct now. However, at least with one of the files there is something
> > fishy going on because the compressed version is 60KB, the decompressed
> > is 7GB!
>
> I'd bet that those two files are extremely sparse.
> There are good reasons for such a file to be generated: e.g., from a
> ddrescue run that skipped lots of bad areas on a drive, or a VM disk
> image with a recently formatted partition, or similar. On many modern
> file systems supporting sparse files, the overhead for the holes in the
> file is negligible, so it's easier from a user perspective to allocate
> the "full" file and rely on the filesystem's abilities to optimize
> storage and access.
> However, some of BackupPC's transfer methods (in particular, rsync)
> cannot treat sparse files natively, but since they compress so well,
> that's hardly an issue for transfer nor storage on the server.
>

Thanks for the nice explanation. Unfortunately in this case it was a rather
more mundane reason: me failing to properly read the number of digits
of a big number...


> The reason why I recommended pigz (unfortunately without an appropriate
> disclaimer) is that it
> - never failed on me, for the files I had around at that time, and
> - it was *magnitudes* faster than BackupPC_zcat.
>
> But I had a severely CPU-limited machine; YMMV with a more powerful CPU.
> Depending on your use case (and performance experience), it might still
> be clever to run pigz first and only run BackupPC_zcat if there is a
> mismatch. If a pigz-decompressed file matches the expected hash, I'd bet
> at approximately 1 : 2^64 that no corruption happened.
>

I'm very severely CPU-limited (Banana Pi), so this can make a huge
difference. I tested it by checking two top level cpool dirs (roughly 1/64
~ 1.5% of the pool). I compared pigz, zlib-flate and BackupPC_zcat and on
my system:
- both pigz and zlib-flate are much faster than BackupPC_zcat, they take
around a quarter of the time to check the files
- pigz is marginally faster than zlib-flate
- BackupPC_zcat puts the lowest load on the CPU, zlib-flate's load is 30-35%
higher, and pigz's is a whopping 80-100% higher (pigz's load is actually
higher than 2 on this 2-core system)
- of course, BackupPC_zcat has the advantage of always working, zlib-flate
and pigz fail at the same files (very few)

With this data, I modified my script to normally run zlib-flate to check
the files, and re-check every failure with BackupPC_zcat before calling it
a real error. I think this gets the best balance between load on the system
and time spent checking the pool (I can traverse the entire pool in 32 days
with ~30 min of checking every day).

> > I'll check those 2 files tonight, and hopefully
> > have a script working by the weekend.
>
> Cool! If you don't mind and are allowed to, please share here...
>

The check script is almost there, I want to verify it for a couple of days
more before sharing it. The find script seems a bit harder to code than
I first thought :)

Cheers,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Find file given digest (and a decompression error)

2018-11-27 Thread Guillermo Rozas
>
> Pigz doesn't correctly support BackupPC compressed files, although it will
> in some cases.  The reported error is likely the problem.  Please use
> BackupPC_zcat instead and report back.
>

Of course, you're right :) (although pigz failed only in 2 files out of
several thousands).

BackupPC_zcat decompresses both files correctly and their checksums are
correct now. However, at least with one of the files there is something
fishy going on, because the compressed version is 60KB while the decompressed
one is 7GB! Which brings me to:

> I added a guide to the Wiki to find out where a pool file is referenced
> .
>

That's great, thanks! I'll check those 2 files tonight, and hopefully have
a script working by the weekend.

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] Find file given digest (and a decompression error)

2018-11-26 Thread Guillermo Rozas
Hi,

following the advice from Alex [1] I successfully created a script to check
the cpool on the server for checksum errors. So far I've found 2 files
whose names do not match their checksums. Is there a simple way to find
which paths those files correspond to, given their MD5 digests?

Actually, I'm still unsure if there is a problem with the files themselves
or with the decompression, as pigz complains that "trailing junk was
ignored" on both files. Any idea why could be that?

Best regards,
Guillermo

[1] https://sourceforge.net/p/backuppc/mailman/message/36379588/
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4.2.1 apparently in an infinite loop.

2018-09-17 Thread Guillermo Rozas
>
> >>> ... BackupPC appears to think that it has now used 5TB of a 3TB
> >>> disc and the claimed usage is growing. ...
>
> ... and growing.  It's now claiming 8.5TB used by the pool.  Pretty
> soon the claimed pool usage will exceed the total storage used by all
> the hosts that are being backed up.  Real partition usage is ~80%, of
> course it's is using nothing like the storage it claims to be using.


Regarding this point: have you tried reducing the parameter
PoolSizeNightlyUpdatePeriod? It defaults to 16, and from the documentation:

> "The total size of the files in the new V4 pool is updated every night
when BackupPC_nightly runs BackupPC_refCountUpdate. Instead of adding up
the size of every pool file, it just updates the pool size total when files
are added to or removed from the pool.
>To make sure these cumulative pool file sizes stay accurate, we recompute
the V4 pool size for a portion of the pool each night from scratch, ie: by
checking every file in that portion of the pool.

> $Conf{PoolSizeNightlyUpdatePeriod} sets how many nights it takes to
completely update the V4 pool size. It can be set to:
>   0: never do a full refresh; simply maintain the cumulative sizes when
files are added or deleted (fastest option)
>   1: recompute all of the V4 pool size every night (slowest option)
>   2: recompute 1/2 the V4 pool size every night
>   4: recompute 1/4 the V4 pool size every night
>   8: recompute 1/8 the V4 pool size every night
>  16: recompute 1/16 the V4 pool size every night (2nd fastest option;
ensures the pool file sizes stay accurate after a few days, in case the
relative updates miss a file)"

I once had a similar problem (pool not showing real size) and forcing it to
check the size in a single night (PoolSizeNightlyUpdatePeriod=1) for a
couple of days solved it.
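
That is, temporarily setting in config.pl:

    $Conf{PoolSizeNightlyUpdatePeriod} = 1;   # recompute the whole V4 pool size each night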

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Troubleshooting assistance for Bad Transfer

2018-09-17 Thread Guillermo Rozas
I have BackupPC 4.2.1 connecting without issues to a Windows 10 machine via
rsync. If you think it could be useful, I can dig up what I did to
configure it.
Regards,
Guillermo

On Mon, Sep 17, 2018 at 2:22 PM Jaime Fenton 
wrote:

> *Bump*
>
>
>
> Anyone else have advice?
>
>
>
> *From:* Jaime Fenton [mailto:jaime.fen...@animallogic.ca]
> *Sent:* Wednesday, September 5, 2018 3:58 PM
> *To:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Subject:* Re: [BackupPC-users] Troubleshooting assistance for Bad
> Transfer
>
>
>
> No dice, rsync does not work (nor did rsyncd, failed outright).
>
>
>
> Any further advice? If it helps, the two machines failing are windows 10.
>
>
>
> Jaime
>
>
>
> *From:* Kris Lou via BackupPC-users [
> mailto:backuppc-users@lists.sourceforge.net
> ]
> *Sent:* Wednesday, September 5, 2018 12:05 PM
> *To:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Cc:* Kris Lou 
> *Subject:* Re: [BackupPC-users] Troubleshooting assistance for Bad
> Transfer
>
>
>
> Then you might be running into some of the file-locking issues inherent to
> SMB, especially if you can get a partial backup.  Consider trying a
> different method, such as VSS-rsync?
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] backup in a special scenario

2018-08-05 Thread Guillermo Rozas
Hi,


> mmm, I'm not aware of a native rsync protocol for windows, how can I set
> this up? maybe it will solve all my issues without workarounds
>
It's described in the documentation here:
https://backuppc.github.io/backuppc/BackupPC.html#Step-5:-Client-Setup
All the information (and files) needed to run rsyncd on Windows is here:
https://github.com/backuppc/cygwin-rsyncd
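
The daemon setup essentially boils down to an rsyncd.conf on the client that
exports the drive(s) you want to back up; very roughly (module name, path and
secrets file below are illustrative; the cygwin-rsyncd package ships a
ready-made sample config):

    use chroot = false
    [cDrive]
        path = /cygdrive/c
        auth users = backuppc
        secrets file = c:/rsyncd/rsyncd.secrets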

What's described there is how to use the rsync daemon on Windows, but you
can also use the same minimal cygwin-rsyncd installation to connect to
rsync over SSH. For that you will need to install OpenSSH for Windows from
here: https://github.com/PowerShell/openssh-portable (if you have Windows
10 with the April 2018 Update, you may already have it).

Best regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-30 Thread Guillermo Rozas
>
> IIUC, you want a way to check the integrity of the pool files on the
> server side.
>

Yes


> BackupPC 3 used to have such a function, by re-checksumming and
> verifying some percentage of the pool during a nightly (can't remember
> the details, and I don't have the v3 docs available).
>

Found it here:
https://backuppc.github.io/backuppc/BackupPC-3.3.2.html#Rsync-checksum-caching

The wording further confirms that V4 won't checksum the files once they're
added to the pool, contrary to what I believed.


> If you want to do this for yourself, it's pretty easy with a cronjob.
> Just compare, for all files in $topDir/pool/*/*/, their md5sum with the
> filename. Same = good, not the same = bad.
> If your pool is compressed, pipe the compressed files in
> $topDir/cpool/*/*/ through pigz [1] (which, as opposed to gzip, can
> handle the headerless gz format used there), as in the following piece
> of bash:
>
>digest=$(pigz -dc $file | md5sum -b | cut -d' ' -f1)
>
> Now, check if $digest == $file, and you have a sanity check. (It's
> slightly more annoying to find out where $file was referenced in case it
> is corrupted; but it's possible, and I recommend not to worry about that
> until it happens.)
>

Perfect, thanks! I can then use --checksum to verify the client, and a
script to checksum the server off-line from time to time. The best of both
worlds :)
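
For the "from time to time" part, a cron entry along these lines would do
(the paths and the script name are purely illustrative):

    # /etc/cron.d/backuppc-poolcheck
    30 3 * * * backuppc /usr/local/bin/check_pool >> /var/log/backuppc-poolcheck.log 2>&1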

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-28 Thread Guillermo Rozas
Hi! Thanks for the answer


> > ... rsync --checksum only checksums files on the client, not the
> > server.  I find this strange because not only the manual says
> > otherwise ...
>
> It is not clear to me what document ("manual") you are reading which
> leads you to the conclusions which you seem to have drawn.  If you can
> give links to the document(s), and quote(s), that might assist.
>

I was reading the same passages you mentioned, but interpreting them in a
different way.


> [quote]
> *   Uses full-file MD5 digests, which are stored in the directory attrib
>  files. Each backup directory only contains an empty attrib file whose
>  name includes its own MD5 digest, which is used to look up the attrib
>  file's contents in the pool. In turn, that file contains the metadata
>  for every file in that directory, including each files's MD5 digest.
> [/quote]
>
> I take this to mean that, in order to find the checksums for the files
> on the client, the server looks in the files in its data directory for
> that client precisely because, when it does so, it does NOT then need
> to read pool files (to re-calculate the checksums) because it has done
> that work already and saved the results in the filesystem.


Agreed, this could be an interpretation. However, a bit below it says:

[quote]
*  rsync-bpc doesn't support checksum caching
[/quote]

Which I interpreted as 'It uses the MD5 digest names only for file
reference, but it doesn't rely on them for file integrity. Therefore, it
will checksum the files again'. After that, my mind was set: I knew
BackupPC already had the checksums, but I thought they were not used by
rsync-bpc.

Your email prompted me to re-check that, and sure enough there is this
comment in https://github.com/backuppc/rsync-bpc/blob/master/checksum.c:

[quote]
* Try to grab the digest from the attributes, which are both MD5 for
protocol >= 30.
* Otherwise fall through and do it the slow way.
[/quote]

so this solves the question? In V4, rsync-bpc uses the attributes' MD5 as a
cache for the full checksum (which is used by --checksum), but it doesn't
have caching capabilities for the block checksums (used by --ignore-times)?

> Naturally,
> using this approach, you rely on the integrity of the previously saved
> pool data.


Agreed, that is my situation. I'm reasonably sure of the system (UPS, Debian
stable, ext4), but as my backups are relatively small (1
> This seems to me further to confirm my interpretation of the earlier
> quote, and also to suggest the behaviour which you yourself describe
> in your posts.  It explicitly refers to "a more conservative approach"
> which may be what you want.
>

Yes. However, as the same documentation says:

[quote]
* The use of rsync --checksum allows BackupPC to guess a potential match
anywhere in the pool, even on a first-time backup. In that case, the usual
rsync block checksums are still exchanged to make sure the complete file is
identical.
[/quote]

I thought it would be better to use --checksum. But if --checksum doesn't
actually checksum the files on the server each time, I agree that using
--ignore-times is a better fit for my use case at this point. Thanks.
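
In config.pl terms (assuming the usual V4 setup where the extra rsync
arguments for full backups come from $Conf{RsyncFullArgsExtra}), that would
mean something like:

    $Conf{RsyncFullArgsExtra} = ['--checksum'];      # rely on the cached full-file digests
    # vs.
    $Conf{RsyncFullArgsExtra} = ['--ignore-times'];  # force a block-by-block compare on fulls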

Regards,
Guillermo
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC V4 and --checksum

2018-07-28 Thread Guillermo Rozas
Hi,
does anybody have a tip on this? Both my computers (Win10, Ubuntu 17.10)
have now been running long enough to make their first full backups on their
own, and the 'problem' persists: rsync --checksum only checksums files on
the client, not the server.
I find this strange because not only does the manual say otherwise, but there
are comments on this list and even a feature request mentioning the slow
checksumming on the server during full backups!
I'd appreciate any help to elucidate what's happening on my system. I need
checksumming working on the server for data integrity reasons. Thanks!
Regards,
Guillermo

On Tue, Jun 26, 2018 at 10:44 PM Guillermo Rozas 
wrote:

> Hi,
>
> I've recently installed BackupPC 4.2.1 on my home server (ARMBIAN
> 5.38), and I'm trying to understand the behavior of rsync's --checksum
> option on V4.
>
> According to the docs, V4 doesn't have checksum caching, so I was
> expecting the server to read and checksum all the files during a full
> backup. What I'm seeing is the complete opposite: it seems my server
> is not checksumming any file during a full backup with the --checksum
> option. Reading operations on the backup disk are kept to a minimum
> (less than 300kB/s). Is this the expected behavior or maybe I have a
> problem somewhere?
>
> For reference, the same backup using --ignore-times maxes out the
> server's capacity at 20MB/s. In both cases the client behaves as
> expected, checksumming everything. Actually, the duration of a
> full --checksum backup is essentially defined by the reading speed of
> the client disk, although I know that the server's reading speed is
> much slower (all this is running a full backup immediately after an
> incremental, so no files are transferred).
>
> Any help will be appreciated. I can send log files if needed.
> Best regards,
> Guillermo
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Large directory

2018-07-06 Thread Guillermo Rozas
I think the important point here is the number of hard links; rsync
can have problems in those situations because it has to search for them
all and keep track of them
(https://lists.samba.org/archive/rsync/2014-June/029537.html)

> > I am trying to backup a large directory tree with BackupPC v4. This
> > directory is 660GB and contains over 25 million files with about 3
> > million hard links. The initial backup ran for 2 weeks before dying
> > with an rsync error. It is showing as a partial backup, but it doesn't
> > show a file count.
>
> My 'home' directory on my desktop machine at work for example is half
> that size, and just over a million files, although it contains only a
> few thousand hard links.

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


[BackupPC-users] BackupPC V4 and --checksum

2018-06-26 Thread Guillermo Rozas
Hi,

I've recently installed BackupPC 4.2.1 on my home server (ARMBIAN
5.38), and I'm trying to understand the behavior of rsync's --checksum
option on V4.

According to the docs, V4 doesn't have checksum caching, so I was
expecting the server to read and checksum all the files during a full
backup. What I'm seeing is the complete opposite: it seems my server
is not checksumming any file during a full backup with the --checksum
option. Reading operations on the backup disk are kept to a minimum
(less than 300kB/s). Is this the expected behavior or maybe I have a
problem somewhere?

For reference, the same backup using --ignore-times maxes out the
server's capacity at 20MB/s. In both cases the client behaves as
expected, checksumming everything. Actually, the duration of a
full --checksum backup is essentially defined by the reading speed of
the client disk, although I know that the server's reading speed is
much slower (all this is running a full backup immediately after an
incremental, so no files are transferred).

Any help will be appreciated. I can send log files if needed.
Best regards,
Guillermo

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/