Re: [BackupPC-users] Understanding pros/cons of full vs. incremental and filled vs. unfilled for v4

2019-04-21 Thread Craig Barratt via BackupPC-users
Jeff,

Your statements about full vs. incremental and filled vs. unfilled are
correct.  In v3, all fulls are filled and all incrementals are unfilled.
In v4 that's the default, but you can configure them differently.  In
particular, you could require that all backups are filled, whether they
are incremental or full.

I haven't done testing to see if having 100% fulls would be faster.  On my
ext4 system running on software RAID 10, duplicating a filled backup (the
required step before starting a new backup when you want the prior one to
remain filled) is actually quite slow, since the whole directory tree has
to be traversed.  So that part is definitely slower.  However, you are
right that, after that, the backup is somewhat simpler, since it is only
modifying the current backup tree in place and there's no need to update
the prior unfilled backup with reverse deltas.  Another minor advantage
of only having filled backups is that deleting any one of them is easier,
as you note.  Currently BackupPC needs to merge deltas into the
immediately prior backup (if unfilled) when you delete a backup.
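For reference, keeping every backup filled in v4 is a small config change. This is a sketch based on my reading of the v4 config defaults; the period values below are just the stock defaults shown for context, not a recommendation:

```perl
# Fill every backup, whether full or incremental.  The default of 0
# fills only full backups (matching the v3 behavior described above).
$Conf{FillCycle} = 1;

# Incrementals can still be mixed in between fulls for speed; the fill
# policy is independent of the full/incremental schedule.
$Conf{FullPeriod} = 6.97;
$Conf{IncrPeriod} = 0.97;
```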

Craig



On Thu, Apr 11, 2019 at 9:50 AM  wrote:

> Presumably in an ideal world with no tradeoffs, every backup would be
> full and filled, as that would make each backup standalone and
> independent, with no need to merge or fill anything later on.
>
> *Full vs incremental*
> If I understand it correctly, the difference between incrementals and
> fulls is that for incrementals, if the attribs match, the files are
> assumed to match, avoiding the need for full-file checksums.
>
> So it makes sense to mix incrementals in between full backups, in
> order to speed up the incremental backups relative to what a full
> would take.
>
>
> *Filled vs. Unfilled*
> If I understand it correctly, unfilled backups don't include unchanged
> files
> in the backup file tree.
>
> For v3, this made sense as it eliminated the need to add yet more hard
> linked files that take time to create and consume inodes.
>
> It's not as clear to me what the advantage is in v4, given that all the
> file information is stored in the attrib file itself, which needs to be
> read anyway. It wouldn't seem that inefficient to me to just write out
> the full directory attrib file for all backups, whether changed or not.
>
> - The inode wastage is a lot less in that you are only storing one
>   file per directory and you would need an attrib file anyway if any
>   file were to change in the directory.
>
> - There is no "messy" creation of hard links
>
> In all, I would even think that filled would in general be faster
> than unfilled, since you don't need to merge up the incremental tree
> to see if there are changes -- all presumably much slower than just
> writing the attrib file to each directory.
>
> Said another way: in v4, why not just make all backups filled, while
> still mixing incrementals between fulls, to speed up the average
> backup time?
>
>
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Convert incremental/unfilled to full/filled

2019-04-29 Thread Craig Barratt via BackupPC-users
Jeff,

No, there isn't a built-in way to do this.

BackupPC_backupDuplicate does something similar to what you want, but it
would need some significant modifications.

Craig

On Mon, Apr 29, 2019 at 10:35 PM  wrote:

> I just converted some old v3 backups to v4.
> Is there any way to manually convert an incremental/unfilled to a
> full/filled backup under v4?
>
>


Re: [BackupPC-users] Why can't hostnames have capital letters?

2019-05-01 Thread Craig Barratt via BackupPC-users
Jeff,

BackupPC forces host names to lower case when it reads the hosts file,
so they are lower case everywhere.  It's been that way from the very
beginning.

I think the reason I did it that way is that host names are generally
treated as case-insensitive.  But I agree it would have been better not
to enforce the lower-casing.

Craig


On Wed, May 1, 2019 at 10:02 PM  wrote:

> Any reason why capital letters don't work?
>
> Specifically, I found that renaming the 'pc' directory name to a name
> with capital letters (along with the corresponding change in the hosts
> file) causes BackupPC to fail to recognize the machine's backups.
>
> On Linux and Windows, hostnames seem to be case-insensitive, so I'm
> not sure why BackupPC can't at a minimum be case-insensitive, rather
> than failing to recognize hostnames with a capital letter.
>
>
>


Re: [BackupPC-users] SOLVED: Translating rsync command from v3 to v4

2019-05-05 Thread Craig Barratt via BackupPC-users
Yes, that's the best workaround.  Since BackupPC 4.x uses rsync_bpc on the
server (which is native rsync, modified to access the BackupPC storage),
this is unfortunately internal to how rsync specifies and executes the
remote command.

The problem, as you observed, is that rsync adds the host name to whatever
you specify in $Conf{RsyncSshArgs}.  So you need something that will
consume and ignore that first argument.  The form I use is "sudo -p", since
the prompt is not used for password-less ssh:

$Conf{RsyncSshArgs} = ['-e', '/usr/bin/sudo -p'];
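To see why a prompt-consuming command works, here is a toy model in plain shell (not BackupPC code): rsync effectively runs `<RsyncSshArgs command> <host> <remote command...>`, so the configured command must swallow the host argument, which is exactly what `sudo -p <prompt>` or a `shift`-style wrapper does:

```shell
# Toy model of rsync's remote invocation: the first argument is the
# host name rsync inserts; a workable wrapper discards it and runs the rest.
run_without_host() {
  shift        # drop the inserted host name
  "$@"         # execute the remaining command
}

out=$(run_without_host myhost echo /usr/bin/rsync --server --sender)
echo "$out"
```

With `sudo -p`, the inserted host name becomes the (unused) password prompt instead, achieving the same effect without a wrapper script.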

Craig



On Fri, May 3, 2019 at 3:14 PM  wrote:

> backu...@kosowsky.org wrote at about 00:53:24 -0400 on Friday, May 3,
> 2019:
>  > backu...@kosowsky.org wrote at about 00:11:50 -0400 on Friday, May 3,
> 2019:
>  >  > In v3, when I was backing up on the same server, I eliminated an
>  >  > "unnecessary" extra ssh by substituting 'sudo' for 'ssh' using:
>  >  >$Conf{RsyncClientCmd} = '/usr/bin/sudo $rsyncPath $argList+';
>  >  >
>  >  > Not sure how to do this in v4 where one doesn't have access to
>  >  > RsyncClientCmd but instead can only specify:
>  >  >RsyncClientPath
>  >  >RsyncSshArgs
>  >  >SshPath
>  >  > i.e., I don't see how to eliminate the use of ssh...
>  >  >
>  >
>  > To clarify, I am trying to generate the following resulting command:
>  >/usr/bin/sudo /usr/bin/rsync --server --sender -slHogDtpAXrcxe.iLsf
> --iconv=UTF-8
>  >
>  > Rather than the ssh version, which would be more like:
>  > /usr/bin/ssh -l root mymachine /usr/bin/sudo /usr/bin/rsync --server
> --sender -slHogDtpAXrcxe.iLsf --iconv=UTF-8
>  > (where mymachine really just equals localhost)
>  >
>  > If I set:
>  >SshPath = ''
>  >RsyncClientPath = '/usr/bin/sudo /usr/bin/rsync'
>  >RsyncSshArgs = ''
>  > I would presumably get the following close, but non-working, command:
>  >mymachine /usr/bin/sudo /usr/bin/rsync --server --sender
> -slHogDtpAXrcxe.iLsf --iconv=UTF-8
>  >
>  > I could presumably write a short shell script like:
>  >   #!/bin/bash
>  >   shift
>  >   exec "$@"
>  >
>  > Then set SshPath to the above script (with RsyncSshArgs = '').
>  > But even if that would work, it seems kludgey.
>  >
>  >
>
> Just noticed that there is a '-h ' switch to 'sudo', so the
> following works to allow rsync on the localhost without 'ssh'
>
>   $Conf{RsyncClientPath} = '/usr/bin/rsync';
>   $Conf{RsyncSshArgs} = ['-e', '/usr/bin/sudo -h'];
>   $Conf{SshPath} = '';
>
> The resulting command is of the form:
> /usr/bin/sudo -h  /usr/bin/rsync --server --sender
> -slHogDtpAXrcxe.iLsf --iconv=UTF-8
>
> Since 'host' is the localhost (by choice), adding the '-h ' switch to
> sudo doesn't do anything different...
> Still a bit kludgey, but at least it's simple...
>
>


Re: [BackupPC-users] Understanding options sent to rsync --server --sender

2019-05-05 Thread Craig Barratt via BackupPC-users
>
> - Is this just a concatenation of different command-line options sent to
> rsync?


Yes - rsync passes along short versions of the command-line arguments that
are relevant to the remote sender.  Many of them are based on the original
command-line arguments.  Excludes/includes are sent over the socket rather
than via the command line.

You can see the code that builds this option string in
rsync-3.1.3/options.c:server_options().

> If so, what is the meaning of the period?


It's not documented in the rsync man page, but from looking at the function
above it appears to be connected to the immediately prior flag "e".  This
isn't the usual "-e" option to rsync.  Rather, it's an internal-only option
that the server processes to set any potential rsync sub-protocol version;
"e." means the default.  However, I haven't looked at the code enough to
figure out where these command-line options are decoded.
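As a rough illustration of reading such a cluster (assuming, per the above, that everything before the internal "e" flag is just concatenated single-letter rsync options):

```shell
opts='-slHogDtpAXrcxe.iLsf'

core=${opts#-}        # strip the leading dash
core=${core%%e*}      # drop the internal "e..." sub-protocol suffix

# Expand each remaining letter into a separate short option for readability
flags=$(printf '%s' "$core" | sed 's/./-& /g')
echo "$flags"
```

Each letter then corresponds to a familiar option (e.g. -H for --hard-links, -A for --acls, -X for --xattrs), which can be checked against rsync's man page.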

Craig

On Thu, May 2, 2019 at 9:28 PM  wrote:

> When I look at the rsync process running on the client, I get:
>  /usr/bin/ssh -l root host /usr/bin/sudo /usr/bin/rsync --server
> --sender -slHogDtpAXrcxe.iLsf --iconv=UTF-8
>
> - What is the meaning of the (option?) string: -slHogDtpAXrcxe.iLsf
> - Is this just a concatenation of different command-line options sent
>  to rsync?
> - If so, what is the meaning of the period?
> - Also, are these options fixed or do they vary with the setting of
>   rsync args in config.pl?
>
>


Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-07 Thread Craig Barratt via BackupPC-users
What sort of filesystem is this?  Do those directory names contain
non-ascii characters?

However, the "file has vanished" error shouldn't occur on a directory, so
something strange is going on.

I'd recommend turning on additional debug in rsync (eg, add -vvv
to $Conf{RsyncArgs}, and also look at the --debug option) and looking in
the XferLOG file.  When the initial file list is sent, are those
directories and their contents present in the file list?

Craig

On Tue, May 7, 2019 at 4:26 PM Michael Stowe 
wrote:

> On 2019-05-07 13:39, Raman Gupta wrote:
>
> Certain directories (and their contents) on one of my hosts are not
> getting backed up at all, even with a “Full” backup.
>
> I use rsync as my Xfer method, with BackupPC 4.3.0 on Fedora (rpms
> BackupPC-4.3.0-1.fc29.x86_64, BackupPC-XS-0.58-1.fc29.x86_64).
>
> Looking at the backup logs, I see messages like the following related to
> the directories that are not being backed up:
>
> file has vanished: “/home/raman/x/y/a”
> file has vanished: “/home/raman/x/y/b”
> file has vanished: “/home/raman/x/y/c”
>
> I have other directories and files successfully backed up in
> “/home/raman/x/y”, but the directories “a”, “b”, and “c” (and their
> content) are not being backed up.
>
> Note that these files have *not* vanished — they are not ephemeral and
> they haven't been touched in days. For example:
>
>  File: /home/raman/x/y/a
>  Size: 4096          Blocks: 8          IO Block: 4096   directory
> Device: fd08h/64776d    Inode: 33037482    Links: 5
> Access: (0775/drwxrwxr-x)  Uid: ( 1000/ raman)   Gid: ( 1000/ raman)
> Context: unconfined_u:object_r:user_home_t:s0
> Access: 2019-05-07 05:05:17.288857497 -0400
> Modify: 2019-04-30 00:56:22.914849594 -0400
> Change: 2019-04-30 00:56:22.914849594 -0400
>  Birth: –
>
> Any idea what might be happening here?
>
> Regards, Raman
>
> “File has vanished” issues can be tricky to diagnose if the file appears
> to be there. What rsync is really telling you is that it built a file list,
> and some of the files or directories from that list are not accessible when
> it actually went to read them. Actually being deleted or ephemeral files
> are two reasons, but there are others, from filename encoding issues to
> inode changes to complications with remotely mounted filesystems to
> corruption issues to complex file permissions.
>
> While I might check the file's details both before and after the rsync run
> to look for changes, I recommend ensuring that these files are reliably
> accessible by the rsync user, check the logs for any problems, and working
> through filesystem issues. (XFS is notorious for this sort of thing.) Also,
> if the volume is anything other than a local mount, that's where I'd look
> first for issues; be aware that rsync's high read volume often exposes
> issues not evident under less stressful usage.


Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-07 Thread Craig Barratt via BackupPC-users
Thanks for confirming; it's not a charset issue then.

The empty /home due to --one-file-system from 2017 shouldn't make a
difference by now, but it's good to keep in mind.

When you look through the XferLOG file, please look out for any other
unexpected errors.  You could also increase the XferLogLevel (eg, to 5 or
6).

Also, if the backup size is large, you could create a new test host (use
ClientNameAlias to point it back to the right host) that just backs up
/home/raman/x or even /home/raman/x/y.  If the same problem happens, then
it will be much easier to browse shorter log files.  If the problem doesn't
happen in that case, then that's a useful clue too.
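A minimal sketch of such a test-host config (the host name "ramantest" and the real host name are hypothetical placeholders):

```perl
# In pc/ramantest.pl, after adding "ramantest" to the hosts file:
$Conf{ClientNameAlias} = 'realhost';             # point back to the real client
$Conf{RsyncShareName}  = ['/home/raman/x/y'];    # back up just the problem subtree
```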

Craig

On Tue, May 7, 2019 at 11:02 PM Raman Gupta  wrote:

> It is an ext4 filesystem. The directories are plain ASCII -- no
> strange characters in any way. The `rsync` process on the Linux client
> runs as root, and I have verified root has access to these files
> without any issue. There are plenty of inodes free (`df -i` shows that
> filesystem as only 5% used). There is no file corruption -- all the
> data is good.
>
> There is a seemingly weird coincidence here though... my backups have
> been running nightly for years. However, in late 2017 I updated
> BackupPC on my Fedora box, and `--one-file-system` was added to the
> rsync args without me realizing. This caused all backups from that
> point forward to a few days ago to be missing all the new files and
> modifications in `/home`, which is a mountpoint for a local LVM ext4
> partition. It seems that every file and directory that falls into this
> category are now failing with this "vanished" error. Is it possible
> that BackupPC is confused because it is expecting to find these files
> in prior backups?
>
> I will try the debugging you suggest.
>
> Regards,
> Raman
>
> On Wed, May 8, 2019 at 1:44 AM Craig Barratt via BackupPC-users
>  wrote:
> >
> > What sort of filesystem is this?  Do those directory names contain
> non-ascii characters?
> >
> > However, the "file has vanished" error shouldn't occur on a directory,
> so something strange is going on.
> >
> > I'd recommend turning on additional debug in rsync (eg, add -vvv to
> $Conf{RsyncArgs}, and also look at the --debug option) and looking in the
> XferLOG file.  When the initial file list is sent, are those directories
> and their contents present in the file list?
> >
> > Craig
> >
> > On Tue, May 7, 2019 at 4:26 PM Michael Stowe <
> michael.st...@member.mensa.org> wrote:
> >>
> >> On 2019-05-07 13:39, Raman Gupta wrote:
> >>
> >> Certain directories (and their contents) on one of my hosts are not
> getting backed up at all, even with a “Full” backup.
> >>
> >> I use rsync as my Xfer method, with BackupPC 4.3.0 on Fedora (rpms
> BackupPC-4.3.0-1.fc29.x86_64, BackupPC-XS-0.58-1.fc29.x86_64).
> >>
> >> Looking at the backup logs, I see messages like the following related
> to the directories that are not being backed up:
> >>
> >> file has vanished: “/home/raman/x/y/a” file has vanished:
> “/home/raman/x/y/b” file has vanished: “/home/raman/x/y/c”
> >>
> >> I have other directories and files successfully backed up in
> “/home/raman/x/y”, but the directories “a”, “b”, and “c” (and their
> content) are not being backed up.
> >>
> >> Note that these files have not vanished — they are not ephemeral and
> they haven't been touched in days. For example:
> >>
> >>  File: /home/raman/x/y/a
> >> Size: 4096Blocks: 8  IO Block: 4096   directory
> >>
> >> Device: fd08h/64776d Inode: 33037482 Links: 5 Access: (0775/drwxrwxr-x)
> Uid: ( 1000/ raman) Gid: ( 1000/ raman) Context:
> unconfined_u:object_r:user_home_t:s0 Access: 2019-05-07 05:05:17.288857497
> -0400 Modify: 2019-04-30 00:56:22.914849594 -0400 Change: 2019-04-30
> 00:56:22.914849594 -0400 Birth: –
> >>
> >> Any idea what might be happening here?
> >>
> >> Regards, Raman
> >>
> >> “File has vanished” issues can be tricky to diagnose if the file
> appears to be there. What rsync is really telling you is that it built a
> file list, and some of the files or directories from that list are not
> accessible when it actually went to read them. Actually being deleted or
> ephemeral files are two reasons, but there are others, from filename
> encoding issues to inode changes to complications with remotely mounted
> filesystems to corruption issues to complex file permissions.
> >>
> >> While I might check the file's details both before and after the rsync
>

Re: [BackupPC-users] Backups missing entire directories: "file has vanished"

2019-05-08 Thread Craig Barratt via BackupPC-users
man/x/y/2014,*,2)
> > G bpc_readdir -> 2011
> > G bpc_lstat(home/raman/x/y/2011)
> >
> > Lastly, I've tried a test of placing a new file into the 2015
> > directory, and this file does back up successfully. So it seems like
> > new files are captured, as long as they fall into an existing
> > directory, but new directories are not.
> >
> > One thing I did do on this server is cancel the first full backup
> > after removing `--one-file-system` and deleted the backup from the web
> > UI, whereas the other server completed its backup without
> > intervention. I wonder if the server pool was corrupted somehow by
> > doing that? Also, the `--one-file-system` was probably added at the
> > same time as v4 and the resulting pool migrations on subsequent
> > backups, which would have *removed* a lot of stuff that was being
> > backed up by v3. I have run a BackupPC_fsck and there were no errors.
> >
> > Regards,
> > Raman
> >
> > On Wed, May 8, 2019 at 2:11 AM Craig Barratt via BackupPC-users
> >  wrote:
> > >
> > > Thanks for confirming; it's not a charset issue then.
> > >
> > > The empty /home due to --one-file-system from 2017 shouldn't make a
> difference by now, but it's good to keep in mind.
> > >
> > > When you look through the XferLOG file, please look out for any other
> unexpected errors.  You could also increase the XferLogLevel (eg, to 5 or
> 6).
> > >
> > > Also, if the backup size is large, you could create a new test host
> (use ClientNameAlias to point it back to the right host) that just backs up
> /home/raman/x or even /home/raman/x/y.  If the same problem happens, then
> it will be much easier to browse shorter log files.  If the problem doesn't
> happen in that case, then that's a useful clue too.
> > >
> > > Craig
> > >
> > > On Tue, May 7, 2019 at 11:02 PM Raman Gupta 
> wrote:
> > >>
> > >> It is an ext4 filesystem. The directories are plain ASCII -- no
> > >> strange characters in any way. The `rsync` process on the Linux client
> > >> runs as root, and I have verified root has access to these files
> > >> without any issue. There are plenty of inodes free (`df -i` shows that
> > >> filesystem as only 5% used). There is no file corruption -- all the
> > >> data is good.
> > >>
> > >> There is a seemingly weird coincidence here though... my backups have
> > >> been running nightly for years. However, in late 2017 I updated
> > >> BackupPC on my Fedora box, and `--one-file-system` was added to the
> > >> rsync args without me realizing. This caused all backups from that
> > >> point forward to a few days ago to be missing all the new files and
> > >> modifications in `/home`, which is a mountpoint for a local LVM ext4
> > >> partition. It seems that every file and directory that falls into this
> > >> category are now failing with this "vanished" error. Is it possible
> > >> that BackupPC is confused because it is expecting to find these files
> > >> in prior backups?
> > >>
> > >> I will try the debugging you suggest.
> > >>
> > >> Regards,
> > >> Raman
> > >>
> > >> On Wed, May 8, 2019 at 1:44 AM Craig Barratt via BackupPC-users
> > >>  wrote:
> > >> >
> > >> > What sort of filesystem is this?  Do those directory names contain
> non-ascii characters?
> > >> >
> > >> > However, the "file has vanished" error shouldn't occur on a
> directory, so something strange is going on.
> > >> >
> > >> > I'd recommend turning on additional debug in rsync (eg, add -vvv to
> $Conf{RsyncArgs}, and also look at the --debug option) and looking in the
> XferLOG file.  When the initial file list is sent, are those directories
> and their contents present in the file list?
> > >> >
> > >> > Craig
> > >> >
> > >> > On Tue, May 7, 2019 at 4:26 PM Michael Stowe <
> michael.st...@member.mensa.org> wrote:
> > >> >>
> > >> >> On 2019-05-07 13:39, Raman Gupta wrote:
> > >> >>
> > >> >> Certain directories (and their contents) on one of my hosts are
> not getting backed up at all, even with a “Full” backup.
> > >> >>
> > >> >> I use rsync as my Xfer method, with BackupPC 4.3.0 on Fedora (rpms
> BackupPC-4.3.0-

Re: [BackupPC-users] Email configuration

2019-05-10 Thread Craig Barratt via BackupPC-users
BackupPC uses sendmail to send emails.  See the documentation for how to
test sending of emails.

Craig

On Wed, May 8, 2019 at 5:37 AM Gopal Dhapa  wrote:

> Hello Team,
> Please share link or document for email configuration in backuppc.
>
>
> Thank you
> Gopal Dhapa


Re: [BackupPC-users] Backuppc Upgrade document required version from 4.2.1 to 4.3.0

2019-05-28 Thread Craig Barratt via BackupPC-users
It mainly depends on how you installed 4.2.1 (eg, via a package manager or
from a released tarball?).

You will get more help from this list if you first show that you have made
a reasonable and sincere effort to solve the issue yourself (eg, by reading
the documentation, searching the mailing list, etc.).

Craig

On Tue, May 28, 2019 at 5:45 AM Sriram ARUN EXCELLO 
wrote:

> Dear Team,
>
> Kindly guide us how to upgrade the backuppc from version 4.2.1 to 4.3.0
>  please.
>
> Regards,
>
> Sriram.p


Re: [BackupPC-users] Failing incremental backups on Windows 10 (backuppc 3.x)

2019-06-23 Thread Craig Barratt via BackupPC-users
+1

Thanks for the detective work and the suggested fix.  Very interesting.

Craig

On Tue, Jun 4, 2019 at 5:58 PM Carl Soderstrom <
carl.soderst...@real-time.com> wrote:

> On 06/04 01:34 , Alan Ballard wrote:
> > Running a packet trace with WireShark was more informative.  It showed
> the
> > Windows client getting transmission timeouts and retransmitting; then
> after
> > 5 retransmits terminating the connection.  An interesting thing I noticed
> > was that the timeout interval was quite short... about 300msec.  It turns
> > out that Windows tunes the timeout based on the observed response on the
> > connection.  In this case, at the start of the rsync operation there are
> > many short fast interactions, then the server bogs down as it does the
> delta
> > copies of a large file.
>
> Huh, that's really interesting. Thanks for the troubleshooting!
>
> --
> Carl Soderstrom
> Systems Administrator
> Real-Time Enterprises
> www.real-time.com
>
>


Re: [BackupPC-users] changing hostname in /etc/backuppc/hosts

2019-06-23 Thread Craig Barratt via BackupPC-users
You definitely shouldn't do that.  BackupPC isn't aware that they point to
the same real host.  Currently you have 6 "different" hosts, each with
their own configuration and backups.

With your proposed change, only one host ("A") would be visible (duplicate
lines will be ignored), and reference counts for the missing hosts would
not be included, which would cause pool files to be incorrectly purged.

If you no longer need the different backup schedules, you could disable
backups on that particular "host", but you should still keep it around
until its backups have all expired or been deleted.

Craig

On Sun, Jun 9, 2019 at 6:09 AM daggs  wrote:

> Greetings,
>
> I have various entries in /etc/backuppc/hosts that differs in name but are
> pointing to the same host, I need such separation for different backup
> scheme, what will be the outcome if I change all the hosts that are
> basically the same system into one name? e.g.
> A1 0 root
> A2 0 root
> A3 0 root
> A4 0 root
> A5 0 root
> A 0 root
>
> into:
> A 0 root
> A 0 root
> A 0 root
> A 0 root
> A 0 root
> A 0 root
>
> thanks,
>
> Dagg.
>
>


Re: [BackupPC-users] restore issue on backuppc 4.1.5

2019-06-23 Thread Craig Barratt via BackupPC-users
I'd recommend downloading a tar or zip file instead, and then using native
Windows tools to extract it in the place you wish.
Craig

On Mon, Jun 10, 2019 at 3:24 AM Philip Parsons (Velindre - Medical Physics)
 wrote:

> Hi Michael,
>
>
>
> Thanks for that.  I’ve got that working.
>
>
>
> However, I appear to be back to a similar error as before.
>
>
>
> Running: /usr/bin/smbclient restoredestination\\D\$ -U account -E -d
> 1 -c tarmode\ full -mSMB3 -Tx -
>
> Running: /usr/local/BackupPC/bin/BackupPC_tarCreate -h restoresource -n
> 513 -s sharename -t /foldername
>
> Xfer PIDs are now 143159,143160
>
> WARNING: The "syslog" option is deprecated
>
> [ skipped 1 lines ]
>
> tar:316  tarmode is now full, system, hidden, noreset, quiet
>
> tar:1597 Can't mkdir foldername: NT_STATUS_OBJECT_NAME_COLLISION
>
> readOutput: sysread returns 0 and got EOF (exit ok = , )
>
> XferErr Non-zero exit status from smbclient
>
> restore failed: BackupPC_tarCreate failed
>
>
>
> It appears to create foldername in the share, but then stops.
>
>
>
> Thanks again for all your help.
>
>
>
> Thanks,
>
> Phil
>
>
>
>
>
> *From:* Michael Stowe [mailto:michael.st...@member.mensa.org]
> *Sent:* 04 June 2019 20:29
> *To:* Philip Parsons (Velindre - Medical Physics) <
> philip.pars...@wales.nhs.uk>
> *Cc:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Subject:* Re: [BackupPC-users] restore issue on backuppc 4.1.5
>
>
>
> On 2019-06-04 07:33, Philip Parsons (Velindre - Medical Physics) wrote:
>
> Thanks for the reply Michael.
>
>
>
> I’ve tried to restore to another share on another server and am getting
> NT_STATUS_BAD_NETWORK_NAME
>
>
>
> I don’t want to overwrite the existing data, I just want to be able to
> download data to another share so that I can QA our backups.
>
>
>
> Running: /usr/bin/smbclient destinationserver\\share -U domainaccount
> -E -d 1 -c tarmode\ full -mSMB3 -Tx -
>
> Running: /usr/local/BackupPC/bin/BackupPC_tarCreate -h sourceserver -n 510
> -s sourceshare -t /sourcefolder
>
> Xfer PIDs are now 104823,104824
>
> WARNING: The "syslog" option is deprecated
>
> [ skipped 1 lines ]
>
> This backup will fail because: tree connect failed:
> NT_STATUS_BAD_NETWORK_NAME
>
> tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
> [ skipped 1 lines ]
>
> tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
> readOutput: sysread returns 0 and got EOF (exit ok = , )
>
> XferErr Non-zero exit status from smbclient
>
> restore failed: tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
>
>
>
>
> thanks again for any advice!
>
> Phil
>
> Samba errors are maddeningly unspecific.  In this case, the error
> translates to "something about that share ain't working for me."  It could
> be anything from the share name being misspelled, or the wrong part of the
> tree, or permission errors.
>
> At any rate, the simplest way to diagnose this is to start with what the
> share names are:
>
> smbclient -L //destinationserver
>
> This should give you a list of valid shares -- if the share name you tried
> doesn't exactly match one of the shares in that list, voilà, that is the
> trouble.  (N.B. I do mean *exactly*.  There's no support for joining the
> tree at a different point in the hierarchy.)
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>


Re: [BackupPC-users] Problem with multibyte character

2019-06-23 Thread Craig Barratt via BackupPC-users
The error you are getting happens when rsync_bpc tries to use iconv to
convert the client file name encoding to utf8 (which is the native format
that BackupPC uses on the server).

What is $Conf{ClientCharset} set to for this host?  If it is empty then
rsync_bpc should skip any charset conversion, so I assume it's not empty.
Next, it would appear that 'r'$'\351''ponse.doc' is not a validly encoded
name in that charset.
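That failure is easy to reproduce outside BackupPC. A minimal shell sketch using the file name from this thread (the 0xE9 byte is latin-1 e-acute, which is not a valid UTF-8 sequence):

```shell
# Build the problem name byte-for-byte, then run it through the same
# utf8-to-utf8 conversion that rsync_bpc's iconv step performs.
name=$(printf 'r\351ponse.doc')    # \351 octal = 0xE9, latin-1 e-acute
if printf '%s' "$name" | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1; then
    echo "valid UTF-8"
else
    echo "invalid UTF-8: iconv rejects this name"
fi
```

With $Conf{ClientCharset} empty the conversion step is skipped entirely, which is why that setting avoids the error.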

In BackupPC 3.x, the xfer methods didn't support any charset conversion
(you could, for example, have smbclient do it instead).

Craig

On Sat, Jun 15, 2019 at 8:56 PM  wrote:

> Michael Stowe wrote at about 03:35:13 + on Sunday, June 16, 2019:
>  > On 2019-06-15 19:20, backu...@kosowsky.org wrote:
>  > > I am running backuppc 4.3.0 on Linux Ubuntu 18.04
>  > >
>  > > I have a file copied over from an old Windows installation to my Linux
>  > > server with name:
>  > >
>  > > -rwx-- 1 user1 user1  28672 May 11  2005 'r'$'\351''ponse.doc'
>  > >
>  > > This file gives an error under backuppc as follows:
>  > >  [sender] cannot convert filename:
>  > >  user1/docs/r\#351ponse.doc (Invalid or incomplete multibyte or
>  > > wide character)
>  > >
>  > > Note that rsync itself has no trouble copying this file when I run
>  > > rsync manually.
>  > >
>  > > Also, interestingly, this same file backed up fine on my old Fedora
> 12
>  > > server
>  > > running backuppc 3.x.
>  > >
>  > > Any ideas why backuppc is having trouble with this file?
>  >
>  > Almost definitely because the encoding is invalid relative to the
>  > locale.  The simplest thing to do is to rename the file so that it's a
>  > valid filename.
>
> I could do that of course, but i would like to "fix" it if possible.
> If the native filesystem and 'rsync' are both able to deal with the
> file, it seems to me that backuppc should also be able to deal with
> it.
>
> Backup should be maximally tolerant so as to duplicate the files as
> permissively as possible...
>
>


Re: [BackupPC-users] Backup errors...

2019-06-23 Thread Craig Barratt via BackupPC-users
Jeff,

I'd like to get to the bottom of this.  Let's work on this off-list.

My first set of questions:

   - is this during a full or incremental backup?
   - can you send me an XferLOG file please?
   - what's in the attrib file in /etc/apt/apt.conf.d or
   /usr/lib/cgi-bin/nut on the most recent backup (eg, use
   BackupPC_attribPrint)?

Craig

On Sun, Jun 16, 2019 at 3:44 PM  wrote:

> backu...@kosowsky.org wrote at about 22:09:29 -0400 on Saturday, June 15,
> 2019:
>  > I am using backuppc 4.3.0 to backup an Ubuntu 18.04 Linux server using
>  > rsync.
>  >
>  > I get the following (unexplainable) rsync errors:
>  >
>  > [1] file has vanished: "/etc/apt/apt.conf.d/20archive.~1~
>  >
>  > This file which seemingly refers to an emacs backup file, has not
>  > existed on my server for months, Since I have neither touched nor
>  > edited 20archive in months, I don't know why backuppc is finding
>  > it and then saying it vanishes.
>  >
>
> I'm not sure how the above even happens, given that:
> 1. 20archive.~1~ is nowhere on my existing filesystem (and probably
>only existed transiently a few months back while I was editing the
>20archive file, since I typically erase the emacs backups right away).
>
>I can verify this by running 'find' or 'ls' as user root.
>Similarly, for 'locate' and 'lsof'.
>The file simply doesn't exist!
>
> 2. 20archive.~1~ doesn't appear in any prior backup.
>So I'm not even sure how backuppc can know about it, as its only
>existence seems to be in the strange error log.
>
>
> So how the heck does backuppc find a non-existent file even before
> it goes on to claim that it has "vanished"?
> Where could it possibly be finding this long ago deleted file?
>
>
> This is almost spooky...
>
>


Re: [BackupPC-users] @args not used in cmdExecOrEval

2019-06-23 Thread Craig Barratt via BackupPC-users
Jeff,

You are correct that the $vars and %Conf options are missing.  I just
committed a fix.

Craig

On Sat, May 18, 2019 at 9:29 PM  wrote:

> backu...@kosowsky.org wrote at about 23:19:58 -0400 on Saturday, May 18,
> 2019:
>  > Seems like @args is passed to cmdExecOrEval (e.g., when eval'ing
>  > Pre/Pos commands) but not used in the function itself.
>  > Same is true for the analogous cmdSystemOrEvalLong command.
>  >
>  >
>  > # Exec or eval a command.  $cmd is either a string or an array ref.
>  > #
>  > # @args are optional arguments for the eval() case; they are not used
>  > # for exec().
>  > #
>  > sub cmdExecOrEval
>  > {
>  > my($bpc, $cmd, @args) = @_;
>  >
>  > if ( (ref($cmd) eq "ARRAY" ? $cmd->[0] : $cmd) =~ /^\&/ ) {
>  > $cmd = join(" ", @$cmd) if ( ref($cmd) eq "ARRAY" );
>  >  print(STDERR "cmdExecOrEval: about to eval perl code $cmd\n")
>  >  if ( $bpc->{verbose} );
>  > eval($cmd);
>  > print(STDERR "Perl code fragment for exec shouldn't
> return!!\n");
>  > POSIX::_exit(1);
>  > } else {
>  > $cmd = [split(/\s+/, $cmd)] if ( ref($cmd) ne "ARRAY" );
>  >  print(STDERR "cmdExecOrEval: about to exec ",
>  >$bpc->execCmd2ShellCmd(@$cmd), "\n")
>  >  if ( $bpc->{verbose} );
>  >  alarm(0);
>  >  $cmd = [map { m/(.*)/ } @$cmd]; # untaint
>  >  #
>  >  # force list-form of exec(), ie: no shell even for 1 arg
>  >  #
>  > exec { $cmd->[0] } @$cmd;
>  > print(STDERR "Exec failed for @$cmd\n");
>  > POSIX::_exit(1);
>  > }
>  > }
>  >
>
> Note that the comment in the following code snippet from the subroutine
> 'UserCommandRun' in BackupPC_dump is confusing:
>
> #
> # Run the user's command, dumping the stdout/stderr into the
> # Xfer log file.  Also supply the optional $vars and %Conf in
> # case the command is really perl code instead of a shell
> # command.
> #
> $bpc->cmdSystemOrEval($cmd,
> sub {
> $XferLOG->write(\$_[0]);
> print($LogFd $bpc->timeStamp, "Output from $cmdType: ",
> $_[0]);
> });
>
> Specifically, where and how are the variables $vars and %Conf
> supplied?
> Indeed these are exactly the variables I am trying to reference but
> they are neither passed in the above code nor are they inherited (as
> per my previous post).
>
> Note that even if $vars and %Conf were passed as the 3rd argument to
> $bpc->cmdSystemOrEval, the arguments would be ignored when passed on
> to cmdSystemOrEvalLong as per my earlier post...
>
>
>


Re: [BackupPC-users] Problem with multibyte character

2019-06-24 Thread Craig Barratt via BackupPC-users
Jeff,

I suspect you'll get the same error if you use native rsync and tell it to
convert utf8 to utf8 (eg, add '--iconv=utf8,utf8' to your native rsync
command).

Since you don't need any conversion, you should set $Conf{ClientCharset} to
an empty string.  That should cause rsync to no longer attempt any charset
conversion.
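In the per-host config file, that is (standard BackupPC v4 config syntax):

```
$Conf{ClientCharset} = '';    # empty: rsync_bpc skips iconv conversion
```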

Craig

On Sun, Jun 23, 2019 at 6:13 PM  wrote:

> Craig Barratt via BackupPC-users wrote at about 13:59:46 -0700 on Sunday,
> June 23, 2019:
>  > The error you are getting is when rsync_bpc is trying to use iconv to
>  > convert the client file name encoding to utf8 (which is the native
> format
>  > that BackupPC on the server).
>  >
>  > What is $Conf{ClientCharset} set to for this host?  If it is empty then
>  > rsync_bpc should skip any charset conversion, so I assume it's not
>  > empty.
>  I have it set to UTF-8
>  In my notes I have that this should be set to the output of the
>  command 'locale charmap' which is UTF-8 on Ubuntu 18.04 -- but I
>  can't remember where/why I determined that.
>
>  > Next, it would appear that 'r'$'\351''ponse.doc' is not a valid encoded
>  > name in that Charset.
>
> Well, when I type 'ls' on the server, it shows the file as:
>   -rwx-- 1 user1 user1  28672 May 11  2005 'r'$'\351''ponse.doc'
>
> Indeed when I run:
>echo 'r'$'\351''ponse.doc' | iconv -f UTF-8 -t UTF-8
> I get:
> riconv: illegal input sequence at position 1
> Not sure why this happens if I have UTF-8 as my 'locale charmap'
>
> Even so, shouldn't backuppc treat such an error more gracefully.
> For example,
> - If iconv gives an error, try without charset conversion (like
>   BackupPC 3.x)
>
> At least this would be better than the current setup where lack of
> charset conversion causes the file to fail to be backed up at all -
> which in many ways is the worst thing possible...
>
> >
>  > In BackupPC 3.x, the xfer methods didn't support any charset conversion
>  > (you could, for example, have smbclient do it instead).
>  >
>  > Craig
>  >
>  > On Sat, Jun 15, 2019 at 8:56 PM  wrote:
>  >
>  > > Michael Stowe wrote at about 03:35:13 + on Sunday, June 16, 2019:
>  > >  > On 2019-06-15 19:20, backu...@kosowsky.org wrote:
>  > >  > > I am running backuppc 4.3.0 on Linux Ubuntu 18.04
>  > >  > >
>  > >  > > I have a file copied over from an old Windows installation to my
> Linux
>  > >  > > server with name:
>  > >  > >
>  > >  > > -rwx-- 1 user1 user1  28672 May 11  2005
> 'r'$'\351''ponse.doc'
>  > >  > >
>  > >  > > This file gives an error under backuppc as follows:
>  > >  > >  [sender] cannot convert filename:
>  > >  > >  user1/docs/r\#351ponse.doc (Invalid or incomplete multibyte
> or
>  > >  > > wide character)
>  > >  > >
>  > >  > > Note that rsync itself has no trouble copying this file when I
> run
>  > >  > > rsync manually.
>  > >  > >
>  > >  > > Also, interestingly, this same file backed up fine on my old
> Fedora
>  > > 12
>  > >  > > server
>  > >  > > running backuppc 3.x.
>  > >  > >
>  > >  > > Any ideas why backuppc is having trouble with this file?
>  > >  >
>  > >  > Almost definitely because the encoding is invalid relative to the
>  > >  > locale.  The simplest thing to do is to rename the file so that
> it's a
>  > >  > valid filename.
>  > >
>  > > I could do that of course, but i would like to "fix" it if possible.
>  > > If the native filesystem and 'rsync' are both able to deal with the
>  > > file, it seems to me that backuppc should also be able to deal with
>  > > it.
>  > >
>  > > Backup should be maximally tolerant so as to duplicate the files as
>  > > permissively as possible...
>  > >
>  > >


Re: [BackupPC-users] Problem with multibyte character

2019-06-24 Thread Craig Barratt via BackupPC-users
Jeff,

I don't think it's the same issue, even though the final state looks
similar.  In this case, since the file name isn't encoded correctly, I
suspect that when rsync goes back to actually open it, it does appear to be
missing since the file name byte string doesn't point to the file (ie, it
(incorrectly) looks like the file doesn't exist).

Let me try to re-create this (should be easy) and I'll try some experiments
to see if there is a sensible behavior that is easy to implement.

Craig

On Mon, Jun 24, 2019 at 10:46 PM  wrote:

> Hi Craig,
>
> I just tried a backup with:
>   $Conf{ClientCharset} = '';
>
> But now instead of the previous error:
> [sender] cannot convert filename: user1/docs/r\#351ponse.doc (Invalid
> or incomplete multibyte or wide character)
> I now get the (familiar?) error:
> rsync_bpc: fstat "user1/docs/r\#351ponse.doc" failed: No such file or
> directory (2)
>
> I suspect this is due to the same/similar bug I have been chasing down with
> you regarding the handling of deleted (or spuriously deleted) files.
>
> Indeed, attribPrint shows:
> $VAR1 = {
>   'r�ponse.doc' => {
> 'compress' => 3,
> 'digest' => '',
> 'gid' => 0,
> 'inode' => 0,
> 'mode' => 0,
> 'mtime' => 0,
> 'name' => 'r�ponse.doc',
> 'nlinks' => 0,
> 'size' => 0,
> 'type' => 10,
> 'uid' => 0
> }   }
> };
>
> Craig Barratt wrote at about 21:43:38 -0700 on Monday, June 24, 2019:
>  > Jeff,
>  >
>  > I suspect you'll get the same error if you use native rsync and tell it
> to
>  > convert utf8 to utf8 (eg, add '--iconv=utf8,utf8' to your native rsync
>  > command).
>  >
>  > Since you don't need any conversion, you should set
> $Conf{ClientCharset} to
>  > an empty string.  That should cause rsync to no longer attempt any
> charset
>  > conversion.
>  >
>  > Craig
>  >
>  > On Sun, Jun 23, 2019 at 6:13 PM  wrote:
>  >
>  > > Craig Barratt via BackupPC-users wrote at about 13:59:46 -0700 on
> Sunday,
>  > > June 23, 2019:
>  > >  > The error you are getting is when rsync_bpc is trying to use iconv
> to
>  > >  > convert the client file name encoding to utf8 (which is the native
>  > > format
>  > >  > that BackupPC on the server).
>  > >  >
>  > >  > What is $Conf{ClientCharset} set to for this host?  If it is empty
> then
>  > >  > rsync_bpc should skip any charset conversion, so I assume it's not
>  > >  > empty.
>  > >  I have it set to UTF-8
>  > >  In my notes I have that this should be set to the output of the
>  > >  command 'locale charmap' which is UTF-8 on Ubuntu 18.04 -- but I
>  > >  can't remember where/why I determined that.
>  > >
>  > >  > Next, it would appear that 'r'$'\351''ponse.doc' is not a valid
> encoded
>  > >  > name in that Charset.
>  > >
>  > > Well, when I type 'ls' on the server, it shows the file as:
>  > >   -rwx-- 1 user1 user1  28672 May 11  2005 'r'$'\351''ponse.doc'
>  > >
>  > > Indeed when I run:
>  > >echo 'r'$'\351''ponse.doc' | iconv -f UTF-8 -t UTF-8
>  > > I get:
>  > > riconv: illegal input sequence at position 1
>  > > Not sure why this happens if I have UTF-8 as my 'locale charmap'
>  > >
>  > > Even so, shouldn't backuppc treat such an error more gracefully.
>  > > For example,
>  > > - If iconv gives an error, try without charset conversion (like
>  > >   BackupPC 3.x)
>  > >
>  > > At least this would be better than the current setup where lack of
>  > > charset conversion causes the file to fail to be backed up at all -
>  > > which in many ways is the worst thing possible...
>  > >
>  > > >
>  > >  > In BackupPC 3.x, the xfer methods didn't support any charset
> conversion
>  > >  > (you could, for example, have smbclient do it instead).
>  > >  >
>  > >  > Craig
>  > >  >
>  > >  > On Sat, Jun 15, 2019 at 8:56 PM  wrote:
>  > >  >
>  > >  > > Mic

Re: [BackupPC-users] restore issue on backuppc 4.1.5

2019-06-25 Thread Craig Barratt via BackupPC-users
If you run BackupPC_tarCreate or BackupPC_zipCreate there shouldn't be any
file size limits.

If  you download a tar or zip archive from the CGI interface, the browser
might impose some limit, but I'm not sure.

Craig

On Mon, Jun 24, 2019 at 4:57 AM Philip Parsons (Velindre - Medical Physics)
 wrote:

> Hi Craig,
>
>
>
> Thanks for your reply.
>
>
>
> Is there a file size limit on the output zip or tar from BackupPC?
>
>
>
> It’s a bit odd that it’s able to create the folder it’s going to write to
> and then just stops.
>
>
>
> Thanks,
>
> Phil
>
>
>
> *From:* Craig Barratt via BackupPC-users [mailto:
> backuppc-users@lists.sourceforge.net]
> *Sent:* 23 June 2019 19:58
> *To:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Cc:* Craig Barratt 
> *Subject:* Re: [BackupPC-users] restore issue on backuppc 4.1.5
>
>
>
> I'd recommend downloading a tar or zip file instead, and then use native
> windows tools for extracting them in the place you wish.
>
>
> Craig
>
>
>
> On Mon, Jun 10, 2019 at 3:24 AM Philip Parsons (Velindre - Medical
> Physics)  wrote:
>
> Hi Michael,
>
>
>
> Thanks for that.  I’ve got that working.
>
>
>
> However, I appear to be back to a similar error as before.
>
>
>
> Running: /usr/bin/smbclient restoredestination\\D\$ -U account -E -d
> 1 -c tarmode\ full -mSMB3 -Tx -
>
> Running: /usr/local/BackupPC/bin/BackupPC_tarCreate -h restoresource -n
> 513 -s sharename -t /foldername
>
> Xfer PIDs are now 143159,143160
>
> WARNING: The "syslog" option is deprecated
>
> [ skipped 1 lines ]
>
> tar:316  tarmode is now full, system, hidden, noreset, quiet
>
> tar:1597 Can't mkdir foldername: NT_STATUS_OBJECT_NAME_COLLISION
>
> readOutput: sysread returns 0 and got EOF (exit ok = , )
>
> XferErr Non-zero exit status from smbclient
>
> restore failed: BackupPC_tarCreate failed
>
>
>
> It appears to create foldername in the share, but then stops.
>
>
>
> Thanks again for all your help.
>
>
>
> Thanks,
>
> Phil
>
>
>
>
>
> *From:* Michael Stowe [mailto:michael.st...@member.mensa.org]
> *Sent:* 04 June 2019 20:29
> *To:* Philip Parsons (Velindre - Medical Physics) <
> philip.pars...@wales.nhs.uk>
> *Cc:* General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> *Subject:* Re: [BackupPC-users] restore issue on backuppc 4.1.5
>
>
>
> On 2019-06-04 07:33, Philip Parsons (Velindre - Medical Physics) wrote:
>
> Thanks for the reply Michael.
>
>
>
> I’ve tried to restore to another share on another server and am getting
> NT_STATUS_BAD_NETWORK_NAME
>
>
>
> I don’t want to overwrite the existing data, I just want to be able to
> download data to another share so that I can QA our backups.
>
>
>
> Running: /usr/bin/smbclient destinationserver\\share -U domainaccount
> -E -d 1 -c tarmode\ full -mSMB3 -Tx -
>
> Running: /usr/local/BackupPC/bin/BackupPC_tarCreate -h sourceserver -n 510
> -s sourceshare -t /sourcefolder
>
> Xfer PIDs are now 104823,104824
>
> WARNING: The "syslog" option is deprecated
>
> [ skipped 1 lines ]
>
> This backup will fail because: tree connect failed:
> NT_STATUS_BAD_NETWORK_NAME
>
> tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
> [ skipped 1 lines ]
>
> tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
> readOutput: sysread returns 0 and got EOF (exit ok = , )
>
> XferErr Non-zero exit status from smbclient
>
> restore failed: tree connect failed: NT_STATUS_BAD_NETWORK_NAME
>
>
>
>
>
> thanks again for any advice!
>
> Phil
>
> Samba errors are maddeningly unspecific.  In this case, the error
> translates to "something about that share ain't working for me."  It could
> be anything from the share name being misspelled to the wrong part of the
> tree to permission errors.
>
> At any rate, the simplest way to diagnose this is to start with what the
> share names are:
>
> smbclient -L //destinationserver
>
> This should give you a list of valid shares -- if the sharename you tried
> doesn't exactly match one of the shares in that list, voilà, that is the
> trouble.  (N.B. I do mean *exactly.*  There's no support for joining the
> tree at a different point in the hierarchy.)
>

Re: [BackupPC-users] Files not taken from pool even when they are identical

2019-06-30 Thread Craig Barratt via BackupPC-users
Jean-Louis,

What version of BackupPC are you running?

Craig

On Fri, Jun 28, 2019 at 5:11 AM Jean-Louis Biasini via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Hi all,
>
> I have a working installation on a CentOS 7 server. I'm backing up 30+
> Linux server hosts with rsync. Since I was a bit surprised by the
> growing space used on the backup server, I started to investigate the
> pooling mechanism. I created simple identical text files on two servers
> to see whether, after backing up the first server, the file would show
> as already in the pool when backing up the second. First I created the
> same text file twice (i.e. same content, same md5sum, same permissions,
> same SELinux context, but different creation times); second, I created
> it on the first server and then rsynced it to the second to make it
> absolutely identical (rsync -aAXv). In both cases the file is shown as
> newly created by the second server's backup. Then I tried a bigger file
> created with fallocate -l 2M test.img and rsynced it the same way. In
> all cases my file is created again in the second backup. I also checked
> the hard-link limit, which I increased from 32000 to 64000 (ext4
> filesystem here), with no improvement. Am I missing something?
>
> Many thanks
>
> jean-louis
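A side note on the hard-link limit mentioned above: as far as I understand, that limit only matters for the v3 pool, which deduplicated by hard-linking into the pool; the v4 pool references files by content digest in attrib files, so raising the ext4 link limit should have no effect on v4 pooling. Link counts themselves are easy to inspect:

```shell
# Hard-link a temp file once and read its link count (st_nlink).
f=$(mktemp)
ln "$f" "$f.link"
stat -c %h "$f"        # prints 2: the file now has two directory entries
rm -f "$f" "$f.link"
```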
>
>
>


Re: [BackupPC-users] Problem with multibyte character

2019-07-05 Thread Craig Barratt via BackupPC-users
Jeff,

I tried recreating this but wasn't able to get the error you reported.  The
file name I created piped to "od -c" prints  'r 351   p   o   n   s   e   .
  d   o   c' (the 351 is octal, hex e9).

When I set ClientCharset to utf8, I do indeed get your first error (cannot
convert filename).  But when ClientCharset is empty, it backs up correctly
without an error message.  I tried this with rsync_bpc 3.0.9.13 and
rsync_bpc 3.1.2.0 (although these have the BPC_FTYPE_DELETED fix
<https://github.com/backuppc/rsync-bpc/commit/07b18a3469caefc72c01546515973dd1dc21deb4>
I committed last week).

So maybe this indeed is another symptom of the other issue, as you first
suggested.

Craig

On Tue, Jun 25, 2019 at 8:07 AM  wrote:

> Craig Barratt wrote at about 23:00:28 -0700 on Monday, June 24, 2019:
>  > Jeff,
>  >
>  > I don't think it's the same issue, even though the final state looks
>  > similar.  In this case, since the file name isn't encoded correctly, I
>  > suspect that when rsync goes back to actually open it, it does appear
> to be
>  > missing since the file name byte string doesn't point to the file (ie,
> it
>  > (incorrectly) looks like the file doesn't exist).
>
> Note that when I manually run  'rsync -av' (without iconv) on either the
> file itself or the
> directory containing it, the transfer occurs just fine.
>
>  > Let me try to re-create this (should be easy) and I'll try some
> experiments
>  > to see if there is a sensible behavior that is easy to implement.
>  >
>  > Craig
>  >
>  > On Mon, Jun 24, 2019 at 10:46 PM  wrote:
>  >
>  > > Hi Craig,
>  > >
>  > > I just tried a backup with:
>  > >   $Conf{ClientCharset} = '';
>  > >
>  > > But now instead of the previous error:
>  > > [sender] cannot convert filename: user1/docs/r\#351ponse.doc
> (Invalid
>  > > or incomplete multibyte or wide character)
>  > > I now get the (familiar?) error:
>  > > rsync_bpc: fstat "user1/docs/r\#351ponse.doc" failed: No such
> file or
>  > > directory (2)
>  > >
>  > > I suspect this is due to the same/similar bug I have been chasing
> down with
>  > > you regarding the handling of deleted (or spuriously deleted) files.
>  > >
>  > > Indeed, attribPrint shows:
>  > > $VAR1 = {
>  > >   'r�ponse.doc' => {
>  > > 'compress' => 3,
>  > > 'digest' => '',
>  > > 'gid' => 0,
>  > > 'inode' => 0,
>  > > 'mode' => 0,
>  > > 'mtime' => 0,
>  > > 'name' => 'r�ponse.doc',
>  > > 'nlinks' => 0,
>  > > 'size' => 0,
>  > > 'type' => 10,
>  > > 'uid' => 0
>  > > }   }
>  > >             };
>  > >
>  > > Craig Barratt wrote at about 21:43:38 -0700 on Monday, June 24, 2019:
>  > >  > Jeff,
>  > >  >
>  > >  > I suspect you'll get the same error if you use native rsync and
> tell it
>  > > to
>  > >  > convert utf8 to utf8 (eg, add '--iconv=utf8,utf8' to your native
> rsync
>  > >  > command).
>  > >  >
>  > >  > Since you don't need any conversion, you should set
>  > > $Conf{ClientCharset} to
>  > >  > an empty string.  That should cause rsync to no longer attempt any
>  > > charset
>  > >  > conversion.
>  > >  >
>  > >  > Craig
>  > >  >
>  > >  > On Sun, Jun 23, 2019 at 6:13 PM  wrote:
>  > >  >
>  > >  > > Craig Barratt via BackupPC-users wrote at about 13:59:46 -0700 on
>  > > Sunday,
>  > >  > > June 23, 2019:
>  > >  > >  > The error you are getting is when rsync_bpc is trying to use
> iconv
>  > > to
>  > >  > >  > convert the client file name encoding to utf8 (which is the
> native
>  > >  > > format
>  > >  > >  > that BackupPC on the server).
>  > >  > >  >
>  > >  > >  > What is $Conf{ClientCharset} set to for this host?  If it is
> empty
>  > > then
>  > >  > >  > rsync_bpc should skip any charset conversion, so I assume
> it's not
>  > >  > >  > empty.
>  > >  >

Re: [BackupPC-users] rsync backup permissions

2019-07-06 Thread Craig Barratt via BackupPC-users
Is rsync running as root on the client side?  If not, it won't have
permission to chown files to root.  An example setting would be:

$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';


This requires the backuppc user to have sudo privileges.  You should
exercise some caution in doing this, since a successful attack on the
BackupPC server gives unlimited access to the clients.
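A sketch of the client-side sudoers entry this implies (the user name, file location, and rsync path are assumptions; restricting the allowed command line further is wise):

```
# /etc/sudoers.d/backuppc on the client
backuppc ALL=(root) NOPASSWD: /usr/bin/rsync
```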

Craig

On Sat, Jul 6, 2019 at 6:30 PM David Metcalf  wrote:

> Hi,
>
>
>
> New Backup PC user here.  I just ran a backup/restore test of a Linux
> client using the rsync backup method.  The restore worked but the file was
> not owned by root but rather the backuppc user.  Is there a way to change
> this behavior so that restored files retain the original ownership?
>
>
>
> Dave
>
>
>
>


Re: [BackupPC-users] Backup fails after three days, possibly millions of files

2019-07-09 Thread Craig Barratt via BackupPC-users
In addition to the helpful suggestions, are you sure you are not backing up
any sparse or virtual files (eg, /var/log/lastlog, /proc etc)?
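One quick way to spot sparse files is to compare allocated blocks with apparent size; a self-contained sketch (GNU stat assumed):

```shell
# Create a file that is one big hole, then test for sparseness:
# st_blocks * 512 (allocated) < st_size (apparent) means sparse.
f=$(mktemp)
truncate -s 10M "$f"                  # 10 MiB apparent, ~0 allocated
alloc=$(( $(stat -c %b "$f") * 512 ))
size=$(stat -c %s "$f")
if [ "$alloc" -lt "$size" ]; then
    echo "sparse"
else
    echo "not sparse"
fi
rm -f "$f"
```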

Craig

On Tue, Jul 9, 2019 at 5:43 AM G.W. Haywood via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Hi there,
>
> On Mon, 8 Jul 2019, David Koski wrote:
>
> > I am trying to back up about 24TB of data that has millions of files.
> > It takes a day or two before it starts backing up and then stops ...
>
> Your post is a little light on detail.
>
> BackupPC installations routinely back up that kind of data volume.
>
> My own systems are relatively smaller.  At my place of business I have
> for example a dozen Linux hosts and about 14TB backed up nightly using
> BackupPC V4.  An incremental backup takes between a couple of minutes
> and 20 or 25 minutes, depending on machine.  Anything between a couple
> of hundred thousand to a million files.  For the machine with the most
> data to be backed up, a full can take 6 - 7 hours.  Most of these are
> fairly capable machines with 6Gb/s SATA discs (no RAID), 1Gb/s network.
>
> In a case like yours I'd try to use the Razor technique, which means
> try to split the job into smaller pieces.  There are many ways to do
> it but without more information from you I hesitate.  The performance
> of a filesystem can leave something to be desired when an individual
> directory can contain hundreds of thousands or millions of files, but
> it very much depends on the filesystem.
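One concrete way to apply that advice with the rsync method is to split a single huge share into several smaller ones, so each job runs (and fails or retries) independently; the paths below are invented examples, not from the original post:

```
$Conf{RsyncShareName} = ['/data/projects', '/data/archive', '/home'];
```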
>
> HTH
>
> --
>
> 73,
> Ged.
>
>


[BackupPC-users] BackupPC 4.3.1 released

2019-07-14 Thread Craig Barratt via BackupPC-users
BackupPC 4.3.1 has been released on Github.  This release has a number of
bug fixes.

In addition, rsync-bpc 3.0.9.14 and rsync-bpc 3.1.2.1 have been released.
Upgrading is recommended.

The changes since 4.3.0 are listed below.

I'd like to thank the folks that contributed to this release, and people
who helped debug issues.

Enjoy!

Craig

#
# Version 4.3.1, 14 Jul 2019
#


   - Merged pull request #240.
   - Fixed CGI host summary columns for the new Comment value; reported by
   @karlis-irmejs (#245).
   - Added support to BackupPC_tarCreate for PAX headers to allow large
   metadata values to be correctly encoded; reported by @seblu (#254).
   - Fixed regexp in BackupPC so that versions like rsync-bpc 3.1.2beta0
   are matched. PR submitted by Adrien Ferrand (#240).
   - Added more details to error messages when failing to connect to the
   BackupPC server; reported by @dmak (#261).
   - bin/BackupPC Main_Initialize() checks if a BackupPC server is running
   by trying to connect to it, in addition to the existing check that the
   PID exists; reported by @toggoboge (#264).
   - Fixed utf8-encoded share names in deletion of orphan RsyncShareNames;
   reported by @Yann79 (#266).
   - Fixed %Conf passing to the perl-code version of $bpc->cmdSystemOrEval;
   reported by Jeff Kosowsky.
   - BackupPC_backupDelete removes files with BPC_FTYPE_DELETED from the
   filled merged backup; reported by Jeff Kosowsky.
   - Added Data::Dumper SortKeys(1) to lib/BackupPC/Storage/Text.pm so that
   config hash writes have repeatable order; suggested by @kr4ut (#275).
   - bin/BackupPC_archive: use $bpc->openPCLogFile() instead of manual LOG
   file rotation.
   - lib/BackupPC/CGI/Archive.pm: create the client directory if it doesn't
   exist.


Re: [BackupPC-users] BackupPC never ends

2019-07-29 Thread Craig Barratt via BackupPC-users
Could your backup be trying to back up large virtual or sparse files, eg, in
/proc or /var/wtmp?

Craig

On Mon, Jul 29, 2019 at 4:48 AM Mariano Sanfilippo 
wrote:

> Hello, I have connected BackupPC to my client servers through ssh keys and
> rsync; on each client I have a backuppc user that has passwordless sudo
> access to rsync only. I am testing on 2 clients: one client doesn't back
> up anything from /var, and the other client (250GB) never finishes the
> backup. The backup starts, and I can see about 80GB in the BackupPC web
> interface, but it never finishes the task. The log doesn't show any error,
> just that a backup started.
>
> Thanks.
>


Re: [BackupPC-users] rsync-bpc claims that "file has vanished", but tries to download it over and over again

2019-07-29 Thread Craig Barratt via BackupPC-users
Yes, that bug is fixed in the latest versions of BackupPC (4.3.1) and
rsync_bpc (3.1.2.1).  It sounds like you have the latest BackupPC, but you
will need to upgrade to the latest rsync_bpc.

Craig

On Mon, Jul 29, 2019 at 12:29 PM Alexander Kobel  wrote:

> Hi,
>
> On 29.07.19 18:30, Pierre-Yves Bonnetain-Nesterenko wrote:
> > On 29/07/2019 17:49, Alexander Kobel wrote:
> >> Any ideas about what could be the culprit?
> >
> > Looks like the « zombie files » bug, which was corrected by the last
> > update of BPC.
>
> huh. My google-fu fails me here... Would that be "remove any extraneous
> BPC_FTYPE_DELETED file types in non-merged backup" from the recent
> rsync-bpc releases?
>
> (And that's when I notice that both rsync-bpc and BackupPC-XS from the
> Arch package are not up to date...)
>
>
> Thanks,
> Alex
>


Re: [BackupPC-users] Wrong backup count

2019-08-07 Thread Craig Barratt via BackupPC-users
Please look in the LOG files (main BackupPC LOG and per-host LOG) to see
whether backups were started or not, and whether that matches what your
configuration specifies.

Craig

On Tue, Aug 6, 2019 at 8:01 AM Norman Goldstein  wrote:

> Haven't seen your config.pl file, but this might explain/help:
>
> #
> # Number of incremental backups to keep.  Must be >= 1.
> #
> # Note: Starting in V4+, deleting backups is done based on Fill/Unfilled,
> # not whether the original backup was full/incremental.  For historical
> # reasons these parameters continue to be called IncrKeepCnt, rather than
> # UnfilledKeepCnt.  If $Conf{FillCycle} is 0, then incremental backups
> # continue to be unfilled, so the terms are interchangeable.  For V3
> backups,
> # the expiry settings have their original meanings.
> #
> # In the steady state, each time an incr backup completes successfully
> # the oldest one is removed.  If this number is decreased, the
> # extra old backups will be removed.
> #
> $Conf{IncrKeepCnt} = 6;
>
> #
> # Very old incremental backups are removed after $Conf{IncrAgeMax} days.
> # However, we keep at least $Conf{IncrKeepCntMin} incremental backups no
> # matter how old they are.
> #
> $Conf{IncrKeepCntMin} = 1;
> $Conf{IncrAgeMax} = 30;
>
>
>
> On 2019-08-06 5:27 a.m., Gandalf Corvotempesta wrote:
>
> Can someone explain to me why I have only 4 backups?
> It should keep at least 7 backups (1 full, 6 incrementals)
>
> Any idea? Which setting should I check?
>
>
>


Re: [BackupPC-users] Why ping before backup?

2019-08-10 Thread Craig Barratt via BackupPC-users
For an rsyncd client, the "rsync $host::" suggestion seems like an
excellent idea.  Its exit status appears to be correct (0 on success), and
as you point out it is confirming the desired service is actually available.
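As a rough sketch of that check (the function name is made up, `127.0.0.1` is just a stand-in for a real client, and rsync is assumed to be on the PATH):

```shell
# Succeed only if the client's rsync daemon answers, instead of pinging.
# --contimeout bounds how long the daemon connection attempt may take.
rsyncd_alive() {
    rsync --contimeout=5 "$1::" >/dev/null 2>&1
}

# A host with no reachable rsyncd makes the check fail:
rsyncd_alive 127.0.0.1 || echo "127.0.0.1: rsyncd not reachable"
```

Pointing $Conf{PingCmd} at a check like this, instead of the default ping, would make BackupPC skip hosts whose rsyncd isn't actually reachable; treat the exact wiring as an assumption to verify against the documentation.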

Craig

On Sat, Aug 10, 2019 at 6:07 PM Kenneth Porter 
wrote:

> On 8/10/2019 9:50 AM, Alexander Kobel wrote:
> > IMHO the only reason why one might want to replace the ping command is
> > hosts that don't reply to pings for security reasons
>
> I'm thinking that "rsync $host::" would be a good substitute for an
> rsyncd backup because it preloads the executable on the client into its
> disk cache and verifies that the service, not the host, is up and
> accessible.
>
> I set up a new Win10 client the other day and forgot to open the
> firewall to ping when I opened it to rsync.exe. I mostly work from home
> so fixing that means I have to coordinate with the user to meet him when
> he's at the office and not busy using the host. It's just easier to
> change the ping setting. And, since Windows defaults to disallowing
> ping, perhaps this change is a good choice for all Windows rsyncd clients.
>
>
>
>


Re: [BackupPC-users] UmaskMode change in config.pl.rpmnew

2019-08-28 Thread Craig Barratt via BackupPC-users
Jamie,

That should be the same thing in Perl: 23 (decimal) is the same as 027 (octal).
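A quick way to convince yourself, using the shell's printf (which follows C numeric conventions, so a leading 0 means octal):

```shell
printf '%o\n' 23    # decimal 23 shown in octal  -> 27
printf '%d\n' 027   # octal 027 shown in decimal -> 23
```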

Craig

On Wed, Aug 28, 2019 at 1:11 PM Jamie Burchell  wrote:

> Hi
>
>
>
> I’ve been merging in updates to the config.pl file from config.pl.rpmnew
> and noticed that the UmaskMode has changed from 23 to 027. Does anybody
> know exactly which files and folders this relates to and what if anything I
> need to chmod after the change?
>
>
>
> From what I can see, the data dirs and files created are already 760 and
> 540.
>
>
> Kind regards,
>
> Jamie


Re: [BackupPC-users] BackupPC_dump stalls with SIGTTOU on Windows client using rsync

2019-09-04 Thread Craig Barratt via BackupPC-users
In addition to the higher log level, it would be helpful to see the rsync
command being run.  Is there anything in the XferLOG file?

Craig

On Wed, Sep 4, 2019 at 6:44 PM Michael Huntley  wrote:

> Perhaps cranking up log level to 8 or 9 may help.
>
> Mph
>
> On Sep 4, 2019, at 2:56 PM, Mike Hughes  wrote:
>
> No responses. Too much detail? Let me rephrase it:
>
> Windows rsync backup no worky!
> plz halp!!!
> :-D
>
> On Thu, 2019-08-15 at 14:44 -0500, Mike Hughes wrote:
>
> Working to add a few Windows clients to our BackupPC system. I have
> passwordless ssh working and when I try kicking off a backup using the GUI
> it fails immediately.
> Logs show:
> 2019-08-15 11:09:29 full backup started for directory /cygdrive/c
> 2019-08-15 11:09:31 Got fatal error during xfer (No files dumped for share
> /cygdrive/c)
> 2019-08-15 11:09:36 Backup aborted (No files dumped for share /cygdrive/c)
>
> 'BackupPC_dump -f -v ' from the command line looks like it's
> running and the log indicates "full backup started for directory
> /cygdrive/c" but it hangs there.
> Looking at the XFER PIDs I see:
> # strace -p 21912
> strace: Process 21912 attached
> --- stopped by SIGTTOU ---
>
> When I ctrl-c the inactive process these are some of the messages dumped
> to the screen:
> ^C^Cexiting after signal INT
> __bpc_progress_state__ fail cleanup
> BackupFailCleanup: nFilesTotal = 0, type = full, BackupCase = 1, inPlace =
> 1, lastBkupNum =
> Removing empty backup #0
> __bpc_progress_state__ delete #0
> cmdSystemOrEval: about to system
> /usr/share/BackupPC/bin/BackupPC_backupDelete -h  -n 0 -l
>
> BackupPC version: 4.3.1
> rsync_bpc: version 3.1.2.0  protocol version 31
>
> Client: Windows Server 2016
> Cygwin64:
> $ uname -r
> 3.0.7(0.338/5/3)
> openssh 8.0p1-2
> rsync 3.1.2-1
>
> Any help appreciated!
> Thanks!
>
> --
>
>
> Mike
>


Re: [BackupPC-users] Multiple CPU usage for compression?

2019-09-30 Thread Craig Barratt via BackupPC-users
Each rsync backup has two processes on the backup server, but only one will
be doing compression.  So, yes, compression for a single backup is
single-threaded (as rsync is).

However, the backup server usually runs multiple backups (configurable),
and, in the steady state, the amount of compression required isn't very
large: only new files not already in the pool need to be compressed.  It's
not likely compression is a bottleneck.

Craig

On Mon, Sep 30, 2019 at 5:33 PM  wrote:

> Hey guys,
>
>
> Is there such a setting, or is BackupPC genuinely single-threaded when it
> comes to compression? I ask because it seems a little silly these days,
> when pretty much all CPUs are multicore and all servers are multicore and
> multi-CPU.
>
>
> So is there a setting I am missing? I scanned the documentation, and it
> does not really talk about how to improve compression speeds. That seems
> to be a bottleneck to me.
>
>
> Thanks.
>
>
> --
> ==
>
>
> Don Alexander.
>
>
>
>


Re: [BackupPC-users] BackupCP_tarcreate file location

2019-12-06 Thread Craig Barratt via BackupPC-users
It doesn't write a temporary file on the BackupPC server.  It builds the
tar file and directly streams it via the http connection to the browser
client.

Craig

On Fri, Dec 6, 2019 at 5:21 AM Gerald Brandt  wrote:

> Hi,
>
> Where does BackupPC_tarcreate create the tar file before it sends it to
> the client computer?
>
> Gerald
>
>
>
>


Re: [BackupPC-users] rsync backup error

2019-12-26 Thread Craig Barratt via BackupPC-users
What versions of BackupPC and rsync-bpc are you using?

Craig

On Thu, Dec 26, 2019 at 8:58 AM Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> I've got this, and the backup is restarted. Any idea?
> Both client and server are on a private gigabit LAN, no firewall between them.
>
> R
> bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/jenky/laravel-plupload/tests/.gitkeep):
> file doesn't exist
> R
> bpc_sysCall_checkFileMatch(var/www/clients/y/x/web/_vendor/kodeine/laravel-acl/src/migrations/.gitkeep):
> file doesn't exist
> [ 118 lines skipped ]
> G
> bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46)
> failed; errno = 2
> G Couldn't unmark candidate matching file
> /var/backups/backuppc/pool/64/36/6436d29cc1b4f3faedf65d71035d0e46
> (skipped; errno = 2)
> G bpc_attribCache_dirWrite: failed to write attributes for dir
>
> feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fguestisp/fnews/fsrc/fNews/fviews/attrib
> G
> bpc_poolWrite_unmarkPendingDelete(/var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520)
> failed; errno = 2
> G Couldn't unmark candidate matching file
> /var/backups/backuppc/pool/66/8e/678f8b88482487bcb4de17bb54c3a520
> (skipped; errno = 2)
> G bpc_attribCache_dirWrite: failed to write attributes for dir
> feverything/fvar/fwww/fclients/fy/fx/fweb/f_vendor/fjeremeamia/attrib
> [ 22 lines skipped ]
> rsync_bpc: [generator] write error: Broken pipe (32)
> [ 201 lines skipped ]
> DoneGen: 0 errors, 390 filesExist, 7215 sizeExist, 7215 sizeExistComp,
> 2002483 filesTotal, 32067721837 sizeTotal, 45 filesNew, 2230 sizeNew,
> 2230 sizeNewComp, 16624498 inode
> rsync error: error in socket IO (code 10) at io.c(820) [generator=3.1.2.1]
> rsync_bpc: [receiver] write error: Broken pipe (32)
> Done: 0 errors, 5170 filesExist, 57500670 sizeExist, 57500670
> sizeExistComp, 0 filesTotal, 0 sizeTotal, 9442 filesNew, 2245368408
> sizeNew, 2245368408 sizeNewComp, 16642117 inode
> rsync error: received SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1]
> rsync_bpc exited with fatal status 10 (2560) (rsync error: received
> SIGUSR1 (code 19) at main.c(1434) [receiver=3.1.2.1])
>
>


Re: [BackupPC-users] bad poolRangeStart

2020-01-19 Thread Craig Barratt via BackupPC-users
You also created git issue #304
, which is now closed.

Craig

On Fri, Dec 27, 2019 at 12:10 AM Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Got this error:
>
> 2019-12-26 22:44:19 Running BackupPC_nightly -m -P 10 864 867 (pid=20264)
> 2019-12-26 22:44:19  admin : /usr/local/backuppc/bin/BackupPC_nightly:
> bad poolRangeStart '864'
> 2019-12-26 22:44:19 Finished  admin  (BackupPC_nightly -m -P 10 864 867)
>
>


Re: [BackupPC-users] rrdUpdate: illegal attempt to update using time

2020-01-19 Thread Craig Barratt via BackupPC-users
You created git issue #305
 for
this bug, which is now closed.

Craig

On Fri, Dec 27, 2019 at 12:11 AM Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2019-12-26 22:44:23 Running BackupPC_rrdUpdate (pid=20275)
> 2019-12-26 22:44:24  admin-1 : ERROR: /var/log/BackupPC/poolUsage.rrd:
> illegal attempt to update using time 1577404800 when last update time
> is 1577404800 (minimum one second step)
> 2019-12-26 22:44:24 Finished  admin-1  (BackupPC_rrdUpdate)
>
>
> Is there any better place to post bugs?
>
>


Re: [BackupPC-users] Log flood due to "Botch on admin job for admin : already in use!!"

2020-01-19 Thread Craig Barratt via BackupPC-users
You created git issue #303
 for
this bug, which is now closed.

Craig

On Fri, Dec 27, 2019 at 2:32 AM Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Running multiple nightly jobs will slow things down even more, and I'm
> already using 1/128 as the period.
> But that's not the issue; the issue is that BPC is flooding the log,
> writing the same message tens of times per second, every time.
>
On Fri, Dec 27, 2019 at 10:19 AM Alfred Weintoegl wrote:
> >
> > Maybe you should change the following options:
> >
> > $Conf{MaxBackupPCNightlyJobs}
> > and
> > $Conf{BackupPCNightlyPeriod}
> >
> > The BackupPC Documentaion says:
> > If BackupPC_nightly takes too long to run, the settings
> > $Conf{MaxBackupPCNightlyJobs} and $Conf{BackupPCNightlyPeriod} can be
> > used to run several BackupPC_nightly processes in parallel, and to split
> > its job over several nights.
> >
> >
> > regards
> > Alfred
> >
> >
> > On 27.12.2019 at 09:08, Gandalf Corvotempesta wrote:
> > > When the nightly job spans multiple days (in my case, even 1 week), the
> > > logs are flooded with:
> > >
> > > Botch on admin job for  admin : already in use!!
> > >
> > > Would it be possible to relax this logging or add a sort of rate limit
> > > like syslog does?
> > >
> > > There is no need to log the same error line 50 times per second, like
> > > the following:
> > >
> > > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > > 2019-12-26 16:36:41 Botch on admin job for  admin : already in use!!
> > ...
> > snip
> >
> >
> >


Re: [BackupPC-users] Question on incremental archiving

2020-01-20 Thread Craig Barratt via BackupPC-users
Tarak,

No, there isn't a way to generate incremental archives.

If you really want to go to the trouble of creating full and incremental
tape archives (eg, using tar), it might be better to do that
directly without BackupPC.  That way you have full redundancy (ie, the
tape method doesn't even depend on BackupPC).
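If you do go the direct route, GNU tar's --listed-incremental mode is one way to get full plus incremental archives without involving BackupPC at all. A minimal sketch, using temporary paths and tiny example data (assumes GNU tar):

```shell
work=$(mktemp -d)
mkdir "$work/src"
echo one > "$work/src/a.txt"

# Level 0 (full): the .snar snapshot file records what was archived.
tar -cf "$work/full.tar" --listed-incremental="$work/state.snar" -C "$work" src

# Level 1 (incremental): copy the snapshot so the level-0 state is preserved,
# then archive again; only new or changed files are included.
echo two > "$work/src/b.txt"
cp "$work/state.snar" "$work/state1.snar"
tar -cf "$work/incr.tar" --listed-incremental="$work/state1.snar" -C "$work" src

tar -tf "$work/incr.tar"   # contains src/b.txt but not the unchanged a.txt
```

The same snapshot-file discipline (a weekly level 0, then daily level 1 runs against a copy of the level-0 .snar) maps directly onto a weekly-full/daily-incremental tape rotation.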

Craig

On Mon, Jan 20, 2020 at 2:03 PM Patel, Tarak (SSC/SPC) <
tarak.pa...@canada.ca> wrote:

> Hello all,
>
> I'm using 3 different BackupPC instances to back up servers in different
> domains, plus a project-specific instance. I have been working on tools to
> archive backups to tape for long-term storage. The base archive feature
> works well; however, it is a long process when you have many TB with
> millions of files to archive. I was wondering if there was a way to
> archive only the incremental changes (only the new files in a given
> incremental backup).
>
> The idea would be to run a full archive once a week and run an incremental
> archive daily. At the moment, I haven't found a way to accomplish this.
> Any insight would be appreciated.
>
> Thanks,
>
> Tarak
>
>


[BackupPC-users] BackupPC 4.3.2 released

2020-02-19 Thread Craig Barratt via BackupPC-users
BackupPC 4.3.2 has
been released on Github.  This release has a number of bug fixes and a
couple of minor feature additions.

The changes since 4.3.1 are listed below.

I'd like to thank the folks that contributed to this release, and people
who helped debug issues.

Enjoy!

Craig


   - Merged pull requests #278, #281, #289, #295, #297, #307, #308, #311,
     #312, #313, #314.
   - Added per-host read/write exclusivity to worker programs, so
     command-line programs don't collide with backups or other operations;
     reported by @hamster65 (issue #299).
   - Added $Conf{RsyncIncrArgsExtra} (issue #284).
   - Added $Conf{ClientShareName2Path}, which allows mapping of share
     names/paths to real paths on the client (eg, to back up a snapshot
     that's in a different directory to the share name path).
   - Fixed v3 digest calculation in bin/BackupPC_backupDuplicate and
     bin/BackupPC_migrateV3toV4; reported by @palmtop (issue #291).
   - Improved handling of BackupPC_nightly running for more than 24 hours:
     drop the next queued run, and provide better log messages; reported by
     @guestisp (issue #303).
   - Improved error checking on $Conf{BackupPCNightlyPeriod}, and update
     $Info->{NightlyPhase} if it's larger than $Conf{BackupPCNightlyPeriod};
     reported by @guestisp (issue #304).
   - Fixed warnings in bin/BackupPC; submitted by @moisseev (#278).
   - Avoid rrd updates using the same time stamp; submitted by @moisseev
     (#311, issue #305).
   - Removed unused counting of renamed files; submitted by @moisseev (#281).
   - Make tar xfer output parsing compatible with FreeBSD tar; submitted by
     @haarp (#289).
   - Fix daemon stdin open to read mode; submitted by @moisseev (#308).
   - Hosts config editor table layout CGI fix; submitted by @steven-de-wit
     (#297).
   - Fixes to the French translation; submitted by @pjoubert- (#295).
   - Fixes to the Italian translation; submitted by @guestisp (#314).
   - Updated comments for Apache 2.4 config in httpd/src/BackupPC.conf;
     submitted by @NotAProfessionalDeveloper (#307).
   - Documentation update for the SCGI prerequisite; submitted by @guestisp
     (#312).
   - Documentation update for the nginx config; submitted by @guestisp (#313).


Re: [BackupPC-users] Backup aborted (No files dumped for share

2020-02-19 Thread Craig Barratt via BackupPC-users
Some of the Rsync config parameters are used differently in 4.x.  So if you
copied the config over, you could have some settings that aren't right.

What's your value of $Conf{RsyncSshArgs} and $Conf{RsyncArgs}?

Please post the full rsync_bpc command from the XferLOG.bad file.

Craig

On Wed, Feb 19, 2020 at 1:05 PM Gerald Brandt  wrote:

> Hmm, this may be part of the issue:
>
> rsync: on remote machine: -slHogDtprcxe.iLsfxC: unknown option
>
> Gerald
> On 2020-02-19 2:28 p.m., Gerald Brandt wrote:
>
> Hi,
>
> I'm building a 4.3.2 backup server. Currently running a 3.3.0 server.
>
> I configured a few machines for test backups, and I'm getting strange
> results. For example, if I back up three directories (say /etc, /root, and
> /var/www), I consistently get one of those back as "no files dumped". There
> are definitely files there, as 3.3.0 backs them up just fine.
>
>
> 2020-02-19 13:56:11 Created directory /var/lib/backuppc/pc/website/refCnt
> 2020-02-19 13:56:11 full backup started for directory /etc
> 2020-02-19 13:57:12 full backup started for directory /root
> 2020-02-19 13:57:51 full backup started for directory /var/www
> 2020-02-19 14:04:18 full backup started for directory /var/lib/pydio
> 2020-02-19 14:04:31 Got fatal error during xfer (No files dumped for share
> /var/lib/pydio)
> 2020-02-19 14:04:36 Backup aborted (No files dumped for share
> /var/lib/pydio)
>
>
> This is via an rsync backup and happens on more than half my servers,
> though I seem to be having a similar issue with my one SMB backup as well.
>
>
> Any ideas what's going on?
>
>
>


Re: [BackupPC-users] Backup aborted (No files dumped for share

2020-02-20 Thread Craig Barratt via BackupPC-users
>
> [server=2.6.8]


This is a very old rsync (~14 years old).  Can you try a more recent rsync
on the client?

Craig

On Wed, Feb 19, 2020 at 5:46 PM Gerald Brandt  wrote:

> Backup prep: type = full, case = 1, inPlace = 1, doDuplicate = 0, newBkupNum 
> = 0, newBkupIdx = 0, lastBkupNum = , lastBkupIdx =  (FillCycle = 0, noFillCnt 
> = )
> Running: /usr/local/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc 
> --bpc-host-name svn-ftp-db.erlphase.com --bpc-share-name /etc --bpc-bkup-num 
> 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1 
> --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 1 -e /usr/bin/ssh\ -l\ 
> root --rsync-path=/usr/bin/rsync --super --recursive --protect-args 
> --numeric-ids --perms --owner --group -D --times --links --hard-links 
> --delete --delete-excluded --one-file-system --partial --log-format=log:\ %o\ 
> %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --timeout=18 
> svn-ftp-db.erlphase.com:/etc/ /
> full backup started for directory /etc
> Xfer PIDs are now 3568
> This is the rsync child about to exec /usr/local/bin/rsync_bpc
> rsync: on remote machine: -slHogDtprcxe.iLsfxC: unknown option
> rsync error: syntax or usage error (code 1) at main.c(1231) [server=2.6.8]
> rsync_bpc: connection unexpectedly closed (0 bytes received so far) [Receiver]
> Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 
> sizeTotal, 0 filesNew, 0 sizeNew, 0 sizeNewComp, 2 inode
> rsync error: error in rsync protocol data stream (code 12) at io.c(226) 
> [Receiver=3.1.2.1]
> rsync_bpc exited with fatal status 12 (3072) (rsync error: error in rsync 
> protocol data stream (code 12) at io.c(226) [Receiver=3.1.2.1])
>
>
> On 2020-02-19 7:26 p.m., Craig Barratt via BackupPC-users wrote:
>
> Some of the Rsync config parameters are used differently in 4.x.  So if
> you copied the config over you could have some settings that aren't right.
>
> What's your value of $Conf{RsyncSshArgs} and $Conf{RsyncArgs}?
>
> Please post the full rsync_bpc command from the XferLOG.bad file.
>
> Craig
>
> On Wed, Feb 19, 2020 at 1:05 PM Gerald Brandt  wrote:
>
>> Hmm, this may be part of the issue:
>>
>> rsync: on remote machine: -slHogDtprcxe.iLsfxC: unknown option
>>
>> Gerald
>> On 2020-02-19 2:28 p.m., Gerald Brandt wrote:
>>
>> Hi,
>>
>> I'm building a 4.3.2 backup server. Currently running a 3.3.0 server.
>>
>> I configured a few machines for test backups, and I'm getting strange
>> results. For example, if I back up three directories (say /etc, /root, and
>> /var/www), I consistently get one of those back as "no files dumped". There
>> are definitely files there, as 3.3.0 backs them up just fine.
>>
>>
>> 2020-02-19 13:56:11 Created directory /var/lib/backuppc/pc/website/refCnt
>> 2020-02-19 13:56:11 full backup started for directory /etc
>> 2020-02-19 13:57:12 full backup started for directory /root
>> 2020-02-19 13:57:51 full backup started for directory /var/www
>> 2020-02-19 14:04:18 full backup started for directory /var/lib/pydio
>> 2020-02-19 14:04:31 Got fatal error during xfer (No files dumped for
>> share /var/lib/pydio)
>> 2020-02-19 14:04:36 Backup aborted (No files dumped for share
>> /var/lib/pydio)
>>
>>
>> This is via an rsync backup and happens on more than half my servers,
>> though I seem to be having a similar issue with my one SMB backup as well.
>>
>>
>> Any ideas what's going on?
>>
>>
>>


Re: [BackupPC-users] Efficiently browsing backups

2020-04-16 Thread Craig Barratt via BackupPC-users
The storage of a full backup includes the full directory tree below each
share, although directory and share names are mangled: each name is
preceded by an "f", and special characters (like "/") are URL-encoded.

So, since you are looking for a directory, find is likely the most
efficient solution, starting as deep into the directory tree as possible.
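For illustration, the mangling can be approximated with a tiny helper, making find usable against the stored tree (the pc-path layout in the final comment is the conventional one, and the escaping here is deliberately simplified):

```shell
# Mangle a name roughly the way BackupPC stores it: prefix "f" and
# URL-escape "%" and "/" (the real code escapes a few more characters).
mangle() {
    printf 'f%s\n' "$(printf '%s' "$1" | sed -e 's/%/%25/g' -e 's,/,%2f,g')"
}

mangle /etc      # -> f%2fetc   (how a share named "/etc" appears on disk)
mangle apache2   # -> fapache2

# Hypothetical directory search inside backup 123 of host "myhost":
# find "/var/lib/backuppc/pc/myhost/123/$(mangle /etc)" -type d -name 'fapache2*'
```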

If instead you were looking for a file, then find won't work, since files
are only stored as references in the attrib file in each directory.  In that
case your approach of using BackupPC_ls is the best option (but slow, as you
point out).

Craig


On Thu, Apr 16, 2020 at 7:14 AM G.W. Haywood via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Hi there,
>
> On Thu, 16 Apr 2020, Alwyn Kik wrote:
>
> > What is the best way to find a file or directory ...
>
> man find
>
> --
>
> 73,
> Ged.
>
>


Re: [BackupPC-users] R: Efficiently browsing backups

2020-04-16 Thread Craig Barratt via BackupPC-users
Yes.  I think I made that clear in my reply.  Using find in the host backup
tree will work for directories (if you account for the directory name
mangling), but not for files.

Craig

On Thu, Apr 16, 2020 at 9:14 AM Raff Cssa  wrote:

> Don't BackupPC V4 backups have an empty folder tree structure only? find
> doesn't work there.
>
>
> ‎
>
> *Da: *Craig Barratt via BackupPC-users
> *Inviato: *giovedì 16 aprile 2020 18:08
> *A: *General list for user discussion, questions and support
> *Rispondi a: *General list for user discussion, questions and support
> *Cc: *Craig Barratt; G.W. Haywood
> *Oggetto: *Re: [BackupPC-users] Efficiently browsing backups
>
> The storage of a full backup includes the full directory tree below each
> share, although directory and share names are mangled - preceded by an "f"
> and special characters (like "/") are URL encoded.
>
> So, since you are looking for a directory, find is likely the most
> efficient solution, starting as deep into the directory tree as possible.
>
> If instead you were looking for file, then find won't work since files are
> only stored as references in the attrib file in each directory.  So in that
> case your approach of using BackupPC_ls is the best option (but slow as you
> point out).
>
> Craig
>
>
> On Thu, Apr 16, 2020 at 7:14 AM G.W. Haywood via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
>> Hi there,
>>
>> On Thu, 16 Apr 2020, Alwyn Kik wrote:
>>
>> > What is the best way to find a file or directory ...
>>
>> man find
>>
>> --
>>
>> 73,
>> Ged.
>>
>>


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-17 Thread Craig Barratt via BackupPC-users
Are you using rsync?  The default in 4.x is --one-file-system.  You can
edit the config file to remove that if you prefer. I do realize you said
"the client machine has only one partition", so that might not be the issue.

It would be helpful for you to include the exact rsync_bpc command that is
being run from the XferLOG file.

Craig

On Fri, Apr 17, 2020 at 8:56 AM Ghislain Adnet  wrote:

> hi,
>
>   I am testing BackupPC 4 after quite some time using BackupPC 3. It is
> quite a pain, as there are no packages. I used the following
> commands from the source (Debian buster):
>
>
> cpan install SCGI
> apt-get install  libcgi-pm-perl apache2-utils
>
>
> configure.pl\
>  --batch\
>  --cgi-dir /usr/lib/backuppc/cgi-bin\
>  --data-dir /var/lib/backuppc\
>  --hostname $(hostname -f)  \
>  --html-dir /usr/share/backuppc/image\
>  --html-dir-url /backuppc   \
> --run-dir /var/run/backuppc \
> --log-dir /var/log/backuppc \
> --config-dir /etc/backuppc  \
> --scgi-port 3000\
>  --install-dir /usr/share/backuppc;
>
>
>   To mimic the Debian package of 3.x, I use nginx with the
> example config to connect to the admin interface.
>
>   Using the same configuration as the 3.x version, I have backups that
> just miss files; entire directories are skipped
> and I don't see why.
>
>   So I removed my old configuration completely and put
>
> BackupFilesOnly  to /var/backups/mysql
> and removed all exclusions
>
> Inside this directory (/var/backups/mysql) there are just some gzipped
> MySQL dump files.
>
>   The backups finish without errors but tell me 0 files were backed up.
>
>   If I add /etc to the BackupFilesOnly list, then /etc is perfectly
> backed up, but still no /var/backups/mysql :(
>
>   So I am quite at a loss here; any idea what could be going on? This
> configuration works like a charm on BackupPC 3,
> and the client machine has only one partition.
>
> regards,
> Ghislain.
>
>
>
>
>


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-18 Thread Craig Barratt via BackupPC-users
Yes, the rsync settings have changed in 4.x.  You'll need to
set $Conf{RsyncSshArgs} and $Conf{RsyncClientPath}.  You should be able to
put the chroot into $Conf{RsyncClientPath}.

Craig

On Sat, Apr 18, 2020 at 11:39 PM Ghislain Adnet  wrote:

> On 4/18/20 6:49 AM, Craig Barratt via BackupPC-users wrote:
> > Are you using rsync?  The default in 4.x is --one-file-system.  You can
> edit the config file to remove that if you
> > prefer. I do realize you said "the client machine has only one
> partition", so that might not be the issue.
> >
>
> Oh sorry, yes, I use rsync via ssh.
>
> I think I found the issue: the backed-up machines are containers (a
> vserver-like system), so BackupPC connects
> to the host and then backs up the guest:
>
>
> http://backuppc.sourceforge.net/faq/BackupPC.html#_conf_rsyncclientcmd_
>
> # rsync client commands
> $Conf{RsyncClientCmd}   = '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'"
> /usr/sbin/chroot "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
> $Conf{RsyncClientRestoreCmd}= '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'"
> /usr/sbin/chroot "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
>
>
> It seems this part is not working as it was, and it is trying to back up
> the host instead of the guest.
>
> I don't find those options in 4.0; have they disappeared?
>
> regards,
> Ghislain.
>
>
>
>


Re: [BackupPC-users] no errors but no file backuped in some directory not others

2020-04-19 Thread Craig Barratt via BackupPC-users
No, those options aren't used in 4.x.

Craig

On Sun, Apr 19, 2020 at 9:55 AM Ghislain Adnet  wrote:

> On 4/19/20 8:45 AM, Craig Barratt via BackupPC-users wrote:
> > Yes, the rsync settings have changed in 4.x.  You'll need to
> set $Conf{RsyncSshArgs} and $Conf{RsyncClientPath}.  You
> > should be able to put the chroot into $Conf{RsyncClientPath}.
> >
> > Craig
>
> ok so i guess
>
> #$Conf{RsyncClientCmd}=
> '$sshPath  -T -q -x -l aqbackup $host
> sudo vnamespace -e "'.$nomduvserveur.'" /usr/sbin/chroot
> "/vservers/'.$nomduvserveur.'" $rsyncPath $argList+';
>
> convert to
>
> $Conf{RsyncClientPath}  =
> 'sudo vnamespace -e "'.$nomduvserveur.'" /usr/sbin/chroot
> "/vservers/'.$nomduvserveur.'" /usr/bin/rsync ';
>
> $Conf{RsyncSshArgs} = [
>  '-e', '$sshPath  -T -q -x -l aqbackup',
> ];
>
>
> seems to work. Is it still necessary to have things like
>
>'--block-size=2048',
>'--checksum-seed=32761',
>
> to help the backups ?
>
> Regards,
> Ghislain.
>
>


Re: [BackupPC-users] rsync vs rsyncd speed for huge number of small files

2020-04-21 Thread Craig Barratt via BackupPC-users
What version of BackupPC are you running?  4.x will likely be a good deal
faster than 3.x for both rsync+ssh and rsyncd.

The penalty of rsync+ssh vs rsyncd is likely modest, although it depends on
how much data is changing between backups.

Craig

On Tue, Apr 21, 2020 at 1:33 AM R.C.  wrote:

> Hi
>
> What is the expected difference in performance between rsync+ssh and
> rsyncd?
> I would use it over a private LAN, so no concerns about security.
> Currently rsync+ssh is way too slow for a huge number of very small
> files (about 700K email files in an imap server tree), even without
> --checksum.
>
> Thank you
>
> Raf
>
>


Re: [BackupPC-users] How to find files in the pool?

2020-04-23 Thread Craig Barratt via BackupPC-users
The attrib file contains the metadata (mtime, permissions, etc.) for all the
files in that directory, including the MD5 digest of the contents of each
file.

You can use BackupPC_attribPrint to print the contents of the attrib file,
which will show the meta data for each file.

Craig

On Mon, Apr 20, 2020 at 9:58 AM R.C.  wrote:

> Hi all
>
> given the following folder attrib file:
> attrib_cdc5cda251876d069be82cd87feef573be
>
> in which subpath of the cpool folder are the contained files?
> I've tried walking the path starting from the first and then from the last
> octet of the MD5 number, but an octet seems to be missing at some point.
>
> Thank you
>
> Raf
>
>


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

Tar exit status of 512 means it encountered some sort of error (eg, a file
read error) while it was running on the target client.  Please look at the
XferLOG.bad file carefully to see the specific error from tar.
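As a sketch of where the odd-looking 512 comes from (assuming the usual raw wait-status convention, where the child's exit code sits in the high byte): tar's fatal-error exit code of 2 surfaces as 2 * 256 = 512:

```shell
# tar exits with code 2 on a fatal error; the raw wait status that gets
# reported stores that code shifted up by 8 bits, i.e. multiplied by 256.
sh -c 'exit 2'
echo $(( $? * 256 ))   # prints 512, the value seen in the log
```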

If you are unable to see the error, could you please send me the entire
XferLOG.bad file?

Craig

On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman 
wrote:

> I have a persistent problem with backing up one host: I can do a full
> backup, but an incremental backup fails on trying to transfer the first
> directory:
>
> tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> sizeExistComp, 2 filesTotal, 81381 sizeTotal
> Got fatal error during xfer (Tar exited with error 512 () status)
> Backup aborted (Tar exited with error 512 () status)
>
> All other hosts work ok. So I'm guessing it must be a file permission
> error. Looking at the files, everything seems to be owned by
> backuppc.backuppc, so I don't know where/what else to look for. Any
> suggestions?
>
> Thanks
> Graham
>
>


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-24 Thread Craig Barratt via BackupPC-users
Graham,

This is a problem with shell (likely ssh) escaping of arguments that
contain a space.

For incremental backups a timestamp is passed as an argument to tar running
on the client.  The argument should be a date and time, eg:

--after-date 2020-04-22\ 21:18:10

Notice there needs to be a backslash before the space, so it is part of a
single argument, not two separate arguments.

You can tell BackupPC to escape an argument (to protect it as it passes
through ssh) by adding a "+" to the end of the argument name, eg:

$Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';


Craig

On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  wrote:

>
>
>
>
>
>
>
> Ok, I guess it's this (from the start of XferLOG.bad):
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> which is kind of confusing, as it goes on to copy the rest of the
> directory and then says '0 Errors'. Anyway, it's correct that there is no
> file called '21:18:10'. Any idea why it thinks there should be?
>
> Graham
>
>
> On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > Tar exit status of 512 means it encountered some sort of error (eg, file
> > read error) while it was running on the target client.  Please look at
> > the XferLOG.bad file carefully to see the specific error from tar.
> >
> > If you are unable to see the error, please send me the entire
> > XferLOG.bad file?
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman  > <mailto:gra...@theseamans.net>> wrote:
> >
> > I have a persistent problem with backing up one host: I can do a full
> > backup, but an incremental backup fails on trying to transfer the
> first
> > directory:
> >
> > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist, 18122
> > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > Got fatal error during xfer (Tar exited with error 512 () status)
> > Backup aborted (Tar exited with error 512 () status)
> >
> > All other hosts work ok. So I'm guessing it must be a file permission
> > error. Looking at the files, everything seems to be owned by
> > backuppc.backuppc, so I don't know where/what else to look for. Any
> > suggestions?
> >
> > Thanks
> > Graham
> >
> >


Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread Craig Barratt via BackupPC-users
In the example you showed, the file contents have
digest 4b544ad7b8992fbbc0fafe34ae6ab5d5.  You can pass that directly to
BackupPC_zcat if you want, which will uncompress the file to stdout, eg:

BackupPC_zcat 4b544ad7b8992fbbc0fafe34ae6ab5d5 | wc

The pool directory tree is described in the documentation
<https://backuppc.github.io/backuppc/BackupPC.html#Storage-layout>:

For V4+, the digest is the MD5 digest of the full file contents (the length
is not used). For V4+ the pool files are stored in a 2-level tree, using the
top 7 bits of each of the first two bytes of the digest. So there are 128
directories at each level, numbered in even hex from 0x00, 0x02, up to 0xfe.

For example, if a file has an MD5 digest of
123456789abcdef0123456789abcdef0, the uncompressed file is stored in
__TOPDIR__/pool/12/34/123456789abcdef0123456789abcdef0.


In your example, the file will be at (assuming compression is
on): __TOPDIR__/cpool/4a/54/4b544ad7b8992fbbc0fafe34ae6ab5d5.  The two
directory entries come from the first two bytes (4b and 54) of the filename,
each rounded down to the nearest even number (ie, 4b -> 4a, 54 -> 54).
Numerically, it's ANDing with 0xfe.
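That mapping can be sketched in a few lines of shell (the helper below is illustrative only, not a BackupPC tool; the digest and __TOPDIR__ are the ones from this example):

```shell
# Compute the V4 cpool path for an MD5 digest: each of the two directory
# levels is one byte of the digest ANDed with 0xfe (rounded down to even).
pool_path() {
    topdir=$1; digest=$2
    b0=$(( 0x$(printf '%.2s' "$digest") & 0xfe ))                 # byte 1
    b1=$(( 0x$(printf '%s' "$digest" | cut -c3-4) & 0xfe ))       # byte 2
    printf '%s/cpool/%02x/%02x/%s\n' "$topdir" "$b0" "$b1" "$digest"
}

pool_path __TOPDIR__ 4b544ad7b8992fbbc0fafe34ae6ab5d5
# prints __TOPDIR__/cpool/4a/54/4b544ad7b8992fbbc0fafe34ae6ab5d5
```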

Craig

On Fri, Apr 24, 2020 at 4:57 AM R.C.  wrote:

>
> Il 24/04/2020 02:53, Craig Barratt via BackupPC-users ha scritto:
> > The attrib file contains the meta data (mtime, permissions etc) for all
> > the files in that directory, including the md5 digest of the contents of
> > each file.
> >
> > You can use BackupPC_attribPrint to print the contents of the attrib
> > file, which will show the meta data for each file.
> >
> > Craig
> >
>
> Thank you Craig.
>
> I'm sorry, I still can't figure out the right way to get to the file.
>
> If I issue:
>
> sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_attribPrint
> attrib_c5cda251876d069be82cd87feef573be |head -n 15
>
> the first file's metadata returned is:
>
> Attrib digest is c5cda251876d069be82cd87feef573be
> $VAR1 = {
>'0001E9891510415CBBFA53F685D8FF2C.Zip' => {
>  'compress' => 3,
>  'digest' => '4b544ad7b8992fbbc0fafe34ae6ab5d5',
>  'gid' => 0,
>  'inode' => 9,
>  'mode' => 484,
>  'mtime' => 1320069331,
>  'name' => '0001E9891510415CBBFA53F685D8FF2C.Zip',
>  'nlinks' => 0,
>  'size' => 101121,
>  'type' => 0,
>  'uid' => 0
>},
>
> How to retrieve the actual path of that file in the cpool tree?
> I cannot use the digest to walk down the cpool tree; the octets lead to a
> nonexistent path.
> Using the inode to find the file is cumbersome and requires the use of
> low-level filesystem tools.
>
> I'm referring to V4
>
> Thank you
>
> Raf
>
>


Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
Graham,

Your script is the problem.  Using $* causes the shell to re-split
arguments at whitespace.  To preserve the arguments you need to put that in
quotes:

exec /bin/tar -c "$*"

Craig

On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman  wrote:

> Thanks Craig
>
> That's clearly the problem, but I'm still mystified.
>
> I have backuppc running on my home server; the storage is on a NAS NFS
> mounted on the home server. Backing up other hosts on my network (both
> full and incremental) over rsync works fine.
>
> The home server backs up using tar. The command in the log is:
>
> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
> /etc --totals --newer=2020-04-22 21:18:10 .
>
> If I set
>
>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>
>
> then incremental backups of the home server fail with:
>
> /bin/tar: Substituting -9223372036854775807 for unknown date format
> ‘2020-04-22\\’
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> If instead I set:
>
> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>
> then incremental backups fail with:
>
> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
> 00:00:00
> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>
> Could it be to do with my localtar/tar_create.sh? (I created this so
> long ago I no longer remember where it came from).
>
> This is just:
>
> #!/bin/sh -f
> exec /bin/tar -c $*
>
> Thanks again
>
> Graham
>
> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
> > Graham,
> >
> > This is a problem with shell (likely ssh) escaping of arguments that
> > contain a space.
> >
> > For incremental backups a timestamp is passed as an argument to tar
> > running on the client.  The argument should be a date and time, eg:
> >
> > --after-date 2020-04-22\ 21:18:10'
> >
> > Notice there needs to be a backslash before the space, so it is part of
> > a single argument, not two separate arguments.
> >
> > You can tell BackupPC to escape an argument (to protect it from passing
> > via ssh) by adding a "+" to the end of the argument name, eg:
> >
> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
> >
> >
> > Craig
> >
> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman  > <mailto:gra...@theseamans.net>> wrote:
> >
> >
> >
> >
> >
> >
> >
> >
> > Ok, I guess its this (from the start of XferLOG.bad):
> >
> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
> 2020-04-22
> > 00:00:00
> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
> >
> > which is kind of confusing, as it goes on to copy the rest of the
> > directory and then says '0 Errors'. Anyway, its correct that there
> is no
> > file called '21:18:10'. Any idea why it thinks there should be?
> >
> > Graham
> >
> >
> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
> > > Graham,
> > >
> > > Tar exit status of 512 means it encountered some sort of error
> > (eg, file
> > > read error) while it was running on the target client.  Please
> look at
> > > the XferLOG.bad file carefully to see the specific error from tar.
> > >
> > > If you are unable to see the error, please send me the entire
> > > XferLOG.bad file?
> > >
> > > Craig
> > >
> > > On Fri, Apr 24, 2020 at 12:13 PM Graham Seaman
> > mailto:gra...@theseamans.net>
> > > <mailto:gra...@theseamans.net <mailto:gra...@theseamans.net>>>
> wrote:
> > >
> > > I have a persistent problem with backing up one host: I can do
> > a full
> > > backup, but an incremental backup fails on trying to transfer
> > the first
> > > directory:
> > >
> > > tarExtract: Done: 0 errors, 2 filesExist, 81381 sizeExist,
> 18122
> > > sizeExistComp, 2 filesTotal, 81381 sizeTotal
> > > Got fatal error during xfer (Tar exited with error 512 ()
> status)
> > > Backup aborted (Tar exited with error 512 () status)
> > >
> > > All other hosts work ok. So I'm guessing it must be a file
> 

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-25 Thread Craig Barratt via BackupPC-users
It would be helpful if you included the edited script in your reply.  Did
you use double quotes, or two single quotes?

I'd recommend trying without the script, just to make sure it works
correctly.  Then you can be sure it's an issue with how the script
handles/splits arguments.

Craig

On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman  wrote:

> Craig
>
> Quoting $* gives me a new error:
>
> /bin/tar: invalid option -- ' '
>
> (I get exactly the same error whether I use $incrDate or $incrDate+)
>
> That script is to avoid potential security problems from relaxing the
> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
> no-one else has the same problems (and that it apparently used to work for
> me once)
>
> Graham
>
>
> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>
> Graham,
>
> Your script is the problem.  Using $* causes the shell the resplit
> arguments at whitespace.  To preserve the arguments you need to put that in
> quotes:
>
> exec /bin/tar -c "$*"
>
> Craig
>
> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
> wrote:
>
>> Thanks Craig
>>
>> That's clearly the problem, but I'm still mystified.
>>
>> I have backuppc running on my home server; the storage is on a NAS NFS
>> mounted on the home server. Backing up other hosts on my network (both
>> full and incremental) over rsync works fine.
>>
>> The home server backs up using tar. The command in the log is:
>>
>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>> /etc --totals --newer=2020-04-22 21:18:10 .
>>
>> If I set
>>
>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>
>>
>> then incremental backups of the home server fail with:
>>
>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>> ‘2020-04-22\\’
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> If instead I set:
>>
>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>
>> then incremental backups fail with:
>>
>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>> 00:00:00
>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>
>> Could it be to do with my localtar/tar_create.sh? (I created this so
>> long ago I no longer remember where it came from).
>>
>> This is just:
>>
>> #!/bin/sh -f
>> exec /bin/tar -c $*
>>
>> Thanks again
>>
>> Graham
>>
>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>> > Graham,
>> >
>> > This is a problem with shell (likely ssh) escaping of arguments that
>> > contain a space.
>> >
>> > For incremental backups a timestamp is passed as an argument to tar
>> > running on the client.  The argument should be a date and time, eg:
>> >
>> > --after-date 2020-04-22\ 21:18:10'
>> >
>> > Notice there needs to be a backslash before the space, so it is part of
>> > a single argument, not two separate arguments.
>> >
>> > You can tell BackupPC to escape an argument (to protect it from passing
>> > via ssh) by adding a "+" to the end of the argument name, eg:
>> >
>> > $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>> >
>> >
>> > Craig
>> >
>> > On Fri, Apr 24, 2020 at 3:17 PM Graham Seaman > > <mailto:gra...@theseamans.net>> wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > Ok, I guess its this (from the start of XferLOG.bad):
>> >
>> > /bin/tar: Option --after-date: Treating date '2020-04-22' as
>> 2020-04-22
>> > 00:00:00
>> > /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>> >
>> > which is kind of confusing, as it goes on to copy the rest of the
>> > directory and then says '0 Errors'. Anyway, its correct that there
>> is no
>> > file called '21:18:10'. Any idea why it thinks there should be?
>> >
>> > Graham
>> >
>> >
>> > On 24/04/2020 20:59, Craig Barratt via BackupPC-users wrote:
>> > > Graham,
>> > >
>> > > Tar exit status of 512 means it encountered some sort of error
>> > (eg, file
>> > > read error) while it was r

Re: [BackupPC-users] full backup ok, incremental fails

2020-04-26 Thread Craig Barratt via BackupPC-users
Sorry, the correct form should be "$@":

#!/bin/sh -f
exec /bin/tar -c "$@"

(Note that you want to force tar to have the -c option, not exec).
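A minimal, self-contained illustration of the difference (plain sh; the space-containing timestamp argument is the one from this thread):

```shell
# count_args prints how many arguments it received.
count_args() { echo $#; }

set -- --totals '--newer=2020-04-22 21:18:10' /etc
count_args "$@"   # prints 3: each argument preserved, space intact
count_args $*     # prints 4: unquoted expansion re-split the timestamp
count_args "$*"   # prints 1: everything collapsed into a single word
```

This is exactly why the script's unquoted `$*` broke the `--newer` timestamp into two words.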

Craig



On Sun, Apr 26, 2020 at 5:14 AM Graham Seaman  wrote:

> Hi Craig
>
> I set sudoers to allow backuppc to run tar as root with no password, and
> incremental backups work fine.
>
> This is only marginally less secure than the old setup, which allowed
> backuppc to run the script which called tar, so I guess I can live with
> this.
>
> But in case you have any other ideas, here's my tiny script that's now
> definitely what's causing the problem (the quote marks are double quotes,
> not two single quotes):
>
> #!/bin/sh -f
>
> exec -c /bin/tar "$*"
>
>
> Graham
>
>
> On 26/04/2020 04:09, Craig Barratt via BackupPC-users wrote:
>
> It would be helpful if you included the edited script in your reply.  Did
> you use double quotes, or two single quotes?
>
> I'd recommend trying without the script, just the make sure it works
> correctly.  Then you can be sure it's an issue with how the script
> handles/splits arguments.
>
> Craig
>
> On Sat, Apr 25, 2020 at 2:49 PM Graham Seaman 
> wrote:
>
>> Craig
>>
>> Quoting $* gives me a new error:
>>
>> /bin/tar: invalid option -- ' '
>>
>> (I get exactly the same error whether I use $incrDate or $incrDate+)
>>
>> That script is to avoid potential security problems from relaxing the
>> rules in sudoers, so I'd rather not get rid of it, but I'm a bit surprised
>> no-one else has the same problems (and that it apparently used to work for
>> me once)
>>
>> Graham
>>
>>
>> On 25/04/2020 17:59, Craig Barratt via BackupPC-users wrote:
>>
>> Graham,
>>
>> Your script is the problem.  Using $* causes the shell the resplit
>> arguments at whitespace.  To preserve the arguments you need to put that in
>> quotes:
>>
>> exec /bin/tar -c "$*"
>>
>> Craig
>>
>> On Sat, Apr 25, 2020 at 5:04 AM Graham Seaman 
>> wrote:
>>
>>> Thanks Craig
>>>
>>> That's clearly the problem, but I'm still mystified.
>>>
>>> I have backuppc running on my home server; the storage is on a NAS NFS
>>> mounted on the home server. Backing up other hosts on my network (both
>>> full and incremental) over rsync works fine.
>>>
>>> The home server backs up using tar. The command in the log is:
>>>
>>> Running: /usr/bin/sudo /etc/backuppc/localtar/tar_create.sh -v -f - -C
>>> /etc --totals --newer=2020-04-22 21:18:10 .
>>>
>>> If I set
>>>
>>>  $Conf{TarIncrArgs} = '--newer=$incrDate+ $fileList+';
>>>
>>>
>>> then incremental backups of the home server fail with:
>>>
>>> /bin/tar: Substituting -9223372036854775807 for unknown date format
>>> ‘2020-04-22\\’
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> If instead I set:
>>>
>>> $Conf{TarIncrArgs} = '--newer=$incrDate $fileList';
>>>
>>> then incremental backups fail with:
>>>
>>> /bin/tar: Option --after-date: Treating date '2020-04-22' as 2020-04-22
>>> 00:00:00
>>> /bin/tar: 21\:18\:10: Cannot stat: No such file or directory
>>>
>>> Could it be to do with my localtar/tar_create.sh? (I created this so
>>> long ago I no longer remember where it came from).
>>>
>>> This is just:
>>>
>>> #!/bin/sh -f
>>> exec /bin/tar -c $*
>>>
>>> Thanks again
>>>
>>> Graham
>>>
>>> On 25/04/2020 02:59, Craig Barratt via BackupPC-users wrote:
>>> > Graham,
>>> >
>>> > This is a problem with shell (likely ssh) escaping of arguments that
>>> > contain a space.
>>> >
>>> > For incremental backups a timestamp is passed as an argument to tar
>>> > running on the client.  The argument should be a date and time, eg:
>>> >
>>> > --after-date 2020-04-22\ 21:18:10'
>>> >
>>> > Notice there needs to be a backslash before the space, so it is part of
>>> > a single argument, not two separate arguments.
>>> >
>>> > You can tell BackupPC to escape an argument (to protect it from passing
>>> > via ssh) by adding a "+" to the end of the argument name, eg:
>>> >
>>> > $Conf{

Re: [BackupPC-users] Large rsyncTmp files

2020-05-01 Thread Craig Barratt via BackupPC-users
It could also be a sparse file (eg, below /proc or /var/log/wtmp) that
isn't being excluded.

Craig

On Fri, May 1, 2020 at 10:14 AM Alexander Kobel  wrote:

> Hi Marcelo,
>
> On 5/1/20 4:15 PM, Marcelo Ricardo Leitner wrote:
> > Hi,
> >
> > Is it expected for rsync-bpc to be writing such large temporary files?
>
> If and only if there is such a big file to be backed up, AFAIK.
>
> > It seems they are as big as the full backup itself:
> > # ls -la */*/rsync*
> > -rw--- 1 112 122 302598406144 May  1 10:54
> HOST/180/rsyncTmp.4971.0.29
>
> Did you double-check whether there really is no file of that size on the
> HOST? (Try running `find $share -size +10M` on it, or something like
> that.)
>
> Do you use the -x (or --one-file-system) option for rsync?
> I recently ran into a similar issue because I didn't. A chrooted process
> suddenly received its own copy of /proc under
> /var/lib//proc after a system update, and proc has the
> 128T-huge kcore. Not a good idea trying to back up that directory.
> (Running dhcpcd on Arch by any chance?)
> It also got other mounts, like sysfs and some tmpfs, but those were
> mostly harmless.
>
> > That's a 300GB file, it filled the partition, and the full size for
> > this host is 337GB.
> >
> > Thanks,
> > Marcelo
>
>
> HTH,
> Alex
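
Alexander's `find` suggestion can be sharpened to flag sparse candidates directly, by comparing each file's apparent size (`%s`) against its allocated 512-byte blocks (`%b`); the paths and thresholds below are arbitrary examples:

```shell
# find_sparse: list files under $1 whose apparent size is at least $2
# bytes and more than double their on-disk size (likely sparse, eg a
# stray /proc/kcore inside a chroot).
# (Simplification: assumes paths without embedded spaces.)
find_sparse() {
  find "$1" -xdev -type f -printf '%s %b %p\n' 2>/dev/null |
    awk -v min="$2" '$1+0 >= min+0 && $1 > 2 * 512 * $2 { print $1 " apparent bytes: " $3 }'
}

find_sparse /etc 10737418240   # ~10 GiB threshold; typically prints nothing here
```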
>
>
> ___
> BackupPC-users mailing list
> BackupPC-users@lists.sourceforge.net
> List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> Wiki:http://backuppc.wiki.sourceforge.net
> Project: http://backuppc.sourceforge.net/
>


Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread Craig Barratt via BackupPC-users
Jeff,

BackupPC_restore takes quite a few parameters, so many years ago I decided
to pass those parameters via a file rather than command-line arguments.
Probably a bad choice...

There are two alternatives:

   - write a script that creates the restore request parameter file (see
   lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
   - directly run rsync_bpc using an example of the many arguments from a
   successful restore log file

The drawback of the 2nd approach is that information about the restore
isn't saved or logged, and you have to make sure it doesn't run
concurrently with a backup.
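
Until such a wrapper script exists, the closest command-line baseline is the tar pipeline (host, share, and restore target below are placeholders; note this path cannot carry acls, xattrs, or special files):

```shell
# Stream the latest backup (-n -1) of share "myshare" on host "myhost"
# and unpack it under /restore/target.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h myhost -n -1 -s myshare . |
  sudo tar -xvf - -C /restore/target
```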

Craig


On Wed, May 20, 2020 at 6:31 AM  wrote:

>
> Is it possible to do an rsync restore from the command line using
> BackupPC_restore?
> If so, it's not clear from the (limited) documentation how to use
> it. For example, how do you specify the desired backup number, share,
> paths, etc.
>
> Alternatively, is there an rsync analog of BackupPC_tarCreate?
>
> Said another way, rather than first creating a tar file via
> BackupPC_tarCreate and then untarring (or piping through tar), is
> there a way to hook directly into rsync to restore from the command line?
>
> This would have several advantages:
> 1. It would automatically incorporate the ssh and compression features
>to restore seamlessly, efficiently, and securely across platforms
>
> 2. It would allow for restoring special file types that tar doesn't
>support
>
> 3. It would be able to better and more exactly mirror the parameters
>given to Rsync dump (for example the same format of 'includes' and
>'excludes'
>
>


Re: [BackupPC-users] Rsync restore from the command line

2020-05-20 Thread Craig Barratt via BackupPC-users
Jeff,

For restores, there could be a long list of specific files or directories
to restore, which might not fit on the command line, so that's what
triggered putting everything in a request file and just passing its name.
There are also several other settings specific to the restore (eg, the path
to restore to etc), none of which are in config files.

Craig

On Wed, May 20, 2020 at 10:29 AM  wrote:

> Thanks Craig,
>
> Why is restore inherently that much more complicated than dump?
> It seems like config.pl already has a number of parameters built-in
> for both including rsync args and pre/post restore commands.
>
> Conceptually, I would think that what one needs to specify each time
> is:
> 1. Host
> 2. Backup number
> 3. Share
> 4. Additional includes/excludes to determine what to restore
> 5. Option to "delete" files no longer found
> 6. Path to root of restore.
>
> Otherwise, existing includes/excludes would be assumed...
>
> I'm sure one could make it more complicated but am I missing something
> basic???
>
> The reality is that rsync + backuppc is really awesome... and I can do
> (and automate) so much more with CLI and scripts than with a CGI.
>
> Jeff
>
> Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on
> Wednesday, May 20, 2020:
>  > Jeff,
>  >
>  > BackupPC_restore takes quite a few parameters, so many years ago I
> decided
>  > to pass those parameters via a file rather than command-line arguments.
>  > Probably a bad choice...
>  >
>  > There are two alternatives:
>  >
>  >- write a script that creates the restore request parameter file (see
>  >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
>  >- directly run rsync_bpc using an example of the many arguments from
> a
>  >successful restore log file
>  >
>  > The drawback of the 2nd approach is that information about the restore
>  > isn't saved or logged, and you have to make sure it doesn't run
>  > concurrently with a backup.
>  >
>  > Craig
>  >
>  >
>  > On Wed, May 20, 2020 at 6:31 AM  wrote:
>  >
>  > >
>  > > Is it possible to do an rsync restore from the command line using
>  > > BackupPC_restore?
>  > > If so, it's not clear from the (limited) documentation how to use
>  > > it. For example, how do you specify the desired backup number, share,
>  > > paths, etc.
>  > >
>  > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
>  > >
>  > > Said another way, rather than first creating a tar file via
>  > > BackupPC_tarCreate and then untarring (or piping through tar), is
>  > > there a way to hook directly into rsync to restore from the command
> line?
>  > >
>  > > This would have several advantages:
>  > > 1. It would automatically incorporate the ssh and compression features
>  > >to restore seamlessly, efficiently, and securely across platforms
>  > >
>  > > 2. It would allow for restoring special file types that tar doesn't
>  > >support
>  > >
>  > > 3. It would be able to better and more exactly mirror the parameters
>  > >given to Rsync dump (for example the same format of 'includes' and
>  > >'excludes'
>  > >
>  > >
>
>


Re: [BackupPC-users] Rsync restore from the command line

2020-05-21 Thread Craig Barratt via BackupPC-users
Jeff,

Sure, that's possible.  But given that it was necessary to use a file to
pass along the list of files to restore, it seemed easier to just use that
one mechanism.

It would be pretty easy to write a wrapper script that takes command-line
arguments and writes the request file and passes it along to
BackupPC_restore (see lib/BackupPC/CGI/Restore.pm).

Craig

On Wed, May 20, 2020 at 11:59 AM  wrote:

> Couldn't there be an option to read the to-be-restored files from a
> file (similar to what tar and rsync allow), while allowing basic restores
> to be done from the command line?
> Other parameters could either be passed via the command line or added as
> config.pl settings if more permanent.
> Craig Barratt via BackupPC-users wrote at about 11:31:37 -0700 on
> Wednesday, May 20, 2020:
>  > Jeff,
>  >
>  > For restores, there could be a long list of specific files or
> directories
>  > to restore, which might not fit on the command line, so that's what
>  > triggered putting everything in a request file and just passing its
> name.
>  > There are also several other settings specific to the restore (eg, the
> path
>  > to restore to etc), none of which are in config files.
>  >
>  > Craig
>  >
>  > On Wed, May 20, 2020 at 10:29 AM  wrote:
>  >
>  > > Thanks Craig,
>  > >
>  > > Why is restore inherently that much more complicated than dump?
>  > > It seems like config.pl already has a number of parameters built-in
>  > > for both including rsync args and pre/post restore commands.
>  > >
>  > > Conceptually, I would think that what one needs to specify each time
>  > > is:
>  > > 1. Host
>  > > 2. Backup number
>  > > 3. Share
>  > > 4. Additional includes/excludes to determine what to restore
>  > > 5. Option to "delete" files no longer found
>  > > 6. Path to root of restore.
>  > >
>  > > Otherwise, existing includes/excludes would be assumed...
>  > >
>  > > I'm sure one could make it more complicated but am I missing something
>  > > basic???
>  > >
>  > > The reality is that rsync + backuppc is really awesome... and I can do
>  > > (and automate) so much more with CLI and scripts than with a CGI.
>  > >
>  > > Jeff
>  > >
>  > > Craig Barratt via BackupPC-users wrote at about 09:32:48 -0700 on
>  > > Wednesday, May 20, 2020:
>  > >  > Jeff,
>  > >  >
>  > >  > BackupPC_restore takes quite a few parameters, so many years ago I
>  > > decided
>  > >  > to pass those parameters via a file rather than command-line
> arguments.
>  > >  > Probably a bad choice...
>  > >  >
>  > >  > There are two alternatives:
>  > >  >
>  > >  >- write a script that creates the restore request parameter
> file (see
>  > >  >lib/BackupPC/CGI/Restore.pm) and then runs BackupPC_restore
>  > >  >- directly run rsync_bpc using an example of the many arguments
> from
>  > > a
>  > >  >successful restore log file
>  > >  >
>  > >  > The drawback of the 2nd approach is that information about the
> restore
>  > >  > isn't saved or logged, and you have to make sure it doesn't run
>  > >  > concurrently with a backup.
>  > >  >
>  > >  > Craig
>  > >  >
>  > >  >
>  > >  > On Wed, May 20, 2020 at 6:31 AM  wrote:
>  > >  >
>  > >  > >
>  > >  > > Is it possible to do an rsync restore from the command line using
>  > >  > > BackupPC_restore?
>  > >  > > If so, it's not clear from the (limited) documentation how to use
>  > >  > > it. For example, how do you specify the desired backup number,
> share,
>  > >  > > paths, etc.
>  > >  > >
>  > >  > > Alternatively, is there an rsync analog of BackupPC_tarCreate?
>  > >  > >
>  > >  > > Said another way, rather than first creating a tar file via
>  > >  > > BackupPC_tarCreate and then untarring (or piping through tar), is
>  > >  > > there a way to hook directly into rsync to restore from the
> command
>  > > line?
>  > >  > >
>  > >  > > This would have several advantages:
>  > >  > > 1. It would automatically incorporate the ssh and compression
> features
>  > >  > >to restore seamlessly, efficiently, and securely across
> platform

Re: [BackupPC-users] replication of data pool

2020-05-21 Thread Craig Barratt via BackupPC-users
Mike,

The cpool isn't structured in a way that makes it possible to just copy
recently backed-up files.

Ged's suggestion (just run another BackupPC instance offsite) is
worth considering.  That also provides more robustness to certain failures
(eg undetected filesystem corruption on the primary BackupPC server).

Craig

On Wed, May 20, 2020 at 12:37 PM Mike Hughes  wrote:

> Hi, we're currently syncing our cpool to an off-site location on a weekly
> basis. Would it be feasible to only sync the latest of each backup rather
> than the entire pool?
>
> To elaborate, on Saturdays we run an rsync of the entire cpool to another
> server to provide disaster recovery options. Is it possible/reasonable to
> just copy the data from the night before? Or, with de-duplication and
> compression, would we really save much space/transfer time? If so, what is
> the best way to grab just one night's worth of backups while still
> preserving a full recovery?
>
> Just curious if someone is already doing this and how you sorted it out.
>
> Thanks!
> Mike
>


Re: [BackupPC-users] List of All Files Being Backed Up

2020-05-21 Thread Craig Barratt via BackupPC-users
DW,

I'd recommend using BackupPC_ls -R to recursively list all the files in a
particular backup.  You'd only need to run it on each of your hosts,
against the latest backup.
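
Craig's suggestion can be scripted; this sketch assumes hypothetical host names, a placeholder share, and the usual install path for the BackupPC binaries (verify the `BackupPC_ls` options against your install):

```shell
# Dump a recursive listing (-R) of each host's latest backup (-n -1)
# into per-host files.
for host in HOST1 HOST2 HOST3; do
  sudo -u backuppc /usr/share/backuppc/bin/BackupPC_ls \
      -h "$host" -n -1 -s "/share" -R / > "/tmp/filelist.$host"
done
# Then find which host backs up a given file:
grep -l 'path/of/interest' /tmp/filelist.*
```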

Craig

On Tue, May 19, 2020 at 4:41 PM David Wynn via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> I’ve tried to search using some different keywords and combinations but
> have had no luck in finding an answer.  Here’s my situation … I am backing
> up around 150 different files from a NAS but using multiple hosts to do so
> in order to keep the file transfer size down to <20GB at a time.  This is
> across the ‘internet’ so bandwidth is not under my control and I want to
> make sure the jobs don’t crap out at bad times.  I’ve found that <20GB at a
> time usually works great.
>
>
>
> But, in order to manage the sizes I first created the individual lists
> based on the size of the files/directories and input them manually into my
> config files for each host.  For example, HOST1 may have 40 smaller files
> to get to the 20GB limit whereas HOST10 may only have 1 file/directory to
> get to the limit.
>
>
>
> Now I have the problem of trying to find an individual file/directory from
> around 18 different HOSTx setups.
>
>
>
> Is there an easy way to get/create a listing that would show the HOSTx
> and the files/directories that are being backed up under it?  I have
> thought of trying to write a ‘script’ to traverse the individual HOSTx.pl
> files and extract the info – but my scripting is purely of a W10 nature and
> even at that is poor and wordy.   (Oh for the days of COBOL and PL/1.)
>
>
>
> Just wondering if there is something I have missed in the documentation or
> in trying to search the forums.  Someone must have had this problem before
> me.
>
>
>
> Thanks for your help
>
>
>
> DW


Re: [BackupPC-users] Testing full restore of backuppc... [private version]

2020-05-21 Thread Craig Barratt via BackupPC-users
Jeff,

Unfortunately BackupPC_tarCreate doesn't support acls.  Over the years
different flavors of tar supported different archive formats for certain
extensions (eg, long file names etc).  The POSIX standard for PAX headers
unified some of those disparate formats, but didn't define acl or xattr
support.

Over the last few years it does look like GNU tar provides support for
acls, but using PAX headers that are not standard.  Looking at the tar
source, it uses headers like SCHILY.acl.access, SCHILY.xattr etc.
Supporting those headers appears to require the acls and xattrs to be
converted to descriptive strings.  Currently BackupPC rsync treats acls and
xattr as binary blobs of data that it doesn't need to interpret.  So
unfortunately it would be quite difficult to add acl and xattr support to
BackupPC_tarCreate.

Craig

On Tue, May 19, 2020 at 11:49 PM  wrote:

>
> Now that I have btrfs snapshots set up, I decided to test a full
> backup and restore by comparing the snapshot with the backup-restore
> via rsync, using the following command:
> sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h myhost
> -n -1 -s myshare . | sudo tar --acls --selinux --xattrs -xvf -
>
> Interestingly, I found that everything worked *except* that it failed
> to copy any sockets or any extended attributes.
>
> 1. Sockets were not copied at all - but that is seemingly just a tar
>limitation since tar can't copy 'special' files.
>Indeed, backuppc-fuse shows that the files are actually backed up by
> backuppc
>
> 2. Extended attributes (ACLs and SELinux context) were *never* restored
>
>This seems to be a problem with 'BackupPC_tarCreate" since:
>a] Using tar alone, I can copy the files with all their extended
> attributes
> cd ; tar --acls --selinux --xattrs -cf - mac ) | tar xf -
>b] Similarly, raw rsync copies all the files faithfully
>rsync -navcxXAOH --delete  .
>c] Backuppc-fuse shows the extended attributes
>   (though that being said backuppc-fuse adds SELinux context attributes
>   to files that don't have them... perhaps there is something wrong
>   with the inheritance??
>
> Note: I tried adding ' --xargs --acls --selinux --xattrs'
> to $Conf{TarClientRestoreCmd} but that didn't help.
>
> So, 2 questions:
> 1. Why doesn't BackupPC_tarCreate restore the extended attributes?
> 2. Why does backuppc-fuse show extended attributes for files that
>don't have them originally?
>
> --
> Note: I am running ubuntu 18.04 with rsync 3.1.2 and backuppc 4.3.2
>
>


Re: [BackupPC-users] Testing full restore of backuppc... [private version]

2020-05-21 Thread Craig Barratt via BackupPC-users
Jeff,

The tar XferMethod doesn't capture acls and xattrs during backup.

Direct Restore in the CGI interface uses the XferMethod setting.

Craig

On Thu, May 21, 2020 at 10:05 PM  wrote:

> I also assume that tar doesn't capture ACLs and XATTRs for backup
> either then
>
> What transfer mechanism does the CGI restore use?
> Because when I use the direct download mode, it also doesn't restore
> the ACLs and XATTRs.
>
> In any case, I guess I really need to figure out how to use rsync for
> restore...
>
>
> Craig Barratt via BackupPC-users wrote at about 21:50:40 -0700 on
> Thursday, May 21, 2020:
>  > Jeff,
>  >
>  > Unfortunately BackupPC_tarCreate doesn't support acls.  Over the years
>  > different flavors of tar supported different archive formats for certain
>  > extensions (eg, long file names etc).  The POSIX standard for PAX
> headers
>  > unified some of those disparate formats, but didn't define acl or
> xattr
>  > support.
>  >
>  > Over the last few years it does look like GNU tar provides support for
>  > acls, but using PAX headers that are not standard.  Looking at the tar
>  > source, it uses headers like SCHILY.acl.access, SCHILY.xattr etc.
>  > Supporting those headers appears to require the acls and xattrs to be
>  > converted to descriptive strings.  Currently BackupPC rsync treats acls
> and
>  > xattr as binary blobs of data that it doesn't need to interpret.  So
>  > unfortunately it would be quite difficult to add acl and xattr support to
>  > BackupPC_tarCreate.
>  >
>  > Craig
>  >
>  > On Tue, May 19, 2020 at 11:49 PM  wrote:
>  >
>  > >
>  > > Now that I have btrfs snapshots set up, I decided to test a full
>  > > backup and restore by comparing the snapshot with the backup-restore
>  > > via rsync, using the following command:
>  > > sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate -h
> myhost
>  > > -n -1 -s myshare . | sudo tar --acls --selinux --xattrs -xvf -
>  > >
>  > > Interestingly, I found that everything worked *except* that it failed
>  > > to copy any sockets or any extended attributes.
>  > >
>  > > 1. Sockets were not copied at all - but that is seemingly just a tar
>  > >limitation since tar can't copy 'special' files.
>  > >Indeed, backuppc-fuse shows that the files are actually backed up
> by
>  > > backuppc
>  > >
>  > > 2. Extended attributes (ACLs and SELinux context) were *never*
> restored
>  > >
>  > >This seems to be a problem with 'BackupPC_tarCreate" since:
>  > >a] Using tar alone, I can copy the files with all their extended
>  > > attributes
>  > > cd ; tar --acls --selinux --xattrs -cf - mac ) | tar
> xf -
>  > >b] Similarly, raw rsync copies all the files faithfully
>  > >rsync -navcxXAOH --delete  .
>  > >c] Backuppc-fuse shows the extended attributes
>  > >   (though that being said backuppc-fuse adds SELinux context
> attributes
>  > >   to files that don't have them... perhaps there is something
> wrong
>  > >   with the inheritance??
>  > >
>  > > Note: I tried adding ' --xargs --acls --selinux --xattrs'
>  > > to $Conf{TarClientRestoreCmd} but that didn't help.
>  > >
>  > > So, 2 questions:
>  > > 1. Why doesn't BackupPC_tarCreate restore the extended attributes?
>  > > 2. Why does backuppc-fuse show extended attributes for files that
>  > >don't have them originally?
>  > >
>  > > --
>  > > Note: I am running ubuntu 18.04 with rsync 3.1.2 and backuppc 4.3.2
>  > >
>  > >
>


Re: [BackupPC-users] how to install SCGI , exactly??

2020-05-23 Thread Craig Barratt via BackupPC-users
There are two different components that have to be installed, one for perl
(the client end) and another for apache (the server end).

The perl module SCGI needs to be installed, which can be done via cpan.  If
cpan doesn't work you can install it manually from the tarball, which can
be found in many places (eg, on CPAN).

Second, apache needs the scgi module (typically called mod-scgi) installed
and enabled.  As Doug mentions that can be done using your favorite package
manager.
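
On a Debian-family system those two steps might look like this (the package name `libapache2-mod-scgi` is the one Doug reports below; the `a2enmod` module name is an assumption to verify):

```shell
sudo cpan SCGI                            # perl-side SCGI module
sudo apt-get install libapache2-mod-scgi  # apache-side mod-scgi
sudo a2enmod scgi                         # enable the apache module
sudo systemctl restart apache2
```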

Craig

On Fri, May 22, 2020 at 10:39 AM Doug Lytle  wrote:

> >>> I am currently running BackupPC  version 4.3.2 on Ubuntu 18.04.4 LTS .
> >>> Everything seems to be working perfectly, except this pesky
> "2020-05-22 10:02:30 scgi : BackupPC_Admin_SCGI: can't load perl SCGI
> module - install via CPAN; exiting in 60 seconds" error
>
> Mike,
>
> The only thing I have on my backuppc server is the same as yours
>
> dpkg -l|grep -i scgi
>
> ii  libapache2-mod-scgi   1.13-1.1amd64
> Apache module implementing the SCGI protocol
>
>
> How are you trying to access the admin page?
>
> I don't use scgi in my URL.  I use
>
> http://192.168.145.99/backuppc
>
> The description of the SCGI admin is
>
> # BackupPC_Admin_SCGI: An SCGI implementation of the BackupPC
> #  admin interface.
>
> Which is something I don't use, just the CGI version.
>
> Doug
>
>


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-23 Thread Craig Barratt via BackupPC-users
While I agree with Michael that restoring sockets isn't that useful (since
they are only created by a process that is receiving connections on a
unix-domain socket), I did fix the bug that causes them to be incorrectly
restored by rsync_bpc.

I'm quite unfamiliar with selinux attributes.  Is it possible to add
selinux attributes to a file (with setfilecon) when selinux is disabled?
Unfortunately my attempt to turn selinux on didn't go well - my machine
didn't boot into a usable state, so I'm not willing to turn on selinux.

Craig

On Fri, May 22, 2020 at 8:26 PM Michael Stowe <
michael.st...@member.mensa.org> wrote:

> On 2020-05-22 16:49, backu...@kosowsky.org wrote:
> > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
> >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
> >  > > 1. Sockets are restored as regular files not special files -->
> > BUG?
> >  >
> >  > Why would one back up a socket?
> > I am testing the fidelity of the backup/restore cycle..
> >>
> >  > If you really think this is sensible, you should be able to
> > accomplish
> >  > it with "--devices --specials" as part of your rsync command lines.
> >  >  From the symptoms, you have this in backup but not restore.
> >
> > Actually, in the original text (which you snipped), I shared my
> > rsync_bpc commands for both 'dump' and 'restore', which include the
> > '-D' flag (actually it's the default in the config.pl for both rsync
> > dump and restore)... and '-D' is *equivalent* to '--devices
> > --specials'
> >
> > And since I suspected some readers might miss that, I even noted in
> > the text that:
> >"Also, special files (--specials) should be included under the -D
> >flag that I use for both rsync dump and restore commands (see
> >below)"
> >
> > Hence, why I suggested this is a *BUG* vs. user error or lack of
> > knowledge :)
>
> You've mistaken my point -- sure, the -D flag is there, but it's
> behaving like it isn't.  Let's review:
>
> --devices
> This option causes rsync to transfer character and block  device
> files  to  the  remote  system  to recreate these devices.  This
> option has no effect if the receiving rsync is not  run  as  the
> super-user (see also the --super and --fake-super options).
>
> Naturally this begs the question as to whether you're running it as the
> super-user, and if you've seen the options as referred to in the man
> page, which I've quoted above.
>


Re: [BackupPC-users] BUG? Using --omit-dir-times in rsync backup sets all dir dates to beginning of Epoch

2020-05-23 Thread Craig Barratt via BackupPC-users
I wasn't previously familiar with the --omit-dir-times option.  As you
discovered, the low-level parts of rsync_bpc don't 100% faithfully mimic
the native linux system calls.  In particular, the mkdir() in rsync_bpc
doesn't set the mtime, since rsync updates it later (assuming
--omit-dir-times is not specified).

It would be a one-line change to set the mtime to the current time in
bpc_mkdir() in bpc_sysCalls.c.  However, I don't think it makes sense to
"fix" it, since a backup shouldn't add metadata that changes each time you
backup some data that hasn't changed.
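
For context, a timestamp of 0 is simply the start of the Unix epoch; any file whose mtime is literally 0 displays the same way:

```shell
# A file stamped with mtime 0 displays as the start of the Unix epoch
# (Jan 1 1970 UTC), matching what the unfixed bpc mkdir() produces.
f=$(mktemp)
touch -d @0 "$f"
date -u -r "$f" +%Y-%m-%d    # prints 1970-01-01
rm -f "$f"
```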

Craig

On Fri, May 22, 2020 at 8:17 PM Michael Stowe <
michael.st...@member.mensa.org> wrote:

> On 2020-05-22 16:52, backu...@kosowsky.org wrote:
> > Michael Stowe wrote at about 23:46:54 + on Friday, May 22, 2020:
> >  > On 2020-05-22 16:19, backu...@kosowsky.org wrote:
> >  > > Michael Stowe wrote at about 22:24:13 + on Friday, May 22,
> > 2020:
> >  > >  > On 2020-05-22 09:15, backu...@kosowsky.org wrote:
> >  > >  > What it does is omit directories from the modification times
> > that it
> >  > >  > sets.  In other words, you're telling it not to set the times
> > on
> >  > >  > directories it copies.  The beginning of the epoch is pretty
> >  > > reasonable
> >  > >  > for directories which have no specific time set.
> >  > >  >
> >  > >
> >  > > Actually, at least the manpage is unclear.
>  > > And *differs* from the default behavior of native rsync (at least
> > on
> >  > > Ubuntu) that sets the dir time to the current time -- which is
> > more
> >  > > reasonable than some arbitrary epoch = 0 time.
> >  > >
> >  > > That is what I would have expected and I believe should be the
> > default
> >  > > behavior...
> >  > >
> >  > >  > This option has no implications for which directories are
> > selected
> >  > > to be
> >  > >  > copied.
> >  >
> >  > Unset is unset, it's not the option to use if you want the directory
> >  > modification time set.
> >
> > Regardless, behavior should be consistent with normal rsync...
> >
> > If you can show me a standard *nix version of rsync that uses Epoch as
> > the default then I would retract my point... but otherwise Epoch is
> > totally arbitrary and illogical... while at least the current time has
> > a good rationale... Choosing 1/1/1970 not so much...
>
> It's not that "epoch is the default"; it's that epoch is what a
> timestamp of 0 means.  When you tell rsync not to set the timestamps,
> it doesn't.
>
> If you want to touch the directories and update their timestamps to the
> current time, you can do that, but it's an odd thing to expect rsync to
> take care of for you when you explicitly tell it not to.
>
>


Re: [BackupPC-users] backuppc-fuse hard links

2020-05-23 Thread Craig Barratt via BackupPC-users
Jeff,

I remember looking into this long ago, and I recall that fuse makes up its
own fake inode numbers, which creates exactly the problem you noticed -
hardlinked files don't show the same inode number.  The Git issue you
mentioned reports that problem.
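
The hardlink check reduces to comparing inode numbers; a small sketch that works on any mount (on a backuppc-fuse mount the pigz/unpigz pair from the example would fail this test even though the link count is right):

```shell
# same_inode: true when both paths report the same inode number,
# ie, are hardlinks of one file on the same filesystem.
same_inode() { [ "$(stat -c %i "$1")" = "$(stat -c %i "$2")" ]; }

tmp=$(mktemp -d)
touch "$tmp/a" && ln "$tmp/a" "$tmp/b"
same_inode "$tmp/a" "$tmp/b" && echo "hardlinked"   # prints "hardlinked"
rm -rf "$tmp"
```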

Craig

On Sat, May 23, 2020 at 8:52 PM  wrote:

> It seems like backuppc-fuse correctly lists the number of hard links
> for each file *but* the corresponding inodes are not numbered the
> same.
>
> For example:
>
> #Native file system
> ls -il /usr/bin/pigz /usr/bin/unpigz
> 564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/pigz*
> 564544 -rwxr-xr-x 2 root root 116944 Dec 27  2017 /usr/bin/unpigz*
>
> #Backuppc-fuse version
> ls -il /mnt/backuppc/consult/root/{/usr/bin/pigz,/usr/bin/unpigz}
> 386328 -rwxr-xr-x 2 root root 116944 Dec 27  2017
> /mnt/backuppc/myhost/root/usr/bin/pigz*
> 827077 -rwxr-xr-x 2 root root 116944 Dec 27  2017
> /mnt/backuppc/myhost/root/usr/bin/unpigz*
>
> Is there any way to fix this???
>
> I couldn't find much on Google, but it seems like there is a low and a
> high level inode notion in fuse filesystems and that the low-level one
> has the right inode number. See:
> https://github.com/libfuse/libfuse/issues/79
>
>


Re: [BackupPC-users] Testing full restore of backuppc... MULTIPLE BUGS???

2020-05-24 Thread Craig Barratt via BackupPC-users
Jeff,

I did set the policy to permissive.  If I get some time I'll try again.

Craig

On Sat, May 23, 2020 at 10:30 PM  wrote:

> Thanks Craig.
> The --specials now works (and I agree with both you and Michael that
> it is not useful... but it validates that the restore is 'perfect' as
> far as rsync is concerned)
>
> Regarding selinux, you can turn it on in 'permissive' (non-enforcing)
> mode in which case it shouldn't do anything other than create messages
> of selinux policy violations... but it shouldn't block (or otherwise
> affect any parts of your running system)
>
> Check out the following for some details:
>
> https://docs.fedoraproject.org/en-US/quick-docs/changing-selinux-states-and-modes/
>
> Craig Barratt wrote at about 18:26:34 -0700 on Saturday, May 23, 2020:
>  > While I agree with Michael that restoring sockets isn't that useful
> (since
>  > they are only created by a process that is receiving connections on a
>  > unix-domain socket), I did fix the bug
>  > <
> https://github.com/backuppc/rsync-bpc/commit/3802747ab70c8d1a41f051ac9610b899352b5271
> >
>  > that causes them to be incorrectly restored by rsync_bpc.
>  >
>  > I'm quite unfamiliar with selinux attributes.  Is it possible to add
>  > selinux attributes to a file (with setfilecon) when selinux is disabled?
>  > Unfortunately my attempt to turn selinux on didn't go well - my machine
>  > didn't boot into a usable state, so I'm not willing to turn on selinux.
>  >
>  > Craig
>  >
>  > On Fri, May 22, 2020 at 8:26 PM Michael Stowe <
>  > michael.st...@member.mensa.org> wrote:
>  >
>  > > On 2020-05-22 16:49, backu...@kosowsky.org wrote:
>  > > > Michael Stowe wrote at about 22:18:50 + on Friday, May 22, 2020:
>  > > >  > On 2020-05-22 11:42, backu...@kosowsky.org wrote:
>  > > >  > > 1. Sockets are restored as regular files not special files -->
>  > > > BUG?
>  > > >  >
>  > > >  > Why would one back up a socket?
>  > > > I am testing the fidelity of the backup/restore cycle..
>  > > >>
>  > > >  > If you really think this is sensible, you should be able to
>  > > > accomplish
>  > > >  > it with "--devices --specials" as part of your rsync command
> lines.
>  > > >  >  From the symptoms, you have this in backup but not restore.
>  > > >
>  > > > Actually, in the original text (which you snipped), I shared my
>  > > > rsync_bpc commands for both 'dump' and 'restore', which include the
>  > > > '-D' flag (actually it's the default in the config.pl for both
> rsync
>  > > > dump and restore)... and '-D' is *equivalent* to '--devices
>  > > > --specials'
>  > > >
>  > > > And since I suspected some readers might miss that, I even noted in
>  > > > the text that:
>  > > >"Also, special files (--specials) should be included under the -D
>  > > >flag that I use for both rsync dump and restore commands (see
>  > > >below)"
>  > > >
>  > > > Hence, why I suggested this is a *BUG* vs. user error or lack of
>  > > > knowledge :)
>  > >
>  > > You've mistaken my point -- sure, the -D flag is there, but it's
>  > > behaving like it isn't.  Let's review:
>  > >
>  > > --devices
>  > > This option causes rsync to transfer character and block
> device
>  > > files  to  the  remote  system  to recreate these devices.
> This
>  > > option has no effect if the receiving rsync is not  run  as
> the
>  > > super-user (see also the --super and --fake-super options).
>  > >
>  > > Naturally this begs the question as to whether you're running it as
> the
>  > > super-user, and if you've seen the options as referred to in the man
>  > > page, which I've quoted above.
>  > >
>


Re: [BackupPC-users] Problems with latest rsync-bpc 3.1.3 - zlib??

2020-05-24 Thread Craig Barratt via BackupPC-users
Thanks for the updates.  Yes, rsync's included zlib isn't compatible with
system zlib.  However, since you are not using the -z option, I don't think
that's the issue.

Can you try rsync-bpc 3.1.2.1?  It has more testing than 3.1.3.beta0.

Craig

On Sun, May 24, 2020 at 7:43 PM  wrote:

> Upgrading to the latest rsync-bpc 3.1.3 fixed the problem with
> specials.
> And restores all seemed to work last night, until I tried dumps today.
>
> Now all my scheduled backups fail with error message:
> rsync error: error in rsync protocol data stream (code 12) at
> io.c(226) [Receiver=3.1.3.beta0]
>
> Also, when I run BackupPC_dump, it hangs at the beginning:
>   Running: /usr/bin/rsync_bpc --bpc-top-dir /var/lib/backuppc
> --bpc-host-name testmachine --bpc-share-name /usr/local/bin --bpc-bkup-num
> 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 6 -e /usr/bin/sudo\ -h
> --rsync-path=/usr/bin/rsync --super --recursive --protect-args
> --numeric-ids --perms --owner --group -D --times --links --hard-links
> --delete --delete-excluded --one-file-system --partial --log-format=log:\
> %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls --xattrs
> --timeout=72000 myhost:/usr/local/bin/ /
>   full backup started for directory /usr/local/bin
>   started full dump, share=/usr/local/bin
>   Xfer PIDs are now 7793
>   xferPids 7793
>   This is the rsync child about to exec /usr/bin/rsync_bpc
>   cmdExecOrEval: about to exec /usr/bin/rsync_bpc --bpc-top-dir
> /var/lib/backuppc --bpc-host-name testmachine --bpc-share-name
> /usr/local/bin --bpc-bkup-num 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1
> --bpc-bkup-prevcomp -1 --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level
> 6 -e /usr/bin/sudo\ -h --rsync-path=/usr/bin/rsync --super --recursive
> --protect-args --numeric-ids --perms --owner --group -D --times --links
> --hard-links --delete --delete-excluded --one-file-system --partial
> --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats --checksum --acls
> --xattrs --timeout=72000 myhost:/usr/local/bin/ /
> bpc_path_create(/var/lib/backuppc/pc/testmachine/0)
>   bpc_attrib_backwardCompat: WriteOldStyleAttribFile = 0,
>   KeepOldAttribFiles = 0
>
> This problem resolves when I downgrade back to rsync-bpc 3.0.9.
>
> Googling suggest this might have something to do with internal
> vs. external zlib.h
>
> I tried configuring with --with-included-zlib=yes (default) and =no.
> But both had the same error.
>
> Note that when =yes, in order to compile, I had to change:
>  #include <zlib.h> --> #include "zlib/zlib.h"
> in token.c (and also changed for consistency in batch.c and options.c)
> since the symbol Z_INSERT_ONLY was not defined in my /usr/include/zlib.h
>
> Any thoughts on what I need to do to make this work?
>
>


Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread Craig Barratt via BackupPC-users
Jeff,

Thanks for figuring that out.  I pushed a fix
(https://github.com/backuppc/rsync-bpc/commit/96e890fc3e5bb53f6618bd8650e8400f355b243a)
so the warning doesn't get printed on zero-length files.

Craig
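(As an aside, the digest quoted in the report below, d41d8cd98f00b204e9800998ecf8427e, is the well-known MD5 of zero bytes, which is easy to confirm with coreutils:)

```shell
# MD5 of empty input -- the digest every zero-length file hashes to.
md5sum /dev/null
# -> d41d8cd98f00b204e9800998ecf8427e  /dev/null
```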

On Mon, May 25, 2020 at 8:08 AM  wrote:

> Presumably the problem is in rsync-bpc: bpc_sysCalls.c
>
> int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct
> file_struct *rsyncFile,
>char *file_sum, off_t fileSize)
> {
> bpc_attrib_file *fileOrig, *file;
> char poolPath[BPC_MAXPATHLEN];
>
> if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0, 0)) ) {
> /*
>  * Hmmm.  The file doesn't exist, but we got deltas suggesting the
> file is
>  * unchanged.  So that means the generator found a matching pool
> file.
>  * Let's try the same thing.
>  */
> if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
> || !(fileOrig = bpc_attribCache_getFile(&acNew, fileName,
> 0, 0)) ) {
> bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't
> exist\n", fileName);
> return -1;
> }
> }
> ...
>
> But the zero length file (with md5sum
> d41d8cd98f00b204e9800998ecf8427e) is not in the pool.
>
> Presumably, one should add a check to eliminate...
>
> backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May 25,
> 2020:
>  > Seems like these are all the zero length files...
>  > Could it be that backuppc is checking for file length rather than file
> existence???
>  >
>  >
>  > Note: when I delete the backup and run it again, the *exact* same
>  > "file doesn't exist' errors reappears (even though a new btrfs snapshot
>  > has been created).  So I am pretty sure it is not a filesystem issue
>  > but rather likely a bug in backuppc...
>  >
>  >
>  > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday, May 25,
> 2020:
>  >  > I am still occasionally but not at all consistently getting errors
> of form:
>  >  >
>  
> Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
> file doesn't exist
>  >  >
>  >  > Here is some background:
>  >  > - Only seems to occur on full backups
>  >  > - These messages correspond to files that have changed or been added
> since the previous backup
>  >  > - However, they only occur on some incrementals and only for a small
> subset of the changed files, even when they do occur
>  >  > - The files are still backed up properly
>  >  > - The files never 'vanished' or changed since I am using read-only
> btrfs snapshots
>  >  > - My system is rock-solid and I have not had any other file system
> troubles
>  >  >
>  >  > Is this really an error?
>  >  > What is causing it?
>  >  > Why does it happen seemingly randomly?
>  >  >
>  >  >
>  >
>  >
>
>
>


Re: [BackupPC-users] Spurious "file doesn't exist" messages... BUG?

2020-05-25 Thread Craig Barratt via BackupPC-users
I'm not sure why you are having problems with rsync-bpc 3.1.2 and
3.1.3-beta0.  Maybe try higher levels of logging and adding -vv to the
remote rsync.  How quickly does it fail?  Is it some ssh-related issue?

I finally was able to get ubuntu to boot on my machine with selinux turned
on in permissive mode, although it's clearly not configured correctly.  I
can see default selinux file attributes with ls -Z.  And I can change them
with chcon or semanage fcontext.  But the actual files I set don't have any
xattr settings.  I assume all those commands are doing is updating files
below /etc/selinux/targeted/contexts/files.  Directly running rsync -X to
copy those files doesn't preserve their selinux attributes.

Anyhow, I separately reconfirmed that a user xattr setting on a regular
file is correctly backed up and restored.

Craig

On Mon, May 25, 2020 at 3:43 PM  wrote:

> Thanks Craig.
> This problem was bothering me for the longest of times... but I always
> assumed it was due to files changing or some other spurious factors...
> But now that I am backing up against fixed snapshots, it has become
> easier to one-by-one track down unexpected bugs & error messages...
>
> The only remaining issue I see now is with SELinux extended attributes
> :)
>
> Plus, the challenges with hangs on rsync-bpc 3.1.2 and 3.1.3-beta0
>
> Craig Barratt via BackupPC-users wrote at about 13:08:31 -0700 on Monday,
> May 25, 2020:
>  > Jeff,
>  >
>  > Thanks for figuring that out.  I pushed a fix
>  > <
> https://github.com/backuppc/rsync-bpc/commit/96e890fc3e5bb53f6618bd8650e8400f355b243a
> >
>  > so the warning doesn't get printed on zero-length files.
>  >
>  > Craig
>  >
>  > On Mon, May 25, 2020 at 8:08 AM  wrote:
>  >
>  > > Presumably the problem is in rsync-bpc: bpc_sysCalls.c
>  > >
>  > > int bpc_sysCall_checkFileMatch(char *fileName, char *tmpName, struct
>  > > file_struct *rsyncFile,
>  > >char *file_sum, off_t fileSize)
>  > > {
>  > > bpc_attrib_file *fileOrig, *file;
>  > > char poolPath[BPC_MAXPATHLEN];
>  > >
>  > > if ( !(fileOrig = bpc_attribCache_getFile(&acNew, fileName, 0,
> 0)) ) {
>  > > /*
>  > >  * Hmmm.  The file doesn't exist, but we got deltas
> suggesting the
>  > > file is
>  > >  * unchanged.  So that means the generator found a matching
> pool
>  > > file.
>  > >  * Let's try the same thing.
>  > >  */
>  > > if ( bpc_sysCall_poolFileCheck(fileName, rsyncFile)
>  > > || !(fileOrig = bpc_attribCache_getFile(&acNew,
> fileName,
>  > > 0, 0)) ) {
>  > > bpc_logErrf("bpc_sysCall_checkFileMatch(%s): file doesn't
>  > > exist\n", fileName);
>  > > return -1;
>  > > }
>  > > }
>  > > ...
>  > >
>  > > But the zero length file (with md5sum
>  > > d41d8cd98f00b204e9800998ecf8427e) is not in the pool.
>  > >
>  > > Presumably, one should add a check to eliminate...
>  > >
>  > > backu...@kosowsky.org wrote at about 10:36:55 -0400 on Monday, May
> 25,
>  > > 2020:
>  > >  > Seems like these are all the zero length files...
>  > >  > Could it be that backuppc is checking for file length rather than
> file
>  > > existence???
>  > >  >
>  > >  >
>  > >  > Note: when I delete the backup and run it again, the *exact* same
>  > >  > "file doesn't exist' errors reappears (even though a new btrfs
> snapshot
>  > >  > has been created).  So I am pretty sure it is not a filesystem
> issue
>  > >  > but rather likely a bug in backuppc...
>  > >  >
>  > >  >
>  > >  > backu...@kosowsky.org wrote at about 10:00:31 -0400 on Monday,
> May 25,
>  > > 2020:
>  > >  >  > I am still occasionally but not at all consistently getting
> errors
>  > > of form:
>  > >  >  >
>  > >
> Rbpc_sysCall_checkFileMatch(var/lib/spamassassin/3.004002/updates_spamassassin_org/STATISTICS-set3-72_scores.cf.txt):
>  > > file doesn't exist
>  > >  >  >
>  > >  >  > Here is some background:
>  > >  >  > - Only seems to occur on full backups
>  > >  >  > - These messages correspond to files that have changed or been
> added
>  > > since the previous backup
>  > >  >  > - However, they

Re: [BackupPC-users] how to install SCGI , exactly??

2020-05-26 Thread Craig Barratt via BackupPC-users
Is there any chance you have two different perls installed?  Check the
first line of bin/BackupPC to see what version it is running.

What is the output from these commands?

which perl
perl -e 'use SCGI; print $SCGI::VERSION;'

Craig

On Tue, May 26, 2020 at 12:51 PM Michael Walker - Rotech Motor Ltd. <
m...@rotech.ca> wrote:

> Well... CPAN certainly wasn't working. So I tried installing from the
> tarball:
>
>
> root@redacted:/usr/share/SCGI# perl Build.PL
> Created MYMETA.yml and MYMETA.json
> Creating new 'Build' script for 'SCGI' version '0.6'
> root@redacted:/usr/share/SCGI# ./Build
> Building SCGI
> root@redacted:/usr/share/SCGI# ./Build test
> t/blocking.t .. ok
> t/non-blocking.t .. ok
> t/pod-coverage.t .. skipped: Test::Pod::Coverage 1.00 required for testing
> POD coverage
> t/pod.t ... skipped: Test::Pod 1.00 required for testing POD
> All tests successful.
> Files=4, Tests=20,  1 wallclock secs ( 0.03 usr  0.01 sys +  0.52 cusr
> 0.10 csys =  0.66 CPU)
> Result: PASS
> root@redacted:/usr/share/SCGI# ./Build install
> Building SCGI
> Installing /usr/local/share/perl/5.26.1/SCGI.pm
> Installing /usr/local/share/perl/5.26.1/SCGI/Request.pm
> Installing /usr/local/man/man3/SCGI.3pm
> Installing /usr/local/man/man3/SCGI::Request.3pm
> root@redacted:/usr/share/SCGI#
>
> So.. it is installed, and system rebooted, but I continue to get the error:
>
> 2020-05-26 12:37:47  scgi : BackupPC_Admin_SCGI: can't load perl SCGI module 
> - install via CPAN; exiting in 60 seconds
> 2020-05-26 12:37:47 Running BackupPC_Admin_SCGI (pid=1751)
>
> Is there any information published detailing how to get BackupPC to find and 
> load the freshly-installed SCGI module?
>
> The configuration instructions online are rife with options, and frankly it's 
> mindbogglingly complex for someone like me who is trying just to install a 
> 'ready-made backup system'. (Right now I am looking at 
> https://backuppc.github.io/backuppc/BackupPC.html#Step-9:-CGI-interface )
>
> The amount of time spent analyzing all this stuff might be better spent just 
> building something from scratch :(
>
>
> -- Original Message --
> From: "Craig Barratt via BackupPC-users" <
> backuppc-users@lists.sourceforge.net>
> To: "General list for user discussion, questions and support" <
> backuppc-users@lists.sourceforge.net>
> Cc: "Craig Barratt" 
> Sent: 5/23/2020 2:51:54 PM
> Subject: Re: [BackupPC-users] how to install SCGI , exactly??
>
> There are two different components that have to be installed, one for perl
> (the client end) and another for apache (the server end).
>
> The perl module SCGI needs to be installed, which can be done via cpan.
> If cpan doesn't work you can install it manually from the tarball, which
> can be found in many places (eg, here
> <http://www.namesdir.com/mirrors/cpan/authors/id/V/VI/VIPERCODE/SCGI-0.6.tar.gz>
> ).
>
> Second, apache needs the scgi module (typically called mod-scgi) installed
> and enabled.  As Doug mentions that can be done using your favorite package
> manager.
>
> Craig
>
>
>
>


Re: [BackupPC-users] Regarding Transport mechanism limitations

2020-06-02 Thread Craig Barratt via BackupPC-users
Kris is correct - the limits in 4.x (and 3.x) are much higher than
suggested by that very old FAQ, certainly for rsync.

Craig

On Tue, Jun 2, 2020 at 11:44 AM Kris Lou via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:

> Updated Docs (4.x) are here:   https://backuppc.github.io/backuppc/
>
> And no problems here with larger files (18GB) using rsync.exe from Cygwin.
>


Re: [BackupPC-users] Thank you far an easier installation

2020-06-02 Thread Craig Barratt via BackupPC-users
Thanks to folks who have developed packages for several linux flavors, and
also for updating the wiki.

It would be great to get additional volunteers to develop and maintain
packages for the remaining linux distros, and submit them upstream.  If you
are interested please let me know.

Craig

On Tue, Jun 2, 2020 at 11:14 AM Bob Wooden  wrote:

> For those of you who may not be aware of this.
>
> In the process of building a replacement machine (hardware update) I
> have discovered that CentOS 8, the EPEL repo offers BackupPC 4.3.2. (The
> CentOS 7 EPEL repo offers BackupPC 3.3.2.)
>
> After years of building from source, building *.deb packages and various
> other build processes, one of the Linux distros has "caught up" to a
> current version of BackupPC (4.3.2) in one of their repos.
>
> (Not throwing "stones" at any of the other distros. Just sharing
> information.)
>
> --
> ^^
>
> Bob Wooden
>
>
>
>


Re: [BackupPC-users] Regarding Transport mechanism limitations

2020-06-02 Thread Craig Barratt via BackupPC-users
Ok, I'll remove that.

There isn't an 18GB limit.  Kris was simply saying that 18GB files work
fine.

Craig

On Tue, Jun 2, 2020 at 12:41 PM  wrote:

> The url in "Some Limitations" section of updated docs are pointing to old
> docs
>
> Thanks. Is there a way to work around 18 GB limit ?
>
> On June 3, 2020 12:13:12 AM GMT+05:30, Kris Lou via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
> >Updated Docs (4.x) are here:   https://backuppc.github.io/backuppc/
> >
> >And no problems here with larger files (18GB) using rsync.exe from
> >Cygwin.
>
>
>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-07 Thread Craig Barratt via BackupPC-users
Jeff,

Yes, that's correct.

In v4 a full backup using --checksum will compare all the metadata and
full-file checksum.  Any file that matches all those will be presumed
unchanged.  In v4 the server load for a full is very low, since all that
meta data (including the full-file checksum) is stored and easily accessed
without needing to look at the file contents at all.  An incremental backup
just checks all the metadata and not the full-file checksum, which is fast
on both the server and client side.  V4 also supports incremental-only (by
periodically filling a backup), in cases where that is sufficient.
However, that's more risky and not the default.

In v3, a full backup checks the block-based deltas and full-file checksum
for every file.  That's a lot more work and seems unnecessary.  You can get
that behavior in v4 too by replacing --checksums with --ignore-times, but
it's a lot more expensive on the server side since v4 doesn't cache the
block and full-file checksums.

While md5 collisions can be constructed with various properties, the chance
of a random file change creating a hash collision is 2^-128, as you note.

Craig
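The metadata quick-check blind spot described above is easy to reproduce outside BackupPC: two files with the same size and mtime but different contents are indistinguishable to a size+mtime comparison, and only a content checksum separates them. A small illustration with plain coreutils (no rsync required; paths are throwaway temp files):

```shell
workdir=$(mktemp -d)

# Same size, same mtime, different contents -- e.g. a rebuilt package
# whose changelog date was not bumped.
printf 'AAAA' > "$workdir/old"
printf 'AAAB' > "$workdir/new"
touch -d '2020-06-07 00:00:00 UTC' "$workdir/old" "$workdir/new"

# Metadata-only comparison (what an incremental relies on):
stat -c '%s %Y' "$workdir/old" "$workdir/new"   # two identical lines

# Full-file checksums (what a v4 full with --checksum adds):
md5sum "$workdir/old" "$workdir/new"            # two different digests

rm -rf "$workdir"
```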


On Sun, Jun 7, 2020 at 9:11 PM  wrote:

> Silly me... the '--checksum' is only for 'Full' so that explains the
> difference between 'incrementals' and 'fulls'... along with presumably
> why my case wasn't caught by an incremental.
>
> I still don't fully understand the comment referencing V3 and replacing
> --checksum with --ignore-times.
>
> Is the point that v3 compared both full file and block
> checksums while in v4 --checksum only compares full file checksums?
> And so v3 is more conservative since there might be checksum
> collisions of 2 non-identical files at the file-checksum level that
> would be unmasked by checksum differences at the block level?
> (presumably a very rare event -- presumably ~2^-128 since the hash
> itself is 128 bits and the times and size are also checked)
>
> "" wrote at about 23:54:14 -0400 on Sunday, June 7, 2020:
>  > Can someone clarify how --checksum works in v4?
>  > And specifically, when could it get 'fooled' thinking 2 files are
>  > identical when they really aren't...
>  >
>  > According to config.pl:
>  >
>  >The --checksum argument causes the client to send full-file
>  >checksum for every file (meaning the client reads every file and
>  >computes the checksum, which is sent with the file list).  On the
>  >server, rsync_bpc will skip any files that have a matching
>  >full-file checksum, and size, mtime and number of hardlinks.  Any
>  >file that has different attributes will be updating using the block
>  >rsync algorithm.
>  >
>  >In V3, full backups applied the block rsync algorithm to every
>  >file, which is a lot slower but a bit more conservative.  To get
>  >that behavior, replace --checksum with --ignore-times.
>  >
>  >
>  > While according to the 'rsync' man pages:
>  >-c, --checksum
>  >This changes the way rsync checks if the files have been changed
>  >and are in need of a transfer.  Without this option, rsync uses a
>  >"quick check" that (by default) checks if each file’s size and time
>  >of last modification match between the sender and receiver.  This
>  >option changes this to compare a 128-bit checksum for each file
>  >that has a matching size.  Generating the checksums means that both
>  >sides will expend a lot of disk I/O reading all the data in the
>  >files in the transfer (and this is prior to any reading that will
>  >be done to transfer changed files), so this can slow things down
>  >significantly.
>  >
>  >
>  > Note by default:
>  > $Conf{RsyncFullArgsExtra} = ['--checksum'];
>  >
>  > So in v4:
>  > - Do incrementals and fulls differ in how/when checksums are used?
>  > - For each case, what situations would cause BackupPC to be fooled?
>  > - Specifically, I don't understand the comment of replacing --checksum
>  >   with --ignore-times since the rsync definition of --checksum
>  >   says that it doesn't look at times but a 128-bit file checksum.
>  >
>  > The reason I ask is that I recompiled a debian package (happens to be
>  > libbackuppc-xs-perl) to pull in the latest version 0.60. But I forgot
>  > to change the date in the Changelog. When installing the package, the
>  > file dates were the same even though the content and file md5sums for
>  > some files had changed.
>  >
>  > Specifically,
>  > /usr/lib/x86_64-linux-gnu/perl5/5.26/auto/BackupPC/XS/XS.so
>  > had the same size (and date due to my mistake) but a different file
>  > md5sum.
>  >
>  > And an incremental backup didn't detect this difference...
>  >
>  >
>

Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Guillermo,

Yes, that's an excellent point.  Actually v3 suffers from this too since,
with cached block and full-file checksums, it doesn't recheck the file
contents either.  However, v3 had a
parameter $Conf{RsyncCsumCacheVerifyProb} (default 0.01 == 1%) that caused
rsync to verify that random fraction of the file contents.  Other xfer
methods (eg, tar and smb) always do a full-file compare during a full, so
there shouldn't be undetected server-side corruption with those XferMethods.

Thanks for the script.  While it's helpful to check the pool, it isn't
obvious how to fix any errors.  So it's probably best to have rsync-bpc
implement the old $Conf{RsyncCsumCacheVerifyProb} setting.  It could do
that by randomly skipping the --checksum short-circuit during a full.  For
that fraction of files, it would do a full rsync check and update, which
would update the pool file if they are not identical.

If folks agree with that approach, that's what I'll implement.

Craig
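For context, the verification being discussed boils down to recomputing each pool file's digest and comparing it against the name the file is stored under (in v4 the pool file name is derived from the content digest). A toy sketch of that scrub loop, using an uncompressed stand-in pool -- real cpool files are compressed, so an actual pass would uncompress first (e.g. with BackupPC_zcat), and the paths here are made up:

```shell
pool=$(mktemp -d)

# Toy content-addressed pool: store each file under the MD5 of its data.
for data in alpha beta gamma; do
    printf '%s' "$data" > "$pool/$(printf '%s' "$data" | md5sum | cut -d' ' -f1)"
done

# Simulate bit rot in one entry.
victim=$(ls "$pool" | head -n 1)
printf 'ROT' > "$pool/$victim"

# Scrub: a file is healthy iff its digest still matches its name.
for f in "$pool"/*; do
    name=$(basename "$f")
    sum=$(md5sum < "$f" | cut -d' ' -f1)
    if [ "$sum" = "$name" ]; then echo "OK      $name"; else echo "CORRUPT $name"; fi
done
rm -rf "$pool"
```

On a real pool the loop would also have to decide what to do with a mismatch, which is exactly the "how to fix any errors" question raised above.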

On Mon, Jun 8, 2020 at 10:16 AM Guillermo Rozas 
wrote:

> I've attached the script I'm using. It's very rough, so use at your own
> risk!
>
> I run it daily checking 4 folders of the pool per day, sequentially,
> so it takes 32 days to check them all. You can modify the external
> loop to change this. The last checked folder is saved in an auxiliary
> file.
>
> The checksum is done uncompressing the files in the pool using
> zlib-flate (line 25), but it can be changed to pigz or BackupPC_zcat.
> On my severely CPU-limited server (Banana Pi) both pigz and zlib-flate
> are much faster than BackupPC_zcat, they take around a quarter of the
> time to check the files (pigz is marginally faster than zlib-flate).
> On the other hand, BackupPC_zcat puts the lowest load on the CPU,
> zlib-flate's load is 30-35% higher, and pigz's is a whooping 80-100%
> higher.
>
> However, as BackupPC_zcat produces slightly modified gzip files, there
> is a (very) small chance that a BackupPC_zcat compressed file is not
> properly uncompressed by the other two (line 28 in the script). If
> that happens, you need to re-check every zlib-flate or pigz failure
> with BackupPC_zcat before calling it a real error. I think this gets
> the best balance between load on the system and time spent checking
> the pool (at least for my server and pool...).
>
> Best regards,
> Guillermo
>
>
> On Mon, Jun 8, 2020 at 1:28 PM  wrote:
> >
> > Good point...
> > Craig - would it make sense to add a parameter to BackupPC_nightly
> > that would check a user-settable percentage of the files each night,
> > say NightlyChecksumPercent. So if set to 3%, the pool would be checked
> > (sequentially) over the period of ~1 month
> >
> > Guillermo Rozas wrote at about 11:12:39 -0300 on Monday, June 8, 2020:
> >  > Yes, I wouldn't worry about collisions by chance.
> >  >
> >  > However, there is a second aspect that is not covered here: if you
> >  > rely only on saved checksums in the server, it will not check again
> >  > unmodified pool files. This risks you missing file system corruption
> >  > or bit rot in the backup files that were previously caught by the V3
> >  > behaviour (which periodically checksummed the pool files).
> >  >
> >  > Two solutions:
> >  > - put the pool in a file system with checksum verification included
> >  > - use a script to periodically traverse the pool and chesum the files
> >  >
> >  > Best regards,
> >  > Guillermo
> >  >
> >  >
> >  >
> >  > On Mon, Jun 8, 2020 at 10:58 AM G.W. Haywood via BackupPC-users
> >  >  wrote:
> >  > >
> >  > > Hi there,
> >  > >
> >  > > On Mon, 8 Jun 2020, Jeff Kosowsky wrote:
> >  > >
> >  > > > ... presumably a very rare event ...
> >  > >
> >  > > That's putting it a little mildly.
> >  > >
> >  > > If it's really all truly random, then if you tried random collisions
> >  > > a million times per picosecond you would (probably) need of the order
> >  > > of ten trillion years to have a good chance of finding one...
> >  > >
> >  > > $ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
> >  > > 10.79
> >  > >
> >  > > I think it's safe to say that it's not going to happen by chance.
> >  > >
> >  > > If it's truly random.
> >  > >
> >  > > --
> >  > >
> >  > > 73,
> >  > > Ged.
> >  > >
> >  > >
> >  > > ___
> >  > > BackupPC-users mailing list
> >  > > BackupPC-users@lists.sourceforge.net
> >  > > List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
> >  > > Wiki:http://backuppc.wiki.sourceforge.net
> >  > > Project: http://backuppc.sourceforge.net/
> >  >
> >  >
> >
> >

Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
Jeff & Guillermo,

Agreed - it's better to scan small subsets of the pool.  I'll add that
to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
unused files and update stats).

Craig
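The "small subsets" scheduling could be sketched like this (a hypothetical outline, not the code Craig added; the pool path and the directory-slicing scheme are assumptions):

```shell
# Spread a full pool check over ~100/PERCENT nights by assigning each
# top-level cpool directory to a nightly slice; verify only tonight's slice.
nightly_slice_dirs() {   # usage: nightly_slice_dirs POOLDIR PERCENT DAYNUM
    pool=$1; nslices=$((100 / $2)); slice=$(($3 % nslices)); i=0
    for dir in "$pool"/*/; do
        [ -d "$dir" ] || continue
        [ $((i % nslices)) -eq "$slice" ] && echo "$dir"
        i=$((i + 1))
    done
    return 0
}
# Each night: nightly_slice_dirs /var/lib/backuppc/cpool 3 "$(date +%j)"
# then md5-verify only the files under the printed directories.
```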

On Mon, Jun 8, 2020 at 2:35 PM  wrote:

> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>  > > While it's helpful to check the pool, it isn't obvious how to fix any
> errors.
>  >
>  > Sure. Actually I've put aside interpreting the error and the file
>  > involved until I find an actual error (so I hope to never need that
>  > information! :) )
>  >
>  > > So it's probably best to have rsync-bpc implement the old
> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
> skipping the --checksum short-circuit during a full.  For that fraction of
> files, it would do a full rsync check and update, which would update the
> pool file if they are not identical.
>  >
>  > That would be a good compromise. It makes the fulls a bit slower on
>  > servers with poor networks and slow disks, but it's clearer what to
>  > do in case of error. Maybe also add a "warning of possible pool
>  > corruption" if the stored checksum and the new checksum differ for
>  > those files?
>  >
>
> The only problem with this approach is that it never revisits pool
> files that aren't part of new backups.
>
> That is why I suggested a nightly troll through the cpool/pool to
> check md5sums going sequentially through X% each night...
>
>
>


Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread Craig Barratt via BackupPC-users
I pushed a commit
<https://github.com/backuppc/backuppc/commit/7273cd15b0414c6551df4066b52ecc042e1716b6>
that implements nightly pool checking on a configurable portion of the pool
files.  It needs the latest version of backuppc-xs, 0.61.

Craig

On Mon, Jun 8, 2020 at 4:22 PM Michael Huntley  wrote:

> I’m fine with both action items.
>
> I back up millions of emails and so far the restores I’ve performed have
> never been an issue.
>
> mph
>
>
>
> On Jun 8, 2020, at 3:01 PM, Craig Barratt via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
> 
> Jeff & Guillermo,
>
> Agreed - it's better to scan small subsets of the pool.  I'll add that
> to BackupPC_refCountUpdate (which does the nightly pool scanning to delete
> unused files and update stats).
>
> Craig
>
> On Mon, Jun 8, 2020 at 2:35 PM  wrote:
>
>> Guillermo Rozas wrote at about 16:41:03 -0300 on Monday, June 8, 2020:
>>  > > While it's helpful to check the pool, it isn't obvious how to fix
>> any errors.
>>  >
>>  > Sure. Actually I've put aside interpreting the error and the file
>>  > involved until I find an actual error (so I hope to never need that
>>  > information! :) )
>>  >
>>  > > So it's probably best to have rsync-bpc implement the old
>> $Conf{RsyncCsumCacheVerifyProb} setting.  It could do that by randomly
>> skipping the --checksum short-circuit during a full.  For that fraction of
>> files, it would do a full rsync check and update, which would update the
>> pool file if they are not identical.
>>  >
>>  > That would be a good compromise. It makes the fulls a bit slower on
>>  > servers with poor networks and slow disks, but it's clearer what to
>>  > do in case of error. Maybe also add a "warning of possible pool
>>  > corruption" if the stored checksum and the new checksum differ for
>>  > those files?
>>  >
>>
>> The only problem with this approach is that it never revisits pool
>> files that aren't part of new backups.
>>
>> That is why I suggested a nightly troll through the cpool/pool to
>> check md5sums going sequentially through X% each night...
>>
>>
>>
>
>


Re: [BackupPC-users] Discrepancy in *actual* vs. *reported* missing pool files

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

We've discussed at least one issue off-list - making sure you consider
inodes too.

It looks like BackupPC_fsck -f only rebuilds the last two backup refcounts
for each host.  It should use the -F option instead of -f when it calls
BackupPC_refcountUpdate (see line 630).  So you should try changing that
and re-running.

Craig

On Tue, Jun 9, 2020 at 8:44 AM  wrote:

> For the longest time my log files have warned about 37 missing pool
> files.
> E.g.
> admin : BackupPC_refCountUpdate: missing pool file
> 718fc4796633702979bb5edbd20e27a6
>
> So, I decided to find them to see what is going on...
>
> I did the following:
>
> 1. Stopped the running of further backups
> Ran: 'BackupPC_fsck -f' to do a full checkup
> Ran: BackupPC_nightly to prune the pool fully
>
> 2. Created a sorted, uniq list of all the cpool files, using 'find'
>and 'sort -u' on TopDir/cpool
>
> 3. Created a program to iterate through all the attrib files in all my
>backups and print out the digest and name of each file (plus also
>size and type). I also included the md5sum encoded in the name of
>each attrib file itself.
> Ran the program on all my hosts and backups
> Sorted and uniquified the list of md5sum
>
> 4. Used 'comm -1 -3' and 'comm -2 -3' to find missing ones from each
>listing
>
> Result:
> 1. Relative to the attrib listing, the pool was missing *105* files
>including the 37 that were found in the LOG
>
>INTERESTINGLY, all 105 were from previously migrated v3 backups.
>Actually, from the last 3 backups on that machine (full, incr, incr)
>
> 2. Relative to the pool listing, there were *1154* files in the pool
>that were not mentioned in the attrib file digests (including the
>digest of the attrib itself)
>
> So,
> - Why is BackupPC_fsck not detecting all the missing pool files?
> - Why is BackupPC_nightly not pruning files not mentioned in the
>   attrib listing?
> - Any suggestions on how to further troubleshoot?
>
>
>
>
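The comm step (4) of the procedure above can be sketched as follows (file names are illustrative, and the step-3 attrib digest extractor was Jeff's own program, not shown here):

```shell
# Given two digest lists, one per pool file and one per attrib reference,
# report each side's strays. comm requires sorted input, hence sort -u.
pool_diff() {   # usage: pool_diff pool.list attrib.list
    p=$(mktemp) a=$(mktemp)
    sort -u "$1" > "$p"
    sort -u "$2" > "$a"
    echo "== referenced by attribs but missing from pool =="
    comm -13 "$p" "$a"
    echo "== in pool but never referenced =="
    comm -23 "$p" "$a"
    rm -f "$p" "$a"
}
```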


Re: [BackupPC-users] attrib_0 files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

I don't think there's much difference whether a directory has an empty
attrib file or not.  The reason they exist is when a directory ends up
being empty after updating the directory.  The reason a directory might
exist without one is when reverse deltas require a change deeper in the
directory tree, which causes the unfilled backup to create the intermediate
directories, which won't get attrib files unless rsync needs to make
changes at that level too.

Craig

On Mon, Jun 8, 2020 at 9:34 PM  wrote:

> I have some empty attrib files, labeled attrib_0.
> Note that the directory it represents, has no subdirectories. So, I would
> have
> thought that no attrib file was present/necessary -- which seems to be
> the case in most of my empty directories.
>
> So what is the difference (and rationale) for attrib_0 vs no attrib
> file.
> Does that have to do with a prior/subsequent file deletion?
>
>
>


Re: [BackupPC-users] What is the best way to add back MISSING pool files?

2020-06-09 Thread Craig Barratt via BackupPC-users
Jeff,

The first method seems simpler.  Don't you just have to mv the file based
on BackupPC_zcat file | md5sum?  BackupPC_nightly shouldn't need to run
(other than to check you no longer get the missing error).

Btw, where did you find the missing pool files?

For the benefit of people on the list, Jeff and I are addressing the other
issues off-list.

Craig

On Tue, Jun 9, 2020 at 6:48 PM  wrote:

> Of course, the unanswered interesting question is why did this small
> number of 37 files out of about 3.5M pool files fail to migrate
> properly from v3 to v4...
>
> Note: I ran as many checks before and after as possible on the pool
> and pc hierarchy integrity (using my old v3 routines I had written) as
> well as checked error messages from the migration itself. I also of
> course had the BackupPC service off...
>
> "" wrote at about 21:41:27 -0400 on Tuesday, June 9, 2020:
>  > I found some of the missing v4 pool files (mentioned in an earlier
>  > post) in a full-disk backup of my old v3 setup.
>  >
>  > I would like to add them back to the v4 pool to eliminate the missing
>  > pool file messages and thus fix my backups.
>  >
>  > I can think of several ways:
>  >
>  > - Method A.
>  >   1. Create a script to first BackupPC_zcat each recovered old v3 pool
>  >  file into a new file named by its uncompressed md5sum and then move
>  >  it appropriately into the v4 cpool 2-layer directory hierarchy.
>  >
>  >   2. Run BackupPC_nightly assuming that it will clean up the cpool ref
>  >  counts to coincide with the now correct pc-branch ref count
>  >
>  > - Method B
>  >   1. BackupPC_zcat the recovered files from the v3 pool into a new
>  >  directory. Naming of the files is immaterial.
>  >   2. Create a new temporary host and use that to backup the folder
>  >   3. *Manually* delete the host by deleting the entire host folder
>  >   4. Run BackupPC_nightly to correct the ref counts (assuming needed)
>  >
>  > - Method C
>  >   1. Use some native code or routines that Craig may already have
>  >  written that do most or all of the above
>  >
>  > Any thoughts on which of these work and which way is preferable?
>  >
>  > Jeff
>  >
>
>
>
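Method A's digest-and-move step could look roughly like this (a hedged sketch: ZCAT must be BackupPC_zcat on a real system, and the two-level placement shown, the first two hex-digit pairs of the digest, is an assumption to verify against where an existing pool file actually lives):

```shell
# Rename a recovered v3 pool file by its uncompressed md5 and move it
# into a two-level cpool tree. The directory derivation is an assumed
# simplification; compare with an existing digest's location first.
ZCAT=${ZCAT:-BackupPC_zcat}
readd_pool_file() {   # usage: readd_pool_file recovered_file cpool_dir
    digest=$($ZCAT "$1" | md5sum | cut -d' ' -f1)
    dest=$2/$(printf %s "$digest" | cut -c1-2)/$(printf %s "$digest" | cut -c3-4)
    mkdir -p "$dest" && mv "$1" "$dest/$digest" && echo "$dest/$digest"
}
```

After the move, BackupPC_refCountUpdate (or BackupPC_fsck) would be the place to reconcile reference counts, as discussed in the thread.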


Re: [BackupPC-users] how to disable full backups and keep just incremental

2020-06-10 Thread Craig Barratt via BackupPC-users
Dagg,

What version of BackupPC are you using?

Craig

On Wed, Jun 10, 2020 at 7:24 AM daggs  wrote:

> Greetings,
>
> I have two large (several hundred gigabytes) backups that keep failing
> due to aborts or the child ending unexpectedly. I ran it three times already.
> I want to disable the full backup and keep the incremental, what is the
> proper way do that?
>
> Thanks,
>
> Dagg.
>
>
>


Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-18 Thread Craig Barratt via BackupPC-users
Stefan,

BackupPC_backupDelete is only available in 4.x.

Craig

On Thu, Jun 18, 2020 at 2:33 AM Stefan Schumacher <
stefan.schumac...@net-federation.de> wrote:

>
> > If you want to remove a backup, best to use a script built to do it
> > right -- BackupPC_backupDelete. Not sure if it is bundled with 3.x
> > but
> > it exists out there.
> >
> >
>
> Hello,
>
> in that case I would be grateful if someone could share the link to
> this script with me.
>
> Thanks in advance
> Stefan
>
>
> Stefan Schumacher
> Systemadministrator
>
> NetFederation GmbH
> Sürther Hauptstraße 180 B -
> Fon:+49 (0)2236/3936-701
>
> E-Mail:  stefan.schumac...@net-federation.de
> Internet:   http://www.net-federation.de
>
>
>
>
>
>


[BackupPC-users] BackupPC 4.4.0 released

2020-06-22 Thread Craig Barratt via BackupPC-users
BackupPC 4.4.0 has been released on Github.

This release contains several new features and some bug fixes. New features
include:

   - any full/filled backup can be marked for keeping, which prevents any
   expiry or deletion
   - any backup can be annotated with a comment (eg, "prior to upgrade of
   xyz")
   - added metrics CGI (thanks to @jooola) that replaces RSS and adds
   Prometheus support
   - tar XferMethod now supports xattrs and acls
   - rsync XferMethod now correctly supports xattrs on directories and
   symlinks
   - nightly pool scanning now verifies the md5 digests of a configurable
   fraction of pool files
   - code runs through perltidy so format is now uniform (thanks to
   @jooola, with help from @shancock9 and @moisseev)
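The nightly digest check in the list above can be illustrated with a config.pl fragment (the parameter name comes from the release notes; check the shipped config.pl for the default value and valid range):

```perl
# Verify the md5 digest of roughly 1% of pool files on each nightly run,
# so the whole pool is re-checked about every 100 nights.
$Conf{PoolNightlyDigestCheckPercent} = 1;
```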

New versions of BackupPC::XS (0.62) and rsync-bpc (3.0.9.15, 3.1.2.2, or
3.1.3beta0) are required.

Thanks to Jeff Kosowsky for extensive testing and debugging for this
release, particularly around xattrs.

Enjoy!

Craig

Here are the more detailed changes:

   - Merged pull requests #325, #326, #329, #330, #334, #336, #337, #338,
   #342, #343, #344, #345, #347, #348, #349
   - Filled/Full backups can now be marked as "keep", which excludes them
   from any expiry/deletion. Also, a backup-specific comment can be added to
   any backup to capture any important information about that backup (eg,
   "pre-upgrade of xyz").
   - Added metrics CGI, which adds Prometheus support and replaces RSS, by
   @jooola (#344, #347)
   - Tar XferMethod now supports xattrs and acls; xattrs should be
   compatible with rsync XferMethod, but acls are not
   - Sort open directories to top when browsing backup tree
   - Format code using perltidy, and included in pre-commit flow, by
   @jooola (#334, #337, #342, #343, #345). Thanks to @jooola and
   @shancock9 (perltidy author) for significant effort and support, plus
   improvements in perltidy, to make this happen.
   - Added $Conf{PoolNightlyDigestCheckPercent}, which checks the md5
   digest of this fraction of the pool files each night.
   - $Conf{ClientShareName2Path} is saved in backups file and the share to
   client path mapping is now displayed when you browse a backup so you know
   the actual client backup path for each share, if different from the share
   name
   - configure.pl now checks the per-host config.pl in a V3 upgrade to warn
   the user if $Conf{RsyncClientCmd} or $Conf{RsyncClientRestoreCmd} are used
   for that host, so that the new settings $Conf{RsyncSshArgs} and
   $Conf{RsyncClientPath} can be manually updated.
   - Fixed host mutex handling for dhcp hosts; shifted initial mutex
   requests to client programs
   - Updated webui icon, logo and favicon, by @moisseev (#325, #326, #329,
   #330)
   - Added $Conf{RsyncRestoreArgsExtra} for host-specific restore settings
   - Language files now all use utf8 charsets
   - Bumped required version of BackupPC::XS to 0.62 and rsync-bpc to
   3.0.9.15.
   - Ping failure message is written to stdout only if verbose
   - BackupPC_backupDelete removes partial v3 backup in HOST/new; fixes #324

Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-24 Thread Craig Barratt via BackupPC-users
Mike,

It's possible you have two different versions of perl installed, or for
some reason the BackupPC user is seeing an old version of BackupPC::XS.

Try some of the suggestions here:
https://github.com/backuppc/backuppc/issues/351.

Craig

On Wed, Jun 24, 2020 at 10:12 AM Richard Shaw  wrote:

> On Wed, Jun 24, 2020 at 11:58 AM Mike Hughes  wrote:
>
>> I'm getting a service startup failure claiming my version of BackupPC-XS
>> isn't up-to-snuff but it appears to meet the requirements:
>>
>> BackupPC: old version 0.57 of BackupPC::XS: need >= 0.62; exiting in 30s
>>
>
> I don't have a CentOS 7 machine handy so I'm downloading the minimal ISO
> for boxes...
>
> Thanks,
> Richard
>


Re: [BackupPC-users] Is there a reason that DumpPreUserCmd (and its analogs) are executed without a shell?

2020-06-24 Thread Craig Barratt via BackupPC-users
Jeff,

The reason BackupPC avoids running shells for sub-commands is security, and
the extra layer of argument escaping or quoting.  It's easy to
inadvertently have some security weakness from misconfiguration or misuse.

Can you get what you need by starting the command with "/bin/bash -c"?  You
can alternatively set $Conf{DumpPreUserCmd} to a shell script with the
arguments you need, and then you can do whatever you want in that script.

Craig
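Craig's first suggestion could look like this in config.pl (a sketch with a hypothetical script path; how BackupPC tokenizes the quoted argument is worth testing before relying on it):

```perl
# Run the pre-dump command through an explicit shell so redirection and
# other shell syntax work; pre-dump.sh is a hypothetical script, and
# $host is substituted by BackupPC at run time.
$Conf{DumpPreUserCmd} = '/bin/bash -c "/etc/backuppc/scripts/pre-dump.sh $host"';
```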

On Wed, Jun 24, 2020 at 10:20 AM  wrote:

> I notice that in Lib.pm, the function 'cmdSystemOrEvalLong'
> specifically uses the structure 'exec {$cmd->[0]} @$cmd;' so that no
> shell is invoked.
>
> I know that technically it's a little faster to avoid calling the
> shell, but in many cases it is very useful to have at least a
> rudimentary shell available.
>
> For example, I may want to read in (rather than execute a script).
>
> Specifically say,
> (1)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s < /etc/backuppc/scripts/script-\$hostIP)
> would allow me to run a hostIP specific script that I store in
> /etc/backuppc/scripts.
>
> - This is neater and easier to maintain than having to store the script
>   on the remote machine.
> - This also seems neater and nicer than having to use an executable
>   script that would itself need to run ssh -- plus importantly it
>   removes a layer of indirection and messing with extra quoting.
>
>
> Similarly, it would be great to be able to support:
> (2)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <<EOF
> 
> EOF)
>
> Or similarly:
> (3)$Conf{DumpPreUserCmd} = qq(\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s <<< $bashscript
> where for example
> my $bashscript = <<'EOF'
> 
> EOF
>
> Though this latter form is a bash-ism and would not work in /bin/sh
>
> The advantage of the latter examples is that it would allow me to
> store the bashscript in the actual host.pl config scripts rather than
> having to have a separate set of scripts to load.
>
> Note that I am able to roughly replicate (3) using perl code, but it
> requires extra layers of escaping of metacharacters making it hard to
> write, read, and debug.
>
> For example something like:
> my $bashscript = <<'EOF';
> 
> EOF
>
> $bashscript =~ s/([][;&()<>{}|^\n\r\t *\$\\'"`?])/\\$1/g;
> $Conf{DumpPreUserCmd} = qq(&{sub {
> open(my \$out_fh, "|-", "\$sshPath -q -x -i $BackupPCsshID -l
> $Conf{RsyncdUserName} \$hostIP bash -s")
> or warn "Can't start ssh: \$!";
> print \$out_fh qq($bashscript);
> close \$out_fh or warn "Error flushing/closing pipe to ssh: \$!";
> }})
>
> Though it doesn't quite work yet...
>
>
>
>


Re: [BackupPC-users] [BackupPC-devel] BackupPC 4.4.0 released

2020-06-25 Thread Craig Barratt via BackupPC-users
You can install the perl module Module::Path to find the path for a module.

After installing, do this:

perl -e 'use Module::Path "module_path";
print(module_path("BackupPC::XS")."\n");'

Example output:

/usr/local/lib/x86_64-linux-gnu/perl/5.26.1/BackupPC/XS.pm

Now try as root and the BackupPC user to see the difference.  Does the
BackupPC user have permission to access the version root uses?

You can also print the module search path with:

perl -e 'print join("\n", @INC),"\n"'


Does that differ between root and the BackupPC user?

Craig

On Thu, Jun 25, 2020 at 9:48 AM Les Mikesell  wrote:

> > The system got itself into this state from a standard yum update.
>
> That's why you want to stick to all packaged modules whenever
> possible.   Over time, dependencies can change and the packaged
> versions will update together.  You can probably update a cpan module
> to the correct version manually but you need to track all the version
> dependencies yourself.   There are some different approaches to
> removing modules: https://www.perlmonks.org/?node_id=1134981
>
>
>


Re: [BackupPC-users] Unable to connect on port -1

2020-06-28 Thread Craig Barratt via BackupPC-users
The CGI script is trying to connect to the BackupPC server using the
unix-domain socket, which is at $Conf{LogDir}/BackupPC.sock. From your
email, on your system that appears to be /var/lib/log/BackupPC.sock.

Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.

You said you deleted the BackupPC.sock file.  That would explain why the
CGI script can't connect to the server.  Why did you delete it?  You
said "deleting
those files doesn't always let the service restart" - deleting those files
should not be used to get the server to restart.

Craig

On Sat, Jun 27, 2020 at 9:26 AM Phil Kennedy <
phillip.kenn...@yankeeairmuseum.org> wrote:

> I've hit my wits' end on an issue with my backuppc instance. The system ran
> fine, untouched, for many months. This is an Ubuntu 16.04 system, running
> backuppc 3.3.1, installed via apt. When accessing the index (or any other
> pages), I get the following:
> Error: Unable to connect to BackupPC server
> This CGI script (/backuppc/index.cgi) is unable to connect to the BackupPC
> server on pirate port -1.
> The error was: unix connect: Connection refused.
> Perhaps the BackupPC server is not running or there is a configuration
> error. Please report this to your Sys Admin.
>
> The backuppc & apache services are running, and restarting without error.
> The backuppc pool (and other important folders, such as log) lives on an
> NFS mount, and /var/lib/backuppc is symlinked to /mnt/backup. Below is the
> fstab entry that I use:
>
> 10.0.0.4:/backup /mnt/backup nfs users,auto,nolock,rw 0 0
>
> (I'm specifically using nolock, since that can cause a similar issue.
> Mounting an NFS mount via some of the off the shelf NAS's out there can
> have performance issues without nolock set.)
>
> I've been able to get the instance to start and run briefly by deleting
> the BackupPC.sock and LOCK files from /var/lib/log, but the instance
> doesn't stay running for very long (minutes to an hour or two), and the LOG
> isn't giving me much data. On top of that, deleting those files doesn't
> always let the service restart. Thoughts? This box lives a pretty stagnant
> life, nothing tends to change configuration-wise.
> ~Phil
>


Re: [BackupPC-users] backuppc process will not stop

2020-06-29 Thread Craig Barratt via BackupPC-users
Mark,

Perhaps systemd is being used to run BackupPC?

What output do you get from:

systemctl status backuppc

If it shows as active/running, then the correct command to stop BackupPC is:

systemctl stop backuppc


Craig

On Mon, Jun 29, 2020 at 11:05 AM Mark Maciolek  wrote:

> hi,
>
> Running BackupPC v4.3.2 on Ubuntu 18.04 LTS. I want to upgrade to 4.4.0
> but I can't get the backuppc process to stop. I can do
> /etc/init.d/backuppc stop and it starts again. If I do kill -9 on the pid it
> also just restarts.
>
> I have several other BackupPC servers and yet this is the only one that
> does this.
>
> Does anyone have a clue to where I should start troubleshooting this issue?
>
> Mark
>
>
>


Re: [BackupPC-users] Backup installation of BackupPC

2020-07-18 Thread Craig Barratt via BackupPC-users
From the web interface, can you see the old hosts information?

What happens when you select one of the hosts?

The most likely issue is that $Conf{TopDir} in the config file isn't
pointing to the top-level store directory on the old disk.

If you need the file urgently, rather than just testing the 4.3.2 standby
installation, you can do that from the command-line just by navigating to
the relevant host and directory.  If you know the directory where the file
is stored, but not the backup it changed in, just use a shell wildcard for
the backup number.  In 3.x the file paths are mangled (each entry starts
with "f"), but every full backup's directory tree will have all the files.

Craig
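The wildcard trick can be sketched from the shell (the mangling rules here, an "f" prefix per path component plus %-escaping of "%" and "/", are summarized from memory; verify them against your own pc/ tree):

```shell
# Mangle one path component the way BackupPC 3.x stores it on disk.
mangle() {
    printf 'f%s\n' "$(printf '%s' "$1" | sed -e 's/%/%25/g' -e 's,/,%2f,g')"
}
# e.g. list every backed-up copy of /etc/hosts for host "pc1", with a
# shell wildcard for the backup number (TopDir is your $Conf{TopDir}):
# ls -l "$TopDir"/pc/pc1/*/$(mangle /)/$(mangle etc)/$(mangle hosts)
```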

On Sat, Jul 18, 2020 at 2:16 PM daveinredm...@excite.com <
daveinredm...@excite.com> wrote:

> I am currently running BackupPC 4.3.2. I have created a second
> installation of BackupPC on a spare machine to have the capability of using
> my backups if the server hosting the main installation dies. I also have
> several older backup disks from several years back that were made on
> BackupPC 3.x. I chown'd an old disk to ensure proper rights and copied the
> current hosts file to the test server but when I run BackupPC on the test
> machine it doesn't see any of the backups. I am trying to find a fairly old
> file that was damaged at an unknown time and used the test server to verify
> functionality. What am I missing? I've Googled "move backuppc to new
> server" but none of the responses seems relevant.
>
> TIA,
> Dave
>
>


Re: [BackupPC-users] Sign out

2020-07-21 Thread Craig Barratt via BackupPC-users
I appreciate the feedback.  Please use the "List" link in the footer of any
email to unsubscribe.

Mailing to the list mails everyone on the list.  It doesn't help you
unsubscribe.

Craig

On Tue, Jul 21, 2020 at 2:22 PM Ants Mark  wrote:

> Thank you for providing me good info. But I don't want to receive any more
> mails. Congrats for having such a useful program.


Re: [BackupPC-users] backuppc won't start after host migration

2020-08-12 Thread Craig Barratt via BackupPC-users
Dagg,

The error should be self-explanatory - either /mnt/backup is a file system
that doesn't support hardlinks, or /mnt/backup/pc and /mnt/backup/cpool are
two different file systems (and therefore hardlinks between them fail).
Are they two different file systems?
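The failing check can be reproduced by hand. This sketch runs the same test in a temporary directory (both subdirectories are on one file system here, so it succeeds); to diagnose the real installation, substitute /mnt/backup/pc and /mnt/backup/cpool:

```shell
# Stand-ins for /mnt/backup/pc and /mnt/backup/cpool:
TOP=$(mktemp -d)
mkdir "$TOP/pc" "$TOP/cpool"

# The device IDs must match, otherwise hardlinks between them cannot work:
stat -c '%d' "$TOP/pc" "$TOP/cpool"

# BackupPC's startup test boils down to creating a cross-directory hardlink:
touch "$TOP/cpool/.hardlink-test"
ln "$TOP/cpool/.hardlink-test" "$TOP/pc/.hardlink-test" && echo "hardlinks OK"
```

On the real mount, `df /mnt/backup/pc /mnt/backup/cpool` showing two different devices would confirm the two-file-systems suspicion.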

Craig

On Wed, Aug 12, 2020 at 9:33 AM daggs  wrote:

> Greetings,
>
> I had a backuppc server running on Debian 10 stable; the backups are on an
> external hdd.
> I've migrated the machine to alpine linux, the backuppc version of both is
> 3.2.2.
> I've copied over all the content of /etc/backuppc to the new os and when I
> start it up, I get this error:
> 2020-08-12 19:09:48 Can't create a test hardlink between a file in
> /mnt/backup/pc and /mnt/backup/cpool.  Either these are different file
> systems, or this file system doesn't support hardlinks, or these
> directories don't exist, or there is a permissions problem, or the file
> system is out of inodes or full.  Use df, df -i, and ls -ld to check each
> of these possibilities. Quitting...
> both /mnt/backup/pc and /mnt/backup/cpool exists and they are folders.
>
> how can I go about fixing this without losing my backups?
>
> Thanks,
>
> Dagg
>
>


Re: [BackupPC-users] Backups vanished from UI but present in file system

2020-08-13 Thread Craig Barratt via BackupPC-users
Raoul,

Thanks for maintaining the Debian packages!

I'm not sure how the backups files got corrupted.

First, do you really have a host called $host, or are you just obfuscating
the real host name?  The reason I ask is that there is this line in the LOG file:

Added host $host to backup list


When you did this, I assume $host is replaced by your shell; is that
correct?

ls -altr pc/$host


Anyhow, you can recover the backups file
using BackupPC_fixupBackupSummary.  As the BackupPC user, run it with the
host name argument(s) you want to fix:

BackupPC_fixupBackupSummary hostName


Just to be safe, you should save away the current backups file for that
host, eg:

cp pc/hostName/backups pc/hostName/backups.save

Yes, as you mentioned, I'd recommend not running BackupPC until the backups
files look correct and, overall, the cpool and pc directories look complete,
with correct permissions.  You could run a test command like
BackupPC_tarCreate to verify that the backups can be accessed correctly.

Craig

On Thu, Aug 13, 2020 at 5:41 AM Raoul Bhatia  wrote:

> Dear community,
>
> I am more or less lurking on this mailing list. I am also trying to
> maintain unofficial BackupPC Debian packages, see
> https://github.com/backuppc/backuppc/wiki/Build-Your-Own-Packages
>
>
> First off: How do I prevent nightly cleanups from messing with the data in
> the pool?  I currently have stopped the BackupPC process.
>
>
>
> Yesterday, I performed maintenance work on my backup server incl. building
> 4.4.0 Debian packages, upgrading my own server & rebooting the system.
>
> I now have one host where the old backups vanished from the UI i.e. #0
> (Full) is the current backup.
> However, in the filesystem I see all the old backups in
> $Conf{TopDir}/pc/$host/:
>
> find pc/$host/ -maxdepth 1 -type d | wc -l
> 103
>
> After upgrading BackupPC & before rebooting, I tested my new package by
> trying to do Incremental backups of said host.
> FYI: I had an issue with missing +x flags on the binaries (my own
> wrong-doing) and thus I had to upgrade the package & initiate the backup a
> couple of times.
> Last but not least, there is a small chance that I rebooted at an
> inconvenient time, i.e. during some housekeeping.
>
> Any help would be appreciated.  Also, how do I prevent nightly cleanups
> from messing with the data in the pool?
>
>
> Debug information (times are in CEST)
>
> Log files excerpt:
>
> # last reboot
> reboot   system boot  4.15.0-112-gener Wed Aug 12 23:58   still running
> reboot   system boot  4.15.0-112-gener Tue Aug 11 22:28 - 23:52 (1+01:24)
>
> # ls -altr pc/$host/
> [...]
> drwxr-x---  1 backuppc backuppc    396 Aug  3 07:26 5503
> -rw-r-----  1 backuppc backuppc   4050 Aug  3 07:37 XferLOG.5503.z
> drwxr-x---  1 backuppc backuppc    568 Aug 12 00:44 5504
> -rw-r-----  1 backuppc backuppc  14460 Aug 12 00:59 XferLOG.5504.z
> -rw-r-----  1 backuppc backuppc      0 Aug 12 23:34 XferLOG.5505.z
> drwxr-x---  1 backuppc backuppc    568 Aug 12 23:47 5505
> drwxr-xr-x  1 backuppc backuppc    226 Aug 13 00:01 ..
> -rw-r-----  1 backuppc backuppc     65 Aug 13 02:21 backups.old
> drwxr-x---  1 backuppc backuppc    568 Aug 13 09:37 0
> -rw-r-----  1 backuppc backuppc    125 Aug 13 09:37 backups
> drwxr-x---  1 backuppc backuppc   3998 Aug 13 09:37 .
> drwxr-x---  1 backuppc backuppc   3080 Aug 13 09:43 refCnt
> -rw-r-----  1 backuppc backuppc 745080 Aug 13 09:43 XferLOG.0.z
> -rw-r-----  1 backuppc backuppc   9479 Aug 13 13:00 LOG.082020
> -rw-r-----  1 backuppc backuppc      0 Aug 13 13:14 LOCK
>
> # cat pc/$host/backups.old
> 0   active
> 1597273203
> 3   0   1   rsync   0   4.4.0
> 58824   0
>
> # cat pc/$host/backups
> 0   full1597273203  1597304229  66263   231293732766
> 904 29562   43  303772510   0   0   0
> 3   35321   4433611 0   1
> rsync   0   4.4.0   139720  0
>
> LOG.0.z
>
> 2020-08-12 22:46:23 Got signal TERM... cleaning up (exit code = 0)
> 2020-08-12 23:10:57 Reading hosts file
> 2020-08-12 23:10:57 BackupPC 4.4.0 (Perl v5.26.1) started, pid 10828
> 2020-08-12 23:10:57 Next wakeup is 2020-08-13 01:00:00
> 2020-08-12 23:17:55 User backuppc requested backup of $host ($host)
> 2020-08-12 23:17:55 can't exec BackupPC_dump for $host
> 2020-08-12 23:26:35 Got signal TERM... cleaning up (exit code = 0)
> 2020-08-12 23:26:36 Reading hosts file
> 2020-08-12 23:26:36 BackupPC 4.4.0 (Perl v5.26.1) started, pid 11605
> 2020-08-12 23:26:36 Next wakeup is 2020-08-13 01:00:00
> 2020-08-12 23:58:30 Reading hosts file
> 2020-08-12 23:58:30 Added host abc to backup list
> 2020-08-12 23:58:30 Added host def to backup list
> 2020-08-12 23:58:30 Added host ghi to backup list
> 2020-08-12 23:58:30 Added host jkl to backup list
> 2020-08-12 23:58:30 Added host mno to backup list
> 2020-08-12 23:58:30 Added host $host to backup list
> 2020-08-12 23:58:30 Added host pqr to backup list
> 2020-08-12 23:58:30 Adde

Re: [BackupPC-users] DumpPreUserCmd didnt run anything

2020-08-22 Thread Craig Barratt via BackupPC-users
For security reasons, BackupPC doesn't use a shell to run any of the
external commands, including DumpPreUserCmd.  It uses fork() and exec().
It does split the string you provide at spaces (but doesn't understand
quoting).  Or you can set those commands to an array - the first entry is
the full path to the command, and the remaining entries are the arguments
(eg, the array is argv).
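In config.pl terms, the two equivalent ways to write such a command might look like this (a sketch; the touch command is just the example from this thread):

```perl
# Plain string: split at spaces by BackupPC itself; no shell, no quoting rules.
$Conf{DumpPreUserCmd} = '/bin/touch /tmp/test.txt';

# Array form (argv): first element is the command path, the rest are arguments.
# This is the safer choice when an argument needs to contain spaces.
$Conf{DumpPreUserCmd} = ['/bin/bash', '-c', '/bin/touch /tmp/test.txt'];
```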

But the examples you gave should work.  Could you tell us exactly what's in
the config file setting for DumpPreUserCmd?  You said you set it to
'/bin/bash -c /bin/touch /tmp/test.txt'; if you did that in the CGI then
you shouldn't use the outermost quotes.

Craig





On Sat, Aug 22, 2020 at 10:27 AM Taste-of-IT  wrote:

> Hi Mike,
> yes thanks :).
>
> Yes, I tried this too. But in my simple example, backuppc should write to
> its own disk as a test, but without success.
>
> Taste
>


Re: [BackupPC-users] Verify full integrity of pool and backups

2020-08-23 Thread Craig Barratt via BackupPC-users
$Conf{PoolNightlyDigestCheckPercent} is in percent, so you should set it to
100 to check all the pool files' MD5 digests against their file names.

As Guillermo mentions, to check the pool MD5 digests, you can temporarily
set $Conf{PoolNightlyDigestCheckPercent} to 100
and $Conf{PoolSizeNightlyUpdatePeriod} to 1.  Alternatively, if you stop
BackupPC, you can check all the pool digests at once by running:

BackupPC_refCountUpdate -m

If you want to also regenerate all the host reference counts (which will
take a long time), you could run:

BackupPC_refCountUpdate -m -F
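The temporary settings described above would look like this in config.pl (option names per 4.4.0; revert them once the one-off check is done):

```perl
# Check 100% of pool files' MD5 digests against their names each night:
$Conf{PoolNightlyDigestCheckPercent} = 100;

# Walk the whole pool in a single nightly pass instead of spreading it out:
$Conf{PoolSizeNightlyUpdatePeriod} = 1;
```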


Craig


On Sun, Aug 23, 2020 at 6:45 AM Guillermo Rozas 
wrote:

> Hi Raoul,
>
> are you using BackupPC v4? If yes, you can use a modification of the
> script I posted here:
> https://sourceforge.net/p/backuppc/mailman/message/37032497/
>
> In the latest version (4.4.0) you also have the config option
> $Conf{PoolNightlyDigestCheckPercent}, which checks the md5 digest of this
> fraction of the pool files each night. You can probably set it to 1 and
> wait a night for it to run.
>
> Regards,
> Guillermo
>
> On Sun, Aug 23, 2020 at 5:38 AM Raoul Bhatia  wrote:
>
>> Hi,
>>
>> related to my previous email, it seems that the cause of my issues was a
>> file system corruption after a "power cut".
>>
>> I managed to recover (most of?) the data and would now like to do a
>> thorough check of the data.
>>
>> Is there any way to "fully verify" the integrity of my backuppc
>> installation, ideally in a nondestructive way ;-)
>>
>> Thanks,
>> Raoul
>>
>> PS. My backuppc process is stopped.
>> --
>> DI (FH) Raoul Bhatia MSc
>> E-Mail. ra...@bhatia.at
>> Tel. +43 699 10132530
>>


Re: [BackupPC-users] Missing variables

2020-08-30 Thread Craig Barratt via BackupPC-users
What process did you use to install BackupPC (eg, from the tarball source
with configure.pl, or a package)?

Craig

On Sun, Aug 30, 2020 at 2:56 PM Tom Cada  wrote:

> This is an installation of version 4.4.0 on an Ubuntu Server running 20.04.
>
> All of the various BackupPC environment variables such as __BACKUPPCUSER__
> or __TOPDIR__ are missing.
>
> If I try as suggested in your documentation
>
> $ sudo su - __BACKUPPCUSER__
> I get the error
> su: user __BACKUPPCUSER__ does not exist
>
> Does anyone have any suggestions as to what is going on?
>
> Thanks... Tom.


Re: [BackupPC-users] Missing variables

2020-08-30 Thread Craig Barratt via BackupPC-users
Tom,

Those names should all be replaced with the actual names or paths specific
to your installation.  When you ran configure.pl, the values you chose (eg,
the name of the BackupPC user, and the TopDir) are used to replace those
values.

Craig

On Sun, Aug 30, 2020 at 4:28 PM Tim Evans  wrote:

> On 8/30/20 6:47 PM, Tom Cada wrote:
>
> > All of the various BackupPC environment variables such as
> > __BACKUPPCUSER__ or __TOPDIR__ are missing.
> >
> > If I try as suggested in your documentation
> >
> > $ sudo su - __BACKUPPCUSER__
> > I get the error
> > su: user __BACKUPPCUSER__ does not exist
>
> Not sure those variables would be set for a non-BackupPC user, but, in
> any case, shouldn't that be:
>
> $ su - $__BACKUPPCUSER__
>
> (i.e., variables are referenced in most cases with the preceding dollar
> sign.)
>
> --
> Tim Evans   |   5 Chestnut Court
> |   Owings Mills, MD 21117
> |   443-394-3864
>
>


Re: [BackupPC-users] Inconsistent backup size

2020-08-31 Thread Craig Barratt via BackupPC-users
Does your backup include sparse files?

Look in the XferLOG file to see the sizes of individual files - it
shouldn't be too hard to spot one that is large.

Craig

On Sun, Aug 30, 2020 at 11:51 PM Ján ONDREJ (SAL) 
wrote:

> Hello,
>
>   I have been using BackupPC for years, but after updating to v4 (4.4.0),
> some backups have an inconsistent size displayed in the "File Size/Count
> Reuse Summary" table.
>
>   This is my servers directory, which should be in backup:
>
> Filesystem Size  Used Avail Use% Mounted on
> /dev/md0   4.0G  3.0G  1.1G  75% /
> /dev/mapper/vg_server-www  200G  119G   82G  60% /var/www
>
> There is no other filesystem mounted, binded to this directory.
>
> Its complete size is 200 GB, with only 118 GB used. Some files are excluded
> from backup, so a full backup should be about 120 GB uncompressed, and less
> after compression. But our File Size table looks like:
>
>   TotalsExisting Files New Files
> Backup# Type  #Files  Size/MiB  MiB/sec  #Files  Size/MiB  #Files Size/MiB
> 0   full  3220584 943488.6  26.862625461 872577.7  1232   537.4
>
> As you see, this backup's total size is 921 GiB. How is it possible that
> a 200 GB partition is stored as 900 GB?
> Also according to i-nodes in linux, my server has:
>
> Filesystem   Inodes   IUsed IFree IUse% Mounted on
> /dev/md04194240   51082   41431582% /
> /dev/mapper/vg_fusion-www 209715200 2470434 2072447662% /var/www
>
> So there are 2.5 million files, some excluded, but the backup reports 3.2
> million files.
>
> Trying to restore files, the restore downloaded a 68GB tar package.
> This looks to be real, but if there are only 70 GB of data, why is it
> displayed as 900 GB in the backuppc status?
>
> I need to find which server is using most of my backup space.
> I know that it's hard to find, because files are shared between servers
> (deduplicated), but at least I can estimate it. But if the statistics
> display multiples of the actual usage, then it's impossible to approximate.
>
> I deleted all backups of this server in the hope that it would help, but it
> didn't. :-(
>
> Thanks for the help.
>
> SAL
>
>
>


Re: [BackupPC-users] Inconsistent backup size

2020-08-31 Thread Craig Barratt via BackupPC-users
That file is a hardlink, not a symlink.  In the backup stats, each instance
of a hardlink is counted towards the total file size.

If your file system has a lot of hardlinks, perhaps that's why the reported
number is higher than you expect?
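A quick way to see whether hardlinks explain the gap is to count multi-link files on the client. This sketch demonstrates the effect in a temporary directory; on the real client you would point find at /var/www instead:

```shell
D=$(mktemp -d)
head -c 1000 /dev/zero > "$D/a"   # a 1000-byte file
ln "$D/a" "$D/b"                  # a second hardlink to the same inode

# Each link reports the full size, so naive totals count the data twice:
find "$D" -xdev -type f -links +1 -printf '%n %s %p\n'
```

On the client, something like `find /var/www -xdev -type f -links +1 -printf '%n %s %p\n' | sort -k2 -rn | head` would surface the biggest multi-link files.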

Craig

On Mon, Aug 31, 2020 at 12:40 AM Ján ONDREJ (SAL) 
wrote:

> Hello,
>
> On Mon, Aug 31, 2020 at 12:08:50AM -0700, Craig Barratt via BackupPC-users
> wrote:
> > Does your backup include sparse files?
>
> I think no.
>
> > Look in the XferLOG file to see the sizes of individual files - it
> > shouldn't be too hard to spot one that is large.
>
> There is no single large file. As I wrote, the restored backup is not that
> large either. But you pointed me to the right place. I see this line in the
> XferLOG:
>
> newrecv hf..tpog... rw-r--r-- 1000,1000  25089367
> var/www/public/media/598522/catalogue.pdf =>
> var/www/private/import/docs/catalogue.pdf
>
> This is a symlink and its size is counted as 25089367.
> According to the "=>" symbol, this symlink is properly identified as a
> symlink, but its size is stored as the size of the symlink's target file.
> This is why the backup size is larger than my filesystem. Can this be fixed?
>
> SAL
>
> > On Sun, Aug 30, 2020 at 11:51 PM Ján ONDREJ (SAL) 
> > wrote:
> >
> > > Hello,
> > >
> > >   I am using BackupPC for years, but after update to v4 (4.4.0), some
> > > backups have inconsistent size dislayed in "File Size/Count Reuse
> Summary"
> > > table.
> > >
> > >   This is my servers directory, which should be in backup:
> > >
> > > Filesystem Size  Used Avail Use% Mounted on
> > > /dev/md0   4.0G  3.0G  1.1G  75% /
> > > /dev/mapper/vg_server-www  200G  119G   82G  60% /var/www
> > >
> > > There is no other filesystem mounted, binded to this directory.
> > >
> > > It's complete size is 200 GB, used only 118 GB. Some files are excluded
> > > from
> > > backup, so an full backup should be about 120 GB uncompressed, after
> > > compression less. But our File Size table looks like:
> > >
> > >   TotalsExisting Files New Files
> > > Backup# Type  #Files  Size/MiB  MiB/sec  #Files  Size/MiB  #Files
> Size/MiB
> > > 0   full  3220584 943488.6  26.862625461 872577.7  1232   537.4
> > >
> > > As you see, this backup total size is 921 GiB. How it's possible, that
> > > an 200 GB partition is stored as 900 GB?
> > > Also according to i-nodes in linux, my server has:
> > >
> > > Filesystem   Inodes   IUsed IFree IUse% Mounted on
> > > /dev/md04194240   51082   41431582% /
> > > /dev/mapper/vg_fusion-www 209715200 2470434 2072447662% /var/www
> > >
> > > So there is 2.5 millions of files, some excluded, but results as 3.2
> > > millions
> > > on backup.
> > >
> > > Trying to restore files. Restore downloaded a 68GB tar package.
> > > This looks to be real, but if there are only 70 GB of data, why on
> backuppc
> > > status it's displayed as 900 GB?
> > >
> > > I need to find, which server is using most of my backup space.
> > > I know, that it's hard to find, because files are shared between
> servers
> > > (deduplicated), but at least I can estimate it. But if there are
> multiples
> > > of usage displayed in statistics, then it's impossible to aproximate.
> > >
> > > I deleted all backups of this server, in hope, that it helps, but
> don't.
> > > :-(
> > >
> > > Thank for help.
> > >
> > > SAL
> > >
> > >
> > >


Re: [BackupPC-users] BackupPC 4.3.0 Backups not complete

2020-08-31 Thread Craig Barratt via BackupPC-users
You need to tell us more information.

What is the XferMethod?  If it's rsync, perhaps you have the
--one-file-system option, which causes it to back up only one of the file
systems?

Are you excluding certain files or directories?

If you browse the backup, it is obvious what is missing?

Craig

On Mon, Aug 31, 2020 at 8:50 AM Pim Rupert (Lemonbit) 
wrote:

> Hi Tshepo,
>
> Your screen shot shows that the most recent “full” backup was 4.3 days
> ago. I merely suggested that the extra data could be newer than the latest
> full backup.
>
> Incremental backups still contain all data. It’s just that the web
> interface only reports the size of the most recent full backup. So, if you
> thought this number was wrong, check again and let us know.
>
> Best regards,
>
> Pim
>
> On 31 Aug 2020, at 17:27, Tshepo Msimango via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
>
> 
>
> Hi Pim
>
>
>
> Should BackupPC not backup the whole share if I specify or start a full
> backup in the first place?
>
>
>
> Regards
>
> Tshepo Msimango
>
>
>
> *Tshepo Msimango*
> *Cloud Engineer - Subscriber Management* at *Vox*
> --
> *T:*  087 805  | *D:* 087 805 1288
> *M:* N/A
> *E:* tshepo.msima...@voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
>
>
> From: Pim Rupert 
> Sent: 31 August 2020 16:02
> To: General list for user discussion, questions and support <
> backuppc-users@lists.sourceforge.net>
> Subject: Re: [BackupPC-users] BackupPC 4.3.0 Backups not complete
>
>
>
> Hello Tshepo,
>
> The full size in the web interface shows the size of the last FULL backup.
>
> The data may be relatively new and not yet part of the last full backup.
> Could you check that?
>
> Best regards,
>
> Pim Rupert
>
> > On 31 Aug 2020, at 15:38, Tshepo Msimango via BackupPC-users <
> backuppc-users@lists.sourceforge.net> wrote:
> >
> >
> >
> > Hi BackupPC Users
> >
> > I am not seeing all the data on the BackupPC server. The client being
> backed up is 786GB in size; however, the Full size on the CGI interface
> shows 177.49GiB. Even if the data is compressed it should show a higher
> value, plus some of the directories being backed up are empty.
> >
> > /dev/sdh1 ext4 197G 160G 28G 86% /var/lib/mariadb/backup
> > /dev/sdd1 ext4 99G 2.4G 91G 3% /var/lib/mariadb/db2
> > /dev/sdc1 ext4 99G 79G 15G 85% /var/lib/mariadb/db1
> > /dev/sdf1 ext4 99G 82G 12G 88% /var/lib/mariadb/db3
> > /dev/sdg1 ext4 99G 28G 67G 30% /var/lib/mariadb/db4
> > /dev/sda1 xfs 497M 167M 330M 34% /boot
> > /dev/sdi1 ext4 197G 518M 187G 1% /var/lib/mariadb-log/db
> > /dev/sde1 ext4 493G 437G 31G 94% /var/lib/mariadb/db
> >
> > [root@VoxCore-MySQL ~]# du -sch /var/lib/mariadb*
> > 786G /var/lib/mariadb
> > 454M /var/lib/mariadb-log
> > 0 /var/lib/mariadb-log1
> > 786G total
> >
> > 
> >
> > 
> >
> > Regards
> >
> >
> > Tshepo Msimango
> > Cloud Engineer - Subscriber Management at Vox
> > T: 087 805  | D: 087 805 1288
> > M: N/A
> > E: tshepo.msima...@voxtelecom.co.za
> > A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> > www.vox.co.za
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >

Re: [BackupPC-users] Inconsistent backup size

2020-08-31 Thread Craig Barratt via BackupPC-users
I've tried to suggest a couple of reasons that could explain what you are
seeing, based on very incomplete information.

Without you confirming what the issue actually is, your conclusion is
already that it's "absolutely buggy" and "useless".  To pick another
example, rsync -aHv will also report a total file size that is the sum of
the hardlink file sizes, and it also reports the actual bytes transferred
and the speedup.  So by your logic, does that mean rsync is also
"absolutely buggy" and "useless"?

I'd recommend you actually understand the issue, and then decide what the
best options are.

If your question is "is there a reasonable way to apportion pool usage among
backup clients?", you are asking a question that doesn't have a simple
answer, because of hardlinks on the clients and pooling among all the
clients.

That said, it wouldn't be too hard to write a script that reads the
reference counts for a client (which includes all the backups for that
client), and apportions the pool file sizes to that client based on the
ratio of its own reference count to the total pool reference count for each
pool file.  But that's just one way of doing it.  And commercial
auditing/billing tools are well out of scope for BackupPC, but you are most
welcome to contribute anything you develop.
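The apportioning arithmetic can be illustrated with made-up numbers: a 100 MB pool file referenced 4 times in total, 3 times by host A and once by host B, would charge 75 MB to A and 25 MB to B. A toy sketch (real reference counts live in each host's refCnt files, which this does not read):

```shell
# Toy example only: apportion one pool file's size by reference-count ratio.
awk 'BEGIN {
  size = 100; refA = 3; refB = 1; total = refA + refB
  printf "A: %.0f MB, B: %.0f MB\n", size * refA / total, size * refB / total
}'
# prints: A: 75 MB, B: 25 MB
```

Summing that charge over every pool file gives each client's share of the pool.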

Craig

On Mon, Aug 31, 2020 at 9:38 AM Ján ONDREJ (SAL)  wrote:

> Hello,
>
  thanks for the explanation, but how can I check in backuppc which user
uses how much disk space of my backuppc storage? As it is, the File Size
counter is absolutely buggy.
>
  I need to check which backup uses most of my space, and to find where I
should exclude more files. But there is no information that I can use. New
files are only new files; it doesn't count how many files there are. In the
totals, hardlinked files are counted multiple times, which ends up as 10x
more space usage in this counter than is real.
>
  This makes the total files counter useless; it's only useful for Windows
users, who don't use hardlinks.
>
> SAL
>
> On Mon, Aug 31, 2020 at 09:09:02AM -0700, Craig Barratt via BackupPC-users
> wrote:
> >  That file is a hardlink, not a symlink. In the backup stats, each
> instance
> > of a hardlink is counted towards the total file size.
> >
> > If your file system has a lot of hardlinks, perhaps that's why the
> reported
> > number is higher than you expect?
> >
> > Craig
> >
> > On Mon, Aug 31, 2020 at 12:40 AM Ján ONDREJ (SAL) 
> > wrote:
> >
> > > Hello,
> > >
> > > On Mon, Aug 31, 2020 at 12:08:50AM -0700, Craig Barratt via
> BackupPC-users
> > > wrote:
> > > > Does your backup include sparse files?
> > >
> > > I think no.
> > >
> > > > Look in the XferLOG file to see the sizes of individual files - it
> > > > shouldn't be too hard to spot one that is large.
> > >
> > > There is no one large file. As I wrote, restored backup is not so large
> > > too.
> > > But you pointed me to right plate. I see this line in XferLOG:
> > >
> > > newrecv hf..tpog... rw-r--r-- 1000,1000  25089367
> > > var/www/public/media/598522/catalogue.pdf =>
> > > var/www/private/import/docs/catalogue.pdf
> > >
> > > This is a symlink and it's size is counted as 25089367.
> > > According to "=>" symbol, this symlink is properly identified as
> symlink,
> > > but it's size is stored as symlinks target file. This is why backup
> > > size is larger than my filesystem. Can this be fixed?
> > >
> > > SAL
> > >
> > > > On Sun, Aug 30, 2020 at 11:51 PM Ján ONDREJ (SAL) <
> ondr...@salstar.sk>
> > > > wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > >   I am using BackupPC for years, but after update to v4 (4.4.0),
> some
> > > > > backups have inconsistent size dislayed in "File Size/Count Reuse
> > > Summary"
> > > > > table.
> > > > >
> > > > >   This is my servers directory, which should be in backup:
> > > > >
> > > > > Filesystem Size  Used Avail Use% Mounted on
> > > > > /dev/md0   4.0G  3.0G  1.1G  75% /
> > > > > /dev/mapper/vg_server-www  200G  119G   82G  60% /var/www
> > > > >
> > > > > There is no other filesystem mounted, binded to
