Re: [BackupPC-users] BackupPC administrative attention needed email incorrect?

2018-10-30 Thread Holger Parplies
Hi,

Jamie Burchell wrote on 2018-10-30 09:31:13 - [[BackupPC-users] BackupPC 
administrative attention needed email incorrect?]:
> [...]
> Yesterday, I received the following email from the BackupPC process:
> [...]
> > Yesterday 156 hosts were skipped because the file system containing
> > /var/lib/BackupPC/ was too full.  [...]
> 
> The email was correct in that disk space was low, but the number of
> reported "hosts skipped" doesn't seem right. I have 39 hosts, 152 full
> backups and 952 incrementals. The email says they were skipped, but there
> are no gaps that I can see in any of the backups. Just wondering if this is
> a bug.

without looking into the code, 156 looks like 4 * 39 - could it be that after
4 wakeups disk space had dropped low enough for backups to resume (through
backup expiration or someone deleting something from the partition)? That
would explain why there is no gap. You might just find that the backups
happened slightly later than you would normally expect.

Hope that helps.

Regards,
Holger




Re: [BackupPC-users] Weird backup commands

2018-10-30 Thread Holger Parplies
Hi,

Jaime Fenton wrote on 2018-10-24 16:17:24 + [Re: [BackupPC-users] Weird 
backup commands]:
> Also, my apologies if my tone came across as snippy or rude [...] I'm just
> frustrated with the issues I'm having [...]

well ...

> [...]
> If I thought samba was not going to work, then I would dump it.

Believe it or not, Samba can *never* work properly, presuming you want both
incremental and exact backups. As Craig pointed out, Samba can also *never*
give you the in-/exclude options rsync does. If you're fine with all of that,
then go ahead. The only benefit Samba gives you is presumably ease of setup,
and you're apparently not getting much out of that, because it doesn't "just
work". You're going to have to put in some work one way or the other. There
is simply no way around that.

> [...]
> Out of curiosity, has anyone else had a problem with samba 4.7.1? I did see
> one mention in a forum somewhere that their issues were resolved with samba
> 4.7.3. Thoughts?

It seems highly unlikely that one Samba version changes the command line
BackupPC passes to it. Much more likely is an issue with your BackupPC
configuration. That is why Craig asked:

> [...]
> What are your settings for $Conf{SmbClientFullCmd}, $Conf{SmbShareName},
> $Conf{BackupFilesOnly} and $Conf{BackupFilesExclude} on the two machines?

So, my thought is: answer the question. Without that information, the only
help we can give you is, "there's probably something wrong with your
configuration".

I might add that each of these settings can individually be global or
host-specific. How do you manipulate the settings? Do you edit the config
file or use the web interface?

> [...]
> ??? Good example: Running: /usr/bin/smbclient 
> computername.domainname\\C\$ -U 
> AL\\backuppc -E -d 1 -c tarmode\ full -TcN /data/backuppc/pc/ 
> computername.domainname /timeStamp.level0 - Users\\username\\AppData Users\\ 
> username \\Documents Users\\ username 
> \\Downloads Users\\ username \\My\ 
> Documents Users\\ username \\Desktop
> 
> ??? Error example: Running: /usr/bin/smbclient  
> computername.domainname \\C\$ -U AL\\backuppc -E -d 1 -c 
> tarmode\ full -Tc - Users\\\*\\AppData\\\* Users\\\*\\Documents\\\* 
> Users\\\*\\Downloads\\\* Users\\\*\\Desktop\\\* Users\\\*\\My\ Documents\\\*

This seems to have been badly mangled both by your MUA and by yourself - I'm
guessing the spaces around "computername.domainname" and "username" are not
really there (they're not escaped, so they would separate arguments,
basically meaning the good example command would be nonsense, even with the
file tags removed).

In any case, the confusion Windoze path separators introduce is a pain. I'm
not sure this is better with rsync, but it probably is - at least you *should
be* able to consistently use "/", which doesn't need escaping. Oh, and you
can use both in- and excludes with rsync. You can't with Samba. You know that,
right?
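
For illustration only - not your actual configuration - here is roughly what
a per-host rsync setup with forward-slash in-/excludes could look like (share
name and paths are invented; whether rsync is even practical on these Windows
clients is a separate question):

    $Conf{XferMethod}         = 'rsync';
    $Conf{RsyncShareName}     = [ '/cygdrive/c' ];         # hypothetical share
    $Conf{BackupFilesOnly}    = {
        '/cygdrive/c' => [ '/Users' ],                     # keys are share names
    };
    $Conf{BackupFilesExclude} = {
        '/cygdrive/c' => [ '/Users/*/AppData', '/Users/*/Downloads' ],
    };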

My reaction, at least, is that I'm not taking a closer look at any of that,
because it's bogus, confusing, and irrelevant. We need the configuration
details.

Hope that helps.

Also, my apologies if my tone came across as snippy or rude, I'm just
frustrated with questions (and followups) that prevent the possibility
of meaningful help.

Regards,
Holger




Re: [BackupPC-users] fatal error during xfer (tar:632)

2018-10-12 Thread Holger Parplies
Hi,

Paul Littlefield wrote on 2018-10-12 12:22:48 + [Re: [BackupPC-users] fatal 
error during xfer (tar:632)]:
> On 11/10/2018 23:44, Craig Barratt wrote:
> >[...]
> >What version of tar are you using?  Can you run gnu tar instead (that's the 
> >default under cygwin)?
> >[...]
> 
> $ apt-cache policy tar
> tar:
>   Installed: 1.28-2.1ubuntu0.1
>   Candidate: 1.28-2.1ubuntu0.1
>   Version table:
>  *** 1.28-2.1ubuntu0.1 500
> 500 http://gb.archive.ubuntu.com/ubuntu xenial-updates/main amd64 
> Packages
> 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 
> Packages
> 100 /var/lib/dpkg/status
>  1.28-2.1 500
> 500 http://gb.archive.ubuntu.com/ubuntu xenial/main amd64 Packages

you're running Ubuntu on the Windoze host? Amazing ...

Regards,
Holger




Re: [BackupPC-users] 4.2.1 Rsync and BackupFilesOnly

2018-09-27 Thread Holger Parplies
Hi,

Kris Lou via BackupPC-users wrote on 2018-09-27 15:22:20 -0700 
[[BackupPC-users] 4.2.1 Rsync and BackupFilesOnly]:
> [...]
> Need More Eyes!

8-)

> $Conf{BackupFilesOnly} = {
>   '*' => [
> '/SysData'
>   ]
> };

This is translated by BackupPC to what you see below - --include=/SysData
--exclude=/* - because rsync has no notion of "only". "Only" means "include
/SysData but exclude everything else".

> [...]
> /usr/bin/rsync_bpc --bpc-top-dir /data/BackupPCPool --bpc-host-name
> shares.axlmusiclink.biz --bpc-share-name /sharedfolders --bpc-bkup-num
> 0 --bpc-bkup-comp 3 --bpc-bkup-prevnum -1 --bpc-bkup-prevcomp -1
> --bpc-bkup-inode0 2 --bpc-attrib-new --bpc-log-level 3 -e
> /usr/bin/ssh\ -l\ root --rsync-path=/usr/bin/rsync --super --recursive
> --protect-args --numeric-ids --perms --owner --group -D --times
> --links --hard-links --delete --delete-excluded --one-file-system
> --partial --log-format=log:\ %o\ %i\ %B\ %8U,%8G\ %9l\ %f%L --stats
> --checksum --timeout=72000 --include=/SysData --exclude=/\*
> shares.axlmusiclink.biz:/sharedfolders/ /
> 
> [...]
> If I change the share name to "/sharedfolders/SysData" and blank out the
> Only/Excludes, then it'll dump it correctly.

Let me guess. /sharedfolders/SysData is a mount point, right? You have
--one-file-system in the rsync args, so starting from /sharedfolders, rsync
won't recurse into a different file system. If you start at
/sharedfolders/SysData, you're already on the target fs, so everything will
work as expected. Try removing the --one-file-system.
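
If --one-file-system comes from the global $Conf{RsyncArgs}, one way to drop
it for just this host is to override the list in the host's .pl file. A
sketch only - the options below are copied from the command line you posted,
up to --stats and minus the one in question (copy yours from your own
config.pl):

    # host-specific config file (e.g. shares.axlmusiclink.biz.pl):
    $Conf{RsyncArgs} = [
        '--super', '--recursive', '--protect-args', '--numeric-ids',
        '--perms', '--owner', '--group', '-D', '--times', '--links',
        '--hard-links', '--delete', '--delete-excluded',
        # '--one-file-system',    # omitted so rsync crosses the mount point
        '--partial', '--log-format=log: %o %i %B %8U,%8G %9l %f%L', '--stats',
    ];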

Regards,
Holger




Re: [BackupPC-users] Backuppc | Got fatal error during xfer (aborted by signal=PIPE)

2018-08-06 Thread Holger Parplies
Hi,

jerome winmedia wrote on 2018-08-06 15:29:45 +0200 [[BackupPC-users] Backuppc | 
Got fatal error during xfer (aborted by signal=PIPE)]:
> I am running backuppc version 3.3.1 on an Ubuntu 16.04.5 LTS, I try to
> backup several Linux servers, almost Debian, ubuntu or proxmox hosts.
> 
> My issue is that when I run the Full/Incermental backup from webUI I got
> the message:
> 
> 2018-08-06 14:22:32 full backup started for directory /var/lib/vz/dump/dump
> (baseline backup #1) 2018-08-06 14:22:33 Aborting backup up after signal
> PIPE 2018-08-06 14:22:34 Got fatal error during xfer (aborted by
> signal=PIPE) <

this is a very generic error message, about as specific as "something went
wrong", but it turns out to be quite obvious, if you look close enough (and
thank you for providing all the necessary information for making that
possible!):

> [...]
> Running: /usr/bin/ssh -o ServerAliveInterval=300 -q -x -l root HOST 
> /usr/bin/rsync --server --sender --protocol=29 --numeric-ids --perms --owner 
> --group -D --links --hard-links --times --block-size=2048 --ignore-times . 
> /var/lib/vz/dump/dump/

You ignored the comment about $Conf{RsyncArgs} - "Do not edit the first set
unless you have a thorough understanding of how File::RsyncP works". I
strongly doubt you should be setting "--protocol=29", but thank you for
doing so and thereby making me have a closer look at the rsync arguments.
"--protocol=29" is probably not the problem *here* (it may well be if that
is not what would be automatically selected anyway), but rather what is
*missing*: "--recursive". If you continue looking, you'll notice:

> Remote[2]: skipping directory .

... and with that the backup was finished, as far as the remote rsync was
concerned. It was supposed to non-recursively back up a directory, which
means "do nothing". It did exactly that. BackupPC, on the other hand, was
expecting more than this.

Re-add "--recursive" to $Conf {RsyncArgs}. I doubt you meant to remove it
in the first place. Better yet, don't set $Conf {RsyncArgs} in your HOST.pl
and don't change the default in config.pl. There should very rarely be any
reason to do so (except for adding "--checksum-seed=32761", and even that
might be better placed in $Conf {RsyncArgsExtra}).
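
In other words, if you do want the checksum caching, something like this in
HOST.pl (a sketch), leaving $Conf{RsyncArgs} itself untouched:

    $Conf{RsyncArgsExtra} = [
        '--checksum-seed=32761',    # optional checksum caching
    ];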

> [...]
> When I run the rsync command manually, the backup is started correctly:
> 
> rsync -av -e ssh root@HOST:/var/lib/vz/dump/dump 
> /mnt/backuppc/backuppc/pc/pHOST/2<

Well, yes, it's a different command, isn't it? Note that you should *not*
really be using a BackupPC host directory as target for a plain rsync command.
That is *not* the same as what BackupPC does! Be sure to remove that directory
after testing.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Slow local backup

2018-06-20 Thread Holger Parplies
Hi,

Carl W. Soderstrom wrote on 2018-06-14 16:03:24 -0400 [Re: [BackupPC-users] 
Slow local backup]:
> On 06/14 03:38 , Bowie Bailey wrote:
> > On 6/14/2018 3:27 PM, Michael Stowe wrote:
> > > Why are you using rsyncd over the loopback instead of ... rsync?
> > 
> > Mainly because that's the way all of my other clients are being backed
> > up [...]
> 
> I've always used tar for local backups. The advantage of rsync is greater in
> bandwidth-constrained environments because it saves moving whole files over
> the network. However, if the file needs to be read anyway to see if anything
> has changed, then nothing is saved because the local machine is the same as
> the remote machine.

well, mostly true. You still save copying large amounts of data from one
process address space to another, and possibly some context switches. While
that may not make rsync *faster* than tar on local backups, it might mean
it's not much slower. It probably depends on your setup. And it probably
has low enough impact not to worry about it.

> I may be incorrect about some of my understanding here, I know rsync does a
> few things which tar does not, but which slip my brain at the moment.

tar only has a single timestamp to go by (for incremental backups), rsync has
a complete file list. This means that tar will miss *all file deletions* -
deleted files will continue to show up in your backups until the next full -
as well as renamed or moved files and probably files created with an old
timestamp (eg. 'touch --date', 'touch --reference', 'cp -p', 'unzip', 'tar x'
etc.). This is important!
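
A quick way to convince yourself (a made-up demonstration with GNU tar -
roughly what a timestamp-based incremental, e.g. via $Conf{TarIncrArgs},
boils down to):

    # file created *now*, but carrying last year's timestamp:
    touch --date '2017-06-14' important.dat
    # "give me everything modified since the last backup":
    tar -cf incr.tar --newer-mtime '2018-06-01' .
    tar -tf incr.tar | grep important.dat   # nothing - the file is silently missed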

*tar incrementals will not be exact snapshots*!

Well, neither will rsync backups, unless you use a snapshot of the underlying
file system (or your file system is quiescent during backup), but that's a
different matter. How much effort you should put into getting exact backups
depends on what you expect to get out of them.

* For restoring the occasional bunch of files from a backup because someone
  deleted them or messed them up, tar backups are fine.

* For restoring a complete file system in the state it was in before it failed,
  you will want rsync backups, possibly with snapshots (depending on the type
  of activity on the file system).

* If you're not sure what you need, but want to avoid surprises, go with
  rsync ;-).

> Also, some uses of rsync may be more efficient than this by only checking
> timestamps.

That sounds like rsync incrementals. They will catch

* files not in the backup,
* files with modified timestamps, and
* files with different size, even if the timestamp appears unchanged.

Only rsync fulls will actually read all files. My recommendation for local
backups: rsync(d).

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Slow local backup

2018-06-14 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2018-06-14 15:38:44 -0400 [Re: [BackupPC-users] Slow 
local backup]:
> On 6/14/2018 3:27 PM, Michael Stowe wrote:
> > On 2018-06-14 10:05, Bowie Bailey wrote:
> >
> > I just installed BackupPC v4 on a CentOS 7 server with 4G ram.  I
> > am trying to back up a local 318G filesystem.  I am using rsyncd
> > over the loopback connection.  It has been running for 17 hours so
> > far and has backed up less than half of the directory (based on
> > the size of the backup filesystem).  Running top does not show any
> > > excessive cpu or iowait.  'free' shows no swap usage and 1.5G
> > available memory.

if I'm not miscalculating, that is roughly 10GB/hour or 3MB/s. From what I've
read about BackupPCv3 performance, that wouldn't seem extremely unreasonable,
especially for a first backup. For V4, I have no idea what the common figures
are. The C implementation (rsync_bpc) might have a performance benefit. But I
would expect one core of your CPU to be almost 100% busy, while the others may
be idle. Depending on which figure you are looking at, that might not seem
excessive, but it's the most you can get for single threaded compression
performance (you *are* using compression, right?).

> [...]
> I use rsyncd rather than rsync to avoid the ssh overhead.

For a local backup, you can achieve the same by using 'sudo' instead of 'ssh'.
At least you could with V3. I've forgotten if and how you can use 'sudo' with
rsync in V4.

> I expected a backup done via the loopback interface to be fast since it
> doesn't have the normal networking bandwidth limitations.

Well, yes, but you still have a network stack and probably at least two copies
between kernel and user space. Loopback networking does not come for free. A
quick 'netperf' test gives me a bit less than 9 Gbit/s throughput. That's
certainly faster than Gbit networking, but only by a factor of 10.
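
(If you want to reproduce that kind of number: with the netserver daemon
running locally, something along the lines of

    netperf -H localhost

should do - throughput will obviously vary with your hardware.)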

More importantly, the other limitations don't change - disk and compression
speed, for instance. If your bottleneck is not the network, a faster network
won't change anything. Keep in mind that a local backup means that client
and server are the same computer, so a loopback backup might actually be
*slower* than a remote backup. Your source file system and BackupPC pool are
on different physical disks, hopefully?

> > Is it normal for the backup to take this long?
> >
> > While that's hard to guess without knowing the particulars of your
> > system, I'm going to go out on a limb and say, no. No it is not.

I believe it is still important whether it is the first backup or not. The
recommendation used to be "ignore the timing of the first backup - fix your
problem only if the second (or third) backup is still too slow". That would
still seem to apply to BackupPCv4.

> > Is there a better way to back up a local filesystem?
> >
> > Personally, I use rsync (not rsyncd) and in the cases where I have
> > experienced slowness, it was due to poorly chosen rsync parameters (I
> > note that this would not differ between rsync and rsyncd), a broken
> > filesystem, or a specific bug in a specific version of rsync.
> 
> How do you go about setting up rsync so that it does a local copy rather
> than going through ssh over the network?

I would try setting $Conf {RsyncSshArgs} = [ '-e', '/usr/bin/sudo' ], but
you might run into quoting problems. You could then try using a script
containing something like 'exec /usr/bin/sudo $@'. Or see if using 'ssh'
makes much of a difference at all ...
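
For what it's worth, a hypothetical wrapper sketch (untested; note that
rsync's '-e' passes the host name as the first argument, so the wrapper
throws it away - verify against your XferLOG that this matches what BackupPC
actually invokes on your version):

    #!/bin/sh
    # /usr/local/bin/rsync-local-sudo (name invented)
    # rsync calls this as: rsync-local-sudo <host> rsync --server --sender ...
    shift                      # drop the host name - we are running locally anyway
    exec /usr/bin/sudo "$@"    # run the rsync server command as root via sudo

with $Conf{RsyncSshArgs} = [ '-e', '/usr/local/bin/rsync-local-sudo' ] and a
matching sudoers entry for the backuppc user.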

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] backuppc 4.2.1 perl issue

2018-06-12 Thread Holger Parplies
Hi,

mlm wrote on 2018-06-12 17:20:06 -0400 [[BackupPC-users] backuppc 4.2.1 perl 
issue]:
> [...]
> New system running Scientific Linux 7.5, installed 4.2.1 from tar file
> 
> Global symbol "%In" requires explicit package name at 
> /usr/share/BackupPC/sbin/BackupPC_Admin line 90.
> [...]
> Bareword "BackupPC::CGI::Lib::NewRequest" not allowed while "strict subs" in 
> use at /usr/share/BackupPC/sbin/BackupPC_Admin line 56.
> [...]
> Any suggestions on troubleshooting further?

what do the first few non-comment lines of your BackupPC_Admin look like
(about 12, but I don't have 4.2.1 here right now, so that might have changed;
up to the first "my" variable declaration)? It would seem that the
BackupPC::CGI::Lib module is not correctly imported. Is the path in the
"use lib" pragma correct for your installation? What are the permissions on
the library file and its parent directories?

What Perl version are you running ('perl -v')?
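
For orientation, the top of BackupPC_Admin looks something like this (paths
are illustrative - your install will differ); it's the "use lib" path that
has to point at the directory actually containing BackupPC/CGI/Lib.pm:

    #!/usr/bin/perl
    use strict;
    use lib "/usr/share/BackupPC/lib";    # must match your installation
    use BackupPC::CGI::Lib qw(:all);      # exports %In, NewRequest(), ...
    BackupPC::CGI::Lib::NewRequest();

Both of your error messages are consistent with that import not taking
effect.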

Regards,
Holger



Re: [BackupPC-users] Out of inodes, how to recover?

2018-05-24 Thread Holger Parplies
Hi,

Tapio Lehtonen wrote on 2018-05-15 14:14:36 +0300 [[BackupPC-users] Out of 
inodes, how to recover?]:
> Backing up a single desktop computer, BackupPC host ran out of
> inodes at 15 million inodes used. Reading old messages from this
> list I was surprised to learn BackupPC uses only one (or two?)
> inodes per backed up file, no matter how  many hard links are used.

I believe the details depend on the BackupPC version you are using (4.x
vs. 3.x).

> I am thus at a loss to explain how come 15 million inodes is not
> enough for backing up this single desktop computer.

Explaining will only help if something is actually going wrong - which
might be the case (something not excluded that should be?).

It may be worth noting, though, that each *directory* in each backup will
also use up one inode, and possibly another one for an attrib file (in 3.x,
attrib files are pooled, in 4.x, I believe they aren't, but I'm not sure).
And it's more like "one inode per backed up version of each file", so
quickly changing data together with a large number of backups might be a
factor.

> [...]
> Question: The host is already out of inodes on backuppc partition,
> can it still remove old backups now obsolete since lower
> FullKeepCnt?

For 3.x, I can only imagine that you might need some few free inodes
for log files, the server unix domain socket, and a backup of a
'backups' file. Aside from that, expiring a backup is just deleting
files and directories - and for the directories, you immediately
regain a free inode. If nothing else helps, you can get yourself back
into business by deleting (part of) a directory tree that BackupPC
is supposed to expire. That will delete some directories and thus
free up inodes. For the actual expiration, BackupPC will look at some
meta data and then recursively delete the tree structure without
further inspecting attrib files, so it won't even notice that part of
the tree is already missing. To be safe, start somewhere below the root
and choose the correct backup (in particular, not a full backup with
dependent incrementals):

rm -r $TopDir/pc/host/num/f%2fmy%2fshare%2fname/ffoo

Again, you might not even need to worry about this. It may just work
by itself.

For 4.x, that is a good question. I could imagine the reverse delta
storage might cause problems on a full file system (storage or inode
wise), but I'm only guessing here. Deleting the oldest backups (i.e.
not intermediate ones) is probably safe.

> [...]
> From what I learned from reading discussions, copying the pool to
> larger disk with more inodes is not feasible.

That is not true. It will take its time, and it may require some thought.
cp, tar, rsync et al. may or may not work for you, BackupPC_copyPool likely
will, but it is rather experimental as in "not widely tested" (please ask
me if you're interested). And this is for 3.x. A 4.x pool should be easily
copied with the usual tools without any problems.

> [...]
> So it is time to
> start planning a new backuppc host and use lower bytes-per-inode or
> use a filesystem with dynamic inodes.

Whether a new host or just a new file system to copy the existing pool to
- I would recommend dynamic inode allocation, if just to avoid running into
the same problem again at a later point in time.
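
For the record, the relevant knob on ext4 is the bytes-per-inode ratio at
mkfs time, while e.g. XFS allocates inodes dynamically. Illustrative commands
only (device name invented - check your mkfs man pages):

    # ext4: one inode per 4 KiB of space instead of the default ~16 KiB
    mkfs.ext4 -i 4096 /dev/sdb1
    # or use a filesystem with dynamic inode allocation
    mkfs.xfs /dev/sdb1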

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Out of inodes, how to recover?

2018-05-24 Thread Holger Parplies
Hi,

Carl W. Soderstrom wrote on 2018-05-15 09:56:13 -0400 [Re: [BackupPC-users] Out 
of inodes, how to recover?]:
> [...]
> It is possible to run BackupPC_nightly by hand.

unfortunately true. You should never do that under any circumstances.

> [...]
> and here's an example usage (from memory, so this may be wrong):

The BackupPC daemon remembers the correct usage, and tends to ensure that
BackupPC_nightly is not run concurrently with BackupPC_link (or whatever
restrictions may apply for your version of BackupPC).

BackupPC_serverMesg BackupPC_nightly run

appears to be the correct invocation to *let the BackupPC server* run
BackupPC_nightly as soon as it is safe to do so. You will need to run
that as the backuppc user.
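
For example (path and invocation depend on your installation - a sketch, not
gospel):

    sudo -u backuppc /usr/share/BackupPC/bin/BackupPC_serverMesg BackupPC_nightly run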

Regards,
Holger



Re: [BackupPC-users] BackupPC 4.2.0 released

2018-05-03 Thread Holger Parplies
Hi,

Steve Palm wrote on 2018-05-03 09:55:38 -0500 [Re: [BackupPC-users] BackupPC 
4.2.0 released]:
> I think that would cover it here, as you said, if you give someone Admin
> rights, then they can alter any other settings. Only a
> compile-build-install-time option to totally remove it would eliminate this
> possibility.

well, not really. If you can change the host settings, you can probably change
backup expiry to keep only one or two backups, change the data set to only
include /tmp, for example, and then manually force one or two backups. It's
not as fast and easy as a "maliciously delete all backups" button, but if
we're talking about security, it doesn't have to be easy, just possible.
With root access to the BackupPC server, it's just a matter of 'rm -r',
really. If you give someone 'Admin' capabilities (for whatever definition
may be applicable), he can administratively break things. There is really
no way to tell a computer to let someone only do constructive things.
'gzip /etc/passwd' is a good thing, right? ;-)

For the 'home use' type scenario (as in "backup *PC*"), where people "own"
machines they backup and restore as they like, there may be some merit in
allowing them to delete backups on their own.

For the 'office' type scenario (as in "*Backup* pc"), I would expect an IT
department (or some member(s) of it) to be responsible for backups, and
*nobody* else to have any access to them. The access control mechanism in
BackupPC is just not fine-grained enough - if you can see any data within a
backup, you can see all of it. In this scenario, you probably won't ever
manually delete backups, and if you do, you'll do it through shell access
to the BackupPC server from the command line. So you'll have a gratuitious
"shoot myself in the foot" button in the web interface, nothing more.

There will always be people who use BackupPC somewhere in between those
scenarios, so yes, why not give them the option of deleting backups through
the web interface?

> It is a great feature to have, especially with some restrictions on
> availability. Thanks!
> 
> > On Apr 21, 2018, at 7:43 PM, Craig Barratt via BackupPC-users wrote:
> > 
> > I just pushed some changes [...] that add a new config variable
> > CgiUserDeleteBackupEnable (default off) which sets whether users can
> > delete backups via the CGI interface.

I agree that this makes sense (both the option and the default).

> >  Admins always have the delete feature enabled.

Absurdly, I'd suggest always *dis*abling the feature for admins. Well, no,
that doesn't make much sense, either. But it's so easy - even for admins -
to press the wrong button (just imagine an unresponsive browser or X server)
and then answer the confirmation dialog the wrong way. If a site has the
policy (or maybe even legal requirement) "we *never* manually delete backups",
they should be able to prevent this from happening accidentally (or
maliciously, if you prefer).

In fact, it's possible to disable direct restores, which can do great harm,
so I'd argue it should be possible to disable backup deletion, too.

Disclaimer: no, I haven't looked at the new version or its web interface, so
reality might be less problematic than the theory sounds. But even if it's
hard to shoot yourself in the foot, someone will manage ;-).

> > On Fri, Apr 20, 2018 at 11:05 AM, Craig Barratt wrote:
> > [...]
> > How about I add a configuration setting that has three values - completely
> > off, admin only, or any user?  The default setting could be admin only.

I would prefer that implementation. Personally, I'd make the default setting
"completely off", though I trust people really *wanting* that setting could
easily enough change it, if the default were different. In a way, the default
setting seems to be a recommendation. Is manually deleting backups that are
no longer needed something the average BackupPC admin should do, or was it
added for the sake of being able to easily fix commonly made mistakes without
creating more problems along the way?

Regards,
Holger



Re: [BackupPC-users] Serious error: last backup ... directory doesn't exist!!! - reason found

2018-03-13 Thread Holger Parplies
Hi,

f...@igh.de wrote on 2018-03-08 16:59:37 +0100 [Re: [BackupPC-users] Serious 
error: last backup ... directory doesn't exist!!! - reason found]:
> [...]
> Meanwhile I found the reason: the partition ran out of inodes. As you
> wrote under "How much disk space do I need?" one has to have "plenty
> of inodes". But what does that mean? 

as has been said, that depends directly on what you are backing up.

> May I ask the following:
> 
> - in the "General Server Information" you give some statistical
>   information about disk usage; would it be a good idea also to give
>   information about inode consumption? 

You mean 'df -i'?

> - is it possible and would it make sense to separate the "pc" and the
>   "pool/cpool" directories into different partitions? I just did an
>   rsync of a BackupPC-directory and found that the files on "pc" are
>   mostly empty or small. The file sizes in "pool/cpool" are remarkable
>   bigger - I assume these are the "real" files. So one could create
>   one partition for "pool/cpool" having about e.g. 64kB per inode and
>   another partition having a block size of 1 kB and also 1 kB per
>   inode. Maybe this would reduce disk space consumption and also allow 
>   rsyncing somewhat faster. 

My first thought is to avoid the issue altogether by using a file system
that doesn't statically allocate inodes (e.g. XFS or reiserfs, the latter
I wouldn't recommend for other reasons, though; I don't know about ext4,
btrfs and ZFS, but my guess would be that ext4 has static allocation and
the others dynamic). Why worry about a problem modern file systems simply
don't have?

Regards,
Holger



Re: [BackupPC-users] Best way to copy backuppc data

2018-02-03 Thread Holger Parplies
Hi,

Adam Pribyl wrote on 2018-02-02 08:09:47 +0100 [[BackupPC-users] Best way to 
copy backuppc data]:
> What is to best way to copy the BackupPC data?
> [...]
> Still its taking somewhat too long... should I use a different method?

BackupPC_copyPool ... I'll send it to you on Monday. Sorry, in a hurry right
now.

Regards,
Holger



Re: [BackupPC-users] Trouble with restore

2017-12-13 Thread Holger Parplies
Hi,

let's start over.

Jens Potthast wrote on 2017-12-13 14:01:46 +0100 [[BackupPC-users] Trouble with 
restore]:
> [...]
> I needed to restore a couple of files
> recently. The last time, I had to restore some files was years ago. At that
> time it was no problem. Right now, restore fails: 
> 
> Running: /usr/bin/smbclient server\\sharename -U backuppcuser -E -d 1 -c
> tarmode\ full -mSMB3 -Tx -
> Running: /usr/local/BackupPC/bin/BackupPC_tarCreate -h hostname -n 1886 -s
> sharename -t -r /Path\ to\ file -p /Path\ to\ file/filename.txt

you've got an error in your arguments, or you've misquoted from your log file
(or your version of BackupPC is broken). That may or may not be related, but
it's something to check. You are apparently using the web interface for a
direct restore to the client system.

a) Personally, I wouldn't do that, but you may have good reasons.
b) You are possibly filling in some incorrect values into the web form. The
   arguments to BackupPC_tarCreate are directing it to
   - remove "/Path to file" from the paths in the backup and
   - add"/Path to file/filename.txt" in front of all paths.
   Is that what you want? The "filename.txt" seems misplaced, and it will
   likely be difficult to create a directory of that name on the target
   system if there happens to be a file of that name present, which might
   explain your error (or not).
   Also, there are no file name arguments, which should cause
   BackupPC_tarCreate to terminate with an error message. Strange.
   Note: the '-r' and '-p' arguments wouldn't be created if the two paths
   were equal (at least the code says so).
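
For comparison (values purely illustrative), a command-line restore that
relocates files from "/Path to file" to "/tmp/restore" would look roughly
like this - note that -r strips the prefix from the stored paths, -p
substitutes the new one, and the files to restore are still listed as
separate arguments at the end:

    BackupPC_tarCreate -h hostname -n 1886 -s sharename \
        -r '/Path to file' -p '/tmp/restore' \
        '/Path to file/filename.txt' > restore.tar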

> [...]
> tarCreate: Done: 1 files, 237695 bytes, 0 dirs, 0 specials, 0 errors

That sounds as if it had worked ...

> tar:1596 Can't mkdir Path to filename: NT_STATUS_OBJECT_NAME_COLLISION

... and that doesn't mention "filename.txt".

What's in the RestoreInfo.xxx file for the restore attempt?

Regards,
Holger



Re: [BackupPC-users] Host Priority Option

2017-12-12 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2017-12-12 09:39:41 -0600 [Re: [BackupPC-users] Host 
Priority Option]:
> On Tue, Dec 12, 2017 at 8:19 AM, Gurvinder  wrote:
> >
> > I have been using BackupPc from past few months however I could not find
> > any option where I can prioritize hosts for backups from the list of hosts
> > we add. Let say i have 10 hosts added for backup, now I want specific
> > hosts to start backups earlier than other backup hosts.

"earlier" is not the same as "prioritize". You can achieve "earlier" with
different blackout windows. "Prioritize", on the other hand, is difficult
to even meaningfully define. If host A is "more important than" host B, does
that change when host A has had a successful backup every day for a whole
month and host B has never had a single one (i.e. are backups for host A
mandatory and for host B optional)? BackupPC assumes you want backups for
all your hosts and tries to "do the right thing". Put differently, what
situation are you envisioning where there is not enough capacity to backup
all hosts you are trying to backup? Maybe you need to fix the concept
(like add more backup servers, faster media, decrease backup volume, whatever)
instead of working around problems?

I do understand that there may be transient problems (like unusual amounts of
new data in your backup set) where regular backups of host A are more important
than of host B, but that sounds like "earlier", and will only fix itself
without manual intervention if it is the "more important" (i.e. "earlier")
hosts with the large amount of data (or if you let BackupPC proceed as it
is programmed to do, hopefully ;-).

Note that blackout windows might not work exactly as you expect. A running
backup will not be stopped when entering its blackout window, and the check,
I believe, is done when the backup is run, not when it is queued (queueing
happens according to the WakeupSchedule, running might happen any time
another backup finishes, subject to your MaxBackups setting, which could be
hours after queueing). In any case, an overly long backup for host B might
prevent a backup for host A from starting. I don't see any way within BackupPC
to prevent this. Again, this greatly depends on what kind of problems you
are expecting.

> I don't think there is a setting to force an order - they are just
> scheduled as the age of the previous backup hits the interval set
> between them.   You can manage it with a manual intervention, like
> starting a backup on the one(s) you want to happen first just before
> the blackout time ends.  If everything has the same schedule, these
> will continue to run first as long as everything completes in order.

Working with blackout windows and the periods, you can probably force this.
Let the blackout window for the "earlier" backup end at least one wakeup
earlier and keep the periods smaller so that it will run on the first wakeup
after the blackout even if it runs longer. Remember, it only needs to *start*
within the blackout window ...
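
A sketch of what that could look like (values invented) - let the "earlier"
host's blackout end one wakeup before everyone else's:

    # config.pl (global):
    $Conf{BlackoutPeriods} = [
        { hourBegin => 7.0, hourEnd => 20.0, weekDays => [1, 2, 3, 4, 5] },
    ];
    # hostA.pl (the host that should start first):
    $Conf{BlackoutPeriods} = [
        { hourBegin => 7.0, hourEnd => 19.0, weekDays => [1, 2, 3, 4, 5] },
    ];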

Finally, the order, in which hosts are queued, is defined something like:
* hosts with errors before hosts without errors
* hosts with older last good backups earlier
* alphabetically on host name (presuming the ages of the previous backups are
  identical, which seems rather unlikely unless there are no previous backups
  at all)

So host names might actually provide a way to start backups in the right
order on a new BackupPC installation :-) (hint: ClientNameAlias). And, of
course, to document your "priorities" and make it easy to check whether
things are backing up in the right order.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Error: Wrong user: my userid is 48, instead of 955(backuppc)

2017-12-12 Thread Holger Parplies
Hi,

fujisan wrote on 2017-12-12 13:08:08 +0100 [Re: [BackupPC-users] Error: Wrong 
user: my userid is 48, instead of 955(backuppc)]:
> [...]
> I have the following:
> -r 1 backuppc apache29 Sep 11  2015 /etc/httpd/conf/password.conf

that would seem to be a problem (unrelated), because authentication is probably 
done by Apache, not the BackupPC_Admin script ... ah ... it appears it won't
be, if you've changed the user for Apache (which I wouldn't do).

> -rwsr-x--- 1 backuppc apache  7128 Aug  2 18:51 
> /usr/libexec/BackupPC/BackupPC_Admin

Well, and /usr/libexec/BackupPC?

> in /etc/httpd/conf/httpd.conf
> User backuppc
> Group apache

Presuming setuid is working for Perl scripts (which it may not), there would
be no reason to change this. Even if setuid Perl scripts don't work, there
are probably better ways to do this. Is your web server running anything else
besides BackupPC?

Presuming you really *do* keep this, remove the setuid on BackupPC_Admin.

> [...]
> On Tue, Dec 12, 2017 at 11:46 AM, Jamie Burchell  wrote:
> > You need to configure Apache (/etc/httpd/conf/httpd.conf on CentOS 7) to
> > run under user 'backuppc':

Can't say I'd agree. Besides the obvious implications, that sort of puts you
back on square one, where you might have to change all sorts of things on your
system just to get Apache to run. Repeat on every security update of Apache.
You'd better know what you're doing, in which case you wouldn't be here.

It certainly is *a* possible solution, but it's neither elegant nor mandatory,
nor should it be suggested without warning about the many implications.

> [...]
> > -rwsr-x--- 1 apache apache 7128 Aug  2 18:51 
> > /usr/libexec/BackupPC/BackupPC_Admin

At this point, it was obvious, but you've figured that out. Did you mention
whether the error message stayed the same after you changed that?

> > I have changed the ownership of BackupPC_Admin to backuppc and the
> > permissions as well.:
> >
> > drwxr-xr-x 2 backuppc backuppc 4096 Dec 11 16:53 /usr/libexec/BackupPC/
> > -rwsr-x--- 1 backuppc backuppc 7128 Aug  2 18:51 /usr/libexec/BackupPC/
> > BackupPC_Admin
> >
> > And the log says:
> > [Tue Dec 12 10:50:58.607612 2017] [cgid:error] [pid 715:tid
> > 139944750074112] (13)Permission denied: AH01241: exec of
> > '/usr/libexec/BackupPC/BackupPC_Admin' failed

Well, yes, that's still obvious, because Apache has no execute permission.

> > Well, I cannot figure out how to setup backuppc.

And I can't figure out how to stop people from top-posting.

> > Any help welcome

Same here.

Regards,
Holger

P.S.: If your distribution's BackupPC package doesn't work with your
  distribution's Apache package and doesn't provide information on how
  to fix that, you should probably file a bug report.



Re: [BackupPC-users] Documentation version discrepancies

2017-11-18 Thread Holger Parplies
Hi,

Jamie Burchell wrote on 2017-11-18 16:23:41 + [[BackupPC-users] 
Documentation version discrepancies]:
> As I've been learning backuppc and studying the documentation I noticed
> that the documentation supplied in the Web UI is different to what's
> currently online. This has undoubtedly contributed towards my confusion as
> the terminology changes depending on which document I happen to have access
> to when I'm looking.
> 
> For example, this excerpt talking about full and filled backups. It sounds
> like filled backups are more of a legacy thing too.

quite the opposite, I believe. Filled backups didn't have much meaning before
BackupPC 4.x.

> Why not remove the docs from the web UI and point the menu to the latest
> online docs? I know this can be done in the admin, I mean by default.

Off the top of my head, I can think of two good reasons.

1.) Internet access is by no means a requirement of BackupPC. Supposing you
don't have internet access, the reference to the online docs won't work.
As long as you have access to the Web UI, documentation contained therein
will always work.

2.) There are different versions of BackupPC in use, and as development 
continues, there will be more of them. Documentation may change as
BackupPC changes and not be applicable to previous versions. While you
*can* reference distinct online versions of the documentation, you gain
exactly nothing. If you do that, you might as well distribute the online
documentation you would want to reference with the BackupPC version it
applies to.
What you are probably thinking of is an evolving documentation that gets
better while the installed BackupPC version stays the same. This evolving
documentation would need to account for differences in BackupPC versions,
thus becoming more and more complex and confusing - much the opposite from
what you are trying to achieve.

As for the pure information, there is simply so much of it out there, that
people don't find (or don't read) what they are looking for. I've explained
the difference between full and incremental backups here so many times, that
I'm simply tired of repeating it when it comes up once again. In a nutshell,
incremental backups are a trade-off of exactness for speed, invented in an
era when it was simply not feasible to do daily (tape-based) full backups.
There might be valid reasons even today for not doing full backups daily, but
the apparent general attitude of "I don't need exact backups, I just need fast
backups" begs for the answer "don't do any backups at all - that's as fast
as it gets, and it's not much worse than inaccurate backups". Of course, your
application may vary (and BackupPC incremental backups are in general
reasonably accurate, presuming you are using rsync(d)).

Regards,
Holger



Re: [BackupPC-users] error in rsync protocol data stream (code 12) (Restoring)

2017-11-16 Thread Holger Parplies
B wrote on 2017-11-16 00:50:52 +0100 [Re: [BackupPC-users] error in rsync 
protocol data stream (code 12) (Restoring)]:
> [...]
> In short: being root and (especially) removing directories is bad, on
> the other hand, using root as part of a controlled process doesn't mean
> that you'll be hacked or whatever - furthermore, doing some stuffs as
> root is compulsory for some maintenance work.

wrong. Not understanding a concept and giving advice about it is bad.

Jamie Burchell wrote on 2017-11-15 22:48:01 - [[BackupPC-users] error in 
rsync protocol data stream (code 12) (Restoring)]:
> [...]
> I followed the instructions to make a restricted backuppc user on client
> machines with limited sudo permission thus:
> 
> backuppc ALL=NOPASSWD: /usr/bin/rsync --server --sender *
> 
> This works fine for backing up, but I just discovered I can no longer
> restore directly,

That is by design. I believe this suggestion was originally mine, and the
intention is exactly to disable write access to arbitrary files via the
backuppc user. As you wrote, if you don't want that restriction, leave out
at least the '--sender' parameter. In this case, you might as well leave
out the parameters altogether and just allow /usr/bin/rsync (with any
parameters). In fact, I would even suggest narrowing down the allowed
command further yet, if that wasn't tedious to implement and maintain (and
error-prone, because you will, at some point, forget to adjust it to a
BackupPC configuration change).
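
Purely as an illustration of "narrowing down" (the argument strings are
invented - the real ones have to match what your XferLOG shows BackupPC
actually running, which is exactly why this is tedious and error-prone):

    # /etc/sudoers.d/backuppc - one entry per share, pinned to the share path
    backuppc ALL=NOPASSWD: /usr/bin/rsync --server --sender * . /etc/, \
                           /usr/bin/rsync --server --sender * . /home/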

The problem is not that BackupPC somehow guarantees that you will be hacked.
Fact is, if your BackupPC server *is* compromised, the attacker (local or
remote) gets a free passwordless login into all the clients. For B, that
is a free root shell. No problem (not mine, anyway). For you and me, it's only
an unprivileged user shell. With the '--server --sender' above, all that can
be done with that (without a further exploit) is reading all files (including
/etc/shadow -- that is basically why I would want to further restrict the
allowed command). Without '--sender', you get *write* access to /etc/shadow
(and everything else, of course), meaning you can change the root password.
Well, that obviously gives you a root shell again. There are tons of other
(more subtle) ways to do that, but this is the most obvious.

Also note that you don't even need to be exploited. As Les pointed out,
anyone who can trigger a direct restore can get this root shell. So, the
question is, is everyone who can trigger a direct restore *supposed to*
have root access to the client in question?

Les also mentions that direct restores are more error-prone than, e.g.,
downloading a tar file with the files you want from the backup. In my
experience, I often prefer to compare the current version with the contents
in my backup before overwriting it - it may not be retrievable afterwards
if it was modified after the last backup or turns out to have failed to be
backed up recently for any reason. So I tend to disable direct restores.
Should I ever need a complete restore, I'll do it from the command line
anyway. Of course, your mileage may vary.

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] Execute bash script after each share on backup server

2017-11-10 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2017-11-10 11:13:23 -0600 [Re: [BackupPC-users] Execute 
bash script after each share on backup server]:
> On Fri, Nov 10, 2017 at 10:11 AM, Julian Zielke
>  wrote:
> > I've written a script which should be run after each processed share of a
> > host.
> >
> > So I've tried using:
> > /bin/bash /etc/BackupPC/notify.sh -c $host -s $share -t $type -r $xferOK
> >
> > The log says it is being executed but also:
> >
> > Backup failed on server01 (DumpPostUserCmd returned error status 256)
> >
> > When I run the script using sh -c ? backuppc it runs fine.
> >
> > So what's my mistake here?
> 
> Your script needs to return an exit code of 0. [...] you can add a line
> 'exit 0' at the end to make backuppc consider it successful regardless.

that was my first thought, too, but there is a bit of confusion in the
question.

1. You are talking about running a script after each processed *share*, but
   the error message complains about DumpPostUserCmd, not DumpPostShareCmd.
   The remark about 'exit 0' (and all of what I'm saying) might apply to a
   different script, not your notify.sh, or you are doing something slightly
   different from what you are describing (or misquoting from memory).

2. You omit which BackupPC configuration variable you are setting to which
   value. It appears you might be setting

   $Conf{DumpPostShareCmd} = '/bin/bash /etc/BackupPC/notify.sh -c $host ...';

   but then again, you might be trying the 'bash notify.sh' invocation on the
   command line with the variables $host, $share, $type, $xferOK quoted here
   instead of the actual values for clarity (or secrecy) (or because you are
   actually using shell variables :).

   In any case, I wonder why you are explicitly invoking a shell instead of
   using a "#!/bin/bash" line in notify.sh. With that line (and execute
   permission on notify.sh), things would be more consistent than what you
   are describing above. [There might be valid reasons like noexec mounts
   preventing you from doing so.]

3. You say "When I run the script using sh -c ? backuppc it runs fine".
   First observation: 'sh' is not necessarily 'bash'. Debian made a point of
   teaching the world that a while back. When you mean 'bash', you should say
   'bash', and when you say 'sh', you should be sure you aren't using bashisms.
   Second observation: I don't understand the 'backuppc' in there. Did you
   actually mean 'su' instead of 'sh'?

4. You mention "bash notify.sh" vs. "sh -c notify.sh". Note that the semantics
   of

   ./notify.sh
   bash notify.sh
   bash -c notify.sh

   might be subtly different. I'd recommend selecting one version (the order
   says something about my preference; yours may differ), making sure it
   works, and then sticking to it.

   The point being: an "error status 256", if I'm not mistaken, might as well
   be a simple failure to execute the command at all. BackupPC might be
   telling you "I'm about to execute the following command: ..." followed by
   "It didn't work." - without being able to distinguish between "it failed
   to execute" and "it executed but returned a failure". If you want to be
   sure, put something like "echo `date`: here I am >> /tmp/notify.log" at
   the beginning of notify.sh, supposing you don't already have a different
   way of making sure it is really executed (I'm guessing you don't, because
   you say that the log tells you it is being executed, not "I'm getting
   the notification, so I know it is being executed").

   If it's not executed, it's likely about file system permissions (of
   notify.sh or of /etc/BackupPC) or perhaps selinux ...

If you need more help, you'd need to be more verbose on your BackupPC
configuration settings and log file content. Also, file permissions as noted
above and the first line of notify.sh would be interesting ...

Then again, maybe it's just the missing 'exit 0'.
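
Putting the pieces together, a minimal notify.sh skeleton along those lines
(illustrative only):

   #!/bin/bash
   # /etc/BackupPC/notify.sh - must be readable and executable by the backuppc user
   echo "`date`: called with: $*" >> /tmp/notify.log   # proof it actually ran
   # ... do the real notification work here ...
   exit 0                                  # never report failure back to BackupPC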

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] how to solve an error exteracting error

2017-11-09 Thread Holger Parplies
Hi,

Michael Stowe wrote on 2017-11-09 05:47:01 + [Re: [BackupPC-users] how to 
solve an error exteracting error]:
> On 2017-11-08 13:40, Mohammad Goodarzi wrote:
> >hi,
> >
> >I get this error when I do ssh -l root myservice service ssh reload
> >
> >ssh: unrecognized service
> 
> ssh is [probably] telling you that "service" isn't what your distro
> uses to reload services.

actually, I think "service" is correct, but "ssh" isn't. Might be "openssh"
or "ssh-server" or "i-named-this-script-secure-remote-login-service". Hint
to the original poster: "ls /etc/init.d" (supposing that is where your system
keeps init scripts) ... (*) (**)

> Since you haven't mentioned your Linux
> flavor -- or even *whether* you're using Linux -- it's very
> difficult to steer you in the direction of what exact command will
> reload ssh on the remote server.

Given the large amount of missing detail, it's not even sure PermitRootLogin
was the cause of the problem - it's just one possibility. With ":set terse"
I seem to have not mentioned that.

All of that said, reloading your ssh server configuration isn't *strictly*
a BackupPC issue. I'm glad I didn't suggest "vi /etc/ssh/sshd_config" as I
was first intending to ...

Regards,
Holger

(*) For consistency, I should have said "ssh -l root myserver ls /etc/init.d".
(**) Disclaimer: if your system uses systemd, things might be different. I
 won't know until the last way around using systemd disappears.



Re: [BackupPC-users] how to solve an error exteracting error

2017-11-08 Thread Holger Parplies
Mohammad Goodarzi wrote on 2017-11-08 14:54:41 -0500 [[BackupPC-users] how to 
solve an error exteracting error]:
> [...]
> Running: /usr/bin/ssh -q -x -l root myservername /usr/bin/rsync
> --server --sender --numeric-ids --perms --owner --group -D --links
> --hard-links --times --block-size=2048 --recursive --ignore-times .
> /etc/
> [...]
> Read EOF: Connection reset by peer
> [...]
> Got fatal error during xfer (Unable to read 4 bytes)
> Backup aborted (Unable to read 4 bytes)
> [...]

ssh -l root myservername perl -pi -e \''s/^(\s*PermitRootLogin\s+)no\s*$/${1}yes\n/'\' /etc/ssh/sshd_config
ssh -l root myservername service ssh reload

> Thanks

Welcome.

If you need more detailed help, consider asking a more detailed question.



Re: [BackupPC-users] Backing up Windows using rsync

2017-09-21 Thread Holger Parplies
Hi,

just for the record,

Jeffrey West via BackupPC-users wrote on 2017-09-19 12:10:56 -0400 [Re: 
[BackupPC-users] Backing up Windows using rsync]:
> [...]
> 
> $Conf{XferMethod} = 'rsyncd';
> $conf{RsyncShareName} = 'c-drive';

you need to assign to the correct variable - it's %Conf ... %conf is simply
a different (unused) variable. The assignment itself is valid, it just doesn't
do what you want.

> [...]

Jeffrey West via BackupPC-users wrote on 2017-09-19 15:21:11 -0400 [Re: 
[BackupPC-users] Backing up Windows using rsync]:
> [...]
> The following is what works.  The RsyncShareName needed [ ]

Your conclusion is incorrect. It needed a capital C. The array form is
optional. See the comments in config.pl.

> $Conf{RsyncShareName} = [
>   'c-drive'
> ];

That works, as would have

  $Conf {RsyncShareName} = 'c-drive';

Regards,
Holger



Re: [BackupPC-users] Question about transient inodes

2017-05-30 Thread Holger Parplies
Hi,

Ray Frush wrote on 2017-05-30 16:53:18 -0600 [Re: [BackupPC-users] Question 
about transient inodes]:
> Holger-
> 
> Thanks for the followup. That's not exactly the answer I was expecting
> based on the behavior we experienced.   Once all of the initial 'full'
> backups of our systems were taken, the inode situation seems to have calmed
> down.

the point is you don't "allocate inodes" - if I'm not totally mistaken,
there's no syscall "reserve XXX inodes for me for later use". You open files
for writing - one at a time - allowing (or requiring) the system to create
the files in the process. There is a system limit to the number of files that
can be open at any one point in time (the reason probably being that kernel
memory needs to be statically reserved - some number of bytes for each), which,
as far as I can tell, is nowhere near the numbers you quoted. The number of
files a single process can open simultaneously is, I believe, lower yet. In
any case, to "allocate" millions of inodes, a process would have to open as
many files, closing them again to make space for new ones. The process would
have to keep track of all the names of those files (that is, many millions
multiplied by the average file name length, in bytes of memory; quite
noticeable on an average system) and then "quickly" delete
them again. My guess is that both creating and deleting those files would
take a considerable amount of time, depending on the filesystem (let's hope
they're at least not all in one directory ...). Above all, it would be quite
pointless.

I don't know what your NFS system is doing or showing, but I'd presume it is
in some way responsible for what you were seeing. How do you "set the maximum
number of inodes" for an NFS filesystem, and what do you mean by "pre-allocated
inodes"? Were you migrating a V3 pool from which some pool files have now
expired?

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Question about transient inodes

2017-05-30 Thread Holger Parplies
Hi,

Ray Frush wrote on 2017-05-23 15:37:36 -0600 [[BackupPC-users] Question about 
transient inodes]:
> [...]
> Can a developer comment on under what conditions BackupPC might be
> temporarily allocating a lot of extra inodes, and then quickly releasing
> them?

none.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Updated BackupPC 4 Debian Packages

2017-05-30 Thread Holger Parplies
Hi,

(shouldn't this really be on backuppc-devel?)

Ludovic Drolez wrote on 2017-05-30 20:45:35 +0200 [Re: [BackupPC-users] Updated 
BackupPC 4 Debian Packages]:
> > On Fri, May 26, 2017 at 10:34:11PM -0700, Craig Barratt wrote:
> > >No, rsync-bpc isn't usable without BackupPC.

stupid question: should it even be installed in /usr/bin then?

> > > [...]
> > >The main upgrade risk area is around rsync config parameters and 
> > > arguments
> > >not being compatible between 3.x and 4.x. Configure.pl tries to
> > >extract $Conf{RsyncSshArgs} (a new 4.x setting) from the
> > >old $Conf{RsyncClientCmd} setting.

As far as I can tell, an automatic conversion is not always possible. For
simple cases, it's easy enough. Varying orders of ssh command line options
make things complicated. And in the general case, RsyncClientCmd could be
virtually *anything* that leads to a connection to something that emulates
an rsync protocol. I'm not sure RsyncSshArgs can be as flexible, or at least
that this can be achieved by an automated configuration translation.

Also, I believe configure.pl doesn't handle host configuration files, and I
would assume that doing so in postinst would violate policy, because host
configuration files don't belong to the package, do they?

Aside from that, there is no longer an RsyncClientRestoreCmd, so part of the
formerly possible configuration simply does not translate.

Finally, the configuration file may contain arbitrary Perl code for
determining the value of RsyncClientCmd (or anything else, for that matter),
defeating conversion as with the web configuration editor.

Thinking about it, for config.pl, simply including a new version would leave
it up to the user to resolve the differences between his local version and
the new version, wouldn't it?

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Limit Disk Space for a Host

2017-05-30 Thread Holger Parplies
Hi,

Oliver Lippert wrote on 2017-05-30 22:32:49 +0200 [[BackupPC-users] Limit Disk 
Space for a Host]:
> [...]
> is there any way to set an „limit“ to an Host?

no.

> I want to backup an Host but want to make sure that all his backup-files max
> reach 200GB.

Even the concept of the storage space used up "by one host" is difficult to
define. Due to pooling, a file found on N hosts in M locations during O backups
is only stored once - how much of the storage space is allocated to each host?
How much computing power do you want to put into finding that out? When doing
a new backup, do you want to disallow creation of a new reference to a file
already existing in the pool (i.e. requiring no new space), because the
storage space accounted to this host would exceed its quota? Do you want
quotas to apply before or after pooling and/or compression? Do you even really
want quotas, or are you just trying to prevent mistakes from filling up your
pool? (Actually, that might even be a neat feature request - abort a backup
if it exceeds $Conf{BackupMaxSize} (definable per host or even per share).)

Your usage case may (possibly) not have any content shared between hosts, but
the concept of BackupPC does, so any solution would have to work for the
general case.

> [...]
> Current solution are two seperate docker backuppc-instances, but that is
> administration overhead…

And you don't get the benefit of pooling between the pools.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problems Starting BackupPC

2017-05-30 Thread Holger Parplies
Hi,

I really don't know where to start on this one, and my reply is off-topic,
even if the original post wasn't, so sorry about that.

For the archives: don't "kill -9". Period.

"kill -9" means "I saw someone use this for a different problem, and I wanted
to try it, too. I don't really know what it does or why I am using it. I don't
really care whether things get better or worse, I'll just take my chances."

If you think you forgot your keys, what do you do? "kill -9" is the "break
down the door" variant. More sane approaches might be
- check your pockets to make sure you really don't have your keys with you
- knock on the door or ring the bell. Maybe someone is home.
- call a locksmith.

Of course, breaking down the door always works, whereas the other approaches
might not, so that's what you should start with, right?

I'll skip the details on why this was a bad idea in this instance.

Hope that helps someone.

Regards,
Holger

P.S.: Yes, it might ultimately turn out that you need to break down the door,
  but, in my experience, you usually don't break down a *random* door
  even then.

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to backup a laptop over internet

2017-05-30 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2017-05-30 14:38:46 -0500 [Re: [BackupPC-users] How to 
backup a laptop over internet]:
> On Tue, May 30, 2017 at 2:18 PM, Holger Parplies <wb...@parplies.de> wrote:
> >
> > You *can* use a VPN locally, too. That way, you'd be reachable via the same
> > address in both cases (plus a local one if you're on the local net; probably
> > faster for non-BackupPC traffic). Routing might be a little tricky.
> 
> If the VPN is configured as point-to-point with the other end on the
> backuppc server, routing would take care of itself via the subnet
> masks.

true enough. If that works for you, do it like that.

> [...] The routing is only complicated if the endpoint is on a router or some
> other device.

Unfortunately, that is probably the normal case, the point being remote
connections via the internet. You won't *usually* have your BackupPC server
on your firewall (or router, if you don't have a firewall), though there
might be instances where you do.

Well, you could port-forward the VPN traffic from your router/firewall to the
BackupPC server. If you're only using the VPN for BackupPC, that doesn't
sound unreasonable. If, however, you're using it for general access to your
home network, well, you're basically back to "running the BackupPC server
on your firewall", even if it might not look like it ...

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Unexplained Error Code 255 - BackupPC

2017-05-30 Thread Holger Parplies
Hi,

Akibu Flash wrote on 2017-05-29 05:27:04 + [Re: [BackupPC-users] 
Unexplained Error Code 255 - BackupPC]:
> [...]
> I am running backuppc version 4.41 on a Debian computer.

wow ... where did you get that version from?

> When I start a backup, it appears to properly connect to my Windows
> computer and then gives me an error message.  "Got fatal error during
> xfer (rsync error: error in rsync protocol data stream (code 12) at
> io.c(629) [Receiver=3.0.9.6])" [...] Please note that I am able to ssh
> and rsync into the Windows computer manually from the linux computer
> without problem.

Also without extraneous output? You seem to be using rsync over ssh, and
rsync is quite fussy about messages that it does not expect. As a rule of
thumb, to a *Linux* computer,

   % ssh clientcomputer true
   %

should immediately display the next prompt as indicated above, i.e. nothing
extra, not even blanks or newlines. For Windoze, your mileage may vary, but
it probably won't. The actual invocation used by BackupPC (or rather rsync_bpc)
might include extra options to ssh, e.g. -q -x -l someuser ... include those
when testing.
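
So the actual test might look more like this (an untested sketch - substitute
the exact options and user from your XferLOG):

   % ssh -q -x -l root clientcomputer true
   %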

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to backup a laptop over internet

2017-05-30 Thread Holger Parplies
Hi,

Xuo wrote on 2017-05-30 19:03:03 +0200 [Re: [BackupPC-users] How to backup a 
laptop over internet]:
> I'm using BackupPC 3.3.0 (got from the Web gui).
> Yes this feature would be really great (at least for me !!).

sorry, really busy, two quick remarks:
Can't be too difficult to backport "this feature" (context = array argument
to ClientNameAlias possible) to 3.3.0. Supposing I find the time before
someone else does, I'd look at it ...

> Thank you to all for your help. I'm still fighting with the vpn
> connection. I'll tell you if I succeed (one day) to backup through
> this f... vpn connection.
> vpn + nice feature => king of the world.

You *can* use a VPN locally, too. That way, you'd be reachable via the same
address in both cases (plus a local one if you're on the local net; probably
faster for non-BackupPC traffic). Routing might be a little tricky. You'd
probably need to NAT to the VPN gateway address for connections from the LAN
to your notebook (via the VPN address) - something along the lines of

  iptables -t nat -A POSTROUTING -o tun0 -d 1.2.3.4 -j SNAT --to-source 1.2.3.1

(replace tun0 with the VPN device name, 1.2.3.4 with the notebook VPN address
and 1.2.3.1 with the VPN gateway VPN address). You'll also need to prevent
routing to your local subnet via the VPN when you're on the local subnet
(which OpenVPN with "push route ..." would do, for instance).

Or you could DNAT connections to the VPN address to the local address when
the VPN isn't connected (that's a DNAT in PREROUTING *and* an SNAT in
POSTROUTING, because you need to catch the return packets).
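
A rough sketch of that variant (untested; 1.2.3.4 is the notebook's VPN
address as above, while 192.168.0.42 for its LAN address and 192.168.0.1 for
the gateway's LAN address are assumptions - these rules would only be loaded
while the VPN is down):

  iptables -t nat -A PREROUTING -d 1.2.3.4 -j DNAT --to-destination 192.168.0.42
  iptables -t nat -A POSTROUTING -d 192.168.0.42 -j SNAT --to-source 192.168.0.1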

Requires some networking skills and possibly some experimentation, but it
can definitely be done. The ClientNameAlias variant is easier to implement
without doubt, but the VPN variant is more generic, i.e. it doesn't require
BackupPC to be able to handle multiple addresses.

If you need help with routing or firewall rules, feel free to contact me
off-list (hopefully, my MTA will behave ...), and I'll try to respond soon.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backing up Windows 7 Pro, not enough permissions

2017-05-17 Thread Holger Parplies
Hi,

Michael Stowe wrote on 2017-05-15 09:58:08 -0500 [Re: [BackupPC-users] Backing 
up Windows 7 Pro, not enough permissions]:
> On 2017-05-15 05:20, Tapio Lehtonen wrote:
> > [...]
> > The share C is shared to network and user backuppc has full rights.
> > [...]
> 
> There are a few important things to understand here.  One is that not 
> all transport mechanisms are created equal when backing up Windows 
> systems.

what a nice way to say that :-).

I'm not a Windoze expert at all, but I vaguely remember you (Michael) stating
in the past that there is a difference between the default C$ share and an
explicitly created share ... assuming my memory is correct, might that be
the issue here? I.e. does anything change if you (Tapio) explicitly create
a share and use that (or use the C$ share if you were previously using
something different)?

> At any rate, these reasons are why I personally switched to a 
> combination of rsync and vshadow (to handle open files) and put together 
> a package to install the proper files on the client side.

If I were in the unlucky situation of having to deal with Windoze hosts,
that is what I would use. From what I have read on this list, it seems
to be the best choice.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Recommend add to manual

2017-05-15 Thread Holger Parplies
Hi,

Kenneth Porter wrote on 2017-05-14 18:37:53 -0700 [Re: [BackupPC-users] 
Recommend add to manual]:
> [...]
> What if you have TWO drives you want to mount to /var/lib/backuppc? With 
> BackupPC 4, it's not unreasonable to swap your backup media every week 
> to an offsite location.
> 
> Just create the skeleton directory structure in the external media and 
> run restorecon once it's mounted.
> 
> Another thing one can do is move /etc/BackupPC to the external media and 
> mount that directory over /etc/BackupPC so that your settings follow 
> your media.

I'm not sure I like the idea of mounting over parts of /etc, but that might
be a matter of taste. Why not symlink /etc/BackupPC to /var/lib/backuppc/etc
(or wherever your backup drive is mounted)? That way, a failing mount would
lead to an unresolvable symlink rather than an empty directory (or worse, a
directory containing something outdated, though, come to think of it, you
could populate /etc/BackupPC with something appropriate, assuming there *is*
anything appropriate in such a case; I suspect in the absence of your backup
drive you'd want BackupPC *not* to start).

All of that said, do you *really* want different configurations for your
different external media? Isn't the point to have an *identical* configuration
and swap around media in order to have an off-site copy?

> This makes it easier to do bare-metal restores.

Well, yes, but you can just as well rsync /etc/BackupPC to
/var/lib/backuppc/etc (or whatever) to keep an up-to-date copy on your
external media. For a bare-metal restore, you'll need to setup whatever
you need anyway, i.e. bind-mount /etc/BackupPC or create a symlink or
copy over the contents or whatever. You might have a script on your backup
media which does this for you. You might even have a working BackupPC
installation on your backup media. Your script could copy/symlink/bind-mount
that into your new system, supposing it can't be run directly from the
external disc. Just some ideas that spring to mind ...

> Just install 
> a minimal OS and BackupPC, then mount your backup drive and you've got 
> all your settings ready to do the restore.

Installing BackupPC from a distribution package is probably a better choice,
agreed, as you get dependency resolution (a web server, for instance). Call
me paranoid, but if I'd go to the trouble of keeping off-site backups, I'd
keep a self-contained BackupPC installation on my backup media. I can *still*
install BackupPC on the native OS and use that, but I'm less dependent on
external resources.

And, as I wrote, "mount your backup drive" is really at least two mounts,
which doesn't sound that much easier than one mount and one softlink. From
my experience, I'd say it's important to *document* what needs to be done,
and, personally, I'd document it in the form of a shell script containing
the needed commands.
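
Something along these lines would do (a sketch only - the device label,
mount point and paths are assumptions to adapt):

  #!/bin/sh
  # bring up BackupPC from the external backup media
  set -e
  mount /dev/disk/by-label/backuppc /var/lib/backuppc   # the backup drive
  ln -sfn /var/lib/backuppc/etc /etc/BackupPC           # config lives on the media
  /etc/init.d/backuppc start                            # or: systemctl start backuppc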

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Working on keeping this list on-topic

2017-05-15 Thread Holger Parplies
Robert Katz wrote on 2017-05-13 11:33:57 -0300 [[BackupPC-users] Working on 
install of backuppc in fedora 26]:
> running on Fedora 26 Congratulate me  :-)

I'll congratulate you as soon as you realize that this list is not meant for
*anything* concerning distribution packages, package management questions,
or linux questions. This is the *BackupPC* *upstream* users mailing list,
meant for questions relating to the BackupPC software as distributed by
git or tarball. Or at least it used to be. I'm at the point of unsubscribing
due to the large amount of noise here lately. This is not about mistakenly
asking one question here instead of at the appropriate place, it's about
repeatedly raising issues that are not even about BackupPC.

Whoever might be responsible, please kindly consider creating a backuppc-users
list at Fedora, Redhat or wherever, and announcing creation of the list here,
so whoever is interested can subscribe there, and we can point others toward
it when appropriate. Thank you.

Kind regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Nightly does not find some pool files, but they exist

2017-05-04 Thread Holger Parplies
Hi,

Dieter Fauth wrote on 2017-05-04 10:20:10 +0200 [[BackupPC-users] Nightly does 
not find some pool files, but they exist]:
> in the log I see many lines like these:
> [...]
> 2017-05-04 09:04:01  admin : BackupPC_refCountUpdate: missing pool file 
> 11d9a30eb21b173a1e073e7a86b83f2e count 16
> [...]
> The files itself exist though. This is what ls shows:
> ls /var/lib/backuppc/cpool/10/d8/11d9a30*
>   10/d8/11d9a30eb21b173a1e073e7a86b83f2e

I believe a file '11d9...' should be in 11/d9/, not in 10/d8/ (off-by-one in
both path components).

> Version: 4.1.1git
> Repo has been migrated to V4 (I hope all now, just checking right now).

A brief look at the code doesn't suggest the migration might be at fault. I
wonder how the pool files ended up in the wrong place, particularly since it
only seems to apply to some of the files ...

Is it ok for migration and refCountUpdate to run in parallel?

Regards,
Holger

P.S.: While "lists.sourceforge.net." (with trailing dot) is a perfectly
  valid DNS name, at least my MTA doesn't like e-mail addresses ending
  on a dot. Could you please use the canonical form we all use?

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4.1.1 and NFS shares

2017-04-24 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2017-04-21 18:39:59 -0500 [Re: [BackupPC-users] BackupPC 
4.1.1 and NFS shares]:
> On Fri, Apr 21, 2017 at 5:09 PM, Bedynek, Matthew J.  
> wrote:
> > With version 3 I am using Rsync instead of tar to backup a NFS share which
> > the backupPC host has direct access to. [...] with version 4, there have
> > been changes to rsync such that am forced to use tar for a local copy.

if that is really the case, I would consider it a bug. However, I would
suspect that you could set

$Conf {RsyncSshArgs} = [ '-e', '/usr/bin/sudo -u username -p' ];

(this is an untested hack ... I'm guessing rsync will append a hostname which
the sudo '-p' option will silently swallow) to get the equivalent of your
V3 settings:

> > [...]
> > $Conf{RsyncClientCmd} = 'sudo -u username $rsyncPath $argList+';
> > $Conf{RsyncClientRestoreCmd} = 'sudo -u username $rsyncPath $argList+';

If that doesn't work, you could use a script instead and modify the
arguments in any way you need to.
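
For example, you could point $Conf{RsyncSshArgs} = [ '-e',
'/usr/local/bin/rsync-local' ] at a small wrapper along these lines (an
untested sketch - the script name and the 'username' account are
assumptions):

  #!/bin/sh
  # /usr/local/bin/rsync-local - invoked by rsync_bpc as
  #   rsync-local <host> /usr/bin/rsync --server ...
  # Drop the host name and run the rsync command locally under the
  # desired user instead of going through ssh.
  shift
  exec /usr/bin/sudo -u username "$@"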

> > I believe the RsyncClientCmd and RsyncClientRestoreCmd are gone in V4.

Correct.

> > I did get Rsync to work with V4 but it seems to ssh to localhost which
> > consumes additional host resources.

Yes, and there might be other valid reasons not to want that (e.g. not running
sshd on the host).

> > Rsync isn't a big deal [...]

Well, I would think it is ... as you say yourself ...

> > [...] but am I correct in reading that Rsync might be better for
> > incremental backups in terms of handling deletions?

Yes. It handles them. tar doesn't. Period. And, more important, it handles
files not present in your reference backup (or modified since then) that can't
be caught by comparing timestamps (renamed, moved into the backup set,
extracted from an archive, included by changing in-/excludes, 'touch'ed to a
past date, ...). I wouldn't want to go back to tar any more than use SMB ...

> If you think about what rysnc is supposed to do, it doesn't make much
> sense to run both ends locally accessing data over NFS.

I tend to disagree. As far as I have understood Matthew's situation, rsync
is *supposed to* give more exact backups than tar, which it will do just
fine running both ends locally accessing data over NFS. And, I believe,
we're talking about his application of rsync here, not yours.

> For any file that is not skipped by the timestamp/length check in
> incrementals, you are going to read the entire file over NFS so rsync
> can compute the differences (where the usual point is to only send the
> differences over the network).

This is worth pointing out, but, again, there may be reasons to do this.
tar certainly won't do any better - it will also read the complete content
of any file not skipped over NFS. And the 'usual point' of running rsync
locally instead of tar is getting more exact incremental backups, not saving
bandwidth.

> Is there any way you can run rsync remotely against the NFS host instead?

This would save bandwidth, and it would spread some of the load over two
machines, which is either good or bad, depending on whether you want the
extra CPU load on your NFS server or not. During your backup window, this
is likely not an issue, but your mileage may vary. In any case, if you could
spare the bandwidth with BackupPC V3, there is no reason to get overly worried
now. You can try to tune your backup system to better performance or leave it
as it is.

Don't get me wrong - I'm not advising *against* changing to rsync over ssh to
the NFS server. I've been there myself. I've gone from (tar over NFS) ->
(rsync over NFS) -> (rsync over ssh). I'm just saying it doesn't seem to be
*essential*, as you didn't state any problems you had with BackupPC V3.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Excluding folders from backup

2017-04-12 Thread Holger Parplies
Hi,

Sorin Srbu wrote on 2017-04-10 06:14:07 + [Re: [BackupPC-users] Excluding 
folders from backup]:
> > From: Holger Parplies [mailto:wb...@parplies.de]
> > [...]
> > 
> > Almost. In fact, an exclude for "*" is applied to all shares *which don't
> > have a specific exclude*. So, if you used, for example,
> > 
> > $Conf {BackupFilesExclude} = {
> > '/home/sorin' => [
> > '/Videos'
> > ],
> > '*' => [
> > '/ownCloud'
> > ]
> > };
> > 
> > /home/sorin/Videos would be excluded, but /home/sorin/ownCloud would
> > *not*!
> 
> Hmm. This could be applied to all cache-folders I don't want backed up.
> With the info above, I'd need to add * and /cache, /Cache and perhaps
> /.cache and /.Cache to catch them all.
> 
> Am I thinking correctly now?

I'm not sure I understand you correctly. If you want to exclude those
directories everywhere, the exclude

$Conf {BackupFilesExclude} = {
'*' => [
'/cache',
'/Cache',
'/.cache',
'/.Cache',
],
};

would work, ***however***, if you have more keys, for example

'/home/sorin' => [
'/Videos',
'/ownCloud',
],

you would need to copy the cache-related excludes into that also, i.e.

'/home/sorin' => [
'/Videos',
'/ownCloud',
'/cache',
'/Cache',
'/.cache',
'/.Cache',
],

because the '*' excludes *don't apply* for shares with a specific exclude,
as I tried to explain above.

Hope that makes it clearer.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Excluding folders from backup

2017-04-07 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2017-04-07 09:17:29 -0400 [Re: [BackupPC-users] Excluding 
folders from backup]:
> On 4/7/2017 2:55 AM, Sorin Srbu wrote:
> >> From: Markus Koeberl [mailto:markus.koeb...@tugraz.at]
> >> On Friday 07 April 2017 07:29:30 Sorin Srbu wrote:
> >>> [...]
> >>> The field BackupFilesOnly is empty, as this would override anything in
> >>> BackupFilesExclude as per the documentation.

that is not true for rsync, only for smb (and possibly tar, though I don't  
think so). With rsync, you *can* use both BackupFilesOnly and 
BackupFilesExclude in combination.

> >> [...]
> >> It should look like this in your config file:
> >> $Conf{RsyncShareName} = [
> >>'/home/sorin'
> >> ];
> >> $Conf{BackupFilesExclude} = {
> >>'/home/sorin' => [
> >>  '/Videos',
> >>  '/ownCloud'
> >>]
> >> };
> > [...]
> > Doh! Now I get it! Thank you!
> > You first have to add a "root" and then the folders you want excluded.
> > Seeing it now, it's rather obvious.

It doesn't seem to be that intuitive ... this mistake has been made a lot.
Please note that, as Markus implied, the excludes need to be relative to the
share, i.e. "/Videos" and "/ownCloud" - whether you use the share name or "*"
as the key. An exclude of "/home/sorin/Videos" would match a file/directory
"/home/sorin/home/sorin/Videos" (which you likely don't have, so the exclude
would not have any effect).

> AND the "root" must either match one of your share names or * to match 
> all of your shares.

Almost. In fact, an exclude for "*" is applied to all shares *which don't
have a specific exclude*. So, if you used, for example,

$Conf {BackupFilesExclude} = {
'/home/sorin' => [
'/Videos'
],
'*' => [
'/ownCloud'
]
};

/home/sorin/Videos would be excluded, but /home/sorin/ownCloud would *not*!

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC v4 for Fedora / EPEL Update

2017-03-30 Thread Holger Parplies
Hi,

Richard Shaw wrote on 2017-03-30 07:19:11 -0500 [Re: [BackupPC-users] BackupPC 
v4 for Fedora / EPEL Update]:
> [...]
> > Error: Package: BackupPC-4.1.0-1.el7.centos.x86_64 (hobbes1069-BackupPC)
> >Requires: perl(File::RsyncP)

is this correct?

Just wondering ...

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Error ' getaddrinfo is not a valid Socket macro' after upgrading to 4.1

2017-03-29 Thread Holger Parplies
Hi,

John Barton wrote on 2017-03-29 00:30:30 + [Re: [BackupPC-users] Error ' 
getaddrinfo is not a valid Socket macro' after upgrading to 4.1]:
> [...]
> # perl -e 'use Socket; print $Socket::VERSION . "\n"'
> 1.82

for reference, I get the following for different Debian releases:

etch    1.78   (for the fun of it)
squeeze 1.82
wheezy  1.94
jessie  2.013

> From: Craig Barratt [mailto:cbarr...@users.sourceforge.net]
> [...]
> Try upgrading Socket.pm:
> 
> sudo cpan
> install Socket

I'm not sure I agree with that idea. You're mixing a package manager
installation with a manual upgrade here. That's not only a cause of headaches
in the BackupPC case. I believe Socket is part of the base Perl distribution
(package perl-base in Debian), so you're messing up an essential core package,
and the package might even get its revenge by silently reverting your changes
on package upgrade. Or you might get a broken system. Or it might work (by
putting the module somewhere else in the module path).

For Debian, at a quick glance, I find the package libsocket-getaddrinfo-perl,
which sounds promising. I haven't got time to go more into detail right now,
though.

> I'll add a version check on Socket to configure.pl.

Which version of Socket is required? Or rather, what is the exact requirement
- is it even a minimum version of Socket or rather the availability of a
specific function? I'm not sure if libsocket-getaddrinfo-perl adds this
functionality to the Socket class or provides an alternative in a different
namespace (or does something completely different) - I'll try to have a closer
look soon ...
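
In the meantime, a quick (untested) way of checking whether the installed
Socket can provide getaddrinfo at all would be

  perl -e 'use Socket qw(getaddrinfo); print "getaddrinfo available\n"'

which should either print the message or die with an import error on an
old Socket.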

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: After migration to V4, rsync backups not working

2017-03-16 Thread Holger Parplies
Hi,

after downloading 4.0.0 from sourceforge (which has apparently been hacked
and/or turned to the dark side), I need to correct myself:

Holger Parplies wrote on 2017-03-16 17:09:53 +0100 [Re: [BackupPC-users] Fwd:  
After migration to V4, rsync backups not working]:
> [...]
> If I read Craig's answer correctly, a typical setting would be
> 
>   $Conf {RsyncSshArgs} = '-l root';

actually, the default setting from config.pl is:

$Conf{RsyncSshArgs} = [
'-e', '$sshPath -l root',
];

which you'd use if you *used to* have

$Conf{RsyncClientCmd} = '$sshPath -l root $host $rsyncPath $argList+'

whereas something like the more complex

$Conf {RsyncClientCmd} = '$sshPath -l backupuser -i /var/lib/backuppc/.ssh/id_backuppc_rsa -q -x $host /usr/bin/sudo $rsyncPath $argList+';

would translate to

$Conf{RsyncSshArgs} = [
    '-e', '$sshPath -l backupuser -i /var/lib/backuppc/.ssh/id_backuppc_rsa -q -x',
];
$Conf{RsyncClientPath} = '/usr/bin/sudo /usr/bin/rsync';


This translation is fairly easy for a human, but coding it for a computer
is dreadful (and it gets worse if I find creative names for my ssh identities
like /.../id/usr/bin/rsync or something ;-).


[OT]
Speaking of "getting worse" - is it normal that the sourceforge web page is
completely littered with ads and forces you with a popup to accept cookie
settings which it takes "several minutes" to save? They don't seem to have
any visible effect, and keeping my browser busy without plausible reason is 
fishy in any case. So, I guess that's "bye sourceforge" ...
[/OT]

Regards,
Holger

---
git clone https://github.com/backuppc/backuppc

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fwd: After migration to V4, rsync backups not working

2017-03-16 Thread Holger Parplies
Hi,

ads wrote on 2017-03-16 08:58:20 + [[BackupPC-users] Fwd:  After migration 
to V4, rsync backups not working]:
> [...]
> I use Arch Linux, and installed (upgraded) backuppc using the provided
> package in pacman format. I already suspected configure.pl hadn't been run,
> because I had to manually adjust RsyncBackupPCPath (it was empty)
> and PoolV3Enabled (it was off).
> 
> I couldn't find a copy of configure.pl on my system. I guess the package
> gets expanded in some temporary location which is cleaned after install. I
> will have a more thorough search.

this is a misunderstanding. When installing the *upstream* package, you run
configure.pl. For a packaged version as you are using, the package's pre-/
postinstall/configure/whatever_may_apply scripts are responsible for doing
whatever needs to be done for an upgrade. The package maintainer may or may
not have used parts of (or ideas from) the upstream configure.pl for
implementing that.

The correct thing to do is:

> I think my choices now are:
> 
> 1) [...] I would also report the issue to the package maintainer.

and update the setting(s) yourself. (*)

Updating user configuration as in this specific case is in itself a problem.
BackupPC V3 gave you the flexibility to specify an arbitrary command which
ends up with a connection to a remote piece of software that speaks the
rsync protocol - however the individual BackupPC user might choose to
implement that. For V4, the mechanism is slightly different, as the command
is ultimately called by rsync (rsync_bpc, specifically), which, I believe,
makes different assumptions than BackupPC used to. So, it might not always
be possible to convert one form to the other without further manual steps.
For many common cases, it *is* possible, and I assume that is what
configure.pl attempts to do as a convenience (I haven't looked at the code).
Your package maintainer may have chosen not to attempt that, and he may have
valid reasons. Ultimately, you always have to check the results of the
conversion anyway and possibly correct them. Remember, this is one of the
settings where there is, upon *initial* installation of BackupPC, no generic
works-automatically-for-everyone default value. Even for the very common
case "rsync over ssh as root" you always need to set up passwordless ssh
access.

If I read Craig's answer correctly, a typical setting would be

$Conf {RsyncSshArgs} = '-l root';

but if you could quote the setting from your V3 configuration for
$Conf{RsyncClientCmd}, I'm sure we'll be able to figure out what you need
to set and whether you might need to change anything else. Hint: if you
can't find the configuration value, an appropriate extract from a V3 log
file would help, but my guess is that if it wasn't converted, it's still
in config.pl or the respective host.pl file.

Regards,
Holger

(*) Note: even if the maintainer decides to "fix" this issue, conversion
might only happen when upgrading from a 3.x version to the new package,
not from one 4.x version to the next.

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] After migration to V4, rsync backups not working

2017-03-15 Thread Holger Parplies
Hi,

ads wrote on 2017-03-15 11:46:39 + [[BackupPC-users] After migration to V4, 
rsync backups not working]:
> I have migrated from V3 to V4,

how did you do that? By running configure.pl?

> and now my rsync backups are not working. [...] The 3 'Permission denied'
> lines look as if they might be ssh permission problems.

I agree with that conclusion.

Craig recently wrote on [backuppc-devel]:
> The configure.pl script updates the relevant rsync settings in the main
> config.pl file.  In particular, it updates $Conf{RsyncArgs}
> and $Conf{RsyncRestoreArgs} with the 4.0 settings.
>
> $Conf{RsyncSshArgs} is a new 4.0 setting, and $Conf{RsyncClientCmd}
> and $Conf{RsyncClientRestoreCmd} are no longer used.  configure.pl tries to
> extract a sensible default setting for $Conf{RsyncSshArgs} from
> $Conf{RsyncClientCmd}.
>
> The potential problems are that there are per-client overrides for these
> settings (which is not checked by configure.pl), and the computed value for
> $Conf{RsyncSshArgs} isn't correct.

I'm guessing something went wrong. Either you have per-client overrides or
the value of RsyncSshArgs is computed incorrectly. What was your (V3)
RsyncClientCmd, what is your (V4) RsyncSshArgs?

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] ssh and Mac 10.12 Sierra

2016-12-02 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2016-11-30 09:03:01 +1100 [Re: [BackupPC-users] ssh and 
Mac 10.12 Sierra]:
> [...]
> > After permitting root log in the sshd_config file, [...] I could do
> > an ssh from the BPC server to the mac without password.

that is correct - if you are using the root user, root login needs to be
*allowed*. It can be forbidden altogether in /etc/ssh/sshd_config (on Linux
systems, path may vary on other systems), and that would prevent backups
from working. I have no idea how MacOS/X sets this up by default, but it's
definitely a thing to check.

> > [...] However, it quit quickly, with an "Unable to read 4 bytes error."
> However, when I set the client name alias to the ip address, it seems to
> have taken. It has been running a backup for 10 minutes, so hopefully that
> is it.

That is strange. The only things I could think of to explain this would be
either a messed up ~backuppc/.ssh/config or messed up DNS resolution,
resulting in BackupPC contacting the wrong server or using the wrong options
when you specify the host name, both of which sound unlikely.

> I would suggest that you revert the ssd_config so that you can benefit 
> from the improved security of rejecting password based root login. 

The thing to note here is that /etc/ssh/sshd_config (or wherever that may
be on MacOS/X) should probably contain

PermitRootLogin without-password

whereas

PermitRootLogin no

*will not work* (for you, that is; people using a different account combined
with 'sudo' can even use this setting).

PermitRootLogin yes

will work but may be less secure. If *you* don't *need* it, don't use it.
***BackupPC does not need or even use it in any case.***

There is a fourth variant which could also be used with BackupPC, but *it
would require additional setup*:

PermitRootLogin forced-commands-only

Since BackupPC may generate different argument lists to rsync, and ssh forced
commands only allow one fixed argument string, you would need a wrapper script
to accommodate for this case. If you don't understand what I'm talking about,
just ignore it and set PermitRootLogin to without-password. I'm just
mentioning it, because sshd_config(5) expressly says

 If this option is set to ``forced-commands-only'', root login
 with public key authentication will be allowed, but only if the
 command option has been specified (which may be useful for taking
 remote backups even if root login is normally not allowed).

This is correct, but, again, will require additional configuration.
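
A rough sketch of what that additional configuration might look like
(untested; the wrapper name is an assumption, and the pattern presumes
$Conf{RsyncClientPath} is /usr/bin/rsync): in ~root/.ssh/authorized_keys on
the client, prefix the BackupPC server's key with a forced command, and have
that command inspect SSH_ORIGINAL_COMMAND:

  # ~root/.ssh/authorized_keys (all on one line):
  command="/usr/local/bin/backuppc-ssh",no-pty,no-port-forwarding ssh-rsa AAAA... backuppc@server

  # /usr/local/bin/backuppc-ssh:
  #!/bin/sh
  # allow only the rsync server invocations BackupPC issues for backups
  # and restores; reject anything else
  case "$SSH_ORIGINAL_COMMAND" in
      "/usr/bin/rsync --server "*) exec /bin/sh -c "$SSH_ORIGINAL_COMMAND" ;;
      *) echo "command rejected: $SSH_ORIGINAL_COMMAND" >&2; exit 1 ;;
  esac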

To sum it up:

PermitRootLogin
yes .. ok, less secure
no ... not ok
without-password . ok, recommended setting
forced-commands-only . not ok without additional consideration/configuration

Hope that helps.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most 
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backup Running Forever

2016-11-02 Thread Holger Parplies
Hi,

Christian Völker wrote on 2016-11-02 19:04:58 +0100 [Re: [BackupPC-users] 
Backup Running Forever]:
> [...]
> > If you tail the transfer log:
> > /usr/share/backuppc/bin/BackupPC_zcat ~backuppc/pc//XferLOG.z|tail
> > what file does it stop on, and does it always stop there?
> I will restart an see if it stops on the same file. Up to now the
> command shows me:
> same 644   0/0 8618994 usr/share/cracklib/pw_dict.pwd
>   same 644   0/0  458640 usr/share/cracklib/pw_dict.pwi
>   create d 755   0/04096 usr/share/davical
>   create d 755   0/04096 usr/share/davical/dba
>   same 644   0/01635 
> usr/share/davical/dba/appuser_permissions.txt
>   same 644   0/02

as you can see, output to the XferLOG is buffered (and compressed), so the
file it is hanging on can be anything between the next file (after the one
whose name you can't see) to some thousands of files later. If you could get
the BackupPC_dump process to terminate and flush the buffer, that might help
you, but I don't think you can ...

My quick guess is attrib file corruption, apparently causing a
misunderstanding between Xfer::Rsync and rsync on the client, where each end
is expecting protocol data from the other. The most non-invasive thing to try
would seem to be setting

$Conf {PartialAgeMax} = 0;

and seeing if BackupPC really does ignore the partial on the next backup. If
that doesn't help, you might need to delete the partial backup.

Hope that helps.

Regards,
Holger

--
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support from Colfax.
Order your platform today. http://sdm.link/xeonphi
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC 4 questions

2016-10-27 Thread Holger Parplies
Hi,

Gerald Brandt wrote on 2016-10-25 11:43:22 -0500 [[BackupPC-users] BackupPC 4 
questions]:
> Can I run BackupPC 4 on the same machine, and at the same time, as I'm 
> running BackupPC 4?

obviously: yes :-).

You probably meant to ask whether it is possible to run BackupPC 3 and 4
concurrently. I can't see any reason why it shouldn't be. You will obviously
need to choose distinct values for $TopDir, $LogDir, and $ConfDir between both
instances (the latter two may or may not default to a value dependent on
the first). I don't really know much about version 4, but from BackupPC 3's
perspective, these seem to be the only resources that won't like shared write
access. If you've defined $Conf {ServerPort}, you'll obviously need distinct
values there, too. BackupPC 4 might name some of these settings differently
(though I doubt it), so you might need to figure out what they correspond to.

It probably goes without saying that you'll need to install to different
paths and maybe rename an init script. This may be a problem if you're
installing from distribution packages (in which case they should also conflict
with one another) - supposing there even are distribution packages for
BackupPC 4 yet.

I probably don't need to mention that running BackupPC 3 and 4 concurrently
on the same system may impose loads on the hardware that it can't handle
(meaning backups taking too long), but that doesn't seem to be relevant in
your case:

> I'd like to do some tests, and maintain my old system at the same time, 
> without buying new hardware.

So, try it out and tell us if you found a conflict that I missed. At least
I can't see any obvious reason not to try, providing you choose different
paths as mentioned above. If it makes you feel more secure, stop the BackupPC
3 daemon and remount your pool FS read-only for testing - at least for the
first backup. Obviously, you'd need to store your V4 pool on a different FS
then, but that's probably not a bad idea anyway.
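
(On a typical Linux system, that remount would be something like
"mount -o remount,ro /var/lib/backuppc" - the path being an assumption
matching the usual packaged $TopDir.)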

Of course, if you're really cautious, you could always run V4 in a VM ...

#include <std_disclaimer.h>   /* or rather */
use Disclaimer::Std;  # since we're talking Perl here

Hope that helps.

Regards,
Holger

--
The Command Line: Reinvented for Modern Developers
Did the resurgence of CLI tooling catch you by surprise?
Reconnect with the command line and become more productive. 
Learn the new .NET and ASP.NET CLI. Get your free copy!
http://sdm.link/telerik
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC hangs

2016-10-10 Thread Holger Parplies
Hi,

Markus Hirschmann wrote on 2016-10-10 17:54:45 +0200 [[BackupPC-users] BackupPC 
hangs]:
> 
> I backup a huge QNAS Storage with rsync over ssh and it always hangs 
> with 100% load. I tried to find the problem with lsof:
> 
> [...]
> BackupPC_ 1842 backuppc6r   REG  253,330938 1330743 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_185

185 ... wow ...

> BackupPC_ 1842 backuppc7r   REG  253,377320 419524 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_0
> BackupPC_ 1842 backuppc9r   REG  253,3   171056 419525 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_1
> BackupPC_ 1842 backuppc   11r   REG  253,372265 419526 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_2

[and so on]

You've got a rather long hash chain here (185 candidates for a pool match).
For the current file in question, BackupPC needs to compare the contents with
all candidates (or at least those that haven't been ruled out yet ... see
the missing numbers further down:

> BackupPC_ 1842 backuppc   27r   REG  253,3   100064 419572 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_18
> BackupPC_ 1842 backuppc   28r   REG  253,3   113544 419587 
> /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_33

). I'm guessing the file is rather large, so the comparison takes quite long.
Note also that BackupPC won't take all candidates into account at once, due
to the limit on concurrently open files, though I'm not sure what the strategy
is (it might mean the comparison needs to be repeated).

Hash collisions happen (mainly) for files with identical *length* and identical
first and last 128 KiB chunk within the first 1 MiB of data (if I remember
correctly; the details aren't important, so I won't check). Maybe you're
backing up a large database with changes just in the wrong places? Log files
would tend to grow, i.e. change length (and therefore hash), so they usually
won't cause trouble in form of hash chains.

Hint: 'ls -ali /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_0'
(or any other file in the chain) for a quick glance at (compressed) length and
number of links (an extremely large number of copies would also tend to make
hash chains longer than they need to be as well as make comparisons tedious -
your $Conf{HardLinkMax} isn't by chance set to a ridiculously small value, is
it?). You might also want to 'BackupPC_zcat !$ | wc -c' (please expand the
!$ ;-) for an exact uncompressed file size. The inode number is included in
the ls example, in case you feel like doing a 'find $TopDir/pc -inum ...' for
locating the file(s) in question.
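
Spelled out with the history shorthand expanded (paths as in the lsof output
above; the BackupPC_zcat location is an assumption matching a packaged
install), that would be:

  ls -ali /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_0
  /usr/share/backuppc/bin/BackupPC_zcat \
   /var/lib/backuppc/cpool/d/6/d/d6dfb50e7648054a9c2f83537f956c2c_0 | wc -c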

> [...] I should delete my pool and start new but I don't want to do that. :/

I'm not sure that would help much. If your backup strategy continues to backup
this content, you'll probably run into the same situation again in the future.
If you can and want to remove the file from (some) previous backups, search
the list for ... err ... something like BackupPC_delete - a script to safely
delete individual files from backups. BackupPC_nightly should take care of
cleaning up (i.e. chain renumbering), *BUT* *PLEASE don't run it manually*!

Of course, if you don't need to backup the file, simply excluding it would
also fix things, as the comparison wouldn't happen in the future.

Hope that helps.

Regards,
Holger

--
Check out the vibrant tech community on one of the world's most 
engaging tech sites, SlashDot.org! http://sdm.link/slashdot
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] access backups created by different server

2016-09-13 Thread Holger Parplies
Hi,

Michele Zarri wrote on 2016-09-08 22:20:32 +0100 [Re: [BackupPC-users] access 
backups created by different server]:
> > On 08/09/16 02:37, Michele Zarri wrote:
> >> [...]
> >> /usr/share/backuppc/bin/BackupPC_tarCreate: bad backup number 224 for
> >> host localhost
> >> [...]
> I found out that the issue was relatively trivial: the file "backups" in 
> the root localhost (/var/lib/backuppc/pc/localhost/backups) was 
> overwritten when backuppc was reinstalled so that it only had one entry.
> 
> I guess it could be possible to re-build the previous backups using the 
> file backupInfo within each backup but it is too much work.

really? As in

su - backuppc
/usr/share/backuppc/bin/BackupPC_fixupBackupSummary

? Admitted, you might need to delete the single entry from the backups file
first, but it will probably work equally well if you don't. Why is there an
entry anyway? Does your distribution ship BackupPC with an initial backup
for localhost? ;-)

> Instead I renamed the last directory with a full backup (in my case 224) 
> 0 and I was back in action.

Well, yes, but you lose your backup history, and you'll probably run into
problems when you try to do backups ...

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Moving /var/lib/BackupPC to a new disk fails with rsync and OOM

2016-09-06 Thread Holger Parplies
Hi,

Colin wrote on 2016-09-05 17:46:34 +0100 [Re: [BackupPC-users] Moving 
/var/lib/BackupPC to a new disk fails with rsync and OOM]:
> Thank you Les!
> I'll have a go tomorrow after backups with something similar as:
> dd if=/dev/sdb1 of=/dev/sdc1 bs=16M
> [...]
> On 5 September 2016 at 17:03, Les Mikesell  wrote:
> > On Mon, Sep 5, 2016 at 10:07 AM, Colin  wrote:
> > > [...]
> > > I currently have lots of data on a normal parition (no LVM) and when
> > trying
> > > to copying to a new disk, it always fail after a while:
> > > [...]
> > Your best shot would be to use dd to copy the partition to the new
> > disk, then grow the filesystem to fill the space using the filesystem
> > tools. [...]

if that (at the top ... would have been at the bottom without top-posting)
doesn't work for some reason or you want to change file system types, then
please get back to us. I have a script that handles it at the file level,
though not well tested. And dd will definitely be faster, so if that works
for you, it is certainly the better option.
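
For the record, after the dd copy the "grow" step would look roughly like
this for an ext3/ext4 filesystem (other filesystems have their own tools):

  e2fsck -f /dev/sdc1    # resize2fs wants a clean forced check first
  resize2fs /dev/sdc1    # grow the filesystem to fill the larger partition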

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] rsync exclusion list - apply to multiple shares

2016-08-31 Thread Holger Parplies
Hi,

cardiganimpatience wrote on 2016-08-19 06:24:24 -0700 [[BackupPC-users]  rsync 
exclusion list - apply to multiple shares]:
> [...]
> 
> $Conf{BackupFilesExclude} = {
>   '*' => [
> '*access_log*',
> '.apdisk',
> '*/.apdisk',
> '.cache'
>   ]

};

(supplied free of charge).

> Is there a method to make an exclusion list for all shares, even if
> individual shares have specific lists of their own?

That question doesn't make sense, but I presume you want a global exclusion
list that applies in addition to optional individual share lists. You can get
close to what you want, but you'll break the web configuration editor, or
rather, the web configuration editor will not correctly preserve your changes
if you use it on the host configuration file(s) in question.

You simply create a variable in the host (or even global) config file ...

my @common_excludes = ('*access_log*', '.apdisk', '*/.apdisk','.cache');

(I'm not sure what you are trying to achieve by listing '*/.apdisk' as well
as '.apdisk', though) and then referencing that multiple times:

$Conf {BackupFilesExclude} = {
'/home' => [ @common_excludes ],
'/var'  => [ @common_excludes, '/lib/mysql' ],
'/usr'  => [ @common_excludes ],
'/boot' => [ @common_excludes ],
'/data' => [ @common_excludes ],
};

or

$Conf {BackupFilesExclude} = {
'/var' => [ @common_excludes, '/lib/mysql' ],
'/example' => [ ], # no excludes here
'*'=> [ @common_excludes ],
};

> Would it be valid to list all shares in a common definition, and also break
> them out into individual lists like this:
> 
> $Conf{BackupFilesExclude} = {
>   '/home, /var, /usr, /boot, /data' => [
> '*access_log*',
> '.apdisk',
> '*/.apdisk',
> '.cache'
>   ]
> 
> $Conf{BackupFilesExclude} = {
>   '/var' => [
> '/lib/mysql'
> ]

Well, syntactically you are missing closing braces and a semicolon.
Semantically, you are assigning one value and then overwriting it with
another, just like in

$a = 'foo';
$a = 'bar';

For BackupPC, you are first creating an exclude list for a share with the
somewhat awkward name '/home, /var, /usr, /boot, /data' (which is "data"
within "boot, " within "usr, " within "var, " within "home, " within the
root directory), which is doubtlessly a valid path, just not one whose
name you'd like to quote to a shell.

So, no, that won't work for several reasons. Aside from that, it would also
break the web configuration editor.

> My list of files to exclude from every ShareName includes over 40 items
> so it's not trivial (or aesthetically pleasing) to append this list to
> each share definition.

Considering "trivial", I tend to disagree, at least with vi ;-). Consistently
*changing* the list might be a different matter. But I agree with your point.


As a side note, I believe you can prevent a host configuration file from
being edited (by the user) with the web configuration editor by setting

$Conf{CgiUserConfigEditEnable} = 0;

(inside the host configuration file). This might be advisable if you made
changes incompatible with the web configuration editor to prevent accidental
damage. Note, though, that this will probably not prevent an *admin* user
from clobbering the file with the web configuration editor.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Is this required?

2016-08-29 Thread Holger Parplies
Hi,

Mark Neidorff wrote on 2016-08-26 12:21:03 -0400 [[BackupPC-users] Is this 
required?]:
> I'm setting up a backuppc server using OpenSuse Leap 42.1.  I installed from 
> the OpenSuse repos, but in doing so, it found that:
> perl_Net_FTP_AutoReconnect-0.3.tar.gz was missing. [...]
> 
> Since I'm not going to use FTP, do I have to worry about installing this 
> missing perl module?  (I checked, it is not in the repos for OpenSuse Leap 
> 42.1)

no, I believe you don't, but I'd guess your packaging system is going to give
you a headache ...

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Difference between terms in logs backuppc

2016-08-14 Thread Holger Parplies
Hi,

alex8276 wrote on 2016-07-07 01:20:44 -0700 [[BackupPC-users]  Difference 
between terms in logs backuppc]:
> Can you explain the difference between : pool, same in the backup's log ?

yes.

> pool 644   400/401   13372 vendor/PhpExcel/xx.php
> same 644   400/4013415 app/config/config.yml

"same" means that rsync determined that the file is unchanged since the
reference backup so no data was transferred. "pool" means that BackupPC
found a file with identical content in the pool and thus created a link
to the pool file instead of independently storing the transmitted data
in a new file (i.e. rsync saw a change and transferred data).

> Also, sometimes I have this error : 
> "Read EOF: Connexion ré-initialisée par le correspondant
> Tried again: got 0 bytes

That seems to translate to "connection reset by peer" meaning the TCP
connection was dropped by the remote end. Either rsync closed the
connection for whatever reason, the host running rsync rebooted, or a
firewall inbetween timed out the connection, usually because no data
was exchanged for some amount of time (e.g. when comparing large files
or building a file list).

Hint: you probably have an incorrect language setting somewhere which
*may* cause problems elsewhere, because BackupPC doesn't understand the
error messages of an external program, though I very much doubt that is
the issue here.

> Can't write 4 bytes to socket
> finish: removing in-process file xx/xx/xx.jpg
> 
> What is the problem ? The file xx.jpg or an other file not mentioned ?

Difficult to say. Is it always the same file? Is it a large file? Does
it change regularly? How often do you get the error? Is there possibly
something external related to it (e.g. DSL provider hangup, bandwidth
used by other processes, server reboot, ...)?

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BacupPC -How not to start backup when lock file is present ?

2016-06-29 Thread Holger Parplies
Hi,

Johan Ehnberg wrote on 2016-06-27 16:03:42 +0300 [Re: [BackupPC-users] BacupPC 
-How not to start backup when lock file is present ?]:
> Hi,
> 
> You can do it the other way around:

... by top-posting?

> run the database dump with 
> $Conf{DumpPreUserCmd}.
> [...]
> On 06/27/2016 03:54 PM, phil123456 wrote:
> > hello,
> >
> > I am exporting my database before backup
> > but the time it takes may vary, and I want things to be automatic
> > so I create a lock file when export starts and remove it when it is finished
> >
> > I would like backup pc to start the backup when everything is done
> >
> > is there a wy to achieve this ?

Yes.

You can also use DumpPreUserCmd together with UserCmdCheckStatus to check
for the existence of the lock file and abort the backup attempt if it exists.
This might have advantages by not keeping BackupPC busy for a long time while
creating the database export (other backups could run during this time), and
you would avoid rebuilding the database export every time in case the backup
fails (and is retried on the next wakeup).
You could even abuse PingCmd for that (not sure which is better).
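
A minimal sketch of that variant (untested; the lock file path is made up, and
I'm assuming the export runs on the client itself):

$Conf{DumpPreUserCmd}     = '$sshPath -q -x -l root $host test ! -e /var/lock/db_export.lock';
$Conf{UserCmdCheckStatus} = 1;

With UserCmdCheckStatus set, a non-zero exit status of the pre-dump command
(i.e. the lock file still exists) makes BackupPC abort the backup attempt, and
it will simply be retried on a later wakeup.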

Also, it might not be easy to start the command from the BackupPC daemon (e.g.
if it needs to run on a different machine or with different user/group or is
needed for something besides the backup), while it might be trivial to check
for the lock file (NFS or local stat).

One thing to keep in mind, though, is that BackupPC's backup might run
*before* the database export is created. You could circumvent that by using
"lock files" including the date, creating them *after* the export and
requiring their existence rather than non-existence. Perhaps the created
dump file itself could serve this purpose ...

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_trashClean (?) freezes system

2016-06-29 Thread Holger Parplies
Hi,

Witold Arndt wrote on 2016-06-27 08:53:40 +0200 [Re: [BackupPC-users] 
BackupPC_trashClean (?) freezes system]:
> On Sonntag, 26. Juni 2016 22:21:45 CEST Adam Goryachev wrote:
> > Can you login to the server after it has "hung"? I'm assuming yes since
> > you can try to kill the process.
> > I'd strongly suggest checking the various logs, starting with dmesg
> > Also, check the physical "host" to see what it thinks the status of the
> > VM is.
> 
> Jep, I can login to the vm and everything besides backuppc is running and 
> instantly responsive. Other processes which use the disk have no problem 
> reading or wrting and iotop shows no hangups.

are they using the same file system? Can you show us a 'df -T' and perhaps
'df -i' of your BackupPC VM?

> > Almost every time I've tried to kill a process and seen it turn into a
> > zombie, it's because the process was sleeping / waiting for disk IO, and
> > it won't die until after the OS decides the disk IO has failed or succeeded.
> 
> This is consistent with the 85% waiting usage, but there are no errors any 
> log 
> (dmesg, syslog, backuppc/log/*) whatsoever.
> 
> I'm a bit lost since there were no configuration changes (besides removal and 
> addition of backup clients) and this setup has been running since 04/2014.

I would suspect file system corruption. Is the trash directory empty when the
freeze occurs? In general, I'd suggest an 'fsck', but with a BackupPC pool
that might not work. You *could* try moving the trash directory out of the way
and recreating it with the same permissions. This would avoid accessing a
problematic file within it, supposing this is causing the problems. Though,
normally, I'd expect something in the system log files in case of a file
system panic. Well, 'df -T' might tell us more.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Full period

2016-06-21 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2016-06-21 20:17:43 +1000 [Re: [BackupPC-users] Full 
period]:
> On 21/06/2016 17:28, absolutely_f...@libero.it wrote:
> > [...]
> > I noticed that I am unable to complete backups for each server in one 
> > day; probably it takes 3 days to finish.
> > [...]
> >
> > Is there a way to configure BackupPC to run Full backup every X 
> > backups (and NOT every X days)?

have you understood the concept of full and incremental backups? Is your
requirement really "I need every Xth backup to be exact, no matter how far
in time they are apart, and need to trade exactness for speed in between"?

> > I think that probably I'll save some bandwidth, am I wrong?

Yes, quite the opposite, if you are using rsync(d) (and if not, you probably
should be).

> Why does the backup take so long? Is it bandwidth between the two 
> machines or something else?

You have a good point here. From the data given - none - we can't even begin
to guess whether the long backup times are a flaw in design or in
implementation. Very, very likely, backups *shouldn't* be taking 3 days to
finish. I don't know what you are backing up, but apparently it is not
exactly close to static. A three day duration backup won't give you anything
even close to a consistent snapshot of your backup set (unless you are using
file system snapshots, obviously). Are the backup times plausible for the
amount of data? Should you be splitting your backups between multiple
BackupPC servers? I'd suggest fixing the issue rather than trying to
awkwardly work around it.

> [...]
> Other than that, I don't think there is any way to automatically do more 
> incrementals, short of increasing the FullPeriod to a higher value

I should hope there isn't ;-). Increasing the FullPeriod makes sense, because
it corresponds to a *common* requirement: I need a precise reference point
every N days (e.g. once a month). Of course, *in BackupPC*, a full backup is
also a backup I can keep for a long time. I can't do that with a BackupPC
incremental.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC - take a day off?

2016-06-01 Thread Holger Parplies
Hi,

bpb21 wrote on 2016-05-18 10:22:22 -0700 [[BackupPC-users]  BackupPC - take a 
day off?]:
> [someone else wrote:]
> > Maybe it's possible to create a blackout period for that day? Not sure if
> > one should specify saturday or sunday as weekdays. 
> > [...]
> 
> Thanks!  I think this is it.  I completely overlooked this for all the
> other settings.  Sounds promising.

well, yes, that is the solution that comes to mind. I'm not sure the problem
you are solving is the important one, though.

What exactly are you trying to scan for viruses, your backups or the rest
of the system?

Assuming you are scanning the backups:
Does your anti-virus software handle compression (i.e. the non-standard
format used by BackupPC)? If not, then your anti-virus software will run
just fine, consume lots of CPU time and keep your hard disks busy, but it
will be pretty much guaranteed to never find anything (which might be what
you are actually hoping for, but it would also be very much meaningless).
If it does work, remember that you want to scan only the pool, not the pc/
directory structure. Since BackupPC conveniently only stores the data once,
you only need to scan it once.

Assuming you are *not* scanning the backups:
Be sure to exclude the backups from the AV scan - it would take forever
and probably be quite pointless.

As a side note, the nightly job you seem to be referring to -
BackupPC_nightly - is not "pooling and compression", it's more like
"housekeeping and statistics". Compression is done during backups.
Pooling is done during backups for existing files (i.e. where an
identical file is known to BackupPC) and during BackupPC_link for
new files.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Win 10 issue with NT_STATUS_BAD_NETWORK_NAME

2016-05-31 Thread Holger Parplies
Hi,

Jeff Boyce wrote on 2016-05-31 13:40:20 -0700 [Re: [BackupPC-users] Win 10 
issue with NT_STATUS_BAD_NETWORK_NAME]:
> On 5/26/2016 1:51 PM, Jeff Boyce wrote:
> [..]
> Ok I solved all my backup issues, so this post will give a summary of 
> what worked for me and hopefully help others.
> [...]
> I was unable to connect to the default C$ share for conducting the 
> backup.  [...]
> 
> So in *almost* all of my desktop boxes that are backing up to BackupPC 
> they are using these same settings and the Users share.  I am not sure 
> what is going on internally in the Windows networking that doesn't allow 
> smbclient to connect to the default C$ share, but I am sure that 
> something within Windows is stopping it.

stupid question: is it possible to connect to an administrative share with
user (i.e. non-admin) credentials?

> [...] The exception is my own desktop system; BackupPC is using the
> default C$ share without any problem.

Which user are you using to connect in this case? Is that user a member of
a privileged group?

> [...] If anyone has an idea, please enlighten me.

I can't enlighten you, because I don't know the answer. I just have an idea
which question might help :). Hope it does.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] error crontab

2016-05-31 Thread Holger Parplies
Hi,

Bowie Bailey wrote on 2016-05-31 11:34:43 -0400 [Re: [BackupPC-users] error 
crontab]:
> On 5/31/2016 9:53 AM, Carl Wilhelm Soderstrom wrote:
> > Put your entries in /etc/crontab. They will be easier to find there.
> >
> > [...]
> 
> Alternately, you can use the per-user crontab correctly.  Use 'crontab 
> -e backuppc' to create/edit the crontab for backuppc and then enter in 
> your command without the username.
> 
> 20 20 * * * /usr/share/backuppc/bin/BackupPC_serverMesg backup 
> example-2.example.com example-2.example.com backuppc 1

all of that said, you need to get the BackupPC_serverMesg command right, as
indicated here, which you did not in your original post. The syntax for
requesting a backup is

BackupPC_serverMesg backup <hostIP> <host> <user> <backupType>

where <host> *needs to* match a host you defined in BackupPC (i.e. if you
named your host "foo", you can't use "foo.domain.com", "cname-for-foo", or
"1.2.3.4" here) and <hostIP> should probably be identical to <host> unless
you know what you are doing (hint: you don't; using an IP or something else
that resolves to the same IP might occasionally work, but a quick glance at
the code suggests that this may break BackupPC internal synchronisation).
<user> is informational only, if I remember correctly, and <backupType> is one
of "auto", -1, "doIncr", 0, "doFull", 1 with the text corresponding to the
following number.

If you want to request three backups, then that's three individual
BackupPC_serverMesg commands.
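
For example (host names made up; run as the backuppc user):

/usr/share/backuppc/bin/BackupPC_serverMesg backup host1 host1 backuppc 1
/usr/share/backuppc/bin/BackupPC_serverMesg backup host2 host2 backuppc 1
/usr/share/backuppc/bin/BackupPC_serverMesg backup host3 host3 backuppc 1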

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BPC 4 very slow

2016-01-27 Thread Holger Parplies
Gandalf Corvotempesta wrote on 2016-01-27 11:28:24 +0100 [Re: [BackupPC-users] 
BPC 4 very slow]:
> [basically the same thing over and over again]

In case it's not obvious, adjust the settings so they specify to do what you
want. Before that, preferably, understand the settings. Use the suitable tool
for your purpose. You keep repeating that you get no advantage from pooling,
because all your files constantly change, and thus you must keep only minimal
history. But you want to restore things you are not keeping. Strange.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BPC 4 very slow

2016-01-14 Thread Holger Parplies
Hi,

much has been said in this thread, but I believe this has not:
You do realize that BackupPC 4.x is alpha software, right? You're lucky if it
works at all, you should be surprised if it performs well, and you're not
using it in a production environment anyway, right? Ok, the reality might not
be quite as bad, but you need to be aware that version 4 is not a newer
version 3 with some improvements, it is basically a re-write with a whole new
concept behind it.

If you are looking for something stable, I'd recommend BackupPC 3.x, which
you'll undoubtedly also get more and better support for here, if you even
need it. And yes, it *will* be slower than native rsync.

For me (and, I believe, for many others), version 3 (and even version 2!) works
so well that there is no reason to move to version 4.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Best to Move a Large Pool to Different FS?

2015-11-11 Thread Holger Parplies
Hi,

Christian Völker wrote on 2015-11-11 21:10:56 +0100 [Re: [BackupPC-users] Best 
to Move a Large Pool to Different FS?]:
> [...]
> What does the BackupPC_tarPCCopy do in detail? Would it be faster?

it computes the pool file names from file contents. If I'm not mistaken, this
involves reading and decompressing each complete file. It won't be fast in any
case. "Faster"? Maybe, maybe not. I wish I'd find some free time, because the
task is really not *that* difficult, it's just not something general purpose
software can handle well.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Command line restore

2015-11-09 Thread Holger Parplies
Hi,

absolutely_f...@libero.it wrote on 2015-11-09 16:24:29 +0100 [[BackupPC-users] 
Command line restore]:
> Hi,I am using BackupPC 3.2.1-4 (official Debian package).Is there a way to
> launch a restore process through command line?

yes.

> I mean, I don't want to create a tar / zip archive. I need to restore files
> to original server.Thank you very much

Considering the web server doesn't do the restore itself but rather instructs
the BackupPC server to do so, there must be a way.

Regards,
Holger

P.S.: In case you were wondering *how* to launch a restore via command line,
  it's a bit complicated. The command as such is something like

  BackupPC_serverMesg restore <hostIP> <host> <user> <reqFileName>

  where <hostIP> should probably be the IP address of <host> (but will
  apparently be looked up(*) if it isn't - presuming some piece of code
  doesn't complain first), <user> is only for logging purposes, if I
  remember correctly, and <reqFileName> might be somewhat difficult
  to construct. Technically speaking, it isn't, it's just a Data::Dumper
  dump of a Perl hash containing the relevant information. So, what is
  the relevant information? Let's do it the easy way (for both you and
  me): initiate a restore from the web interface (and make sure to either
  direct it somewhere it won't do any harm or make (absolutely) sure you
  actually can't restore; better yet, do both), and after it has completed
  or failed, look in $TopDir/pc/<host>/ for a file named RestoreInfo.n (and
  unless that turns out to be RestoreInfo.0, you can skip that part and
  just look at one of the preexisting files straightaway). Figure out what
  the individual hash entries mean and fill the values to match your needs.
  You can probably get away with setting 'num' => -1 to always refer to
  the latest backup and leaving 'reqTime' as it is (even though that will,
  strictly speaking, be incorrect), but I'd test that, just to be sure.
  Hint: for a full restore, I get "fileList => [ '/' ]" (among other hash
  entries).

  As always, you need to run BackupPC_serverMesg as the backuppc user.
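
  To give a purely hypothetical example of what the final invocation might
  look like - host name, IP and request file name are made up, and the
  request file would presumably have to live next to the RestoreInfo.n files,
  i.e. in $TopDir/pc/myhost/, in the format described above:

  BackupPC_serverMesg restore 192.168.1.10 myhost backuppc restoreReq.manual.0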

  Hope that helps.

  (*) As I read the code, <host> will be looked up if <hostIP> doesn't look
  like an IP. You might expect <hostIP> to be looked up, but that
  apparently is not the case.

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Fatal error (bad version): /sbin/nologin: invalid option

2015-10-28 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2015-10-28 12:56:16 -0500 [Re: [BackupPC-users] Fatal 
error (bad version): /sbin/nologin: invalid option]:
> On Wed, Oct 28, 2015 at 12:04 PM, fujisan
>  wrote:
> > [...]
> > So this nologin erro message does not seem to come from user backuppc.
> 
> Could it be coming from the remote side via ssh?

yes, it obviously is, and no, I don't understand that either. I went through
the BackupPC code involved yesterday, and I'm convinced the error message can
*only* come from the remote side, because no shell on the BackupPC server is
involved.

> I don't see why that would be different when starting from the server
> command line vs the web interface but maybe there is something different
> about the ssh key that is used.

Well, we're guessing here. I don't see how a different ssh key would change
the remote user's shell (and please note that '-l root' was explicitly
specified in the RsyncClientCmd), but the one difference I do see is that

# su -s /bin/bash backuppc

(from memory) doesn't force a "login environment", so you *might* be using
root's .ssh/config in one case and backuppc's in the other, which could, in
theory, map your client host name to a different machine with a

Host foo
  HostName bar

type directive (or to a different ssh port - e.g. for an sshd inside a chroot
- in much the same way). That is the only reason that I can think of why
root@whateverthenamewas would have /sbin/nologin as shell in one case and a
working shell in the other. Well, maybe PAM could also do that somehow.

I'm convinced it will turn out to be something completely different, though,
but I would try using

su -s /bin/bash - backuppc

(note the '-' (or '-l') parameter) and look whether that *stops* the command
line backup from working.

> In any case, the issue seems to be something about your shell setup rather
> than in backuppc itself.

Definitely.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backuppc and acls / xattrs

2015-10-19 Thread Holger Parplies
Hi,

Nicola Ferrari (#554252) wrote on 2015-10-17 10:26:51 +0200 [Re: 
[BackupPC-users] Backuppc and acls / xattrs]:
> [...]
> Sorry but I cant understand. -A -X options are already activated in the 
> rsync command in my xfer configuration in backuppc.
> Do I have to upgrade to BackupPC 4.0 ?

as far as I know, ACLs and extended attributes are neither supported in
BackupPC 3.x nor in 4.x. Support in 4.x will probably be added in a future
version, in 3.x that is highly unlikely. That is not a bug as such, it is
simply a feature that has not been implemented (yet).

> [...]
> So the one that comes installed with backuppc in debian, isn't the 
> "original" rsync executable?

BackupPC 3.x does not use native rsync at all on the server side, it uses the
Perl module File::RsyncP. This only supports certain options. In 4.x this was
changed to a patched version of rsync written in C (possibly stripped down to
not support options that BackupPC can't handle).

The general problem with options like -A and -X is that the additional
information needs to be *stored* somewhere in the backup - e.g. in an attrib
file for 3.x or the database for 4.x. That doesn't just magically happen but
rather needs to be carefully designed and implemented, taking into
consideration what other similar meta-data there might be now or in the
future. As there are workarounds available (dump the information into a file
in a pre-backup script), Craig's main focus is probably on stable backup
functionality first and "features" later on.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackuPC with Monit

2015-10-12 Thread Holger Parplies
Hi,

madams9 wrote on 2015-10-11 15:28:21 -0700 [[BackupPC-users]  BackuPC with 
Monit]:
> [...]
> At any rate, I have a BackupPC installation on a server in a closet. It is
> connected to the network wirelessly. I understand the implications but
> remember - this is my personal network in my home and there is nothing
> mission critical for that backup server.

well, "mission critical" is certainly one point (*), but the other one - the
one you are trying to solve - is ease of administration. An ethernet connection
is, in general, simply rock solid. Ask anyone operating Linux servers. Reboots
are for kernel security updates or hardware upgrades mostly, occasionally for
resetting misbehaving hardware, but my feeling tells me ethernet cards are
near the bottom of the list. I've got a server at hand with an uptime of 881
days - evidently missing security updates. What happened 881 days ago? The
hardware was replaced. Network problems since then: zero.

I have seen flaky hardware losing an ethernet connection (i.e. unreachable
from the network though locally responsive), but that is a rare exception
in my experience.

The second point is that I recall numerous reports on this list of people
having problems with backups of *clients* connected via WLAN. There is really
not much difference where between the server and the client the wireless
connection is, but experience seems to show that backups will (occasionally)
fail and/or be (much) slower than expected.
It's fine if you can detect problems with your wireless network and restart
services accordingly, but that will almost definitely interrupt a backup in
progress, and if you're unlucky, there might be corner cases where this can
go undetected, leading to inconsistent backups (as in "the one full backup
that will be kept for ages contains only the first few files").

All of that said, there might be valid reasons for sticking to the wireless
connection, and my points may be non-issues for you.

> [...]
Carl Wilhelm Soderstrom wrote on 2015-10-12 12:00:12 -0400 [Re: 
[BackupPC-users] BackuPC with Monit]:
> On 10/11 03:28 , madams9 wrote:
> > ## BACKUP PC
> > check process backuppc with pidfile "/var/log/backuppc/BackupPC.pid"
> > start program "/usr/sbin/service backuppc start"
> > stop program "/usr/sbin/service backuppc stop"
> > if failed host adamsmdk port 16073 proto http then restart # change if not 
> > listening locally
> > ###
> > 
> > Obviously, that last line is trash because the backuppc socket process
> > starts a different port everytime it's initiated.

Does it? If I read the code correctly, BackupPC uses a Unix domain socket and
only opens an additional TCP socket if a port is specified in the config file,
to which it then binds. So you either have no TCP socket at all, or you have
one bound to a fixed port. Of course, your packaged version of BackupPC may
vary (though I'm not sure why it should).

> > Aside from that, I have no idea how monit is supposed to test for the
> > presence of backuppc. 
> 
> 
> Hmm, the stock monitrc file provides the following suggestion among others:
> #if failed url http://user:passw...@www.foo.bar:8080/?querystring
> #   and content == 'action="j_security_check"'
> 
> So perhaps monit could log into the BackupPC host and check one of bpc's
> output pages for certain content.

The problem with that is that it checks for a running web server, not a
running BackupPC daemon (well, unless the requested web page in turn tries to
contact the server, in which case you'd have to parse the result out of the
web page). If the web server is, in fact, down, you would keep restarting
the BackupPC daemon, which won't help.
I'd rather use something like

sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg status

and check the return code. You might need to adjust the path and redirect
output to /dev/null (and, of course, set up sudo to allow that without
password). And please test that before depending on it :).
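
As a rough sketch (untested; paths and file name are made up), a tiny wrapper
like

#!/bin/sh
# exit non-zero if the BackupPC daemon does not answer the status request
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_serverMesg status >/dev/null 2>&1

saved as, say, /usr/local/bin/check_backuppc could then be wired into monit
with a "check program" stanza ("if status != 0 then ...") instead of the port
check.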

Have you actually observed problems with the BackupPC daemon that *don't* show
up immediately on startup? The usual (startup) failures that spring to mind
would seem to require manual intervention rather than a simple restart via the
init script.

Regards,
Holger

(*) One more note on "mission critical": it might not seem so now, but if you
were to, one day, find out that that one file you would have needed to
restore just isn't in your backups in a sufficiently recent version, you
*would* ask yourself why you went to the cost and trouble of doing regular
automatic backups in the first place.

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Slow transfer via rsync?

2015-09-28 Thread Holger Parplies
Hi,

Christian Völker wrote on 2015-09-28 14:55:35 +0200 [Re: [BackupPC-users] Slow 
transfer via rsync?]:
> [...]
> Why is it still so slow?

it's been mentioned before, though not explicitly. What is the RTT over the
VPN (i.e. between BackupPC server and client host)?

How many files are we talking about? Is the firewall blocking (any) ICMP
traffic between the two endpoints? A share "/srv" suggests this probably isn't
Windoze ;-). Is there anything special about this client host (kernel version,
distribution release, ...)? Can you use tcpdump to look at the characteristics
of the connection (e.g. is there a constant (slow) exchange, or are there
hangs, timeouts, retransmissions)?

Is your OpenVPN connection over TCP or UDP? Tunnelling TCP over TCP might give
such problems, depending on the characteristics of the underlying network
(e.g. significant packet loss/congestion).


All of that said, the *first* backup of a host is usually special in that all
data not already in the pool (which might in this case mean all data) needs to
be compressed. The *second* backup may add rsync checksums. All of this might
show up as CPU usage, I/O waits, etc., but it might also just add some more
latency to the network latency. If your first backup has been running for 24
hours, that is merely twice the time a quick best case estimate gives. I'd be
annoyed, but I'd give it some more time before starting to worry. Granted, the
failing backups on the old host *are* something to worry about. ClientTimeout?
Also, the network usage you are seeing is way too low and seems to indicate
that you will reach the point when worrying is warranted ;-).
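
If the failing backups on the old host do turn out to be timeouts, the knob to
look at would be something like this (value in seconds; just an example, not a
recommendation):

$Conf{ClientTimeout} = 72000;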

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Slow transfer via rsync?

2015-09-28 Thread Holger Parplies
Hi,

Christian Völker wrote on 2015-09-28 22:02:26 +0200 [Re: [BackupPC-users] Slow 
transfer via rsync?]:
> >> Why is it still so slow?
> [...]
> traceroute to 192.168.1.3 (192.168.1.3), 30 hops max, 60 byte packets
>  1  10.12.0.1 (10.12.0.1)  344.955 ms  355.805 ms  355.818 ms
>  2  infra2 (192.168.1.3)  359.722 ms  359.737 ms  364.983 ms

are you serious? Where do you get over 1/3 second delay from? I'd expect about
a tenth of that (just tested over a UDP OpenVPN link). This is where I'd look.
It obviously *is* latency related. What do you get from a traceroute from one
VPN endpoint to the other (not through the VPN)?

Oh, is this traceroute in parallel to the running rsync exchange? Not that it
should make much difference, considering the low bandwidth usage ...

> [...]
> >  Is the firewall blocking (any) ICMP traffic between the two endpoints? 
> No. As you can see in the above commands it appears everything if fine.

No, I don't really see that. I see that ICMP echo request/reply are working,
not that PMTU discovery is working (but maybe I missed it).

> > Can you use tcpdump to look at the characteristics
> > of the connection (e.g. is there a constant (slow) exchange, or are there
> > hangs, timeouts, retransmissions)?
> See above- there is an exchange but I am unsure how to read the output
> of tcpdump in detail.

It wasn't about the detail, it was about the timing. The timestamps seem to
indicate a rhythm corresponding to the RTT with something like 1340 bytes of
[raw IP] data sent every 300ms - faster at the end (because more is sent
before the ACK arrives). The fragment you quoted seemed to show data transfer
only from "infra2" to "bu" (and ACKs without data in the other direction), so
- at this point - it doesn't seem to be the rsync protocol exchange limiting
the speed. It also doesn't *appear* to be the TCP window size [yet], because
the last three packets from infra2 to bu are sent without waiting for the ACK
whereas the first ones apparently aren't.
With an RTT of around about 333 ms, you would need a window size of 213KB for
a theoretical throughput of 5 Mbit/s (assuming data is only transfered in one
direction without waiting for rsync protocol responses from the other side).
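
That window size is just the bandwidth-delay product: 5 Mbit/s * ~0.34 s is
about 1.7 Mbit, or roughly 213 KByte that have to be "in flight" before the
first ACK comes back.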

Please avoid line wrapping in future tcpdump quotes.

> [...]
> UDP for OpenVPN:
> [...]
> tls-client

The server side could have more options that affect speed ...

> > [...] the *first* backup of a host is usually special in that all data
> > not already in the pool (which might in this case mean all data) needs to
> > be compressed. 
> Prior to be compressed it should have been transferred. Does BackuPC
> compress "on-the-fly" or after all data is transferred?  However- CPU
> cycles are close to zero (see above).

On the fly. The bandwidth of the data stream is not high enough to make
compression use significant CPU resources, but it *will* take *some* time,
which *might* add another few (tens of) ms to an RTT if the File::RsyncP
implementation compresses data before sending a protocol reply the remote
end is waiting for. I don't know the protocol exchange well enough to know
whether this will have a measurable effect (if any).

> > [...] I'd give it some more time before starting to worry.
> I wouldn't worry if I at least one of the parameters (IO, CPU, memory,
> network) would show some usage. But none of them really does.

It's about latency, not about saturation.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Can`t Change {TopDir} Location . .

2015-09-22 Thread Holger Parplies
Hi,

tdevoye wrote on 2015-09-16 05:16:34 -0700 [[BackupPC-users]  Can`t Change 
{TopDir} Location . .]:
> [...]
> [root@]# /etc/init.d/backuppc restart

you probably mean 'service backuppc restart' ...

> Shutting down BackupPC:[  OK  ]
> Starting BackupPC: [  OK  ]
> BackupPC: WARNING:  Your BackupPC $Conf{TopDir} is not listed in the locate
> BackupPC: database configuration's PRUNEPATHS.  This may cause all of your
> BackupPC: backed up files to be indexed!

This message is not from any init script distributed with BackupPC. You're
talking about 3.3.1, since you don't mention any version, and that is the
default these days. Just in case you were wondering what you were referring
to.

> I have just solve this problem in few steps.

Glad to hear that you solved your problem, but I'm not sure where anyone else
comes into the picture.

> Tommy DEVOYE
> Network Admin
> 
> +--
> |This was sent by to...@devoye.fr via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--

Network admin. Backup Central. No comment.

If you ever find out what Linux(?) distribution you're using, please forward
your message to the relevant BTS (not sure if it's actually a bug, but they
can close it if it isn't).

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Forbidden You don't have permission to access /BackupPC on this server.

2015-09-22 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2015-09-22 14:30:34 -0500 [Re: [BackupPC-users] Forbidden 
You don't have permission to access /BackupPC on this server.]:
> On Tue, Sep 22, 2015 at 2:20 PM, Brad Alexander  wrote:
> > [...]
> > I don't know that that is the case, since I have been running it on
> > debian/sid since before apache 2.4 came out:
> > [...]
> 
> I didn't mean there was a problem running 2.4, just that if you put
> 'Require Local'
> which I believe was in the other config posted, clients other than
> from the same host as the server will be denied access.
>   http://httpd.apache.org/docs/2.4/mod/mod_authz_host.html

or, more to the point, it seems to be a bug in the Centos 7 BackupPC package,
which provides an incorrect Apache configuration. It's neither an (upstream)
BackupPC issue nor an Apache problem/incompatibility. It might not even be a
Centos packaging error, if Centos default policy were to limit access to
localhost, but then again, it would be for both Apache 2.2 and 2.4, wouldn't
it?

Access policy really is a topic the individual admin will need to *think
about* and *set up correctly* anyway, unless you're on some sort of home
network ...

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Forbidden You don't have permission to access /BackupPC on this server.

2015-09-22 Thread Holger Parplies
Hi,

Stefan Peter wrote on 2015-09-21 20:20:44 +0200 [Re: [BackupPC-users] Forbidden 
You don't have permission to access /BackupPC on this server.]:
> Dear Bob of Donelson Trophy
> 
> Please find my comments interspersed below:
> (this is the stanza I use for noobs)

[this is worth resending]

> On 21.09.2015 14:03, Bob of Donelson Trophy wrote:
> > First, my only comment on top posting.
> > 
> > [...]
> > I, like you, have to live with top posts occasionally, sorry.

Are you asking for our help, or are we asking for your vague descriptions of
off-topic problems (Apache misconfigurations by packaged versions of BackupPC)?
You see, *we* don't really *have* to live with top posts.

> You are not supposed to bottom post, either. The idea behind the whole
> eMail rule is to have a dialogue, to allow a reader who is subscribed to
> a couple of mailing lists to be able to follow the conversation without
> digging through a 2000+ line email you have to read *bottom up* if
> she/he wants to understand the context.

In particular, you are supposed to omit parts of the quoted mail no longer
necessary for understanding the dialogue at the current point. As a rule of
thumb, that is at least everything after the end of your reply.

> > We mailing list normal users, although annoyed with top posting, can
> > learn to follow the combination of top posts with bottom posts and keep
> > up with answers.

We normal mailing list posters can also learn to move the cursor around as Les
has already suggested. It's not hard, even in vi. It's a matter of courtesy,
really. Could I be bothered to put in some work to make things easier for the
people whose help I am asking for?

> One could, but I have other emails to read, my boss does not pay me for
> reading eMails and I don't have the time to dig through the mess top
> posters produce. Result: Your eMail most probably will be ignored by a
> majority of the participants of a tech related mailing list. Not what
> you aim at, I suppose.

I can't speak for the majority, but I can confirm this for myself. I have
recently read quite some mails from you (Bob of Donelson Trophy) with problems
which were partly quite obvious and obviously somewhere else than you were
looking. I didn't feel inclined to respond to any of them, and I have noticed
that many others also have not, who I would expect could answer your questions.

In this particular thread, Les has pointed out the key misunderstanding (even
twice, in the mean time):
this is *not* an OS permission issue but rather an Apache authorization issue.
To fix this, you need to understand (a bit of) Apache configuration, not
switch off selinux, worry about Unix group membership, downgrade any software
or anything like that.

You can, of course, do whatever you like, but by disregarding advice and
instead working around the problem as you could have done right away without
bothering us, you are not exactly encouraging anyone to help you in the
future.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Digest::MD5 vs md5sum

2015-09-15 Thread Holger Parplies
Hi,

gregrwm wrote on 2015-09-15 14:08:20 -0500 [[BackupPC-users] Digest::MD5 vs 
md5sum]:
> the following commands demonstrate that either Digest::MD5 and gnu md5sum
> are not compatible, or that i haven't got the backuppc md5 formula quite
> right.  can anyone set me straight?

probably not, considering your obfuscated bash code. Where did you get the
idea to 4b $s? You might try 'echo -n'-ing it instead. Not that I'd tried it or
would feel inclined to do so. Actually, I have to admit that I just *did* try
it.

Hope that helps. Not sure what you are actually trying to do, though.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] add rsync -A -X and get 'Got fatal error during xfer (fileListReceive failed)'

2015-06-26 Thread Holger Parplies
Hi,

Robert Wooden wrote on 2015-06-26 11:45:05 + [Re: [BackupPC-users] add 
rsync -A -X and get 'Got fatal error during xfer (fileListReceive failed)']:
> I also posted this on the rsync mailing list . . . more soon.

and also here on the list. Sorry for not replying yet.

This was:
> - Original Message -
> From: Bob of Donelson Trophy b...@donelsontrophy.net
> Sent: Thursday, June 25, 2015 6:56:52 AM
> Subject: [BackupPC-users] add rsync -A -X and get 'Got fatal error during
> xfer (fileListReceive failed)'
>
> I am trying to add acl file permissions and attr to rsync in RsyncArgs and
> RsyncRestoreArgs and have discovered that their addition is not that easy.
> When backups are run the log files contains: Got fatal error during xfer
> (fileListReceive failed). Removing the -A and the -X allows the backup
> to proceed without errors. Is there another 'switch' that needs to be added?
> Anyone familiar with this addition?

Simple answer: it won't work.

BackupPC (version 3) uses the Perl module File::RsyncP on the server side to
do its part of the communication. It does this for the simple reason that it
stores file data and metadata differently than the native file system does:
file data may be optionally compressed and is pooled to save large amounts of
storage space. File metadata needs to be stored independently because it may
differ across instances of one pooled file. Adding arbitrary new metadata
won't just magically work. Someone would need to change the format of BackupPC
attrib files to allow storing the new (meta-)data as well as implement
extracting the data from what the remote system sends. This means the rsync
XferMethod would need to handle multiple combinations of -A and -X switches
(including both not set). All other XferMethods might need to be adjusted,
regardless of whether they are even capable of transferring this information.
All of that could be done, but I don't see anyone doing it any time soon, if
at all.

I believe some people use a DumpPreUserCmd (or DumpPreShareCmd) to dump ACLs
and extended attributes to a file which is then included in the backup.
Restoring a subset of the backup is probably not easy in this case, though,
and might require some manual work.
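
Such a pre-backup script, run on the client, could be as simple as this
untested sketch (share path and dump file names are made up):

#!/bin/sh
# dump ACLs and extended attributes into files that the backup will pick up
getfacl -R -p /srv > /srv/.acl-dump 2>/dev/null
getfattr -R -d -m - /srv > /srv/.xattr-dump 2>/dev/null

After a restore you would replay them with 'setfacl --restore=/srv/.acl-dump'
and 'setfattr --restore=/srv/.xattr-dump'.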

I'm not sure whether BackupPC v4 supports ACLs and extended attributes.

Hope that helps.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Make a Inc to Full

2015-06-04 Thread Holger Parplies
Hi,

Christian Völker wrote on 2015-06-04 18:34:29 +0200 [[BackupPC-users] Make a 
Inc to Full]:
> [...]
> But now I created a VM where I took a lot of snapshots and the VM was
> backed up by my script and BackupPC. Fine so far- but all snapshots got
> removed after backup.
>
> Now I have my snapshot in ab Incr backupc. As I want to keep the
> snapshot at least for a year: is there a way to upgrade this incremental
> backup to be a full one? Or to prevent BackupPC removing this single
> incremental one?

no, but you can restore the files from the incremental backup and back them up
again in the next full ...

I don't think there is a good way to ensure that files that only exist for a
short time will be present in your backups for a long time. You'd need to time
their existence to coincide with a full backup that will be kept for a long
time. When using exponential backup expiry, that may prove challenging.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Backups Schedule

2015-06-04 Thread Holger Parplies
Hi,

Demon wrote on 2015-06-04 01:29:46 -0700 [[BackupPC-users]  Backups Schedule]:
> [a lot of things]

is this a question or just random noise not applying to BackupPC?

> +--
> |This was sent by demon_brads...@yahoo.com via Backup Central.
> |Forward SPAM to ab...@backupcentral.com.
> +--

This probably answers my question.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Archive Failing

2015-06-04 Thread Holger Parplies
Hi,

backu...@kosowsky.org wrote on 2015-06-02 20:03:04 -0400 [Re: [BackupPC-users] 
BackupPC Archive Failing]:
> Mark Campbell wrote at about 10:53:04 -0700 on Tuesday, June 2, 2015:
> > [...]
> > Firstly, what is the difference between running nightly manually vs it
> > being run by the daemon?  It performs the same tasks either way, no?  So,
> > shouldn't the inverse be true as well?  In other words, if it's not safe
> > to run nightly manually, wouldn't it also be unsafe being run by the
> > daemon?  What makes it safe for the daemon to run it?
>
> Because there is a (very small) chance of collisions if BackupPC_link and
> BackupPC_nightly run concurrently...

I don't want to imagine what could happen if BackupPC decided to start a
second instance of BackupPC_nightly (for the same part of the pool).

> BackupPC is beautifully constructed so that almost all operations are
> atomic and non-interfering.
> However, an error could occur if BackupPC_nightly deletes or chain
> renumbers a pool file that BackupPC_link is simultaneously trying to link
> to.
>
> The daemon makes sure that BackupPC_nightly won't run if BackupPC_link
> is running. Conversely, the daemon holds off starting new
> BackupPC_link processes while BackupPC_nightly is running.

I would just like to add that this is dependent on the version of BackupPC. In
earlier versions, the conflict used to be between BackupPC_nightly and
BackupPC_dump (and, possibly, BackupPC_link). In future versions it might be
yet different. The point being that the daemon is programmed to do the right
thing for the version of BackupPC it belongs to. As Jeffrey said:

> BackupPC_nightly was never meant to be run by users.

So it may not always be clearly documented when it would be safe to do so,
just like it is not clearly documented how you should manipulate attrib files
or the on-disk metadata of your favorite file system. You just shouldn't.

> > Secondly, if it's always been a terrible idea to do so, how did the idea
> > get started to run it manually in the first place, and why is it so
> > pervasive?
>
> People do a lot of stupid things pervasively. Like pulling the power
> cord on a PC before shutting it down gracefully...

You mean, they shut it down gracefully *after* having pulled the plug? :-)

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC Archive Failing

2015-06-02 Thread Holger Parplies
Hi,

Random wrote on 2015-05-20 05:55:49 -0700 [[BackupPC-users]  BackupPC Archive 
Failing]:
> [...] an archive started failing. (I run it manually twice a month). I've
> tried to run it twice and it's failed each time with the same error.
> [...]
> Archive failed (Error: /usr/share/backuppc/bin/BackupPC_tarCreate, compress
> or split failed)
> [...]
> The archive runs part way, before failing, so I can see two backup files
> show up on the drive, but not the other 4. Individually, the all 6 machines
> are backing up as usual, except a 2 millisecond timeout issue

that's actually 2 seconds, right?

> that happens periodically. (An issue I recently discovered but don't believe
> is related to my current problem).

It might be. Did the failure occur in the last backup of one of the machines
you are trying to create an archive for? Is it always one machine, or does the
timeout occur for all of the machines? Periodically means what exactly? Once
per backup? Once per month? Occasionally?

Does the *archive* always fail on the same machine? Probably so, since you get
archives for two machines. Have you tried to start the archive a third time,
preferably for different backups of the target machines? Just above the error
message, there should be a line reading "Executing: [details of the command]".
Try running the command quoted manually *as the backuppc user* to see if you
get a more verbose diagnostic.

 What could be going wrong? 

Destination file system full? Hardware error? Corruption of metadata within
the BackupPC pool (as in: attrib file corrupt or data file corrupt in a way
that it will fail to decompress)? Something else less obvious?

Random wrote on 2015-05-22 05:18:23 -0700 [[BackupPC-users]  BackupPC Archive 
Failing]:
 Anyone?

Your messages didn't reach the mailing list until 2015-06-01, so it was hard
for anyone to answer before that. Probably a problem with Backup Central.

Regards,
Holger

--
___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Issue with multiple restores

2015-05-23 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2015-05-22 11:53:10 -0500 [Re: [BackupPC-users] Issue 
with multiple restores]:
 On Fri, May 22, 2015 at 11:17 AM, Andy Taylor andy.tay...@sift.com wrote:
  Thank you for the advice Les. I compiled rsync 2.6.9 and tried a restore
  with it. It seems to hang now once it's got past the last file of the
  restore - I've run strace against the rsync processes and I can see it close
  the last file and then just hang, with this sort of output:
 
  [pid 15625] select(4, [3], [], NULL, {53, 660732} <unfinished ...>
  [pid 15624] select(7, [3 6], [], NULL, NULL <unfinished ...>
  [pid 15662] select(1, [0], [], NULL, {6, 915513}) = 0 (Timeout)
  [pid 15662] select(1, [0], [], NULL, {60, 0} <unfinished ...>
  [pid 15625] <... select resumed> )  = 0 (Timeout)
  [pid 15625] select(4, [3], [], NULL, {60, 0} <unfinished ...>
  [pid 15662] <... select resumed> )  = 0 (Timeout)
 
  Do you know what that might indicate about the restore?

perhaps a corrupt attrib file, as in: one end of the connection is expecting
[much] more data than the other end sends. This would seem to fit in with your
error message from newer rsync versions as well as the fact that it only
happens for one host. Have you tried forcing a full backup for that host?

 [...]
 What's the big picture here?

Whenever I revisit this thread, I immediately think "BackupPC fuse module?",
but I haven't used it myself yet, so I can't give you any detail. But it might
well be something to look into - providing it fits into the big picture you
haven't given ;-).

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-23 Thread Holger Parplies
Hi,

Timothy J Massey wrote on 2015-05-22 20:40:52 -0400 [Re: [BackupPC-users] 
BackupPC_tarCreate with only certain types of?files]:
 Les Mikesell lesmikes...@gmail.com wrote on 05/22/2015 04:24:56 PM:
 
   What am I missing?  How do I get BackupPC_tarCreate to create a 
   tar file that contains all PDF's stored in that path?
   [...]
  
  Can't help with BackupPC_tarCreate's wildcard concepts

I can: there are none. BackupPC_tarCreate gets a list of path names to include
in the tar. Each of these is looked up verbatim and included (with
substructure if it happens to be a directory).

 The problem is not that I couldn't figure out how to get the PDF's at all,
 but how I could avoid restoring 500GB of data for the 500*MB* I actually
 need!  :)

It should be *fairly* simple to patch BackupPC_tarCreate for the simple case
you need. A more general case would add an option and include correct handling
of hardlinks and symlinks. In sub TarWriteFile, after the calculation of
$tarPath and before the big file-type "if" (line 455 in the 3.3.0 source),
I'd add

return
  if $hdr->{type} != BPC_FTYPE_DIR and $tarPath !~ /\.pdf$/i;

(omit the "i" modifier if you really only mean lowercase "pdf"). Note that
this is once again completely untested ;-).

Hope that helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_tarCreate with only certain types of files

2015-05-23 Thread Holger Parplies
Hi,

Timothy J Massey wrote on 2015-05-23 08:40:21 -0400 [Re: [BackupPC-users] 
BackupPC_tarCreate with only certain types of?files]:
 [...]
 In this instance, I was able to come up with enough free space to be able
 to do the complete restore and then grab the data I needed from that.

for the archives: you don't strictly *need* the free space. You can pipe the
output of BackupPC_tarCreate directly into a 'tar x' and tell tar to only
extract files named '*.pdf', something like

BackupPC_tarCreate -h host -n 123 -s /share /path \
| tar xf - --wildcards '*.pdf' '*.PDF'

(just to demonstrate that you can specify more than one glob). This can be
problematic, in a way: if you get something wrong about the tar command line,
you will likely find out that the file(s) you were expecting were not
extracted, and you will need to repeat the whole lengthy process, whereas
with a complete restore, you can simply navigate the result and select
whatever you need.

This seems to be another good case for using the fuse module. You can navigate
the backup view (or run a 'find', 'rsync', ...) at a relatively low cost and
only need to read/decompress the file data you actually act on - and you don't
need any intermediate storage space.
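
As a very rough sketch of what that could look like (assuming the
user-contributed fuse module is installed as backuppcfs.pl and presents the
backup view under a mount point of your choice - the exact invocation and
the layout of the mounted tree may differ from this):

mkdir -p /mnt/backuppc /tmp/pdf-restore
backuppcfs.pl /mnt/backuppc                    # hypothetical mount of the backup view
find /mnt/backuppc/host/123/share/path -iname '*.pdf' \
    -exec cp --parents {} /tmp/pdf-restore/ \;
fusermount -u /mnt/backuppc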

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] manual start backup on dhcp laptop

2015-05-20 Thread Holger Parplies
Hi,

zdravko wrote on 2015-05-20 12:56:29 -0700 [[BackupPC-users]  manual start 
backup on dhcp laptop]:
 Obviously it is both.

what is it? Both of what?

Network administration can be quite complicated and is beyond the scope of
this *mailing list*. From what I remember from this poor imitation of a mail
thread (I know why I hate Backup Central), I'm not surprised. It's perfectly
possible (though usually not sensible) to masquerade outgoing connections (as
seen by the VPN endpoint) from a routeable IP while passing through incoming
connections unaltered. In a Linux firewall, you'd have to use '-m state --state
NEW' in the SNAT rule in order to match outgoing traffic but not the return
packets for incoming traffic (though when I think about it, I think that's
implicit, i.e. the nat table is only considered for new connections anyway).

I can't tell you how to switch off masquerading on some random network
appliance, just that you probably want to switch it off (presuming you don't
need it for something else ... really hard to guess from here).

As for the matter of DHCP, there are valid uses for non-static IPs, and there
are cases where assigning static IPs (with DHCP, if you are so inclined) makes
more sense. If you use non-static IPs, you will need some method of resolving
host names. That could be DNS or WINS. Frankly, my knowledge of BackupPC's
handling of 'DHCP' hosts is somewhat limited, and I don't much care about the
topic, because I would avoid using it if at all possible (I *can* read config
file comments, though). But that is your choice, as is the setup of your
network.

I can tell you this much: if your client host had a static IP (whether or not
assigned by DHCP), things would be simple.

What is your $Conf {DHCPAddressRanges} set to? Does your host have the 'dhcp'
flag set to 0 or 1 in the hosts file?
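
For reference, a sketch of the two places involved (values are placeholders,
not a recommendation):

# hosts file entry for a DHCP host (dhcp flag = 1):
#   laptop01   1   jamie
# and the address range BackupPC will probe for such hosts:
$Conf{DHCPAddressRanges} = [
    { ipAddrBase => '192.168.1', first => 100, last => 150 },
];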

 Go figure. :)

That's the whole point. We can't.

 Seems that this IP forwarding is one layer to many.

Seems you haven't got a deep enough understanding of your network topology.
Hint: without the *forwarding*, you wouldn't be able to connect across the
VPN.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC fails to start in daemon mode on Ubuntu 14.04

2015-05-12 Thread Holger Parplies
Hi,

(many people wrote many things at many times, but I won't quote them due to
top posting - sorry) ...

while this seems to be an Ubuntu packaging problem, strictly, so really the
wrong list, I do wonder if the relevant log file contains anything ...

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Exclude by file size

2015-05-11 Thread Holger Parplies
Hi,

Marios Zindilis wrote on 2015-05-11 19:02:36 +0300 [Re: [BackupPC-users] 
Exclude by file size]:
 You can -most probably- do that with the --max-size option of rsync,

I would tend to agree. "Most probably" means that the BackupPC side of the
transfer uses the Perl module File::RsyncP rather than native rsync, and this
module doesn't implement all valid rsync options. From what the man page says,
the --max-size option would actually need to be implemented by File::RsyncP
(rather than the remote native rsync), so you'd need to test and see if it is.
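
If you want to try it, a minimal sketch for a 3.x setup would be to add the
option to the rsync argument list and watch whether large files are actually
skipped (the 100MB value is just an example, and File::RsyncP may simply
reject or ignore the option - that is precisely what the test would tell you):

$Conf{RsyncArgs} = [
    # ... keep your existing rsync arguments here ...
    '--max-size=100000000',
];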

Of course, if that doesn't work, you always have the option of excluding the
files you are having problems with by name (BackupFilesExclude). I realize
that this doesn't automatically adapt to new large files appearing, which will
cause the same problem all over again, but it would be a way to get your
backup running again. The question to think about is: do you really want
backups of these files, or don't you? A size limit is not an answer to this
question, it's a workaround for an unsuited network link.

This seems to be another instance of a common problem: how do I handle backup
volume that [initially] won't complete over the limited bandwidth of the
network link? The part you'd need to solve yourself in any case would be to
get copies of the files in question to somewhere on your BackupPC server -
assuming you want them in your backup. Most likely solutions would be
sneakernet or some manual invocations of 'rsync -z ...'.
I'll try to find some time to work on integrating those copies into an
existing backup, so the next backup could use them as reference and avoid the
network transfer or limit it to the changes in the files.

Hope that helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Manual delete script

2015-05-07 Thread Holger Parplies
Hi,

Gerald Brandt wrote on 2015-05-07 12:54:51 -0500 [Re: [BackupPC-users] Manual 
delete script]:
 Sorry.  I'm trying to delete specific backups.  Some older fulls that I 
 don't need anymore.  If I recall, the script took the computer name and 
 the backup number as arguments, and deleted the files and updated 
 anything else it needed to.

I believe 'rm' will get you most of the way. It sort of takes the computer
name and backup number as arguments, too:

rm -r $TopDir/pc/hostname/num

It doesn't check dependencies, though, so it will mess up any incremental
backups still depending on the backup you are removing. It will also leave the
line for the backup in the backups file, so BackupPC might still show the
backup in the web interface (I'm not sure whether it checks for the directory
or not). This is easy to fix in a text editor. It also won't remove the
corresponding backup log file. Again: rm.

If I remember correctly, the script you were referring to went to some trouble
to handle backup dependencies correctly, as well as delete individual *files*
from one or more backups, including fixing the attrib files, which is far more
difficult. It sounds as if you don't need all of that.

 It didn't run nightly though.

I wonder why everyone is so obsessed about running BackupPC_nightly *now* (for
every possible value of *now*). BackupPC runs it anyway. Each day. Or
rather, each night. Running it *now* rather than in a few hours doesn't make
your hard disk(s) weigh less, consume less power, produce less noise and heat
in the time between, because the file system is less full.

I see two reasons for wanting to run it out of schedule, and I'll omit both of
them here. Chances are, if you *think* you need to run BackupPC_nightly, you
really need more disk space.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC_fixLinks.pl

2015-04-22 Thread Holger Parplies
Hi,

gregrwm wrote on 2015-04-22 11:48:16 -0500 [[BackupPC-users] 
BackupPC_fixLinks.pl]:
 i've used BackupPC_fixLinks.pl in the past with success, on RHEL.  i'm
 trying to use it again, this time on ubuntu:
 
  # sudo -ubackuppc -H ./BackupPC_fixLinks.pl -f -q
  Use of qw(...) as parentheses is deprecated at
  /usr/share/backuppc/lib/BackupPC/Storage/Text.pm line 302.
  Use of qw(...) as parentheses is deprecated at
  /usr/share/backuppc/lib/BackupPC/Lib.pm line 1425.
  String found where operator expected at ./BackupPC_fixLinks.pl line 243,
  near warnerr Can't read : $File::Find::name\n
  (Do you need to predeclare warnerr?)

I believe the problem is in line 49:

49 use BackupPC::jLib;

Though there is no error message stating that jLib.pm was not found, this
would explain all of your error messages, both the non-predeclared warnerr and
the global variables. It would also be consistent with the fact that it does
not work on Ubuntu: you probably didn't copy jLib.pm (or perhaps an incorrect
version of it) to /usr/share/backuppc/lib/BackupPC ...
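
A quick way to check that (paths as used above) is to look for the file and
try loading the module by hand; if the one-liner dies, jLib.pm is missing from
that location or not loadable there:

ls -l /usr/share/backuppc/lib/BackupPC/jLib.pm
perl -I/usr/share/backuppc/lib -MBackupPC::jLib -e 'print "jLib.pm loads\n"'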

On the bright side, this reminds me of why there is no hLib.pm and there never
will be (at least not by me): I believe in self-contained scripts without
dependencies on some version of some non-standard library - almost impossible
to track. Yes, there will be code duplication. If you want to avoid that, roll
everything into one script which uses $0 to decide what to do ;-). Or
distribute everything as .deb packages (libbackuppc-jlib-perl and
backuppc-fixlinks perhaps?) and let rpm-based users figure out how to use
alien (or something equivalent) to install them :).

 any possible clues about an updated or ported version somewhere?

Yes, I've got a 0.3 version here dated something like 2010-12-10, but I can't
make much sense of the diff to your [apparently modified?] 0.2 without closer
investigation. I'll send it to you off-list (along with the advice to never
trust something someone sends you off-list without taking a close look). Note,
though, that that version has the same (implied) dependency on jLib.pm.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to manage disk space?

2015-04-15 Thread Holger Parplies
Hi,

Kris Lou wrote on 2015-04-15 12:57:54 -0700 [Re: [BackupPC-users] How to manage 
disk space?]:
 On Wed, Apr 15, 2015 at 12:00 PM, Dave Sill de5-backu...@sws5.ornl.gov
 wrote:
 
  A corollary would be: how do I know that the space BackupPC is using
  doesn't include a bunch of cruft like files from systems that have been
  removed from BackupPC, or file systems that have been removed, ...
 
 Somebody might have a script to check this,

I doubt that, because it seems to be impossible to exactly define what the
script should look for :-).

If you change a backup definition to no longer include part of the files it
used to include, existing backups will still include those files, and that is
how it should be. In some cases you may wish to remove those files from
previous backups (because they were erroneously included), in others, you may
simply not need to back them up in the future (e.g. they were previously
created by hand, and now they're generated from data included in the backup).
There is no automatic way to decide this. You can always delete files you do
not need, but you could not undo the effect of files you would have needed
being automatically purged from the backup. Just imagine *accidentally*
removing files from your backup definition. If that would immediately mangle
your backup history, you would undoubtedly immediately switch to another
backup tool :-).

Yes, it would be possible (but complicated) to check if existing backups match
the current backup definition and alert you to differences, but it seems like
a *lot* of work without much gain. If you find out that your backups include
something they shouldn't, you should really change the backup definition *and*
remove the extraneous files (or decide that they won't do any harm until the
backups expire). While BackupPC does not natively support changing existing
backups, I believe there are user contributed scripts to do such things,
probably written by Jeffrey ;-).

As for hosts that have been removed, that is really easy to check:
ls -l $topdir/pc

If there are directories not corresponding to existing hosts, you can 'simply'
remove them - if you don't want to wait, move them to $topdir/trash and
BackupPC will take care of it for you. You won't immediately get back much
space, because all files are linked to the pool. BackupPC_nightly will proceed
to delete the pool files not referenced by other backups (which might take
several nights, depending on your configuration).
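
A sketch of how to spot such leftover directories (TopDir and hosts file
locations are assumptions, adjust them to your installation; this needs a
shell with process substitution, e.g. bash):

comm -23 <(ls /var/lib/backuppc/pc | sort) \
         <(awk '!/^#/ && NF { print $1 }' /etc/backuppc/hosts | sort)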

Aside from that, you probably need to trust BackupPC to work as designed :-).

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to manage disk space?

2015-04-15 Thread Holger Parplies
Hi,

Mark Campbell wrote on 2015-04-15 14:37:53 -0700 [Re: [BackupPC-users] How to 
manage disk space?]:
 [...] The best way to remove "orphaned" files (i.e., files that do not
 have any other hard links), is to run:
 
 su -s /bin/bash -c '/usr/share/BackupPC/bin/BackupPC_nightly 0 255' 
 backuppc

as always when that is suggested: WRONG. NEVER call BackupPC_nightly directly.

 What this does is,

possibly trash your pool. And no, you are very unlikely to notice.

 In short, I'm not sure of a way to check ahead of time whether there's
 files that are "cruft", but you can be sure that after running this
 script, the number of "cruft" files will be zero.

For an extremely meaningless definition of 'cruft'. BackupPC_nightly is
automatically run by BackupPC when it is safe to do so every night (in
particular, BackupPC won't interfere while it *knows* BackupPC_nightly is
running, which it doesn't if you run it by hand), so this sort of 'cruft'
is regularly removed anyway without any manual action.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to manage disk space?

2015-04-14 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2015-04-14 09:34:35 -0500 [Re: [BackupPC-users] How to 
manage disk space?]:
 On Mon, Apr 13, 2015 at 4:57 PM,  backu...@kosowsky.org wrote:
  Dave Sill wrote at about 15:28:49 -0400 on Monday, April 13, 2015:
We've been using BackupPC for a couple years and have just encountered
the problem of insufficient disk space on the server. [...]
   
What I'd like to know is (1) where is the disk space going,
  To store ayour backups
 
and (2) how can adjust BackupPC to use less space?
  Save fewer backups or backup fewer machines

Jeffrey has a point here. You don't give us much detail to guess on. A couple
dozen Linux servers can mean just about anything.

 But more specifically, a likely problem is that you have some very
 large files like databases, log files, virtual machine images or
 mailboxes that change daily and thus are not pooled.

That is one possibility. Another would be keeping several years worth of daily
history of large mail servers. Either your history is too long (for the disk
space available), or your backups are too large, or most likely a combination
of both. Backups may be too large either by design (you need to backup too
much data) or by malfunction (you are backing up something you don't mean to
backup).

Yet other possibilities would be that BackupPC_nightly is not running, or that
linking is not working.

Then again, you might have meant to ask, "how do I find out where the disk
space is going?". I can't think of a good answer to that. BackupPC's pooling
mechanism means that if you have 100 copies of one file content (all linked
to one pool file by BackupPC), deleting 99 of them won't save you anything, as
long as one remains. Put differently, one host *might* seem very large in
terms of total backup size, yet share all files with other seemingly smaller
hosts. You really have to look at your source data: what are you backing up,
how often does it change, how unique is it? And you have to know your
constraints. If you *need* to keep a long history of a large amount of data,
there is nothing much you can do (except from getting more disk space). If you
don't, the easiest option is to expire old backups and see what happens - just
keep in mind that you don't get back any disk space for content still present
in more recent backups.
Reducing the size of existing backups is somewhat tricky, and reducing the
size of future backups won't gain you anything until the old backups expire.

Actually, there might be a way to shed some light. I'd probably look for large
files with a low link count (-links 2 or 3) in the pc/ tree. You need to be
aware that 'find' will take a *long* time to traverse such a large pool. It
just might be worthwhile to run a rather general 'find' command with output
redirected to a file and then filter that repeatedly to narrow down your
search, rather than running several different 'find' invocations. Or even
looking in the {c,}pool/ rather than the pc/ tree (faster, but you don't get
any file paths, just file content).
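
Something along these lines, just to give an idea (TopDir assumed to be
/var/lib/backuppc; '-printf' is GNU find; run it as the backuppc user and
expect it to take a while):

# collect size, link count and path once ...
find /var/lib/backuppc/pc -type f -links -3 -printf '%s %n %p\n' > /tmp/pc-files.txt
# ... then filter the result repeatedly, e.g. largest barely-shared files first
sort -rn /tmp/pc-files.txt | head -20
awk '$1 > 100*1024*1024' /tmp/pc-files.txt | sort -rn | head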

Running 'find $topdir/pc/$host/$num -type f -links -3 -ls' should give you an
approximate list of files that would actually be deleted by deleting [only]
backup $num of host $host ('-links -3' takes into account files for some
reason not linked into the pool; in theory, these *should* all be zero length,
but in case of some malfunction, they might not).

Much of that might not make any sense for your particular case, but I hope
some of it helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Different Blackout Periods

2015-04-01 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2015-04-01 13:25:56 -0500 [Re: [BackupPC-users] Different 
Blackout Periods]:
 On Wed, Apr 1, 2015 at 1:03 PM,  alexander.rehb...@fmex.de wrote:
 
  I want to setup the following backup plan. Every week one full backup
  and every hour an incremental backup.
 
  This works but i have a problem with the balackout periods. Full backups
  should only done at night. is it possible to configure this?
 
 There is no way to specify different blackouts for fulls vs.
 incrementals.

true.

 However fulls won't happen until the FullPeriod time
 has expired since the last one, so if you force one at an appropriate
 time, subsequent runs will happen at about the same time a week later.

That is not strictly true. If I remember correctly, a new full backup will not
happen until the FullPeriod has expired since *completion* of the last one
which might be significantly later, depending on the time the backup takes and
backup concurrency issues. Usually, you can compensate for that with the
schedule, blackouts, and an appropriately lower FullPeriod, to keep your
backups from wandering through the day.

Here, however, blackouts can't be used. What you suggest is very much
equivalent to implementing the effect of blackouts by just forgetting about
them. If blackouts are a requirement, i.e. full backups *must* only be done
at night, rather than "it would be rather convenient if they mostly ran at
night, and when something unexpected happens we will be happy to manually
intervene to correct the timing", then this is not a solution.

I've got a small patch which I would have first expected would do what you
want, however the task is not as trivial as it seems. BackupPC schedules an
incremental backup or a full one as appropriate. When a full one is due, it
is rescheduled until it completes successfully, perhaps being skipped several
times due to blackouts. In the mean time, there is no point in attempting
incremental backups, because the same blackouts would apply. What you really
want to happen when a full run is skipped due to a blackout period applying
only to full backups is an incremental backup to be run instead. However, a
manually scheduled full backup should probably not be automatically turned
into an incremental. Overlapping blackout periods for full and incremental
backups should work as expected, i.e. an incremental replacement backup should
still be restricted by blackouts for incrementals, regardless of the order in
which blackouts are specified. So there are some things to keep in mind.

An additional problem is that, programmatically, BackupPC currently handles
blackouts *before* making the decision whether a full or incremental backup
is needed. There may be a good reason for that (such as avoiding pings during
blackouts - pingCmd might attempt to wake up the remote host) or not, I'd
have to take a closer look. In any case, it would be significantly more
intrusive than changing two lines of code as my first attempt would have.

Configuration wise, I would want to add an optional hash member backupType
with supported values 'full', 'incr' or 'all', defaulting to 'all', which
would make the blackout period apply to backups of the selected type(s), e.g.:

$Conf{BlackoutPeriods} = [
    {
        'weekDays'   => [ 1 .. 7 ],
        'hourBegin'  => 4,
        'hourEnd'    => 2,
        'backupType' => 'full',
    },
    {
        'weekDays'   => [ 1 .. 7 ],
        'hourBegin'  => 2,
        'hourEnd'    => 4,
        'backupType' => 'incr',
    },
    {
        'weekDays'  => [ 3 ],
        'hourBegin' => 1,
        'hourEnd'   => 3,
    },
];

(meaning no full backups from 04:00 to 02:00, no incremental backups from
02:00 to 04:00, no backups at all on wednesdays from 01:00 to 03:00, just to
give an example).

Ah, right, the web GUI would need to support setting this value, too :-).


So, while I can appreciate the need for this feature and don't see a way to
emulate it with current BackupPC capabilities, I can't offer you a solution
right now. I would suggest trying something like triggering full backups with
a cron job (and raising FullPeriod accordingly to avoid interfering automatic
fulls) for now, though this is obviously only an approximation, too.


You do realize that with your configuration the hourly incrementals will
be growing deltas to the previous weekly full backup, right? That may not be a
problem, but with hourly backups you probably need these to be really fast. Is
retransmitting the same changes over and over again going to work for you?

Hope this helps.

Regards,
Holger


Re: [BackupPC-users] Wake-on-LAN setup - no ping response

2015-03-15 Thread Holger Parplies
Hi,

Russ Russ wrote on 2015-03-06 09:07:30 +0300 [[BackupPC-users] Wake-on-LAN 
setup - no ping response]:
 [...]
 I have backuppc server on Gentoo which works fine with regular client
 backups. Now I am configuring wake-on-lan to wake windows clients up before
 backup.
 [...]
 3. Appropriate 777 rights have been granted to wolping.sh.

a=rwx is hardly appropriate. Actually, having 'others' being able to modify
the script is an attack waiting to happen. Anyone at all with access to the
machine can put anything they want into the script and just have to wait for
it to be executed by the BackupPC daemon with its associated privileges. That
is basically the same as giving anyone a free backuppc uid shell, including
full access to all backups and presumably root access to all client machines
you are backing up. Not necessarily a good idea ;-).

I don't believe BackupPC actually checks the permissions before executing the
PingCmd (or any other), but I wouldn't blame *anything* (BackupPC, Perl, bash,
kernel) for blandly refusing to execute a world-writable file as a general
security measure.

 This is output of ls -l: 
 -rwxrwxrwx 1 backuppc backuppc 1246 12:00 wolping.sh

The ownership looks good enough, though 'root:root' might be more suitable,
now you just have to 'chmod go-w' it. And afterwards check it has not been
modified. We'll just assume no malicious party has the file opened for writing
across your chmod and review, but for the record I'll mention the theoretical
possibility.
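
In concrete terms, something like:

chown root:root /usr/local/bin/wolping.sh
chmod 755 /usr/local/bin/wolping.sh     # rwxr-xr-x; nobody but root can modify it
ls -l /usr/local/bin/wolping.sh         # and verify the result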

Curious that your 'ls -l' output contains no date :-).

 4. I have tested wolping.sh and it works great from terminal under backuppc
 user

Well, you use 'sudo'. You haven't by chance previously input a sudo password
in your terminal session? You are sure you are executing
/usr/local/bin/wolping.sh and not a different version of the script somewhere
in your path? ;-) What is the return code of the script when you execute it
from a shell? Your standard shell usually doesn't tell you if you don't ask
explicitly, but it might make a difference for BackupPC.

 Seems that backuppc does not even fire the script as 'logger' does not
 appear in tail -f /var/log/messages

You should see more in BackupPC's log files.

I don't regularly use the 'logger' command, so I'm not familiar with its
intricacies. Is it not called, or does it fail to log, or does it log
somewhere else than you are looking? Does the command line invocation of
wolping.sh cause a log entry (in /var/log/messages)?

Looking at your script, I am wondering what your $Conf{PingCmd} looks like
(out of habit, I'd also like to see $Conf{PingPath}). It's obviously not the
default ('$pingPath -c 1 $host'), because you get your host name from $5:

 [...]
 WAKEHOST=$5
 [...]
 function fwol {
      TO_WAKEUP=$1

(apparently never used)

      sudo $ETHWAKE $1

I suppose you've got an entry similar to this in your /etc/sudoers:

backuppc ALL=NOPASSWD: /sbin/etherwake -i enp3s0 *

You use different invocations of the ping command - is that intentional?

 [...]
 $PING $ARG1 $ARG2 $ARG3 $ARG4 $WAKEHOST > /dev/null 2>&1
 $PING $ARG1 $ARG2 $WAKEHOST
 $PING $ARG1 $ARG2 $ARG3 $ARG4 $WAKEHOST

Wouldn't it make more sense to control that in the script and call it with the
parameters that are truly variable, e.g.

wolping.sh hostname interval

Your script is implying semantics for $ARG3 and $ARG4, so you can't really
configure PingCmd to use wolping.sh with arbitrary 'ping' switches (let alone
a variable number of them) in arbitrary order, as it might first appear. You
might also want to replace the sleep command with a fancy ping invocation to
be more adaptive to the time an individual host actually requires to boot. You
could experiment with something like

ping -i 10 -c 18 -A -q hostname

which should wait at most 180 seconds (like your sleep command), probe your
host in 10 second intervals and terminate shortly after the host begins to
respond. That way, you could increase the tolerance (raise the -c argument) to
accomodate for slow hosts and yet not have to unnecessarily wait for a fixed
long time for fast hosts. Of course, you would also wait the maximum time for
hosts which simply fail to wake up.
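
To illustrate, a minimal sketch of such a script (untested, of course ;-); it
assumes etherwake can resolve the host's MAC address, e.g. via /etc/ethers,
and that a sudoers entry like the one above is in place:

#!/bin/sh
# usage: wolping.sh <hostname> [interval]
HOST=$1
INTERVAL=${2:-10}

sudo /sbin/etherwake -i enp3s0 "$HOST"        # send the magic packet
# poll for up to ~180 seconds, returning as soon as the host answers
exec ping -i "$INTERVAL" -c 18 -A -q "$HOST"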


Hope that helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] BackupPC-4.0.0alpha3 - Can't create a test hardlink between a file...

2015-02-25 Thread Holger Parplies
Hi,

for the sake of completeness (though Adam has already made the point: it helps
to understand what you are asked to do rather than blindly doing it - people
make mistakes when giving advice, as the OP has proven ;-):

Akibu Flash wrote on 2015-02-19 13:32:14 -0800 [Re: [BackupPC-users] 
BackupPC-4.0.0alpha3 - Can't create a test hardlink between a file...]:
 [...]
 Once I fixed the permissions issue, the command start stop command would
 work and give me an ok.  

Beware!

*When starting daemons*, the invoking process can only detect errors the
daemon encounters *before* forking into the background. Often enough and not
necessarily in the context of BackupPC, I've seen the ok type success
message only to find out that the daemon had terminated immediately after
forking, sometimes due to the inability to open a log file, meaning there's
absolutely no trace of why the daemon was not running. This can be extremely
annoying to debug.

BackupPC does some tests before forking, but there are several fatal errors
that can happen after forking. In particular, the log file is opened after
forking, and this *can* fail (e.g. permission problems), so BackupPC *can*
potentially silently die after you get the ok.

What this means is: "I got 'ok', so it's running" makes no sense for daemon
processes. It's "I can see it in 'ps', so it's running". "Ok" only means it
didn't fail for the more obvious reasons, or rather, not for those of the more
obvious reasons that could be tested beforehand :-).

 [...]
 backuppc@linux-pc5:/$ ls -ld /data
 drwxrwx---+ 3 root root 4096 Jan 18 23:00 /data

This is actually interesting. ACLs? SELinux? I'm referring to the '+' at the
end of "drwxrwx---+". It might even be important, assuming your backuppc user
is *not* in the root group!?

More explicitly: these permissions seem strange to me, but I suppose someone
set them up this way (your distribution? your distribution's BackupPC
package?). Without the whole picture (ACLs, effective UID, GID and
supplementary groups of the running BackupPC process) I can't say whether
this will work or not, though if it didn't, BackupPC should fail before you
get the ok message, meaning you wouldn't get the ok message.

I'd normally have expected something like

 drwxr-xr-x 3 root root 4096 Jan 18 23:00 /data

Access to sensitive information below /data can be limited by subdirectory.
I don't see the point in hiding the subdirectories altogether.

 backuppc@linux-pc5:/$ ls -ld /data/BackupPC
 drwxrwx---+ 5 backuppc backuppc 4096 Jan 18 23:00 /data/BackupPC

Except for the +, that is what I'd expect here.

 [...]
 backuppc@linux-pc5:/$ tune2fs -l /dev/mapper/dreux-root
 tune2fs 1.42.12 (29-Aug-2014)
 tune2fs: Permission denied while trying to open /dev/mapper/dreux-root
 Couldn't find valid filesystem superblock.
 
 Is the output what was to be expected?

Yes. This is the "understanding what you're doing" part again. As non-root
user, you don't have permission to access the device file. The error message
is not "Couldn't find ...", it's "tune2fs: Permission denied ...", and it's
quite clear :-). "Couldn't find ..." is tune2fs's explanation why it gave up
as a result of not being allowed to read from the device.

If you want meaningful output, you'll need to run the tune2fs command as
root. And *before* you do, you might as well try

backuppc@linux-pc5:/$ df -T /data/BackupPC

and see if the type of the file system is really one of ext2, ext3, ext4;
otherwise 'tune2fs' is simply the wrong tool. All of that said, I'm not sure
why the OP wanted you to run tune2fs. Perhaps he wanted to see whether it
really is an ext[234] file system :-).

On a side note, if you're serious about running BackupPC, you definitely
*don't* want to have the pool on your root file system. For testing,
evaluation, and perhaps even debugging problems, you might be fine, but for
production use, you are asking for problems. You're apparently using LVM,
so creating an independent logical volume is really easy (presuming you
haven't allocated all of the space to your root FS).
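
If there is still free space in the volume group (the device name above
suggests a VG called 'dreux'; names and size here are placeholders), that is
something like:

lvcreate -L 500G -n backuppc dreux
mkfs.ext4 /dev/dreux/backuppc
mount /dev/dreux/backuppc /data/BackupPC      # plus a matching /etc/fstab entry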

 Or should my merely removing the linked target also remove the linked file?

No, that will never happen.

I hope some of that helps rather than confuses ...

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] tdb_transaction_recover: attempt to recover read only database

2015-01-08 Thread Holger Parplies
Hi,

G.W. Haywood wrote on 2015-01-08 11:33:40 + [[BackupPC-users] 
tdb_transaction_recover: attempt to recover read only database]:
 When attempting to back up Samba shares on a Linux host using BackupPC
 version 3.2.0 (yes I know that's quite old:) and the 'smb' method,

you mean odd, right? ;-)

You've stated the solution yourself:

 [...] The shares
 are also backed up remotely using rsync over a chain of (relatively
 slow) wireless links, and that gives no trouble.

rsync also works well over a fast local loopback interface or even a simple
UNIX pipe :-). The only advantage I can see of using smb for a localhost
backup on a Linux machine is that you can change the file system layout and
only have to change the Samba configuration, but that hardly seems worth the
headache, unless you do so on a regular basis.

I recommend you change your backup to rsync with a "sudo rsync" type
RsyncClientCmd (assuming you need root privileges to read all files). You
*can* limit that to read-only backups of certain paths via appropriate
entries in /etc/sudoers, if you feel so inclined.
After all, if you ever restore anything, restoring the exact files,
ownerships and permissions on the file system accessed by Samba has no
disadvantages over restoring what Samba presents to Windoze clients
through Samba.
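
A sketch of what that could look like (the command template follows the stock
3.x RsyncClientCmd with sudo added; the sudoers line is only an example of
restricting the login to read-only, server-mode rsync and will need adapting):

$Conf{XferMethod}     = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l backuppc $host sudo $rsyncPath $argList+';

# on the client, in /etc/sudoers (read-only: only the sending server mode allowed):
#   backuppc ALL=NOPASSWD: /usr/bin/rsync --server --sender *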

 The other backup clients are all Windows boxes - a couple of servers
 and a few dozen workstations, all conected via a gigabit LAN.  They
 don't seem to give this problem, or if they do it's only rarely.

There is no reason you can't continue to run smb backups on those.

 Nothing useful has appeared in my searches and the specific function
 name does not seem to appear anywhere in the BackupPC list archives.

I doubt the problem is related to BackupPC. tdb sounds like a Samba
database to me (which also explains why Windoze servers don't exhibit
the problem ;-). From the Samba perspective, you might want to continue
investigating, in case you've got something there that might turn into
a problem (or already be one). Then again, maybe it's just unusual strain
on the Samba server caused by the backup process (reading many files in
a short time frame). From the BackupPC perspective, just circumvent it :-).

Hope that helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Problem with Full Backups and Exclussions

2015-01-05 Thread Holger Parplies
Hi,

Michelle Dupont wrote on 2015-01-05 17:22:35 +0100 [Re: [BackupPC-users] 
Problem with Full Backups and Exclussions]:
 On Mon, 5 Jan 2015, Francisco Suarez wrote:
 
 Thanks Michelle,
 I'm not sure I understand well how the exclusions work still. Only need to 
 all content exclude these directories. When you say share do you mean 
 share's already entered under RsyncShareName for example /home.

if you're using rsync(d), then, yes, what you enter as $Conf{RsyncShareName}
is the (list of) share name(s). You need to use *either* these *exact* values
as keys in BackupFilesExclude or the literal value '*' for all shares for which
there is no matching key. For example,

 If you define 2 RsyncShareName :
 /home
 /var

(i.e. $Conf {RsyncShareName} = [ '/home', '/var' ]), then the following
(alternative) examples would give you meaningful excludes:

# 1.) exclude /home/cache, no excludes under /var
$Conf {BackupFilesExclude} = {
    '/home' => [ '/cache' ],
};

# 2.) exclude /home/cache and /var/cache
$Conf {BackupFilesExclude} = {
    '*' => [ '/cache' ],
};

# 3.) exclude /home/foo and /var/cache (note how the '*' entry matches /var
# because there is no explicit entry for /var, and how it doesn't match
# /home because there is an explicit entry)
$Conf {BackupFilesExclude} = {
    '/home' => [ '/foo' ],
    '*'     => [ '/cache' ],
};

# 4.) exclude /home/cache and /var/cache (note how the '/home/' entry doesn't
# match /home because it is not literally identical, even though file
# system semantics would suggest otherwise)
$Conf {BackupFilesExclude} = {
    '/home/' => [ '/foo' ],
    '*'      => [ '/cache' ],
};

 [...]
 Which of these will accomplish the desired result to exclude content on 
 the directories listed?
 
 $Conf{BackupFilesExclude} = {
   '/' => [
     '/home/theplans/log/*',
     '/home/run/positions/trace/*',
     '/home/hun/res/received/*',  
   ]
 };

Since you *appear* to have $Conf{RsyncShareName} = '/home', it would be

$Conf {BackupFilesExclude} = {
    '/home' => [
        '/theplans/log',
        '/run/positions/trace',
        '/hun/res/received'
    ]
};

presuming you also want to exclude the directories. If you want to get them
and only exclude the contents of the directories, add the '/*' as you did
above.

Excludes and includes are always specific to the XferMethod used. For more
details on what you can do with rsync(d), see the rsync man page. For tar and
smb, the syntax may differ, though in the simple case (only fixed paths, no
wildcards) it should work much the same (though smb always seems to cause
confusion with the path separators).

Hope that helps.

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Exclude all cache folders in Unix

2015-01-05 Thread Holger Parplies
Hi,

Jose Miranda wrote on 2015-01-04 21:55:01 -0400 [[BackupPC-users] Exclude all 
cache folders in Unix]:
 I'm trying to exclude all cache folders in the whole server. I'm backing up
 root and I want to exclude anything containing cache, such as:
 
 /home/admin/.../.../cache/.../...
 /home/admin/.../cache/.../
 /.../.../cache/.../...
 
 What should I use as filter in BackupFileExclude in the web interface?
 
 I tried the following and it didn't work:
 
 /*/cache/*/
 */*cache*

as always, that really depends on the XferMethod, but I'd expect a simple

cache

to work. For rsync(d), there's also

/**/cache

(with no apparent advantage over just cache). You could also limit that to
only *directories* named cache:

cache/

(not sure if that works for tar et al.).

When using the web interface, make sure you get the entry right. You need a
share name to which the exclude applies and one or more patterns to exclude
(not sure what it looks like exactly, I don't use the web interface).

That said, I hope you have no source code repositories with caching mechanisms
implemented in subdirectories named cache. Relying solely on the name of a
directory for an exclude might not be a good idea. I tend to prefer gathering
a list of cache directories actually used, so I don't end up excluding
something I didn't intend to, because someone named it in a way I didn't
anticipate. This might take some work and will tend to require occasionally
adding new cache directories to the list. It really depends on what is worse
for you: missing files in your backup or having extraneous cache directories
ending up in your backup (which *can* make the whole backup fail). Just
remember that you likely won't notice files being missed, while you will
notice failing backups. Hopefully :-).
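
One way to gather such a list on the client (an example of taking stock, not a
recommendation of what to exclude):

# directories whose name contains "cache", staying on one file system
find / -xdev -type d -iname '*cache*' 2>/dev/null | sort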

Regards,
Holger

P.S.: I can't see anything wrong with using excludes like
  /home/*/.mozilla/firefox/*/Cache
  (also note that not all cache directories are named cache ;-).

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] Execute script before backup

2014-12-18 Thread Holger Parplies
Hi,

Colin Shorts wrote on 2014-12-17 09:17:58 + [Re: [BackupPC-users] Execute 
script before backup]:
 On 17/12/14 07:50, Ibon Castilla wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1
 
  Hi again,
 
  I have tested the workaround proposed by Colin, and I have a comment
  and a question about it:
  [...]
 That said, there was a follow-up from Les Mikesell that might work better.

I need to fully agree with this. The suggestion to use BackupPC_serverMesg has
downsides not yet covered here. You lose the scheduling capabilities of
BackupPC. If a backup fails, BackupPC will retry it. Your cron job won't.
BackupPC will keep track of when full backups are required and when
incrementals will do. Your cron job can emulate that by requesting an auto
backup instead of an explicit full or incremental, but that won't work together
with BackupsDisable (see below). BackupPC has settings to regulate the number
of simultaneous backups. A backup initiated with a serverMesg counts as user
backup, not as automatically scheduled backup. There is no guarantee how close
to the serverMesg the backup will actually be started.

  * Comment: the idea was to use BackupPC_serverMesg to launch the
  backup, and then use that command again to test if the server ended
  that backup successfully.
  I've realized that the command just tells the server to launch the
  backup, but it really doesn't do that action by itself, so I have to
  ask to the server many times if the backup has finished yet.

Correct. Even worse, it will only *queue* the backup. Depending on
configuration and other backups running, it could be hours before the backup
even *starts*.
This somewhat defeats the whole point, if your VPN is not guaranteed to
survive a long idle period, and even if it is, you could just as well leave
the VPN running permanently. Starting and stopping it suggests that you don't
want it active outside the backup window.

  Instead of BackupPC_serverMesg I've decided to use BackupPC_dump,
  because this one does the backup by itself.

By doing that, you lose even more. You do realize that you need to run a
BackupPC_link, and that you cannot safely run that yourself, don't you?
BackupPC will not really be aware of the fact that the backup has been run
outside its control. I believe it will show up in the web interface by virtue
of BackupPC_dump having updated the backups file, but I'm not sure what
statistical values will be maintained correctly and whether automatic
scheduling and error reporting would work.

It's simply not the way BackupPC is designed to work. You're really taking a
central piece of BackupPC and throwing the rest away, or rather, hoping the
rest won't interfere. You might be better advised to just use plain rsync from
the start. That won't give you pooling, compression or a web interface, but it
would at least be a clean and flexible solution.

  [...]
  * Question: to use BackupPC_dump, the host must be defined on the host
  file. [...] avoid that I've been trying to use $Conf{BackupsDisable} on
  my_host.pl file. If I set that parameter to 1 or 2, then BackupPC_dump
  complains about it, saying that the host is configured to avoid
  backups.

A value of 1 should disable scheduled backups while still running manual
backups, but you would have to explicitly request full or incremental backups.
What you are trying to do is not really supported.

 You might be better to leave the backups enabled and just increase the 
 ages (FullPeriod and IncrPeriod) from 6.97 and 0.97 respectively, to 
 something a little higher, that way the backuppc won't attempt to backup 
 the host as long as your scheduled task is working.

Yes, but it also won't *succeed* in backing up the host when the scheduled
task is *not* working, because it won't set up the VPN link, so you don't gain
the fallback you are aiming at. Setting FullPeriod and IncrPeriod to really
high values (say 10000 = almost thirty years) should avoid automatic backups,
unless, of course, something strange happens to your system clock ;-).
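
In config terms, that last suggestion is simply (values as discussed, pick
your own):

$Conf{FullPeriod} = 10000;    # roughly 27 years - effectively no automatic fulls
$Conf{IncrPeriod} = 10000;    # likewise for incrementals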

Regards,
Holger

___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/


Re: [BackupPC-users] How to delete backups

2014-12-10 Thread Holger Parplies
Hi,

Colin Shorts wrote on 2014-12-10 11:45:41 + [Re: [BackupPC-users] How to 
delete backups]:
 You might want to press Enter before typing 
 `/usr/share/BackupPC/bin/BackupPC_nightly 0 255', otherwise it will get 
 deleted too.

right, press Enter *before* the command, *not after*. ***Never*** run
BackupPC_nightly from the command line. ***Never*** advise others to do so.
Really quite simple.


Before that, on 10/12/14 11:41, Stephen Mew had written:
 Solution
 
 
 cd /var/lib/BackupPC/pc/client/
 ls
 rm -fR backup_number [...]

Yes, it's a solution. The question remains, for what problem. If you know
*exactly* what you're doing, you can try that. Chances are, if you're asking
here how to delete backups, you *don't* know *exactly* what you're doing. If
you're recommending this, you probably also don't *exactly* know what you're
doing, otherwise you wouldn't recommend it. Removing one backup directory will
usually work quite well, if and only if no backups depend on it. Nothing will
prevent you from removing data still needed by other backups with 'rm'. Don't
expect BackupPC to handle the resultant pc directory gracefully.

If you want to remove *all* backups of one client (the original question
suggests this might be the case),

rm -rf /var/lib/BackupPC/pc/client

will work (supposing /var/lib/BackupPC is your TopDir ... mine is different).
No dependency problems, no summary files to patch. For removing single backups,
get Jeffrey's script.

 nano /var/lib/BackupPC/pc/client/backups

nano? The only thing you need to know about nano is Ctrl-x. Tip for Debian
users:

update-alternatives --set editor /usr/bin/nvi
update-alternatives --set view /usr/bin/nview

But that's kind of off-topic (and a matter of opinion - I'm sure there are
other vi clones that are rather acceptable ;-).

Yet earlier still, on 10 December 2014 at 11:04, lanceh1412-busin...@yahoo.co.uk
had written:
 How do you delete backups? I've been messing around with lots of
 different hosts trying to work out how to use the software and I
 want to get rid of all the unwanted backups.

If you want to get rid of all backups of one or more hosts, delete the host
directories (or move them to $TopDir/trash and BackupPC will do it for you).

 [...]
 There is no BackupPC_delete which I have seen mentioned in the
 archive.

That's correct. It's not part of the official BackupPC software, it's a user
contribution by Jeffrey J. Kosowsky. I don't have a recent copy, but I'm sure
Jeffrey does ;-).


Let me state this clearly:

After deleting backups, there is no *need* to initiate a BackupPC_nightly run.
BackupPC will automatically do that at some configured point within the next
24 hours. Only if you absolutely want to see the space freed as soon as
possible do you need to politely ask BackupPC to run the command as soon as it
is safe to do so:

# su backuppc -c '/usr/share/backuppc/bin/BackupPC_serverMesg BackupPC_nightly run'


Hope that helps.

Regards,
Holger



Re: [BackupPC-users] How to delete backups

2014-12-10 Thread Holger Parplies
Hi,

Colin Shorts wrote on 2014-12-10 16:30:40 + [Re: [BackupPC-users] How to 
delete backups]:
 On 10/12/14 15:59, Holger Parplies wrote:
  Colin Shorts wrote on 2014-12-10 11:45:41 + [Re: [BackupPC-users] How 
  to delete backups]:
  You might want to press Enter before typing
  `/usr/share/BackupPC/bin/BackupPC_nightly 0 255', otherwise it will get
  deleted too.
  right, press Enter *before* the command, *not after*. ***Never*** run
  BackupPC_nightly from the command line. ***Never*** advise others to do so.
  Really quite simple.
 
 I was just trying to point out that the commands provided would have 
 deleted BackupPC_nightly ;)

yes, and I agree with you there. I strongly disagree with the previous
poster's intended advice, though.

 On 10/12/14 11:41, Stephen Mew wrote:
 ~snip~
   rm -fR backup_number /usr/share/BackupPC/bin/BackupPC_nightly 0 255
 ~snip~
 
 That wouldn't be good, would it?

I'm tempted to say that it would be better than running the command, but the
comparison is really pointless. Of two things you should simply refrain from
doing, who cares which one would be more harmful?

 But as you say, there is no real reason to run the command as cron 
 should take care of it soon enough -

Actually, it's not cron, it's the BackupPC daemon, and this is the whole
point: the daemon needs to have control over when the command is run(ning).

 unless you were deleting a backup to gain disk space urgently ;)

Exactly, and this did not seem to be the case here. And again: even in this
case, you don't run the command manually, you let the daemon run it. It will
do so right away if it is safe, so there's no penalty. If it is *not* safe,
you don't want to override the safeguard.

Regards,
Holger



Re: [BackupPC-users] How to delete backups

2014-12-10 Thread Holger Parplies
Hi,

Timothy J Massey wrote on 2014-12-10 11:53:38 -0500 [Re: [BackupPC-users] How 
to delete backups]:
 Holger Parplies wb...@parplies.de wrote on 12/10/2014 10:59:17 AM:
  [...]
  ***Never*** run BackupPC_nightly from the command line. ***Never*** advise
  other to do so.
  Really quite simple.
 
 Why not, exactly?  I do it all the time.  I'm not saying you're wrong:  I 
 just want to know where the harm might be. But you state the "never do
 that" in such strong terms, I'm wondering where the disconnect is.

BackupPC_link and BackupPC_nightly shouldn't run concurrently because of
race conditions in the code. As far as I remember, BackupPC_nightly might
remove (or chain-renumber) a file BackupPC_link is about to create a link
to, which would result in the file missing from the backup or containing
incorrect data. The odds of this happening might not be high, but why take
the risk? There might be further problems I am unaware of. If you let the
scheduler run BackupPC_nightly, it will take care not to run BackupPC_link
concurrently, so possible problems will be avoided. It will also receive
the statistical output from BackupPC_nightly, which, I believe, it won't
if you run BackupPC_nightly from the command line.

 [...]
 By the way, that's probably the best way to handle this problem.  Set the 
 FullKeepCnt and IncKeepCnt to 1, do a full backup and let BackupPC manage 
 deleting everything for you!  :)

I agree with that (except that it's IncrKeepCnt ;-) in that it is the
easiest and least error-prone way. This will leave you with one backup though,
so it's mostly equivalent to the "delete entire host" case (you have the
choice of running a backup after deleting the host depending on what you want)
except for backup numbering (deleting the host will restart at 0). If you want
to keep the last 10 backups, for instance, you would need to count your fulls
and incrementals and adjust the KeepCnts to match.
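
As a sketch of that approach (per-host settings; adjust the counts to how many
backups you actually want to keep):

$Conf{FullKeepCnt} = 1;    # keep only the most recent full backup
$Conf{IncrKeepCnt} = 1;    # keep only the most recent incremental backup

Expiry happens as part of BackupPC's normal housekeeping, so the disk space is
not freed the instant you change the settings.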

Regards,
Holger



Re: [BackupPC-users] Retrying failed files from a full backup in subsequent incremental backups

2014-12-01 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2014-12-01 10:34:18 +1100 [Re: [BackupPC-users] 
Retrying failed files from a full backup in subsequent incremental backups]:
 On 30/11/14 23:28, Bodo Eggert wrote:
  On 28/11/14 12:59, Bodo Eggert wrote:
  If a file from a full backup failed to transfer, what is the best way
  to make a subsequent, incremental backup include these (few) files?

for rsync(d), the next backup (incremental or full) should always contain any
files not in the previous backup (or reference backup, to be exact),
regardless of modification time. This is because rsync, in contrast to tar/smb,
always compares a full file list on both sides.

 [...]
  I'm using Version 3.3 (Ubuntu), and since skipping
  one file does not count as failure,
 How do you define "skipping one file does not count as failure"?
 I suppose it depends on why the file was skipped. I think failure due 
 to permission error, or similar are not considered failures for the 
 backup overall, so in some cases they will be skipped, while in others, 
 it will be a fatal failure and the backup will not continue past that 
 file.

I agree with that, but I can't say exactly which failures there can be or
which are treated how. I also don't know what happens when a file is
skipped. It could be treated as unchanged or marked as deleted. This
probably depends on what native rsync on the remote side does.

  and since that file isn't written
  since the full backup, it doesn't get backed up in subsequent
  (incremental) backups.

I don't agree with that, at least for rsync(d) (which, I believe, we're
talking about). See above (full file list comparison).

  Also, whenever there is a connection problem, all the downloaded files
  from the next, incremental backup are thrown away. I'm currently
  circumventing that problem manually.

I'm not sure I understand you correctly. Are you talking about an incremental
backup following an interrupted backup (that's what it sounds like to me) or
an interrupted incremental backup (answered by Adam)?

 Although I think that is a config setting, so your config might be 
 different to mine (ie, keeping failed full backups is a config setting, 
 I don't think there is any option to keep failed incremental backups).

Correct. $Conf{PartialAgeMax} limits the applicability of a partial backup to
speed up the next attempt at the (failed) full backup or disables keeping
partial backups altogether.
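
For reference, the setting looks roughly like this (3 is, I believe, the default;
a negative value disables keeping partial backups):

$Conf{PartialAgeMax} = 3;    # use a partial backup as the basis for the next
                             # full attempt only if it is at most 3 days old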

Regards,
Holger



Re: [BackupPC-users] Turning an incremental backup into a full backup

2014-12-01 Thread Holger Parplies
Hi,

Adam Goryachev wrote on 2014-11-28 13:24:41 +1100 [Re: [BackupPC-users] Turning 
an incremental backup into a full backup]:
 On 28/11/14 13:11, Bodo Eggert wrote:
  Is there a way to create a full backup by joining an incremental backup
  and its full backup?

short answer: no, and for a good reason.

  I need to backup a remote system, and while differential backups are
  possible (with increasing difficulty), a full backup would require moving
  the backup system to the remote site.

As Adam has pointed out, this is not true for rsync(d). A full rsync(d) backup
might even use *less* bandwidth, as it always uses the most recent backup as a
reference. For incremental backups, this would only be the case if the
IncrLevel is increased for each and every backup.

For other XferMethods, you need to realize that incremental backups are
inexact. You miss all deletions of files (meaning your latest backup will
contain files deleted long since) as well as files appearing somewhere
with old timestamps (e.g. renamed files, files extracted from archives or
copied with timestamp preserving options active). Thus you *need* regular
full backups.
Even for rsync(d), incremental backups can conceivably miss changes in
files, though this is much less likely.

It is true that rsync(d) full backups may take far longer than corresponding
incremental backups would, but this doesn't change by moving the backup system
to the remote site.

  For now, I'm increasing the IncrLevels settings, but this does not seem to
  be a good way, it's just the least bad way I can see.

This probably doesn't scale well. Constructing a view of a high level
incremental backup requires the BackupPC server to do a lot of work, and
it has to do it for each new backup to construct the reference backup
(assuming rsync(d) as XferMethod, that is), as well as for browsing backups
or restoring (for any XferMethod). At which IncrLevel are you currently?

Aside from that, your incremental backups cannot expire as long as higher
level incremental backups depend on them. You would effectively be forced to
keep all data forever, and when you finally run out of space, there is no easy
way out, because you cannot delete any backup except the most recent one(s).
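
For context, the setting being discussed is a list of levels applied to
successive incrementals, roughly like this (the values are purely illustrative,
not a recommendation):

$Conf{IncrLevels} = [1, 2, 3, 4, 5, 6];   # each incremental is taken relative to
                                          # the most recent backup of a lower level

Every additional level adds another backup to the chain that has to be merged to
reconstruct a view, and that cannot expire while later backups still depend on it.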

 Given that is the whole point of rsync (reduce 
 data transferred between the source and destination at the cost of 
 RAM/CPU/time).

Put differently, if you are bandwidth limited, you should be using rsync(d)
anyway.

 Can you provide some more detail on your requirements, or the problem, 
 and current configuration?

Regards,
Holger



Re: [BackupPC-users] PIPE error

2014-11-20 Thread Holger Parplies
Hi,

Les Mikesell wrote on 2014-11-20 12:26:39 -0600 [Re: [BackupPC-users] PIPE 
error]:
 On Thu, Nov 20, 2014 at 12:08 PM, tschmid4 tschm...@utk.edu wrote:
 
  Correct, It's a question, here's another:
  You are correct, the configuration was backing up a dir that was not there.
  However, after removing the entry, it returns the same error.
  What else would cause 'unable to read 4 bytes?'

An error in your configuration would.

 "Unable to read 4 bytes" is not the same error as:
  Remote[1]: rsync: change_dir /systems failed: No such file or directory 
 (2)

In particular, "unable to read 4 bytes" is not the error we're talking about.
"Unable to read 4 bytes" was *eventually* resolved despite your (the OP's)
efforts. "The same error" is "Got fatal error during xfer (aborted by
signal=PIPE)". So why are you interested in further causes of "unable to read
4 bytes"? Doing a survey?

Hope that helps.

Regards,
Holger



Re: [BackupPC-users] rsync out of memory

2014-11-20 Thread Holger Parplies
Hi,

Christian Völker wrote on 2014-11-20 21:07:59 +0100 [[BackupPC-users] rsync out 
of memory]:
 [...]
 Unfortunately, I'm getting errors when backing up one host. My debugging
 gave the following results:
 
 bash-4.1$ ./BackupPC_dump -v -f pdc.evs-nb.de
 cmdSystemOrEval: about to system /bin/ping -c 1 -w 3 abc.domain.de

so ClientNameAlias for pdc.evs-nb.de is abc.domain.de? ;-)

 [...]
 Running: /usr/bin/ssh -q -x -l root abc.domain.de /usr/bin/rsync
 --server --sender --numeric-ids --perms --owner --group -D --links
 --hard-links --times --block-size=2048 --recursive -x -z -v
 --no-inc-recursive -v --ignore-times . /
 [...]
 Anyone having a clue?

Yes. Remove the '-z' option. It won't work with File::RsyncP.
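
As a sketch, assuming the extra options (-x, -z, -v, ...) were added to
$Conf{RsyncArgs} in your config (the default argument list differs between
versions, so adapt your existing setting rather than copying this):

$Conf{RsyncArgs} = [
    '--numeric-ids', '--perms', '--owner', '--group', '-D',
    '--links', '--hard-links', '--times', '--block-size=2048',
    '--recursive', '-x', '-v', '--no-inc-recursive',
    # '-z' deliberately left out: File::RsyncP cannot handle rsync compression
];

If the -z was added somewhere else (for instance in a per-host override), remove
it there instead.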

Regards,
Holger
