Re: [BackupPC-users] restore a backup from command line

2023-11-03 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 3 Nov 2023, Ghislain Adnet wrote:


Well I saw the examples but they seem to create the tar on the backup
server, then move the tar, then untar it.

Unfortunately I do not have the disk space for that, neither on the
backup server nor on the destination server :(

I would like to do a restore the same way I do it in the web
interface, to automate this, but it seems there is no tool for it.


I think what you're looking for is 'BackupPC_tarCreate'.  You will
find it mentioned in both the BackupPC FAQ and in the list archives
(at least eight hundred times by my quick count this evening).
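An untested sketch, but a pipeline along these lines avoids the
intermediate tar file entirely - BackupPC_tarCreate writes the archive
to stdout, and ssh feeds it straight into tar on the destination.  The
host names, share name and path below are made-up placeholders, and the
path to BackupPC_tarCreate varies between packages, so adjust to taste:

```shell
# Stream the most recent backup (-n -1) of host 'myclientserver.com',
# share '/home', straight to the destination box - no intermediate tar
# file on either side.  Host names and share are placeholders.
sudo -u backuppc /usr/share/backuppc/bin/BackupPC_tarCreate \
    -h myclientserver.com -n -1 -s /home . \
  | ssh root@destination.example.com 'tar -xpf - -C /home'
```

Nothing lands on the BackupPC server's disk; only network bandwidth and
the destination filesystem are consumed.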

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] restore a backup from command line

2023-10-31 Thread G.W. Haywood via BackupPC-users

Hi there,
On Tue, 31 Oct 2023, Ghislain Adnet wrote:


I was wondering if there is any way to restore from the command
line.  I need to restore all of /home/ daily to another server.

I was trying to use


/usr/share/backuppc/bin/BackupPC_restore


but there are no docs and it fails for me as I try various things, like

[~]: sudo -u backuppc /usr/share/backuppc/bin/BackupPC_restore 127.0.0.1 myclientserver.com /etc/vim/vimrc
/usr/share/backuppc/bin/BackupPC_restore: bad reqFileName (arg #3): /etc/vim/vimrc

usage: /usr/share/backuppc/bin/BackupPC_restore [-p] [-v] [-m] <hostIP> <host> <reqFileName>



Any idea if this is possible?


I'm sure it's possible, but not like that. :(

If I look at the code in BackupPC_restore (it's at line 72 in my
version but that's from 2020, yours may be different) I see

if ( $ARGV[2] !~ /^([\w.]+)$/ ) {
print("$0: bad reqFileName (arg #3): $ARGV[2]\n");
exit(1);
}

This tells me that you can't have anything other than a "word character"
or a dot in the third argument.  The '/' character isn't allowed.

$ perl -e '$word = "hello"; if( $word =~ /^([\w\.]+)$/ ) { print "matches\n"; } else { print "does not match\n"; }'
matches
$ perl -e '$word = "hello.world"; if( $word =~ /^([\w\.]+)$/ ) { print "matches\n"; } else { print "does not match\n"; }'
matches
$ perl -e '$word = "hello/world"; if( $word =~ /^([\w\.]+)$/ ) { print "matches\n"; } else { print "does not match\n"; }'
does not match
$

I suspect you might find an answer more to your liking in the mailing
list archives.

--

73,
Ged.




Re: [BackupPC-users] Deleting backups does not free up disk space

2023-10-26 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 26 Oct 2023, Tobias wrote:


... in another installation, we have backups of no longer existing
hosts that cannot be simply deleted. ...


If I understand correctly, and you mean that these cannot be deleted
because they are your only copies of the data from those hosts, then
that means that they are *not* backups.  They're at risk of loss if
they're the only copies.

It sounds to me like you're trying to fit several quarts into a pint
pot, and some of them aren't even of the right beverage.  BackupPC is
a backup system.  If you want an archiving system, don't try to force
BackupPC to be that system.  That's not what it was designed to be.

Your immediate problem seems to be that a full backup is copying a lot
of files which shouldn't have been in the backups in the first place.
To get around that, you could simply call the problem machine(s) by a
different name as far as BackupPC is concerned, set up the exclusions,
and start those backups from backup number one.  Then BPC won't copy
the cruft which you don't need from the previous backup (which as has
been explained is how BackupPC V4 works and there's not much that you
can do about that).  Deduplication and compression will still happen,
unless you do something wilful to prevent it.  You can delete the old
backups in their entirety at your leisure when you have the new ones.

Obviously the main problem is that you need more backup storage space.

--

73,
Ged.




Re: [BackupPC-users] Ever-growing backups

2023-08-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 23 Aug 2023, Kirby wrote:


I am running BackupPC-4.4.0-1.el8.x86_64 on an up-to-date CentOS Stream
release 8 system. This system has been running since early 2015 with
updates, both hardware and software. Things work wonderfully, one might
say too well. It seems old backups are not going away.

I have the following in my config.pl -
$Conf{FullPeriod} = '6.97';

$Conf{FullKeepCnt} = [
 4, 0, 3, 0, 2, 0, 1
];

My understanding is that this should keep at most 10 backups distributed
across the last 64 weeks (448 days). Instead I have hosts with 60 full
weekly backups going back over 500 days. Am I misunderstanding the
FullKeepCnt option?


I don't know if this is your issue, but in config.pl did you see this?

8<--
# There are two reasons you will want a non-zero $Conf{FillCycle} setting
# when you are only doing incrementals:
#
# - a filled backup is a starting point for merging deltas when you restore
#   or view backups.  So having periodic filled backups makes it more
#   efficient to view or restore older backups.
#
# - more importantly, in V4+, deleting backups is done based on Fill/Unfilled,
#   not whether the original backup was full/incremental.  If there aren't any
#   filled backups (other than the most recent), then the $Conf{FullKeepCnt}
#   and related settings won't have any effect.
8<--
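In other words, if a schedule only ever produces unfilled incrementals,
something along these lines (values purely illustrative, not a
recommendation) gives the expiry logic filled backups to count:

```perl
# Illustrative only: fill every 7th backup so that $Conf{FullKeepCnt}
# has filled backups to count and expire.  Choose values for your site.
$Conf{FillCycle}   = 7;
$Conf{FullKeepCnt} = [4, 0, 3, 0, 2, 0, 1];
```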

--

73,
Ged.




Re: [BackupPC-users] A Perl error?

2023-08-13 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 13 Aug 2023, Jan Stransky wrote:

On Sun, 23 Jul 2023, G.W. Haywood via BackupPC-users wrote:

On Sun, 23 Jul 2023, Jan Stransky wrote:


I started to notice the following error. Does anyone have any ideas? It
seems to be more related to the configure.pl generator than to the
actual configuration.

Can't use string ("1") as a HASH ref while "strict refs" in use at
configure.pl line 742.

...
What version of BackupPC are you using, and was it installed from a
package or did you install it manually for example from some tarball?
If the latter, please tell us what it was and where it came from.


I am actually not aware of making any modifications. I run BackupPC
as a Docker image, and the only thing I did was rebuild the image so
the underlying system was updated. BackupPC was running just fine up to
that point. Of course, I cannot exclude file corruption.
I am using this, with an up-to-date Alpine (3.18):
https://github.com/adferrand/docker-backuppc.git
BackupPC is 4.4.0, BackupPC_XS is 0.62 and rsync_bpc is 3.1.3.0


First off I've never run BackupPC on anything except bare metal.  As
far as I can remember I haven't even tried to run it in a VM let alone
using Docker.  You're using somebody else's docker installation and I
can imagine that things could go wrong which have nothing to do with
BackupPC itself.  By using this form of installation you might be
making it easy to install BackupPC if it works, but I don't think it's
necessarily going to make it easy to troubleshoot things if it doesn't.
Looking at 'Dockerfile' in .../adferrand/docker-backuppc.git makes me
ask more questions than it answers for me.

You said "I started to notice" which says to me you've been using some
procedure on at least a semi-regular basis which did *not* produce the
error, and then, from some point onward, it did.  If that's the case
please can you explain this procedure for us?

The configure.pl script is usually run only to install BackupPC, and I
wouldn't expect that anyone would run it routinely.  For example it
doesn't even exist on my BackupPC server, as it's my usual practice to
delete the source trees after running an installation.  The script is
able to recognize some command-line arguments; presumably in your case
the script is being run by something from the github package which you
mentioned.  Do you know how and with what arguments configure.pl ran?
Have you read the script's 'man' page and/or tried to run it yourself?
Did the script leave you with a log file?  If so, it might help if we
can see it.

Please also show us the few lines around line 742 in your configure.pl
script.  I'm not sure we can know what's there without duplicating
your setup, and that would be prohibitively time-consuming - unless
someone on the list happens to be using the same docker image and OS.

Did you check that BackupPC was not running before configure.pl ran?

--

73,
Ged.




Re: [BackupPC-users] Why not use Backuppc to backup server systems?

2023-08-10 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 10 Aug 2023, Les Mikesell wrote:

On Wed, Aug 9, 2023 at 8:42 AM Wächtler, Jan wrote:


...
Is there any reason why this list does not include Server systems?

I am using it for years now to backup (web/application/database)-server
data and configuration, is there any reason why I should not do so?


Databases can be a problem ...


Most of our application of BackupPC is for what you could describe as
"user data".  Mainly things in home directories.  But I tend to back
up /etc/ on everything with BackupPC, sometimes also /usr/local/etc/
and occasionally the odd subdirectory in /var/.  People sometimes do
some pretty strange things in /opt/.  The historical information for
files available from BackupPC can prove useful for almost anything.

For database *data* I don't let BackupPC get involved.  The snapshot
utility for the database creates a file, and that file is transferred
by rsync, all from a crontab entry, to remote storage.  The file won't
benefit from BackupPC's deduplication, and nobody wants an old copy of
a busy database so BackupPC's aged versions are no use either.

For similar reasons, mail server data likely won't benefit much from
the BackupPC approach.  Web servers might - much will depend on the
design and application.  Name servers will range from our tiny setups
with just a few domains to ISP setups with many millions, so each is
going to need to be studied as a special case, as I guess will most
other servers.  We have a virus scanning server; we don't back it up,
since everything on it is downloaded from somewhere else.  Of all our
servers it's probably the one most at risk.  If it gets compromised
(it never has been) it would just be wiped and reinstalled.


... bootable image ... integrate that into backuppc ...


My head hurts. :)

--

73,
Ged.




Re: [BackupPC-users] A Perl error?

2023-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 23 Jul 2023, Jan Stransky wrote:


I started to notice the following error. Does anyone have any ideas? It
seems to be more related to the configure.pl generator than to the
actual configuration.
Cheers,
Jan
Can't use string ("1") as a HASH ref while "strict refs" in use at
configure.pl line 742.


Let's try that again :)

What version of BackupPC are you using, and was it installed from a
package or did you install it manually for example from some tarball?
If the latter, please tell us what it was and where it came from.

--

73,
Ged.




Re: [BackupPC-users] A Perl error?

2023-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 23 Jul 2023, Peter Major wrote:


I don't usually wade into discussions like these, but as I understand
Perl fairly well, I feel the need to point out that you are on the
wrong track, G.W. Haywood.  The error message does not refer to line
742 of config.pl, but to that of configure.pl, presumably the script
which reads config.pl.


Ah, yes, you're quite right.  Apologies for my inexcusable carelessness.

--

73,
Ged.




Re: [BackupPC-users] A Perl error?

2023-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 23 Jul 2023, Jan Stransky wrote:


I started to notice following error.
...
Can't use string ("1") as a HASH ref while "strict refs" in use at configure.pl 
line 742.
...


Looks like you broke it.

Please let us see what you have around line 742 of configure.pl.

In the vanilla configure.pl that would be somewhere around the part
which sets up things to be backed up and/or ignored, but if your
version of config.pl has been heavily modified it could be anything.
This is from a current config.pl here:

8<--
$ cat -n /etc/BackupPC/config.pl | head -n 770 | tail -n 45
   726  # Examples:
   727  #$Conf{BackupFilesExclude} = '/temp';
   728  #$Conf{BackupFilesExclude} = ['/temp']; # same as first example
   729  #$Conf{BackupFilesExclude} = ['/temp', '/winnt/tmp'];
   730  #$Conf{BackupFilesExclude} = {
   731  #   'c' => ['/temp', '/winnt/tmp'], # these are for 'c' share
   732  #   'd' => ['/junk', '/dont_back_this_up'], # these are for 'd' share
   733  #};
   734  #$Conf{BackupFilesExclude} = {
   735  #   'c' => ['/temp', '/winnt/tmp'], # these are for 'c' share
   736  #   '*' => ['/junk', '/dont_back_this_up'], # these are for other shares
   737  #};
   738  #
   739  $Conf{BackupFilesExclude} = undef;
   740
   741  #
   742  # PCs that are always or often on the network can be backed up after
   743  # hours, to reduce PC, network and server load during working hours. For
   744  # each PC a count of consecutive good pings is maintained. Once a PC has
   745  # at least $Conf{BlackoutGoodCnt} consecutive good pings it is subject
   746  # to "blackout" and not backed up during hours and days specified by
   747  # $Conf{BlackoutPeriods}.
   748  #
   749  # To allow for periodic rebooting of a PC or other brief periods when a
   750  # PC is not on the network, a number of consecutive bad pings is allowed
   751  # before the good ping count is reset. This parameter is
   752  # $Conf{BlackoutBadPingLimit}.
   753  #
   754  # Note that bad and good pings don't occur with the same interval. If a
   755  # machine is always on the network, it will only be pinged roughly once
   756  # every $Conf{IncrPeriod} (eg: once per day). So a setting for
   757  # $Conf{BlackoutGoodCnt} of 7 means it will take around 7 days for a
   758  # machine to be subject to blackout. On the other hand, if a ping is
   759  # failed, it will be retried roughly every time BackupPC wakes up, eg,
   760  # every one or two hours. So a setting for $Conf{BlackoutBadPingLimit} of
   761  # 3 means that the PC will lose its blackout status after 3-6 hours of
   762  # unavailability.
   763  #
   764  # To disable the blackout feature set $Conf{BlackoutGoodCnt} to a negative
   765  # value.  A value of 0 will make all machines subject to blackout.  But
   766  # if you don't want to do any backups during the day it would be easier
   767  # to just set $Conf{WakeupSchedule} to a restricted schedule.
   768  #
   769  $Conf{BlackoutBadPingLimit} = 3;
   770  $Conf{BlackoutGoodCnt}  = 7;
8<--

--

73,
Ged.




Re: [BackupPC-users] Backups failing, file contents printed to screen

2023-07-20 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 20 Jul 2023, David Raison wrote:

On 19/07/2023 13:25, David Raison wrote:


I guess I'll try to pipe the process' output to files so that I may
be able to spot when and/or where exactly the switch from listing
to dumping happens ...

... where it started dumping content during that first run ...
...
I tried excluding the entire directory to see if the problem still
occurred, this time on a different location perhaps, and it did, so
it's not any specific file in that directory.

The second time, it started acting up here ...
...
So I don't know where to head on from here.


I've been watching this thread, but I haven't chimed in because I've
had no experience like yours so I can't offer you any particular gold
nugget.  But I know that if that were my system I'd be able to fix it
so here's how I'd go about it.  In the absence of some of the detail,
in places I'm going to have to make some assumptions.

I think there are four possibilities, I'd attack them in this order:

1. A hardware problem.
2. Something caused entirely by the BackupPC config.
3. Something caused entirely by the machine being backed up.
4. Something caused by a combination of 2 and 3 above, which we can
   probably leave for now.  We might just have to come back to it.

The main assumption I'm making is that all these boxes are on the same
Ethernet LAN, and they're using the same switches and cabling, and you
either have great confidence in the network itself or you have already
tried swapping the cables, switches, ports, etc. with no effect so you
know that the network hardware is beyond suspicion.

Firstly, if this is a hardware box and not a VM, it could be as simple
as a broken electronic component.  To find that component might not be
perfectly straightforward but it's very straightforward to replace the
hardware in its entirety.  If you do that and the problem goes away it
points strongly to the hardware being the issue.  In that case I'd be
looking at dissecting it into the major parts: PSU, RAM, motherboard, CPU
and mass storage devices, in more or less that order.  If the network
device is something like a plug-in card that can be high on the list,
if for no other reason than that it's easy to swap.  (As far as I'm
concerned, dodgy firmware in interface cards comes under the heading
of hardware problems.)

Secondly, you've said that you have some number of other boxes being
backed up which aren't giving this kind of trouble.  The configuration
of the backup software can be *very* different for the different boxes
to be backed up, and I'd want to check that I had at least one other
box with exactly the same BackupPC configuration apart obviously from
the IP address.  I'd also want to have the backups running by means of
an automatic schedule rather than starting them from the command line.

Thirdly, I'd want to check that the supporting utilities and operating
system are exactly the same on the box that's giving trouble and again
at least one that isn't.  Particularly things like the versions of the
utilities which transfer the data - e.g. rsync, ssh, inetd, whatever -
and that these things are also configured in the same way, for example
you might have rsync running as a daemon on one box but running from a
super-server like xinetd on another.  You might not even have the same
user running the remote utilities, so permissions might come into play.
All this might be a tall order; you might be reduced to running md5sum
on files and doing directory listings all over the place, which is one
reason it's last in this checklist.

When the checklist is exhausted we'll know a lot more.

--

73,
Ged.




Re: [BackupPC-users] BackupPC 4.4.0 on a RaspberryPi4 with rsync failure

2023-05-22 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 22 May 2023, Gregory P. Ennis wrote:


I recently upgraded my raspberrypi4 from a 32-bit OS to 64-bit, which included an upgrade
of BackupPC from 3 to 4.  It also moved the system from the 5 kernel to the
6 kernel.  BackupPC 3 on the 5 kernel worked perfectly, but so far BackupPC 4 fails to
work on the 6 kernel.

So far I have not been able to get rsync to work.  I keep getting the following
error :

Got fatal error during xfer (rsync error: unexplained error (code 255) at
io.c(226) [Receiver=3.1.3.0])

I get this error when backing up the raspberrypi itself as well as other remote
machines.

So far I have not been able to find a fix on the internet.  The setup I am using
for 4 is the same as 3.  I have checked the ssh keys and I am able to log on to
each machine with the user being backuppc and the remote user as root but I am
unable to get rsync to function without the above message and failure.


Do you have any particular reason for blaming the kernel?  I think it
unlikely that the kernel is the issue.  Can you boot an older kernel
to test if the theory holds water?  [Aside: I *never* let a package
management system delete an old kernel.]

There have been issues with differing versions of rsync, if you search
the list archives you'll find plenty of mention of that - you'll even
see the exact same error message which you posted.

I don't want to send you off on a wild goose chase by suggesting that
you build a version of rsync from source, but I've seen people do that
to fix this kind of problem.  It shouldn't be too onerous for you to
try a different version using the package management system, or you
might try tar instead of rsync (I've seen people do that too) but it
isn't really the right answer.  More just a test, to see if you're on
the right lines with your diagnosis.

FWIW my main BackupPC server is also a Pi.  It's a 3B+.  It backs up
more than a dozen other machines, some of them over WAN links, and I
have no problems with rsync, although out of an abundance of caution
it's still running a fairly old kernel.  The rsync is version 3.2.3:

8<--
$ rsync -V
rsync  version 3.2.3  protocol version 31
Copyright (C) 1996-2020 by Andrew Tridgell, Wayne Davison, and others.
Web site: https://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, hardlink-specials, symlinks, IPv6, atimes,
batchfiles, inplace, append, ACLs, xattrs, optional protect-args, iconv,
symtimes, prealloc, stop-at, no crtimes
Optimizations:
no SIMD, no asm, openssl-crypto
Checksum list:
xxh128 xxh3 xxh64 (xxhash) md5 md4 none
Compress list:
zstd lz4 zlibx zlib none

rsync comes with ABSOLUTELY NO [snip]
8<--


--

73,
Ged.




Re: [BackupPC-users] rsync_bpc on thecommand line "works" but same rsync_bpc executed via apache fails

2023-05-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 17 May 2023, Bob Tanner wrote:


...
backuppc-4.4.0-5ubuntu2
Ubuntu 22.04.2 LTS

I issue a "Start Full Backup" via the web UI and the backup errors out. Errors 
from the web UI state
...
testrs...@test.lab.real-time.com: Permission denied (publickey,password).   <== 
 permission denied
rsync_bpc: connection unexpectedly closed (0 bytes received so far) [Receiver]

If I cut-n-paste the "Running" command into the backuppc user's shell on the 
backuppc server, the command works as expected.

It feels like when the backup command is run via the web it does not have access 
to the backuppc user's ssh keys?


Wild guess:

In your configuration file(s), what do you have for

$Conf{BackupPCUser}

The configuration files are (er, sometimes:)

/etc/BackupPC/config.pl

and anything under

/etc/BackupPC/pc/

but they may be different in the Ubuntu packages, I don't know.

You might also want to check that you have

$Conf{BackupPCUserVerify} = 1;

which is theoretically a few lines below the line containing the
$Conf{BackupPCUser} definition.
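A quick way to check, sketched from memory - the config path below is
one common packaging default, so set CONFIG if yours lives elsewhere:

```shell
# Show the effective BackupPC user settings from the config file.
# /etc/BackupPC/config.pl is an assumption; some packages (Ubuntu's,
# for instance) may put it under /etc/backuppc/ instead.
CONFIG="${CONFIG:-/etc/BackupPC/config.pl}"
grep -E '^\$Conf\{BackupPCUser(Verify)?\}' "$CONFIG"
# Then compare against the user the web server actually runs the CGI as,
# e.g.:  ps -o user= -C apache2   (or -C httpd, depending on the distro)
```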

--

73,
Ged.




Re: [BackupPC-users] files removed

2023-05-06 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 5 May 2023, gregrwm wrote:


i'm curious ...


Me too. :)  I'm replying largely because nobody else did so far, not
because I think I have any brilliant insights or anything.  Sorry.


full backup started for directory /; updating partial #1


This is what I'm curious about.

You seem to be updating a failed backup.  Does that sound right?  If
so, I'd expect to see things that I wouldn't see when creating a fresh
backup.  I don't know what things because I don't remember that I ever
updated a failed backup.  I don't remember running BackupPC_dump from
the command line like you're doing either, I guess I must have in the
distant past but it would only ever have been out of curiosity.  Seems
to me to be a lot safer to let the backup system do what it's supposed
to do by itself.  There's a lot of time and date stuff going on that I
can't be bothered to think about.


do they mean the backed-up copies were removed?  because they no longer
match the "original" file?


I can't comment, I no longer keep the V3 code lying around.  The message
that's spooking you doesn't seem to be in the V4 code.  I had a quick look
for anything which might resemble the same thing but it was very quick and
I wasn't amazed when I didn't get any results.


if so, umm..!!  heck, isn't that one of the times when backups are sorely
wanted?  when the original flakes or fades and no longer has a valid copy
of the file?


I think you may be misinterpreting the message, or jumping to unwarranted
conclusions, or possibly both.  In any case I guess if you only have
one aged backup something like that might be thought of as an issue,
but if you have the two dozen or so I keep it's less of a worry. :)

Have you actually verified that something you're fond of has been lost?


maybe that's a bug fixed in backuppc4?


I think it's too early to talk about bugs, or fixing them.  We need to
establish what's going on in your system first.  Have you actually got
at least one of what you consider to be a *complete* full backup?  And
have you also got a bunch of completed incremental backups?  Have you
any reason to believe that the 'complete' backups are not complete?

My take on it is that you let the backup system run for a while, and
then you exercise it.  Gratuitously recover a file or three at random.
When you need to copy a file, copy it from the backup rather than from
the running filesystem, just for the hell of it, and check it against
the original.  Whenever I've done this I've never had a problem.

Anyway after all that, version 4.0.0alpha of BackupPC was released ten
years ago next month.  V4 offers advantages over V3.  Just sayin'...

--

73,
Ged.




Re: [BackupPC-users] BackupPC (tar) has started failing

2023-04-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 5 Apr 2023, Mark Murawski wrote:


...
$Conf{BackupFilesOnly} = {
  '/' => [
'bin',
'apps',
'opt',
'lib32',
'intellasoft',
'etc',
'manageServ',
'boot',
'var',
'lib',
'lib64',
'usr',
'home',
'sbin'
  ]
};
$Conf{TarShareName} = [
  '/'
];
...


Is the above what you really want?

What happens if you add '--ignore-failed-read' to the tar options?

Are you writing backups to the same filesystem that you're backing up?

--

73,
Ged.




Re: [BackupPC-users] Files in backup with wrong uid/gid

2023-03-14 Thread G.W. Haywood via BackupPC-users

Hi there,

Sorry, wrote this last week and didn't manage to hit 'send'. :/

On Fri, 10 Mar 2023, David Heap wrote:


I'm seeing an issue where the uid/gid of two files inside the backup are
incorrectly root instead of the correct owner (openldap in this case).

Files on the server:

-rw--- 1 openldap openldap 10485759 Apr  8  2022 log.04
-rw--- 1 openldap openldap 10485759 Sep 21 14:16 log.05
-rw--- 1 openldap openldap 10485759 Feb 22 10:50 log.06
-rw--- 1 openldap openldap 8192 Apr 19  2022 memberUid.bdb

Files in the backup (output of BackupPC_ls):


-rw---   105/107   10485759 2022-04-08 08:38:34  /srv/ldap//log.04 (d363e1e6aa3efc35e027e45f2279aa3f)
-rw---   0/0   10485759 2022-09-21 14:16:55  /srv/ldap//log.05 (c4f72b8befc18f3ba2dce596572529cf)
-rw---   0/0   10485759 2023-02-22 10:50:42  /srv/ldap//log.06 (c42c092af72eb67324424c609be769ba)
-rw---   105/107   8192 2022-04-19 14:03:41  /srv/ldap//memberUid.bdb (73a58e4133305486ea3db35b05431534)


All the other files in the folder have the correct owner/group. The
file ownership isn't changing on the server at the time of backup.

I've tried removing these files from all the backups and running a
full nightly to (hopefully) clean out something that's cached wrong
in the pool, but they came back again the next night with the same
issue. Any ideas?


My guess is that you're doing something funky with filesystems, for
example the BackupPC data is a network mount.  Can you give us a bit
more information?

And do you have a compelling reason to back up ten megabyte log files
that are a year old? :)
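If the answer is no, an exclusion along these lines would keep them out
of future backups - the share name and pattern are guesses based on the
BackupPC_ls output above, so adjust them to the real share layout:

```perl
# Guessed from the /srv/ldap listing above; adjust share name and
# patterns to suit.  Database environment log files are recreated by
# the database and are rarely worth backing up.
$Conf{BackupFilesExclude} = {
    '/srv/ldap' => ['/log.*'],
};
```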

--

73,
Ged.




Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 24 Feb 2023, Paul Fox wrote:


...
Nothing makes me nervous like having someone tell me that my backup
strategy, which has been rock solid for almost 20 years, might now
have problems. ...


In this thread I don't think I've seen anything to cause that kind of
concern for anyone who amongst other things checks now and again that
random backups can be recovered correctly.  I do that - most recently
this morning - and I've never been disappointed.

--

73,
Ged.




Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-21 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 21 Feb 2023, David Raison wrote:



So since reducing the IncrKeepCnt for some hosts, I have actually lost
another 6Gb in disk storage over the past night.
... results seem to indicate that I have no orphaned files:
...
I have made sure that my $Conf{RsyncArgs} are the defaults for v4 and
include the `--delete` and `--delete-excluded` flags.

So I'm still at a loss.


I wonder if installing the (read-only) FUSE filesystem 'backuppcfs'
would help you to diagnose this further.  As you seem to be having
trouble with a small number of large files it shouldn't be difficult
to isolate the problem areas and maybe post some details here.

After version 4 of BackupPC was released there was a corresponding
update to backuppcfs.  If you do use it make sure you get the right
one.  You should be able to find it in the list archives.

--

73,
Ged.




Re: [BackupPC-users] cpool not shrinking on deleting hosts or changing schedule (Github #427)

2023-02-20 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 20 Feb 2023, David Raison wrote:


I'd like to pull your attention to this ticket on Github
(https://github.com/backuppc/backuppc/issues/427) ...
... pool/cpool isn't going down in size anymore with version 4.x after
hosts or old backups are removed.


I don't know if this is relevant to your situation or not, but issue 427
refers to

https://github.com/backuppc/backuppc/issues/301

which in turn refers to

https://github.com/backuppc/backuppc/issues/135

and in that thread Mr. Barratt asks what transfer method is used, the
reason for the question being that incrementals will not recognize
deleted or renamed files if the transfer method is SMB or tar.  These
will be recognized if the transfer method is rsync.  Full backups also
handle deleted files.

Perhaps incremental backups are the issue?  Maybe you could test that
by deleting them.

Which transfer method are you using?

--

73,
Ged.




Re: [BackupPC-users] Storing Identical Files?

2023-02-12 Thread G.W. Haywood via BackupPC-users

Hi there,

Today I realized I hadn't sent this.  It may have been overtaken by
events but here it is anyway...

On Sat, 11 Feb 2023, Christian Völker wrote:

I have two clients which have a large share. These two (Debian) clients 
sync this share on a daily basis through rsync (through a third clientC,
but this should not make a difference). On clientA there is a cron job 
doing rsync to clientC and on clientB there is a cron job doing rsync 
from clientC. So in the end all three hosts have identical data. ...


BackupPC itself is only backing up host clientA so far (for months
now).  So the data is stored in /var/lib/backuppc.


Now I added the clientB share to BackupPC ... usage of the pool
increased by approximately the size of the share ...


You have missed some important information.

1. May we see your BackupPC configuration files?

2. What is 'large' in 'large share'?  Obviously adding an extra client
to the backup will produce a requirement for storage of a large amount
of metadata.  Perhaps that's what you're seeing, although without more
information about data volume it's difficult to guess what's going on.

3. Do the files in the shares change?  I presume that they do or there
would be no need to sync them, which raises the next two questions:

4. When do the files change? and

5. When do the backups take place?

Obviously if large numbers of the files change between backups and the
first backup takes place before the changes while the second backup
takes place after it, then you cannot expect deduplication to help.


*  Is there dupe detection on BackupPC?


Yes.  We routinely back up just under 20 Terabytes of data from 12
hosts.  After pooling and compression the pool size is 640 Gigabytes.


* If so, why does my pool size not decrease after a while?

* If by default it has to decrease, is there an explanation why it
does not on my host?


I do not know the answers to these questions.  More information is needed.

Faced with this kind of situation I would investigate, in order to

(1) justify my trust in the numbers on which I base any conclusions and

(2) verify (if possible for a few, hopefully large, sample duplicated
files) that the physical storage location for the duplicated files on
the storage medium was the same - thus demonstrating deduplication.
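A minimal sketch of check (2), assuming a v3-style pool where deduplicated
files are hardlinks: identical stored files then share one inode, so 'stat'
reports the same inode number and a link count greater than 1.  (A v4 pool
deduplicates by content digest rather than hardlinks, so this particular
check only applies to v3-style storage.)  The demo below uses a scratch
directory rather than a real pool:

```shell
# Illustrate hardlink-based dedup: two names, one inode, link count 2.
tmp=$(mktemp -d)
echo "same content" > "$tmp/a"
ln "$tmp/a" "$tmp/b"                   # what v3 pooling does with a duplicate
stat -c '%h %i %n' "$tmp/a" "$tmp/b"   # same inode number, link count 2 for both
rm -r "$tmp"
```

On a real v3 pool you would run the 'stat' line against the two copies of a
sample file inside the pc/ trees.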

--

73,
Ged.




Re: [BackupPC-users] how to handle missing pool files reported in server log each night

2023-02-10 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 10 Feb 2023, Samual Flossie wrote:


After a partial disk problem on the backuppc server storage area, I am
seeing the following in the log:

... BackupPC_refCountUpdate: missing pool file ...
...
...
1. How can I get to point that resolves these missing pool files? no
more messages about missing files would be a nice result.


There have been issues in the past, some useful information here:

https://www.mail-archive.com/backuppc-users@lists.sourceforge.net/msg33090.html


2. How can I get a list of files that are potentially forever lost
because of the missing pool file(s)?  need to know the damage.


No files in a backup are potentially lost forever just because you
lose the backup.  You just run the backup again on the original files.
If they are potentially lost forever because for example they were the
only copies, then it wasn't a backup and you need to take a fresh look
at your backup strategy.  For anything important, I recommend that you
keep at least two backups in different locations.

--

73,
Ged.




Re: [BackupPC-users] Some hosts ignored on wake up

2023-01-25 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 25 Jan 2023, Giannis Economou wrote:


... hosts that are ignored mention that they have been last checked at
13:00 (no backup to be done at 13:00 since 13:00 is only for nightly
runs), but then they seem completely ignored at several wakeups after
midnight (where backup should be made).
...
...
Not the same hosts are ignored every day.


Does a manual backup for a host which has been ignored complete, both
*reliably* and successfully?  This is just to make sure that there is
no communications problem - although if there were, I'd have expected
the logs to warn about it.

Everything else I suggest here should be considered shots in the dark. :/


...
MaxPendingCmds: 120


I've never seen MaxPendingCmds cause a problem but I wonder if this
has a bearing on the issue because you have 85 hosts to back up.  I'd
try increasing this to a much larger value.  I don't know if there's
supposed to be a log message to warn if the limit is exceeded.


...
BlackoutBadPingLimit: 3
BlackoutGoodCnt: 5
BlackoutPeriods:
hourBegin: 8.5
hourEnd: 23.5
weekDays: 0, 1, 2, 3, 4, 5, 6


Again I've never seen problems with blackout periods but I'd try
removing them altogether to see what happened.

--

73,
Ged.




[BackupPC-users] Slow web page loading of admin pages

2023-01-18 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 18 Jan 2023, Kenneth Porter wrote:

Two PCs at the office that are much newer than mine are very slow at 
loading the web pages. ...


My first thought is name resolution.

--

73,
Ged.




Re: [BackupPC-users] Just done 3 to 4 migration - couple of UI related problems

2023-01-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 17 Jan 2023, Laurence Hurst wrote:


Yes, I know I'm very late to the v4 party...

Just upgraded my version 3 to 4 (3 days into the process) and all seems
to be working again but I cannot find 2 things I used regularly in the
v3 UI in the v4 one:


Quite a few years ago when I upgraded 3 to 4 I had similar problems.


1. I cannot find the XferLogs in the UI, which I could in v3?


On the Host Summary page

.../BackupPC_Admin?action=summary

click on a host.  That takes you to the host summary page

.../BackupPC_Admin?host=

There are three tables on that page, you may need to scroll vertically
to reach them.  (I've always thought it would be better if there were
only one table on the page, and you had to scroll horizontally to see
the columns you probably only need infrequently, but I digress.)  The
first table is "Backup Summary", the second is "Xfer Error Summary",
the third is "File Size/Count Reuse Summary".

The second table is the one you want.  Here's a cut-n-paste of the
first two lines of that table in one of my backup pages:

Backup#   Type   View              #Xfer errs   #bad files   #bad share   #tar errs
1087      incr   XferLOG, Errors   0            0            0            0


The third column contains two links, for "XferLOG" and "Errors".

In my installation if I click the "XferLOG" link I get to

.../BackupPC_Admin?action=view&type=XferLOG&num=1087&host=

and the log is rendered on the screen.  The first line on the screen is

"File /var/lib/BackupPC/pc//XferLOG.1087.z"

[Aside] If I log in as the 'backuppc' user and give the command

$ /usr/local/BackupPC/bin/BackupPC_zcat /var/lib/BackupPC/pc//XferLOG.1087.z 2>&1 | head
TOPDIR=[]
INSTALLDIR=[]
CONFDIR=   []
USING FHS:
TOPDIR=[/var/lib/BackupPC]
INSTALLDIR=[/usr/local/BackupPC]
CONFDIR=   [/etc/BackupPC]
Attempting to read configuration
$host=[]
XferLOG file /var/lib/BackupPC/pc//XferLOG.1087.z created 2023-01-17 01:02:38

I see what I expect to see.  It's a long time since I've looked at any
of those, longer than it should have been, so thank you for the prompt.
[/Aside]


2. I also don't seem to have any storage graphs in the UI - these were
working in v3, so I'm not sure what I've done to lose/break these?


IIRC I had the same problem.  The list didn't help much so I just blew
away the V4 installation and reinstalled from scratch.  Got graphs now.


3. I cannot edit the configurations through the v4 UI, which I could
through v3 - again, I'm not sure what I've broken here?


Sorry, can't help.  Wherever possible I avoid editing a configuration
through a Web UI.  My single experience of using the UI for BackupPC
was that the V3 UI deleted all the comments in the file and I had
to restore from a backup... since then I've never used the UI to edit.


I'm sure it's something simple I've overlooked but BackupPC has a _lot_
of options, so if anyone knows the answer off the top of their head I
would really appreciate the timesaver having already spent 3 days buried
in BackupPC!


Been there, got the t-shirt.  I feel your pain.

--

73,
Ged.




Re: [BackupPC-users] backup virtual machine vmware

2023-01-13 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 13 Jan 2023, backuppc-users-requ...@lists.sourceforge.net wrote:


Il giorno gio 12 gen 2023 alle ore 14:32 G.W. Haywood ha scritto:
> On Thu, 12 Jan 2023, lu lu wrote:
> > > On Wed, 11 Jan 2023, G.W. Haywood wrote:
> > > > On Wed, 11 Jan 2023, lu lu wrote:
> > >
> > > > ... is it possible with backuppc to backup a complete virtual machine?
> > > > ...
> > >
> > > Yes, but it's pointless.  You may as well just make a copy.
> > >
> > > There are better ways to do what you want, for example you might look
> > > into snapshotting filesystems.
> > >
> > ...
>
> If you backup an entire VM as a file it will always be different (from
> minute to minute, let alone from backup to backup).  ...
>
> Snapshotting filesystems 'freeze' an image of the filesystem at some
> particular time but they permit that filesystem to continue working ...
>
> Why would you want to back up a complete VM anyway?  It's usually much
> better to have a template for the VM and then back up for example just
> the user data, which will usually be tiny by comparison with a full VM.

I have various VMs and from those servers I only backup the data and so it
will remain for all the reasons you explained to me.


:)


of a single VM, which is a windows10, there is software installed that acts
as a switchboard and has various installations to make it work including
java and various plug-ins which are not backed up so in case of machine
crash to install new the whole switchboard is crazy ...


Agreed. :)


... if I copied the whole VM in case of crash I recover everything
including the operating system


Do you experience a lot of crashes?  Backups are not meant to be an
alternative to running a reliable system.  If that is what you are
trying to do then perhaps you could configure BackupPC to make only
one copy of its backup - so that the deduplication issue is moot - but
you would still have to deal with the fact that the operating system
will be changing data on the VM faster than BackupPC can copy it so I
can't say that I'd recommend this approach.

It still feels to me like you ought to be investigating snapshots of
some kind.  The VM software can probably do that.  You could back up
user data using BackupPC in the way BackupPC is intended to be used.
Now if the VM does crash you could revert to a previously running VM
snapshot and just recover the changed user data.  It's a two-staged
recovery approach, but it would probably be much faster (and yield a
more up-to-date result) than taking and recovering full VM copies.

https://endoflife.date/windows

Which version of Windows 10?

--

73,
Ged.




Re: [BackupPC-users] backup virtual machine vmware

2023-01-12 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 12 Jan 2023, lu lu wrote:

> On Wed, 11 Jan 2023, G.W. Haywood wrote:
> > On Wed, 11 Jan 2023, lu lu wrote:
>
> > ... is it possible with backuppc to backup a complete virtual machine?
> > ...
>
> Yes, but it's pointless.  You may as well just make a copy.
>
> There are better ways to do what you want, for example you might look
> into snapshotting filesystems.
>
...
Can you explain better what you mean?


Read the documentation.

One of the most important features of BackupPC is its deduplication.

That means even if it keeps hundreds of backups, it need only keep one
copy of any particular file if the file hasn't changed from one backup
to the next.

If you backup an entire VM as a file it will always be different (from
minute to minute, let alone from backup to backup).  Not only will the
BackupPC system not be able to deduplicate the file, it probably won't
be able even to complete a copy before the file has changed - so files
it tries to copy might not actually be usable.  You might need to stop
the VM while the backup is taking place in order to get an internally
consistent, usable result.

Snapshotting filesystems 'freeze' an image of the filesystem at some
particular time but they permit that filesystem to continue working
normally.  You can then copy any frozen files at your leisure, without
resorting to things like stopping your VMs during backups.
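As a concrete sketch of the snapshot idea, assuming the VM images live on an
LVM volume (the volume group, logical volume, mount point, and snapshot size
below are all hypothetical):

```shell
# Freeze an LVM volume holding VM images, back up the frozen copy, discard it.
# /dev/vg0/vmdata, /mnt/vm-snap and the 10G copy-on-write size are assumptions.
lvcreate --size 10G --snapshot --name vmdata-snap /dev/vg0/vmdata
mount -o ro /dev/vg0/vmdata-snap /mnt/vm-snap
# ... point the backup (or a plain copy) at /mnt/vm-snap at your leisure ...
umount /mnt/vm-snap
lvremove -f /dev/vg0/vmdata-snap
```

Filesystems such as ZFS and btrfs offer equivalent snapshot commands, and
most VM software can snapshot a guest directly.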

Backing up large databases for example may need similar consideration.

Why would you want to back up a complete VM anyway?  It's usually much
better to have a template for the VM and then back up for example just
the user data, which will usually be tiny by comparison with a full VM.

--

73,
Ged.




Re: [BackupPC-users] backup virtual machine vmware

2023-01-11 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 11 Jan 2023, lu lu wrote:


... is it possible with backuppc to backup a complete virtual machine?
...


Yes, but it's pointless.  You may as well just make a copy.

There are better ways to do what you want, for example you might look
into snapshotting filesystems.

--

73,
Ged.




Re: [BackupPC-users] Missing backup files

2022-11-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 5 Nov 2022, Adam Goryachev wrote:


I don't understand why you would NOT want backuppc to have at least
read access to ALL data, including data only accessible to root.


Because I wouldn't want BackupPC to be a vector for compromise.  The
whole point of giving read access only to root is so that *only* root
has access. :)  It's a security posture.


I assume you would not be suggesting that you run a separate backup
system for each user


Correct.


so why would you want to either:

1) Not backup root data


??!!


2) Run a separate backup solution just for root data

I guess this will go back to how you setup your data security etc,


Exactly.  The BackupPC system is rather complex.  As commonly used it
involves, for example, a Web server and a browser.  These things are
notoriously insecure.  Browsers are amongst the most complex bits of
software on the planet and amongst the most frequently compromised.
You could even have pathological cases like a well-secured Linux box
running BackupPC accessed by Internet Explorer running under XP...

Rather than let a random browser have access to things which properly
only root should be able to access, if I were being cautious I'd back
up the private things separately.  It might be as simple as creating a
cron job or something like that to zip (with a passphrase known only
to me) the root-only files to an archive, and then copy that archive
to a place accessible to the backup run by BackupPC.  Clearly there's
a bit of work to be done there, keeping things current.
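A sketch of such a cron job, using tar and openssl rather than the zip
mentioned above; every path here is an assumption, and the passphrase file
must be readable only by root:

```shell
#!/bin/sh
# Hypothetical root cron job: pack root-only files, encrypt them with a
# passphrase known only to root, and leave the archive where the backuppc
# user can read it for the normal BackupPC run.
umask 077
tar -C / -cz etc/ssl/private \
  | openssl enc -aes-256-cbc -pbkdf2 -pass file:/root/.backup-pass \
  > /var/backups/private.tar.gz.enc
chown root:backuppc /var/backups/private.tar.gz.enc
chmod 640 /var/backups/private.tar.gz.enc
```

Decryption reverses the pipeline with 'openssl enc -d ... | tar -xz'.
(The -pbkdf2 option needs OpenSSL 1.1.1 or later.)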


but regardless of what you do, I would strongly suggest you ensure
ALL data is backed up...


Well at least all important data, agreed, obviously. :)

--

73,
Ged.




Re: [BackupPC-users] Missing backup files

2022-11-04 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 4 Nov 2022, Mark Murawski wrote:


...
This is the most recently finished full backup [51] for /etc/ssl/private
...
There's no files in there!! Just directories!? Everything is missing

And it looks like the *entire* backup system looks like this.  I didn't
even know that my backups are completely broken and missing all files.


Trying to cut through all the emotion, if I'm reading your posts
correctly your backup system has, since the time when you started
using BackupPC, performed 51 full backups plus I'd guess quite a lot
more incrementals.  Obviously it all depends on your configuration,
but you didn't show us any of that.

Guessing that your full backups aren't being run more often than about
daily - and a lot of mine only run monthly - it seems to me likely that

(1) your backup system has been running for at least several weeks,

and

(2) this is the first time you've tested a backup.

If my deductions are correct, I'd be inclined to say you should take
your backups more seriously.  And please don't be so defensive when
people try to help, even if they're perhaps understandably impatient
with posters who haven't done their homework.  Humility works a lot
better than escalation.  Bear in mind that some people will probably
just ignore argumentative posters, and you could be cutting yourself
off from a great deal of valuable experience that way.

Incidentally I'm not sure that I'd want the 'backuppc' user to be able
to read private data normally only readable by root, but it's your call
and it might even be that you have it set up that way - I don't know.
FTAOD I'm just trying to help.

--

73,
Ged.




Re: [BackupPC-users] Tar causing problems with removable drive on localhost

2022-10-20 Thread G.W. Haywood via BackupPC-users

Hello again,

On Thu, 20 Oct 2022, Adam Hardy wrote:


I scanned the problem USB drive with smartctl and with
gnome-utilities and it logged nothing.


Be careful with smartctl if you use it for anything other than reading
information from the drive.  Heed the warnings in the 'man' page, and
before you do anything like setting or changing drive characteristics
search online for reports from people who've done similar things with
your particular drive model.  One wrong move could be serious.  Maybe
it won't be necessary.  I've used it to stop drives from spinning down
in problem cases but only when the timeouts interacted very badly with
the Linux kernel timeouts for flushing sectors from RAM to disc.  I've
been prepared to take the risk of damaging drives, but only because my
alternative would have been to put them in the recycling bin. :/

Don't forget I'm still guessing. :)


I checked in the syslog and I can't see any other log files that it
might be using.


You can spend hours trawling through logs, but mostly I'd search in
/var/log/(daemon.log|debug|kern.log|messages|syslog) - not necessarily
in that order.


How would I run something automatically before backup? Or do you
mean manually?


I meant automatically.  I was thinking of one of these:

$Conf{DumpPreUserCmd}
$Conf{DumpPostUserCmd}
$Conf{DumpPreShareCmd}
$Conf{DumpPostShareCmd}
$Conf{RestorePreUserCmd}
$Conf{RestorePostUserCmd}
$Conf{ArchivePreUserCmd}
$Conf{ArchivePostUserCmd}

Check the documentation for "Optional commands to run before and after
dumps and restores, and also before and after each share of a dump."
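As a sketch, the first of those keys could point at a wake-up script in
config.pl (the script path here is an assumption, not part of any default
installation):

```perl
# Hypothetical config.pl fragment: run a script before each dump so a
# spun-down drive is awake before tar starts.
$Conf{DumpPreUserCmd}     = '/usr/local/bin/wake-usb-drive.sh';
$Conf{UserCmdCheckStatus} = 1;   # if set, a non-zero exit aborts the dump
```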

--

73,
Ged.




Re: [BackupPC-users] Tar causing problems with removable drive on localhost

2022-10-19 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 19 Oct 2022, Adam Hardy wrote:


I keep getting a problem from tar when backuppc goes to back up the localhost, 
which includes a removable drive in the backup targets:

Running: /usr/bin/sudo /bin/tar -c -v -f - -C /media/adam/Videos-2TB/Videos 
--totals --newer=2022-10-11\ 13:11:32 .
incr backup started back to 2022-10-11 13:11:32 (backup #1343) for directory 
/media/adam/Videos-2TB/Videos
Xfer PIDs are now 22832,22831
/bin/tar: /media/adam/Videos-2TB/Videos: Cannot open: No such file or directory
/bin/tar: Error is not recoverable: exiting now
readOutput: sysread returns 0 and got EOF
Tar exited with error 512 () status

50% of the time, the backup works. When I run the command above as
the backuppc user, it never fails. I'm not sure what to do now. The
removable drive is always connected. I can only assume it has
something to do with Ubuntu managing the drive somehow so that it is
connected when any GUI process tries to access it, but I'm not sure
why it shouldn't fail for me when I run it on the command line.


Don't assume what the problem is and try to solve it.  First find it.
There may be something interesting in the system logs (in /var/log/).

Your subject line says "tar causing problems" but I feel sure it won't
be tar which is causing the problems.  I also doubt that it's anything
as complicated as Ubuntu doing things for GUIs.

Would I be right in guessing that it's a USB-connected drive?  They've
almost always caused problems for me.  USB disconnects happen all the
time, and for most USB devices a reconnect is fine.  For mass storage
devices unfortunately that's often not the case.

Is the drive spinning down when not busy?  That's the default for many
USB devices I've seen and that often causes problems too - but you can
prevent it from doing that easily enough.  Maybe you'd have more luck
if you run something just before the backup takes place which accesses
the drive, to spin it up, if that's a problem.
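One way to sketch that "something", assuming the mount point from the log
excerpt above (BackupPC's $Conf{DumpPreUserCmd} is the usual place to hook
such a script in):

```shell
#!/bin/sh
# Hypothetical pre-backup hook: read a little data from the share so a
# spun-down USB drive has woken up before tar runs.  The mount point is
# assumed from the log excerpt above; adjust for your system.
MOUNT=/media/adam/Videos-2TB
head -c 65536 "$MOUNT"/Videos/* >/dev/null 2>&1 || true
sleep 5    # give the drive a few seconds to spin up
exit 0     # always succeed so the dump proceeds either way
```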

Are you backing up 2TB of videos?  I'd probably exclude them from the
regular backups.

Well I've done enough guessing for one post. :)

--

73,
Ged.




Re: [BackupPC-users] errors on network-mounted pool filesystem

2022-09-03 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 3 Sep 2022, Sandro Bordacchini wrote:


...
I needed to switch to a cloud solution, though.
...


I think that's a very bad idea.

--

73,
Ged.




Re: [BackupPC-users] BackupPC failed after upgrading client to Debian 11

2022-07-25 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 25 Jul 2022, Iosif Fettich wrote:


... you might fight a TCP level failure between some intermediate routers.


While I acknowledge that almost anything is possible, it's difficult
to see how a problem like that would suddenly present itself simply
because of an upgrade to Debian 11.

I've only glanced through this thread but it seems like something
that's most likely to be resolved by increasing the amount of debug
information that's recorded until the explanation appears in a log.

--

73,
Ged.




Re: [BackupPC-users] Serious error: last backup directory doesn't exist!!! Need to remove back to last filled backup

2022-06-14 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 14 Jun 2022, gregrwm wrote:


... interrupted BackupPC_dump.  on the next invocation i got:
2022-06-12 21:35:02 Serious error: last backup
/var/lib/backuppc/pc/avocado/32 directory doesn't exist!!!  Need to remove
back to last filled backup
2022-06-12 21:35:02 Deleting backup 14
2022-06-12 21:35:08 Deleting backup 15
2022-06-12 21:35:14 Deleting backup 16
2022-06-12 21:35:20 Deleting backup 17
2022-06-12 21:35:27 Deleting backup 18
2022-06-12 21:35:34 Deleting backup 19
2022-06-12 21:35:41 Deleting backup 22
2022-06-12 21:35:47 Deleting backup 30
2022-06-12 21:36:00 Deleting backup 32

wow.  not too robust!  doesn't that seem like an inordinate consequence?


Mr. Kosowsky didn't specifically address the robustness issue so I'll
chime in here about that.  No, it doesn't seem inordinate if you think
about how BackupPC manages backups.  The non-filled backups are based
on a filled backup.  If you don't have that, then backups which are
based on it are useless so there's no point in keeping them.  I think
the moral of the story is that if you care about your backups, don't
do what you did (nor anything like it) without taking precautions.

Having said that I don't generally mess with BackupPC (whether it's in
the middle of doing something or not).  After a couple of false starts
(which must have been at least partly my fault, while I was migrating
from V3 to V4) once I got version 4 settled in it has never put a foot
wrong backing up dozens of machines, which aren't even all in the same
country, with tens of terabytes of data.  Occasionally I recover files
and directories from the backups; it's often much easier than fetching
them from the backed up machines directly.  I've found that doing this
makes me more confident of BackupPC.  That then makes it more likely
that when I need to fetch more files I'll grab backups rather than go
to the originals.  It gives me a warm fuzzy feeling I suppose, to know
the recovered backed up data is exactly what I expect it to be, so I'm
that much more confident that if I needed it because I've managed to
lose the original then it would be there for me.

I have no axe to grind.  I'm not in any way connected with BackupPC
development nor with the developers, I'm just a very satisfied user
and I thought that a message which could be seen as critical needed
something to balance it.  Of course there will be faults to be fixed
in any even moderately complex software.  BackupPC is probably a bit
more than just moderately complex, but I've found it very robust if
treated with reasonable care.  I check the hosts page once a day to
see that the backups are all less than a day old.  That's about it.

If BackupPC hadn't existed I think I'd have had to create it myself.

--

73,
Ged.




Re: [BackupPC-users] How to do a initial seed on pool

2022-06-03 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 3 Jun 2022, Bruno Rogerio Fernandes wrote:


... about 15TB of data ...  planning to migrate to backuppc V4 ...
backup is done every day through an Internet connection and would be
pretty dangerous waiting for many weeks to do a first full backup
(wait for transfers to complete through Internet - about 100Mbps
client uplink speed).


Without more information about your data it's hard to be sure, but I'm
not convinced you've yet made the case for employing BackupPC.  In any
case I make it only about two weeks to transfer your 15TBytes of data
at 100MBit/s, even if it's uncompressed.
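That estimate checks out as quick back-of-envelope arithmetic (numbers from
the post above, ignoring protocol overhead):

```shell
# 15 TB pushed through a 100 Mbit/s uplink, ignoring protocol overhead.
awk 'BEGIN {
    bytes = 15 * 10^12           # 15 TB
    bps   = 100 * 10^6           # 100 Mbit/s
    days  = bytes * 8 / bps / 86400
    printf "%.1f days\n", days   # prints 13.9 days
}'
```

Real-world rsync overhead and link contention would stretch that somewhat,
hence "about two weeks".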

Is there nothing useful to be gained by compressing the data?


... I don't have enough space to accommodate two backups ...


Buy more space.   Compared with most data, it's very cheap.


So, I'm wondering, is it ok to seed the backuppc pool manually? ...
If I ... could ... backuppc won't transfer this file over Internet,
so solving my issues.
...
Are there any other things that I'm missing? Is it ok to do that?


Don't do it.  You will almost certainly create issues which few here
will be able to help you with.  It could easily then take longer than
two weeks to sort out the resulting mess.  Why not just start the new
backup now?

If you value your data more than the cost of more storage, you have no
excuse for not buying more storage.  It sounds to me like you already
don't have enough, because most people who are serious about backups
will have a minimum of three copies of the data - often kept at three
different locations on the planet.  Until a few months ago I wasn't so
worried about the planet, but it's probably worth bearing in mind that
it's the only one we have.

--

73,
Ged.




Re: [BackupPC-users] are 2 backuppc hosts safe from each other's activity?

2022-05-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 15 May 2022, gregrwm wrote:


... is my assumption correct?  i assume that the backuppc hosts are
where it is saved which backups were done last, and when.


Yes, your assumption is correct.  Consider for example what happens if
the client to be backed up is switched off.  But assumptions generally
are not the best base on which to build, and that's especially true if
what you're building is your backup system.  Have you tried to recover
a backup, just to make sure that you can?


i have brought up backuppc4 on a new kvm guest, and still also have
backuppc3 on another kvm guest.


Consider also what can happen if both guests live on the same hardware.

--

73,
Ged.




Re: [BackupPC-users] rsync/File::RsyncP conflict

2022-04-13 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 13 Apr 2022, Dave Sherohman wrote:

On 4/11/22 18:22, G.W. Haywood via BackupPC-users wrote:


Looking at

https://metacpan.org/dist/File-RsyncP/changes

it seems that there is only one later version (0.76) so your options
seem to be somewhat limited. :)


Is it even still being used?  My BPC server is running 4.4.0, installed
from the debian 11/bullseye package, and `locate RsyncP` reports no
matching files exist on the system.


Version 4 of BackupPC uses rsync_bpc instead of File::RsyncP, see

http://backuppc.sourceforge.net/BackupPC-4.2.1.html#BackupPC-4.0

--

73,
Ged.




Re: [BackupPC-users] rsync/File::RsyncP conflict

2022-04-11 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 11 Apr 2022, Matthew Pounsett wrote:


I seem to have encountered the same problem described in this thread
which ends here:


To summarize what I understand from that thread, there's a bug in
File::RsyncP prior to some unidentified version that is exposed by a
change in rsync as of 3.2.3.

I've started running into the above-described issue since I upgraded
one of my hosts to Debian Bullseye, which uses rsync 3.2.3.  That host
hasn't completed a backup since ..
...
My backuppc host is still Debian Buster which has File::RsyncP 0.74.

The thread above mentions that File::RsyncP is fixed in a recent
version, but doesn't indicate what version that is.  I'm trying to
identify possible fixes for my issue here... and I'm wondering if
simply upgrading my backup server to Bullseye is going to fix my
problem, or if I should be grabbing libfile-rsyncp-perl and/or
backuppc from backports, or what...

Knowing which version of File::RsyncP fixes the issue would be a big
help.  Does anyone know that with certainty?


Looking at

https://metacpan.org/dist/File-RsyncP/changes

it seems that there is only one later version (0.76) so your options
seem to be somewhat limited. :)

The changelog mentions handling stderr output from rsync 3.2.3 but I
haven't really investigated.  On the File-RsyncP page there's a link

http://perlrsync.sourceforge.net/

but it didn't seem very helpful (talks about version 0.68, Nov. 2006)
so I don't know if the latest changes address your issue or not.

--

73,
Ged.




Re: [BackupPC-users] Needed patch for BackupPC::Storage::Text.pm

2022-04-09 Thread G.W. Haywood via BackupPC-users

Hi Iosif,

On Sat, 9 Apr 2022, Iosif Fettich wrote:


 just to let you know:

 BackupPC::Storage::Text.pm needs a patch.

 in sub ConfigFileMerge, the line reading

 my $d = Data::Dumper->new([$newConf->{$var}], [*value]);

 should ...


Which one?


I'm not sure what you're asking...


$ grep -n 'my $d = Data::Dumper->new' /usr/local/BackupPC/lib/BackupPC/Storage/Text.pm
379:my $d = Data::Dumper->new([$newConf->{$var}], [*value]);
413:my $d = Data::Dumper->new([$newConf->{$var}], [*value]);


 I've tried to get in touch with Craig Barratt (the BackupPC author and
 maintainer, afaik), but haven't succeeded yet.


I think you have, now. :)


Are you saying that Craig is following the list? ...


Yes.

--

73,
Ged.




Re: [BackupPC-users] Needed patch for BackupPC::Storage::Text.pm

2022-04-09 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 9 Apr 2022, Iosif Fettich wrote:


just to let you know:

BackupPC::Storage::Text.pm needs a patch.

in sub ConfigFileMerge, the line reading

my $d = Data::Dumper->new([$newConf->{$var}], [*value]);

should ...


Which one?


I've tried to get in touch with Craig Barratt (the BackupPC author and
maintainer, afaik), but haven't succeeded yet.


I think you have, now. :)

--

73,
Ged.




Re: [BackupPC-users] Rotating multiple drives on one mount point

2022-03-26 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 26 Mar 2022, Kenneth Porter wrote:


I'm adding a second external drive to my rotation so I can keep one
off-site in case of disaster.


Without looking into it carefully I couldn't be sure that swapping the
storage device from under BackupPC wouldn't actually cause more issues
than the one which you're trying to address.  At the very least I think
you'd want to look at the 'nightly' processes (these *may* take several
nightly runs to complete) to make sure you haven't clobbered something.


How do people handle this? What do your systemd mount/automount unit files
look like? Do you use a single drive label so a single systemd unit works
to mount any backup drive to the same mount point?


Generally I run completely separate instances of BackupPC on different
computers housed in different buildings which are linked by fast LAN.
For some systems the links are slow VPNs, but it's the same principle.
It's sort of fire and forget, except there's a Web interface that you
can look at to get a warm fuzzy feeling that things are backing up OK.

--

73,
Ged.




Re: [BackupPC-users] HELP - FW: BackupPC administrative attention needed

2022-03-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 24 Mar 2022, Edward Cotter wrote:


New user to your mailing list and completely new to the product.


Hello and welcome.  A BackupPC system is necessarily and unfortunately
a little complex, but I'm sure you'll get the hang of it given time.
It's not exactly a 'product', it's more a sort of adventure.


As Senior IT Admin - tasked with resolving issues with our existing
backup solution through your software.


Given other things in your mail I'm not sure I understand you - do you
intend to replace BackupPC with something else or are you replacing an
existing backup system with BackupPC?


These alerts are coming out everyday for our company - looking for
basic guidance and direction with suggested steps to resolve to
avoid any data loss for our organization.

Please advise at your earliest convenience.


Well short of deleting old backups, if the filesystem that you've told
BackupPC to use is very nearly full then there's little that it can do
to help you.  Either delete some junk from the system, or failing that
make the filesystem bigger.  You might simply need to buy and install
a bigger storage device, and copy the data from the old device to the
new one, or add a device to a RAID array, or ... this kind of thing is
probably the job of the Senior IT Admin. :)


...
Working to discover the file system location /srv/backuppc to see
what space is available and if it can be expanded.
...
| Yesterday 156 hosts were skipped because the file system containing
| /srv/backuppc/ was too full.  The threshold in the configuration
| file is 95%, while yesterday the file system was up to 97% full.
| Please find more space on the file system, or reduce the number of
| full or incremental backups that we keep.
| Regards,
| PC Backup Genie


Some familiarity with Linux or other Unix-like system is more or less
a prerequisite for working with BackupPC.  Look at the 'man' pages for
the 'mount' and 'df' commands to see how to find the storage space
used and available, and the 'cd' and 'ls' commands will let you romp
around the filesystems investigating their contents.  According to
your log snippets the filesystem in use by BackupPC is 97% full, so it
seems likely that you're already in dangerous territory and need to do
something about it quickly.  When the filesystem does not have enough
storage space available, it might not be just BackupPC which fails as
a result.
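By way of a concrete starting point, something like the following (a sketch: /srv/backuppc is the path from the alert, and the fallback to / is only there so the command runs on any box):

```shell
# Show size/used/available/use% for the filesystem holding the pool.
pool=/srv/backuppc
[ -d "$pool" ] || pool=/    # fall back so the sketch runs anywhere
df -h "$pool" | tail -1
```

`du -sh /srv/backuppc/cpool` would give the pool's own size, but on a 97%-full multi-terabyte filesystem expect that to take a while.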

I'd cheerfully log in remotely and take a look at your system with no
obligation, and you'd be crazy to let me do that.  Be *careful* what
you do when you're logged in as the 'root' user.

--

73,
Ged.




Re: [BackupPC-users] Some directories stay empty (The directory ... is empty).

2022-03-21 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 21 Mar 2022, chiel wrote:


... a few directories are not included ...

One of these directories is "/var/lib/automysqlbackup". Which is created
after installing "automysqlbackup" which creates auto backup's mysql db's.

This folder has the following rights:

root@server-123:/var/lib/automysqlbackup# ls -allh
total 24K
drwxr-x---  6 root root 4.0K Mar 20 12:59 .
...


Although it seems to me a little off the wall, I'll assume - for the
purpose of this thread - that you really *do* want BackupPC to back up
a bunch of backup files.  I would normally exclude such things - and
I'd also put them on a different box.

As you have it set up, the directory '/var/lib/automysqlbackup' is
only readable by user 'root' or by a user which is a member of the
group 'root'.  This means that a user which is *not* root and *not*
in the root group cannot list that directory; unless such a user
already knows the names of the directories/files under it, and
*additionally* has read access to them, it cannot read their contents
either.


drwxr-xr-x  3 root root 4.0K Mar 14 10:50 daily
drwxr-xr-x  2 root root 4.0K Mar 14 10:50 monthly
drwxr-xr-x  3 root root 4.0K Mar 14 10:50 weekly


The 'backuppc' user could read the contents of the directories daily,
monthly and weekly - but note that the directory content is things
like the names of the files contained in the directories, not the
content of the files themselves - to read the contents of the files
the files themselves (not their containing directories) must have the
appropriate permissions.  So for example if you explicitly ask BackupPC
to back up

/var/lib/automysqlbackup/daily
/var/lib/automysqlbackup/monthly
/var/lib/automysqlbackup/weekly

by putting those paths in your configuration (assuming that the files
within those directories are readable by BackupPC) there would be no
problem for BackupPC to back them up.


When backuppc does a full backup it says that the backup has completed
successfully. However when I check this directory backuppc has created
the directory but says:

The directory /var/lib/automysqlbackup is empty


This message is possibly misleading.


On the client machine (Ubuntu 20.04.4) I used "sudo visudo" to edit the
/etc/sudoers file and added the following:

# Allow backuppc to run rsync as root
backuppc ALL=NOPASSWD: /usr/bin/rsync


Just because something is possible it doesn't necessarily mean that it
actually happens.  You would also need to get BackupPC to read the
directories and run rsync (or whatever transfer utility) as user root.
Perhaps you haven't done one or both of those things.
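For reference, the server-side half of a sudo-based rsync setup usually looks something like this in the per-PC config (a sketch assuming BackupPC 4, an unprivileged 'backuppc' login on the client, and rsync at /usr/bin/rsync; check the paths on your own systems):

```perl
# Log in over ssh as the unprivileged user, then let sudo elevate
# only the rsync command itself (matching the sudoers rule above).
$Conf{XferMethod}      = 'rsync';
$Conf{RsyncSshArgs}    = [ '-e', '$sshPath -l backuppc' ];
$Conf{RsyncClientPath} = 'sudo /usr/bin/rsync';
```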


Furthermore I changed authorized_keys to add a couple of options to it:
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa
XX backuppc@backup


Why?  Without some investigation I'm afraid that I have no idea what
effects, if any, this will have on your BackupPC setup.


So the questions is why is that particular directory not being backed up?


Very likely the directory permissions.  If you really need to back up
the backups, why not just give the backuppc user's group read
permission on the automysqlbackup directory?
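A sketch of that permission change, demonstrated on a scratch directory so the effect of the capital 'X' is visible (on the real client you would additionally need 'chgrp -R <backuppc-group> /var/lib/automysqlbackup' as root, substituting the actual group name of your backup user):

```shell
# Build a scratch copy of the layout, lock it down, then open it up
# for the group: 'r' everywhere, search ('x') on directories only.
d=$(mktemp -d)/automysqlbackup
mkdir -p "$d/daily"
: > "$d/daily/db.sql.gz"
chmod -R go-rwx "$d"     # start from no group/other access at all
chmod -R g+rX "$d"       # group read; capital X sets x on dirs only
stat -c '%A' "$d"                  # prints: drwxr-x---
stat -c '%A' "$d/daily/db.sql.gz"  # prints: -rw-r-----
```

Note that lower-case 'g+rx' would have marked the data files executable too; the capital 'X' avoids that.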

--

73,
Ged.




Re: [BackupPC-users] Fill older backups retroactively

2022-03-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 15 Mar 2022, Samantha Rapp wrote:


Is there any way to decide, after a backup has been taken, to fill said
backup? ...


https://backuppc.github.io/backuppc/BackupPC.html#Other-Command-Line-Utilities

8<--
BackupPC_backupDuplicate

BackupPC_backupDuplicate duplicates the last backup, which is used
to create a filled backup copy, and also to convert a V3 backup to
a new V4 starting point. Usage:

BackupPC_backupDuplicate -h host [-p]
Options:
   -h host hostname
   -p  don't print progress information
8<--

Does that help?

--

73,
Ged.




Re: [BackupPC-users] Archive ACLs

2022-03-08 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 8 Mar 2022, ralph strebbing wrote:


what we're trying to do is get the ACLs working from the Archive
host. It is correctly exporting the tar.gz files, but when I extract
them onto my PC, the files have lost their ownership/group ...


It isn't clear to me exactly what you're doing here, but I thought it
might be worth a mention that if you back up files from Linux box A
and restore them to Linux box B you need to be a bit careful about the
owner and group IDs.  This is because (even if you have exactly the
same sets of user and group *names* on the two machines) unless you've
been very careful to keep them in sync there's no guarantee that the
user and group *names* are associated with the same *numeric* IDs on
each machine.  Assuming that the IDs having the same relationship on
different machines is IMHO in most circumstances asking for trouble.
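A quick illustration of the name-versus-number distinction with GNU tar (standard GNU tar flags; whether you want names or raw IDs honoured on restore depends on how well the two machines' passwd/group files agree):

```shell
# An archive records both symbolic owner/group names and numeric IDs.
tmp=$(mktemp -d)
echo demo > "$tmp/file"
tar -czf "$tmp/a.tar.gz" -C "$tmp" file
tar -tvf "$tmp/a.tar.gz"                   # owner/group shown by *name*
tar --numeric-owner -tvf "$tmp/a.tar.gz"   # owner/group shown as UID/GID
rm -rf "$tmp"
```

On extraction (as root), plain tar maps the stored names onto the destination machine's IDs, while --numeric-owner applies the stored numeric IDs verbatim.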

If that's not the sort of thing you're doing, or you have it all under
control, please forgive the intrusion. :)

--

73,
Ged.




Re: [BackupPC-users] Real Time View of What Is Being Copied

2022-03-07 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 7 Mar 2022,  Dave Sherohman wrote:

Re: Real Time View of What Is Being Copied

On 3/5/22 14:36, G.W. Haywood via BackupPC-users wrote:

On Sat, 5 Mar 2022, Les Mikesell wrote:

Unix/Linux has something calle 'sparse' files ...


I can't remember the last time I saw a sparse file used ...


One common legitimate use case for sparse files is virtual disk images.


Ah, good point!  But nobody in his right mind would use BackupPC
to back them up directly as files, would they?

--

73,
Ged.




Re: [BackupPC-users] Real Time View of What Is Being Copied

2022-03-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 5 Mar 2022, Les Mikesell wrote:

On Fri, Mar 4, 2022 G.W. Haywood via BackupPC-users wrote:

> Another possibility (and the only time I've ever seen this happen) is
> that there might be a large file which is causing a timeout.  It was
> on a Linux box here, something went off the reservation and started
> writing gigabytes of messages to a file called '.xsession-errors'.

Unix/Linux has something called 'sparse' files used by some types of
databases where you can seek far into a file and write without
using/allocating any space up to that point.  The file as stored may
not be large but most tools to copy it will act as though the empty
parts were filled with nulls. Rsync might handle them these days but
may still take the time to send the stream of nulls.  But in any case
they are rarely used on Windows.


I can't remember the last time I saw a sparse file used *anywhere* in
'real life', although they are occasionally found in malicious mail.
Apparently they're available on NTFS:

https://docs.microsoft.com/en-us/windows/win32/fileio/sparse-files

so it might be another line worth investigating - but in any case,
that .xsession-errors file was just one great big ordinary file full
of garbage text messages about X.  IIRC most of it was the same 30 or
40 lines repeated ad nauseam - and every second line said exactly the
same thing. :/
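As an aside, making a sparse file to watch the apparent/allocated mismatch takes one command (a throwaway demo, nothing to do with the .xsession-errors case):

```shell
# Seek a gigabyte into a brand-new file and write a single byte;
# the filesystem allocates blocks only for the byte actually written.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=1 seek=1G 2>/dev/null
ls -lh "$f"    # apparent size: about 1 GiB
du -h "$f"     # allocated: a few KiB at most (on sparse-capable filesystems)
rm -f "$f"
```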

--

73,
Ged.




Re: [BackupPC-users] Real Time View of What Is Being Copied

2022-03-04 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 4 Mar 2022, Les Mikesell wrote:

On Mon, Feb 28, 2022 at 3:34 PM Akibu Flash wrote:
>
> I backup a couple of Windows machines to a linux machine via rsync.
> However, the backup for one of the Windows machines never finishes
> and results in a partial backup.  In the CGI user interface Count
> column, the backup always seems to hang at 311941 for hours ...
> 
> Is there a way to start a backup from the command line and have the

> screen show what in real time is actually being backed up?

Not exactly what you are asking for, but since the problem target is
windows you are probably hitting a file that windows has locked, and
the solution might be to use windows VSS snapshots.


Another possibility (and the only time I've ever seen this happen) is
that there might be a large file which is causing a timeout.  It was
on a Linux box here, something went off the reservation and started
writing gigabytes of messages to a file called '.xsession-errors'.

On Linux, by default, filenames beginning with a dot aren't displayed
by the command 'ls -l' so it took longer to spot than it should have. :/

--

73,
Ged.




Re: [BackupPC-users] How to check total disk usage of backups for each client PC

2022-02-27 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 27 Feb 2022, Kenneth Porter wrote:

--On Sunday, February 27, 2022 12:20 AM + Chris Wu wrote:


Is there one way to know how much disk space has been used for backups of
each client PC?


No, because of de-duplication.
...


Mr. Porter might be putting it a little strongly but I think basically
he's correct.

One might imagine for example measuring the total space consumed by
all backups, then deleting all backups for a client, then measuring it
again, but it's difficult to see how anyone would recommend that.  The
thread entitled "How to remove a PC and its backups" which you started
last week should have given you food for thought.

One might also set up a second pool, add to it all machines except one
(or even *only* one machine), and then measure the size of that pool.
These would give different numbers, and they might not be the numbers
that you want to know.

If you did not have a rough idea of the numbers for each machine, and
thus have a fairly good idea of what you were going to be asking the
backup system to do to begin with, it would have been difficult even
to estimate the storage capacity that would be needed when BackupPC
was run for the first time.

Is there a problem that you're trying to solve?  If so, what is it?

--

73,
Ged.




Re: [BackupPC-users] Run command per file before storing in the pool

2022-02-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 17 Feb 2022, brogeriofernandes wrote:


I'm wondering if would be possible to run a command just after
client transfers file data but before it's stored in backuppc
pool. My idea is to do an image compression, like jpeg-xl lossless,
instead of the standard zlib one.


Have you considered using a compressing filesystem on the server?


... more bandwidth-friendly ... compression before transferring to
server ...


The data can be compressed on the client by the transfer tools during
the transfer.  This can be purely to reduce network load and it can be
independent of any compression (perhaps by a different method) of the
data when it is stored by the server.  The compression algorithms for
transfer and storage can be chosen for different reasons.  Of course
if it is required to perform multiple compression and/or decompression
steps for each file, the server will have to handle an increased load.

This can all be more or less transparent to BackupPC.

--

73,
Ged.




Re: [BackupPC-users] BackupPC timesout backing up (old) Android phone (but equivalent standalone rsync command works fine)

2022-01-22 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 22 Jan 2022, Jeff Kosowsky wrote:


...
  log: recv cS+ rwxrwx--- 1006,1006 0 cam_socket0
  log: recv cS+ rwxrwx--- 1006,1006 0 cam_socket1
  log: recv >f+ rwx------ 1000,1000 8 hidden_volume.txt

it seems like it gets stuck somewhere between backing up
'cam_socket1' and 'hidden_volume.txt' since the last BackupPC log line
I see is:
 recv cS+ rwxrwx--- 1006,1006 0 cam_socket1

If the problem is backing up a 'socket', the problem is not in native
rsync as it proceeds fine but rather presumably in rsync_bpc.
...


Seems odd to me that it's trying to back up a socket at all.  AFAICR
neither rsync nor rsync-bpc (at least recent versions) will read a
socket without for example being given the '--specials' option.  I
believe it's mentioned in the rsync-bpc changelogs for around 2006.

Anyway I'd definitely want to avoid trying to back up sockets, raw
devices, and anything like that.

Last time I had BackupPC transfers timing out was when something which
used X went off the reservation and wrote a *huge* hidden errors file in
a user's home directory.  Took a while to find. :/

--

73,
Ged.




Re: [BackupPC-users] Backuppc malcompsoed command Was: issue with RsyncClientCmd being ignored

2022-01-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 15 Jan 2022, robin hammond wrote:


 '--exclude=/*'\
 '--exclude=/home/*'\
 'phantom.databit7.com:/home/administrator/'\
 '/'


but rsync fails to function with the error

bash: phanton.databit7.com: command not found


The log extract looks odd to me.  You seem to be missing an option
(something beginning with a '--' before 'phantom') but I can hardly
make out what you've really posted because it's been jumbled up by
some helpful mail client or something.

--

73,
Ged.




Re: [BackupPC-users] Raspberry Pi 4 stops at /run

2022-01-04 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 4 Jan 2022, Jeff Kosowsky wrote:

Gregory P. Ennis wrote at about 21:02:14 -0600 on Monday, January 3, 2022:
> ...
> I also am trying to backup a RaspberryPi4 that keeps hanging and stopping
> at the directory /run.  BackupPC does not error out and I have not found any log
> entries of concern.  It just pauses at /run and stays at that point until I
> either reboot or restart backuppc. 


Why are you backing up /run?
The entries are volatile and are cleared on reboot so you should
exclude them.
...


It's probably also worth mentioning that under Linux, BSD, and other
Unix-type systems in general there are a few other directories which
should not be backed up - nor, indeed, even accessed unnecessarily.

For example raw hardware devices, including mass storage devices, are
exposed in the filesystem.  To mess with them carelessly is perilous.

--

73,
Ged.





Re: [BackupPC-users] Excluding a directory.

2021-12-13 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 13 Dec 2021, Pete Geenhuizen wrote:


... I have a pretty long exclude list, and now I need to add an
additional directory to exclude.  I added the new directory
/home/new/* but the directory and its contents are being backed up.
I tried /home/new/ and /home/new with no success.  ...


You need to show us exactly what you have in the configuration which
you think is excluding things.  It's not obvious what might be wrong
if we can't see it.

By way of example, here's an exclude of mine:

8<--
$Conf{XferMethod} = 'rsyncd';
$Conf{RsyncShareName}  = [ 'Config', 'Homes', ];
$Conf{BackupFilesExclude} =
{
'Homes'  =>
[
'//src',
'//DVD',
'//temp',
'//.cache',
'//.mozilla',
]
};
$Conf{RsyncdUserName} = 
$Conf{RsyncdPasswd} = 
8<--

This is in a per-PC file at

/etc/BackupPC/pc/receptionpc.pl

As you can see I'm using rsyncd.  I've redacted parts of the config
and any comments.  Otherwise, what you see there is the entire file.

The exclude itself is a Perl structure.  There are two files and three
directories in it.  If you aren't familiar with Perl the syntax can be
a little taxing but you can probably use my example by just tweaking
the paths to suit.  Note that there IS supposed to be a trailing comma
at the end of my lists of items; Perl doesn't care either way but it's
convenient to leave one there for the next time it gets changed.  This
example however hasn't changed since February 2020.  It runs nightly.
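For comparison, if the question above had been using the plain 'rsync' method with a '/home' share, a sketch of the equivalent exclude would look like this (the hash key must match the share name exactly, and paths are anchored relative to the share, which is a frequent gotcha; a '*' key applies the excludes to every share):

```perl
# Hypothetical sketch for XferMethod 'rsync' with a /home share.
$Conf{RsyncShareName}     = [ '/home' ];
$Conf{BackupFilesExclude} = {
    '/home' => [ '/new' ],   # excludes /home/new and everything below it
};
```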

--

73,
Ged.




Re: [BackupPC-users] Backup aborted (Non-zero exit status from smbclient)

2021-12-03 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 3 Dec 2021, Technik ITEAS wrote:


...
2021-12-03 10:47:07 Got fatal error during xfer (Non-zero exit status from smbclient)
...



... Where can the error lie here? ...


On the very rare occasions when I've seen this error (the last event
was at least two years ago, and was backing up a non-Windows machine
using rsync or rsyncd) the problem has been that a file on the client
which was being backed up had become ridiculously large, because some
process on the client had gone off the reservation.  In my case it was
a log file created by the X windowing system; all the more amusingly,
the file itself (.xsession-errors) was a hidden one...

To find the problem I logged each file as it was being backed up and
watched for the error to appear.  Perhaps *if* it is the same problem
that I had, you might find such a problematic file in the same way.

Of course I am only guessing what the problem might be, so YMMV.

--

73,
Ged.




Re: [BackupPC-users] Best Practice BPC from Home -> Internet

2021-11-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 17 Nov 2021, Taste-Of-IT wrote:


I have been testing BackupPC for a few months at home with 5 Linux clients
and it is working well.  Thanks all for this great piece of software.

I now want my family to be able to back up their Windows, Ubuntu and
Linux Mint clients to that server.  The server is running Debian with
the latest BPC.  What would be a good, practical, automatic, secure and
above all simple way for my family to run backups on that server?


Assuming that the computers used by you and by your family are in more
than one physical location and on separate independent LANs, then I
agree with Mr. Lytle that you will find a VPN very useful.  Be aware
however that if any of the networks use the same network masks you may
need to be creative in the way that you address remote hosts from your
local BackupPC server.  The way I do it is to assign to hosts in all
the 192.168.x.0/24 networks a second set of IPs in the 10.2.0.0/16
network.  Each remote network gets its own /24 subnet within that
larger /16 network, e.g. 10.2.1.0/24 etc., so
that any host in the entire network can be addressed by a 10.x.x.x IP.

In addition to making backups easier, it helps with things like remote
security cameras.

You will probably want to use SMB to back up the Windows boxes, but
with some effort it is possible to use rsync.  Presumably you'll back
up only the personal files, not most of the easily-replaced OS stuff?

What kind of Internet connections will you be using?  This kind of
thing can easily saturate an ADSL connection for hours at a stretch
when one day someone in the family decides to download a few movies.
You might want to think carefully about exclude patterns, but if your
family is anything like my clients you won't want to waste your time
trying to get them to be disciplined about the way they use storage.

--

73,
Ged.




Re: [BackupPC-users] Can't access older backups after upgrade

2021-11-04 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 4 Nov 2021, Anthony Chavez wrote:


On 11/2/21 7:54 AM, G.W. Haywood via BackupPC-users wrote:



Hi G.W.! Thanks for responding.


:)


FWIW I wouldn't use NFS for anything that even vaguely bears a passing
resemblance to a backup system.  In my experience NFS is always flaky.
Even if you do get it more or less under control, there always seem to
be gotchas lurking in there.


Namely?


Well my absolute favourite was when the OOM killer decided that nfsd
was the obvious candidate.  A *lot* of machines went down that day.


... apart from an abysmal security model ...


There you go!


Indirection of configuration variables? I'm not so sure that's an issue
with my config, but how would I know?


There are some values like TOPDIR; you could check them.


what you have done so far seems to me a bit on the rash side.


How do you mean?


You had what appears to have been a working backup system.  Then you
changed it, and it no longer works.


So I would like to think that point (2) you make is inaccurate and would
hope that in doing so my backups are indeed safe.


Well, perhaps I could have phrased it better. :)  But you do seem to
have broken it.  When I decided on upgrading to version 4.x of BPC,
I left one machine which ran version 3 *completely* untouched and
installed a *completely* new machine.  So then I had three backups
running nightly.  Yes, there were two already.  After I got BPC4 in
some semblance of order I turned off one of the machines (but it's
still sitting there, turned off, and BPC says it now hasn't been up
for six hundred and twenty-two days).  And so on.


... it seems that for certain machines, the entire backup process has
restarted, numbering from 0, with more inaccessible backups. It seems to
be turning into quite a mess and it seems the cleanup system is
spiraling out of control and deleting all my files. Joy.

I have attached a PNG of the current rrdtool graph of the 4-week Pool
Size, which might give you an idea of what is happening. I have a 
feeling it will be rejected by the mailing list but I am CC:'ing you ...


The private mail was rejected, we don't accept anything from AS14061,
but I have it.  Sometimes I think I never should have patched RRDTool
to do those logarithmic graphs. :/  It doesn't tell me much, but I
don't like the look of those sudden steps in week 44.


... This is BackupPC 4.4.0.

Again, this is the Docker container ...


My preference for all backup machines is to avoid anything resembling
containers, VMs, cloud, ...  I want to see the lights flashing on some
hard, bare metal - in the same rack that contains the UPS - and I want
to be able to pick it up and carry it out to the Jeep if I have to.

If I thought there's any risk that old data will get overwritten as a
result of restarting the backup number count from 0 then I'd turn off
the machine and install a new one.  If the backups are valuable it's
just not worth the risk, it would be a *lot* cheaper than my fee and
it will Just Work.  If you need to go to old backups later on you can
get down and forensically dirty when you have your current backups in
good shape.

HTH

--

73,
Ged.




Re: [BackupPC-users] Can't access older backups after upgrade

2021-11-02 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 2 Nov 2021, Anthony Chavez wrote:

On 10/19/21 7:06 PM, Anthony Chavez wrote:

On 10/9/21 3:02 PM, Anthony Chavez wrote:


So I have had to make a slight change to my BackupPC setup and while I
am able to access new backups in the web interface, I am now unable to
access my older backups.

My old setup was under a FreeNAS Jail, using the FreeBSD pkg of
BackupPC with a nullfs mount from the jail to a ZFS dataset housing
the backups themselves. It worked great for months.

When upgrading from FreeNAS 11 to TrueNAS 12, my jail refused to
upgrade so the option was to either create a new jail or install Linux
in a VM and mount the backups directory over NFSv3. I chose the latter
(ArchLinux) and because Docker is becoming a thing in my LAN,
adferrand/backuppc.


FWIW I wouldn't use NFS for anything that even vaguely bears a passing
resemblance to a backup system.  In my experience NFS is always flaky.
Even if you do get it more or less under control, there always seem to
be gotchas lurking in there.


The error message for older backups I am seeing in the web interface
is "Error: Directory /data/backuppc/pc/$host/$baknum is empty".

When trying to access an older backup, BackupPC_ls comes back with,
for example: "bpc_attrib_dirRead: can't open
/data/backuppc/cpool/72/ca/72ca0cc1bf7a179f8c3cdabc187e3887".

Permissions check out (uid 126/backuppc, gid 126/backuppc, umask 027)
so the only thing I can think of is that I need to re-index or
re-build the pool somehow, but I am not familiar enough with BackupPC
command line utilities to know what to do at this point.

Any help would be much appreciated.


Could anyone help with this please? It's making no sense to me:


Sometimes some of the things that go on with BackupPC make no sense to
me either.  I've yet to see BackupPC_ls produce any output at all that
makes sense to me but I haven't tried too hard.  That said, things can
often work out all right in the end anyway. :)

One of the things to watch out for is indirection, where processes are
using some configured value.  This is particularly true when the value
is a filesystem path.  That might be helpful and it might not in this
case; I throw it in because at this stage I can't immediately think of
anything else _concrete_ that might be relevant to your issue.  You've
done enough already, however, to give anybody, er, pause for thought.


$ ls -ahln /data/backuppc/pc/$host/$baknum
total 65K
drwxr-x---    8 126  126   10 Aug 29 16:22 .
drwxr-x---  110 126  126  233 Oct 19 15:12 ..
-rwxr-x---    1 126  126    0 Aug 29 16:22
attrib_5e6a2266a222d07587f687f9df5ac097
-rwxr-x---    1 126  126  695 Aug 29 16:00 backupInfo
drwxr-x---    3 126  126    4 Aug 29 16:01 f%2fetc
drwxr-x---    3 126  126    4 Aug 29 16:14 f%2fhome
drwxr-x---    3 126  126    3 Aug 29 16:18 f%2fsrv
drwxr-x---    4 126  126    4 Aug 29 16:20 f%2fvar
drwxr-x---    2 126  126    3 Aug 29 16:22 f%2fvar%2flib
drwxr-x---    2 126  126   95 Oct  7 14:13 refCnt
$ /usr/local/BackupPC/bin/BackupPC_ls -h $host -n $baknum -s /etc /
/:
bpc_attrib_dirRead: can't open
/data/backuppc/cpool/72/ca/72ca0cc1bf7a179f8c3cdabc187e3887
bpc_attrib_dirRead: can't open
/data/backuppc/cpool/68/40/6940aeccff660743da3041191216d260


Is there nobody that can help me with this? Are my backups lost?


My gut feeling is no, your backups will not be lost - unless you do
something rash.  But given that

(1) you seem to think that your backups are important to you and
(2) you've been messing about wantonly with the architecture and
(3) it appears that this is your only backup then

what you have done so far seems to me a bit on the rash side.


I am open to paying for one-on-one assistance with this. I would very
much like to salvage this situation, if at all possible.


Well I probably shouldn't pass up an offer like that, but first things
first: what version(s) of BackupPC are we talking about?

--

73,
Ged.




Re: [BackupPC-users] Recommend an external drive

2021-10-19 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 19 Oct 2021, Kenneth Porter wrote:


My Western Digital MyPassport 4TB external drive has filled up so it's time
to buy something bigger. I'm thinking an 8 or 10, possibly a Western
Digital, as I've had fairly good experience with them over the years.

Any suggestions on which to look at? ...


Like Mr. Mikesell I feel that there's less than there used to be to
choose between manufacturers these days.  A while ago I used to go for
WD rather than Seagate at any price because I had very bad experience
indeed of Seagate Barracuda drives, then WD seemed to go downhill on
reliability so for some years I went with HGST.  Then WD bought HGST.

For a Linux box, you probably won't want something which parks the heads
if there's no activity for eight seconds.  Such things have happened.

Another word of caution, there's quite a bit of marketing speak, and,
dare I say it, even misrepresentation around drive descriptions and
specifications.  See for example

https://arstechnica.com/gadgets/2020/05/western-digital-gets-sued-for-sneaking-smr-disks-into-its-nas-channel/
https://arstechnica.com/gadgets/2020/06/western-digital-adds-red-plus-branding-for-non-smr-hard-drives/
https://arstechnica.com/gadgets/2020/09/western-digital-is-trying-to-redefine-the-word-rpm/
https://arstechnica.com/gadgets/2021/08/silent-changes-to-western-digitals-budget-ssd-may-lower-speeds-by-up-to-50/

These articles all happen to mention WD, but they by no means invented
the concept of 'caveat emptor'.

Finally, I personally would avoid anything filled with helium - but
that's because I happen to know how difficult it is to keep helium in
*any* container (and even *out* of some containers).  This isn't from
personal experience of helium-filled drives, of which I have none, but
if a drive relies on its gas filling for its operation I don't like to
think what might happen when the gas escapes.

--

73,
Ged.




Re: [BackupPC-users] "mutt" and "/etc/aliases"

2021-09-29 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 29 Sep 2021, orsomannaro wrote:


To send BackupPC notifications I use "sendmail" (Postfix) forwarding the
messages to a smarthost. And when I have to send an attachment I use "mutt".

For both I would like to take advantage of "/etc/aliases" using the email
address assigned to "root" as the recipient.

With "sendamil" this is trivial. But how can I use the "root" alias with
"mutt"?


It seems to me that this is a mail client question, not a BackupPC
question.  The answer is probably to read the mutt documentation,
which explains how mutt handles aliases.  I have occasionally used
mutt, but never its alias facilities, and from what I've seen it
expects a format which is different from that expected by Sendmail.
Perhaps you will need to create a script which takes your /etc/aliases
and outputs text suitable for mutt, possibly appending it to a mutt
configuration file e.g. ~/.muttrc or whatever config file(s) you use.
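
As a sketch of that conversion (sed-based, and it only handles the
simple "name: address" form of /etc/aliases lines; continuation lines
or multiple recipients would need more work):

```shell
#!/bin/sh
# Turn /etc/aliases lines like "root: admin@example.com" into mutt's
# "alias root admin@example.com" form, skipping comments and blanks.
aliases_to_mutt() {
    sed -n 's/^\([^#:][^:]*\):[[:space:]]*\(..*\)$/alias \1 \2/p' "$1"
}

# e.g. append the result to your mutt config (path is an assumption):
# aliases_to_mutt /etc/aliases >> ~/.muttrc
```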

--

73,
Ged.




Re: [BackupPC-users] V4 Infinite Incrementals

2021-09-25 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 25 Sep 2021, Kenneth Porter wrote:


On 9/24/2021 8:25 AM, Stan Larson wrote:



BTW, I'm also running weekly air-gap backups to external media that
is stored offsite.  Having critical data backed up both locally and
offsite is reassuring, but no guarantee against an organized
ransomware attack.


The incremental nature of BackupPC should address the ransomware
issue, provided your backups (and your air gap swap) go back before
the incident.


Also provided that you don't do insane things like permitting shell
access to the backup server from client boxes.

--

73,
Ged.




Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 15 Sep 2021, Jeffrey J. Kosowsky wrote:

G.W. Haywood via BackupPC-users wrote:
> On Tue, 14 Sep 2021, Juergen Harms wrote:
> 
> > Well, this discussion really looks like a non-issue ...
> 
> Unless you have a host named 'config'.


Or more precisely config.pl :)


No, that would be OK.  BackupPC itself adds '.pl' to the name of the
host - so the configuration file for a host called 'config.pl' would
be 'config.pl.pl'. :)

--

73,
Ged.




Re: [BackupPC-users] Backuppc-4 on Debian-11

2021-09-14 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 14 Sep 2021, Juergen Harms wrote:


Well, this discussion really looks like a non-issue ...


Unless you have a host named 'config'.

--

73,
Ged.




Re: [BackupPC-users] Setting up BackupPC config.pl file

2021-08-31 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 31 Aug 2021, Gary L. Roach wrote:


... I have set up my installation with *rsyncd*. I think I am OK
down to "$Conf{RsyncSshArgs} = ['-e', '$sshPath -l root'];" (Line
#1241 on my file).  After that point, I am totally confused.  The
instructions for "$Conf{RsyncShareName} = '/'; " would not work for
both debian and windoz 7.


It is sort of documented in the config.pl comments - at least if they
haven't been stripped out by some helpful GUI - but there's no example
for use with modules.

8<--
# For $Conf{XferMethod} = "rsyncd" this should be the name of the module
# to backup (ie: the name from /etc/rsyncd.conf).
#
# This can also be a list of multiple file system paths or modules.
# For example, by adding --one-file-system to $Conf{RsyncArgs} you
# can backup each file system separately, which makes restoring one
# bad file system easier.  In this case you would list all of the mount
# points:
#
# $Conf{RsyncShareName} = ['/', '/var', '/data', '/boot'];
8<--

When you use the rsyncd transfer method you have to give the name of
what rsync calls a 'module' (not a filesystem path) as the share name,
and that module must be predefined in the rsync configuration.  I like
modules, they're flexible.


How do these settings affect the settings in the GUI?


Haven't a clue, I don't use the GUI.


I want to backup the /etc, /home, /root, /boot/grub, /opt, /run and
/var/www directories on the Debian machine ...


Here's one of my per-host config files, $Conf{RsyncShareName} contains
just a list (in fact a reference to an anonymous array) of modules.

8<--
# On the server, rsyncd is started by xinetd when we connect.  The
# auth info on mail6 is in '/etc/rsyncd.auth' which should only be
# readable by root on the server!

$Conf{XferMethod}   = 'rsyncd';
$Conf{RsyncShareName}   =
    ['Config','Homes','crontabs','usr_local_sbin','site_perl','usr_share_perl5_email'];
$Conf{RsyncdUserName}   = 'ged';
$Conf{RsyncdPasswd} = 'redacted';
$Conf{BackupFilesExclude} =
    { 'Config' => [ '/etc/mail/old', '/home/ged/junk' ] };
8<--

Here's the rsyncd.conf from the machine being backed up - the things
in [square_brackets] below are the 'modules'.

8<--
[Config]
path=/etc
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[Homes]
path=/home
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[crontabs]
path=/var/spool/cron/crontabs
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[Mail]
path=/var/mail
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[site_perl]
path=/usr/local/share/perl/5.28.1/Sendmail
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[usr_local_sbin]
path=/usr/local/sbin
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth

[usr_share_perl5_email]
path=/usr/share/perl5/Email
uid=root
gid=root
auth users=ged
secrets file=/etc/rsyncd.auth
8<--

You might be able to economize a bit on the rsyncd config options
cf. what I have here; I tend to go for the verbosely obvious rather
than the concise.

HTH

--

73,
Ged.




Re: [BackupPC-users] issue with one client

2021-08-25 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 25 Aug 2021,  klemens wrote:


... slw ...
... any direction might help.


My first guess would be running out of RAM because rsync can use a lot
of RAM for directories which contain large numbers of files.

If my guess is right you might see heavy swap usage in 'top' or similar
utilities.
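
One quick way to test that guess is to look for the directories with
the most entries, since per-directory file counts are what the older
rsync protocol is hungry for (the path and the cut-off of five below
are illustrative):

```shell
#!/bin/sh
# Print the five directories under $1 holding the largest number of
# directory entries, largest first.  Stays on one filesystem (-xdev).
biggest_dirs() {
    find "$1" -xdev -type d -exec sh -c 'echo "$(ls -1A "$1" | wc -l) $1"' _ {} \; |
        sort -rn | head -5
}

# biggest_dirs /home
```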

--

73,
Ged.




Re: [BackupPC-users] New debian version "bullseye"

2021-08-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 24 Aug 2021, Pascal Legrand wrote:


I am upgrading Debian servers currently on "Buster" to "Bullseye".

I notice that backuppc is upgrading from
3.3.2-2+deb10u1 (buster) to version 4.4.0-3 (bullseye).

Are there any prerequisites before upgrading or is there nothing to do?
Will the configurations and backups still be correct?


YMMV.  My personal approach will (as always) be to allow some of the
inevitable problems to appear - and be ironed out - before I upgrade
anything to Debian 11.

Assuming that you are using the Debian packages, I would expect that
the Debian maintainer will document any changes necessary in the
information accompanying the package upgrades.

There are some significant changes.  I would note in particular that
without a change to the default configuration, version 4.x of BackupPC
does not traverse filesystems.  See the archives of this mailing list
for more information, e.g. search for '--one-file-system'.
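
As a hedged illustration of the kind of change involved (variable names
are from the stock v4 config.pl, but check your own copy and the list
archives for the details that apply to your version):

```perl
# If rsync is being kept on one filesystem, the usual remedy is to
# list every mount point you care about as its own share:
$Conf{RsyncShareName} = ['/', '/var', '/boot'];

# Whether --one-file-system is in play can be checked in the rsync
# argument lists, e.g.:
# $Conf{RsyncArgsExtra} = ['--one-file-system'];
```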

--

73,
Ged.




Re: [BackupPC-users] Move current data dir to another location and start from scratch.

2021-08-13 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 13 Aug 2021, Rob Morin wrote:


What I was thinking was simply moving/renaming the current backup
set and making the dir immutable to prevent ransomware from getting
to it.


I suspect that your idea won't work.  How do you propose to make the
directory immutable?  If it's by means of some utility like 'chattr'
then what's to stop the ransomware from using the same techniques to
remove the immutable flag?

The best way to avoid malicious access to the backup is to have it on
a separate machine which can't be accessed from the network.  It can
use firewall techniques to drop all attempts to connect *from* remote
machines, yet still be able to connect *to* the same remote machines
to run the backups.  Of course this assumes that you trust the network
stacks, the implementation of the firewall etc., but those things are
usually fairly trustworthy - and of course since you're paranoid they
are all kept up to date with security patches.

If you're really paranoid (not unreasonable in many situations), then
switch the backup machine off while it isn't doing backups.  I'm a big
fan of having more than one backup; a second backup could be another
BackupPC machine, but I'm also a fan of using more than one method of
backing up; if it's something you do only every few months, you could
use something like tar and a USB stick.
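
That ad-hoc second backup can be as small as this (a sketch only; the
adhoc_backup helper name and the /mnt/usb mount point are assumptions,
substitute your own paths):

```shell
#!/bin/sh
set -eu

# adhoc_backup SRC_DIR DEST_DIR:
# write DEST_DIR/adhoc-YYYYMMDD.tar.gz containing SRC_DIR's contents.
adhoc_backup() {
    src=$1; dest=$2
    stamp=$(date +%Y%m%d)
    tar -czf "$dest/adhoc-$stamp.tar.gz" -C "$src" .
}

# e.g. with the stick mounted on /mnt/usb (an assumption):
# adhoc_backup /etc /mnt/usb
```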

--

73,
Ged.




Re: [BackupPC-users] Move current data dir to another location and start from scratch.

2021-08-12 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 12 Aug 2021, Rob Morin wrote:


I have a full week's backup of all my servers; I want to make an immutable
copy of them, so I was thinking of just "moving" the current data dir, and
whatever else I might need, to another partition, and starting the backup
from scratch again.

Meanwhile I make the dir on that new partition immutable.

I want to do that every 6 months...

Suggestions?


To some extent you'd be trying to defeat what BackupPC is trying to do
for you.  I suggest that you don't want to do that at all.

What about things like filesystem-implemented snapshots?

--

73,
Ged.




Re: [BackupPC-users] Problem with WakupSchedule and Backupplan

2021-05-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 5 May 2021, Ralph Sikau wrote:


I have a large media library which is too big to be backed
up on a single day.


Does it matter that it's too large to be backed up in a single day?

You could run a weekly or even monthly schedule.

It doesn't have to be daily.
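
Stretching the schedule is just a matter of the period settings; a
sketch (the figures are illustrative examples, not recommendations):

```perl
$Conf{FullPeriod}  = 27.97;   # aim for roughly one full backup a month
$Conf{IncrPeriod}  =  6.97;   # and an incremental each week
$Conf{FullKeepCnt} =  4;      # how many full backups to keep
```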

--

73,
Ged.




Re: [BackupPC-users] Adding a max and warning line to the backup pool size?

2021-03-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 15 Mar 2021, Sorin Srbu wrote:

On Sun, 2021-03-14, G.W. Haywood via BackupPC-users wrote:
> On Sun, 14 Mar 2021, Sorin Srbu wrote:
> 
> > ... add a red max and yellow warning line to the BackupPC pool size chart ...
> 
> ... Icinga/Nagios might be the way to go ...


... I sit down to go through an Icinga setup guide, I end up watching ultimate
fails on YouTube instead.


I hear you.  But I have the advantage - I don't do YouTube.

--

73,
Ged.




Re: [BackupPC-users] Adding a max and warning line to the backup pool size?

2021-03-14 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 14 Mar 2021, Sorin Srbu wrote:


Is it possible to add a red max and yellow warning line to the BackupPC pool
size chart, reading from the df or OS partition size?

Just to get an idea when the disks, pool or partitions are getting on the
full side.


I'd suggest that there's a lot more than that to monitor, and that
Icinga/Nagios might be the way to go.  It's a fair amount of effort
but I don't know what I'd do without it now.

--

73,
Ged.




Re: [BackupPC-users] btrfs questions

2021-03-06 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sat, 6 Mar 2021, John Botha (SourceForge) wrote:


... take the plunge with BackupPC, ... bit rot protection is key ...
...
... fragmentation ... how best to approach this with a combination
of rebalancing and scrubbing, or if there is another way or other
aspects to keep in mind.
...
... I thought it would be safest to use nodatacow, but then read
that doing so would also stop bit rot protection, so that's a real
bummer. Am I missing something, or do I have that right?
...
... have btrfs handle de/compression, as that would seem to involve
less time doing redundant calculations. Does that make sense?
...
... seen some flame wars around the use of btrfs.


I don't want to add fuel to any flames.

In my view you're making it more difficult for yourself than you need
to (or indeed should do) if you're just starting out with BackupPC.

My take on it is that you will have quite enough on your plate getting
BackupPC bedded down - so it's doing what you want in your particular
circumstances, and you're comfortable with that - without adding into
the mix a whole bunch of variables which don't need to be variables.

If 'bit rot' protection is key to you, then set up BackupPC to avoid
any possibility of it happening, spend a few months (or perhaps years)
making sure that it isn't happening, and worry about filesystem(s),
and any quirks they may have, some other time.

I personally have never seen any evidence of what I imagine might be
called 'bit rot' because in my view if something like that's happening
then the system is badly broken and it needs fixing.  But I have seen
plenty of damaged filesystems.  When I have had experience of damaged
filesystems, I believe it's fair to say that the newer the filesystem,
the more difficult it has been to repair it.  The first (and last!!!)
ReiserFS I ever used failed catastrophically and was never recovered.
I've recovered everything from DOS to EXT/2/3/4 systems, usually with
little difficulty; I've never used BTRFS so I can't offer any comment
on its repairability.

Right now I use EXT4 almost exclusively, and there would have to be a
really technologically disruptive development in filesystem capability
(like an order of magnitude improvement in some performance metric) to
encourage me even to consider changing to anything else.  I don't care
if anybody thinks I'm an old stick-in-the-mud, I just want it to work.

The other day when I was out with one of my dogs I fell into chatting
with a couple of other walkers.  This particular dog is a difficult
case from the rescue.  One of the walkers said "you seem to have a
calm aura about you".  Of course that's necessary for these difficult
rescue cases.  I thanked her for the compliment although I didn't say
"it's because I use BackupPC and EXT4" - which wouldn't have been too
far from the truth.

--

73,
Ged.




Re: [BackupPC-users] Simple server side embedded config file to allow full shadow backups of Windows host

2021-02-27 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 26 Feb 2021, backu...@kosowsky.org wrote:


... useful in case you have multiple backuppc servers backing up a
single client (I do this so that I have totally redundant backups in
case one backuppc server fails)


Good idea to have "totally" independent backups, but do note that if
they're both provided by BackupPC then the word "totally" is perhaps
overstating the case somewhat.  There's a risk that a failure in the
backup software itself will give you the same problem on both of the
"totally" independent backup systems and there's even a risk that, if
both systems are running simultaneously, something malicious will get
into both systems simultaneously and do damage.  Result, misery.  The
use of e.g. a different operating system on the backup system is also
worth considering, so backing up a few Windows boxes using a Linux box
makes sense to me (although I'm not sure that I'd ever consider doing
that the other way around - I'd consider that a Windows backup server
would put any linux boxes on the same network at unnecessary risk).

I too have an ("almost") independent BackupPC system on an old surplus
box, but it's switched OFF most of the time, so even if a malicious act
or software fault causes data loss on the main backup, the inactive one
won't suffer a coincident loss.  I also employ ad-hoc, parallel backups
using tar and rsync, which copy things which change infrequently, and a
few database dumps (that covers most stuff here) to things like big SD cards
(via USB sticks, which are almost inherently offline most of the time).

The backup boxes also run their own (more or less independent) firewalls.

Of course the independent backup boxes aren't all in the same building.

PS: The wife's Pi4B has been running fine for a few weeks but then she
plugged in a DVD reader.  It's now crashed (judging by the language) at
least five or six times in the last 24 hours.  I just thought that I'd
mention it, in case you use Pi4Bs for backup despite my dire warnings.

PPS: Guys, for those of us on the digest list, do you think you could
trim your posts a bit?  Most of the time I don't read the dozen-times
copied diatribes so it almost defeats the point of a mailing list.

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-11 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 11 Feb 2021, backu...@kosowsky.org wrote:

Michael Stowe wrote at about 20:50:45 + on Wednesday, February 10, 2021:
> On 2021-02-09 16:34, G.W. Haywood via BackupPC-users wrote:
> > Hi there,
> > 
> > On Tue, 9 Feb 2021, backu...@kosowsky.org wrote:
> > 
> >> G.W. Haywood via BackupPC-users wrote at about 14:26:30 + on 
> >> Friday, February 5, 2021:

> >> >
> >> > [Red Hat is] dropping BTRFS because they can't support it in the way 
they'd
> >> > like to for their commercial customers.  That's because it's unstable.
> >> > It's been said that it's been almost ready for production for about a
> >> > decade, and I can't help thinking that it will probably stay that way
> >> > until it expires during the heat death of the universe.
> >> 
> >> Any objective data or recent link to such instability.

> >> Would be very interested in validating that.
> > 
> > https://access.redhat.com/discussions/3138231
> 
> Not sure if you misunderstood the question, or didn't follow the link, 
> or didn't realize it appeared earlier in the thread, but that absolutely 
> does not qualify as objective data, nor is it particularly accurate.


Good point!
While people will (and should) compare the pros/cons of different
filesystems until the end of time (like vi vs. emacs), it is either
naive or highly partisan to think that a well-distributed and accepted
filesystem like btrfs is 'unstable'.


I don't want to get into a pointless argument but I do feel the need to
get the point across.  Apparently I haven't yet done a good job of that.

The problem seems to be that people don't understand what's meant in
this context by the word 'unstable'.

Several people seem to think it means "contains faults".  It doesn't.
It means that it's a moving target.  In the case of BTRFS it's been a
moving target more or less since its creation, and people at Red Hat
were unable to keep up with it for that reason.  Which is what I said
at the outset, and what is expressed in comments in the link I posted.
(This is, incidentally and despite specious argument to the contrary,
perfectly objective.)

Each of us must draw his own conclusions about how a lack of stability
might or might not affect any uses which he might make of any product.
In this case, I've drawn mine and I consider the matter now closed.

--

73,
Ged.


___
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:https://github.com/backuppc/backuppc/wiki
Project: https://backuppc.github.io/backuppc/


Re: [BackupPC-users] Using BackupPC 4.x with rrsync on the client

2021-02-10 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 10 Feb 2021, Felix Wolters wrote:


I'd like to use it with restricted access to the client ...


If I understand you correctly there's no need for complications, you
can do that with plain vanilla rsyncd.  It's what I do on my machines.
Forbidding ssh access further reduces the attackable surface.

1.  Set up an rsyncd daemon on the client which listens for connections
from rsync on the backup server.  Configuration like this for example:

8<--
...
[Config]
path=/etc
auth users=whatever
secrets file=/etc/rsyncd.auth

[Homes]
path=/home
auth users=whatever
secrets file=/etc/rsyncd.auth
...
8<--

The [words] in square brackets are what rsyncd knows as 'modules'.
Any module is read-only by default, so, even if it can connect to the
client's rsyncd daemon, the backup server can't write to anything in
those directories.  You forbid access to anything else.  The file
named in the 'secrets file' line contains just a single line with the
username 'whatever' (with no quotes), a colon, and the password for
that user. The user's password also appears in the Perl variable
$Conf{RsyncdPasswd} in the config fragment (usually in the file called
/.../BackupPC/pc/client.pl) on the backup server.  The user can be one
which you create purely for backup purposes, and probably should be.
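For example, the secrets file described above would contain a single line like this (hypothetical username and password; rsyncd refuses a secrets file that is world-readable, so make it mode 600 and owned by root):

```
whatever:ExamplePasswordOnly
```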

2.  Set up 'rsyncd' transfers on the backup server like this, in the
file /.../BackupPC/pc/client.pl.  Obviously any other special config
for the client goes in there too.

8<--
$Conf{XferMethod}   = 'rsyncd';
$Conf{RsyncShareName}   = 
['Config','Homes','usr_local_sbin','site_perl','usr_share_perl5_email'];
$Conf{RsyncdUserName}   = 'whatever';
$Conf{RsyncdPasswd} = 'redacted';
8<--

3. (Optional)

Instead of running the rsyncd daemon on the client 24/7 you can run it
via one of the super-servers, inetd or xinetd.  Nowadays I tend to use
xinetd but that's up to you.  The configurations are very different
for inetd and xinetd.  Using a super-server like this lets you further
restrict connections as you'll see below in the 'only_from' line:

8<--
client:~ cat /etc/xinetd.d/rsyncd
...
service rsync
{
flags   = REUSE
socket_type = stream
port= 873
wait= no
user= root
group   = root
server  = /usr/bin/rsync
server_args = --daemon
log_on_failure  += USERID
disable = no
only_from   = 127.0.0.1 192.168.1.5 192.168.1.47 192.168.1.246
}
8<--

If you run a super-server, don't forget that you'll need to restart it
after making any change to its configuration.

I've used this setup for quite a few years with no issues.
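Once both ends are configured, you can sanity-check a module from the
backup server with a plain rsync client (hypothetical client hostname;
it should prompt for the rsyncd password and list the share read-only):

```
rsync --list-only whatever@client::Homes
```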

HTH

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-09 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 9 Feb 2021, backu...@kosowsky.org wrote:


G.W. Haywood via BackupPC-users wrote at about 14:26:30 + on Friday, 
February 5, 2021:
>
> [Red Hat is] dropping BTRFS because they can't support it in the way they'd
> like to for their commercial customers.  That's because it's unstable.
> It's been said that it's been almost ready for production for about a
> decade, and I can't help thinking that it will probably stay that way
> until it expires during the heat death of the universe.

Any objective data or recent link to such instability.
Would be very interested in validating that.


https://access.redhat.com/discussions/3138231

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-07 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 7 Feb 2021,  Les Mikesell wrote:


... I have had several drive hardware failures and one instance of
one port on a powered USB hub going bad with intermittent failures
causing disc corruption.


Glad you mentioned that.  There's a seven port hub here which has duct
tape stuck over one of its ports because that one port (and only that
port) has given us intermittent troubles over several years.  It's my
guess that there's something marginal, rather than something like a
dry soldered joint which in my experience will give progressively more
trouble until at last it fails completely.  The last thing you want in
any computer system is something that's marginal, you'll never be able
to figure out what's going on.

I cut my design teeth on 74-series logic noise immunity in the 1970s. :/

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-06 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 5 Feb 2021, backu...@kosowsky.org wrote:


So what filesystem is safest to run over USB?


Frankly I don't know.  Like Mr. Mikesell I've had good experience with
the tools for EXT4, but my wife has managed to trash at least one EXT4
partition by simply plugging a USB device into a powered USB hub which
at the time happened to be supporting the USB disc drive on a Pi4B.

Unfortunately the recovery tools failed in that case (and in fact they
seemed to compound the problems).  Fortunately the drive itself didn't
seem to have been damaged, and the loss was limited primarily to our
convenience copies of about a thousand movies.  No small loss of her
time copying them I'm afraid.

See also

https://en.wikipedia.org/wiki/USB_mass_storage_device_class#Design

which explains one reason to be very cautious about what you do with
your drives if they're connected via a USB interface.  There's often
no obvious way to find out what commands the interface supports other
than trial and error, which could work out expensive.

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 5 Feb 2021 Les Mikesell wrote:


On Thu, Feb 4, 2021 at 11:05 PM  wrote:

> ... Snapshots have also saved me when I have run across the occasional
> backuppc gremlin of disappearing files in that I can find the cpool
> file and revert it from past snapshots.

Are you sure that the disappearing files aren't a quirk of btrfs in
the first place?


:):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):):)

Seriously, I've never seen a BackupPC file "disappear" - but are we not
in danger of missing the point that the OP is planning that this drive
will use a USB interface?

Be *very* careful with any 'modern' filesystem if you use a USB interface.
I've seen filesystems trashed by USB communication failures, and the more
'modern' the filesystem, the more irretrievable seems to be the failure.

My advice would be don't do that, unless you're comfortable with trashed
filesystems and it won't matter to you very much when it happens.

--

73,
Ged.




Re: [BackupPC-users] Which filesystem for external backup drive?

2021-02-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 4 Feb 2021, Kenneth Porter wrote:

On Wed, 3 Feb 2021, backuppc@kosowsky wrote:
> On Wed, 3 Feb 2021, Kenneth Porter wrote:
> 
> > I'm deploying a system with an external (USB) 1.5 TB drive. How should I 
> > format it? (Ie. which filesystem is best for this?)
> 
> I just built backuppc for my Raspberry PI and ordered an external SSD

> drive that I plan to format in btrfs.

I'm using CentOS, and it looks like Red Hat is dropping btrfs ...


They're dropping BTRFS because they can't support it in the way they'd
like to for their commercial customers.  That's because it's unstable.
It's been said that it's been almost ready for production for about a
decade, and I can't help thinking that it will probably stay that way
until it expires during the heat death of the universe.

For a backup system, I wouldn't even consider BTRFS.

If I were going to try BTRFS for anything at all (I have no reason at
all to want to explore that kind of a headache) then first I'd create
an EXT4 filesystem, and then I'd convert that to BTRFS in situ.  Then,
if all else failed, I could revert to EXT4 at, er, the drop of a hat.

Theoretically.
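The in-place route described above is what btrfs-convert does
(hypothetical device name; -r rolls back using the preserved ext4
metadata, per the btrfs-progs documentation):

```
btrfs-convert /dev/sdX      # convert the existing ext4 filesystem in place
btrfs-convert -r /dev/sdX   # roll back to the original ext4
```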

--

73,
Ged.




Re: [BackupPC-users] Defining all-day blackout periods

2021-01-31 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 31 Jan 2021, Ari Sovijärvi wrote:


I need to define couple of all-day blackout periods for BackupPC and the
manual wasn't clear if I can just specify 0 in the start and end field
to accomplish this.

I'm also wondering if 0 is a valid ending time or is it considered start
of that day?


$Conf{BlackoutPeriods} = [
{
hourBegin =>  0,
hourEnd   => 23.999,
weekDays  => [2, 5],
},
];
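Not BackupPC's actual code, but a sketch of how a period like the one
above is applied (per the config documentation, weekDays uses
0 = Sunday .. 6 = Saturday, so [2, 5] means Tuesday and Friday; the
real code also handles windows that wrap past midnight):

```python
# Evaluate whether a given weekday/hour falls inside any blackout period.
def in_blackout(weekday, hour, periods):
    return any(
        p["hourBegin"] <= hour <= p["hourEnd"] and weekday in p["weekDays"]
        for p in periods
    )

# All-day blackout on Tuesdays and Fridays, as in the config above.
periods = [{"hourBegin": 0, "hourEnd": 23.999, "weekDays": [2, 5]}]
print(in_blackout(2, 13.5, periods))  # True: Tuesday is fully blacked out
print(in_blackout(3, 13.5, periods))  # False: Wednesday is not
```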

--

73,
Ged.




Re: [BackupPC-users] Return to BackupPC

2021-01-10 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 10 Jan 2021,  several authors wrote:


> > Why do people like to install from source instead from packages?
> > ... manually installing software ... would be the very last resort.


Sometimes packages are out of date.  For Debian, they can be
ridiculously out of date and sometimes there isn't even any maintained
package at all.  Although I use Debian a lot because it saves me a lot
of time for things like security updates, much of the time I'll install
from source because I want a reasonably up-to-date version of whatever
it is I happen to be installing.


> I agree! I am OCD about keeping even my personal systems clean...


It's subjective.  You stick to the packages, I make notes.  Apparently
we both think of our systems as 'clean'.


> ...and I hate the idea that ...

... at least the Debian project seems to have a strong opinion ...


Opinions are fine, but sometimes they border on the irrational, and if
they get embodied into packages they can be forced onto other people.
If a package embodies opinion which I don't share I probably won't use
it, so I have to decide whether to make my own package or build from
source.  The main thing that I've had to build a Debian package for is
the kernel, because on some hardware that I have to support the Debian
version runs several orders of magnitude slower than the one I build.
Almost everything else which isn't stock Debian stable would be simple
source installations.

--

73,
Ged.




Re: [BackupPC-users] BackupPC 4.x on Raspberry PI?

2021-01-08 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 8 Jan 2021, backu...@kosowsky.org wrote:

Les Mikesell wrote at about 14:19:03 -0600 on Thursday, January 7, 2021:
 > On Thu, Jan 7, 2021 at 1:57 PM Guillermo Rozas  wrote:
 > >>
 > >> Interesting... how does Banana Pi Pro compare to Pi4? e.g.,
 > >> performance, interfaces/ports, price, support...
 > >
 > >
 > > I would say it's inferior in every aspect, except for the direct SATA 
connector (I think it's roughly equivalent to a Pi2 or Pi3 in raw specifications). If 
it wasn't for the reports about the Pi4 overheating / needing an active cooler I 
would switch to a Pi4.
 >
 > There are several fanless cases that act as a huge heat sink.  I like
 > the flirc aluminum one but there are others and you can find some
 > YouTube demonstrations/reviews of them.
 >

So is the issue purely an "overheating" and adequate "heat sinking" or
is there a more fundamental issue with the hardware design ...


In my (expert) opinion it's fundamental to the design.  It's well
known that the USB design was flawed when it was first brought out.

It was widely publicized that the incorrect use of a single 'pull-up'
resistor violated the USB spec (two resistors were required, and I
believe that later versions were updated to have two resistors).
The fact that a mistake like that could get into production makes me
wonder what else is wrong.

Although I have no hard evidence to offer, the behaviours I've seen
lead me to suspect that something else in the design is at the very
least marginal.  Obvious candidates are power conditioning and the USB
circuits but it could be something completely different and it could
be more than one thing.  At one time I had the idea of using the 4B in
a product but I abandoned the idea.  If I were going to use the 4B I'd
be finding out exactly what the problems are before I'd commit to it.


i.e., if I can keep the heat down, will it be stable?


Not if your definition of stable is anything like mine, and your Pi4Bs
behave anything like the several I've used.  See my previous posts.

I strongly recommend that the 4B is not used for anything as important
as your backups.  In my experience the 3B+ is fine.  I've been using
one for backups for more than a year, and one as a database and file
server for quite a bit longer than that, with no issues although the
database server does I admit occasionally creak a bit.  It's only ever
fallen over when the OOM killer killed something that I shouldn't have
let it kill (rpc.mountd!) and having written the appropriate values to
/proc//oom_score_adj it now trundles along quite unflappably.  In
any case, a 3B+ is well up to it, the extra power of the 4B won't be
needed unless your backups are extraordinary.

--

73,
Ged.




Re: [BackupPC-users] BackupPC 4.x on Raspberry PI?

2021-01-05 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 5 Jan 2021, backu...@kosowsky.org wrote:


Robert E. Wooden wrote at about 10:02:58 -0600 on Monday, January 4, 2021:



 > On 1/4/2021 9:55 AM, Les Mikesell wrote:
 > > On Mon, Jan 4, 2021 at 9:32 AM G.W. Haywood via BackupPC-users wrote:
 > >> I've tried using a 4B and gave up with it - unreliable. ...
 > > Maybe you just have a bad one or it is overheating in its case ...


Nope.  We have several.  They all do it.  It isn't an overheating
problem.  Some of them aren't even in cases - they're mounted on flat
panels (which are in turn mounted vertically for maximum convection
cooling) and the rooms in which they run are very cool.  Nagios says
the temperatures are fine at the crash but they do throttle themselves
at high temperatures anyway.  There's also a chip register which can
be inspected to see if CPU temperature limits have been exceeded.
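On Raspberry Pi OS that register can be read like this (the returned
bits flag under-voltage and throttling, both current and "has
occurred", per the vcgencmd documentation):

```
vcgencmd get_throttled    # throttled=0x0 means no events recorded
```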


 > > ... I've left a 4B powered up for months without crashing.


My wife uses one for desktop stuff and it's more or less fine for that
use.  It can be left running idle for many weeks; the best uptime ever
according to Nagios was about 110 days.  It crashed (around lunchtime)
on New Year's Day after 68 days when (we think) she was thrashing a
database.  But if for example she connects a powered USB device like a
second HDD it's almost bound to crash immediately.  So after a few bad
experiences there are things we just don't do now, and that's how we
can manage to get more than a couple of days uptime.


 > > Mine doesn't have a load like backuppc ...


I'm pretty sure that load is one of the issues.  Running as a database
server here, the disc is thrashing constantly, 24/365.  The backup server
is obviously only a heavy load during backups, and then only some backups.


Robert E. Wooden wrote at about 10:02:58 -0600 on Monday, January 4, 2021:
 > I'm going to add here that I have seen a few posts (elsewhere) that some
 > 4B w/8Gb RAM had crash issues and a 4B 4Gb running the same program runs
 > ... Wondering what model 4B the OP is using? (2Gb, 4Gb, or 8Gb?)


They're all 4GB.  They were purchased well before the 8GB was released.


Well I was considering buying a 4B 8GB mounted in my router "cabinet"


Keep an eye on the cooling.


to do the following 3 things:
1. Backuppc server (secondary ...


If it's a secondary I'd be a little anxious for you but just about OK
with that.  I wouldn't now consider a 4B for any backup of my own, as
it would be too easy to let things slide and create your own accident.


2. Using NUT, monitor the UPS powering the switches/routers/VOIP
   devices in my router cabinet


As I said I run Nagios/Icinga on 4Bs, I also run Smokeping which does
quite a bit of network activity too.  But if it crashes, no big deal.


3. Potentially also serve as an occasional mythtv frontend for a
   basement TV used during exercise


Well, my preferred forms of exercise preclude anything like that, but
I'm sure it would be OK.  We watch all our movies on a Pi 2!  On the
whole it's pretty reliable, although we only run it in the evenings,
and it doesn't like mplayer.  Omxplayer seems no longer to be in
development (this is hearsay from her indoors :) but that's what we use.


Would be good to understand if the issues with crashing 4B are due to:
1. One-off faulty device


Absolutely not.  We've tried really hard to get the things going with
multiple different discs, USB/SATA interfaces, power supplies, cables
and the 4B devices themselves.  The only common theme is the 4B.  The
same discs, interfaces, PSUs and cables are fine on (again, several)
Pi3B+ devices.  The 4Bs don't run reliably with just a Webcam.


2. General hardware design issue


Yes.  It could be more than one thing.  Could be that there's more
wrong with the USB design than the already well-documented problems.
Could be that the power stability is an issue - being an erstwhile
electronics designer I've never liked the idea of hanging the PSU on
the end of two metres of skinny flex for example, but I've never gone
into that and tried running with a shorter cable.  If I wanted to do
it myself I'd probably do it all myself, but I don't have the time.


3. General OS/software issue


We can't rule out supplier- or hardware- specific drivers.


4. "Overload" of some type


Don't think so, but the opportunity for power line noise bothers me,
I've seen both CPUs and RAM devices fall over for that reason.


5. Weird other interaction among HW/OS/apps etc


Assuming the interaction isn't because of something specific to the Pi
(my comment under your 3 above) I don't think so.  I've had problems
with very different setups - file servers, database servers, backup
servers, performance monitoring, security cameras (both USB and IP)
and desktop.  All were the same basic OS, but different application
software.  All continuous, if not necessarily all heavy, system usage.

The uptime on the 3B+ that I'm using to type

Re: [BackupPC-users] BackupPC 4.x on Raspberry PI?

2021-01-04 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 4 Jan 2021, backuppc-users-requ...@lists.sourceforge.net wrote:


Has anybody had good success running BackupPC 4.x on a Raspberry PI?


Yes, I have.  Very good, with one reservation.


I am considering either:
1. (old) Pi 3 - Quad core  ARM Cortex-A53, 1.2GHz.
2. (new) Pi 4 - Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz


I've tried using a 4B and gave up with it - unreliable.  That was the
one reservation.  I think I've posted about my experience on the list,
check the archives.  The 3B+ is fine.  Installed from source, fiddled
about a bit, and it's been running for well over a year as the main
backup for about a dozen hosts.  I don't worry about the performance,
it runs when I'm asleep so I haven't spent any time on measurements.
I do run Nagios/Icinga and it doesn't show any memory or CPU problems.
Nagios/Icinga runs on a 4B as it happens, it's a bit heavy for the 3B+
and it doesn't really matter if it crashes now and again.  And it does.

Typically there's no more than 1 core used for 10 minutes and there's
at least 250 MBytes free RAM at the time the backups run.  Total about
1.5 million files backed up in 16,500 directories, about 12TB.  Around
580GB right now in the pool filesystem after pooling and compression.
This is all using a USB-connected 3TB Western Digital drive in its
own little plastic box, off the shelf from some computer store.  I did
tweak it so it doesn't spin down and park the heads every 8 seconds or
whatever stupid number it was.
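The spin-down tweak referred to above can be done with hdparm
(hypothetical device name; not every USB-SATA bridge passes these
commands through to the drive):

```
hdparm -B 255 -S 0 /dev/sda   # disable APM head-parking and the standby timer
```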


How does 4.x compare with 3.x in terms of CPU and memory usage?


I can't comment on that for the Pi, as I've only ever seriously run
version 4.x on it.

Please let me know if you'd like any more detailed information, I can
easily run some instrumentation for a few days/weeks.

--

73,
Ged.




Re: [BackupPC-users] Backuppc in large environments

2020-12-01 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 1 Dec 2020, backuppc-users-requ...@lists.sourceforge.net wrote:


How big can backuppc reasonably scale?


You can scale it yourself as has already been suggested, but I don't
think you'd have any problems with a single backup server and the data
volumes you've described if you were sensible about the configuration,
which is very flexible.  However...


... daily backup volume is running around 750 GB per day, with two
database servers providing the majority of that volume (400 GB/day
from one and 150 GB/day from the other).


That's the part which bothers me.  I'm not sure that BackupPC's ways
of checking for changed files marry well with database files.  In a
typical relational database server you'll have some *big* files which
are modified by more or less random accesses.  They will *always* be
changed from the last backup.  The backup of virtual machines is not
dissimilar at the level of the partition image.  You need to stop the
machine to get a consistent backup, or use something like a snapshot.

Normally I do some sort of separate database dump for database files,
and run that system separately from run-of-the-mill Linux/Windows box
server/workstation backups.  After all, I usually just want a single
good backup of any database.  Having several copies, aged at one day,
one week, two weeks, a month etc. would usually be of no use to me.
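If you do want to wire such a dump into BackupPC itself, a hedged
sketch of a per-client config fragment (hypothetical dump script path;
$Conf{DumpPreUserCmd} and $Conf{UserCmdCheckStatus} are standard
BackupPC config variables):

```
# Dump the database to a flat file before each backup so BackupPC sees
# a consistent snapshot, and abort the backup if the dump fails.
$Conf{DumpPreUserCmd}     = '$sshPath -q -x root@$host /usr/local/bin/db-dump.sh';
$Conf{UserCmdCheckStatus} = 1;
```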


... I have no idea what to expect the backup server to need in the
way of processing power.


Modest.  I've backed up dozens of Windows workstations and five or six
servers with just a 1.4GHz Celeron which was kicking around after it
was retired from the sales office.  The biggest CPU hog is likely to
be data compression, which you can tune.  Walking directory trees can
cause rsync to use quite a lot of memory.  You might want to look at
something like Icinga/Nagios to keep an eye on things.
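The compression tuning mentioned above is a single config knob; a
hypothetical fragment:

```
# Lower the zlib compression level to reduce CPU load
# (3 is the usual default; 1 is fastest, 9 smallest).
$Conf{CompressLevel} = 1;
```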

--

73,
Ged.




Re: [BackupPC-users] Turn-key backup solution for all-Windows network

2020-11-19 Thread G.W. Haywood via BackupPC-users

Hello again,

On Thu, 19 Nov 2020, Kenneth Porter wrote:


Do you install to your Pi using just a tarball, or is there some
packaging as there is for RPM?


We use Raspbian, or Raspberry Pi OS as they've decided to call it now.
It's based on Debian, with the same packaging tools.  You can grab a
bootable image or you can be creative.  You can boot with a filesystem
image on a micro SD card (including read-only if need be), boot with a
USB filesystem, boot with an NFS filesystem, boot as a USB 'gadget'.
We do all the above for our applications.

Once you have a system that boots you can do pretty much all the APT
things that you can do with Debian.  I personally never really think
about it not being Debian although people here who get into the guts
sometimes do things with the firmware that make my toes curl.

Some Pis have restrictions on how you can boot them that others don't.
Generally the newer ones provide more flexibility.

The thing I like most about them is that they don't need cooling fans.

--

73,
Ged.




Re: [BackupPC-users] Turn-key backup solution for all-Windows network

2020-11-18 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 18 Nov 2020, Kenneth Porter wrote:


I'm advising a friend on what to use to back up his networked
Windows PCs (including one Windows Server) in a machine shop. I
deploy BackupPC at home and at my own office on CentOS servers, but
this shop has no technical people and no Linux machines. I'm
wondering if anyone has bundled BackupPC into, say, a Raspberry Pi
as a "backup appliance". Or is there some other product I should be
looking at?


I've been using BackupPC on a Raspberry Pi 3B+ for a couple of years
to back up up my own office systems (all Linux boxes) and I've used
BackupPC to back up multiple Windows/Linux servers and workstations
for clients, including some with machine/fabrication shops, for almost
two decades.  I routinely monitor the status of the backup servers and
many of the other servers and clients, using Nagios and/or Icinga and
Smokeping, so I get email alerts if something drops off the network.
For remote backup systems I use OpenVPN to get access through client
firewalls.

FWIW, I can say that after getting over hiccups in the systems' setups
they have been extremely reliable.  The Pi 3B+ on my desk (mostly used
as a thin client and for backup) is currently at 237 days uptime.  I'd
recommend them without hesitation for backing up anything up to a few
dozen machines and at least tens of terabytes of total backup storage.

I couldn't offer an opinion on anything much larger as I don't have
personal experience of large systems, but I wouldn't expect issues
with storage of hundreds of terabytes of data - even using a Pi 3B+ -
if the network infrastructure will support it.  You'd want at least
gigabit Ethernet, which might be something to consider if the machine
shop installation is very old.

After I upgraded the network for a client (a fab shop), every now and
then we'd suddenly run into connectivity issues when somebody cleared
out a desk and found an old switch (or hub!) which was only capable of
10 or 100 MBit/s, and patched it into the network without asking.  And
once upon a time the site electrician juggled most of the patch cables
in the server cabinet because he'd had a row with the site foreman,
but that was really exceptional.

The cheaper switches (Netgear and TP-Link) have given trouble - they
would work fine for weeks and then suddenly start chattering away to
nothing that I could ever find, and ignore all legitimate traffic
until a power cycle fixed them - but it seems that only a couple of
examples out of the dozens that I installed ever did that, and they've
been replaced.

The Pi 4B irregularly crashes for no apparent reason; the 3B+ doesn't.
So I wouldn't recommend the 4B (not *any* 4B version) for anything as
important as backup, nor for anything in or near a machine shop where
frequent power spikes could be expected.  I run three on my own site,
24/365.  They seem to struggle to get past ten days of uptime, but it
isn't really an issue for the jobs they're doing.

I don't think the USB port design is quite right yet.  Sometimes
they'll just lose the ports altogether and you have to reboot to
recover them.  It's a pain because the hard discs are USB connected,
of course.

This will sound weird, I know, but there seems to be something
bi-stable about them.  Once they're going OK they seem to keep going
for a while.  But after a crash, it seems to take anything up to a
dozen reboots over a period of perhaps a couple of days before they
settle down to give reasonable reliability - until the next crash.
I've been working with electronic kit for more than half a century, so
I'm conscious that sounds crazy and I know there's still something I'm
missing.  I'll find it one day unless they bring out a better 4B and
these go in the parts bin.

I can let you have screenshots and such if you'd like to see them and
I'll be happy to help more if required.  If you'd like to discuss it
privately please drop me a line on the list and I can let you have a
private email address - my list address accepts only list mail - or
just send a message to this address and live with the rejection, I'll
most likely see your mail in the logs because I spend most of my life
reading them.

HTH

--

73,
Ged.




Re: [BackupPC-users] Verify full integrity of pool and backups

2020-11-01 Thread G.W. Haywood via BackupPC-users

Hi there,

On Sun, 1 Nov 2020, Guillermo Rozas wrote:


As explained in the README, the compression algorithm that BackupPC uses is
slightly different to the one used by zlib/pigz. In some rare occasions,
this results in a corrupted file when uncompressed by zlib/pigz. Most of
the time the "ZLIB/PIGZ ERROR" above is related to this, so the script
re-checks them by uncompressing them using BackupPC_zcat. If the error
persists, then you will see an extra "CHECKSUM ERROR", which means the file
is actually corrupted.


Is there some compelling reason not to use BackupPC_zcat in the first place?
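For reference, the check under discussion can be sketched as follows
(assumption: a BackupPC v4 pool, where each pool file is named after
the MD5 digest of its *uncompressed* contents; in practice the bytes
would come from running BackupPC_zcat on the pool file, avoiding the
zlib/pigz incompatibility mentioned above):

```python
import hashlib

def digest_matches(pool_name, uncompressed_bytes):
    """True if a pool file's name matches the MD5 of its uncompressed contents."""
    return hashlib.md5(uncompressed_bytes).hexdigest() == pool_name

data = b"example file contents"
good_name = hashlib.md5(data).hexdigest()
print(digest_matches(good_name, data))   # True: digest and name agree
print(digest_matches("0" * 32, data))    # False: corrupted or misplaced file
```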

--

73,
Ged.




Re: [BackupPC-users] Windows backups failing with special characters

2020-09-02 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 2 Sep 2020, Mariano Aliaga wrote:

...
tar:974  Fatal: Can't translate pathname './Ajuste Inflación Año2019.xps' to UTF-8
...
... So I wonder if this is a known bug, it's smbclient, backuppc,
tar or what? Is there some known workaround? I'd appreciate any help
you could give.


I think it's a message from 'tar'.  (That's because that's what it says. :)

It's not what I'd call a bug, it's the unfortunate result of the many
changes to the capabilities of both utilities and filesystems over the
time that they've been in use.  There can be ambiguities when strings
are translated between some character sets, and you really don't want
that kind of ambiguity in file names.

What's your output from 'locale'?

Just a stab in the dark but you might try UTF-16, as mentioned here:

https://forums.freebsd.org/threads/tar-cant-translate-pathname.32262/

Life was so simple when everything was done with 7-bit ASCII codes...
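The ambiguity is easy to reproduce. The cp850 code page below is just an assumed legacy DOS/SMB character set for illustration; the real server-side charset depends on the smbclient and locale configuration in question.

```python
# Filename bytes written under one character set may simply not be
# valid UTF-8 at all -- that's the translation tar is complaining about.
name = "Ajuste Inflación Año2019.xps"   # the filename from the error message
raw = name.encode("cp850")              # assumed legacy DOS/SMB code page
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("can't translate:", err.reason)
```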

--

73,
Ged.




Re: [BackupPC-users] BackupPC Service Status "Count" Column

2020-07-23 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 23 Jul 2020, Akibu Flash wrote:


In the CGI Interface on the BackupPC Service Status Page there is a
Column labelled "Count".  What exactly is that determining? Is it
the number of files that have been backed up from that share?


It's the count of files transferred.

Look for $jobStr in .../lib/BackupPC/CGI/GeneralInfo.pm for more.

I don't normally see a row of data below that line, because it's for
currently running jobs, and I'm not normally looking at the BackupPC
GUI at two o'clock in the morning.


The reason I ask is because mine has been stuck on 58721 for quite
some time.  What could be causing this and how can I determine what
is happening? There is nothing in the log file currently that I can
see which could be causing a problem.


What logs are you looking at, and what do you see in them which makes
you think everything is normal?  I'd expect it to be obvious from the
logs what's going on.

It's not a silly browser page-caching thing is it?

--

73,
Ged.




Re: [BackupPC-users] config.pl "built-in" script to backup Windows shares with Virtual Shadow Service Copy

2020-06-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, 30 Jun 2020, Jeff Kosowsky wrote:


It should just work...
[snip]
-- next part --
A non-text attachment was scrubbed...
Name: BackupPCShadowConfig.pl
Type: application/octet-stream
Size: 8533 bytes
Desc: not available

--


Don't you just hate it when that happens? :)

--

73,
Ged.




Re: [BackupPC-users] Unable to connect on port -1

2020-06-29 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 29 Jun 2020,  Craig Barratt wrote:


...
Are you running nfs v3 or v4?  I have had experience with v3 not working
reliably with BackupPC (related to buggy lock file behaviour).  BackupPC
does rely on lock files working, so it's definitely not recommended to turn
locking off.
...


I would go further than that.  My feeling is that NFS is not suitable
for something so important as your backups.

--

73,
Ged.




Re: [BackupPC-users] Keep only one Full Backup as Archive

2020-06-17 Thread G.W. Haywood via BackupPC-users

Hi there,

On Wed, 17 Jun 2020, J.J. Kosowsky wrote:


...
FYI - Backuppc 4.x is really significantly better than Backuppc 3.x.
...
To all those out there still using 3.x, if you haven't tried upgrading
to 4.x yet, I suggest you do. If you have, I suggest you try again.
...


For the record, I'm one of those who had tried 4.x a few years ago and
been bitten by it.  So I put it back in the tar.gz and stayed with 3.x
for a while longer.  About a year ago I did try again, and things went
very much better.  I now believe that Jeff is right in all he says, so

+1

--

73,
Ged.




Re: [BackupPC-users] How does --checksum work in v4?

2020-06-08 Thread G.W. Haywood via BackupPC-users

Hi there,

On Mon, 8 Jun 2020, Jeff Kosowsky wrote:


... presumably a very rare event ...


That's putting it a little mildly.

If it's really all truly random, then if you tried random collisions a
million times per picosecond you would (probably) need of the order of
ten trillion years to have a good chance of finding one...

$ echo ' scale=2; 2^128 / 10^6 / 10^12 / 86400 / 365 / 10^12 ' | bc
10.79

I think it's safe to say that it's not going to happen by chance.

If it's truly random.
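The same arithmetic can be reproduced in Python, together with the birthday-paradox bound for a whole pool (the billion-file pool size is an assumed figure for illustration):

```python
# Time to hit one specific 128-bit digest at a million tries per picosecond:
attempts_per_sec = 10**6 * 10**12
years = 2**128 / attempts_per_sec / (86400 * 365)
print(round(years / 10**12, 2))        # trillions of years, matching the bc result

# Birthday bound: the chance that *any* two of n uniformly random
# 128-bit digests collide is roughly n^2 / 2^129.
n = 10**9                              # assumed pool size: a billion files
print(f"{n * n / 2**129:.1e}")
```

Even across an entire pool the any-pair collision probability stays astronomically small, provided the digests really are uniformly random.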

--

73,
Ged.




Re: [BackupPC-users] replication of data pool

2020-05-21 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 21 May 2020, Mike Hughes wrote:


we're currently syncing our cpool to an off-site location on a
weekly basis. Would it be feasible to only sync the latest of each
backup rather than the entire pool?



To elaborate, on Saturdays we run an rsync of the entire cpool to
another server to provide disaster recovery options. Is it
possible/reasonable to just copy the data from the night before? Or,
with de-duplication and compression, would we really save much
space/transfer time? If so, what is the best way to grab just one
night's worth of backups while still preserving a full recovery?


Why not simply run a second BackupPC instance on the off-site server?

--

73,
Ged.




Re: [BackupPC-users] Restore only corrupted files?

2020-05-15 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 15 May 2020, Richard Shaw wrote:


Funny enough (well, actually I'm still kinda pissed ... with
Seagate, ... this is my SECOND RMA for the same drive.


Let me guess - Barracuda?

I stopped buying Seagate drives years ago when it became clear that,
running 24/7, if they lasted more than six months we'd been pretty
lucky and they ALL failed inside a couple of years.

They might have improved since then of course, but after reading an
article about reliability based on experience at Google (who obviously
buy orders of magnitude more drives than we ever will) I bought a bunch
of HGST drives (in the halcyon days before WD bought *them*) and have
never needed to buy another drive since!

It wasn't necessarily poor manufacturing at Seagate.  There were some
scary firmware problems with many Seagate drives; 'smartctl' has a
database which might tell you if an upgrade is available for a drive
(I don't know how complete the database might be, but I have seen it
warn about that for some old spare drives kicking around here) and
there will be other ways of finding out e.g. the Seagate Website.

--

73,
Ged.




Re: [BackupPC-users] Restore only corrupted files?

2020-05-14 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 14 May 2020, Richard Shaw wrote:


...
Is it possible to do a conditional restore? Something like:

Only restore files which are the same date (mtime?) and the hashes don't
match.

Thoughts?


Assuming that it's worth recovering the data, the data presumably must
have some value.  In your situation I'd be reluctant to do anything
like that to valuable data, since I might unnecessarily be overwriting
it with something old or even corrupt.  I think I'd restore my backup
to a scratch partition, then use something like 'rsync --dry-run' to
show me the differences.
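For comparison, the "same mtime but different content" filter the original question asks about could be sketched like this, run against such a scratch restore (both tree locations are placeholders; this only reports candidates, it restores nothing):

```python
import hashlib
from pathlib import Path

def _md5(p: Path) -> str:
    return hashlib.md5(p.read_bytes()).hexdigest()

def suspect_files(live: Path, restored: Path):
    """Yield live files whose mtime matches the restored copy
    but whose content does not."""
    for r in restored.rglob("*"):
        if not r.is_file():
            continue
        l = live / r.relative_to(restored)
        if (l.is_file()
                and int(l.stat().st_mtime) == int(r.stat().st_mtime)
                and _md5(l) != _md5(r)):
            yield l
```

Anything it reports is a candidate for selective restore; everything else is left untouched, which avoids overwriting good data with something old.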

OTOH I run two backup servers, so the situation is most unlikely here.
Consider that a hint. :)

--

73,
Ged.




Re: [BackupPC-users] Filesystem Recommendation for 100 TB

2020-04-30 Thread G.W. Haywood via BackupPC-users

Hi there,

On Tue, Apr 28, 2020 at 1:02 PM Andrew Maksymowsky wrote:


I have no strong preference for either xfs or zfs (our team is
comfortable with either) was mainly just curious to hear about what
folks were using and if they've run into any major issues or found
particular file-system features they really like when coupled with
backuppc.


Data volumes of the systems I back up approach those with which you're
working, and I have had no issues with ext4.  Being very conservative
about filesystem choice now (after a disastrous outing with ReiserFS,
a little over a decade ago) I haven't yet taken the plunge with any of
the more modern filesystems.  It's probably past time for me to put a
toe in the water once more, but there are always more pressing issues
and I *really* don't need another episode like that with Reiser.

At one time I routinely used to modify the BackupPC GUI to display the
ext4 inode usage on BackupPC systems, but happily I no longer need to
do that. :)  Although I'd have said my systems tend to have lots of
small files, typically they're only using a few percent of inode
capacity at a few tens % of storage capacity; I have no clue what the
fragmentation is like, and likely won't unless something bites me.

There's no RAID here at all, but there are LVMs, so snapshots became
possible whatever the filesystem.  Although at one time I thought I'd
be using snapshots a lot, and sometimes did, now I seem not to bother
with them.  Large databases tend to be few in number and can probably
be backed up better using the tools provided by the database system
itself; directories containing database files and VMs are specifically
excluded in my BackupPC configurations; some routine data collection
like security camera video is treated specially in the config too, and
what's left is largely configuration and users' home directories.  All
machines run Linux or similar, thankfully no Windows boxes any more.

Just to state one possibly obvious point, the ability to prevent the
filesystem used by BackupPC from writing access times would probably
be important to most, although I'm aware that you're interested more
in the reliability of the system and this is a performance issue.  On
1GBit/s networks I see backup data rates ranging from 20MByte/s for a
full backup to 3GByte/s for an incremental.  Obviously the network is
not the bottleneck and from that point of view I think the filesystem
probably doesn't matter; you're looking at CPU, I/O (think SSDs?) and
very likely RAM too, e.g. for rsync transfers which can be surprising.

HTH

--

73,
Ged.




Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Fri, 24 Apr 2020, R.C. wrote:


How to retrieve the actual path of that file in the cpool tree?  I
cannot use the digest to walk down the cpool tree. Octets lead to a
non-existent path.


Have you just missed the little wrinkle that the subdirectories are
all even numbers?

You need to clear the least significant bit of the least significant
octet in each two-digit directory name as you go down the tree.
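In code, that mapping looks roughly like this (the cpool location is an assumed default, and the two-level layout follows the v4 scheme described in this thread):

```python
def cpool_path(digest_hex: str, top: str = "/var/lib/backuppc/cpool") -> str:
    """Map a v4 pool digest to its on-disk path: two directory levels,
    each the next digest byte with its least-significant bit cleared,
    so every directory name is an even hex number."""
    d1 = int(digest_hex[0:2], 16) & 0xFE
    d2 = int(digest_hex[2:4], 16) & 0xFE
    return f"{top}/{d1:02x}/{d2:02x}/{digest_hex}"

print(cpool_path("abcdef0123456789abcdef0123456789"))
```

For the digest above, 0xab becomes directory aa and 0xcd becomes cc, so naively using the raw octets (ab/cd) leads to a path that doesn't exist.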

--

73,
Ged.




Re: [BackupPC-users] How to find files in the pool?

2020-04-24 Thread G.W. Haywood via BackupPC-users

Hi there,

On Thu, 23 Apr 2020, Robert Sommerville wrote:


You can use the locate [...] command ...


That's likely to be unreliable - not only because many systems don't
have 'locate' installed by default but also because many systems will
exclude BackupPC databases from indexing.

--

73,
Ged.



