Thanks, everyone! Looks like backuppc should be able to handle my
network, no problem. To hit on specific points, in threaded order:
- I'll be sure to get plenty of RAM. We're going to be buying a new,
probably Dell, rackmount system for this and I wouldn't have been
getting any less than
Hey, all!
I've been looking at setting up amanda as a backup solution for a fairly
large environment at work and have just stumbled across backuppc. While
I love the design and scheduling methods of amanda, I'm also a big fan
of incremental-only reverse-delta backup methods such as that used
I have a test install of backuppc up and running, and backing up a
half-dozen Debian servers with no problems. Now our NAS admin has asked
me to add a freenas machine to the test, and it's just giving me
"fileListReceive failed" whenever I try to run a backup.
I've verified that I can ssh in
as it should.
On 1/14/21 3:25 PM, Dave Sherohman wrote:
I have a test install of backuppc up and running, and backing up a
half-dozen Debian servers with no problems. Now our NAS admin has
asked me to add a freenas machine to the test, and it's just giving me
"fileListReceive failed"
When I ran into a similar case while leaving a system
partially-configured, I handled it by (temporarily) blanking out the
"user" for that host, so there was no address for the mail to be sent
to. Worked fine, and then I re-set "user" once the host was
successfully running backups.
On
I think you'd first have to define what you mean by "how much disk space
is used by a single host's backups". Because of BPC's deduplication
functions, the answer will be very different if you mean "how much space
would I need to make a full copy of this host's data" vs. if you mean
"how much
Daily schedule seems to work, too. I've got a NAS with 20 T of
backed-up data which takes a little over 3 days to do a full backup and
its daily incrementals will patiently wait for that to finish before
they try to run. A couple other hosts in the 7-8 T range also take over
24 hours to
I've just added a "guest" user to my bpc htpasswd, with the intention of
allowing coworkers to view the overall status of the system without
needing to go through me and... well... it doesn't show anything at all,
since "guest" doesn't (and shouldn't!) own any of the machines that are
being
while the file "C:\cygwin\backuppc\shadow_del.pid" is created" when the
postrun script executes.
This is using the windows client from
https://sourceforge.net/p/backuppc-windows-client/code/ci/master/tree/
On 3/11/21 2:29 PM, Adam Goryachev via BackupPC-users wrote:
On 12/3/21 00:
If I were to set $Conf{MaxBackups} = 1 for one specific host, how would
that be handled? Would it prevent that specific host from running
backups unless there are no other backups in progress? Would it prevent
any other backups from being started before that host finished? Would
it do both?
On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:
I don't see how this would make sense at a per-host level. And any
behavior to have it differ by host is undocumented and not necessarily
predictable.
That's why I asked - I can't predict what it would do if it were
allowed. :D
Look at the
On 3/11/21 4:40 PM, backu...@kosowsky.org wrote:
Sounds like the shadow creation script or your implementation of it is
broken.
The precmd script fails to create the shadow volume when it is run from
the backuppc user account on the backup server, but works when it's run
from just about any
On 3/13/21 5:24 PM, Sorin Srbu wrote:
Is it possible to add a red max and yellow warning line to the BackupPC pool
size chart, reading from the df or OS partition size?
Speaking of the pool size chart, was that removed in BPC 4.x? I did a
test install on Debian 10 (bpc 3.3.1), then set up my
The latest new beast to be added to my backup zoo is a synology NAS
device. It is being uncooperative, and the only error message it
provides is
rsync error: error in rsync protocol data stream (code 12) at io.c(226)
[Receiver=3.1.3.0]
which is rather less than helpful.
Online searches
On 3/11/21 4:49 PM, Dave Sherohman wrote:
On 3/11/21 4:36 PM, backu...@kosowsky.org wrote:
Look at the code that I recently submitted to the group to streamline
creation/deletion of shadow backups.
I saved those posts, but, honestly, I don't see the advantage of using
a large script
into multiple backup 'hosts' by
using the ClientNameAlias setting. I create hosts based on the share
or folder for each job, then use the ClientNameAlias to point them to
the same host.
*From:* Dave Sherohman
*Sent:* Thursday
On 4/8/21 8:46 PM, Les Mikesell wrote:
On Thu, Apr 8, 2021 at 8:25 AM Dave Sherohman wrote:
rsync error: error allocating core memory buffers (code 22) at util2.c(118)
[sender=3.2.0dev]
This is more about the number of files than the size of the drive. Do
you happen to know
I have a server which I'm not able to back up because, apparently, it's
just too big.
If you remember me asking about synology's weird rsync a couple weeks
ago, it's that machine again. We finally solved the rsync issues by
ditching the synology rsync entirely and installing one built from
I installed BPC4 from the pre-release debian 11 repo back in March and
it Just Worked(TM), no problems at all. I'm currently backing up 89
hosts with it and haven't had to touch any of the BPC infrastructure
aside from setting up appropriate configs. It's solid.
I'm not sure why Debian
appliances,
and these 18 use per-host override configs. All 89 work, with no special
handling beyond (for the 18 non-linux machines) creating the per-host
configs.
On 9/11/21 9:23 AM, Juergen Harms wrote:
On 10/09/2021 09:18, Dave Sherohman wrote:
I'm not sure why Debian decided to do the pc/ sym
On 9/13/21 6:00 PM, Juergen Harms wrote:
This is not the place to fight for being right, but to understand and
document help for users who hit this kind of problem.
Agreed. But what kind of response is expected when you say "what makes
it *look as if* your installation works is that...", other
On 9/16/21 4:25 PM, Kimmo Hedman via BackupPC-users wrote:
Thank you, those warnings have now gone away.
But this still exists (attached image).
Now that those packages are installed, run `sudo systemctl start
backuppc` to start the BPC service.
If it still says it's not running, `systemctl
Timing, basically. The row is only green for a certain amount of time
(not positive, but probably 1.2 hours - as long as the "Last Backup
(days)" number rounds to 0.0) after a backup is completed, then it
reverts to the white "idle" state. Think of green/"done" as "a backup
just finished"
According to a quick web search, it looks like rsync error code 5
indicates authorization problems. Here are a couple links which may
provide solutions:
https://unix.stackexchange.com/questions/71719/rsync-error-starting-client-server-protocol
This part, at least, I can explain: mutt (and mail) knows nothing about
the aliases in /etc/aliases. It only knows its own aliases (defined in
.muttrc).
Any mail sent to an address without a hostname is for the local system
by default, so mail to just "root" gets @localhost appended.
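For illustration (the forwarding address is a placeholder), the system-wide alias file that the MTA consults, as opposed to mutt's private aliases, looks like:

```
# /etc/aliases -- read by the MTA, not by mutt or mail directly
root: admin@example.com
# run `newaliases` after editing to rebuild the alias database
```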
The
On 9/23/21 7:09 PM, Stan Larson wrote:
A few of the servers that are being backed up take many hours to run a
full backup, which can modestly impact the end users of those
servers. Currently my FullPeriod value is set at the default 6.97
days. Since V4 uses reverse-delta, I should be able
Per-host config files are working fine for me. Maybe your
RsyncClientCmd has the rsync path hardcoded in it instead of referencing
$rsyncPath?
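As a hedged sketch of what that looks like in config.pl (login user and path are assumptions, adjust to your setup), the command references `$rsyncPath` so a per-host file can override just the binary's location:

```perl
# Reference the rsync binary via $rsyncPath instead of hardcoding it:
$Conf{RsyncClientCmd} = '$sshPath -q -x -l root $host $rsyncPath $argList+';
$Conf{RsyncPath}      = '/usr/bin/rsync';  # per-host override point
```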
My environment is mostly-linux with a few BSD-based hosts and, to just
pick a .pl that contains "local" at random-ish, I have:
What seems suspicious to you about it? It's not currently in the
process of backing up files or doing anything with the host right this
minute; it is idle.
On 11/2/21 7:47 AM, Norman Goldstein wrote:
My question is in the last line of this email.
I was having errors backing up, so I decided
What file transfer method are you using with the Windows hosts? I've
seen a couple mentions on the mailing list that BPC4 added
--one-file-system to the default set of flags for rsync file transfers,
which would prevent rsync from crossing over onto a remotely-mounted
filesystem, such as an
On 3/5/22 14:36, G.W. Haywood via BackupPC-users wrote:
On Sat, 5 Mar 2022, Les Mikesell wrote:
Unix/Linux has something called 'sparse' files, used by some types of
databases where you can seek far into a file and write without
using/allocating any space up to that point. The file as stored may
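A minimal sketch of the point above (file name and size are arbitrary): seek far past the end of a file, write one byte, and compare the apparent size with the blocks actually allocated.

```shell
# Create a ~10 MiB sparse file by writing a single byte at a far offset.
dd if=/dev/zero of=/tmp/sparse.demo bs=1 count=1 seek=10485759 2>/dev/null
ls -l /tmp/sparse.demo   # apparent size: 10485760 bytes
du -k /tmp/sparse.demo   # allocated: typically only a few KiB
```

On most Linux filesystems `du` reports a handful of KiB while `ls -l` reports the full 10 MiB, which is why naive copies of such files can balloon.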
In the directory where config.pl lives, there should be a pc/
subdirectory. The per-host config files go in that subdirectory. (Note
that, on debian, pc/ is a symlink back to the same directory, so the
global config.pl and the per-host configs are all in the same place.)
The per-host config
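As a sketch (host name and settings are purely illustrative), a per-host file in that pc/ subdirectory only needs the values that differ from the global config:

```perl
# Hypothetical pc/mynas.pl -- only the overrides go here;
# everything else is inherited from the global config.pl.
$Conf{XferMethod}     = 'rsync';
$Conf{RsyncShareName} = ['/volume1'];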
The first thing I'd try is adding it as \@Recycle. Much (all?) of the
BPC code is written in Perl, which will (in some contexts) interpret
"@Recycle" to mean "an array named Recycle" rather than the literal text
"@Recycle". Adding the backslash prevents the @ from being interpreted
as an
This is basically what I've done, with the addition (which may have been
an unstated assumption) that the backuppclogin user on each client
machine has a disabled password, so that it can only be accessed via ssh
public key login, or by "sudo su [user]" on the local machine.
This is, IMO,
On 4/11/22 18:22, G.W. Haywood via BackupPC-users wrote:
Looking at
https://metacpan.org/dist/File-RsyncP/changes
it seems that there is only one later version (0.76) so your options
seem to be somewhat limited. :)
Is it even still being used? My BPC server is running 4.4.0, installed
from
Since you mentioned different versions of rsync, I assume you already
checked this, but, just in case: Double-check that rsync is still
installed on the deb11 system. I upgraded a ton of systems from deb9 to
deb11 earlier in the summer, and apt decided to uninstall rsync during
the upgrades
While there may be a better way, my first thought would be to check the
backup summary page for each host and look at backup durations (longer
time should correlate with more data transferred) and/or the "new files"
columns in the "File Size/Count Reuse Summary" section.
On 9/22/22 11:03,
On 8/4/22 07:44, backu...@kosowsky.org wrote:
On Wed, Aug 3, 2022 at 2:31 PM backuppc--- via BackupPC-users <
backuppc-users@lists.sourceforge.net> wrote:
I pulled a V3 pool off an old hard disk that I had wrongly assumed was
broken. Now I would like to import as much data as possible into my
Thanks, that did get me additional information. With those settings, my
XferLOG now shows, for example:
log: recv >f.s... rw-r--r-- 6, DEFAULT 1081344
var/cache/man/index.db
However, when I get a tar archive of /var/cache/man, `tar tvf` shows:
-rw-r--r-- root/root 1081344
?
On 11/16/22 20:56, backu...@kosowsky.org wrote:
'backuppcfs' is a (read-only) FUSE filesystem that allows you to see
the contents/ownership/perms/dates/Xattrs etc. of any file in your
backup.
It is great for troubleshooting as well as for partial restores...
Dave Sherohman via BackupPC-users
: recv >f.s... rw-r--r-- 6, DEFAULT 1081344
var/cache/man/index.db
when using "--log-format=log: %o %i %B %8U,%8G %9l %f%L" as suggested by
Kris Lou.
On 11/18/22 14:17, backu...@kosowsky.org wrote:
The former which might help with the latter...
Dave Sherohman via B
Ah, thanks - adding --super appears to have resolved the problem. Ran
another full of the test system, and backuppcfs now shows correct file
ownerships.
I'm still missing --protect-args, --delete-excluded, and --partial from
the defaults, so I'll probably also add those after checking the
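For reference, a hedged sketch of the config.pl change being described (whether to push onto the existing array or redefine it wholesale is a matter of taste):

```perl
# Append --super so the receiving rsync attempts to preserve
# ownership; BPC4 keeps its rsync flags in the RsyncArgs array.
push @{$Conf{RsyncArgs}}, '--super';
```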
I've just encountered the same problem from issue #171 on github, "on
restore all files owner root"
https://github.com/backuppc/backuppc/issues/171
My RsyncRestoreArgs are the same as reported there, with the exception
that I don't have --ignore-times. I have done the tar restore check
111 = 37 * 3
Presumably it made three attempts to back up each host, but all three
attempts (per host, 111 attempts total) were skipped due to insufficient
space to store the backups. And, for simplicity, the code just counted
the attempts without attempting to deduplicate multiple attempts
By default, the RsyncArgs for BPC4 includes --one-file-system, which
tells it not to descend into mounted filesystems. To include other
filesystems in your backups, you can either add a second "share" to the
machine for /var/files so that it's explicitly backed up, or you can
remove
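A hedged sketch of the second-share approach in a per-host config (the paths are illustrative):

```perl
# Back up /var/files explicitly as its own share, since
# --one-file-system stops rsync at the mount point.
$Conf{RsyncShareName} = ['/', '/var/files'];
```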
On 6/29/23 15:11, Guillermo Rozas wrote:
*I* don't do it, simply because there's little practical difference
between rsync'ing directories that don't change and not rsync'ing
them.
Just to give the reasons *I* use it:
- the cost/benefit of doing full backups for different folders is
When I was debugging an issue some time ago, I added a setting to RsyncArgs
which caused all files to be listed in the XferLOGs. I believe the relevant
setting was:
--log-format=log: %o %i %B %8U,%8G %9l %f%L
but I can't find documentation to confirm that. In any case, I now get entries
in
Beyond the safety precaution you mentioned, rsync doesn't delete files
at all unless your RsyncArgs include "--delete". There have been a few
previous people mailing the list who didn't have --delete in there, so I
wonder whether that might be the problem here.
On 11/15/23 12:42, Guillermo