Folks,
I am running BPC 4.4.0 across 57 pretty much identical hosts (all
Debian) and it's all working great, except today I noticed that on 4
of those hosts, there are XferErrors, and they are all along the
following lines:
```
file has vanished: "/var/lib/ntp/f%2f/fvar/flib/fntp/attrib"
```
Hello,
I have a very old BackupPC server (coming up to 15 years, now
running 4.4.0). I've been migrating the entire pool to v4, basically
one backup per day (since the migration requires downtime). This has
been going fine for months, but recently, one of the backups exposed
some problems in
Hey there,
If I need to remove a file from all backups on a host using the v4
format, is it better to step through all the backups from new to
old, or the other way?
I am thinking that since the backups are all incremental, if I were
to touch an old one, all new ones would need to be
Hello,
has anyone written a tool to identify the hosts that contribute most
to recent pool size increase for v4? I see the data in pc/*/backups
and I could extract the size of new files for each backup, then
correlate with the time of the backup and then… With v3, it was
possible to use
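A minimal sketch of that extraction in shell, assuming TopDir is
/var/lib/backuppc and that startTime and sizeNew are the 3rd and 10th
fields of the tab-separated pc/*/backups files (field positions are an
assumption worth checking against the docs for your version):
```
# Sum sizeNew per host for backups started in the last 30 days.
# Field positions (startTime=$3, sizeNew=$10) are an assumption to verify.
since=$(date -d '30 days ago' +%s)
for f in /var/lib/backuppc/pc/*/backups; do
    host=$(basename "$(dirname "$f")")
    awk -F'\t' -v h="$host" -v s="$since" \
        '$3 >= s { n += $10 } END { printf "%s\t%.1f GiB\n", h, n/2^30 }' "$f"
done | sort -t "$(printf '\t')" -k2,2 -rn
```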
also sprach Les Mikesell [2016-08-06 18:19 +0200]:
> Why is it likely that you would want to read backuppc logs on
> systems that don't have backuppc installed?
Well, I collect all logs to a central location, but sure, this isn't
the normal use case.
> And why not do it
Thanks Adam for your patience and insights! And everyone else for
putting up with me! ;)
also sprach Adam Goryachev [2016-08-05
01:12 +0200]:
> > But instead of (what seems to be) chunk-wise checksum transmission,
> > why don't we (also) store the whole-file
also sprach Adam Goryachev [2016-08-05
01:31 +0200]:
> Maybe try this:
> tail -f -n +0 blah.log | /usr/share/backuppc/bin/BackupPC_zcat -
> You need to include the beginning of the file or else it won't
> detect the compression header. Also, the -
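For a log that is no longer being written, passing the file name
directly should work as well; the path below is only an example and
depends on where your LogDir/TopDir live:
```
# Dump a rotated, compressed server log (example path; adjust to your setup).
/usr/share/backuppc/bin/BackupPC_zcat /var/lib/backuppc/log/LOG.0.z | less
```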
also sprach Adam Goryachev [2016-08-05
01:08 +0200]:
> > Why aren't we just using standard gzip or bzip2 or xz, for which
> > decompressors exist on pretty much every Unix system?
> I'm pretty sure there is a backuppc package for debian :)
Oh yes, sure, but
also sprach Adam Goryachev [2016-08-05
01:04 +0200]:
> We do update a partial backup, as long as the new partial contains
> more files than the previous partial, but it doesn't even save
> a partially transferred file.
I think this is fine, and a partially
also sprach martin f krafft <madd...@madduck.net> [2016-08-04 22:16 +0200]:
> Right now, I am staring at the lsof output of the rsync process on
> a backup client, spawned by BackupPC. It's processing a 3.5G file
> that has not been touched in 5 years and has been backed up n
Hello,
the fact that BackupPC compresses log files using zlib and requires
/usr/share/backuppc/bin/BackupPC_zcat for their decompression is
a bit of a nuisance, not only when log files are being
sync'd/analysed on a system where there is no BackupPC installed.
I also can't find a suitable
also sprach Adam Goryachev <mailingli...@websitemanagers.com.au> [2016-08-04
15:47 +0200]:
> On 4/08/2016 23:43, martin f krafft wrote:
> > 3) Ensure that you can backup any file within the ClientTimeout,
> > Is this necessary? Isn't ClientTimeout about killing the connecti
also sprach Adam Goryachev [2016-08-04
16:04 +0200]:
> I've used ls -l /proc/pid/fd or strace or lsof etc... all work,
> some are better on the client rather than the backuppc server.
In fact, I found none of those useful on the server.
> I've also used
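A concrete variant of the /proc approach on the client, assuming the
stock rsync-over-ssh setup where the client-side process is an
'rsync --server'; the fd number is hypothetical and has to be picked
from the listing:
```
# On the client being backed up: find the rsync spawned for this backup,
# list its open files, then read fdinfo to see the current offset ("pos:",
# in bytes) within the large file. Replace 3 with the fd seen in the listing.
pid=$(pgrep -f 'rsync --server' | head -n1)
ls -l "/proc/$pid/fd"
cat "/proc/$pid/fdinfo/3"
```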
Hey Adam, thanks for your quick response. I have a few points to
add:
> 1) Ensure you enable SSH keepalives to keep your NAT firewall open
Yes, of course these are enabled.
> 2) You can look to split remote files before the backup, and exclude the
> original large files (sometimes this is
Hello,
the issue I raised 4 years ago:
https://sourceforge.net/p/backuppc/mailman/message/29727529/
still persists with BackupPC 3.3.0. Basically what happens is that
a combination of large new files and a slow connection between
backup server and client means that a full backup eventually
Hello,
is there an easy way to identify which files were added to a backup?
The File Size/Count Reuse Summary table tells me about a number of
files and their size, but I wonder if there's an easy way to get
a file list, other than increasing the verbosity and parsing the
XferLog?
Thanks,
--
also sprach Hanspeter Kunz [2015-12-03 21:42 +1300]:
> In my opinion, a clean, easy and efficient way to have a BackupPC
> mirror is to use some disk mirroring such as DRBD. With this you can
> avoid running backups twice on the clients (one for the master backup,
> another one for the
also sprach Timothy J Massey [2015-12-03 11:59 +1300]:
> Others use BackupPC_tarCreate by hand.
What's the difference between that and BackupPC_archiveHost anyway?
--
@martinkrafft | http://madduck.net/ | http://two.sentenc.es/
"politics is the entertainment branch of
Hey folks,
having been hit yet again and unexpectedly by the fact that BackupPC
transfers all files' data just to figure out whether the file is
already in the pool, I found an old mail of mine to this list from
five years ago
Hey folks,
I have a BackupPC server that has been in operation for almost
5 years. It's been running 3.1.0 for over two years, and over the
last few weeks, it's been exhibiting a problem whose symptoms are
best described by the following dmesg output:
kernel: BackupPC_dump invoked oom-killer:
Hi,
I hope you don't mind me using this list for the following question.
I have been a happy BackupPC_deleteFile user (version 0.1.5) for
many moons.
For the first time today, however, it failed me:
# su backuppc -c '/usr/local/sbin/BackupPC_deleteFile -h jugband.madduck.net
-n- -s- -r -m
also sprach Les Mikesell lesmikes...@gmail.com [2012.08.28.1455 +0200]:
The ClientTimeout setting may be your real issue though. A running
backup should never time out as long as anything is transferring, even
if it takes days to complete but with rsync that value can be for the
whole backup.
also sprach Les Mikesell lesmikes...@gmail.com [2012.08.28.1650 +0200]:
Maybe it is a bug that has been fixed in the current version but I
know I have seen situations where the backup ended with an Alarm
signal but files in the backup had been updating not long before the
timeout. I assumed
Dear list,
I am very happy with BackupPC, except that it hasn't been able to
back up my workstation for more than 3 months. It started a full
backup on 14 May 2012, shortly after I had stored a set of large
files to disk. Since my workstation is behind an ADSL connection,
the backup eventually
also sprach Mike ispbuil...@gmail.com [2012.08.27.1502 +0200]:
The fix I've used is to --exclude a good chunk of the backup so it does
finish in a day, then --exclude a little less the next time it runs, and
so forth.
I have used this method, which you and Les suggested. And yes, it
works.
also sprach backu...@kosowsky.org backu...@kosowsky.org [2012.08.28.0208
+0200]:
Given that the fate and timing of a new 4.x version of BackupPC is
uncertain at best and that this is a relatively common problem, maybe
we should think about how to *patch* BackupPC 3.x so that partially
backup
also sprach Les Mikesell lesmikes...@gmail.com [2012.08.27.2326 +0200]:
The only setting I can see that relates to this would be
PartialAgeMax. Are the retries happening before this expires?
Otherwise you'd have to poke through the code to see why it prefers the
previous full.
This is a good
also sprach Holger Parplies wb...@parplies.de [2011.06.01.2010 +0200]:
To get my desired behaviour, I set BlackoutGoodCnt to 0 globally and
deleted all global BlackoutPeriods.
[…]
As I understand the concept of blackouts in BackupPC, I think it
is meant to automagically adapt backup strategy
Is there some secret sauce for editing the wiki? I am logged in but
there are no edit buttons.
Cheers,
--
martin | http://madduck.net/ | http://two.sentenc.es/
there are more things in heaven and earth, horatio,
than are dreamt of in your philosophy.
Dear list,
since I cannot edit the wiki, I wanted to make sure to share this
supplemental method of
https://sourceforge.net/apps/mediawiki/backuppc/index.php?title=How_to_import_data_for_a_backup
with you.
We were tasked with importing a host with several hundred
gigabytes behind a
also sprach martin f krafft madd...@madduck.net [2011.05.19.0925 +0200]:
I have a few hosts that should only be backed up between 23:00 and
7:00, i.e. when the network is not used much. Hence, I set
BlackoutPeriod
[…]
However, all the PCs ended up being queued at 09:00 this morning
Hello,
I have a few hosts that should only be backed up between 23:00 and
7:00, i.e. when the network is not used much. Hence, I set
BlackoutPeriod for them to
$Conf{BlackoutPeriods} = [
    {
        'hourEnd' => '23',
        'weekDays' => [
            '1',
            '2',
            '3',
            '4',
also sprach Holger Parplies wb...@parplies.de [2011.04.20.2001 +0200]:
We'd all like to be able to choose an existing *pool file* as
reference - this would save us transfers of *any* file already
existing in the pool (e.g. from other hosts). Unfortunately, this
is technically not possible
also sprach Les Mikesell lesmikes...@gmail.com [2011.04.27.1507 +0200]:
I think a safer alternative would be to do what the OP proposes
-- that way you always have one safe copy not part of the RAID
in case something messes up
You need that anyway, although starting from scratch it
Dear list,
we are facing a policy change requiring people to rename data files
in a trivial way (replace ':' with '-').
In terms of backuppc, this means that the files will have to be
transferred again, completely, right?
Or is there a way in which I can prepare the server for this change
and
also sprach John Rouillard rouilj-backu...@renesys.com [2011.04.17.1625
+0200]:
In terms of backuppc, this means that the files will have to be
transferred again, completely, right?
Correct.
Actually, I just did a test, using iptables to count bytes between
the two hosts, and then renamed
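A sketch of how such a byte count can be taken with iptables
(192.0.2.10 stands in for the client; rules without a -j target only
count traffic and don't change filtering):
```
# On the BackupPC server, before the test backup:
iptables -I INPUT  -s 192.0.2.10
iptables -I OUTPUT -d 192.0.2.10
# ...run the backup, then read the exact byte counters:
iptables -nvxL INPUT  | head -n 5
iptables -nvxL OUTPUT | head -n 5
# ...and remove the counting rules again:
iptables -D INPUT  -s 192.0.2.10
iptables -D OUTPUT -d 192.0.2.10
```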
also sprach martin f krafft madd...@madduck.net [2011.02.02.1607 +0100]:
One of the hosts' last full backup is now 8.2 days old, while its
incremental backup is 0.9 days old.
BackupPC just woke up, but it did not schedule the host,
A colleague suggests this is because backuppc only
Hello,
I have a BackupPC configured with FullPeriod at 7.97 and daily
incremental backups, no blackout periods, hourly wakeup, and
2 consecutive backups permitted.
One of the hosts' last full backup is now 8.2 days old, while its
incremental backup is 0.9 days old.
BackupPC just woke up, but
also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1704
+0100]:
What does the host log say?
Nothing, really:
2011-02-02 15:00:00 Next wakeup is 2011-02-02 16:00:00
2011-02-02 16:00:00 Next wakeup is 2011-02-02 17:00:00
2011-02-02 17:00:00 Next wakeup is 2011-02-02
also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1720
+0100]:
2011-02-02 15:00:00 Next wakeup is 2011-02-02 16:00:00
2011-02-02 16:00:00 Next wakeup is 2011-02-02 17:00:00
2011-02-02 17:00:00 Next wakeup is 2011-02-02 18:00:00
2011-02-02 17:00:21 Started full
also sprach Robin Lee Powell rlpow...@digitalkingdom.org [2011.02.02.1757
+0100]:
There's a log for each client, which you can see by accessing the
client in the GUI (it'll be the top-most LOG File link) and a log
for the server as a whole.
Oh, of course, sorry. But there is nothing in that
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.12.09.1538 +0100]:
I did a test run of this tool and it took 12 days to run across the
pool. I cannot take the backup machine offline for so long. Is it
possible to run this while BackupPC runs in the background?
It can run
also sprach Jeffrey J. Kosowsky backu...@kosowsky.org [2010.11.17.0059 +0100]:
I wrote two programs that might be helpful here:
1. BackupPC_digestVerify.pl
If you use rsync with checksum caching then this program checks the
(uncompressed) contents of each pool file against the stored md4
also sprach Stephen Joyce step...@physics.unc.edu [2010.11.11.1459 +0100]:
Sure. Take a look at
http://sourceforge.net/apps/mediawiki/backuppc/index.php?title=Distribute_Full_Backups
Hm, thanks for that. I am sorry I didn't find this myself and had to
bother the list.
Maybe there'd be reason
Hello,
After a longer downtime, my backuppc daemon came back online and
started a full backup of all hosts, which was quite
a resource-intensive process.
In 6.97 days, there'll again be a full backup of all hosts. I wonder
how I could go about distributing that over the period of the week
so
also sprach Les Mikesell lesmikes...@gmail.com [2010.11.03.2156 +0100]:
Yes, anything that is not linked by a current backup will be removed in
the nightly runs.
I might thus want to disable that cronjob for now.
The more subtle problem is that the corruption may
have overwritten the
Hello,
My filesystem holding the backuppc pool was corrupted. While e2fsck
managed to fix it all and now doesn't complain anymore, I am a bit
scared that the backuppc pool isn't consistent anymore.
Is there a tool to check the consistency of the pool?
Is there a tool to repair an inconsistent
also sprach Carl Wilhelm Soderstrom chr...@real-time.com [2010.11.03.2020
+0100]:
Run full backups on all hosts, then BackupPC_nightly?
also sprach Les Mikesell lesmikes...@gmail.com [2010.11.03.2022 +0100]:
I'd run new full backups as soon as practical. That will at least
fix up anything
Here is how I think an algorithm could work, which requires no
changes on the remote side, so it should work with the plain rsync
protocol, but likely will require some changes to rsyncp.
Let:
B be the BackupPC_dump process
P be the peer (PC to be backed up)
R
over a slow link, even though it's already present remotely.
also sprach Les Mikesell lesmikes...@gmail.com [2010.08.29.0102 +0200]:
On 8/28/10 3:22 PM, martin f krafft wrote:
also sprach Les Mikesell lesmikes...@gmail.com [2010.08.28.2151 +0200]:
If it is one or a few files or constrained
also sprach Les Mikesell lesmikes...@gmail.com [2010.08.29.1814 +0200]:
I haven't followed the code, but I think the idea is that if the file - or
maybe
the difference from the existing guessed match - fits in memory it doesn't
need
to write a tmp file before writing to the pool. I'm not
Hello,
Using rsync+ssh as my transfer method, I find that backuppc, when
backing up a new host, transfers all files, even if specific files
are already in the pool.
The docs[0] say:
As BackupPC_tarExtract extracts the files from smbclient or tar,
or as rsync or ftp runs, it checks each file
also sprach martin f krafft madd...@madduck.net [2010.08.28.1718 +0200]:
This is in contrast to what I am experiencing. Is backuppc fetching
the file into memory to compare it with the pool from there?
Thinking about it, this is what needs to happen because it needs to
have the file
also sprach martin f krafft madd...@madduck.net [2010.08.28.1854 +0200]:
Using lsof, I found that the BackupPC_dump process actually has the
corresponding pool file open for reading, so it has identified it.
This makes me wonder even more why the client still transfers the
whole file
also sprach Les Mikesell lesmikes...@gmail.com [2010.08.28.2151 +0200]:
If it is one or a few files or constrained to a directory that you know you
already have backed up locally, why not just exclude it on the remote
machines?
It happens regularly.
Don't you think BackupPC could be