On 05/08/16 04:52, martin f krafft wrote:
> also sprach Adam Goryachev [2016-08-04 16:04 +0200]:
>> I've used ls -l /proc/pid/fd or strace or lsof etc... all work; some are
>> better run on the client than on the BackupPC server.
> In fact, I found none of those useful on the server.
>
>> I've
On 05/08/16 06:31, martin f krafft wrote:
> also sprach martin f krafft [2016-08-04 22:16 +0200]:
>> Right now, I am staring at the lsof output of the rsync process on
>> a backup client, spawned by BackupPC. It's processing a 3.5G file
>> that has not been touched in 5 years and has been backed u
On 05/08/16 05:33, martin f krafft wrote:
> Hello,
>
> the fact that BackupPC compresses log files using zlib and requires
> /usr/share/backuppc/bin/BackupPC_zcat for their uncompression is
> a bit of a nuisance, not only when log files are being
> sync'd/analysed on a system where there is no Back
On 05/08/16 04:42, martin f krafft wrote:
> also sprach Adam Goryachev [2016-08-04 15:47 +0200]:
>> On 4/08/2016 23:43, martin f krafft wrote:
>
>> It should work as you said, but if you never have enough time to
>> transfer the second file, then you won't actually proceed.
>> BackupPC will sti
also sprach martin f krafft [2016-08-04 22:16 +0200]:
> Right now, I am staring at the lsof output of the rsync process on
> a backup client, spawned by BackupPC. It's processing a 3.5G file
> that has not been touched in 5 years and has been backed up numerous
> times. According to strace, the en
Hello,
Right now, I am staring at the lsof output of the rsync process on
a backup client, spawned by BackupPC. It's processing a 3.5G file
that has not been touched in 5 years and has been backed up numerous
times. According to strace, the entire file is being read, and it's
taking a toll:
- u
Hello,
the fact that BackupPC compresses log files using zlib and requires
/usr/share/backuppc/bin/BackupPC_zcat for their uncompression is
a bit of a nuisance, not only when log files are being
sync'd/analysed on a system where there is no BackupPC installed.
I also can't find a suitable decompr
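For what it's worth, the compressed logs appear to be ordinary zlib deflate
streams rather than gzip files, which is why plain zcat refuses them. A
minimal sketch for a machine without BackupPC installed, assuming the file
really is a single plain zlib stream (logs written in several flushes, or
pool files carrying cached rsync checksums, may still need BackupPC_zcat):

# if qpdf is installed, its zlib-flate tool can inflate a zlib stream
zlib-flate -uncompress < XferLOG.z > XferLOG.txt

# roughly equivalent python3 one-liner (handles only the first stream)
python3 -c 'import sys,zlib; d=zlib.decompressobj(); sys.stdout.buffer.write(d.decompress(sys.stdin.buffer.read()) + d.flush())' < XferLOG.z > XferLOG.txt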
also sprach Adam Goryachev [2016-08-04 16:04 +0200]:
I've used ls -l /proc/pid/fd or strace or lsof etc... all work; some are
better run on the client than on the BackupPC server.
In fact, I found none of those useful on the server.
> I've also used tail -f XferLOG | BackupPC_zcat which do
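A rough sketch of the client-side /proc approach, in case it's useful; the
pgrep pattern is an assumption about how the BackupPC-spawned rsync shows up
on the client, so adjust it to whatever ps shows there:

# on the backup client: find the rsync started by BackupPC's ssh
pid=$(pgrep -f 'rsync --server --sender' | head -n1)
# its open file descriptors show which file it is reading right now
ls -l /proc/"$pid"/fd
# refresh every couple of seconds to watch it move through the tree
watch -n 2 "ls -l /proc/$pid/fd"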
also sprach Adam Goryachev [2016-08-04 15:47 +0200]:
> On 4/08/2016 23:43, martin f krafft wrote:
> > 3) Ensure that you can backup any file within the ClientTimeout,
> > Is this necessary? Isn't ClientTimeout about killing the connection
> > after a period of time without any traffic?
> Almost,
On 04/08/2016 14:59, martin f krafft wrote:
> some of the backup processes here run for hours, and there are often
> reasons why I want to check on what's going on.
In the past, I've used things similar to what's mentioned at these
pages:
http://sysadminnotebook.blogspot.nl/2011/09/watch-backuppc
You can follow XferLog with
/usr/share/BackupPC/bin/BackupPC_zcat /XferLOG.z | tail
but it is buffered, so not completely current
Also on the backup host, you can get the process IDs of the current dump
processes (there will be two per host during file transfer), and do
(sudo) ls -l /proc/{pid1
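Since the decompression has to restart from the beginning anyway, re-running
it on an interval gets reasonably close to a tail -f; a sketch, assuming a
Debian-ish layout (TopDir and the per-host directory name will differ on
other installs):

# on the server, as the backuppc user (or via sudo): show the last
# lines of the in-progress transfer log every 10 seconds; output lags
# by whatever zlib data has not been flushed to disk yet
watch -n 10 '/usr/share/BackupPC/bin/BackupPC_zcat /var/lib/backuppc/pc/CLIENTHOST/XferLOG.z | tail -n 20'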
On 04 Aug 2016 14:04, Adam Goryachev wrote:
I can't comment on the rest, but directory entries are always created,
because BackupPC needs them for the backup structure (directories can't be
hard-linked, so skipping them wouldn't save any disk space)...
Makes perfect sense! Thanks for the response Adam.
On 4/08/2016 22:59, martin f krafft wrote:
> Hey,
>
> some of the backup processes here run for hours, and there are often
> reasons why I want to check on what's going on.
>
> How do you monitor backups in real-time? XferLOG.z can't be tail'd,
> and attaching strace or lsof to the running processe
On 4/08/2016 23:56, cardiganimpatience wrote:
> Backups are taking about three hours for a particular fileserver and records
> indicate that over 300k new directories are being created every run.
>
> I opened the XferLOG in a browser and searched for the word "create d" which
> catches every ne
Backups are taking about three hours for a particular fileserver and records
indicate that over 300k new directories are being created every run.
I opened the XferLOG in a browser and searched for the word "create d" which
catches every newly-created directory. The count was 348k matches. But t
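The same count can be pulled out of the compressed log directly, without
loading it into a browser; a small sketch (the log file name here is just an
example, completed backups are numbered):

# count newly created directories ("create d" lines) in one transfer log
/usr/share/BackupPC/bin/BackupPC_zcat XferLOG.0.z | grep -c 'create d'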
On 4/08/2016 23:43, martin f krafft wrote:
> 3) Ensure that you can backup any file within the ClientTimeout,
> Is this necessary? Isn't ClientTimeout about killing the connection
> after a period of time without any traffic?
Almost, but the timer is only updated after each file has been
transfe
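A back-of-the-envelope check makes the constraint concrete: since the timer
resets per file, the largest single file has to cross the link in less than
ClientTimeout seconds (the stock config.pl value is 72000, i.e. 20 hours,
unless it has been changed). For example:

# 3.5 GB at roughly 50 kB/s:
echo $(( 3500000000 / 50000 ))   # 70000 s -- uncomfortably close to 72000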
Hey Adam, thanks for your quick response. I have a few points to
add:
> 1) Ensure you enable SSH keepalives to keep your NAT firewall open
Yes, of course these are enabled.
> 2) You can look to split remote files before the backup, and exclude the
> original large files (sometimes this is helpfu
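On the keepalive point, the options can be tested by hand with an ssh
invocation of the kind BackupPC ends up running (the user, host and interval
values below are placeholders/examples):

# send an application-level keepalive every 60 s, give up after 10
# missed replies, so idle NAT/firewall state doesn't silently expire
ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=10 root@CLIENTHOST true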
Hey,
some of the backup processes here run for hours, and there are often
reasons why I want to check on what's going on.
How do you monitor backups in real-time? XferLOG.z can't be tail'd,
and attaching strace or lsof to the running processes just isn't
very sexy.
Can you fathom a good method b
On 4/08/2016 22:17, martin f krafft wrote:
> Hello,
>
> the issue I've raised 4 years ago:
>
> https://sourceforge.net/p/backuppc/mailman/message/29727529/
>
> still persists with BackupPC 3.3.0. Basically what happens is that
> a combination of large new files and a slow connection between
>
Hello,
the issue I've raised 4 years ago:
https://sourceforge.net/p/backuppc/mailman/message/29727529/
still persists with BackupPC 3.3.0. Basically what happens is that
a combination of large new files and a slow connection between
backup server and client means that a full backup eventually