Hi Adam,

thanks for your reply.


> I'm assuming you are using BPC v3 but you didn't let us know....

My version is 3.3.0-2+deb8u1 on Debian 8.


> I would guess that there is a large (possibly sparse) file that is in
> the process of being backed up, and it takes a long time.

I searched the server to be backed up and didn't find any files above
2GB. I finally found the culprit - a logfile that was never rotated for
some reason:

root@gb-srv08.igzev.intern /var/log # ls -lha lastlog
-rw-rw-r-- 1 root utmp 198G Okt  1 17:03 lastlog

root@gb-srv08.igzev.intern /var/log # du -sh .
687M    .
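
For the archives: the gap between the apparent size (198G in ls) and the
space actually used on disk (687M in du) is the signature of a sparse
file, which is also why my search for large files was misleading. A quick
way to check whether a given file is sparse - a sketch using GNU stat on
a throwaway stand-in file, since the numbers above are from the real
lastlog:

```shell
# Create a sparse stand-in file (GNU coreutils assumed)
truncate -s 1G /tmp/sparse_demo

# Apparent size in bytes vs. blocks actually allocated on disk
apparent=$(stat -c %s /tmp/sparse_demo)
allocated=$(( $(stat -c %b /tmp/sparse_demo) * $(stat -c %B /tmp/sparse_demo) ))
echo "apparent=$apparent allocated=$allocated"
# A huge gap between the two numbers means the file is sparse,
# just like lastlog above.

rm /tmp/sparse_demo
```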

Using strace on the client's rsync showed me that information. Thank you
for the hint!
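
In case it helps the next person, the inspection boiled down to something
like the sketch below. The sleep process is just a stand-in for the
stalled rsync; on a real system you would take the pid of the client-side
rsync (e.g. from pgrep rsync) instead:

```shell
# Stand-in for the stalled rsync: a background process holding a file open
tmpfile=$(mktemp)
sleep 30 < "$tmpfile" &
pid=$!

# Every file the process has open shows up as a symlink under /proc/<pid>/fd,
# so an unexpectedly huge file is easy to spot here
ls -l "/proc/$pid/fd"

kill "$pid"
rm -f "$tmpfile"
```

On the real rsync pid, following up with strace -p <pid> then shows what
it is doing with that file.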



On 09/23/2016 02:31 AM, Adam Goryachev wrote:
> On 22/09/16 19:54, Peter Thurner wrote:
>> Hi Guys,
>>
>> I'm running a BackupPC server installation on Debian 8, using the
>> rsync + filling method on all clients. I'm backing up several Debian 8
>> clients, one of which causes problems. When I start the backup, the
>> hook script works fine and the rsync seems to run through - I strace
>> it on the client and see that it is doing work, and the new directory
>> fills up. After the rsync, however, "nothing" more happens and the
>> client hangs. If I wait for two days, the backup aborts. When I abort
>> it myself I get the following errors in the bad log (I set the log
>> level to 2 and added --verbose as an rsync option):
>>
>>    skip     600       0/0      934080 var/log/installer/syslog
>>    skip     600       0/0     1162350 var/log/installer/partman
>> Can't write len=1048576 to
>> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
>> Can't write len=1048576 to
>> /var/lib/backuppc/pc/sw.example.de/new//f%2f/RStmp
>> [...]
>> lots of those "Can't write" lines
>> [...]
>> Parent read EOF from child: fatal error!
>> Done: 0 files, 0 bytes
>> Got fatal error during xfer (Child exited prematurely)
>> Backup aborted by user signal
>>
>>
>> I tried writing to the RStmp file during a backup - if I touch it or
>> echo fo > RStmp, I can write to it. If I run dd if=/dev/zero of=RStmp
>> bs=1M count=1000, the file disappears right away, as in:
>>
>> dd if=/dev/zero of=RStmp ... ; ls RStmp
>> no such file or directory
>>
>> Any ideas as to what might cause this?
> Ummm, backuppc is in the process of backing up data, and you want to 
> start stepping on its toes by writing to its temp file? That doesn't 
> make any sense to me, but I guess you have your reasons.
> I think you 
> might see more information by examining the client rsync process with 
> strace, or, when it is "stalled" (i.e., backing up the large file), 
> look at ls -l /proc/<rsyncpid>/fd, which will show which files it has 
> open. Then you can check what is wrong with the file (unexpectedly 
> large file, sparse file, or whatever). Once identified, you can either 
> arrange for the file to be deleted or excluded from the backup, or be 
> more patient and/or extend your timeout to allow this large file to 
> complete.
> 
> If you are unable to solve the issue, please provide some additional 
> details. Especially a look at strace of rsync on the client while it is 
> "stalled" will help identify what it is doing.
> 
> Regards,
> Adam


Best,
Peter

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
