Hello.

I know there have been several threads about slowness, and I searched through them; however, they mostly seem to refer to outdated versions.

I've got a system which has always worked quite well, but lately one job starts at a good speed and then slows down progressively.
Example of status client:
JobId 14115 Job XXXXX.2019-06-22_22.45.00_21 is running.
    Full Backup Job started: 22-Jun-19 23:13
Files=91,084 Bytes=108,902,910,528 AveBytes/sec=2,428,266 LastBytes/sec=769,424 Errors=0
    Bwlimit=0 ReadBytes=128,684,055,592
    Files: Examined=91,142 Backed up=91,084

Notice the bytes/sec figures: after about 12 hours, the average is 2.5 MB/s, while the current speed is only about 770 KB/s!
It gets much slower as the job goes on.
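As a quick sanity check of those figures (a back-of-the-envelope sketch; the byte and rate values are taken verbatim from the status output above):

```python
# Cross-check the "status client" numbers from the job above.
total_bytes = 108_902_910_528   # Bytes=
avg_rate    = 2_428_266         # AveBytes/sec=
last_rate   = 769_424           # LastBytes/sec=

# Elapsed time implied by the average rate.
elapsed_hours = total_bytes / avg_rate / 3600
print(f"elapsed: {elapsed_hours:.1f} h")        # ~12.5 h, matching the observation

# How far the current rate has fallen below the average.
slowdown = avg_rate / last_rate
print(f"current rate is {slowdown:.1f}x below the average")  # ~3.2x
```

So the numbers are internally consistent: the job really has been running about 12.5 hours, and the instantaneous rate is now more than 3x below the average.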

The Director is on FreeBSD 11.2/amd64 with Bacula 9.4.3, using PostgreSQL 9.4.3 as the catalog backend. The client this job backs up is the server itself; storage is on a NAS connected over a gigabit LAN.

I'm using gzip compression, but CPU stays under 1%, so that should not be the problem (I also tried disabling it, with no change). The system is not swapping, and at the moment there is not much other traffic on the LAN nor any significant load on the disks. Other jobs (from other clients, and also from this client) usually run fine. tcpdump shows that communication with the SD happens in small chunks, with several seconds of "silence" between them.
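For reference, this is roughly how I observed the gaps (a sketch; the interface name and NAS hostname are placeholders for my setup, and 9103 is the default bacula-sd port):

```
# -ttt prints the delta between packets, which makes the multi-second
# pauses between bursts easy to spot
tcpdump -ttt -i em0 host my-nas and port 9103
```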

This job should copy some 150 GB in a little under 300,000 files.

I tried setdebug in the console, but came up with nothing; enabling trace didn't help either, as I could not find any trace file to check.
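In case it helps, this is roughly what I tried in bconsole (the daemon names are placeholders for mine):

```
* setdebug level=200 trace=1 client=myclient-fd
* setdebug level=200 trace=1 storage=mystorage-sd
```

As far as I understand, with trace=1 each daemon should write a <daemon-name>.trace file in its working directory, but I found nothing there.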

What can I do to understand why it's so slow? In particular, why does it start fast and get slower as it goes?

 bye & Thanks
        av.


_______________________________________________
Bacula-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/bacula-users
