Ziga,

It is sad to hear you're having issues with Bacula. Some of these concerns
have been around since 2005. The main things you can do to speed things up
are: spool the whole job to very fast disk (SSD), break up your large job
(by number of files), make sure your database is on very fast disk (SSD),
and have someone who knows Postgres well look at your DB to see if it needs
some tweaking.
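
For example, data spooling is enabled per job in bacula-dir.conf, and the
spool area is set on the device in bacula-sd.conf, roughly like this (the
path and size are only placeholders, point them at your SSD):

  # bacula-dir.conf, in the Job or JobDefs resource
  Spool Data = yes
  Spool Attributes = yes

  # bacula-sd.conf, in the Device resource
  Spool Directory = /ssd/bacula-spool
  Maximum Spool Size = 200G

The job then streams to the fast spool disk first and despools to the
final volume in one sequential pass.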

Here is a post from Andreas Koch back in 2017 describing similar issues,
with (at the time) VERY powerful hardware getting poor performance:

"It appears that such trickery might be unnecessary if the Bacula FD
could
perform something similar (hiding the latency of individual meta-data
operations) on-the-fly, e.g. by executing in a multi-threaded fashion.
This
has been proposed as Item 15 in the Bacula `Projects' list since
November
2005 but does not appear to have been implemented yet (?)."

https://sourceforge.net/p/bacula/mailman/message/36021244/

Thanks,
Joe

>>> Žiga Žvan <ziga.z...@cetrtapot.si> 10/6/2020 9:56 AM >>>
Hi,
I have done some testing:
a) testing storage with the dd command (e.g. dd if=/dev/zero
of=/storage/test1.img bs=1G count=1 oflag=dsync). The results are:
-writing to IBM storage (with cloud enabled) shows 300 MB/sec
-writing to local SSD storage shows 600 MB/sec
I guess storage is not the bottleneck.
b) testing file copy from the linux centos 6 server to the bacula server
with rsync (e.g. rsync --info=progress2 source destination):
-writing to local storage: 82 MB/sec
-writing to IBM storage: 85 MB/sec
I guess this is OK for a 1 Gbit network link.
c) using bacula:
-linux centos 6 file server: 13 MB/sec on IBM storage, 16 MB/sec on local
SSD storage (client version 5.2.13).
-windows file server: around 18 MB/sec - there could be some additional
problem, because I perform the backup from a deduplicated drive (client
version 9.6.5).
d) I have tried to manipulate the encryption/compression settings, but I
believe there is no significant difference.
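
For reference, the compression knob I toggled is the one in the FileSet
Options block, along these lines (the File path is a placeholder;
encryption itself is a client-side PKI setting in bacula-fd.conf):

  FileSet {
    Name = "bazar2-fileset"
    Include {
      Options {
        signature = MD5
        compression = GZIP   # removed for the no-compression test
      }
      File = /data
    }
  }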

I think the bacula rate (15 MB/sec) is quite slow compared to the file
copy result (85 MB/sec) from the same client/server. It should be
better... Do you agree?

I have implemented an autochanger in order to perform backups from both
servers at the same time. We shall see the results tomorrow.
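
For reference, a disk autochanger in bacula-sd.conf follows the stock
example, roughly like this (names and paths are placeholders; for real
parallelism, Maximum Concurrent Jobs must also be high enough in the
matching Director Storage/Job resources):

  Autochanger {
    Name = FileChgr1
    Device = FileChgr1-Dev1, FileChgr1-Dev2
    Changer Command = ""
    Changer Device = /dev/null
  }
  Device {
    Name = FileChgr1-Dev1
    Media Type = File1
    Archive Device = /storage/backup
    LabelMedia = yes
    Random Access = yes
    AutomaticMount = yes
    RemovableMedia = no
    AlwaysOpen = no
    Maximum Concurrent Jobs = 5
  }
  # FileChgr1-Dev2 is identical apart from the Name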
I have not changed the version of the client on the linux server yet. My
windows server already uses the new client version, so that was not my
first idea... Will try this tomorrow if needed.

What about retention?
I would like to:
- create incremental daily backup (retention 1 week)
- create weekly full backup (retention 1 month)
- create monthly full backup (retention 1 year)

At the moment I use a different job/schedule for the monthly backup, but
that also triggers a full backup on the Monday after the monthly backup (I
would like to run an incremental then). Is there a better way? Relevant
parts of conf below, plus one idea I am considering after it...

Regards,
Ziga

JobDefs {
  Name = "bazar2-job"
  Schedule = "WeeklyCycle"
  ...
}
Job {
  Name = "bazar2-backup"
  JobDefs = "bazar2-job"
  Full Backup Pool = bazar2-weekly-pool
  Incremental Backup Pool = bazar2-daily-pool
}
Job {
  Name = "bazar2-monthly-backup"
  Level = Full
  JobDefs = "bazar2-job"
  Pool = bazar2-monthly-pool
  Schedule = "MonthlyFull"   # schedule: see bacula-dir.conf (monthly pool with longer retention)
}
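
One idea I am considering instead: a single schedule with per-Run
level/pool overrides, so the monthly full takes the place of that week's
full, e.g. (days and times are placeholders):

  Schedule {
    Name = "WeeklyCycle"
    Run = Level=Full Pool=bazar2-monthly-pool 1st sat at 23:05
    Run = Level=Full Pool=bazar2-weekly-pool 2nd-5th sat at 23:05
    Run = Level=Incremental Pool=bazar2-daily-pool sun-fri at 23:05
  }

Since all runs then belong to one job, the Monday after the monthly full
should stay incremental instead of being upgraded to a full.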




Example output:

06-Oct 12:19 bacula-dir JobId 714: Bacula bacula-dir 9.6.5 (11Jun20):
  Build OS:               x86_64-redhat-linux-gnu-bacula redhat (Core)
  JobId:                  714
  Job:                    bazar2-monthly-backup.2020-10-06_09.33.25_03
  Backup Level:           Full
  Client:                 "bazar2.kranj.cetrtapot.si-fd" 5.2.13 (19Jan13) x86_64-redhat-linux-gnu,redhat,(Core)
  FileSet:                "bazar2-fileset" 2020-09-30 15:40:26
  Pool:                   "bazar2-monthly-pool" (From Job resource)
  Catalog:                "MyCatalog" (From Client resource)
  Storage:                "FSTestBackup" (From Job resource)
  Scheduled time:         06-Oct-2020 09:33:15
  Start time:             06-Oct-2020 09:33:28
  End time:               06-Oct-2020 12:19:19
  Elapsed time:           2 hours 45 mins 51 secs
  Priority:               10
  FD Files Written:       53,682
  SD Files Written:       53,682
  FD Bytes Written:       168,149,175,433 (168.1 GB)
  SD Bytes Written:       168,158,044,149 (168.1 GB)
  Rate:                   16897.7 KB/s
  Software Compression:   36.6% 1.6:1
  Comm Line Compression:  None
  Snapshot/VSS:           no
  Encryption:             no
  Accurate:               no
  Volume name(s):         bazar2-monthly-vol-0300
  Volume Session Id:      11
  Volume Session Time:    1601893281
  Last Volume Bytes:      337,370,601,852 (337.3 GB)
  Non-fatal FD errors:    0
  SD Errors:              0
  FD termination status:  OK
  SD termination status:  OK
  Termination:            Backup OK


On 06.10.2020 14:28, Josh Fisher wrote:
>
> On 10/6/20 3:45 AM, Žiga Žvan wrote:
>> I believe that I have my spooling attributes set correctly on jobdefs
>> (see below). Spool attributes = yes; Spool data defaults to no. Any
>> other idea for performance problems?
>> Regards,
>> Ziga
>>
>
> The client version is very old. First try updating the client to 9.6.x.
>
> For testing purposes, create another storage device on local disk and
> write a full backup to that. If it is much faster to local disk storage
> than it is to the s3 driver, then there may be an issue with how the s3
> driver is compiled, the version of the s3 driver, etc.
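>
> A bare-bones File device in bacula-sd.conf for such a test might look
> like this (name, Media Type and path are placeholders; the Director
> needs a matching Storage resource with the same Media Type):
>
>   Device {
>     Name = LocalTestDev
>     Media Type = LocalFile
>     Archive Device = /ssd/bacula-test   # fast local disk
>     LabelMedia = yes
>     Random Access = yes
>     AutomaticMount = yes
>     RemovableMedia = no
>     AlwaysOpen = no
>   }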
>
> Otherwise, with attribute spooling enabled, the status of the job as
> given by the status dir command in bconsole will change to "despooling
> attributes" or something like that when the client has finished sending
> data. That is the period at the end of the job when the spooled
> attributes are being written to the catalog database. If despooling is
> taking a long time, then database performance might be the bottleneck.
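>
> If the catalog does turn out to be the bottleneck, the usual
> postgresql.conf knobs people look at first for a Bacula catalog are
> along these lines (values are only illustrative; they depend on the
> server, so have your Postgres person confirm them):
>
>   shared_buffers = 2GB                 # keep more of the catalog in RAM
>   work_mem = 64MB                      # larger in-memory sorts/joins
>   checkpoint_completion_target = 0.9   # smooth out checkpoint I/O
>   synchronous_commit = off             # faster commits; small crash-loss
>                                        # window, catalog data only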
>



_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users

