Re: [Bacula-users] Problems doing concurrent jobs, and having lousy performance
> Do you have Maximum Concurrent Jobs set in the Director and storage
> sections in bacula-dir.conf?

I just added it to the storage section; it seems to have been removed
somehow. Can someone please explain to me why Bacula is still not able
to run concurrent jobs? Do I have to create a storage for each client
(for instance)? And what is the reason for having to do so?

> Only 1 volume (and thus pool) can be loaded in a storage device at a
> time, so if you have several pools that you want to run backups on,
> you need more than 1 storage device. For disk-based backups, I highly
> recommend using the bacula virtual autochanger:
>
>   http://sourceforge.net/projects/vchanger/
>
> This will greatly simplify the setup of multiple pools, devices and
> concurrency. Just send all jobs to the virtual autochanger resource
> and let bacula handle the devices.

Is there any configuration required for doing so? The autochanger seems
(to me) to be fairly complex.

> Software compression is a very CPU-heavy process on the FD and will
> certainly slow down your backups.

When having a look at the FD hosts, bacula-fd doesn't really show up
when running 'top', nor does the system load increase much (these
machines are also quite over-dimensioned for their purpose). But that
is of later concern; I'd be very, very happy if I could get concurrent
jobs to work first.

Boudewijn

--
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
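For the archives, here is a minimal sketch of what "send all jobs to the virtual autochanger" can look like. Everything below is an assumption, not taken from this thread: the vchanger binary path (/usr/bin/vchanger), its config path (/etc/vchanger/vchanger.conf), the resource names, the two-drive layout, and the Archive Device directories (vchanger's own documentation describes the exact magazine/drive directory layout it expects):

```
# bacula-sd.conf -- sketch: an Autochanger resource wrapping vchanger
Autochanger {
  Name = vchanger-1
  Device = vchanger-1-drive0, vchanger-1-drive1
  Changer Command = "/usr/bin/vchanger %c %o %S %a %d"
  Changer Device = /etc/vchanger/vchanger.conf
}

Device {
  Name = vchanger-1-drive0
  Drive Index = 0
  Autochanger = yes
  Media Type = File
  Archive Device = /var/spool/vchanger/drive0   # placeholder path
  Random Access = yes;
  AutomaticMount = yes;
  RemovableMedia = yes;
}

Device {
  Name = vchanger-1-drive1
  Drive Index = 1
  Autochanger = yes
  Media Type = File
  Archive Device = /var/spool/vchanger/drive1   # placeholder path
  Random Access = yes;
  AutomaticMount = yes;
  RemovableMedia = yes;
}

# bacula-dir.conf -- all jobs point at the single autochanger Storage
Storage {
  Name = vchanger-1
  Address = leiden.KNIP        # hypothetical; use the SD host address
  SDPort = 9103
  Password = *
  Device = vchanger-1          # the Autochanger resource, not a Device
  Media Type = File
  Autochanger = yes
  Maximum Concurrent Jobs = 20
}
```

With one drive per concurrent job you want, several pools can be mounted at once, and the director no longer needs a separate Storage resource per client.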
[Bacula-users] Problems doing concurrent jobs, and having lousy performance
Hi Guys,

For some time I've been trying to get concurrent jobs in Bacula to
work. For doing so, I've created a pool for each client, and made sure
all parts of the setup have got the max concurrent jobs = 1.

Please allow me to elaborate on my configuration. This is part of my
bacula-dir.conf (well, this is a file for a client 'www', and it is
included in bacula-dir.conf along with some files that are exactly the
same except for passwords/hostnames):

  JobDefs {
    Name = www-weekly
    Type = Backup
    Level = Incremental
    Client = www
    FileSet = Full Set
    Schedule = WeeklyCycle
    Storage = leiden-filestorage
    Messages = Standard
    Pool = wwwPool
    Priority = 10
  }

  Job {
    Name = wwwjob
    JobDefs = www-weekly
    Write Bootstrap = /var/lib/bacula/www.bsr
  }

  Client {
    Name = www
    Address = www.KNIP
    FDPort = 9102
    Catalog = MyCatalog
    Password = KNIP          # password for FileDaemon
    File Retention = 30 days # 30 days
    Job Retention = 6 months # six months
    AutoPrune = yes          # Prune expired Jobs/Files
  }

  Pool {
    Name = wwwPool
    LabelFormat = wwwVol
    Pool Type = Backup
    Recycle = yes                # Bacula can automatically recycle Volumes
    AutoPrune = yes              # Prune expired volumes
    Volume Retention = 365 days  # one year
    Volume Use Duration = 23h
  }

As you can see, I've removed some sensitive information. A clone of
this config is also used for 'mail' and some more machines. Each has
its own pool (because of concurrency).
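(Editor's note, as a hedged sketch rather than a verified fix: in Bacula, jobs only run concurrently if Maximum Concurrent Jobs is raised above 1 at every level a job passes through — the Director resource, the Job or JobDefs, the Client, and the director-side Storage resource, in addition to the SD and FD daemons. A minimal sketch reusing the resource names from this post; the value 20 is arbitrary:)

```
# bacula-dir.conf -- concurrency must be permitted in each resource
Director {
  Name = leiden-dir
  # ... existing directives ...
  Maximum Concurrent Jobs = 20
}

Storage {
  Name = leiden-filestorage
  # ... existing directives ...
  Maximum Concurrent Jobs = 20
}

Client {
  Name = www
  # ... existing directives ...
  Maximum Concurrent Jobs = 20
}

JobDefs {
  Name = www-weekly
  # ... existing directives ...
  Maximum Concurrent Jobs = 20
}
```

Any single resource left at the default of 1 serializes every job that touches it.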
Well, the bacula-sd.conf:

  Storage {                          # definition of myself
    Name = leiden-filestorage
    WorkingDirectory = /var/lib/bacula
    Pid Directory = /var/run/bacula
    Maximum Concurrent Jobs = 50
    SDAddresses = {
      ip = { addr = 192.168.1.44; port = 9103 }
      ip = { addr = 127.0.0.1;    port = 9103 }
    }
  }

  Director {
    Name = leiden-dir
    Password = *
  }

  Director {
    Name = leiden-mon
    Password = *
    Monitor = yes
  }

  Device {
    Name = leiden-filestorage
    Media Type = File
    Archive Device = /bacula
    LabelMedia = yes;       # lets Bacula label unlabeled media
    Random Access = Yes;
    AutomaticMount = yes;   # when device opened, read it
    RemovableMedia = no;
  }

  Messages {
    Name = Standard
    director = leiden-dir = all
  }

Pretty standard; should I change something in here?

And my bacula-fd.conf:

  Director {
    Name = leiden-dir
    Password = *
  }

  Director {
    Name = www.*-mon
    Password = *
    Monitor = yes
  }

  FileDaemon {                  # this is me
    Name = www.*-fd
    FDport = 9102               # where we listen for the director
    WorkingDirectory = /var/lib/bacula
    Pid Directory = /var/run/bacula
    HeartBeat Interval = 15
    Maximum Concurrent Jobs = 20
    FDAddress = *
  }

  Messages {
    Name = Standard
    director = www.*-dir = all, !skipped, !restored
  }

Also quite boring. Can someone please explain to me why Bacula is
still not able to run concurrent jobs? Do I have to create a storage
for each client (for instance)? And what's the reason for having to
do so?

Furthermore, I've enabled compression on some clients, but the
system's performance nevertheless isn't very good. It tends to stagger
at about 1800 kB/s, but both ends of the line are 100 Mbit/s... and
almost not being used at all. The director and SD are on the same
machine, attached to a NAS (which performs fine by itself), and the
machine has a dual-core Atom CPU running Debian and 2 GB of RAM. It
also runs no other jobs except for Nagios (which is not very heavily
loaded).

Cheers,

Boudewijn Ector
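(Editor's note: a quick way to tell whether ~1800 kB/s is a Bacula problem or an infrastructure problem is to measure the raw disk and network legs separately. A minimal sketch using dd; SPOOL is a placeholder — point it at the SD's Archive Device directory, /bacula in the config above:)

```shell
# Measure raw sequential write throughput of the disk behind the
# SD's Archive Device. SPOOL is a placeholder; set it to the real
# archive directory (e.g. /bacula) on the SD host.
SPOOL=${SPOOL:-/tmp}

# conv=fdatasync makes dd flush to disk before reporting a rate,
# so the number isn't inflated by the page cache.
dd if=/dev/zero of="$SPOOL/bacula-ddtest" bs=1M count=64 conv=fdatasync

# Clean up the test file.
rm -f "$SPOOL/bacula-ddtest"
```

For the network leg, a tool such as iperf between an FD host and the SD host gives a comparable raw figure. If both raw numbers come out far above 1800 kB/s, the bottleneck is inside the Bacula pipeline (catalog inserts, software compression, a single busy device) rather than in the hardware.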
Re: [Bacula-users] connections timing out
On 07/27/2011 10:31 AM, Pietro Bertera wrote:
> 2011/7/26 Boudewijn Ector <boudew...@boudewijnector.nl>:
>> Can someone please point me out where I should start to investigate
>> this problem? From the internet, I can reach the director and the SD
>> at the 'leiden' system. I can reach the FDs at all servers which are
>> to be backed up.
>
> Does the command "status client=xxx" in bconsole return everything
> correctly?
>
> Regards, Pietro

Hi Pietro,

Sorry for the late reply; I've been on holiday. Nothing has changed,
and the problem can still be reproduced:

  *status client=www
  Connecting to Client www at www.boudewijnector.nl:9102

  www.boudewijnector.nl-fd Version: 5.0.2 (28 April 2010)
      x86_64-pc-linux-gnu debian squeeze/sid
  Daemon started 11-Aug-11 18:22, 1 Job run since started.
   Heap: heap=1,597,440 smbytes=176,189 max_bytes=267,404 bufs=145 max_bufs=279
   Sizeof: boffset_t=8 size_t=8 debug=0 trace=0

  Running Jobs:
  JobId 293 Job wwwjob.2011-08-11_21.45.14_07 is running.
      Full Backup Job started: 11-Aug-11 21:45
      Files=3,607 Bytes=11,011,930 Bytes/sec=1,101,193 Errors=1
      Files Examined=3,608
      Processing file: /root/home/boudewijn/IMG_9895.JPG
  SDReadSeqNo=5 fd=5
  Director connected at: 11-Aug-11 21:45

  Terminated Jobs:
   JobId  Level    Files      Bytes   Status   Finished         Name
  ====================================================================
     292  Full    93,876    6.468 G  Error    11-Aug-11 20:23  wwwjob
  *

So the director seems to be able to connect to the file daemon, am I
correct?

Cheers,

Boudewijn Ector
[Bacula-users] connections timing out
bytes when setting up the connection.

Can someone please point out where I should start investigating this
problem? From the internet, I can reach the director and the SD at the
'leiden' system. I can reach the FDs at all servers which are to be
backed up.

Cheers,

Boudewijn Ector