Re: [Bacula-users] tuning Bacula - Maximum Spool Size
Hello Robert,

I'm afraid Spool Directory is a Device directive. I have it configured in my Device resource:

Device {
  ...
  Spool Directory = /opt/bacula/spool
  Maximum Spool Size = 20 G
}

Best regards,
Ana

On Thu, Apr 30, 2015 at 4:42 PM, Robert A Threet rober...@netzero.net wrote:
> Ok, I greatly increased my spool sizes. It appears to be placing the spool
> in /opt/bacula/working (I'm using Bacula Systems 6). I read there was a
> "Spool Directory =" parameter. I put it in the tape Pool definition. After
> doing that, Bacula wouldn't start. I have about 4-6 TB of disk I wish to
> use for spooling for each tape drive. How do I get this configured?
>
> On Tue, 28 Apr 2015 15:37:33 -0500 Robert A Threet rober...@netzero.net wrote:
>> Looks like I have about 4 TB of local SAS drives to play with on my Dell
>> 720. I was thinking of bumping up Maximum Spool Size x10 = 240 GB, and
>> x10 the Maximum Job Spool Size to 80 GB. Based on this, it seems logical
>> that Maximum Concurrent Jobs = 3 (not 21 as in the current config).
>>
>> Q: Does this sound reasonable?
>>
>> Device {  # I have 4 of these in a Dell TL-4000 tape library
>>   Name = tl4000-0
>>   Drive Index = 0
>>   Media Type = LTO6
>>   Archive Device = /dev/tape/by-id/scsi-35000e1116097b001-nst  # /dev/nst0
>>   AutomaticMount = yes;   # when device opened, read it
>>   AlwaysOpen = yes;
>>   RemovableMedia = yes;
>>   RandomAccess = no;
>>   AutoChanger = yes
>>   Autoselect = yes
>>   # Offline On Unmount = yes
>>   Maximum File Size = 16 G
>>   Maximum Job Spool Size = 8G
>>   Maximum Spool Size = 20G
>>   Maximum Concurrent Jobs = 21
>>   Alert Command = sh -c 'smartctl -H -l error %c'
>> }
>>
>> --
>> Robert A Threet
>> rober...@netzero.net
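Putting Ana's answer in terms of Robert's setup: the directive moves out of the Pool and into each Device in bacula-sd.conf, and the Job (not the Pool) requests data spooling. A minimal sketch; the paths, sizes, and job name here are illustrative, not taken from the original posts:

```
# bacula-sd.conf -- spooling is configured per Device
Device {
  Name = tl4000-0
  ...
  Spool Directory = /opt/bacula/spool   # must exist and be writable by the SD
  Maximum Spool Size = 240 G            # cap across all jobs spooling to this drive
  Maximum Job Spool Size = 80 G         # cap per job
}

# bacula-dir.conf -- the Job turns spooling on
Job {
  Name = "nightly-full"                 # hypothetical job name
  ...
  Spool Data = yes
}
```

With four drives each using a separate Spool Directory on the 4-6 TB disk, each Device gets its own spool area and the caps keep the total within the available space.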
___
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [Bacula-users] tuning Bacula - Maximum Spool Size
Ok, I greatly increased my spool sizes. It appears to be placing the spool in /opt/bacula/working (I'm using Bacula Systems 6). I read there was a "Spool Directory =" parameter. I put it in the tape Pool definition. After doing that, Bacula wouldn't start. I have about 4-6 TB of disk I wish to use for spooling for each tape drive. How do I get this configured?

On Tue, 28 Apr 2015 15:37:33 -0500 Robert A Threet rober...@netzero.net wrote:
> Looks like I have about 4 TB of local SAS drives to play with on my Dell
> 720. I was thinking of bumping up Maximum Spool Size x10 = 240 GB, and x10
> the Maximum Job Spool Size to 80 GB. Based on this, it seems logical that
> Maximum Concurrent Jobs = 3 (not 21 as in the current config).
>
> Q: Does this sound reasonable?
>
> Device {  # I have 4 of these in a Dell TL-4000 tape library
>   Name = tl4000-0
>   Drive Index = 0
>   Media Type = LTO6
>   Archive Device = /dev/tape/by-id/scsi-35000e1116097b001-nst  # /dev/nst0
>   AutomaticMount = yes;   # when device opened, read it
>   AlwaysOpen = yes;
>   RemovableMedia = yes;
>   RandomAccess = no;
>   AutoChanger = yes
>   Autoselect = yes
>   # Offline On Unmount = yes
>   Maximum File Size = 16 G
>   Maximum Job Spool Size = 8G
>   Maximum Spool Size = 20G
>   Maximum Concurrent Jobs = 21
>   Alert Command = sh -c 'smartctl -H -l error %c'
> }

--
Robert A Threet
rober...@netzero.net
[Bacula-users] tuning Bacula - Maximum Spool Size
Looks like I have about 4 TB of local SAS drives to play with on my Dell 720. I was thinking of bumping up Maximum Spool Size x10 = 240 GB, and x10 the Maximum Job Spool Size to 80 GB. Based on this, it seems logical that Maximum Concurrent Jobs = 3 (not 21 as in the current config).

Q: Does this sound reasonable?

Device {  # I have 4 of these in a Dell TL-4000 tape library
  Name = tl4000-0
  Drive Index = 0
  Media Type = LTO6
  Archive Device = /dev/tape/by-id/scsi-35000e1116097b001-nst  # /dev/nst0
  AutomaticMount = yes;   # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  AutoChanger = yes
  Autoselect = yes
  # Offline On Unmount = yes
  Maximum File Size = 16 G
  Maximum Job Spool Size = 8G
  Maximum Spool Size = 20G
  Maximum Concurrent Jobs = 21
  Alert Command = sh -c 'smartctl -H -l error %c'
}

--
Robert A Threet
rober...@netzero.net
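For what it's worth, the sizing proposed above can be sanity-checked with a few lines of arithmetic (the values are taken from the message; the script itself is purely illustrative, not part of Bacula):

```python
# Rough spool-sizing arithmetic for the proposal above.
disk_gb = 4 * 1000          # ~4 TB of local SAS disk available for spooling
max_spool_gb = 240          # proposed Maximum Spool Size per device
max_job_spool_gb = 80       # proposed Maximum Job Spool Size
drives = 4                  # four drives in the TL-4000 library

# Jobs that can spool concurrently per drive before hitting the device cap:
jobs_per_device = max_spool_gb // max_job_spool_gb
print(jobs_per_device)      # 3 -- matches the proposed Maximum Concurrent Jobs

# Worst case if all four drives fill their spool areas at once:
total_gb = drives * max_spool_gb
print(total_gb)             # 960 -- fits comfortably in ~4000 GB
```

So the proposed numbers are internally consistent: 240 GB / 80 GB gives exactly three concurrent spooling jobs per drive, and even all four drives spooling flat out stay under the 4 TB of local disk.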
Re: [Bacula-users] Tuning Bacula
I'm going to try to reply to all the responses I got together.

> Have you tried backing up other hosts on your network? What are the speeds
> with these hosts? I've noticed that different hosts respond with varying
> speeds despite being on the same network. Wondering if this has to do with
> the client OS doing some throttling based on work load.

I am backing up the Bacula server itself, my workstation (which is a FreeBSD box as well), and our main file server, which is a SunOS server. We aren't doing any throttling intentionally, but I also see a large variation in throughput depending on the client in question. None of them, not even the local server backing up itself, are all that impressive right now.

> I would start by turning off software compression and doing performance
> tests with full backups. A second thing to try is to enable attribute
> spooling so the database does not slow down the backup. This can be useful
> if you have millions of files.

We do not have software compression enabled, as far as I can tell. I've turned on the Spool Attributes option in my job definition, and we'll see if that helps.

> Compare against a stock, non-tuned Bacula install. Are you going between
> buildings where you get the slow transfer speed? UCSC has 1 Gb links
> between buildings from my recollection. The link to the outside world is
> not much more than that. Bacula also has a batch mode which you can twiddle
> around with.

For the slowest backup job, the two servers are sitting in the same rack on the same gigabit switch. The fastest client actually is in a different building. Yes, we have 1 Gb between buildings here, but our Internet connection was recently upgraded to 10 Gb (not that it really applies to this situation anyhow). I found some Google hits that talked about batch mode, but no documentation that tells me how to enable it. Can you provide a link?

> Is the MySQL database storage on the same RAID array you are writing
> backups to?

Yes and no. Currently, in our dev environment, they are both on the same physical RAID array, but Bacula operates in a separate jail from MySQL. When we move to production, the director will probably run on one server and the storage daemon on another, so maybe that will help?

> It may be useful to run iftop on the network interfaces of the Bacula
> server to see what the network IO is like, and then compare that to iotop
> to see what the disk IO is like.

We actually run Cacti against all our servers. Disk throughput for the Bacula server can hit as much as 240 Mb/s during a backup, whereas the network throughput at the same time is around 80 Mb/s, with a few spikes to 96 Mb/s. For what it's worth, iperf can hit about 780 Mb/s between these hosts.

I just twiddled some ZFS parameters last night (turning off the primary and secondary caches) and reconfigured the zpool to let ZFS handle the striping (rather than the Adaptec controller handling the RAID array), so we'll see what numbers we come back with tomorrow. I've also added some other different hardware/OS combination clients to see if we can work out a pattern.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354
Re: [Bacula-users] Tuning Bacula
>> Is the MySQL database storage on the same RAID array you are writing
>> backups to?
>
> Yes and no. Currently, in our dev environment, they are both on the same
> physical RAID array, but Bacula operates in a separate jail from MySQL.
> When we move to production, the director will probably run on one server
> and the storage daemon on another, so maybe that will help?

Having the database on the same hard drives or RAID array will greatly reduce filesystem performance because of all the seeking back and forth to write to the database. Without attribute spooling or batch inserts (not sure if that is Postgres-only), the database needs to add records after each file is read.

John
Re: [Bacula-users] Tuning Bacula
> Without attribute spooling or batch inserts (not sure if that is
> Postgres-only), the database needs to add records after each file is read.

We have attribute spooling activated right now.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354
Re: [Bacula-users] Tuning Bacula
>> Compare against a stock, non-tuned Bacula install. Are you going between
>> buildings where you get the slow transfer speed? UCSC has 1 Gb links
>> between buildings from my recollection. The link to the outside world is
>> not much more than that. Bacula also has a batch mode which you can
>> twiddle around with.
>
> For the slowest backup job, the two servers are sitting in the same rack
> on the same gigabit switch. The fastest client actually is in a different
> building. Yes, we have 1 Gb between buildings here, but our Internet
> connection was recently upgraded to 10 Gb (not that it really applies to
> this situation anyhow). I found some Google hits that talked about batch
> mode, but no documentation that tells me how to enable it. Can you provide
> a link?

Batch mode is a compile-time option and is described in the manual (http://bacula.org/5.0.x-manuals/en/main/main/Installing_Bacula.html) as below. I recall this *might* be helpful in speeding up handling of many small files.

  --enable-batch-insert
    This option enables batch inserts of the attribute records (default) in
    the catalog database, which is much faster (10 times or more) than
    without this option for large numbers of files.

Mehma
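For a source build, the flag goes on the configure line alongside the catalog backend; a sketch, assuming a MySQL catalog as in Tim's setup (the exact set of other configure options a site needs will vary):

```
./configure --with-mysql --enable-batch-insert
make
make install
```

Packaged builds from most distributions already enable batch inserts, so this is mainly relevant if you compiled Bacula yourself.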
Re: [Bacula-users] Tuning Bacula
On Mon, 04 Oct 2010 19:37:32 +0200, Tim Gustafson t...@soe.ucsc.edu wrote:
> However, we're getting pretty pitiful throughput numbers. When I scp a
> file from my workstation to the Bacula server, I get something like
> 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s
> (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).

Are you scp-ing one large file to establish the base speed? Your average server's filesystem seldom allows 40 MB/s sustained, because it often consists of many thousands of small and often fragmented files. Over time, W2k3 suffers most from this; a defrag run or two will often yield the biggest speed increase of them all. Linux with ext3 is much more robust in this respect, although some new W2k8 servers are doing pretty well here so far.

As long as you are using something like an average 7200 rpm two-disk RAID1 setup, speed will also degrade very quickly if a few other read/write actions are taking place at the same time, simply due to seeking. The only solution for that is to move the main bottlenecks to memory and/or use SSDs. For ext3/4 you might also want to try the noatime mount option in /etc/fstab.

Lastly, if you depend on every server doing high speeds it will be an expensive exercise; you should concentrate on saturating the backup storage by running more than one job at the same time.
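For reference, a noatime entry in /etc/fstab looks like the line below; the device, mount point, and filesystem type are placeholders to adapt to your own layout:

```
# /etc/fstab -- disable access-time updates on the backup filesystem
/dev/sdb1  /srv/backup  ext4  defaults,noatime  0  2
```

Without noatime, every file read during a backup also triggers a metadata write to update the access time, which adds extra seeks on exactly the workload you are trying to speed up.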
[Bacula-users] Tuning Bacula
We have recently installed Bacula onto a FreeBSD server and several Linux, SunOS and FreeBSD clients. The Bacula director and storage daemon run on a box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1 TB each, Adaptec RAID controller with 512 MB cache). The box has 16 GB of RAM and is not really doing much else right now. We're using MySQL for our database back-end, and we have MD5 hashing of files turned off (Accurate = mcs and Verify = mcs are set in bacula-dir.conf).

However, we're getting pretty pitiful throughput numbers. When I scp a file from my workstation to the Bacula server, I get something like 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).

I Googled "tuning bacula" and came up with primarily stuff related to tuning Postgres as it relates to Bacula, but nothing about tuning the file daemon or the storage daemon. Can anyone point me to some leads as far as what I can do to bump up the throughput? We have a data set that is several terabytes large to back up, and it will never complete in a reasonable amount of time at 10 MB/s. I need to achieve something closer to 40 MB/s to make this a workable option.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354
Re: [Bacula-users] Tuning Bacula
Tim,

Have you tried backing up other hosts on your network? What are the speeds with these hosts? I've noticed that different hosts respond with varying speeds despite being on the same network. Wondering if this has to do with the client OS doing some throttling based on work load.

JJ

-----Original Message-----
From: Tim Gustafson [mailto:t...@soe.ucsc.edu]
Sent: Monday, October 04, 2010 10:38 AM
To: bacula-users@lists.sourceforge.net
Subject: [Bacula-users] Tuning Bacula

We have recently installed Bacula onto a FreeBSD server and several Linux, SunOS and FreeBSD clients. The Bacula director and storage daemon run on a box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1 TB each, Adaptec RAID controller with 512 MB cache). The box has 16 GB of RAM and is not really doing much else right now. We're using MySQL for our database back-end, and we have MD5 hashing of files turned off (Accurate = mcs and Verify = mcs are set in bacula-dir.conf).

However, we're getting pretty pitiful throughput numbers. When I scp a file from my workstation to the Bacula server, I get something like 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).

I Googled "tuning bacula" and came up with primarily stuff related to tuning Postgres as it relates to Bacula, but nothing about tuning the file daemon or the storage daemon. Can anyone point me to some leads as far as what I can do to bump up the throughput? We have a data set that is several terabytes large to back up, and it will never complete in a reasonable amount of time at 10 MB/s. I need to achieve something closer to 40 MB/s to make this a workable option.

Tim Gustafson
Baskin School of Engineering
UC Santa Cruz
t...@soe.ucsc.edu
831-459-5354
Re: [Bacula-users] Tuning Bacula
On Mon, Oct 4, 2010 at 1:37 PM, Tim Gustafson t...@soe.ucsc.edu wrote:
> We have recently installed Bacula onto a FreeBSD server and several Linux,
> SunOS and FreeBSD clients. The Bacula director and storage daemon run on a
> box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1 TB each,
> Adaptec RAID controller with 512 MB cache). The box has 16 GB of RAM and
> is not really doing much else right now. We're using MySQL for our
> database back-end, and we have MD5 hashing of files turned off
> (Accurate = mcs and Verify = mcs are set in bacula-dir.conf).
>
> However, we're getting pretty pitiful throughput numbers. When I scp a
> file from my workstation to the Bacula server, I get something like
> 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s
> (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).
>
> I Googled "tuning bacula" and came up with primarily stuff related to
> tuning Postgres as it relates to Bacula, but nothing about tuning the file
> daemon or the storage daemon. Can anyone point me to some leads as far as
> what I can do to bump up the throughput? We have a data set that is
> several terabytes large to back up, and it will never complete in a
> reasonable amount of time at 10 MB/s. I need to achieve something closer
> to 40 MB/s to make this a workable option.

I would start by turning off software compression and doing performance tests with full backups. A second thing to try is to enable attribute spooling so the database does not slow down the backup. This can be useful if you have millions of files.

John
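In configuration terms, both suggestions are single directives; a sketch of where they live (resource names, the file path, and the job name are placeholders):

```
# bacula-dir.conf
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
      # compression = GZIP   # leave unset/commented to disable software compression
    }
    File = /data
  }
}

Job {
  Name = "client-full"       # hypothetical job name
  ...
  Spool Attributes = yes     # batch file-attribute inserts into the catalog
}
```

Software compression only happens if the FileSet Options explicitly request it, so "turning it off" usually means confirming no compression directive is set.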
Re: [Bacula-users] Tuning Bacula
On 10/4/10 10:37 AM, Tim Gustafson wrote:
> We have recently installed Bacula onto a FreeBSD server and several Linux,
> SunOS and FreeBSD clients. The Bacula director and storage daemon run on a
> box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1 TB each,
> Adaptec RAID controller with 512 MB cache). The box has 16 GB of RAM and
> is not really doing much else right now. We're using MySQL for our
> database back-end, and we have MD5 hashing of files turned off
> (Accurate = mcs and Verify = mcs are set in bacula-dir.conf).
>
> However, we're getting pretty pitiful throughput numbers. When I scp a
> file from my workstation to the Bacula server, I get something like
> 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s
> (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).
>
> I Googled "tuning bacula" and came up with primarily stuff related to
> tuning Postgres as it relates to Bacula, but nothing about tuning the file
> daemon or the storage daemon. Can anyone point me to some leads as far as
> what I can do to bump up the throughput? We have a data set that is
> several terabytes large to back up, and it will never complete in a
> reasonable amount of time at 10 MB/s. I need to achieve something closer
> to 40 MB/s to make this a workable option.

Hi Tim,

Compare against a stock, non-tuned Bacula install. Are you going between buildings where you get the slow transfer speed? UCSC has 1 Gb links between buildings from my recollection. The link to the outside world is not much more than that. Bacula also has a batch mode which you can twiddle around with.

Mehma
Re: [Bacula-users] Tuning Bacula
On 10/4/2010 1:37 PM, Tim Gustafson wrote:
> We have recently installed Bacula onto a FreeBSD server and several Linux,
> SunOS and FreeBSD clients. The Bacula director and storage daemon run on a
> box with about 6 terabytes of RAID6 storage (SATA 300 drives, 1 TB each,
> Adaptec RAID controller with 512 MB cache). The box has 16 GB of RAM and
> is not really doing much else right now. We're using MySQL for our
> database back-end, and we have MD5 hashing of files turned off
> (Accurate = mcs and Verify = mcs are set in bacula-dir.conf).
>
> However, we're getting pretty pitiful throughput numbers. When I scp a
> file from my workstation to the Bacula server, I get something like
> 40 MB/s (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s
> (160 Mb/s), and we often get numbers closer to 10 MB/s (80 Mb/s).

Is the MySQL database storage on the same RAID array you are writing backups to?

> I Googled "tuning bacula" and came up with primarily stuff related to
> tuning Postgres as it relates to Bacula, but nothing about tuning the file
> daemon or the storage daemon. Can anyone point me to some leads as far as
> what I can do to bump up the throughput? We have a data set that is
> several terabytes large to back up, and it will never complete in a
> reasonable amount of time at 10 MB/s. I need to achieve something closer
> to 40 MB/s to make this a workable option.
>
> Tim Gustafson
> Baskin School of Engineering
> UC Santa Cruz
> t...@soe.ucsc.edu
> 831-459-5354
Re: [Bacula-users] Tuning Bacula
On 04/10/10, Tim Gustafson (t...@soe.ucsc.edu) wrote:
> ...we're getting pretty pitiful throughput numbers. When I scp a file from
> my workstation to the Bacula server, I get something like 40 MB/s
> (320 Mb/s). When Bacula runs, we're lucky to get 20 MB/s (160 Mb/s), and
> we often get numbers closer to 10 MB/s (80 Mb/s).

As others have mentioned, the key is to try and work out where the contention is. It may be useful to run iftop on the network interfaces of the Bacula server to see what the network IO is like, and then compare that to iotop to see what the disk IO is like.

Bear in mind that if you are using spooling (although I assume you aren't), the throughput stats reported by the fd-client status are half of the actual native speed. This is because the throughput calculation is based on the speed from client to destination, so the time taken is the sum of the network transfer from the client to the spool, plus the transfer from the spool to the tape. That, anyhow, might be a reason for the roughly 50% factor you report.

If disk IO is the issue, it might be useful to verify that your database (what sort?) is running on a separate disk array, that your RAID controller has caching enabled (you need a BBU for this to be safe), and that you have a good filesystem for your backup needs (the best one for us is XFS).

Rory

--
Rory Campbell-Lange
r...@campbell-lange.net
Campbell-Lange Workshop
www.campbell-lange.net
0207 6311 555
3 Tottenham Street, London W1T 2AF
Registered in England No. 04551928
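As a concrete starting point, the two tools can be run side by side on the storage daemon host during a backup; the interface name below is a placeholder, and both tools typically need root:

```
# Watch network traffic on the backup interface (replace em0 with yours):
iftop -i em0

# Watch per-process disk IO; -o shows only processes actually doing IO:
iotop -o
```

If iotop shows the disks saturated while iftop shows the network mostly idle (or vice versa), that tells you which side of the pipeline to tune first.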