Hi John,

On Thursday 23 August 2007 03:30:53 pm John Drescher wrote:
> > "When data spooling is enabled, Bacula automatically turns on attribute
> > spooling."
>
> That was just added last week as this is what happens in bacula 2.X and
> above.

No, the same text is already present in bacula.pdf for version 1.38.11 (rpm 
packager: D. Scott Barninger). In fact, none of the tips has changed since 
1.38.

> > This is not the case with 1.38.11 although it used to be with 1.36. I had
> > to enable attribute spooling explicitly in director config, otherwise it
> > stopped working. Not really a problem but I am wondering whether
> > documentation is indeed wrong.
> >
> > "When Bacula begins despooling data spooled to disk, it takes exclusive
> > use of the tape."
> > Again, this is not the case with 1.38.11 while I am pretty sure it was
> > before with 1.36. Now, all 5 concurrent jobs write to a single tape
> > simultaneously, which is a big problem.
>
> You don't want this? I consider this a very good feature as it allows
> me to combine clients to supply data  to my 2 drive LTO2 auto changer
> at the rate it can handle (instead of having it stop and start) which
> is over 40 MB/s. Do you have a slow tape drive?

I can't see how concurrent tape writes can possibly improve speed when data
spooling to disk is enabled. Since all data are completely spooled to disk
first, before the tape drives are engaged, despooling will be limited either
by the disk subsystem speed or by the tape drive speed, whichever is lower. In
either case the number of jobs despooled concurrently plays no role in
determining the total rate of data transfer. On the other hand, having data
blocks from several different jobs intermixed and dispersed across several
tapes is a bad thing (as the Bacula documentation itself correctly states).
For one thing, restoring a single job will take more tapes (and time). There
could also be issues with job retention vs. tape retention, and so on.

Specifically, I have a setup with a few very large jobs (fileserver backups)
and a large number of relatively small jobs (desktop clients). I would like
the large jobs to be stored contiguously, spanning the minimal number of
tapes. It appears that they now get interleaved with the smaller jobs that are
despooled simultaneously. As I said, it is possible to prevent this by
carefully tuning priorities and schedule times, but the documentation stated
that this was not necessary.
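
For what it's worth, the kind of director-side tuning I mean looks roughly
like this (a sketch only -- the job and client names are made up, and the
other required Job directives are omitted):

  # Hypothetical Job resources; only the spooling/priority directives matter.
  Job {
    Name = "fileserver-full"      # one of the large jobs
    Client = fileserver-fd
    Spool Data = yes              # spool to disk before writing to tape
    Spool Attributes = yes        # had to set this explicitly under 1.38.11
    Priority = 8                  # lower number runs (and despools) first
  }

  Job {
    Name = "desktop-incr"         # one of the many small jobs
    Client = desktop01-fd
    Spool Data = yes
    Spool Attributes = yes
    Priority = 12                 # held back until higher-priority jobs finish
  }

Since jobs with different priorities are not run concurrently, the small jobs
wait until the large ones have despooled, so the large jobs stay contiguous on
tape. My point is only that, going by the documentation, this fiddling should
not have been necessary.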

> > I can fix it fiddling with priorities and schedule
> > times of course but is it a known bug?
>
> Bug? Again I am confused.

Well, at least a documentation discrepancy.

--Ivan

