Hello,

Can you give me the exact location of that text in the document and
what Bacula version you are referring to?

The facts as we know them, based on the design of the software and on what
we have seen in testing, are:

- When a Job begins despooling, that Job and only that Job stops
spooling and begins despooling.
- When the Job is done despooling, it will begin spooling again.
- While any Job is despooling, all other Jobs will continue spooling,
provided there is sufficient spooling space.
- If Jobs run out of spooling space, they start despooling, but only
one job at a time despools to any given volume, so they may wait.
- Once the spooling space fills, if you are writing to several Volumes,
you can get into a sort of spooling-thrashing situation where various jobs
finish despooling and others start using the space that is freed.  Thus it
is important not to run out of spooling space (the configuration sketch
after this list shows the directives that control spool space).
- Note that running out of spooling space, in the sense I am using the
words here, can never happen to a single job.  It happens only when
running multiple jobs to multiple devices.
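
For reference, the directives involved look roughly like the sketch below.
This is only an illustrative sketch: the resource names, device path, spool
directory, and sizes are made-up examples, and both resources are abbreviated.
The spool directives themselves (Spool Directory, Maximum Spool Size, and
Maximum Job Spool Size in the Storage daemon's Device resource; Spool Data
and Spool Attributes in the Director's Job resource) are what control the
behavior described above.

  # bacula-sd.conf -- Device resource (example values only)
  Device {
    Name = LTO-Drive-0
    Media Type = LTO-5
    Archive Device = /dev/nst0
    Spool Directory = /var/spool/bacula   # where spool files are written
    Maximum Spool Size = 500G             # total spool space this device may use
    Maximum Job Spool Size = 100G         # per-job cap; when a job reaches it,
                                          # that job despools, then resumes spooling
  }

  # bacula-dir.conf -- Job (or JobDefs) resource (example values only,
  # other required Job directives omitted)
  Job {
    Name = "BackupClient1"
    Spool Data = yes          # spool job data to disk before writing to tape
    Spool Attributes = yes    # spool attribute (catalog) records as well
  }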

We have a project planned for Bacula Enterprise in which a Job that is
blocked because it is despooling will be able to continue spooling, provided
there is sufficient space.  But this is an Enterprise feature, and unless
someone from the community implements it, it will only be in the Enterprise
version.  In fact, we have a series of plans for adding high-end Enterprise
features to the Storage daemon.  Most of them should be completed by the end
of the year (hopefully before).

Best regards,
Kern

On 03/20/2012 10:46 PM, Stephen Thompson wrote:
>
> Hello,
>
> I was wondering if anyone could confirm what I've noticed on my own
> instance of bacula, which seems contrary to the Bacula manual.
>
>   From the Data Spooling section:
>> If you are running multiple simultaneous jobs, Bacula will continue spooling 
>> other jobs while one is despooling to tape, provided there is sufficient 
>> spool file space.
> This seems to be true only if the jobs in question were launched at the
> same time/concurrently.  New jobs launched while a job is despooling
> enter a "running" state, but they do not begin to spool until the
> existing job(s) finish despooling.
> This is very sad, because I just came into a windfall of spool space and
> I was hoping to run jobs back to back, such that while one set of jobs
> was despooling, I could have the next set spooling, and so on.
> Note, all jobs use the same Pool and have the same priority.
>
> I noticed this because I've increased the spool file size to the size of
> my largest job.  Now, unlike before (with smaller spool file sizes), when
> new jobs had a chance to start spooling while the existing jobs were in a
> spooling phase, the new jobs cannot begin spooling, apparently because the
> existing jobs stay in a despooling phase, which can last for quite some
> time.
>
> I see an old bug, 0001231, with a similar issue; in its history it is
> pointed out that the problem may not be that new jobs can't spool while
> existing jobs despool, but rather that the new jobs cannot verify that
> they will have tape access, which is a step that comes before spooling
> begins.
>
> I wonder if this is simply the state of affairs and, if so, whether there
> are any plans to improve upon this 'inefficiency'.
>
> thanks!
> Stephen

