I'm trying to understand Just-In-Time device reservation and how it
interacts with spooling.

I have always used spooling but the document says:
https://docs.bareos.org/TasksAndConcepts/DataSpooling.html

"This means that you can spool multiple simultaneous jobs to disk"

This never worked for me: if I let more than one job run against a pool
with a single tape drive, even with spooling, the second job would report
that no device was available and would not start.

Does Just-In-Time reservation address this? Given that spool files are attached
to devices, it's not clear how it would know which spool file to use or how to
limit its size.

A few of my clients are road warriors and their home upload is very slow: a
3 GB incremental might take 5 hours. This ties up a device doing nothing for a
long time, often causing Consolidation and Copy jobs to wait.

How do Just-In-Time reservation and spooling work together to address this?
In a perfect world the FDs write to spool files in parallel, up to the allowed
number of concurrent jobs, but despooling is serialized so that only one job at
a time writes to the tape drive, interleaving between despool runs. That is,
for a job that needs multiple despools, while its second spool is filling the
device is freed to despool something else, instead of being tied up the whole
time.

Everything I found about Just-In-Time reservation says it keeps tape drives
busy, but it says "until ready to write". Almost all our jobs start writing
something quickly; they just might be slow because of the network.

Thanks. I guess there just isn't much in the documentation that fully explains
the expected behavior.

Brock Palen
[email protected]
www.mlds-networks.com
Websites, Linux, Hosting, Joomla, Consulting


