Ok, will do.
Thanks for providing some context on this topic.
Alex
On Sun, Jan 11, 2015 at 8:34 PM, Patrick Wendell wrote:
> Priority scheduling isn't something we've supported in Spark and we've
> opted to support FIFO and Fair scheduling and asked users to try and
> fit these to the needs of
Priority scheduling isn't something we've supported in Spark and we've
opted to support FIFO and Fair scheduling and asked users to try and
fit these to the needs of their applications.
In practice, what I've seen of priority schedulers, such as the
Linux CPU scheduler, is that strict priority
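(For context, the two modes Patrick mentions are selected with a single
configuration property. A minimal sketch, using the standard Spark settings
file; FIFO is the default:

```
# conf/spark-defaults.conf
# FIFO is the default; FAIR makes pools share cluster resources
# round-robin instead of running jobs strictly in submission order.
spark.scheduler.mode  FAIR
```
)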
Yes, if you are asking about developing a new priority queue job scheduling
feature and not just about how job scheduling currently works in Spark, then
that's a dev list issue. The current job scheduling priority is at the
granularity of pools containing jobs, not the jobs themselves; so if you
re
Cody,
I might be able to improve the scheduling of my jobs by using a few
different pools with weights equal to, say, 1, 1e3 and 1e6, effectively
getting a small handful of priority classes. Still, this is not quite
what I am describing, which is why my original post was on the dev
list.
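(The pool-based workaround described above can be sketched as a fair
scheduler allocation file plus a per-job pool assignment. Pool names here
are hypothetical; the weights mirror the 1 / 1e3 / 1e6 values from the
message:

```
<!-- conf/fairscheduler.xml -->
<allocations>
  <pool name="low">
    <weight>1</weight>
  </pool>
  <pool name="medium">
    <weight>1000</weight>
  </pool>
  <pool name="high">
    <weight>1000000</weight>
  </pool>
</allocations>
```

A job is then routed to a pool before submission, e.g. in Scala:
`sc.setLocalProperty("spark.scheduler.pool", "high")`. Because weights only
bias the fair-share proportions rather than imposing strict ordering, this
approximates a small number of priority classes but is not true priority
scheduling, which is the distinction Alex is drawing.)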
Is it possible to specify a priority level for a job, such that the active
jobs might be scheduled in order of priority?
Alex