[
https://issues.apache.org/jira/browse/SPARK-15176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311073#comment-15311073
]
Mark Hamstra commented on SPARK-15176:
--------------------------------------
I'm not strongly committed to any API (other than the fact that most of it is
already public), but I think we should strive to have as much symmetry as makes
sense between the-thing-that-enforces-a-lower-resource-bound and
the-thing-that-enforces-an-upper-resource-bound on a pool. That includes how
the documentation talks about these things.
To encumber the discussion with something that likely needs to go into another
JIRA and additional PRs, what I think we want longer term is not just a static
upper bound on the number of cores that a pool can use, but rather to allow the
pool to acquire as many cores over its minShare as are available, but for
running Tasks to be preemptible until no more than the maximum number of cores
for that pool are used. With static upper bounds, we're likely to leave cores
idle and to make Jobs take longer than necessary in many instances.
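The preemption rule proposed above could be modeled roughly as follows (a
hypothetical sketch, not Spark code; `pool_usage` and `pool_max` are
illustrative names, not existing scheduler fields):

```python
def preemptible_count(pool_usage: int, pool_max: int) -> int:
    """Number of this pool's running tasks that may be preempted when
    another pool needs cores: everything above the pool's configured
    maximum. At or below the maximum, running tasks are safe; above it,
    the pool is only borrowing otherwise-idle capacity."""
    return max(0, pool_usage - pool_max)

# A pool holding 10 cores with a configured max of 8 is borrowing 2 cores
# of idle capacity, so 2 of its running tasks are candidates for preemption.
borrowed = preemptible_count(10, 8)

# A pool below its max has nothing preemptible under this rule.
safe = preemptible_count(5, 8)
```

Under this rule a static cap is only enforced lazily: the pool keeps its extra
cores while no one else wants them, which avoids the idle-core problem of a
hard upper bound.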
> Job Scheduling Within Application Suffers from Priority Inversion
> -----------------------------------------------------------------
>
> Key: SPARK-15176
> URL: https://issues.apache.org/jira/browse/SPARK-15176
> Project: Spark
> Issue Type: Bug
> Components: Scheduler
> Affects Versions: 1.6.1
> Reporter: Nick White
>
> Say I have two pools, and N cores in my cluster:
> * I submit a job to one, which has M >> N tasks
> * N of the M tasks are scheduled
> * I submit a job to the second pool - but none of its tasks get scheduled
> until a task from the other pool finishes!
> This can lead to unbounded denial-of-service for the second pool - regardless
> of `minShare` or `weight` settings. Ideally Spark would support a preemption
> mechanism, or an upper bound on a pool's resource usage.
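The starvation in the quoted report can be shown with a toy model (not Spark
code): cores are only reassigned when a running task finishes, and `minShare`/
`weight` influence who gets a freed core, not whether anyone is preempted.
`assign_freed_core` is an illustrative stand-in for fair-share ordering:

```python
from collections import Counter

def assign_freed_core(usage, weights):
    """Give a freed core to the pool with the lowest usage/weight ratio
    (a minimal stand-in for fair-share ordering)."""
    return min(weights, key=lambda p: usage[p] / weights[p])

N = 4
usage = Counter({"A": N, "B": 0})   # pool A's tasks occupy all N cores
weights = {"A": 1, "B": 1}

# Pool B can run nothing until a pool-A task completes on its own:
# there is no preemption path in the model, whatever B's weight is.
usage["A"] -= 1                      # one A task finishes, freeing a core
winner = assign_freed_core(usage, weights)
usage[winner] += 1
```

If pool A's job has M >> N long-running tasks, pool B waits an unbounded time
for that first completion, which is exactly the priority inversion described.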
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)