[
https://issues.apache.org/jira/browse/HADOOP-4803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12671720#action_12671720
]
Hemanth Yamijala commented on HADOOP-4803:
------------------------------------------
So, do you mean that you wouldn't maintain job deficits at all, and instead
would track how long pools (or jobs) have been below their min share, and
sort by that?
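A minimal sketch of what that alternative ordering might look like, assuming a per-pool timestamp for when it last dropped below its min share (the `Pool` class, field names, and `scheduleOrder` helper here are illustrative, not the fair scheduler's actual code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical model: instead of accumulating per-job deficits, each pool
// records when it fell below its minimum share, and the scheduler sorts
// pools by how long they have been starved of that min share.
class Pool {
    final String name;
    final int runningTasks;
    final int minShare;
    final long belowMinShareSince; // wall-clock ms when pool dropped below min share, or -1 if it never did

    Pool(String name, int runningTasks, int minShare, long belowMinShareSince) {
        this.name = name;
        this.runningTasks = runningTasks;
        this.minShare = minShare;
        this.belowMinShareSince = belowMinShareSince;
    }

    long starvationTime(long now) {
        // A pool at or above its min share is not starved at all.
        if (runningTasks >= minShare || belowMinShareSince < 0) return 0;
        return now - belowMinShareSince;
    }
}

public class StarvationSort {
    // Order pools so the one starved of its min share the longest is served first.
    static List<Pool> scheduleOrder(List<Pool> pools, long now) {
        List<Pool> sorted = new ArrayList<>(pools);
        sorted.sort(Comparator.comparingLong((Pool p) -> p.starvationTime(now)).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        long now = 100_000L;
        List<Pool> pools = List.of(
            new Pool("bigBatch", 50, 40, -1),      // above min share: starvation 0
            new Pool("smallAdhoc", 0, 5, 10_000L), // below min share for 90 s
            new Pool("medium", 2, 5, 60_000L)      // below min share for 40 s
        );
        for (Pool p : scheduleOrder(pools, now)) {
            System.out.println(p.name + " starved " + p.starvationTime(now) + " ms");
        }
    }
}
```

Under this ordering, a small pool that has sat below its min share for a long time jumps ahead of a large job regardless of how big a deficit the large job has accumulated, which is one way to address the hogging described below.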
> large pending jobs hog resources
> --------------------------------
>
> Key: HADOOP-4803
> URL: https://issues.apache.org/jira/browse/HADOOP-4803
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/fair-share
> Reporter: Joydeep Sen Sarma
> Assignee: Matei Zaharia
>
> Observing the cluster over the last day, one thing I noticed is that small
> jobs (single-digit task counts) are not competing well against large jobs.
> What seems to happen is:
> - a large job comes along and has to wait a while behind other large jobs
> - slots are slowly transferred from one large job to another
> - small tasks keep waiting forever
> Is this an artifact of deficit-based scheduling? It seems that long-pending
> large jobs are out-scheduling small jobs.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.