[
https://issues.apache.org/jira/browse/MAPREDUCE-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12767352#action_12767352
]
Hemanth Yamijala commented on MAPREDUCE-1105:
---------------------------------------------
Comments on the 20 patch:
- Some of the comments on the trunk patch apply to the 20 patch as well; please
check them once to ensure they are covered too.
- Updates to the forrest documentation need to be done.
- Javadoc for the API CapacitySchedulerConf.getMaxCapacity can be improved.
Something like: Return the maximum percentage of the cluster capacity that can
be used by the given queue. Also, @param and @return values can be filled up.
- The call to getRaw seems redundant, as we can query with default value -1
right from the beginning.
- Since maxCapacity can be equal to capacity, in the IllegalArgumentException
in getMaxCapacity, we can say maxCapacity should be greater than or equal to
capacity.
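The three suggestions above could be combined along these lines. This is only an illustrative sketch, not the actual patch: the property names, the Properties-backed store, and the helper set() method are placeholders standing in for the real CapacitySchedulerConf internals.

```java
// Hypothetical sketch of the suggested getMaxCapacity shape; names of the
// config properties and the backing store are assumptions, not the real code.
class CapacitySchedulerConf {
  private final java.util.Properties props = new java.util.Properties();

  void set(String key, String value) { props.setProperty(key, value); }

  /**
   * Return the maximum percentage of the cluster capacity that can be
   * used by the given queue.
   * @param queue name of the queue
   * @return maximum capacity as a percentage, or -1 if not configured
   */
  float getMaxCapacity(String queue) {
    // Query with the default value -1 directly, instead of a separate
    // raw lookup followed by a null check.
    float maxCapacity = Float.parseFloat(
        props.getProperty(queue + ".maximum-capacity", "-1"));
    float capacity = Float.parseFloat(
        props.getProperty(queue + ".capacity", "-1"));
    if (maxCapacity != -1 && maxCapacity < capacity) {
      // maxCapacity may legitimately equal capacity, so the message
      // says "greater than or equal to".
      throw new IllegalArgumentException("maximum-capacity of queue " + queue
          + " should be greater than or equal to its capacity");
    }
    return maxCapacity;
  }
}
```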
- Comments in testHighMemoryBlockingWithMaxCapacity are broken. For instance,
we have 4 maps and 4 reduces but the comment says 2 reduces. Can we please not
comment on or log things that are obvious? Then we wouldn't have the pain of
keeping them in sync with the code.
- Also, testHighMemoryBlockingWithMaxCapacity may need change after fixing the
bug about adhering to limits while scheduling high RAM tasks, since we won't be
able to assign tasks from the high RAM reduces without crossing limits.
- For testUserLimitsWithMaxCapacities, the last call should be null. As
discussed offline, I think there's a bug with checkMultipleAssignment which is
throwing a false positive.
- checkMultipleAssignment is being called only when multiple assignment is set
to true. So we can verify the exact number of tasks returned, i.e.
tasks.size() should never be more than the expected number of tasks.
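The check being suggested could look something like the following. The helper name, the String task representation, and the use of a plain AssertionError are all illustrative; the real test would assert against the scheduler's Task objects in its own test framework.

```java
import java.util.List;

// Hypothetical helper illustrating the suggestion: assert the exact number of
// tasks the scheduler returned, rather than only checking an upper bound.
class ExactAssignmentCheck {
  static void checkTaskCount(List<String> tasks, int expected) {
    if (tasks.size() != expected) {
      throw new AssertionError("expected exactly " + expected
          + " tasks, but scheduler returned " + tasks.size());
    }
  }
}
```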
> CapacityScheduler: It should be possible to set queue hard-limit beyond its
> actual capacity
> --------------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-1105
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-1105
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: contrib/capacity-sched
> Affects Versions: 0.21.0
> Reporter: Arun C Murthy
> Priority: Blocker
> Fix For: 0.21.0
>
> Attachments: MAPRED-1105-21-1.patch,
> MAPREDUCE-1105-version20.patch.txt
>
>
> Currently the CS caps a queue's capacity to its actual capacity if a
> hard-limit is specified to be greater than its actual capacity. We should
> allow the queue to go up to the hard-limit if specified.
> Also, I propose we change the hard-limit unit to be percentage rather than
> #slots.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.