[
https://issues.apache.org/jira/browse/MAPREDUCE-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291382#comment-13291382
]
Hadoop QA commented on MAPREDUCE-4311:
--------------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12531301/MAPREDUCE-4311.patch
against trunk revision .
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 5 new or modified test
files.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 eclipse:eclipse. The patch built with eclipse:eclipse.
+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9)
warnings.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed unit tests in
hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/2446//testReport/
Console output:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/2446//console
This message is automatically generated.
> Capacity scheduler.xml does not accept decimal values for capacity and
> maximum-capacity settings
> ------------------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-4311
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4311
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: contrib/capacity-sched, mrv2
> Affects Versions: 0.23.3
> Reporter: Thomas Graves
> Assignee: Karthik Kambatla
> Attachments: MAPREDUCE-4311.patch
>
>
> If the capacity scheduler's capacity or maximum-capacity setting is given a decimal value, the ResourceManager fails to start:
> Error starting ResourceManager
> java.lang.NumberFormatException: For input string: "10.5"
> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
> at java.lang.Integer.parseInt(Integer.java:458)
> at java.lang.Integer.parseInt(Integer.java:499)
> at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:713)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getCapacity(CapacitySchedulerConfiguration.java:147)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue.<init>(LeafQueue.java:147)
> at org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.parseQueue(CapacityScheduler.java:297)
> at ...
> 0.20 used to accept decimal values, and this could be an issue on large clusters that have queues with small allocations.
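For context, a minimal self-contained sketch (plain JDK, no Hadoop dependency) of the failure mode above and of the presumed shape of the fix: Configuration.getInt() ultimately calls Integer.parseInt(), which rejects "10.5", whereas parsing the value as a float accepts it. Treating the queue capacity as a float is an assumption about what the attached patch does, and the property name in the comment is illustrative only.

    public class CapacityParseSketch {
        public static void main(String[] args) {
            // Value taken from the stack trace above; the property name is illustrative,
            // e.g. yarn.scheduler.capacity.root.default.capacity in capacity-scheduler.xml.
            String configuredCapacity = "10.5";

            try {
                // Current behaviour: Configuration.getInt() delegates to Integer.parseInt(),
                // which rejects a decimal string and aborts ResourceManager startup.
                int capacityAsInt = Integer.parseInt(configuredCapacity);
                System.out.println("parsed as int: " + capacityAsInt);
            } catch (NumberFormatException e) {
                System.out.println("int parse failed: " + e.getMessage());
            }

            // Presumed fix: read the value as a float (Configuration.getFloat() on the
            // Hadoop side), so fractional capacities such as 10.5 are accepted.
            float capacityAsFloat = Float.parseFloat(configuredCapacity);
            System.out.println("parsed as float: " + capacityAsFloat);
        }
    }

Under that assumption, a queue could be assigned, say, 10.5 percent of cluster capacity, which is exactly the case that matters on large clusters with many queues holding small allocations.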