Tianyin Xu created YARN-4499:
--------------------------------

             Summary: Bad config values of "scheduler.maximum-allocation-vcores"
                 Key: YARN-4499
                 URL: https://issues.apache.org/jira/browse/YARN-4499
             Project: Hadoop YARN
          Issue Type: Bug
          Components: scheduler
    Affects Versions: 2.6.2, 2.7.1
            Reporter: Tianyin Xu


Currently, the default value of {{yarn.scheduler.maximum-allocation-vcores}} is 
{{4}}, according to {{YarnConfiguration.java}}:

{code}
  public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
      YARN_PREFIX + "scheduler.maximum-allocation-vcores";
  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
{code}

However, according to 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml],
 this value should be {{32}}.
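Until the mismatch is resolved, operators can sidestep the ambiguity by setting the value explicitly in {{yarn-site.xml}} (a sketch; the value {{8}} below is only illustrative, not a recommendation):

{code}
<!-- yarn-site.xml: pin the scheduler cap explicitly so the
     code/doc disagreement (4 vs. 32) no longer matters.
     8 is an example value; size it to the node's cpu-vcores. -->
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>
{code}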

Yes, this seems to be a doc error, but I feel the default value should match 
{{yarn.nodemanager.resource.cpu-vcores}}: if we have {{8}} cores available 
for scheduling, there is little reason to cap a single allocation at {{4}}...

Cloudera's article on [Tuning the Cluster for MapReduce v2 (YARN) 
|http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html]
 likewise suggests that the maximum value of 
{{yarn.scheduler.maximum-allocation-vcores}} "is usually equal to 
{{yarn.nodemanager.resource.cpu-vcores}}..."

The doc error is quite harmful: a quick web search shows that users are 
already confused by it, for example,
https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098

(Seriously, I think the default should be computed automatically, equal to the 
number of cores on the machine...)
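A minimal sketch of that idea (hypothetical helper, not existing YARN code; it simply asks the JVM for the core count instead of using a hard-coded constant):

{code}
// Hypothetical sketch, not part of YARN: derive the vcore default
// from the machine rather than a fixed constant like 4 or 32.
public class VcoreDefaults {
    public static int defaultMaxAllocationVcores() {
        // Number of processors visible to the JVM on this node.
        return Runtime.getRuntime().availableProcessors();
    }
}
{code}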


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)