[ https://issues.apache.org/jira/browse/YARN-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tianyin Xu updated YARN-4499:
-----------------------------
Description:
Currently, the default value of {{yarn.scheduler.maximum-allocation-vcores}} is {{4}}, according to {{YarnConfiguration.java}}:
{code}
public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
    YARN_PREFIX + "scheduler.maximum-allocation-vcores";
public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
{code}
However, according to [yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml], this value should be {{32}}.

Yes, this seems to be a doc error, but I feel that the default value should be the same as {{yarn.nodemanager.resource.cpu-vcores}} (whose default is {{8}}): if we have {{8}} cores available for scheduling, there is little reason to cap a single allocation at {{4}}...

Cloudera's article on [Tuning the Cluster for MapReduce v2 (YARN)|http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html] also suggests that "the maximum value (of {{yarn.scheduler.maximum-allocation-vcores}}) is usually equal to {{yarn.nodemanager.resource.cpu-vcores}}..."

The doc error is pretty bad. A simple search on the Internet shows that people are confused by it, for example:
https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098

(But seriously, I think we should have an automatic default equal to the number of cores on the machine...)
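The "automatic default" suggested above could be sketched as follows. This is a hypothetical illustration, not YARN code: the class and method names are invented, and it only shows that the JVM's standard {{Runtime.availableProcessors()}} call can supply a machine-derived value in place of the hard-coded constant {{4}}:

```java
// Hypothetical sketch only (not part of YARN): derive a candidate default for
// yarn.scheduler.maximum-allocation-vcores from the machine's core count
// instead of a hard-coded constant.
public class VcoresDefaultSketch {

    /** Number of cores visible to the JVM, used as the candidate default. */
    static int automaticMaxAllocationVcores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("candidate default for "
                + "yarn.scheduler.maximum-allocation-vcores: "
                + automaticMaxAllocationVcores());
    }
}
```

A real implementation would of course live inside the ResourceManager's configuration handling rather than a standalone class; the sketch only captures the idea of computing the default at runtime.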
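Until the code default and yarn-default.xml agree, operators hitting this mismatch can pin the value explicitly. A minimal yarn-site.xml fragment, assuming an 8-vcore NodeManager (the value {{8}} is an example; match your hardware), might look like:

```xml
<!-- yarn-site.xml fragment: pin the scheduler maximum to the NodeManager's
     vcore count. The value 8 is an example; set it to your machine's cores. -->
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
```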
> Bad config values of "scheduler.maximum-allocation-vcores"
> ----------------------------------------------------------
>
>                 Key: YARN-4499
>                 URL: https://issues.apache.org/jira/browse/YARN-4499
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 2.7.1, 2.6.2
>            Reporter: Tianyin Xu
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)