[ 
https://issues.apache.org/jira/browse/YARN-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated YARN-4499:
-----------------------------
    Description: 
Currently, the default value of {{yarn.scheduler.maximum-allocation-vcores}} is 
{{4}}, according to {{YarnConfiguration.java}}:

{code}
  public static final String RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES =
      YARN_PREFIX + "scheduler.maximum-allocation-vcores";
  public static final int DEFAULT_RM_SCHEDULER_MAXIMUM_ALLOCATION_VCORES = 4;
{code}
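To make the fallback concrete, here is a minimal, hypothetical sketch (plain JDK {{Properties}}, not the actual Hadoop {{Configuration}} class or code path) of why the hardcoded constant is what users actually get: when the key is absent from the loaded configuration, lookup falls back to the in-code default of {{4}}, no matter what yarn-default.xml documents.

{code}
import java.util.Properties;

// Hypothetical illustration only -- mimics the "missing key falls back to the
// hardcoded default" behavior, so an unset property yields 4, not the
// documented 32.
public class VcoresDefaultDemo {
    static final String KEY = "yarn.scheduler.maximum-allocation-vcores";
    static final int HARDCODED_DEFAULT = 4;   // from YarnConfiguration.java

    static int getInt(Properties conf, String key, int defaultValue) {
        String v = conf.getProperty(key);
        return (v == null) ? defaultValue : Integer.parseInt(v.trim());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();   // user did not set the key
        System.out.println(getInt(conf, KEY, HARDCODED_DEFAULT)); // prints 4
    }
}
{code}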

However, according to 
[yarn-default.xml|https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml],
 this value should be {{32}}.

Yes, this seems to be a doc error, but I feel that the default value should be 
the same as {{yarn.nodemanager.resource.cpu-vcores}} (whose default is {{8}}) 
---if we have {{8}} cores available for scheduling, there is little reason to 
cap a single allocation at {{4}}...

Cloudera's article on [Tuning the Cluster for MapReduce v2 (YARN)|http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html]
 also suggests that "the maximum value 
({{yarn.scheduler.maximum-allocation-vcores}}) is usually equal to 
{{yarn.nodemanager.resource.cpu-vcores}}..."
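Following that guidance, the yarn-site.xml overrides would look something like this (values shown for an 8-core node; adjust to the actual hardware):

{code}
<!-- yarn-site.xml: align the scheduler max with the per-node vcore count -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>
{code}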

At the very least, we should fix the doc. The error is pretty bad: a quick web 
search shows that people are confused by it, for example,
https://community.cloudera.com/t5/Cloudera-Manager-Installation/yarn-nodemanager-resource-cpu-vcores-and-yarn-scheduler-maximum/td-p/31098

(But seriously, I think we should compute the defaults automatically, with the 
min set to {{1}} and the max equal to the number of cores on the machine...)
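A hypothetical sketch of such an automatic default (plain JDK; {{Runtime.availableProcessors()}} stands in for however YARN would actually detect the core count):

{code}
public class AutoVcoresDefaults {
    // Proposed behavior: min pinned at 1, max derived from the machine.
    static final int MIN_ALLOCATION_VCORES = 1;

    static int defaultMaxAllocationVcores() {
        // Clamp so the max can never fall below the min.
        return Math.max(MIN_ALLOCATION_VCORES,
                        Runtime.getRuntime().availableProcessors());
    }

    public static void main(String[] args) {
        System.out.println("min=" + MIN_ALLOCATION_VCORES
                + " max=" + defaultMaxAllocationVcores());
    }
}
{code}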

IMO, see [IBM's Knowledge Center 
recommendation|https://www-01.ibm.com/support/knowledgecenter/SSGSMK_7.1.0/management_sym/yarn_configuring_resource_scheduler.html],
 which even sets the min greater than the max...




> Bad config values of "yarn.scheduler.maximum-allocation-vcores"
> ---------------------------------------------------------------
>
>                 Key: YARN-4499
>                 URL: https://issues.apache.org/jira/browse/YARN-4499
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 2.7.1, 2.6.2
>            Reporter: Tianyin Xu
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
