[ https://issues.apache.org/jira/browse/YARN-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14572628#comment-14572628 ]

Rohith commented on YARN-3758:
------------------------------

Had a look at the code for both CS and FS. The minimum-allocation semantics and 
behavior are different across CS and FS.
# CS : It is straightforward: if any request is smaller than 
minimum-allocation-mb, CS normalizes the request up to minimum-allocation-mb, 
and containers are allocated with minimum-allocation-mb. 
# FS : if any request is smaller than minimum-allocation-mb, FS normalizes the 
request using the factor {{yarn.scheduler.increment-allocation-mb}}. In the 
example from the description, minimum-allocation-mb is 256mb, but 
increment-allocation-mb defaults to 1024mb, so containers are always allocated 
1024mb. {{yarn.scheduler.increment-allocation-mb}} has a huge effect: it 
changes the requested memory, and the allocation is made with the newly 
calculated resource (see the sketch after this list).
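
As a rough illustration of the difference, here is a minimal sketch in plain 
Java (not the actual scheduler code; the class and method names are made up for 
the example), assuming minimum-allocation-mb = 256 and the FS default 
increment-allocation-mb = 1024:

{code:java}
// Illustrative only: mimics the normalization behavior described above,
// it is not the real YARN scheduler code.
public class NormalizeSketch {

  // CS-style: bump the ask up to at least minimum-allocation-mb.
  static int normalizeCapacityStyle(int askMb, int minMb) {
    return Math.max(askMb, minMb);
  }

  // FS-style: bump the ask up to the minimum, then round up to the next
  // multiple of increment-allocation-mb.
  static int normalizeFairStyle(int askMb, int minMb, int incrementMb) {
    int atLeastMin = Math.max(askMb, minMb);
    return ((atLeastMin + incrementMb - 1) / incrementMb) * incrementMb;
  }

  public static void main(String[] args) {
    System.out.println(normalizeCapacityStyle(256, 256));    // prints 256
    System.out.println(normalizeFairStyle(256, 256, 1024));  // prints 1024
  }
}
{code}

With these numbers, a 256mb ask stays 256mb under the CS-style rule but is 
rounded up to 1024mb under the FS-style rule, which matches the 1024mb 
containers reported on the second cluster.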

The behavior is not consistent between CS and FS. I am not sure why an 
additional configuration was introduced in FS. Is it a bug?

> The minimum memory setting (yarn.scheduler.minimum-allocation-mb) is not 
> working in container
> --------------------------------------------------------------------------------------------
>
>                 Key: YARN-3758
>                 URL: https://issues.apache.org/jira/browse/YARN-3758
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 2.4.0
>            Reporter: skrho
>
> Hello there~~
> I have 2 clusters.
> The first cluster is 5 nodes, 1 default application queue, Capacity 
> Scheduler, 8G physical memory on each node.
> The second cluster is 10 nodes, 2 application queues, Fair Scheduler, 230G 
> physical memory on each node.
> Whenever a mapreduce job is running, I want the resourcemanager to set the 
> minimum memory of 256m for containers.
> So I changed the configuration in yarn-site.xml & mapred-site.xml:
> yarn.scheduler.minimum-allocation-mb : 256
> mapreduce.map.java.opts : -Xms256m 
> mapreduce.reduce.java.opts : -Xms256m 
> mapreduce.map.memory.mb : 256 
> mapreduce.reduce.memory.mb : 256 
> In the first cluster, whenever a mapreduce job is running, I can see used 
> memory of 256m in the web console ( http://installedIP:8088/cluster/nodes ).
> But in the second cluster, whenever a mapreduce job is running, I can see 
> used memory of 1024m in the web console ( http://installedIP:8088/cluster/nodes ).
> I know the default memory value is 1024m, so if the memory setting is not 
> changed, the default value is used.
> I have been testing for two weeks, but I don't know why the minimum memory 
> setting is not working in the second cluster.
> Why does this difference happen?
> Is my configuration wrong, or is there a bug?
> Thank you for reading~~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
