Chuan Liu commented on YARN-2190:

[~vvasudev], thanks for the review! Please see my answers below.

bq. 1. What is the behaviour of a process that tries to exceed the allocated 
memory? Will it start swapping or will it be killed?
The OS will limit the process's memory to the specified value. After the limit is 
reached, further memory allocation calls, e.g. {{malloc()}}, will return an 
error. What happens next depends on the runtime: in the JVM, the application 
will get an {{OutOfMemoryError}} and terminate; in C#, I was told the CLR can 
detect memory pressure and react. I think swapping is an orthogonal OS feature 
and not related to this.
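To make the JVM behaviour concrete, here is a minimal sketch (illustrative only, not part of the patch) showing that a failed allocation surfaces as an {{OutOfMemoryError}} the application can observe, analogous to {{malloc()}} returning an error under the OS limit:

```java
// Illustrative sketch: when the JVM cannot satisfy an allocation, the
// application sees an OutOfMemoryError, the managed-runtime analogue of a
// failed malloc() call under an OS-enforced memory limit.
public class OomDemo {
    // Attempt to allocate a long[] of the given length; report whether
    // the JVM signalled the failure with an OutOfMemoryError.
    static boolean allocationFails(int length) {
        try {
            long[] block = new long[length];
            return block.length != length; // allocation succeeded
        } catch (OutOfMemoryError e) {
            return true; // the JVM surfaces the limit as an error
        }
    }
}
```

Under a job-object memory cap the same error would be raised as soon as the heap can no longer grow, rather than only at this extreme array size.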

bq. 2. Your code assumes a 1-1 mapping of physical cores to vcores. This 
assumption is/will be problematic, especially in heterogeneous clusters. You're 
better off using the ratio of (container-vcores/node-vcores) to determine cpu 
The implementation follows the existing 
{{org.apache.hadoop.yarn.api.records.Resource}} model. The existing doc says 
"A node's capacity should be configured with virtual cores equal to its number 
of physical cores". (Source: 
http://hadoop.apache.org/docs/current/api/index.html) I also checked the 
existing {{CgroupsLCEResourcesHandler}} implementation, which does not use 
node-vcores, i.e. {{yarn.nodemanager.resource.cpu-vcores}}, to determine CPU 
limits.
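For reference, the ratio-based approach the reviewer suggests could look roughly like the following sketch. The class and method names are illustrative (not from the patch), and the result is clamped to a CPU percentage such as a job object's rate control would accept:

```java
// Illustrative sketch of the reviewer's suggestion: derive a CPU
// percentage from the ratio (container-vcores / node-vcores), where
// node-vcores comes from yarn.nodemanager.resource.cpu-vcores, instead
// of assuming a 1-1 mapping of vcores to physical cores.
public final class CpuShare {
    // Returns the container's CPU share as a percentage in [1, 100].
    static int cpuPercent(int containerVcores, int nodeVcores) {
        if (containerVcores <= 0 || nodeVcores <= 0) {
            throw new IllegalArgumentException("vcores must be positive");
        }
        long pct = Math.round(100.0 * containerVcores / nodeVcores);
        // Clamp so a tiny share still gets some CPU and an oversized
        // request cannot exceed the whole node.
        return (int) Math.min(100, Math.max(1, pct));
    }
}
```

For example, a 2-vcore container on an 8-vcore node would get a 25% share regardless of the node's physical core count.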

bq. 3. Can you explain why you are modifying DefaultContainerExecutor? You've 
added a method for the old signature in ContainerExecutor.
I need an additional {{Resource}} parameter, which does not exist in the 
existing {{getRunCommand()}} method signature. I kept a method with the old 
signature to maintain backward compatibility, as this is an abstract class and 
there are other child implementations out there.
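The backward-compatibility pattern described above can be sketched as follows; the class, method, and parameter names here are illustrative, not the actual YARN code:

```java
// Illustrative sketch: an abstract base class grows a new overload that
// carries the extra resource information, while the old signature is
// kept and delegates to it, so existing subclasses and callers still
// compile and behave as before.
abstract class ExecutorBase {
    // Old signature, preserved for existing subclasses; delegates with
    // "no resource", matching the previous behaviour.
    String getRunCommand(String command, String groupId) {
        return getRunCommand(command, groupId, null);
    }

    // New overload taking the additional resource description; a null
    // resource means no limits are applied.
    String getRunCommand(String command, String groupId, String resource) {
        return resource == null
            ? command + " " + groupId
            : command + " " + groupId + " " + resource;
    }
}
```

Callers that pass the extra argument get the new behaviour; everyone else is untouched.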

bq. 4. Can you modify the comments/usage to specify the units of memory(bytes, 
MB, GB)?
This is already documented in the existing patch, which I have cited below. 
Where exactly would you like to see more comments? 
+         OPTIONS: -c [cores] set virtual core limits on the job object.\n\
+                  -m [memory] set the memory limit on the job object.\n\
+         The core limit is an integral value of number of cores. The memory\n\
+         limit is an integral number of memory in MB. The definition\n\
+         follows the org.apache.hadoop.yarn.api.records.Resource model.\n\
+         The limit will not be set if 0 or negative value is passed in as\n\
+         parameter(s).\n\
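The documented contract above (an integral number of MB following the {{Resource}} model, with 0 or negative meaning the limit is not set) can be sketched like this; the helper and constant names are illustrative only:

```java
// Illustrative sketch of the documented -m contract: the argument is an
// integral number of MB (per org.apache.hadoop.yarn.api.records.Resource),
// and a zero or negative value means the limit is left unset.
final class MemoryLimit {
    static final long NO_LIMIT = -1L;

    // Convert an -m argument in MB to the byte count a job-object style
    // API would expect; returns NO_LIMIT when the limit should be skipped.
    static long mbToBytes(long memoryMb) {
        if (memoryMb <= 0) {
            return NO_LIMIT; // 0 or negative: do not set the limit
        }
        return memoryMb * 1024L * 1024L;
    }
}
```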

> Provide a Windows container executor that can limit memory and CPU
> ------------------------------------------------------------------
>                 Key: YARN-2190
>                 URL: https://issues.apache.org/jira/browse/YARN-2190
>             Project: Hadoop YARN
>          Issue Type: New Feature
>          Components: nodemanager
>            Reporter: Chuan Liu
>            Assignee: Chuan Liu
>         Attachments: YARN-2190-prototype.patch, YARN-2190.1.patch, 
> YARN-2190.2.patch, YARN-2190.3.patch, YARN-2190.4.patch, YARN-2190.5.patch
> Yarn default container executor on Windows does not set the resource limit on 
> the containers currently. The memory limit is enforced by a separate 
> monitoring thread. The container implementation on Windows uses Job Object 
> right now. The latest Windows (8 or later) API allows CPU and memory limits 
> on the job objects. We want to create a Windows container executor that sets 
> the limits on job objects thus provides resource enforcement at OS level.
> http://msdn.microsoft.com/en-us/library/windows/desktop/ms686216(v=vs.85).aspx

This message was sent by Atlassian JIRA