[ https://issues.apache.org/jira/browse/MAPREDUCE-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12857468#action_12857468 ]

Scott Chen commented on MAPREDUCE-1221:
---------------------------------------

Arun, thanks for the review.
bq. We shouldn't be using the TT.fConf to read config values each time, please 
save it and re-use:
Good catch. I have made the change: a new member, reservedPhysicalMemoryOnTT, 
now caches this value so fConf is read only once.
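A minimal sketch of the idea, assuming a hypothetical property key and 
initialization method (not necessarily the exact names in the patch):

{code}
// Illustrative sketch only: cache the reserved-memory setting at
// startup instead of re-reading it from fConf on every check.
// The property key below is a placeholder, not necessarily the one
// this patch introduces.
private long reservedPhysicalMemoryOnTT;

private void initializeMemoryManagement() {
  // Read once and reuse; -1 means "not configured".
  reservedPhysicalMemoryOnTT = fConf.getLong(
      "mapred.tasktracker.reserved.physicalmemory.mb", -1L);
}
{code}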

bq. I don't understand why 'limitPhysical > 0' is necessary. If 
'doCheckPhysicalMemory' returns true, shouldn't limitPhysical always be 
positive? Please help me understand, thanks.
You are right about this one; that check is redundant. If doCheckPhysicalMemory 
returns true, the user has configured a physical memory limit (which should 
include the per-task limit), so the value cannot be negative. The logic now 
matches the virtual memory limit path, and I have removed the redundant check.
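In other words (an illustrative fragment with guessed names, not the patch's 
exact code):

{code}
// doCheckPhysicalMemory() returning true already implies the user has
// configured a positive per-task physical limit, so no separate
// 'limitPhysical > 0' guard is needed, mirroring the virtual-memory path:
private boolean isTaskOverPhysicalLimit(long rssUsed, long limitPhysical) {
  return doCheckPhysicalMemory() && rssUsed > limitPhysical;
}
{code}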

> Kill tasks on a node if the free physical memory on that machine falls below 
> a configured threshold
> ---------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1221
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1221
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: tasktracker
>    Affects Versions: 0.22.0
>            Reporter: dhruba borthakur
>            Assignee: Scott Chen
>             Fix For: 0.22.0
>
>         Attachments: MAPREDUCE-1221-v1.patch, MAPREDUCE-1221-v2.patch, 
> MAPREDUCE-1221-v3.patch, MAPREDUCE-1221-v4.patch, MAPREDUCE-1221-v5.txt
>
>
> The TaskTracker currently supports killing tasks if the virtual memory of a 
> task exceeds a set of configured thresholds. I would like to extend this 
> feature to enable killing tasks if the physical memory used by that task 
> exceeds a certain threshold.
> On a certain operating system (guess?), if user space processes start using 
> lots of memory, the machine hangs and dies quickly. This means that we would 
> like to prevent map-reduce jobs from triggering this condition. From my 
> understanding, the killing-based-on-virtual-memory-limits (HADOOP-5883) was 
> designed to address this problem. This works well when most map-reduce jobs 
> are Java jobs and have well-defined -Xmx parameters that specify the max 
> virtual memory for each task. On the other hand, if each task forks off 
> mappers/reducers written in other languages (python/php, etc), the total 
> virtual memory usage of the process-subtree varies greatly. In these cases, 
> it is better to use kill-tasks-using-physical-memory-limits.
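To make the mechanism concrete, here is a hedged, self-contained sketch (not 
the patch's code) of the kind of physical-memory reading involved; the 
TaskTracker's memory management aggregates comparable measurements over the 
task's whole process subtree via procfs:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hedged sketch: read one process's resident set size (RSS) from
// Linux procfs; a real monitor would sum this over the subtree.
public class RssReader {
  // Returns RSS in bytes for the given pid, or -1 if it cannot be read.
  static long rssBytes(int pid) {
    try {
      String stat = new String(
          Files.readAllBytes(Paths.get("/proc/" + pid + "/stat")));
      // The comm field can contain spaces; parse after the closing ')'.
      String[] rest = stat.substring(stat.lastIndexOf(')') + 2).split("\\s+");
      // Field 24 of /proc/<pid>/stat (1-based) is RSS in pages;
      // rest[0] is field 3, so RSS sits at rest[21].
      long rssPages = Long.parseLong(rest[21]);
      return rssPages * 4096L; // assumes 4 KiB pages, for illustration
    } catch (IOException | RuntimeException e) {
      return -1L;
    }
  }

  public static void main(String[] args) {
    System.out.println(rssBytes(Integer.parseInt(args[0])));
  }
}
{code}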
