[
https://issues.apache.org/jira/browse/HADOOP-5883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713006#action_12713006
]
Vinod K V commented on HADOOP-5883:
-----------------------------------
Looked at the patch. Looks very good overall. Only a few comments:
TaskMemoryManagerThread.java
- (+292) The check limit != JobConf.DISABLED_MEMORY_LIMIT is not needed
anymore, as with HADOOP-5881 the JT doesn't accept any jobs with no memory
configuration.
- (+207) We can avoid the duplicate calls to ProcessTree.getCumulativeVmem()
made just before and inside ProcessTreeOverLimit(). For this, we can retain
the current method for use in the testcases and add a new method that takes
both the limits as parameters (a rough sketch follows below).
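For illustration, a rough sketch of what that split could look like. The class
and method names here are made up for the sketch, and the real patch may carry
additional usage/limit parameters in the same way; this is not the patch's
actual code.
{code:java}
import org.apache.hadoop.util.ProcfsBasedProcessTree;

class MemoryLimitCheckSketch {
  // Retained convenience form, so the existing testcases can keep calling it.
  boolean isProcessTreeOverLimit(ProcfsBasedProcessTree pTree, String tid,
                                 long limit) {
    return isProcessTreeOverLimit(tid, pTree.getCumulativeVmem(), limit);
  }

  // New form that takes the already-computed usage and the limit as
  // parameters, so the value the monitoring loop just read is reused instead
  // of calling getCumulativeVmem() a second time.
  boolean isProcessTreeOverLimit(String tid, long currentMemUsage, long limit) {
    return currentMemUsage > limit;
  }
}
{code}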
TestProcfsBasedProcessTree.setupProcfsRootDir():
- Can't we just clean the proc dir ourselves before the test?
- We can check the return value of mkdirs() instead of making a new
File.exists() call. (Both points are sketched below.)
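A rough sketch of the suggested setup, assuming the test already has the
procfs root as a File; the cleanup helper and the exact error handling are
illustrative only.
{code:java}
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.fs.FileUtil;

class ProcfsTestSetupSketch {
  // Remove any leftover fake procfs tree from an earlier run, then create the
  // directory, relying on mkdirs()'s return value instead of a separate
  // File.exists() check.
  static void setupProcfsRootDir(File procfsRootDir) throws IOException {
    if (procfsRootDir.exists()) {
      FileUtil.fullyDelete(procfsRootDir);
    }
    if (!procfsRootDir.mkdirs()) {
      throw new IOException("Could not create " + procfsRootDir);
    }
  }
}
{code}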
Minor:
- Missing @throws IOException in javadoc of all the methods newly added in the
two testcases.
> TaskMemoryMonitorThread might shoot down tasks even if their processes
> momentarily exceed the requested memory
> --------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5883
> URL: https://issues.apache.org/jira/browse/HADOOP-5883
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: Hemanth Yamijala
> Attachments: HADOOP-5883.patch, HADOOP-5883.patch
>
>
> Currently the TaskMemoryMonitorThread kills tasks as soon as it detects they
> are consuming more memory than the specified maximum. There are valid cases
> (see HADOOP-5059) where a program executed from the task might momentarily
> occupy twice that amount of memory. Ideally the monitoring thread should
> handle this case.
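For illustration, one simple shape such tolerance could take (a rough sketch
only, with arbitrary thresholds; this is not the logic of the attached patch):
require the overage to persist across consecutive monitoring samples, while
still reacting immediately to a gross overshoot.
{code:java}
class SpikeTolerantLimitCheck {
  private int samplesOverLimit = 0;

  // Returns true only when usage is grossly over the limit, or has stayed
  // over the limit for two consecutive monitoring samples. The 2x factor and
  // the two-sample threshold are arbitrary values for this sketch.
  boolean shouldKill(long currentMemUsage, long limit) {
    if (currentMemUsage > 2 * limit) {
      return true;                    // way past the limit: act immediately
    }
    if (currentMemUsage > limit) {
      samplesOverLimit++;             // over the limit: remember it
    } else {
      samplesOverLimit = 0;           // back under the limit: reset
    }
    return samplesOverLimit >= 2;     // kill only on persistent overage
  }
}
{code}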