[
https://issues.apache.org/jira/browse/HADOOP-5883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713050#action_12713050
]
Hemanth Yamijala commented on HADOOP-5883:
------------------------------------------
Results of test-patch on the new patch.
[exec] +1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] +1 tests included. The patch appears to include 6 new or modified tests.
[exec]
[exec] +1 javadoc. The javadoc tool did not generate any warning messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
[exec]
[exec] +1 Eclipse classpath. The patch retains Eclipse classpath integrity.
[exec]
[exec] +1 release audit. The applied patch does not increase the total number of release audit warnings.
I also ran all unit tests; TestQueueCapacities timed out, but all other test cases passed.
> TaskMemoryMonitorThread might shoot down tasks even if their processes momentarily exceed the requested memory
> --------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-5883
> URL: https://issues.apache.org/jira/browse/HADOOP-5883
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Reporter: Hemanth Yamijala
> Attachments: HADOOP-5883.patch, HADOOP-5883.patch, HADOOP-5883.patch
>
>
> Currently the TaskMemoryMonitorThread kills tasks as soon as it detects that they
> are consuming more memory than the specified maximum. There are valid cases
> (see HADOOP-5059) where a program spawned by a task can momentarily occupy about
> twice the requested memory. Ideally, the monitoring thread should tolerate such
> short-lived spikes rather than killing the task immediately.
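Not part of the attached patch, just to illustrate the kind of behaviour the description asks for: a minimal, hypothetical sketch where a task is marked for killing only after it stays over its limit for a configurable number of consecutive monitoring cycles, so a momentary spike is forgiven. The names (SpikeTolerantMemoryCheck, shouldKill, gracePeriods) are invented for illustration and do not come from the patch.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the HADOOP-5883 patch itself): tolerate short-lived
// memory spikes by requiring the limit to be exceeded on several consecutive
// monitoring cycles before a task is considered for killing.
public class SpikeTolerantMemoryCheck {

  private final long memoryLimitBytes;   // per-task memory limit
  private final int gracePeriods;        // consecutive over-limit cycles tolerated
  private final Map<String, Integer> overLimitCounts = new HashMap<String, Integer>();

  public SpikeTolerantMemoryCheck(long memoryLimitBytes, int gracePeriods) {
    this.memoryLimitBytes = memoryLimitBytes;
    this.gracePeriods = gracePeriods;
  }

  /**
   * Called once per monitoring cycle with the task's current memory usage.
   * Returns true only when the task has stayed over the limit for more than
   * the allowed number of consecutive cycles, i.e. the spike is not momentary.
   */
  public boolean shouldKill(String taskId, long currentUsageBytes) {
    if (currentUsageBytes <= memoryLimitBytes) {
      overLimitCounts.remove(taskId);    // usage dropped back under the limit; reset
      return false;
    }
    int count = overLimitCounts.containsKey(taskId) ? overLimitCounts.get(taskId) : 0;
    count++;
    overLimitCounts.put(taskId, count);
    return count > gracePeriods;
  }

  // Tiny demonstration: a task that spikes for two cycles is spared,
  // one that stays over the limit for four cycles is killed.
  public static void main(String[] args) {
    SpikeTolerantMemoryCheck check =
        new SpikeTolerantMemoryCheck(512L * 1024 * 1024, 3);
    long overLimit = 1024L * 1024 * 1024;
    long underLimit = 256L * 1024 * 1024;

    // Momentary spike: over the limit for 2 cycles, then back under.
    System.out.println(check.shouldKill("task_1", overLimit));   // false
    System.out.println(check.shouldKill("task_1", overLimit));   // false
    System.out.println(check.shouldKill("task_1", underLimit));  // false, count reset

    // Sustained over-use: over the limit for 4 consecutive cycles.
    boolean kill = false;
    for (int i = 0; i < 4; i++) {
      kill = check.shouldKill("task_2", overLimit);
    }
    System.out.println(kill);                                    // true
  }
}
{code}

Other tolerance policies (e.g. averaging usage over a window, or a hard ceiling at a multiple of the limit) would fit the same structure; the point is only that a single over-limit observation should not be grounds for killing the task.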