[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15591860#comment-15591860 ]

Daryn Sharp commented on HADOOP-13709:
--------------------------------------

This approach will cause a memory leak and eventually lead to an unrecoverable 
OOM.  The shutdown hook threads, and the Shell instances they reference, will 
not be garbage collected until the JVM shuts down.

One way to avoid this is to have Shell instances register themselves in a 
static collection from the constructor, with a single shutdown hook that 
destroys every Shell instance in the collection.  The remaining issue is 
removing completed Shell instances so they don't cause a similar leak.  Since 
it's probably not safe to assume a Shell instance will always remove itself, 
backing the collection with a WeakHashMap is probably the safest approach.
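
A rough sketch of that idea, assuming an illustrative CHILD_SHELLS registry 
and destroyProcess() method (names are placeholders, not the actual Shell.java 
members):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

public class Shell {
  // Weakly-keyed registry of live Shell instances: an otherwise unreferenced
  // Shell can still be garbage collected even if it never removes itself.
  private static final Map<Shell, Object> CHILD_SHELLS =
      Collections.synchronizedMap(new WeakHashMap<Shell, Object>());

  static {
    // One shutdown hook for all Shell instances, registered exactly once.
    Runtime.getRuntime().addShutdownHook(new Thread() {
      @Override
      public void run() {
        synchronized (CHILD_SHELLS) {
          for (Shell shell : new ArrayList<Shell>(CHILD_SHELLS.keySet())) {
            shell.destroyProcess();
          }
        }
      }
    });
  }

  public Shell() {
    // Register in the constructor; no per-instance shutdown hook is created.
    CHILD_SHELLS.put(this, null);
  }

  void destroyProcess() {
    // illustrative: terminate the spawned subprocess if it is still running
  }
}
{code}

The hook thread only references the static map, so nothing pins a completed 
Shell in memory; the weak keys take care of instances that never deregister.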

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> -----------------------------------------------------------------------------------
>
>                 Key: HADOOP-13709
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13709
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.2.0
>            Reporter: Eric Badger
>            Assignee: Eric Badger
>         Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch
>
>
> The runCommand code in Shell.java can get into a situation where it ignores 
> InterruptedExceptions and refuses to shut down because it is blocked in I/O, 
> waiting for the return value of the subprocess that was spawned. We need to 
> allow the subprocess to be interrupted and killed when the shell process 
> gets killed. Currently the JVM will shut down and all of the subprocesses 
> will be orphaned rather than killed.
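
For reference, a minimal stand-alone sketch of the failure mode described 
above (not the actual Shell.runCommand code; the command is a placeholder): a 
wait loop that swallows interrupts leaves nothing to kill the child when the 
JVM exits.

{code:java}
import java.io.IOException;

public class OrphanedChildDemo {
  public static void main(String[] args) throws IOException {
    // Spawn a long-running child process (placeholder command).
    final Process child = new ProcessBuilder("sleep", "3600").start();

    // Wait for the child while swallowing interrupts: when the JVM is asked
    // to shut down, this loop keeps blocking, nothing destroys the child,
    // and the subprocess is left orphaned once the JVM finally exits.
    int exitCode = -1;
    boolean finished = false;
    while (!finished) {
      try {
        exitCode = child.waitFor();
        finished = true;
      } catch (InterruptedException ie) {
        // interrupt ignored; child.destroy() is never called
      }
    }
    System.out.println("child exited with " + exitCode);
  }
}
{code}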


