Jason Lowe commented on HADOOP-13709:

Good catch, Daryn!  I missed the memory-model races introduced by moving the 
stdout processing to a new thread.  We could fix that with appropriate 
synchronization in the derived classes.  However, Shell is explicitly marked 
Public, so it's prudent to avoid disrupting its API and behavior too much.

It would be really nice to be able to kill the subprocess if the thread 
executing Shell is interrupted, but without moving the stdout processing to 
another thread I don't see a good way to do that.  Reading from a stream 
doesn't appear to be interruptible in practice.  If we can't make Shell 
commands reliably interruptible, then the next best thing is to make sure we 
don't leave their subprocesses lingering when this process exits, which would 
fix the problem described in YARN-5641.  Maintaining a list of live Shell 
instances that we can traverse on shutdown should fix that particular issue.
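A minimal sketch of that shutdown-hook idea, assuming a hypothetical ShellTracker helper (the class and its register/unregister/liveCount methods are illustrative only, not the actual Shell.java API):

```java
import java.util.Collections;
import java.util.Set;
import java.util.WeakHashMap;

// Hypothetical sketch: track live subprocesses in a static set and destroy
// any that are still running from a JVM shutdown hook, so children are not
// orphaned when the parent exits.
public class ShellTracker {
  // Weak keys so entries can be collected if unregister is ever missed.
  private static final Set<Process> LIVE =
      Collections.newSetFromMap(new WeakHashMap<Process, Boolean>());

  static {
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      synchronized (LIVE) {
        for (Process p : LIVE) {
          p.destroy();  // best effort: signal the child to terminate
        }
      }
    }));
  }

  public static void register(Process p) {
    synchronized (LIVE) { LIVE.add(p); }
  }

  public static void unregister(Process p) {
    synchronized (LIVE) { LIVE.remove(p); }
  }

  public static int liveCount() {
    synchronized (LIVE) { return LIVE.size(); }
  }

  public static void main(String[] args) throws Exception {
    Process p = new ProcessBuilder("sleep", "60").start();
    register(p);
    System.out.println("live=" + liveCount());
    p.destroy();
    p.waitFor();
    unregister(p);
    System.out.println("live=" + liveCount());
  }
}
```

The shutdown hook is best effort only: it runs on a normal JVM exit but not on SIGKILL, so it mitigates the orphaned-subprocess problem rather than fully solving it.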

> Clean up subprocesses spawned by Shell.java:runCommand when the shell process 
> exits
> -----------------------------------------------------------------------------------
>                 Key: HADOOP-13709
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13709
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.2.0
>            Reporter: Eric Badger
>            Assignee: Eric Badger
>         Attachments: HADOOP-13709.001.patch
> The runCommand code in Shell.java can get into a situation where it will 
> ignore InterruptedExceptions and refuse to shut down because it is blocked 
> on I/O, waiting for the return value of the subprocess that was spawned. We 
> need to allow the subprocess to be interrupted and killed when the shell 
> process gets killed. Currently the JVM will shut down and all of the 
> subprocesses will be orphaned rather than killed.

This message was sent by Atlassian JIRA
