[ https://issues.apache.org/jira/browse/YARN-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Lowe reassigned YARN-949:
-------------------------------

    Assignee: Kihwal Lee

The change is OK in the sense that it makes log aggregation more robust to errors 
during log uploading, but we're masking an NPE here, which seems bad.

I think we should also fix the cause of the NPE itself, which is in 
LogValue.write.  This code in 0.23 should have a null check for {{in}} before 
trying to call close:

{code}
// Write the log itself
FileInputStream in = null;
try {
  in = new FileInputStream(logFile);
  byte[] buf = new byte[65535];
  int len = 0;
  while ((len = in.read(buf)) != -1) {
    out.write(buf, 0, len);
  }
} finally {
  in.close();
}
{code}
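
For illustration, a minimal sketch of the kind of null check being suggested; the actual change in YARN-578 may be structured differently (e.g. using Hadoop's {{IOUtils.closeStream}} helper instead of a manual check):

{code}
// Illustrative sketch only, not the committed patch.
FileInputStream in = null;
try {
  in = new FileInputStream(logFile);
  byte[] buf = new byte[65535];
  int len = 0;
  while ((len = in.read(buf)) != -1) {
    out.write(buf, 0, len);
  }
} finally {
  // If the FileInputStream constructor threw, in is still null.
  // Guard the close so the original IOException isn't masked by an NPE.
  if (in != null) {
    in.close();
  }
}
{code}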

Looks like this was already fixed in trunk and branch-2 by YARN-578.
                
> Failed log aggregation can leave a file open.
> ---------------------------------------------
>
>                 Key: YARN-949
>                 URL: https://issues.apache.org/jira/browse/YARN-949
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.1.0-beta, 0.23.9
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>         Attachments: YARN-949.patch
>
>
> If log aggregation fails on a node manager, the output file in hdfs can be 
> left open.
