[ https://issues.apache.org/jira/browse/YARN-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17163817#comment-17163817 ]

Bilwa S T commented on YARN-10320:
----------------------------------

Thanks for the patch [~tanu.ajmera].

I think you need to replace read with readFully in the code below too:
{code:java}
while ((len = in.read(buf)) != -1) {
  // If the buffer contents fit within fileLength, write them all
  if (len < bytesLeft) {
    outputStreamState.getOutputStream().write(buf, 0, len);
    bytesLeft -= len;
  } else {
    // else only write the contents within fileLength, then exit early
    outputStreamState.getOutputStream().write(buf, 0, (int) bytesLeft);
    break;
  }
}
{code}
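For context, here is a minimal, self-contained sketch (class and stream names are hypothetical, not from the patch) of why a single read() call is not enough: the contract of InputStream#read(byte[]) only guarantees that at least one byte is read, so a short read can return fewer bytes than the buffer holds, whereas DataInputStream#readFully loops internally until the buffer is completely filled.
{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadVsReadFully {

  // Hypothetical stream that returns at most 2 bytes per read() call,
  // mimicking a short read (e.g. across HDFS block boundaries).
  static class ChunkedStream extends InputStream {
    private final byte[] data;
    private int pos;

    ChunkedStream(byte[] data) {
      this.data = data;
    }

    @Override
    public int read() {
      return pos < data.length ? (data[pos++] & 0xff) : -1;
    }

    @Override
    public int read(byte[] b, int off, int len) {
      if (pos >= data.length) {
        return -1;
      }
      int n = Math.min(2, Math.min(len, data.length - pos));
      System.arraycopy(data, pos, b, off, n);
      pos += n;
      return n;
    }
  }

  public static void main(String[] args) throws IOException {
    byte[] uuid = "0123456789".getBytes();

    // A single read() may return fewer bytes than the buffer size,
    // leaving the rest of the buffer unfilled.
    byte[] b1 = new byte[uuid.length];
    int actual = new ChunkedStream(uuid).read(b1);
    System.out.println("read returned " + actual + " of " + uuid.length);

    // readFully keeps calling read(byte[], int, int) until the
    // whole buffer is filled, or throws EOFException if it cannot.
    byte[] b2 = new byte[uuid.length];
    new DataInputStream(new ChunkedStream(uuid)).readFully(b2);
    System.out.println("readFully filled all " + b2.length + " bytes");
  }
}
{code}
Note that readFully throws EOFException when the stream ends before the buffer is full, so it also turns silently truncated reads into a visible error, which is what the length checks in the log aggregation code rely on.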
 

> Replace FSDataInputStream#read with readFully in Log Aggregation
> ----------------------------------------------------------------
>
>                 Key: YARN-10320
>                 URL: https://issues.apache.org/jira/browse/YARN-10320
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: log-aggregation
>    Affects Versions: 3.3.0
>            Reporter: Prabhu Joseph
>            Assignee: Tanu Ajmera
>            Priority: Major
>         Attachments: YARN-10320-001.patch, YARN-10320-002.patch
>
>
> Have observed that the Log Aggregation code uses FSDataInputStream#read instead of
> readFully in multiple places, like below. One of the places is fixed by
> YARN-8106.
> This Jira targets fixing all the other places.
> LogAggregationIndexedFileController#loadUUIDFromLogFile
> {code}
>           byte[] b = new byte[uuid.length];
>           int actual = fsDataInputStream.read(b);
>           if (actual != uuid.length || Arrays.equals(b, uuid)) {
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
