[ https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611101#comment-16611101 ]

Ajay Kumar commented on HDDS-419:
---------------------------------

[~msingh] thanks for posting the fix.
Correct me if I am wrong, but it seems there is a subtle bug in the while loop.
{code}
while (len > 0) {
  int available = prepareRead(len);
  if (available == EOF) {
    return EOF;
  }
  buffers.get(bufferIndex).get(b, off + total, available);
  len -= available;
  total += available;
}
{code}
If prepareRead returns EOF on a later iteration, we return EOF instead of the total bytes already read in the earlier iterations.
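A minimal, self-contained sketch of the suggested behavior (not the actual ChunkInputStream; prepareRead is stubbed over a list of ByteBuffers, and EOF is assumed to be -1): when EOF is hit after some bytes were already copied, return the running total, and only signal EOF on a read that copied nothing.

{code}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

public class BulkReadSketch {
  static final int EOF = -1;

  private final List<ByteBuffer> buffers;
  private int bufferIndex = 0;

  BulkReadSketch(List<ByteBuffer> buffers) {
    this.buffers = buffers;
  }

  // Stand-in for ChunkInputStream#prepareRead: advance to the next buffer
  // with remaining data and return how many bytes can be read from it.
  private int prepareRead(int len) {
    while (bufferIndex < buffers.size()
        && !buffers.get(bufferIndex).hasRemaining()) {
      bufferIndex++;
    }
    if (bufferIndex >= buffers.size()) {
      return EOF;
    }
    return Math.min(len, buffers.get(bufferIndex).remaining());
  }

  public int read(byte[] b, int off, int len) {
    int total = 0;
    while (len > 0) {
      int available = prepareRead(len);
      if (available == EOF) {
        // Report the bytes read so far; only signal EOF when no
        // bytes were copied in this call at all.
        return total > 0 ? total : EOF;
      }
      buffers.get(bufferIndex).get(b, off + total, available);
      len -= available;
      total += available;
    }
    return total;
  }

  public static void main(String[] args) {
    BulkReadSketch in = new BulkReadSketch(
        Arrays.asList(ByteBuffer.wrap(new byte[]{1, 2, 3})));
    byte[] dest = new byte[5];
    // Ask for more bytes than exist: 3 bytes are copied, then EOF.
    System.out.println(in.read(dest, 0, 5)); // 3, not EOF
    System.out.println(in.read(dest, 0, 1)); // -1 (EOF)
  }
}
{code}

This matches the java.io.InputStream#read(byte[], int, int) contract, which returns the number of bytes read and reserves -1 for the case where no byte was read because the stream ended.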

> ChunkInputStream bulk read api does not read from all the chunks
> ----------------------------------------------------------------
>
>                 Key: HDDS-419
>                 URL: https://issues.apache.org/jira/browse/HDDS-419
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>    Affects Versions: 0.2.1
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Blocker
>             Fix For: 0.2.1
>
>         Attachments: HDDS-419.001.patch
>
>
> After enabling bulk reads with HDDS-408, testDataValidate started failing 
> because the bulk read api does not read all the chunks from the block.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
