[ https://issues.apache.org/jira/browse/HDFS-6002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911199#comment-13911199 ]

Liang Xie commented on HDFS-6002:
---------------------------------

If I understand correctly, this field is intended for the recovery scenario: 
it is passed through the sender/receiver, and is ultimately used inside 
FsDatasetImpl.recoverRbw() only for a sanity check. See:
{code}
    // check replica length
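    // minBytesRcvd and maxBytesRcvd come from the client's recovery request,
    // carried through the sender/receiver path described above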
    if (rbw.getBytesAcked() < minBytesRcvd || rbw.getNumBytes() > maxBytesRcvd){
      throw new ReplicaNotFoundException("Unmatched length replica " + 
          replicaInfo + ": BytesAcked = " + rbw.getBytesAcked() + 
          " BytesRcvd = " + rbw.getNumBytes() + " are not in the range of [" + 
          minBytesRcvd + ", " + maxBytesRcvd + "].");
    }
{code}

So the current implementation is not strict enough. We probably need a 
recovery expert to confirm the above :)
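
To illustrate the looseness (a hypothetical, standalone sketch; these are not 
the real HDFS classes): because bytesSent is never reset, the maxBytesRcvd 
reported during recovery can include bytes from earlier blocks, so the 
upper-bound check above almost never fires:
{code}
// Hypothetical sketch, not the real HDFS code: shows how a never-reset
// bytesSent loosens the recoverRbw upper-bound check.
public class StaleBytesSentSketch {
  static long bytesSent = 0; // accumulates across blocks, as in the bug

  static void send(long len) {
    bytesSent += len; // never reset when the next block starts
  }

  public static void main(String[] args) {
    send(128L * 1024 * 1024); // block 1 fully written: 128 MB
    send(1L * 1024 * 1024);   // block 2 so far: only 1 MB

    // During recovery the client reports bytesSent as maxBytesRcvd.
    long maxBytesRcvd = bytesSent;            // 129 MB instead of 1 MB
    long replicaNumBytes = 2L * 1024 * 1024;  // longer than what was sent

    // "rbw.getNumBytes() > maxBytesRcvd" should flag this replica,
    // but the stale upper bound lets it pass.
    System.out.println(replicaNumBytes > maxBytesRcvd); // prints: false
  }
}
{code}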

> DFSOutputStream.DataStreamer.bytesSent not updated correctly
> ------------------------------------------------------------
>
>                 Key: HDFS-6002
>                 URL: https://issues.apache.org/jira/browse/HDFS-6002
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>            Priority: Minor
>         Attachments: HDFS-6002.v1.patch
>
>
> DFSOutputStream.DataStreamer.bytesSent records the bytes sent in the current 
> block. A simple search of all its references shows that it is only ever 
> increased, and never reset when a new block is allocated. 
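
For reference, a minimal sketch of the kind of fix this description implies 
(hypothetical names; the attached HDFS-6002.v1.patch is authoritative): reset 
the per-block counter whenever a new block is allocated:
{code}
// Hypothetical sketch of the streamer's block-allocation path.
class DataStreamerSketch {
  private long bytesSent; // bytes sent in the *current* block

  void allocateNewBlock() {
    bytesSent = 0; // reset so recovery reports a per-block value,
                   // not a total accumulated across all prior blocks
    // ... locate block, set up the write pipeline ...
  }

  void sendPacket(int packetLen) {
    bytesSent += packetLen; // grows only within the current block
  }
}
{code}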


