[ https://issues.apache.org/jira/browse/HDFS-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15072472#comment-15072472 ]

Walter Su commented on HDFS-9342:
---------------------------------

{code}
    // Check every internal block: if any healthy streamer has not yet acked
    // all bytes it should hold for the current block group length, fall back
    // to the length made of full stripes only.
    for (int i = 0; i < numAllBlocks; i++) {
      final StripedDataStreamer streamer = getStripedDataStreamer(i);
      if (streamer.isHealthy()) {
        long expected = StripedBlockUtil.getInternalBlockLength(
            currentBlockGroup.getNumBytes(), cellSize, numDataBlocks, i);
        if (expected == 0) {
          // This internal block holds no data at the current length; skip it.
          continue;
        }
        long acked = streamer.getBlock().getNumBytes();
        if (expected != acked) {
          // Some cell of the last partial stripe is not acked yet.
          return ackedBGLength;
        }
      }
    }
    // Everything written so far has been acknowledged.
    return currentBlockGroup.getNumBytes();
{code}

Calculating the acked size of the last partial stripe, which is still being 
written, is complicated, so I do it stripe by stripe: if any cell of the last 
partial stripe is not yet acked, I discard that stripe and return 
ackedBGLength, which counts only full stripes.
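
To make the stripe-by-stripe idea concrete, here is a minimal standalone sketch. It is not the patch code: the class and method names and the {{ackedPerBlock}} array are made up, parity blocks are ignored, and streamer health is not modeled.

{code}
class AckedLengthSketch {

  // Expected length of data block i when the block group holds bgLength bytes,
  // striped round-robin in cells of cellSize bytes across numDataBlocks blocks.
  static long internalBlockLength(long bgLength, int cellSize,
                                  int numDataBlocks, int i) {
    long stripeSize = (long) cellSize * numDataBlocks;
    long fullStripes = bgLength / stripeSize;
    long remainder = bgLength % stripeSize;
    long lastCell = Math.min(Math.max(remainder - (long) i * cellSize, 0L),
        (long) cellSize);
    return fullStripes * cellSize + lastCell;
  }

  // Acked block-group length: if every cell of the last partial stripe is
  // acked, the whole written length counts; otherwise fall back to the length
  // made of full stripes only (the "ackedBGLength" in the snippet above).
  static long ackedBlockGroupLength(long writtenBGLength, long[] ackedPerBlock,
                                    int cellSize, int numDataBlocks) {
    long stripeSize = (long) cellSize * numDataBlocks;
    long fullStripesLength = writtenBGLength / stripeSize * stripeSize;
    for (int i = 0; i < numDataBlocks; i++) {
      long expected =
          internalBlockLength(writtenBGLength, cellSize, numDataBlocks, i);
      if (expected != ackedPerBlock[i]) {
        return fullStripesLength;  // a cell of the partial stripe is not acked
      }
    }
    return writtenBGLength;        // everything written has been acked
  }

  public static void main(String[] args) {
    // Example: 3 data blocks, 64KB cells, 200KB written. Block 0 has not yet
    // acked the trailing 8KB cell of the partial stripe.
    int cellSize = 64 * 1024, numDataBlocks = 3;
    long written = 200 * 1024;
    long[] acked = {64 * 1024, 64 * 1024, 64 * 1024};
    System.out.println(
        ackedBlockGroupLength(written, acked, cellSize, numDataBlocks));
    // Prints 196608 (192KB): only the single full stripe is counted.
  }
}
{code}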

Sorry for the confusion, I'll remove it.

> Erasure coding: client should update and commit block based on acknowledged 
> size
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-9342
>                 URL: https://issues.apache.org/jira/browse/HDFS-9342
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Zhe Zhang
>            Assignee: Walter Su
>         Attachments: HDFS-9342.01.patch, HDFS-9342.02.patch
>
>
> For non-EC files, we have:
> {code}
> protected ExtendedBlock block; // its length is number of bytes acked
> {code}
> For EC files, the size of {{DFSStripedOutputStream#currentBlockGroup}} is 
> incremented in {{writeChunk}} without waiting for acks, and both 
> {{updatePipeline}} and {{commitBlock}} are based on the size of 
> {{currentBlockGroup}}.
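
The gap described in the issue can be pictured with a toy model (not HDFS code; the class and method names below are hypothetical): the written length grows in {{writeChunk}} immediately, while the acked length only grows when the pipeline acknowledges, so committing the written figure can over-report the block length.

{code}
class ToyStripedStream {
  private long writtenBytes; // grows in writeChunk, ahead of the pipeline
  private long ackedBytes;   // grows only when datanodes acknowledge

  void writeChunk(byte[] chunk) { writtenBytes += chunk.length; }

  void onAck(long ackedUpTo) { ackedBytes = Math.max(ackedBytes, ackedUpTo); }

  // The length reported to the NameNode should be the acked figure.
  long lengthToCommit() { return ackedBytes; }
}
{code}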



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
