[
https://issues.apache.org/jira/browse/HDFS-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14568756#comment-14568756
]
Jing Zhao commented on HDFS-8453:
---------------------------------
bq. So this patch takes another approach: refactor DFSInputStream with a new
refreshLocatedBlock method that is called when the located block needs to be
refreshed, instead of calling getBlockAt up front.
I think this is the correct way to fix the issue. One quick comment: you could
also enhance {{TestWriteReadStripedFile#testWritePreadWithDNFailure}} to cover
this logic. More specifically, if the file length is {{cellSize * dataBlocks}},
the test will fail without the fix.
> Erasure coding: properly handle start offset for internal blocks in a block
> group
> ---------------------------------------------------------------------------------
>
> Key: HDFS-8453
> URL: https://issues.apache.org/jira/browse/HDFS-8453
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-8453-HDFS-7285.00.patch
>
>
> {code}
> void actualGetFromOneDataNode(final DNAddrPair datanode,
>     ...
>   LocatedBlock block = getBlockAt(blockStartOffset);
>   ...
>   fetchBlockAt(block.getStartOffset());
> {code}
> The {{blockStartOffset}} here is from inner block. For parity blocks, the
> offset will overlap with the next block group, and we may end up with
> fetching wrong block. So we have to assign a meaningful start offset for
> internal blocks in a block group, especially for parity blocks.
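A minimal standalone sketch of the overlap described above (hypothetical constants, not HDFS source; it assumes the default RS(6,3) schema and an illustrative internal block size). If each internal block at index {{i}} is naively assigned start offset {{groupStart + i * blockSize}}, then the parity indices (>= dataBlocks) land past the logical end of the group, i.e. inside the next block group's offset range:

```java
public class StripedOffsetDemo {
    static final int DATA_BLOCKS = 6;                  // RS(6,3): 6 data blocks
    static final int PARITY_BLOCKS = 3;                // RS(6,3): 3 parity blocks
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // assumed internal block size

    // Logical file offset where the group starting at groupStart ends:
    // only the data blocks carry file bytes.
    static long groupEnd(long groupStart) {
        return groupStart + (long) DATA_BLOCKS * BLOCK_SIZE;
    }

    // The naive per-internal-block start offset that causes the bug.
    static long naiveOffset(long groupStart, int blkIndex) {
        return groupStart + (long) blkIndex * BLOCK_SIZE;
    }

    public static void main(String[] args) {
        long groupStart = 0;
        for (int i = 0; i < DATA_BLOCKS + PARITY_BLOCKS; i++) {
            long off = naiveOffset(groupStart, i);
            // Parity indices 6..8 get offsets at or beyond groupEnd, so a
            // lookup by offset would resolve to the NEXT block group.
            String note = off >= groupEnd(groupStart)
                    ? "  <-- overlaps the next block group" : "";
            System.out.println("internal block " + i + ": naive offset " + off + note);
        }
    }
}
```

Hence the need for a start offset that is meaningful within the group (or a refresh path that does not resolve internal blocks by raw file offset at all).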
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)