[
https://issues.apache.org/jira/browse/HDFS-8481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564260#comment-14564260
]
Zhe Zhang commented on HDFS-8481:
---------------------------------
Thanks Kai and Walter for the comments.
The new patch moves the decoder to the {{DFSStripedInputStream}} level.
bq. Assume we have a 768MB file (128MB * 6) which exactly contains 1 block
group. We lose one block, so we have to decode until 768MB of data has been read.
This is a good point. However, addressing it requires nontrivial logic to
call {{decode()}} multiple times. I suggest we do this optimization as a
follow-on under HDFS-8031. Per Walter's suggestion above, we can also think of
a better way to abstract {{decodeAndFillBuffer}} in that follow-on JIRA (it
will be easier once both the client and DN code have stabilized).
Let me know if the new patch looks good to you with respect to removing the
decoding workaround.
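To illustrate the point above: instead of decoding the whole block group at once,
the client could decode stripe by stripe, touching only the stripes a read
actually covers. The sketch below is hypothetical (it is not the patch code);
the class name, helper name, and the 64KB cell size are assumptions for
illustration only.

```java
// Hypothetical sketch, NOT the actual DFSStripedInputStream logic:
// count how many per-stripe decode() calls a read would need, so that a
// small read never forces reconstruction of an entire 768MB block group.
public class StripedDecodeSketch {
    static final int DATA_BLOCKS = 6;          // RS(6,3): 6 data units
    static final long CELL_SIZE = 64 * 1024;   // assumed cell size

    /**
     * Number of per-stripe decode() invocations needed to serve a read of
     * readLen bytes starting at offset within one block group.
     */
    static long decodeCallsFor(long offset, long readLen) {
        long stripeSize = CELL_SIZE * DATA_BLOCKS;   // one full stripe of data
        long firstStripe = offset / stripeSize;
        long lastStripe = (offset + readLen - 1) / stripeSize;
        return lastStripe - firstStripe + 1;
    }
}
```

Under this scheme, reading the first few bytes of a degraded block group costs
one decode() call rather than a whole-group reconstruction, which is the
optimization deferred to HDFS-8031.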
> Erasure coding: remove workarounds in client side stripped blocks recovering
> ----------------------------------------------------------------------------
>
> Key: HDFS-8481
> URL: https://issues.apache.org/jira/browse/HDFS-8481
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-8481-HDFS-7285.00.patch,
> HDFS-8481-HDFS-7285.01.patch, HDFS-8481-HDFS-7285.02.patch
>
>
> After HADOOP-11847 and related fixes, we should be able to properly calculate
> decoded contents.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)