[ https://issues.apache.org/jira/browse/HDFS-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533969#comment-14533969 ]

Kai Zheng commented on HDFS-8347:
---------------------------------

Zhe, this is not about how many bytes we encode at a time, but about whether 
we should perform the decoding with the same chunk size that was used for 
encoding. Let's see what the Hitchhiker experts would say on this. Thanks.

> Erasure Coding: whether to use chunkSize as the decode buffer size for 
> Datanode striped block reconstruction.
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-8347
>                 URL: https://issues.apache.org/jira/browse/HDFS-8347
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>
> Currently the decode buffer size for Datanode striped block reconstruction is 
> configurable and can be smaller or larger than the chunk size. This may cause 
> issues for Hitchhiker, which may require encoding and decoding to use the same 
> buffer size.
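For illustration only (not code from the JIRA or from HDFS): a minimal Java sketch of the kind of guard the description implies, assuming a hypothetical helper chooseDecodeBufferSize that falls back to the encode-time chunk size when the configured reconstruction buffer size differs from it. The sizes in main() are example values, not HDFS defaults.

    // Hypothetical sketch, not the actual HDFS reconstruction code.
    public final class ReconstructBufferSizeCheck {
      private ReconstructBufferSizeCheck() {}

      /**
       * Returns a decode buffer size that matches the encode-time chunk size.
       * If the configured value differs, fall back to chunkSize and warn,
       * since a scheme like Hitchhiker may require encode and decode to use
       * the same buffer size so that cell boundaries line up.
       */
      static int chooseDecodeBufferSize(int configuredBufferSize, int chunkSize) {
        if (configuredBufferSize == chunkSize) {
          return configuredBufferSize;
        }
        // A buffer size different from the encode chunk size would split the
        // stripe into cells differently than the encoder did.
        System.err.printf(
            "Configured decode buffer size %d != encode chunk size %d; using %d%n",
            configuredBufferSize, chunkSize, chunkSize);
        return chunkSize;
      }

      public static void main(String[] args) {
        int chunkSize = 64 * 1024;   // encode-time cell size (example value)
        int configured = 128 * 1024; // configured reconstruction buffer (example)
        int decodeBuf = chooseDecodeBufferSize(configured, chunkSize);
        System.out.println("Decode buffer size used: " + decodeBuf);
      }
    }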


