[
https://issues.apache.org/jira/browse/HDFS-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14571900#comment-14571900
]
Jing Zhao commented on HDFS-8319:
---------------------------------
bq. Using the same buffer type (no mixing) will also make the buffer allocation
and management easier overall. I'm wondering could we avoid the mixing?
Yes, we can avoid this by always allocating direct buffers for parity blocks as
well. But unlike the buffers used by data blocks, this (64KB * 3) buffer may
never be used if decoding turns out to be unnecessary.
More importantly, from the decoder API's point of view, there is no API-level
restriction to prevent the mixing until we hit an exception during decoding. I
would prefer the API to be clearer and friendlier. And before we have this
improvement, I think we need the direct-buffer detection change to avoid
exceptions in mixing scenarios.
bq. Did you see any problem doing that way?
Yes, the current decode function directly assigns array() to newInputs, which
means newInputs takes the complete underlying byte array instead of the slice.
This led to wrong decoding results in my test.
{code}
ByteBuffer buffer;
for (int i = 0; i < inputs.length; ++i) {
  buffer = inputs[i];
  if (buffer != null) {
    // Bug: for a sliced heap buffer, position() ignores arrayOffset(),
    // and array() exposes the whole backing array, so the decoder reads
    // the wrong region of the underlying bytes.
    inputOffsets[i] = buffer.position();
    newInputs[i] = buffer.array();
  }
}
{code}
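A small standalone demonstration of why the pattern above goes wrong: for a buffer created via slice(), position() is 0 while the slice actually starts at arrayOffset() into the backing array, so the correct offset is arrayOffset() + position().
{code}
import java.nio.ByteBuffer;

public class SliceArrayDemo {
  public static void main(String[] args) {
    // Back a 12-byte array [0..11], then slice out bytes [4, 8).
    byte[] backing = new byte[12];
    for (int i = 0; i < backing.length; i++) {
      backing[i] = (byte) i;
    }
    ByteBuffer whole = ByteBuffer.wrap(backing);
    whole.position(4).limit(8);
    ByteBuffer slice = whole.slice();

    // Buggy pattern: array() returns the *entire* backing array, and
    // position() on a fresh slice is 0, so the slice's offset is lost.
    byte[] raw = slice.array();
    int wrongOffset = slice.position();                    // 0, not 4

    // Correct pattern for heap buffers: include arrayOffset().
    int rightOffset = slice.arrayOffset() + slice.position();  // 4

    System.out.println(raw[wrongOffset]);   // 0 -- wrong first byte
    System.out.println(raw[rightOffset]);   // 4 -- correct first byte
  }
}
{code}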
> Erasure Coding: support decoding for stateful read
> --------------------------------------------------
>
> Key: HDFS-8319
> URL: https://issues.apache.org/jira/browse/HDFS-8319
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Jing Zhao
> Assignee: Jing Zhao
> Attachments: HDFS-8319.001.patch, HDFS-8319.002.patch,
> HDFS-8319.003.patch
>
>
> HDFS-7678 adds the decoding functionality for pread. This jira plans to add
> decoding to stateful read.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)