[ https://issues.apache.org/jira/browse/HDFS-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408477#comment-15408477 ]

Kai Zheng commented on HDFS-8668:
---------------------------------

Thanks [~Sammi] for resuming this task. The updated patch looks good overall.

{code}
+      /**
+       * This was because we don't have appropriate efficient ByteBuffer version
+       * downstream calls, so we have to convert to heap buffer to proceed.
+       * We can consider to optimize this separately into the underlying layer.
+       */
+      heapBuffer = BUFFER_POOL.getBuffer(false, buffer.remaining());
+      buffer.get(heapBuffer.array());
+      buffer.position(0);
{code}

The situation described in the comment is obsolete and no longer true, since 
{{DataChecksum}} now provides {{calculateChunkedSums(ByteBuffer data, ByteBuffer 
checksums)}}. Could you check whether the buffer conversion is really needed? 
It would be better to avoid the buffer copy for performance. Thanks.
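To illustrate the point, here is a minimal, self-contained sketch (using JDK {{CRC32}} rather than Hadoop's {{DataChecksum}}, purely as a stand-in) of checksumming a possibly direct {{ByteBuffer}} in place, with no intermediate heap-array copy:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

public class DirectBufferChecksum {
    // Checksums the buffer's remaining bytes in place; works for both heap
    // and direct buffers, so no copy into a heap array is needed.
    static long checksum(ByteBuffer data) {
        CRC32 crc = new CRC32();
        int pos = data.position();
        crc.update(data);        // consumes remaining bytes, advancing position
        data.position(pos);      // restore position for the caller
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] bytes = "hello erasure coding".getBytes();
        ByteBuffer heap = ByteBuffer.wrap(bytes);
        ByteBuffer direct = ByteBuffer.allocateDirect(bytes.length);
        direct.put(bytes).flip();
        // Same checksum either way, without converting the direct buffer.
        System.out.println(checksum(heap) == checksum(direct)); // prints "true"
    }
}
```

If {{calculateChunkedSums(ByteBuffer data, ByteBuffer checksums)}} behaves analogously, the {{BUFFER_POOL.getBuffer(false, ...)}} copy in the patch above should be avoidable.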

> Erasure Coding: revisit buffer used for encoding and decoding.
> --------------------------------------------------------------
>
>                 Key: HDFS-8668
>                 URL: https://issues.apache.org/jira/browse/HDFS-8668
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yi Liu
>            Assignee: SammiChen
>         Attachments: HDFS-8668-v1.patch, HDFS-8668-v2.patch, 
> HDFS-8668-v3.patch
>
>
> For encoding and decoding buffers, currently some places use Java heap 
> ByteBuffer, some use direct ByteBuffer, and some use Java byte arrays. If 
> the coder implementation is native, we should use direct ByteBuffer. This 
> jira is to revisit all encoding/decoding buffers and improve them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
