[ https://issues.apache.org/jira/browse/HDFS-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563090#comment-14563090 ]

Kai Zheng commented on HDFS-8328:
---------------------------------

Thanks Yi for working on this. The logic in the patch looks good; just some 
comments for better readability:
1. {{minRequiredSources}} is a little confusing, because from a coder's point 
of view, given a 6+3 schema, one would expect minRequiredSources to be 6 (i.e. 
dataBlkNum) at first glance. Better to rename it or add a comment for it; see 
the sketch after the snippet.
{code}
+      final int cellsNum = (int)((blockGroup.getNumBytes() - 1) / cellSize + 1);
+      minRequiredSources = Math.min(cellsNum, dataBlkNum);
{code}
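For example, something like this (the comments and any renaming are only a 
suggestion):
{code}
      // Number of cells actually present in this block group; a short group
      // can span fewer cells than dataBlkNum.
      final int cellsNum = (int)((blockGroup.getNumBytes() - 1) / cellSize + 1);
      // Number of sources needed for decoding: one per existing data cell,
      // capped at dataBlkNum, so it can be less than dataBlkNum.
      minRequiredSources = Math.min(cellsNum, dataBlkNum);
{code}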
2. How about renaming {{nullInputBuffers}} to nullCellBuffers, zeroCellBuffers, 
or paddingCellBuffers? As you know, in the {{inputs}} array for the decode 
call, null entries indicate an erasure or a block not to read, so the current 
name invites misunderstanding. Better to add a comment for it as well; a 
sketch follows the snippet.
{code}
+      if (minRequiredSources < dataBlkNum) {
+        nullInputBuffers = 
+            new ByteBuffer[dataBlkNum - minRequiredSources];
+        nullInputIndices = new short[dataBlkNum - minRequiredSources];
+      }
{code}
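A rough sketch of what I mean (the paddingCell* names are only illustrative):
{code}
      // Buffers for data cells that do not exist in a short block group.
      // These are zero-filled padding cells, not erasures, so they should
      // not be confused with the null entries in the decode inputs array.
      if (minRequiredSources < dataBlkNum) {
        paddingCellBuffers = new ByteBuffer[dataBlkNum - minRequiredSources];
        paddingCellIndices = new short[dataBlkNum - minRequiredSources];
      }
{code}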
3. Any better name for {{success}}? nsuccess => numSuccess or similar; see the 
sketch below.
{code}
+        int[] success = new int[minRequiredSources];
         int nsuccess = 0;
{code}
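E.g. (naming only):
{code}
        // Indices of the source readers we have read from successfully.
        int[] successList = new int[minRequiredSources];
        int numSuccess = 0;
{code}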
4. I guess the following utilities can be moved elsewhere and shared with the 
client side. {{targetsStatus}} could also have a better name. A sketch of the 
shared helper follows the snippet.
{code}
+    private int[] getErasedIndices(boolean[] targetsStatus) {
+      int[] result = new int[targets.length];
+      int m = 0;
+      for (int i = 0; i < targets.length; i++) {
+        if (targetsStatus[i]) {
+          result[m++] = convertIndex4Decode(targetIndices[i]);
+        }
+      }
+      return Arrays.copyOf(result, m);
+    }
+
+    private int convertIndex4Decode(int index) {
+      return index < dataBlkNum ? index + parityBlkNum : index - dataBlkNum;
+    }
+
{code}
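If these move into a shared utility class (e.g. StripedBlockUtil, just as a 
suggestion), a short Javadoc with a worked example would also help. Something 
like:
{code}
  /**
   * Convert a logical index in the block group into an index in the
   * inputs/outputs arrays of the decode call, which order parity blocks
   * first. E.g. with 6+3: data block 0 maps to 3, parity block 6 maps to 0.
   */
  public static int convertIndex4Decode(int index, int dataBlkNum,
      int parityBlkNum) {
    return index < dataBlkNum ? index + parityBlkNum : index - dataBlkNum;
  }
{code}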
5. I'm wondering if the following code can be better organized, e.g. split 
into two functions: newStripedReader and newBlockReader. A possible split is 
sketched after the snippet.
{code}
+    private StripedReader addStripedReader(int i, long offset) {
+      StripedReader reader = new StripedReader(liveIndices[i]);
+      stripedReaders.add(reader);
+
+      BlockReader blockReader = newBlockReader(
+          getBlock(blockGroup, liveIndices[i]), offset, sources[i]);
+      if (blockReader != null) {
+        initChecksumAndBufferSizeIfNeeded(blockReader);
+        reader.blockReader = blockReader;
+      }
+      reader.buffer = ByteBuffer.allocate(bufferSize);
+      return reader;
+    }
+
{code}
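A possible split, just as a sketch (keeping the buffer allocation after 
{{initChecksumAndBufferSizeIfNeeded}}, since that call may set bufferSize):
{code}
    private StripedReader newStripedReader(int i) {
      StripedReader reader = new StripedReader(liveIndices[i]);
      stripedReaders.add(reader);
      return reader;
    }

    private StripedReader addStripedReader(int i, long offset) {
      StripedReader reader = newStripedReader(i);
      BlockReader blockReader = newBlockReader(
          getBlock(blockGroup, liveIndices[i]), offset, sources[i]);
      if (blockReader != null) {
        initChecksumAndBufferSizeIfNeeded(blockReader);
        reader.blockReader = blockReader;
      }
      reader.buffer = ByteBuffer.allocate(bufferSize);
      return reader;
    }
{code}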
6. Is it easy to centralize all the input/output buffer allocation in one 
function? In the future that would make it easier to enhance, given that Java 
coders prefer on-heap buffers while native coders prefer direct buffers.
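For example, a single helper could look like this (preferDirectBuffer is a 
hypothetical flag the coder implementation could expose):
{code}
    private ByteBuffer allocateBuffer(int size) {
      // One place to decide the buffer type: on-heap for pure-Java coders,
      // direct for native coders.
      return preferDirectBuffer
          ? ByteBuffer.allocateDirect(size)
          : ByteBuffer.allocate(size);
    }
{code}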

> Follow-on to update decode for DataNode striped blocks reconstruction
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8328
>                 URL: https://issues.apache.org/jira/browse/HDFS-8328
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>         Attachments: HDFS-8328-HDFS-7285.001.patch
>
>
> Currently the decode for DataNode striped block reconstruction is a 
> workaround; we need to update it after the decode fix in HADOOP-11847.


