[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976024#comment-14976024
 ] 

Yi Liu commented on HADOOP-12040:
---------------------------------

Generally looks good, Kai.

1. You need to clean up the checkstyle issues. For example, some lines are 
longer than 80 characters. 
2. Some related tests fail, such as TestRecoverStripedFile.
3. 
{code}
+    for (int i = 0; i < erasedIndexes.length; i++) {
+      if (erasedIndexes[i] >= getNumDataUnits()) {
+        erasedIndexes2[idx++] = erasedIndexes[i] - getNumDataUnits();
+        numErasedParityUnits++;
+      }
+    }
+    for (int i = 0; i < erasedIndexes.length; i++) {
+      if (erasedIndexes[i] < getNumDataUnits()) {
+        erasedIndexes2[idx++] = erasedIndexes[i] + getNumParityUnits();
+        numErasedDataUnits++;
+      }
+    }
{code}
This can be done in a single {{for}} loop.
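One way to merge the two passes is to write parity erasures to the front of the adjusted array directly and buffer data erasures until the loop finishes, which preserves the parity-first ordering the patch produces. A sketch as a standalone method (the method name, explicit {{numDataUnits}}/{{numParityUnits}} parameters, and the temporary buffer are illustrative, not part of the patch; the loop cursors {{idx}} and {{dIdx}} play the role of {{numErasedParityUnits}} and {{numErasedDataUnits}}):

```java
import java.util.Arrays;

public class AdjustIndexes {
  static int[] adjustErasedIndexes(int[] erasedIndexes,
                                   int numDataUnits, int numParityUnits) {
    int[] erasedIndexes2 = new int[erasedIndexes.length];
    // Buffer for data erasures so their relative order is preserved while
    // they are deferred until after all parity erasures.
    int[] dataErasures = new int[erasedIndexes.length];
    int idx = 0;   // write cursor for parity erasures (== numErasedParityUnits)
    int dIdx = 0;  // write cursor for data erasures (== numErasedDataUnits)
    for (int i = 0; i < erasedIndexes.length; i++) {
      if (erasedIndexes[i] >= numDataUnits) {
        // Parity erasure: goes to the front of the adjusted array.
        erasedIndexes2[idx++] = erasedIndexes[i] - numDataUnits;
      } else {
        // Data erasure: buffered so it follows every parity erasure.
        dataErasures[dIdx++] = erasedIndexes[i] + numParityUnits;
      }
    }
    System.arraycopy(dataErasures, 0, erasedIndexes2, idx, dIdx);
    return erasedIndexes2;
  }

  public static void main(String[] args) {
    // 6 data units, 3 parity units; erased: data index 1, parity indexes 6, 7.
    System.out.println(Arrays.toString(
        adjustErasedIndexes(new int[]{1, 6, 7}, 6, 3))); // [0, 1, 4]
  }
}
```

The single loop costs one extra array allocation plus an {{System.arraycopy}}, in exchange for scanning {{erasedIndexes}} only once.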



> Adjust inputs order for the decode API in raw erasure coder
> -----------------------------------------------------------
>
>                 Key: HADOOP-12040
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12040
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in the raw erasure 
> coder, which was inherited from HDFS-RAID due to constraints imposed by 
> {{GaliosField}}. As [~zhz] pointed out and [~hitliuyi] felt, we'd better 
> change the order to make it natural for HDFS usage, where data blocks usually 
> come before parity blocks in a group. Doing this would avoid some tricky 
> reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
