[
https://issues.apache.org/jira/browse/HADOOP-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15101407#comment-15101407
]
Walter Su commented on HADOOP-12041:
------------------------------------
The new coder passes the tests and is stable locally.
bq. What did you mean by "still possible GF256 be inited twice"?
{code}
156   public static void init() {
157     if (inited) {
158       return;
159     }
160
161     synchronized (GF256.class) {
162       theGfMulTab = new byte[256][256];
163       for (int i = 0; i < 256; i++) {
164         for (int j = 0; j < 256; j++) {
165           theGfMulTab[i][j] = gfMul((byte) i, (byte) j);
166         }
167       }
168       inited = true;
169     }
170   }
{code}
{{inited}} is initially {{false}}, so two threads may reach line 157 at the same
time and then both proceed to line 161, building the table twice.
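For illustration, a minimal sketch of one way to close the race, assuming the {{inited}} flag, {{theGfMulTab}} table and {{gfMul}} helper shown above; the key change is re-checking the flag inside the synchronized block (double-checked locking with a volatile flag). This is a hypothetical sketch, not the actual patch:
{code}
// Hypothetical sketch: re-check the flag inside the synchronized block so
// the multiplication table is built exactly once, even under contention.
private static volatile boolean inited = false;   // volatile for safe publication
private static byte[][] theGfMulTab;

public static void init() {
  if (inited) {
    return;                       // fast path once the table is published
  }
  synchronized (GF256.class) {
    if (inited) {
      return;                     // another thread finished while we waited
    }
    byte[][] tab = new byte[256][256];
    for (int i = 0; i < 256; i++) {
      for (int j = 0; j < 256; j++) {
        tab[i][j] = gfMul((byte) i, (byte) j);
      }
    }
    theGfMulTab = tab;
    inited = true;                // publish only after the table is complete
  }
}
{code}
A static initializer block would achieve the same effect without the flag, if lazy initialization is not required.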
bq. The old HDFS-RAID originated coder will still be there for comparing, and
converting old data from HDFS-RAID systems.
HDFS-RAID is no longer in the latest release, so an HDFS-RAID system would be an
old cluster and we would use DistCp or similar tools to migrate its data. I guess
it's OK to remove the old coder?
> Implement another Reed-Solomon coder in pure Java
> -------------------------------------------------
>
> Key: HADOOP-12041
> URL: https://issues.apache.org/jira/browse/HADOOP-12041
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Kai Zheng
> Assignee: Kai Zheng
> Attachments: HADOOP-12041-v1.patch, HADOOP-12041-v2.patch,
> HADOOP-12041-v3.patch, HADOOP-12041-v4.patch, HADOOP-12041-v5.patch
>
>
> The existing Java RS coders based on the {{GaloisField}} implementation
> have some drawbacks and limitations:
> * The decoder unnecessarily computes units that are not actually erased (HADOOP-11871);
> * The decoder requires parity units + data units order for the inputs in the
> decode API (HADOOP-12040);
> * Need to support or align with native erasure coders regarding concrete
> coding algorithms and matrix, so Java coders and native coders can be easily
> swapped in/out and transparent to HDFS (HADOOP-12010);
> * It's unnecessarily flexible, which incurs some overhead; since HDFS erasure
> coding is entirely a byte-based data system, we don't need to consider any
> symbol size other than 256.
> This calls for implementing another RS coder in pure Java, in addition to the
> existing {{GaloisField}}-based one from HDFS-RAID. The new Java RS coder will
> be favored and used by default to resolve the related issues. The old
> HDFS-RAID-originated coder will still be there for comparison and for
> converting old data from HDFS-RAID systems.