[
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13493965#comment-13493965
]
Yu Li commented on HADOOP-8419:
-------------------------------
Test result on branch-1:
Both with and without my patch, the UT cases below failed. Not sure whether it's an
environment issue, but from the error messages it appears unrelated to compression:
========================================================
[junit] Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 2.067 sec
[junit] Running org.apache.hadoop.hdfs.TestRestartDFS
[junit] Tests run: 2, Failures: 0, Errors: 2, Time elapsed: 16.016 sec
[junit] Running org.apache.hadoop.hdfs.TestSafeMode
[junit] Tests run: 3, Failures: 0, Errors: 2, Time elapsed: 64.601 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
[junit] Tests run: 3, Failures: 0, Errors: 3, Time elapsed: 41.901 sec
[junit] Running org.apache.hadoop.hdfs.server.namenode.TestStartup
[junit] Tests run: 3, Failures: 2, Errors: 0, Time elapsed: 44.583 sec
========================================================
All cases with errors have error messages like:
=======================================================
Edit log corruption detected: corruption length = 9748 > toleration length = 0;
the corruption is intolerable.
java.io.IOException: Edit log corruption detected: corruption length = 9748 >
toleration length = 0; the corruption is intolerable.
at
org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkEndOfLog(FSEditLog.java:608)
=======================================================
The case with failures has error messages like:
=======================================================
java.io.IOException: Failed to parse edit log
(/home/biadmin/hadoop/build/test/data/dfs/chkpt/current/edits) at position 555,
edit log length is 690, opcode=0, isTolerationEnabled=false, Recent opcode offsets=[65 124 244 388]
at
org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:84)
=======================================================
> GzipCodec NPE upon reset with IBM JDK
> -------------------------------------
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
> Issue Type: Bug
> Components: io
> Affects Versions: 1.0.3
> Reporter: Luke Lu
> Assignee: Yu Li
> Labels: gzip, ibm-jdk
> Attachments: HADOOP-8419-branch-1.patch, HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is
> not loaded. When native zlib is loaded, the codec creates a
> CompressorOutputStream that doesn't have the problem; otherwise, the
> GzipCodec uses GZIPOutputStream, which is extended to provide the resetState
> method. Since IBM JDK 6 SR9 FP2, including the current JDK 6 SR10,
> GZIPOutputStream#finish releases the underlying deflater, which causes an
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK
> don't have this issue.
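For illustration, a minimal sketch of the failure mode described above. The class and
method names are illustrative only, not Hadoop's actual GzipCodec internals; it simply
mirrors the pattern of extending java.util.zip.GZIPOutputStream to expose a reset:
=======================================================
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipResetSketch {

    // Mirrors the pattern the description mentions: when native zlib is not
    // loaded, GZIPOutputStream is extended to provide a resetState method.
    static class ResettableGzipStream extends GZIPOutputStream {
        ResettableGzipStream(ByteArrayOutputStream out) throws IOException {
            super(out);
        }

        // Resets the protected Deflater 'def' inherited from DeflaterOutputStream.
        // Per the report, finish() on the affected IBM JDK builds releases the
        // deflater, so this reset can then hit a released deflater and NPE.
        void resetState() {
            def.reset();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ResettableGzipStream gz = new ResettableGzipStream(buf);
        gz.write("hello".getBytes("UTF-8"));
        gz.finish();      // on the affected IBM JDK builds this releases the deflater
        gz.resetState();  // NPE here on IBM JDK 6 SR9 FP2+; fine on Sun JDK / OpenJDK
    }
}
=======================================================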