[
https://issues.apache.org/jira/browse/HADOOP-11938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534949#comment-14534949
]
Hadoop QA commented on HADOOP-11938:
------------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch | 14m 55s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author | 0m 0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac | 7m 31s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc | 9m 45s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit | 8m 3s | The applied patch generated 394 release audit warnings. |
| {color:red}-1{color} | checkstyle | 0m 40s | The applied patch generated 1827 new checkstyle issues (total was 0, now 1823). |
| {color:red}-1{color} | whitespace | 0m 7s | The patch has 42 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install | 1m 36s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse | 0m 32s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs | 3m 13s | The patch appears to introduce 8 new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native | 3m 15s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 175m 5s | Tests failed in hadoop-hdfs. |
| | | 224m 46s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time. Unsynchronized access at DFSOutputStream.java:[line 146] |
| | Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long). Dereferenced at BlockInfoStripedUnderConstruction.java:[line 206] |
| | Unread field; should this field be static? At ErasureCodingWorker.java:[line 251] |
| | Should org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader be a _static_ inner class? At ErasureCodingWorker.java:[lines 914-916] |
| | Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes() At ErasureCodingZoneManager.java:[line 117] |
| | Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[]) At ErasureCodingZoneManager.java:[line 81] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int) At StripedBlockUtil.java:[line 84] |
| | Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, int, int) At StripedBlockUtil.java:[line 204] |
| Failed unit tests | hadoop.hdfs.TestDFSPermission |
| | hadoop.fs.TestSymlinkHdfsFileContext |
| | hadoop.hdfs.TestDistributedFileSystem |
| | hadoop.hdfs.server.namenode.TestAuditLogs |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.fs.TestSymlinkHdfsFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12731439/HADOOP-11938-HDFS-7285-workaround.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 12e267b |
| Release Audit | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/artifact/patchprocess/patchReleaseAuditProblems.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/6541/console |
This message was automatically generated.
> Fix ByteBuffer version encode/decode API of raw erasure coder
> -------------------------------------------------------------
>
> Key: HADOOP-11938
> URL: https://issues.apache.org/jira/browse/HADOOP-11938
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: io
> Reporter: Kai Zheng
> Assignee: Kai Zheng
> Attachments: HADOOP-11938-HDFS-7285-workaround.patch
>
>
> While investigating a test failure in {{TestRecoverStripedFile}}, an issue
> was found in the raw erasure coder, caused by an optimization in the code
> below. It assumes that the heap buffer backing the byte array available
> for reading or writing always starts at offset zero and occupies the whole
> array.
> {code}
> protected static byte[][] toArrays(ByteBuffer[] buffers) {
>   byte[][] bytesArr = new byte[buffers.length][];
>   ByteBuffer buffer;
>   for (int i = 0; i < buffers.length; i++) {
>     buffer = buffers[i];
>     if (buffer == null) {
>       bytesArr[i] = null;
>       continue;
>     }
>     if (buffer.hasArray()) {
>       bytesArr[i] = buffer.array();
>     } else {
>       throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
>           "expecting heap buffer");
>     }
>   }
>   return bytesArr;
> }
> {code}
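The optimization silently ignores {{arrayOffset()}} and {{position()}}: for a heap buffer created by {{slice()}} or {{duplicate()}}, or one whose position is non-zero, {{array()}} exposes the entire backing array, so the coder reads or writes the wrong region. Below is a minimal sketch of an offset-aware conversion, for illustration only; the guard conditions and the copy fallback are assumptions here, not the committed fix, and the method shape simply mirrors the snippet above.
{code}
// Sketch: convert ByteBuffers to byte[][] without assuming each heap
// buffer starts at offset 0 and spans its whole backing array.
protected static byte[][] toArrays(ByteBuffer[] buffers) {
  byte[][] bytesArr = new byte[buffers.length][];
  for (int i = 0; i < buffers.length; i++) {
    ByteBuffer buffer = buffers[i];
    if (buffer == null) {
      bytesArr[i] = null;
      continue;
    }
    if (!buffer.hasArray()) {
      throw new IllegalArgumentException("Invalid ByteBuffer passed, " +
          "expecting heap buffer");
    }
    if (buffer.arrayOffset() == 0 && buffer.position() == 0
        && buffer.remaining() == buffer.array().length) {
      // Fast path: the readable window really is the whole backing array,
      // so the original zero-copy optimization is safe.
      bytesArr[i] = buffer.array();
    } else {
      // Fallback: copy only the readable window [position, limit).
      // duplicate() leaves the original buffer's position untouched.
      byte[] bytes = new byte[buffer.remaining()];
      buffer.duplicate().get(bytes);
      bytesArr[i] = bytes;
    }
  }
  return bytesArr;
}
{code}
Note that the copy fallback is only correct for input buffers; for output buffers the coder's results would have to be copied back afterwards, which is presumably why this issue fixes the ByteBuffer version of the encode/decode API itself rather than patching the array conversion.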
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)