[ https://issues.apache.org/jira/browse/HDFS-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547935#comment-14547935 ]

Hadoop QA commented on HDFS-8418:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  4s | Pre-patch HDFS-7285 compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear to include any new or modified tests.  Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| {color:green}+1{color} | javac |   7m 45s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |   9m 54s | There were no new javadoc warning messages. |
| {color:red}-1{color} | release audit |   0m 14s | The applied patch generated 1 release audit warnings. |
| {color:red}-1{color} | checkstyle |   2m 21s | The applied patch generated 1 new checkstyle issues (total was 325, now 324). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 38s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   3m 14s | The patch appears to introduce 9 new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 187m  8s | Tests failed in hadoop-hdfs. |
| | | 231m 16s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs |
|  |  Inconsistent synchronization of org.apache.hadoop.hdfs.DFSOutputStream.streamer; locked 89% of time.  Unsynchronized access at DFSOutputStream.java:[line 146] |
|  |  Possible null pointer dereference of arr$ in org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoStripedUnderConstruction.initializeBlockRecovery(long).  Dereferenced at BlockInfoStripedUnderConstruction.java:[line 194] |
|  |  Unread field: should this field be static?  At ErasureCodingWorker.java:[line 252] |
|  |  Should org.apache.hadoop.hdfs.server.datanode.erasurecode.ErasureCodingWorker$StripedReader be a _static_ inner class?  At ErasureCodingWorker.java:[lines 913-915] |
|  |  Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.createErasureCodingZone(String, ECSchema): String.getBytes()  At ErasureCodingZoneManager.java:[line 117] |
|  |  Found reliance on default encoding in org.apache.hadoop.hdfs.server.namenode.ErasureCodingZoneManager.getECZoneInfo(INodesInPath): new String(byte[])  At ErasureCodingZoneManager.java:[line 81] |
|  |  Dead store to dataBlkNum in org.apache.hadoop.hdfs.util.StripedBlockUtil.calcualteChunkPositionsInBuf(ECSchema, LocatedStripedBlock, byte[], int, int, int, int, long, int, StripedBlockUtil$AlignedStripe[])  At StripedBlockUtil.java:[line 467] |
|  |  Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.constructInternalBlock(LocatedStripedBlock, int, int, int, int)  At StripedBlockUtil.java:[line 86] (see the generic remediation sketch after this table) |
|  |  Result of integer multiplication cast to long in org.apache.hadoop.hdfs.util.StripedBlockUtil.planReadPortions(int, int, long, int, int)  At StripedBlockUtil.java:[line 206] |
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.datanode.TestIncrementalBlockReports |
|   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.hdfs.server.blockmanagement.TestBlockInfo |
| Timed out tests | org.apache.hadoop.hdfs.TestDatanodeDeath |
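The two "Result of integer multiplication cast to long" findings above flag products computed in 32-bit {{int}} arithmetic whose result is only widened to {{long}} afterwards, so it can overflow first. Below is a generic, hypothetical illustration of that warning class and its usual remediation; it is not code from the patch or from StripedBlockUtil.

{code}
// Hypothetical example of the FindBugs "integer multiplication cast to long"
// pattern; all names here are made up for illustration only.
class OffsetMathSketch {
  // Flagged form: the multiplication runs entirely in int and may overflow
  // before the result is implicitly widened to long by the return type.
  static long offsetOverflowProne(int blockIndex, int cellSize, int stripes) {
    return blockIndex * cellSize * stripes;
  }

  // Usual fix: widen one operand first so the whole product is computed in long.
  static long offsetSafe(int blockIndex, int cellSize, int stripes) {
    return (long) blockIndex * cellSize * stripes;
  }
}
{code}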
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12733482/HDFS-8418-HDFS-7285.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / f9be529 |
| Release Audit | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/artifact/patchprocess/patchReleaseAuditProblems.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/artifact/patchprocess/diffcheckstylehadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/artifact/patchprocess/whitespace.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11028/console |


This message was automatically generated.

> Fix the isNeededReplication calculation for Striped block in NN
> ---------------------------------------------------------------
>
>                 Key: HDFS-8418
>                 URL: https://issues.apache.org/jira/browse/HDFS-8418
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>            Priority: Critical
>         Attachments: HDFS-8418-HDFS-7285.001.patch
>
>
> Currently, when calculating {{isNeededReplication}} for a striped block, we use
> BlockCollection#getPreferredBlockReplication to get the expected replica number
> for the striped block. See an example:
> {code}
> public void checkReplication(BlockCollection bc) {
>     final short expected = bc.getPreferredBlockReplication();
>     for (BlockInfo block : bc.getBlocks()) {
>       final NumberReplicas n = countNodes(block);
>       if (isNeededReplication(block, expected, n.liveReplicas())) { 
>         neededReplications.add(block, n.liveReplicas(),
>             n.decommissionedAndDecommissioning(), expected);
>       } else if (n.liveReplicas() > expected) {
>         processOverReplicatedBlock(block, expected, null, null);
>       }
>     }
>   }
> {code}
> But this is not correct. For example, if the length of a striped file is less
> than a cell, the expected replica count of the block should be
> {{1 + parityBlkNum}} instead of {{dataBlkNum + parityBlkNum}}.
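
As the description notes, the expected storage count for a striped block group depends on how much of the stripe the group actually uses, not only on the schema. The following is a minimal sketch of that calculation, with hypothetical class, method, and parameter names; it is not the attached HDFS-8418-HDFS-7285.001.patch.

{code}
// Minimal sketch, not the attached patch: expected number of internal blocks
// (data + parity) for a striped block group, based on the bytes it actually holds.
class StripedExpectedReplicationSketch {
  // cellSize, dataBlkNum and parityBlkNum would come from the block's ECSchema;
  // the names here are hypothetical.
  static short expectedStripedStorages(long blockGroupLength, int cellSize,
                                       short dataBlkNum, short parityBlkNum) {
    // A group shorter than one full stripe touches fewer than dataBlkNum data
    // blocks; e.g. blockGroupLength < cellSize gives 1 + parityBlkNum.
    long usedDataBlocks = Math.min(dataBlkNum,
        (blockGroupLength + cellSize - 1) / cellSize);
    return (short) (usedDataBlocks + parityBlkNum);
  }
}
{code}

By contrast, {{BlockCollection#getPreferredBlockReplication}} yields {{dataBlkNum + parityBlkNum}} regardless of the block group's length, which is why {{isNeededReplication}} misjudges short striped files, per the description above.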



