[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-13 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16543450#comment-16543450
 ] 

Shweta commented on HDFS-13663:
---

Thanks Xiao for the commit to trunk. 

> Should throw exception when incorrect block size is set
> ---
>
> Key: HDFS-13663
> URL: https://issues.apache.org/jira/browse/HDFS-13663
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: Shweta
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-13663.001.patch, HDFS-13663.002.patch, 
> HDFS-13663.003.patch
>
>
> See
> ./hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockRecoveryWorker.java
> {code}
> void syncBlock(List<BlockRecord> syncList) throws IOException {
>   ...
>     newBlock.setNumBytes(finalizedLength);
>     break;
>   case RBW:
>   case RWR:
>     long minLength = Long.MAX_VALUE;
>     for (BlockRecord r : syncList) {
>       ReplicaState rState = r.rInfo.getOriginalReplicaState();
>       if (rState == bestState) {
>         minLength = Math.min(minLength, r.rInfo.getNumBytes());
>         participatingList.add(r);
>       }
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("syncBlock replicaInfo: block=" + block +
>             ", from datanode " + r.id + ", receivedState=" + rState.name() +
>             ", receivedLength=" + r.rInfo.getNumBytes() +
>             ", bestState=" + bestState.name());
>       }
>     }
>     // recover() guarantees syncList will have at least one replica with RWR
>     // or better state.
>     assert minLength != Long.MAX_VALUE : "wrong minLength"; // <= should throw exception
>     newBlock.setNumBytes(minLength);
>     break;
>   case RUR:
>   case TEMPORARY:
>     assert false : "bad replica state: " + bestState;
>   default:
>     break; // we have 'case' all enum values
>   }
> {code}
> When minLength is Long.MAX_VALUE, an exception should be thrown.
> There might be other places like this.
> Otherwise, we would see the following WARN in the datanode log:
> {code}
> WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Can't replicate block 
> xyz because on-disk length 11852203 is shorter than NameNode recorded length 
> 9223372036854775807
> {code}
> where 9223372036854775807 is Long.MAX_VALUE.
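The change requested in the description can be sketched as follows. This is a minimal, self-contained illustration, not the committed patch: the `BlockRecord` class is simplified to just a length, and the method and exception-message names are illustrative.

```java
import java.io.IOException;
import java.util.List;

public class SyncBlockSketch {
  // Simplified stand-in for the per-replica record syncBlock() iterates over.
  static class BlockRecord {
    final long numBytes;
    BlockRecord(long numBytes) { this.numBytes = numBytes; }
  }

  // Instead of `assert minLength != Long.MAX_VALUE`, fail explicitly so the
  // sentinel value can never be written into the recovered block, even when
  // the JVM runs without -ea and assertions are disabled.
  static long computeMinLength(List<BlockRecord> syncList) throws IOException {
    long minLength = Long.MAX_VALUE;
    for (BlockRecord r : syncList) {
      minLength = Math.min(minLength, r.numBytes);
    }
    if (minLength == Long.MAX_VALUE) {
      throw new IOException(
          "Incorrect block size: no participating replica found during sync");
    }
    return minLength;
  }
}
```

With the explicit throw, recovery fails fast at the datanode instead of later surfacing as the misleading "NameNode recorded length 9223372036854775807" warning shown above.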



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542456#comment-16542456
 ] 

Xiao Chen commented on HDFS-13663:
--

Test failures are not related to this patch. Committing this.




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542396#comment-16542396
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931394/HDFS-13663.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e33b9ae88bd9 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 556d9b3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24590/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |

[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542290#comment-16542290
 ] 

Xiao Chen commented on HDFS-13663:
--

+1 pending jenkins. Thanks Shweta!




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542275#comment-16542275
 ] 

Shweta commented on HDFS-13663:
---

Hi Xiao,

I have updated the patch as you suggested above.




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-12 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542180#comment-16542180
 ] 

Xiao Chen commented on HDFS-13663:
--

Hi Shweta,

Patch 3 looks really close. Could you remove the extra line above the {{break}}?

 




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-10 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16539443#comment-16539443
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m  
1s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}153m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12931084/HDFS-13663.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3a08085bc95d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4e59b92 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24580/testReport/ |
| Max. process+thread count | 3156 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24580/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-09 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537878#comment-16537878
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930921/HDFS-13663.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e8c18cc8dea2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9bd5bef |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24577/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24577/testReport/ |
| Max. process+thread count | 3033 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |

[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-09 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537762#comment-16537762
 ] 

Shweta commented on HDFS-13663:
---

Thanks Xiao for the comments. 

I have updated the code accordingly and attached the updated patch.




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-09 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537647#comment-16537647
 ] 

Xiao Chen commented on HDFS-13663:
--

Thanks Yongjun for reporting this, and Shweta for working on it.

I agree it's better to fail explicitly here rather than depending on an
assertion, which may silently pass if the NN is not run with {{-ea}}.

Comments on the patch:
- I understand the existing class has some inconsistent styles. Let's follow
the convention of having a space around the brackets.
- We can omit the else statement, because the if statement would throw.
- No need to keep the original assert as a comment. We can just remove it.
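Taken together, these review points sketch out a guard like the following. The class name and message text here are illustrative, not the exact text of the attached patch.

```java
import java.io.IOException;

public class MinLengthCheck {
  // Explicit check replacing `assert minLength != Long.MAX_VALUE`:
  // unlike an assert, it fires even when the JVM runs without -ea.
  static long validate(long minLength) throws IOException {
    if (minLength == Long.MAX_VALUE) {
      throw new IOException(
          "Incorrect length: no replica found in RWR or better state");
    }
    // No `else` branch needed: reaching this line means the check passed,
    // and the old assert is removed rather than kept as a comment.
    return minLength;
  }
}
```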




[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535479#comment-16535479
 ] 

Shweta commented on HDFS-13663:
---

The failed tests pass locally and don't look related to the change. The fix is 
trivial, hence no new unit tests are included.






[jira] [Commented] (HDFS-13663) Should throw exception when incorrect block size is set

2018-07-06 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16535407#comment-16535407
 ] 

genericqa commented on HDFS-13663:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.client.impl.TestBlockReaderLocal |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
|   | hadoop.hdfs.TestRollingUpgrade |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13663 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930568/HDFS-13663.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a9d0690603dd 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 061b168 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24569/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt