[jira] [Commented] (HDFS-15135) EC : ArrayIndexOutOfBoundsException in BlockRecoveryWorker#RecoveryTaskStriped.

2020-02-04 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030392#comment-17030392
 ] 

Surendra Singh Lilhore commented on HDFS-15135:
---

[~Sushma_28], please see if you can add a UT for lease recovery.

> EC : ArrayIndexOutOfBoundsException in 
> BlockRecoveryWorker#RecoveryTaskStriped.
> ---
>
> Key: HDFS-15135
> URL: https://issues.apache.org/jira/browse/HDFS-15135
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Surendra Singh Lilhore
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15135.001.patch
>
>
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 8
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$RecoveryTaskStriped.recover(BlockRecoveryWorker.java:464)
>at 
> org.apache.hadoop.hdfs.server.datanode.BlockRecoveryWorker$1.run(BlockRecoveryWorker.java:602)
>at java.lang.Thread.run(Thread.java:745) {noformat}






[jira] [Commented] (HDFS-15115) Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically change logger to debug

2020-02-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030365#comment-17030365
 ] 

Hadoop QA commented on HDFS-15115:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 36 unchanged - 0 fixed = 39 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15115 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992650/HDFS-15115.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 61393573b5b8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce7b8b5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28740/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28740/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28740/testReport/ |
| asflicense | 

[jira] [Comment Edited] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030309#comment-17030309
 ] 

Xieming Li edited comment on HDFS-15111 at 2/5/20 2:37 AM:
---

[~ayushtkn]
 Thank you for your comment.

Again, there are 3 test failures:

[TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDelegationTokensWithHA/testObserverReadProxyProviderWithDT/]
 : This happens even on trunk and is thus regarded as irrelevant.

[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesBlockReportPerStorage/blockReport_02/]
[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesCombinedBlockReport/blockReport_02/]:
 These two errors were not reproducible in my local environment.

 

[~shv]

Thank you for your comment.
I think an assertion might not be a good idea, as a namenode could be stopped from a starting state:

(NameNode#stop seems to be able to stop a namenode in any state.)
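A minimal sketch of that logging-only alternative, assuming the current state is available as a parameter; the names are illustrative and not the HDFS-15111 patch itself:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: rather than asserting on the state, log whichever state
// the services were actually started for, since stop() may run from any state.
public class StopStandbyLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StopStandbyLoggingSketch.class);

  static void stopStandbyServices(String currentState) {
    LOG.info("Stopping services started for {} state", currentState);
    // ... shut down the standby/observer services here ...
  }
}
{code}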

 

 


was (Author: risyomei):
[~ayushtkn]
 Thank you for your comment.

Again, there are 3 test failures:

[TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDelegationTokensWithHA/testObserverReadProxyProviderWithDT/]
 : This happens even on trunk and is thus regarded as irrelevant.

[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesBlockReportPerStorage/blockReport_02/]
[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesCombinedBlockReport/blockReport_02/]:
 These two errors were not reproducible in my local environment.

 

[~shv]

Thank you for your comment.

I think an assertion might not be a good idea, as a namenode could be stopped from a starting state:

 


  

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"






[jira] [Commented] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030309#comment-17030309
 ] 

Xieming Li commented on HDFS-15111:
---

[~ayushtkn]
 Thank you for your comment.

Again, there are 3 test failures:

[TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDelegationTokensWithHA/testObserverReadProxyProviderWithDT/]
 : This happens even on trunk and is thus regarded as irrelevant.

[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesBlockReportPerStorage/blockReport_02/]
[org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport.blockReport_02|https://builds.apache.org/job/PreCommit-HDFS-Build/28735/testReport/org.apache.hadoop.hdfs.server.datanode/TestNNHandlesCombinedBlockReport/blockReport_02/]:
 These two errors were not reproducible in my local environment.

 

[~shv]

Thank you for your comment.

I think an assertion might not be a good idea, as a namenode could be stopped from a starting state:

 


  

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"






[jira] [Commented] (HDFS-15115) Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically change logger to debug

2020-02-04 Thread wangzhixiang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030303#comment-17030303
 ] 

wangzhixiang commented on HDFS-15115:
-

Thanx [~ayushtkn] [~hexiaoqiao] [~weichiu] [~xuzq_zander]

I have updated a UT in HDFS-15115.003.patch. Please continue to help review the code.

> Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically 
> change logger to debug
> ---
>
> Key: HDFS-15115
> URL: https://issues.apache.org/jira/browse/HDFS-15115
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhixiang
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-15115.001.patch, HDFS-15115.003.patch, 
> HDFS-15115.2.patch
>
>
> To get debug info, we dynamically change the logger of 
> BlockPlacementPolicyDefault to debug while the namenode is running. However, 
> the Namenode crashes. From the log, we found NPEs in 
> BlockPlacementPolicyDefault.chooseRandom. The *StringBuilder builder* is used 
> 4 times in the BlockPlacementPolicyDefault.chooseRandom method, but it is 
> only initialized at the first use. If we change the logger of 
> BlockPlacementPolicyDefault to debug after that point, the *builder* in the 
> remaining parts is *NULL* and causes an *NPE*.
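A self-contained sketch of that failure mode and a guarded variant follows; the class and method names are illustrative, not the actual BlockPlacementPolicyDefault code.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: the debug-trace StringBuilder is created solely under the
// first isDebugEnabled() check, so flipping the logger to DEBUG part way
// through the method leaves later debug blocks holding a null builder.
public class DebugBuilderNpeSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DebugBuilderNpeSketch.class);

  static void buggyPattern() {
    StringBuilder builder = null;
    if (LOG.isDebugEnabled()) {
      builder = new StringBuilder();          // only initialized here
    }
    // ... logger level changed to DEBUG at runtime somewhere in between ...
    if (LOG.isDebugEnabled()) {
      builder.append("reason for exclusion"); // NPE if DEBUG was off above
    }
  }

  static void guardedPattern() {
    StringBuilder builder = null;
    if (LOG.isDebugEnabled()) {
      builder = new StringBuilder();
      builder.append("first debug detail");
    }
    if (LOG.isDebugEnabled()) {
      if (builder == null) {                  // re-check before every later use
        builder = new StringBuilder();
      }
      builder.append("later debug detail");
    }
  }
}
{code}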






[jira] [Updated] (HDFS-15115) Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically change logger to debug

2020-02-04 Thread wangzhixiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangzhixiang updated HDFS-15115:

Attachment: HDFS-15115.003.patch

> Namenode crash caused by NPE in BlockPlacementPolicyDefault when dynamically 
> change logger to debug
> ---
>
> Key: HDFS-15115
> URL: https://issues.apache.org/jira/browse/HDFS-15115
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: wangzhixiang
>Assignee: David Mollitor
>Priority: Major
> Attachments: HDFS-15115.001.patch, HDFS-15115.003.patch, 
> HDFS-15115.2.patch
>
>
> To get debug info, we dynamically change the logger of 
> BlockPlacementPolicyDefault to debug while the namenode is running. However, 
> the Namenode crashes. From the log, we found NPEs in 
> BlockPlacementPolicyDefault.chooseRandom. The *StringBuilder builder* is used 
> 4 times in the BlockPlacementPolicyDefault.chooseRandom method, but it is 
> only initialized at the first use. If we change the logger of 
> BlockPlacementPolicyDefault to debug after that point, the *builder* in the 
> remaining parts is *NULL* and causes an *NPE*.






[jira] [Commented] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030274#comment-17030274
 ] 

Konstantin Shvachko commented on HDFS-15111:


Can we just add an assert then that curState is either {{OBSERVER}} or 
{{STANDBY}}? Isn't that what we expect here?
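A self-contained sketch of such an assert, with an illustrative stand-in enum for the HA service state (not the real NameNode types):

{code:java}
// Illustrative only; the real state lives in the NameNode's HA machinery.
public class StopStandbyAssertSketch {
  enum ServiceState { ACTIVE, STANDBY, OBSERVER, INITIALIZING }

  static void stopStandbyServices(ServiceState curState) {
    assert curState == ServiceState.STANDBY || curState == ServiceState.OBSERVER
        : "stopStandbyServices() called from unexpected state " + curState;
    // ... stop the services that were started for curState ...
  }
}
{code}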

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"






[jira] [Commented] (HDFS-15086) Block scheduled counter never get decremet if the block got deleted before replication.

2020-02-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030208#comment-17030208
 ] 

Hadoop QA commented on HDFS-15086:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 166 unchanged - 2 fixed = 166 total (was 168) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15086 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992630/HDFS-15086.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 28447ae03b83 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce7b8b5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28739/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28739/testReport/ |
| Max. process+thread count | 3158 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Commented] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030188#comment-17030188
 ] 

Wei-Chiu Chuang commented on HDFS-15150:


{code:java}
  this.datasetRWLock = new InstrumentedReadWriteLock(true,
{code}
This means the dataset lock will be a fair RW lock. I wonder if we should make 
it configurable. Looking at the ReentrantReadWriteLock usage in the namenode 
(HDFS-5241), the unfair lock outperforms the fair lock.

Unrelated, but the following existing code looks suspicious:
{code}
volumes.waitVolumeRemoved(5000, datasetWriteLockCondition);
{code}
This condition is never notified by another thread. I wonder how it worked 
before; I need to dust off my Java concurrency.
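A minimal sketch of making the fairness configurable, as suggested above; the configuration key is a hypothetical example, not something defined by the HDFS-15150 patch:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.hadoop.conf.Configuration;

// Sketch only: reads a (hypothetical) key to decide between a fair and an
// unfair dataset lock. Fair ordering avoids writer starvation, but as the
// NameNode experience in HDFS-5241 showed, the unfair lock gives better
// throughput.
public class DatasetLockFairnessSketch {
  static final String FAIR_LOCK_KEY = "dfs.datanode.lock.fair"; // hypothetical

  static ReentrantReadWriteLock createDatasetLock(Configuration conf) {
    boolean fair = conf.getBoolean(FAIR_LOCK_KEY, true);
    return new ReentrantReadWriteLock(fair);
  }
}
{code}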

> Introduce read write lock to Datanode
> -
>
> Key: HDFS-15150
> URL: https://issues.apache.org/jira/browse/HDFS-15150
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15150.001.patch, HDFS-15150.002.patch
>
>
> HDFS-9668 pointed out the issues around the DN lock being a point of 
> contention some time ago, but that Jira went in a direction of creating a new 
> FSDataset implementation which is very risky, and activity on the Jira has 
> stalled for a few years now. Edit: Looks like HDFS-9668 eventually went in a 
> similar direction to what I was thinking, so I will review that Jira in more 
> detail to see if this one is necessary.
> I feel there could be significant gains by moving to a ReentrantReadWrite 
> lock within the DN. The current implementation is simply a ReentrantLock so 
> any locker blocks all others.
> One place I think a read lock would benefit us significantly is when the DN 
> is serving a lot of small blocks and there are jobs which perform a lot of 
> reads. The start of reading any blocks right now takes the lock, but if we 
> moved this to a read lock, many reads could do this at the same time.
> The first conservative step would be to change the current lock and then 
> make all accesses to it obtain the write lock. That way, we should keep the 
> current behaviour and then we can selectively move some lock accesses to the 
> read lock in separate Jiras.
> I would appreciate any thoughts on this, and also if anyone has attempted it 
> before and found any blockers.
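A minimal sketch of that conservative first step, with illustrative names (not the HDFS-15150 patch itself): swap the single lock for a ReentrantReadWriteLock but keep every existing caller on the write lock, so behaviour is unchanged until individual call sites move to the read lock in follow-up Jiras.

{code:java}
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConservativeRwLockSketch {
  private final ReentrantReadWriteLock datasetRWLock = new ReentrantReadWriteLock();

  // Step 1: everything that used to take the exclusive ReentrantLock now
  // takes the write lock, which is still mutually exclusive.
  Lock acquireDatasetLock() {
    Lock writeLock = datasetRWLock.writeLock();
    writeLock.lock();
    return writeLock;
  }

  // Step 2 (later Jiras): read-only paths, e.g. starting a block read,
  // can switch to the shared read lock so many readers proceed together.
  Lock acquireDatasetReadLock() {
    Lock readLock = datasetRWLock.readLock();
    readLock.lock();
    return readLock;
  }
}
{code}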






[jira] [Updated] (HDFS-15148) dfs.namenode.send.qop.enabled should not apply to primary NN port

2020-02-04 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-15148:
--
Fix Version/s: 3.3.1
   2.10.1
   3.2.2
   3.1.4
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> dfs.namenode.send.qop.enabled should not apply to primary NN port
> -
>
> Key: HDFS-15148
> URL: https://issues.apache.org/jira/browse/HDFS-15148
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1, 3.3.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Fix For: 3.1.4, 3.2.2, 2.10.1, 3.3.1
>
> Attachments: HDFS-15148.001.patch, HDFS-15148.002.patch, 
> HDFS-15148.003.patch, HDFS-15148.004.patch
>
>
> In HDFS-13617, the NameNode can be configured to wrap its established QOP 
> into the block access token as an encrypted message. Later on, the DataNode 
> will use this message to create the SASL connection. But this new behavior 
> should only apply to the new auxiliary NameNode ports, not the primary port 
> (the one configured in fs.defaultFS), as it may conflict with other existing 
> SASL-related configuration (e.g. dfs.data.transfer.protection). Since this 
> configuration is introduced for auxiliary ports only, we should restrict 
> this new behavior so it does not apply to the primary port.
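A self-contained sketch of the intended restriction, with illustrative names and ports (not the actual HDFS-15148 change):

{code:java}
// Illustrative only: wrap the negotiated QOP into the block access token only
// when dfs.namenode.send.qop.enabled is on AND the request came in on an
// auxiliary port, leaving the primary port's behaviour untouched.
public class QopWrappingSketch {
  static byte[] qopToEmbed(boolean sendQopEnabled, int requestPort,
                           int primaryRpcPort, byte[] establishedQop) {
    boolean auxiliaryPort = requestPort != primaryRpcPort;
    if (sendQopEnabled && auxiliaryPort) {
      return establishedQop;   // embedded in the block access token
    }
    return null;               // primary port keeps the existing SASL behaviour
  }
}
{code}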






[jira] [Commented] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030157#comment-17030157
 ] 

Hadoop QA commented on HDFS-15150:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 31s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}115m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
|   | hadoop.hdfs.TestDeadNodeDetection |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992622/HDFS-15150.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 6f9b5a77d826 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e3a0b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | 

[jira] [Commented] (HDFS-15148) dfs.namenode.send.qop.enabled should not apply to primary NN port

2020-02-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030102#comment-17030102
 ] 

Hudson commented on HDFS-15148:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17925 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17925/])
HDFS-15148. dfs.namenode.send.qop.enabled should not apply to primary NN 
(cliang: rev ce7b8b5634ef84602019cac4ce52337fbe4f9d42)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestMultipleNNPortQOP.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/block/BlockTokenSecretManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockTokenWrappingQOP.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


> dfs.namenode.send.qop.enabled should not apply to primary NN port
> -
>
> Key: HDFS-15148
> URL: https://issues.apache.org/jira/browse/HDFS-15148
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1, 3.3.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15148.001.patch, HDFS-15148.002.patch, 
> HDFS-15148.003.patch, HDFS-15148.004.patch
>
>
> In HDFS-13617, the NameNode can be configured to wrap its established QOP 
> into the block access token as an encrypted message. Later on, the DataNode 
> will use this message to create the SASL connection. But this new behavior 
> should only apply to the new auxiliary NameNode ports, not the primary port 
> (the one configured in fs.defaultFS), as it may conflict with other existing 
> SASL-related configuration (e.g. dfs.data.transfer.protection). Since this 
> configuration is introduced for auxiliary ports only, we should restrict 
> this new behavior so it does not apply to the primary port.






[jira] [Commented] (HDFS-15086) Block scheduled counter never get decremet if the block got deleted before replication.

2020-02-04 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030086#comment-17030086
 ] 

hemanthboyina commented on HDFS-15086:
--

Attached the patch, please review.

> Block scheduled counter never get decremet if the block got deleted before 
> replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15086.001.patch
>
>
> If a block is scheduled for replication and the same file gets deleted, then 
> this type of block will be reported as a bad block from the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.
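A self-contained sketch of the accounting gap, with illustrative names (not the HDFS-15086 patch): the counter is bumped when work is scheduled, so it must also be decremented when that work is abandoned because the block's file was deleted.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only.
public class ScheduledBlocksCounterSketch {
  private final AtomicInteger blocksScheduled = new AtomicInteger();

  void onReplicationScheduled() {
    blocksScheduled.incrementAndGet();
  }

  void onReplicationSucceeded() {
    blocksScheduled.decrementAndGet();
  }

  // The missing branch described above: also decrement when the scheduled
  // work is dropped because the block (or its file) no longer exists.
  void onReplicationAbandoned() {
    blocksScheduled.decrementAndGet();
  }

  int scheduled() {
    return blocksScheduled.get();
  }
}
{code}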






[jira] [Updated] (HDFS-15086) Block scheduled counter never get decremet if the block got deleted before replication.

2020-02-04 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-15086:
-
Attachment: HDFS-15086.001.patch
Status: Patch Available  (was: Open)

> Block scheduled counter never get decremet if the block got deleted before 
> replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-15086.001.patch
>
>
> If a block is scheduled for replication and the same file gets deleted, then 
> this type of block will be reported as a bad block from the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.






[jira] [Commented] (HDFS-15148) dfs.namenode.send.qop.enabled should not apply to primary NN port

2020-02-04 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030085#comment-17030085
 ] 

Chen Liang commented on HDFS-15148:
---

Thanks [~shv]! I have filed HDFS-15146 to fix the test. Will commit v04 patch 
shortly.

> dfs.namenode.send.qop.enabled should not apply to primary NN port
> -
>
> Key: HDFS-15148
> URL: https://issues.apache.org/jira/browse/HDFS-15148
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.10.1, 3.3.1
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15148.001.patch, HDFS-15148.002.patch, 
> HDFS-15148.003.patch, HDFS-15148.004.patch
>
>
> In HDFS-13617, NameNode can be configured to wrap its established QOP into 
> block access token as an encrypted message. Later on DataNode will use this 
> message to create SASL connection. But this new behavior should only apply to 
> new auxiliary NameNode ports, not the primary port (the one configured in 
> fs.defaultFS), as it may cause conflicting behavior with existing other SASL 
> related configuration (e.g. dfs.data.transfer.protection). Since this 
> configure is introduced for to auxiliary ports only, we should restrict this 
> new behavior to not apply to primary port.






[jira] [Commented] (HDFS-15086) Block scheduled counter never get decremet if the block got deleted before replication.

2020-02-04 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030084#comment-17030084
 ] 

hemanthboyina commented on HDFS-15086:
--

Thanks [~surendrasingh] for raising the issue.
I would like to work on this; assigning it to myself.

> Block scheduled counter never get decremet if the block got deleted before 
> replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
>
> If a block is scheduled for replication and the same file gets deleted, then 
> this type of block will be reported as a bad block from the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.






[jira] [Assigned] (HDFS-15086) Block scheduled counter never get decremet if the block got deleted before replication.

2020-02-04 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-15086:


Assignee: hemanthboyina  (was: Surendra Singh Lilhore)

> Block scheduled counter never get decremet if the block got deleted before 
> replication.
> ---
>
> Key: HDFS-15086
> URL: https://issues.apache.org/jira/browse/HDFS-15086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
>
> If a block is scheduled for replication and the same file gets deleted, then 
> this type of block will be reported as a bad block from the DN. 
> For this failed replication work, the scheduled block counter never gets 
> decremented.






[jira] [Created] (HDFS-15153) TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT fails intermittently

2020-02-04 Thread Chen Liang (Jira)
Chen Liang created HDFS-15153:
-

 Summary: 
TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT fails 
intermittently
 Key: HDFS-15153
 URL: https://issues.apache.org/jira/browse/HDFS-15153
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chen Liang
Assignee: Chen Liang


The unit test TestDelegationTokensWithHA.testObserverReadProxyProviderWithDT is 
failing consistently. It seems this is due to a log message change. We should 
fix it.






[jira] [Updated] (HDFS-14989) Add a 'swapBlockList' operation to Namenode.

2020-02-04 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDFS-14989:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks for the reviews, [~arp], [~weichiu], [~ayushtkn]. PR 
https://github.com/apache/hadoop/pull/1819 has been merged to branch 
HDFS-14978_ec_conversion.

> Add a 'swapBlockList' operation to Namenode.
> 
>
> Key: HDFS-14989
> URL: https://issues.apache.org/jira/browse/HDFS-14989
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Borrowing from the design doc.
> bq. The swapBlockList takes two parameters, a source file and a destination 
> file. This operation swaps the blocks belonging to the source and the 
> destination atomically.
> bq. The namespace metadata of interest is the INodeFile class. A file 
> (INodeFile) contains a header composed of PREFERRED_BLOCK_SIZE, 
> BLOCK_LAYOUT_AND_REDUNDANCY and STORAGE_POLICY_ID. In addition, an INodeFile 
> contains a list of blocks (BlockInfo[]). The operation will swap 
> BLOCK_LAYOUT_AND_REDUNDANCY header bits and the block lists. But it will not 
> touch other fields. To avoid complication, this operation will abort if 
> either file is open (isUnderConstruction() == true)
> bq. Additionally, this operation introduces a new opcode OP_SWAP_BLOCK_LIST 
> to record the change persistently.
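A self-contained sketch of the swap semantics described above; FileMeta stands in for INodeFile and only models the fields the operation touches (illustrative names, not the merged HDFS-14989 code):

{code:java}
import java.io.IOException;
import java.util.List;

public class SwapBlockListSketch {
  // Stand-in for INodeFile, modelling only the swapped and preserved fields.
  static class FileMeta {
    List<String> blocks;               // stand-in for the BlockInfo[] list
    long blockLayoutAndRedundancy;     // header bits that are swapped
    long preferredBlockSize;           // intentionally left untouched
    int storagePolicyId;               // intentionally left untouched
    boolean underConstruction;
  }

  static void swapBlockList(FileMeta src, FileMeta dst) throws IOException {
    if (src.underConstruction || dst.underConstruction) {
      throw new IOException("swapBlockList aborted: file is under construction");
    }
    List<String> tmpBlocks = src.blocks;             // swap the block lists
    src.blocks = dst.blocks;
    dst.blocks = tmpBlocks;

    long tmpLayout = src.blockLayoutAndRedundancy;   // swap layout/redundancy bits
    src.blockLayoutAndRedundancy = dst.blockLayoutAndRedundancy;
    dst.blockLayoutAndRedundancy = tmpLayout;
    // The change would be recorded persistently via the new OP_SWAP_BLOCK_LIST opcode.
  }
}
{code}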






[jira] [Commented] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2020-02-04 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030055#comment-17030055
 ] 

Kihwal Lee commented on HDFS-12491:
---

Committed this from trunk to branch-2.10. Thanks for working on this, 
[~samkhan], and for the reviews, [~Jim_Brennan].

> Support wildcard in CLASSPATH for libhdfs
> -
>
> Key: HDFS-12491
> URL: https://issues.apache.org/jira/browse/HDFS-12491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Muhammad Samir Khan
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-12491.001.patch, HDFS-12491.002.patch, 
> testWildCard.sh
>
>
> According to the libhdfs doc, wildcard in CLASSPATH is not supported:
> bq. The most common problem is the CLASSPATH is not set properly when calling 
> a program that uses libhdfs. Make sure you set it to all the Hadoop jars 
> needed to run Hadoop itself as well as the right configuration directory 
> containing hdfs-site.xml. It is not valid to use wildcard syntax for 
> specifying multiple jars. It may be useful to run hadoop classpath --glob or 
> hadoop classpath --jar  to generate the correct classpath for your 
> deployment.






[jira] [Commented] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2020-02-04 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030053#comment-17030053
 ] 

Hudson commented on HDFS-12491:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17924 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17924/])
HDFS-12491. Support wildcard in CLASSPATH for libhdfs. Contributed by (kihwal: 
rev 10a60fbe20bb08cdd71076ea9bf2ebb3a2f6226e)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.h
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/LibHdfs.md


> Support wildcard in CLASSPATH for libhdfs
> -
>
> Key: HDFS-12491
> URL: https://issues.apache.org/jira/browse/HDFS-12491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Muhammad Samir Khan
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-12491.001.patch, HDFS-12491.002.patch, 
> testWildCard.sh
>
>
> According to the libhdfs doc, wildcard in CLASSPATH is not supported:
> bq. The most common problem is the CLASSPATH is not set properly when calling 
> a program that uses libhdfs. Make sure you set it to all the Hadoop jars 
> needed to run Hadoop itself as well as the right configuration directory 
> containing hdfs-site.xml. It is not valid to use wildcard syntax for 
> specifying multiple jars. It may be useful to run hadoop classpath --glob or 
> hadoop classpath --jar  to generate the correct classpath for your 
> deployment.






[jira] [Updated] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2020-02-04 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12491:
--
Fix Version/s: 2.10.1

> Support wildcard in CLASSPATH for libhdfs
> -
>
> Key: HDFS-12491
> URL: https://issues.apache.org/jira/browse/HDFS-12491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Muhammad Samir Khan
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2, 2.10.1
>
> Attachments: HDFS-12491.001.patch, HDFS-12491.002.patch, 
> testWildCard.sh
>
>
> According to the libhdfs doc, wildcard in CLASSPATH is not supported:
> bq. The most common problem is the CLASSPATH is not set properly when calling 
> a program that uses libhdfs. Make sure you set it to all the Hadoop jars 
> needed to run Hadoop itself as well as the right configuration directory 
> containing hdfs-site.xml. It is not valid to use wildcard syntax for 
> specifying multiple jars. It may be useful to run hadoop classpath --glob or 
> hadoop classpath --jar  to generate the correct classpath for your 
> deployment.






[jira] [Updated] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2020-02-04 Thread Kihwal Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-12491:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Support wildcard in CLASSPATH for libhdfs
> -
>
> Key: HDFS-12491
> URL: https://issues.apache.org/jira/browse/HDFS-12491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Muhammad Samir Khan
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-12491.001.patch, HDFS-12491.002.patch, 
> testWildCard.sh
>
>
> According to the libhdfs doc, wildcard in CLASSPATH is not supported:
> bq. The most common problem is the CLASSPATH is not set properly when calling 
> a program that uses libhdfs. Make sure you set it to all the Hadoop jars 
> needed to run Hadoop itself as well as the right configuration directory 
> containing hdfs-site.xml. It is not valid to use wildcard syntax for 
> specifying multiple jars. It may be useful to run hadoop classpath --glob or 
> hadoop classpath --jar  to generate the correct classpath for your 
> deployment.






[jira] [Commented] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2020-02-04 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17030039#comment-17030039
 ] 

Kihwal Lee commented on HDFS-12491:
---

Sorry for the late review.  It looks good to me. I just built and tested it on 
trunk.

+1

> Support wildcard in CLASSPATH for libhdfs
> -
>
> Key: HDFS-12491
> URL: https://issues.apache.org/jira/browse/HDFS-12491
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: Muhammad Samir Khan
>Priority: Major
> Attachments: HDFS-12491.001.patch, HDFS-12491.002.patch, 
> testWildCard.sh
>
>
> According to the libhdfs doc, wildcard in CLASSPATH is not supported:
> bq. The most common problem is the CLASSPATH is not set properly when calling 
> a program that uses libhdfs. Make sure you set it to all the Hadoop jars 
> needed to run Hadoop itself as well as the right configuration directory 
> containing hdfs-site.xml. It is not valid to use wildcard syntax for 
> specifying multiple jars. It may be useful to run hadoop classpath --glob or 
> hadoop classpath --jar  to generate the correct classpath for your 
> deployment.






[jira] [Updated] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15150:
-
Attachment: HDFS-15150.002.patch

> Introduce read write lock to Datanode
> -
>
> Key: HDFS-15150
> URL: https://issues.apache.org/jira/browse/HDFS-15150
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15150.001.patch, HDFS-15150.002.patch
>
>
> HDFS-9668 pointed out the issues around the DN lock being a point of 
> contention some time ago, but that Jira went in a direction of creating a new 
> FSDataset implementation which is very risky, and activity on the Jira has 
> stalled for a few years now. Edit: Looks like HDFS-9668 eventually went in a 
> similar direction to what I was thinking, so I will review that Jira in more 
> detail to see if this one is necessary.
> I feel there could be significant gains by moving to a ReentrantReadWrite 
> lock within the DN. The current implementation is simply a ReentrantLock so 
> any locker blocks all others.
> One place I think a read lock would benefit us significantly is when the DN 
> is serving a lot of small blocks and there are jobs which perform a lot of 
> reads. The start of reading any blocks right now takes the lock, but if we 
> moved this to a read lock, many reads could do this at the same time.
> The first conservative step would be to change the current lock and then 
> make all accesses to it obtain the write lock. That way, we should keep the 
> current behaviour and then we can selectively move some lock accesses to the 
> read lock in separate Jiras.
> I would appreciate any thoughts on this, and also if anyone has attempted it 
> before and found any blockers.






[jira] [Commented] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029982#comment-17029982
 ] 

Hadoop QA commented on HDFS-15150:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 41s{color} | {color:orange} root: The patch generated 1 new + 237 unchanged 
- 0 fixed = 238 total (was 237) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 44s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}243m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992607/HDFS-15150.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3e31715f3ab1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e3a0b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| 

[jira] [Updated] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15150:
-
Attachment: HDFS-15150.001.patch

> Introduce read write lock to Datanode
> -
>
> Key: HDFS-15150
> URL: https://issues.apache.org/jira/browse/HDFS-15150
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15150.001.patch
>
>
> HDFS-9668 pointed out the issues around the DN lock being a point of 
> contention some time ago, but that Jira went in a direction of creating a new 
> FSDataset implementation which is very risky, and activity on the Jira has 
> stalled for a few years now. Edit: Looks like HDFS-9668 eventually went in a 
> similar direction to what I was thinking, so I will review that Jira in more 
> detail to see if this one is necessary.
> I feel there could be significant gains by moving to a ReentrantReadWrite 
> lock within the DN. The current implementation is simply a ReentrantLock so 
> any locker blocks all others.
> One place I think a read lock would benefit us significantly is when the DN 
> is serving a lot of small blocks and there are jobs which perform a lot of 
> reads. The start of reading any blocks right now takes the lock, but if we 
> moved this to a read lock, many reads could do this at the same time.
> The first conservative step would be to change the current lock and then 
> make all accesses to it obtain the write lock. That way, we should keep the 
> current behaviour and then we can selectively move some lock accesses to the 
> readlock in separate Jiras.
> I would appreciate any thoughts on this, and also if anyone has attempted it 
> before and found any blockers.
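The read-lock benefit described above can be illustrated with a minimal, self-contained Java sketch (a hypothetical demo, not DataNode internals): with a ReentrantReadWriteLock, two reader threads can hold the read lock at the same time, whereas a plain ReentrantLock would serialize them.

{noformat}
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical demo of concurrent readers sharing a read lock.
public class ReadLockDemo {
  private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

  public static void main(String[] args) throws InterruptedException {
    Runnable reader = () -> {
      LOCK.readLock().lock();
      try {
        // Both readers can be inside this section at once.
        System.out.println(Thread.currentThread().getName()
            + " reading, read holds=" + LOCK.getReadLockCount());
        Thread.sleep(100); // simulate streaming a small block
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } finally {
        LOCK.readLock().unlock();
      }
    };
    Thread t1 = new Thread(reader, "reader-1");
    Thread t2 = new Thread(reader, "reader-2");
    t1.start();
    t2.start();
    t1.join();
    t2.join();
  }
}
{noformat}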



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15150:
-
Status: Patch Available  (was: Open)

> Introduce read write lock to Datanode
> -
>
> Key: HDFS-15150
> URL: https://issues.apache.org/jira/browse/HDFS-15150
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15150.001.patch
>
>
> HDFS-9668 pointed out the issues around the DN lock being a point of 
> contention some time ago, but that Jira went in a direction of creating a new 
> FSDataset implementation which is very risky, and activity on the Jira has 
> stalled for a few years now. Edit: Looks like HDFS-9668 eventually went in a 
> similar direction to what I was thinking, so I will review that Jira in more 
> detail to see if this one is necessary.
> I feel there could be significant gains by moving to a ReentrantReadWrite 
> lock within the DN. The current implementation is simply a ReentrantLock so 
> any locker blocks all others.
> One place I think a read lock would benefit us significantly is when the DN 
> is serving a lot of small blocks and there are jobs which perform a lot of 
> reads. The start of reading any blocks right now takes the lock, but if we 
> moved this to a read lock, many reads could do this at the same time.
> The first conservative step would be to change the current lock and then 
> make all accesses to it obtain the write lock. That way, we should keep the 
> current behaviour and then we can selectively move some lock accesses to the 
> readlock in separate Jiras.
> I would appreciate any thoughts on this, and also if anyone has attempted it 
> before and found any blockers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15150) Introduce read write lock to Datanode

2020-02-04 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029808#comment-17029808
 ] 

Stephen O'Donnell commented on HDFS-15150:
--

I have posted an initial patch here. There are two main changes:

1. Introduce a RW lock to the DN, but ensure that all lock acquisitions take 
the write lock. That way, things should operate as before. Inside FsDatasetImpl, 
the bulk of the changes were performed with a find-and-replace in IntelliJ.

2. ReplicaMap currently takes an AutoCloseableLock. Rather than changing the 
constructor to accept a read lock and a write lock, I opted to change ReplicaMap 
to expect a ReadWriteLock. Internally, for now, it uses the write lock only. This 
change rippled out into a few test cases, which needed a one-line change to switch 
from new AutoCloseableLock to new ReentrantReadWriteLock.

Let's see how this does in the CI run.
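A minimal sketch of the change described in point 2, assuming a simplified stand-in class rather than the real ReplicaMap: the constructor accepts a ReadWriteLock, but for this first conservative step every access still goes through the write lock, so behaviour matches the old single ReentrantLock.

{noformat}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical, simplified stand-in for ReplicaMap (not the actual patch code).
class ReplicaMapSketch {
  private final ReadWriteLock lock;
  private final Map<Long, String> replicas = new HashMap<>();

  // The constructor now expects a ReadWriteLock instead of a single lock.
  ReplicaMapSketch(ReadWriteLock lock) {
    this.lock = lock;
  }

  String get(long blockId) {
    lock.writeLock().lock();   // conservative: could later become readLock()
    try {
      return replicas.get(blockId);
    } finally {
      lock.writeLock().unlock();
    }
  }

  void add(long blockId, String replicaInfo) {
    lock.writeLock().lock();
    try {
      replicas.put(blockId, replicaInfo);
    } finally {
      lock.writeLock().unlock();
    }
  }
}

// Callers (and the test cases mentioned above) switch to:
//   new ReplicaMapSketch(new ReentrantReadWriteLock());
{noformat}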

> Introduce read write lock to Datanode
> -
>
> Key: HDFS-15150
> URL: https://issues.apache.org/jira/browse/HDFS-15150
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> HDFS-9668 pointed out the issues around the DN lock being a point of 
> contention some time ago, but that Jira went in a direction of creating a new 
> FSDataset implementation which is very risky, and activity on the Jira has 
> stalled for a few years now. Edit: Looks like HDFS-9668 eventually went in a 
> similar direction to what I was thinking, so I will review that Jira in more 
> detail to see if this one is necessary.
> I feel there could be significant gains by moving to a ReentrantReadWrite 
> lock within the DN. The current implementation is simply a ReentrantLock so 
> any locker blocks all others.
> One place I think a read lock would benefit us significantly is when the DN 
> is serving a lot of small blocks and there are jobs which perform a lot of 
> reads. The start of reading any blocks right now takes the lock, but if we 
> moved this to a read lock, many reads could do this at the same time.
> The first conservative step would be to change the current lock and then 
> make all accesses to it obtain the write lock. That way, we should keep the 
> current behaviour and then we can selectively move some lock accesses to the 
> readlock in separate Jiras.
> I would appreciate any thoughts on this, and also if anyone has attempted it 
> before and found any blockers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029790#comment-17029790
 ] 

Hadoop QA commented on HDFS-15111:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
|   | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:c44943d1fc3 |
| JIRA Issue | HDFS-15111 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12992577/HDFS-15111.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 99be5fa4986c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 
08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1e3a0b0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_232 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28736/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28736/testReport/ |
| Max. process+thread count | 2894 (vs. ulimit of 

[jira] [Commented] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029646#comment-17029646
 ] 

Ayush Saxena commented on HDFS-15111:
-

Thanx [~risyomei] for the update. v003 LGTM.

There may be a checkstyle warning due to a missing space before "?".

If there are no further comments or Jenkins complaints, I will fix it while committing.

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17029639#comment-17029639
 ] 

Xieming Li commented on HDFS-15111:
---

[~shv] [~ayushtkn]

Thank you for your comments:

I have changed the description, and modified the code to use the slf4j 
semantics and to log only the standby or observer state.
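A minimal, self-contained sketch of the slf4j style referred to here (a hypothetical example, not the actual HDFS-15111 patch): the HA state is passed as a logging argument instead of being hard-coded in the message string.

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical example of slf4j parameterized logging of the HA state.
public class StateLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StateLoggingSketch.class);

  // 'state' is whichever HA state the services were started for,
  // e.g. "standby" or "observer".
  static void logStopServices(String state) {
    LOG.info("Stopping services started for {} state", state);
  }

  public static void main(String[] args) {
    logStopServices("observer");
  }
}
{noformat}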

 

There are 3 failed tests in the previous Jenkins log:
 * Failures in TestNNHandlesBlockReportPerStorage and 
TestNNHandlesCombinedBlockReport did not occur in my local test environment.
 * TestDelegationTokensWithHA fails even on the trunk branch, and I think it is 
unrelated to this modification. 

 

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li updated HDFS-15111:
--
Attachment: HDFS-15111.003.patch
Status: Patch Available  (was: Open)

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li updated HDFS-15111:
--
Status: Open  (was: Patch Available)

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch, 
> HDFS-15111.003.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15111) stopStandbyServices() should log which service state it is transitioning from.

2020-02-04 Thread Xieming Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xieming Li updated HDFS-15111:
--
Description: Trying to transition Observer to Standby state. 
{{stopStandbyServices()}} logs that it is "Stopping services started for 
standby state". It should be "Stopping services started for observer state"  
(was: Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
logs that it is stopping/starting Standby services.
 # {{startStandbyServices()}} should log which state it is transitioning TO.
 # {{stopStandbyServices()}} should log which state it is transitioning FROM.)

> stopStandbyServices() should log which service state it is transitioning from.
> --
>
> Key: HDFS-15111
> URL: https://issues.apache.org/jira/browse/HDFS-15111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, logging
>Affects Versions: 2.10.0
>Reporter: Konstantin Shvachko
>Assignee: Xieming Li
>Priority: Major
>  Labels: newbie++
> Attachments: HDFS-15111.001.patch, HDFS-15111.002.patch
>
>
> Trying to transition Observer to Standby state. {{stopStandbyServices()}} 
> logs that it is "Stopping services started for standby state". It should be 
> "Stopping services started for observer state"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org