[jira] [Created] (HDFS-16201) Select datanode based on storage type

2021-08-31 Thread Yuanbo Liu (Jira)
Yuanbo Liu created HDFS-16201:
-

 Summary: Select datanode based on storage type
 Key: HDFS-16201
 URL: https://issues.apache.org/jira/browse/HDFS-16201
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu


Since storage policies were introduced into HDFS, it would be useful if the 
HDFS client could choose a replica's datanode based on storage type priority 
when reading. The priority should be RAM_DISK > SSD > DISK > ARCHIVE.

Here is the process graph:

!https://iwiki.woa.com/download/attachments/979566104/image2021-8-31_20-23-24.png?version=1&modificationDate=1630412605000&api=v2!
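
For illustration, a minimal sketch of the proposed ordering; the class and its 
use of plain strings are hypothetical, not the actual HDFS client code a patch 
would touch:

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank replica storage types by read priority,
// RAM_DISK > SSD > DISK > ARCHIVE, with unknown types sorted last.
public class StorageTypePriority {
  private static final List<String> PRIORITY =
      Arrays.asList("RAM_DISK", "SSD", "DISK", "ARCHIVE");

  public static Comparator<String> byStorageType() {
    return Comparator.comparingInt(t -> {
      int i = PRIORITY.indexOf(t);
      return i >= 0 ? i : PRIORITY.size();
    });
  }

  public static void main(String[] args) {
    List<String> replicas = Arrays.asList("DISK", "RAM_DISK", "ARCHIVE", "SSD");
    replicas.sort(byStorageType());
    System.out.println(replicas); // [RAM_DISK, SSD, DISK, ARCHIVE]
  }
}
{code}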



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644709&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644709
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 02:07
Start Date: 01/Sep/21 02:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909806663


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
   |||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 58s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m  2s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 42s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   5m  2s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 21s |  |  the patch passed  |
   | -1 :x: |  javac  |  14m 21s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/7/artifact/out/results-compile-javac-root.txt)
 |  root generated 1 new + 1587 unchanged - 1 fixed = 1588 total (was 1588)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 32s |  |  root: The patch generated 
0 new + 569 unchanged - 1 fixed = 569 total (was 570)  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   5m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 57s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 175m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 306m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestRedudantBlocks |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux aa126cf10a5a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 610e6e46ec87d615c19a51cf979bf43721706e07 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/7/testReport/ |
   | Max. process+thread count | 3439 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/7/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message 

[jira] [Work logged] (HDFS-16200) Improve NameNode failover

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16200?focusedWorklogId=644708&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644708
 ]

ASF GitHub Bot logged work on HDFS-16200:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 01:54
Start Date: 01/Sep/21 01:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3364:
URL: https://github.com/apache/hadoop/pull/3364#issuecomment-909799030


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   1m 10s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   1m 16s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   1m 16s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   1m  9s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   1m  9s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 53s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 251 unchanged 
- 0 fixed = 258 total (was 251)  |
   | -1 :x: |  mvnsite  |   1m 11s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private 

[jira] [Commented] (HDFS-16200) Improve NameNode failover

2021-08-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407748#comment-17407748
 ] 

Hadoop QA commented on HDFS-16200:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
20s{color} |  | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} |  | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} codespell {color} | {color:blue}  0m  
1s{color} |  | {color:blue} codespell was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} |  | {color:green} The patch does not contain any @author tags. 
{color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} |  | {color:green} The patch appears to include 1 new or modified 
test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 33m 
58s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} |  | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} |  | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
18s{color} |  | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 57s{color} |  | {color:green} branch has no errors when building and 
testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m 
10s{color} | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt]
 | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
16s{color} | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt]
 | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 16s{color} 
| 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt]
 | {color:red} hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
9s{color} | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt]
 | {color:red} hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  9s{color} 
| 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3364/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt]
 | {color:red} hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} |
| 

[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=644695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644695
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 01:17
Start Date: 01/Sep/21 01:17
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r699774160



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##########
@@ -113,47 +117,93 @@ public NamenodeHeartbeatService(

   }

+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId  Identifier of the nameservice.
+   * @param nnId  Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId, String resolvedHost) {
+    super(getNnHeartBeatServiceName(nsId, nnId));
+
+    this.resolver = resolver;
+
+    this.nameserviceId = nsId;
+    // Concatenate a unique id from the original nnId and resolvedHost
+    this.namenodeId = nnId + "-" + resolvedHost;
+    this.resolvedHost = resolvedHost;
+    // Save the original nnId to get the ports from the config.
+    this.originalNnId = nnId;
+
+  }
+
   @Override
   protected void serviceInit(Configuration configuration) throws Exception {

     this.conf = DFSHAAdmin.addSecurityConfiguration(configuration);

     String nnDesc = nameserviceId;
     if (this.namenodeId != null && !this.namenodeId.isEmpty()) {
-      this.localTarget = new NNHAServiceTarget(
-          conf, nameserviceId, namenodeId);
       nnDesc += "-" + namenodeId;
     } else {
       this.localTarget = null;
     }

+    if (originalNnId == null) {
+      originalNnId = namenodeId;
+    }
+
     // Get the RPC address for the clients to connect
-    this.rpcAddress = getRpcAddress(conf, nameserviceId, namenodeId);
+    this.rpcAddress = getRpcAddress(conf, nameserviceId, originalNnId);
+    if (resolvedHost != null) {
+      rpcAddress = resolvedHost + ":"
+          + NetUtils.createSocketAddr(rpcAddress).getPort();
+    }
     LOG.info("{} RPC address: {}", nnDesc, rpcAddress);

     // Get the Service RPC address for monitoring
     this.serviceAddress =
-        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, namenodeId);
+        DFSUtil.getNamenodeServiceAddr(conf, nameserviceId, originalNnId);
     if (this.serviceAddress == null) {
       LOG.error("Cannot locate RPC service address for NN {}, " +
           "using RPC address {}", nnDesc, this.rpcAddress);
       this.serviceAddress = this.rpcAddress;
     }
+    if (resolvedHost != null) {

Review comment:
   We do the same thing over and over for the lifeline and the others.
   Maybe do all of them in a single shot?
   The way we extract the port might also be expensive, to be honest; creating 
a socket address just to read the port is usually bad.
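
   A sketch of the cheaper alternative hinted at above: take the port from the 
"host:port" string directly instead of building a socket address. The helper 
name is illustrative, not part of the patch, and it assumes no IPv6 literals:

```java
// Reads the port without the cost of NetUtils.createSocketAddr.
public final class HostPortUtil {
  static int portOf(String hostPort) {
    int idx = hostPort.lastIndexOf(':');
    if (idx < 0) {
      throw new IllegalArgumentException("No port in " + hostPort);
    }
    return Integer.parseInt(hostPort.substring(idx + 1));
  }

  public static void main(String[] args) {
    System.out.println(portOf("nn1.example.com:8020")); // prints 8020
  }
}
```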

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##########
@@ -426,37 +426,53 @@ static String concatSuffixes(String... suffixes) {
     Collection<String> nnIds = getNameNodeIds(conf, nsId);
     Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
     for (String nnId : emptyAsSingletonNull(nnIds)) {
-      String suffix = concatSuffixes(nsId, nnId);
-      String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-      if (address != null) {
-        InetSocketAddress isa = NetUtils.createSocketAddr(address);
-        try {
-          // Datanode should just use FQDN
-          String[] resolvedHostNames = dnr
-              .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-          int port = isa.getPort();
-          for (String hostname : resolvedHostNames) {
-            InetSocketAddress inetSocketAddress = new InetSocketAddress(
-                hostname, port);
-            // Concat nn info with host info to make uniq ID
-            String concatId;
-            if (nnId == null || nnId.isEmpty()) {
-              concatId = String
-                  .join("-", nsId, hostname, String.valueOf(port));
-            } else {
-              concatId = String
-                  .join("-", nsId, nnId, hostname, String.valueOf(port));
-            }
-            ret.put(concatId, inetSocketAddress);
-          }
-        } catch (UnknownHostException e) {
-          LOG.error("Failed to resolve address: " + address);
+      ret.putAll(getResolvedAddressesForNnId(
+          conf, nsId, nnId, dnr, defaultValue, keys));
+    }
+    return ret;
+  }
+
+  public static Map<String, InetSocketAddress> getResolvedAddressesForNnId(
+      Configuration conf, String nsId, 

[jira] [Commented] (HDFS-16194) Add a public method DatanodeID#getDisplayName

2021-08-31 Thread tomscut (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407731#comment-17407731
 ] 

tomscut commented on HDFS-16194:


Hi [~weichiu] [~ayushsaxena] [~ferhui] [~hexiaoqiao] , please take a look at 
this. Thanks.

> Add a public method DatanodeID#getDisplayName
> -
>
> Key: HDFS-16194
> URL: https://issues.apache.org/jira/browse/HDFS-16194
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Add a public method DatanodeID#getDisplayName to simplify the code.
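
A plausible shape for the helper, as a self-contained sketch (hypothetical; 
the real method and its exact signature are in the linked PR):

{code:java}
// Hypothetical stand-in for org.apache.hadoop.hdfs.protocol.DatanodeID.
class DatanodeIdSketch {
  private final String hostName;
  private final int xferPort;

  DatanodeIdSketch(String hostName, int xferPort) {
    this.hostName = hostName;
    this.xferPort = xferPort;
  }

  // Centralizes the "host:port" formatting that callers repeat today.
  public String getDisplayName() {
    return hostName + ":" + xferPort;
  }
}
{code}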



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=644692&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644692
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 00:39
Start Date: 01/Sep/21 00:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-909762383


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   4m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 3 new + 15 unchanged - 1 fixed = 
18 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 243m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  22m 49s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 390m 28s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux c00376752a94 4.15.0-151-generic 

[jira] [Work logged] (HDFS-16158) Discover datanodes with unbalanced volume usage by the standard deviation

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16158?focusedWorklogId=644691&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644691
 ]

ASF GitHub Bot logged work on HDFS-16158:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 00:38
Start Date: 01/Sep/21 00:38
Worklog Time Spent: 10m 
  Work Description: tomscut closed pull request #3288:
URL: https://github.com/apache/hadoop/pull/3288


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644691)
Time Spent: 1h 50m  (was: 1h 40m)

> Discover datanodes with unbalanced volume usage by the standard deviation 
> --
>
> Key: HDFS-16158
> URL: https://issues.apache.org/jira/browse/HDFS-16158
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-08-11-10-14-58-430.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Discover datanodes with unbalanced volume usage by the standard deviation.
> In some scenarios, datanode disk usage can become unbalanced:
> 1. A damaged disk is repaired and brought back online.
> 2. Disks are added to some datanodes.
> 3. Some disks are damaged, resulting in slow data writing.
> 4. Custom volume-choosing policies are in use.
> With unbalanced disk usage, a sudden increase in datanode write traffic can 
> leave the low-usage volumes with busy disk I/O, decreasing throughput across 
> datanodes.
> In this case, we need to find these nodes promptly so we can run the disk 
> balancer or take other action. Based on the volume usage of each datanode, we 
> can calculate the standard deviation of the volume usage: the more unbalanced 
> the volumes, the higher the standard deviation.
> To keep the namenode from becoming too busy, we can calculate the standard 
> deviation on the datanode side, transmit it to the namenode through the 
> heartbeat, and display the result on the namenode web UI. We can then sort on 
> the web UI to find the nodes whose volume usage is unbalanced.
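
As a concrete illustration of the metric described above (a sketch, not the 
patch's code; the usage ratios are made up):

{code:java}
// Standard deviation of per-volume usage ratios on one datanode.
public class VolumeUsageStdDev {
  static double stdDev(double[] usage) {
    double mean = 0;
    for (double u : usage) {
      mean += u;
    }
    mean /= usage.length;
    double var = 0;
    for (double u : usage) {
      var += (u - mean) * (u - mean);
    }
    return Math.sqrt(var / usage.length);
  }

  public static void main(String[] args) {
    double[] balanced = {0.52, 0.50, 0.51, 0.49};
    double[] skewed = {0.90, 0.15, 0.88, 0.20}; // e.g. two newly added disks
    // Prints roughly 0.011 vs 0.358: the skewed node stands out immediately.
    System.out.printf("balanced=%.3f skewed=%.3f%n",
        stdDev(balanced), stdDev(skewed));
  }
}
{code}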



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16158) Discover datanodes with unbalanced volume usage by the standard deviation

2021-08-31 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut resolved HDFS-16158.

Resolution: Abandoned

> Discover datanodes with unbalanced volume usage by the standard deviation 
> --
>
> Key: HDFS-16158
> URL: https://issues.apache.org/jira/browse/HDFS-16158
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-08-11-10-14-58-430.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Discover datanodes with unbalanced volume usage by the standard deviation.
> In some scenarios, datanode disk usage can become unbalanced:
> 1. A damaged disk is repaired and brought back online.
> 2. Disks are added to some datanodes.
> 3. Some disks are damaged, resulting in slow data writing.
> 4. Custom volume-choosing policies are in use.
> With unbalanced disk usage, a sudden increase in datanode write traffic can 
> leave the low-usage volumes with busy disk I/O, decreasing throughput across 
> datanodes.
> In this case, we need to find these nodes promptly so we can run the disk 
> balancer or take other action. Based on the volume usage of each datanode, we 
> can calculate the standard deviation of the volume usage: the more unbalanced 
> the volumes, the higher the standard deviation.
> To keep the namenode from becoming too busy, we can calculate the standard 
> deviation on the datanode side, transmit it to the namenode through the 
> heartbeat, and display the result on the namenode web UI. We can then sort on 
> the web UI to find the nodes whose volume usage is unbalanced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16158) Discover datanodes with unbalanced volume usage by the standard deviation

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16158?focusedWorklogId=644690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644690
 ]

ASF GitHub Bot logged work on HDFS-16158:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 00:37
Start Date: 01/Sep/21 00:37
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3288:
URL: https://github.com/apache/hadoop/pull/3288#issuecomment-909761516


   Hi @jojochuang, this PR has a lot of changes, which could make rolling 
upgrades difficult. I re-implemented this feature and will submit another PR 
later. Thank you for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644690)
Time Spent: 1h 40m  (was: 1.5h)

> Discover datanodes with unbalanced volume usage by the standard deviation 
> --
>
> Key: HDFS-16158
> URL: https://issues.apache.org/jira/browse/HDFS-16158
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-08-11-10-14-58-430.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Discover datanodes with unbalanced volume usage by the standard deviation.
> In some scenarios, datanode disk usage can become unbalanced:
> 1. A damaged disk is repaired and brought back online.
> 2. Disks are added to some datanodes.
> 3. Some disks are damaged, resulting in slow data writing.
> 4. Custom volume-choosing policies are in use.
> With unbalanced disk usage, a sudden increase in datanode write traffic can 
> leave the low-usage volumes with busy disk I/O, decreasing throughput across 
> datanodes.
> In this case, we need to find these nodes promptly so we can run the disk 
> balancer or take other action. Based on the volume usage of each datanode, we 
> can calculate the standard deviation of the volume usage: the more unbalanced 
> the volumes, the higher the standard deviation.
> To keep the namenode from becoming too busy, we can calculate the standard 
> deviation on the datanode side, transmit it to the namenode through the 
> heartbeat, and display the result on the namenode web UI. We can then sort on 
> the web UI to find the nodes whose volume usage is unbalanced.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16200) Improve NameNode failover

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16200?focusedWorklogId=644687&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644687
 ]

ASF GitHub Bot logged work on HDFS-16200:
-

Author: ASF GitHub Bot
Created on: 01/Sep/21 00:16
Start Date: 01/Sep/21 00:16
Worklog Time Spent: 10m 
  Work Description: aihuaxu opened a new pull request #3364:
URL: https://github.com/apache/hadoop/pull/3364


   This patch adds a configuration to skip resolving the topology for the 
client hosts, e.g., YARN hosts. Such topology info is useful in a colocated 
environment but not in a non-colocated one. 
   
   
   ### How was this patch tested?
   unit test
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644687)
Remaining Estimate: 0h
Time Spent: 10m

> Improve NameNode failover
> -
>
> Key: HDFS-16200
> URL: https://issues.apache.org/jira/browse/HDFS-16200
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namanode
>Affects Versions: 2.8.2
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a busy cluster, we are noticing that NameNode failover takes a long time 
> (over 10 minutes) and causes cluster downtime during that period.
> One bottleneck lies in resolving the client hosts' topology when the cluster 
> is not colocated with the computing hosts. The NameNode resolves a client 
> host's topology and uses it to sort the hosts where the blocks are located. 
> The topology is cached so that subsequent accesses are efficient, but if the 
> standby NameNode has just restarted, all the client hosts (e.g., YARN hosts) 
> need to be resolved again.
> Solutions can be: 1) expose an API in DFSAdmin to load the topology cache, or 
> 2) add a new configuration to skip resolving topology for a non-colocated 
> HDFS cluster. Since client hosts and HDFS hosts are not colocated, it is 
> unnecessary to sort the DataNodes for the clients.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16200) Improve NameNode failover

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16200:
--
Labels: pull-request-available  (was: )

> Improve NameNode failover
> -
>
> Key: HDFS-16200
> URL: https://issues.apache.org/jira/browse/HDFS-16200
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namanode
>Affects Versions: 2.8.2
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In a busy cluster, we are noticing that NameNode failover takes a long time 
> (over 10 minutes) and causes cluster downtime during that period.
> One bottleneck lies in resolving the client hosts' topology when the cluster 
> is not colocated with the computing hosts. The NameNode resolves a client 
> host's topology and uses it to sort the hosts where the blocks are located. 
> The topology is cached so that subsequent accesses are efficient, but if the 
> standby NameNode has just restarted, all the client hosts (e.g., YARN hosts) 
> need to be resolved again.
> Solutions can be: 1) expose an API in DFSAdmin to load the topology cache, or 
> 2) add a new configuration to skip resolving topology for a non-colocated 
> HDFS cluster. Since client hosts and HDFS hosts are not colocated, it is 
> unnecessary to sort the DataNodes for the clients.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16200) Improve NameNode failover

2021-08-31 Thread Aihua Xu (Jira)
Aihua Xu created HDFS-16200:
---

 Summary: Improve NameNode failover
 Key: HDFS-16200
 URL: https://issues.apache.org/jira/browse/HDFS-16200
 Project: Hadoop HDFS
  Issue Type: Task
  Components: namanode
Affects Versions: 2.8.2
Reporter: Aihua Xu
Assignee: Aihua Xu


In a busy cluster, we are noticing that NameNode failover takes a long time 
(over 10 minutes) and causes cluster downtime during that period.

One bottleneck lies in resolving the client hosts' topology when the cluster is 
not colocated with the computing hosts. The NameNode resolves a client host's 
topology and uses it to sort the hosts where the blocks are located. The 
topology is cached so that subsequent accesses are efficient, but if the 
standby NameNode has just restarted, all the client hosts (e.g., YARN hosts) 
need to be resolved again.

Solutions can be: 1) expose an API in DFSAdmin to load the topology cache, or 
2) add a new configuration to skip resolving topology for a non-colocated HDFS 
cluster. Since client hosts and HDFS hosts are not colocated, it is unnecessary 
to sort the DataNodes for the clients. 
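
As a rough sketch of the caching behavior described above (illustrative names, 
not HDFS code; the boolean flag stands in for the proposed configuration):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoized host-to-rack lookup like the one the NameNode performs.
public class CachedTopologyResolver {
  private final Map<String, String> cache = new ConcurrentHashMap<>();
  private final Function<String, String> rackResolver; // e.g. script/DNS based
  private final boolean skipResolving; // proposed opt-out for non-colocated clusters

  public CachedTopologyResolver(Function<String, String> rackResolver,
      boolean skipResolving) {
    this.rackResolver = rackResolver;
    this.skipResolving = skipResolving;
  }

  public String resolve(String clientHost) {
    if (skipResolving) {
      // No point sorting DataNodes by distance to a remote client.
      return "/default-rack";
    }
    // A freshly restarted standby starts with an empty cache, so every
    // client host pays the (slow) resolution cost exactly once.
    return cache.computeIfAbsent(clientHost, rackResolver);
  }
}
{code}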



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644632&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644632
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 21:03
Start Date: 31/Aug/21 21:03
Worklog Time Spent: 10m 
  Work Description: amahussein edited a comment on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909634740


   After rebasing, it looks like javac checks were added to the build and 
failed it.
   Although this is part of the original code, I decided to address it so that 
we can pass the build and move forward with the PR.
   I also tested TestDirectoryScanner and it passed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644632)
Time Spent: 3h 50m  (was: 3h 40m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it.
> This Jira switches read operations against the volume map to use the read 
> lock rather than the write lock.
> Additionally, some methods call replicaMap.replicas() (e.g. getBlockReports, 
> getFinalizedBlocks, deepCopyReplica) and only use the result in a read-only 
> fashion, so they can also be switched to using a read lock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in 
> BlockSender and FsDatasetImpl where it is fairly obvious they only need a 
> read lock.
> For now, I have avoided changing anything that looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.
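
The pattern being applied, as a self-contained sketch (illustrative class, not 
the actual FsDatasetImpl code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Read-only queries share the read lock and can run concurrently;
// mutations still serialize on the exclusive write lock.
public class DatasetLockPattern {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final List<String> replicas = new ArrayList<>();

  public int getNumReplicas() { // read-only: read lock suffices
    lock.readLock().lock();
    try {
      return replicas.size();
    } finally {
      lock.readLock().unlock();
    }
  }

  public void addReplica(String id) { // mutation: write lock required
    lock.writeLock().lock();
    try {
      replicas.add(id);
    } finally {
      lock.writeLock().unlock();
    }
  }
}
{code}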



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644631&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644631
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 21:02
Start Date: 31/Aug/21 21:02
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909634740


   After rebasing, it looks like javac checks were added to the build and 
failed it.
   Although this is part of the original code, I decided to address it so that 
we can pass the build and move forward with the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644631)
Time Spent: 3h 40m  (was: 3.5h)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it.
> This Jira switches read operations against the volume map to use the read 
> lock rather than the write lock.
> Additionally, some methods call replicaMap.replicas() (e.g. getBlockReports, 
> getFinalizedBlocks, deepCopyReplica) and only use the result in a read-only 
> fashion, so they can also be switched to using a read lock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in 
> BlockSender and FsDatasetImpl where it is fairly obvious they only need a 
> read lock.
> For now, I have avoided changing anything that looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644595&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644595
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 19:40
Start Date: 31/Aug/21 19:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909552788


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   8m  8s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
   |||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 44s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 28s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   4m 59s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 24s |  |  the patch passed  |
   | -1 :x: |  javac  |  14m 24s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/6/artifact/out/results-compile-javac-root.txt)
 |  root generated 1 new + 1587 unchanged - 1 fixed = 1588 total (was 1588)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 33s |  |  root: The patch generated 
0 new + 569 unchanged - 1 fixed = 569 total (was 570)  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   5m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 54s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 174m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 316m 39s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 8ac03118cb9f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 29a98320b76d60afb481f729456fbb9db1c37287 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/6/testReport/ |
   | Max. process+thread count | 3189 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/6/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This 

[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=644593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644593
 ]

ASF GitHub Bot logged work on HDFS-16198:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 19:39
Start Date: 31/Aug/21 19:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359#issuecomment-909552073


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  20m  1s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   5m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |   6m 15s | 
[/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/1/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 654 unchanged - 0 
fixed = 655 total (was 654)  |
   | +1 :green_heart: |  compile  |   5m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |   5m 50s | 
[/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/1/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 
632 unchanged - 0 fixed = 633 total (was 632)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 22s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3359/1/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 5 new + 31 unchanged - 0 fixed = 
36 total (was 31)  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 349m 41s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 510m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | 

[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=644574&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644574
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 19:04
Start Date: 31/Aug/21 19:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362#issuecomment-909521290


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 35s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 115m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3362 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e9cf69f2ea5b 4.15.0-151-generic #157-Ubuntu SMP Fri Jul 9 
23:07:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 41dfba73d39b5c0aa780f4dc1df96307f404612c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/1/testReport/ |
   | Max. process+thread count | 2109 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/1/console |
   | versions 

[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=644528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644528
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 16:52
Start Date: 31/Aug/21 16:52
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362


   ### Description of PR
   NamenodeBeanMetrics has some missing placeholders in logs. This patch 
attempts to add them.
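   For context, a minimal sketch of this class of fix; the logger, class and 
message below are illustrative assumptions, not the actual NamenodeBeanMetrics 
lines:
   
   ```
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class PlaceholderExample {
     private static final Logger LOG =
         LoggerFactory.getLogger(PlaceholderExample.class);
   
     public static void main(String[] args) {
       String beanName = "Hadoop:service=NameNode,name=FSNamesystemState";
       // Broken: no {} in the format string, so beanName is silently dropped.
       LOG.warn("Failed to fetch metrics from bean", beanName);
       // Fixed: the missing placeholder is added and beanName is rendered.
       LOG.warn("Failed to fetch metrics from bean {}", beanName);
     }
   }
   ```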


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644528)
Remaining Estimate: 0h
Time Spent: 10m

> Resolve log placeholders in NamenodeBeanMetrics
> ---
>
> Key: HDFS-16199
> URL: https://issues.apache.org/jira/browse/HDFS-16199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NamenodeBeanMetrics has some missing placeholders in logs. This Jira is to 
> fix them all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16199:
--
Labels: pull-request-available  (was: )

> Resolve log placeholders in NamenodeBeanMetrics
> ---
>
> Key: HDFS-16199
> URL: https://issues.apache.org/jira/browse/HDFS-16199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NamenodeBeanMetrics has some missing placeholders in logs. This Jira is to 
> fix them all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-08-31 Thread Viraj Jasani (Jira)
Viraj Jasani created HDFS-16199:
---

 Summary: Resolve log placeholders in NamenodeBeanMetrics
 Key: HDFS-16199
 URL: https://issues.apache.org/jira/browse/HDFS-16199
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Viraj Jasani
Assignee: Viraj Jasani


NamenodeBeanMetrics has some missing placeholders in logs. This Jira is to fix 
them all.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=644517&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644517
 ]

ASF GitHub Bot logged work on HDFS-16198:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 16:30
Start Date: 31/Aug/21 16:30
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359#discussion_r699484858



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithShortCircuitRead.java
##
@@ -0,0 +1,233 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager;
+import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShm;
+import 
org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.PerDatanodeVisitorInfo;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.Visitor;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.Slot;
+import org.apache.hadoop.net.unix.DomainSocket;
+import org.apache.hadoop.net.unix.TemporarySocketDirectory;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.log4j.Level;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Random;
+
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TestBlockTokenWithShortCircuitRead {
+
+  private static final int BLOCK_SIZE = 1024;
+  private static final int FILE_SIZE = 2 * BLOCK_SIZE;
+  private static final String FILE_TO_SHORT_CIRCUIT_READ = "/fileToSSR.dat";
+  private final byte[] rawData = new byte[FILE_SIZE];
+
+  {
+    GenericTestUtils.setLogLevel(DFSClient.LOG, Level.ALL);
+    Random r = new Random();
+    r.nextBytes(rawData);
+  }
+
+  private void createFile(FileSystem fs, Path filename) throws IOException {
+    FSDataOutputStream out = fs.create(filename);
+    out.write(rawData);
+    out.close();
+  }
+
+  // read a file using blockSeekTo()
+  private boolean checkFile1(FSDataInputStream in) {
+    byte[] toRead = new byte[FILE_SIZE];
+    int totalRead = 0;
+    int nRead;
+    try {
+      while ((nRead = in.read(toRead, totalRead,
+          toRead.length - totalRead)) > 0) {
+        totalRead += nRead;
+      }
+    } catch (IOException e) {
+      return false;
+    }
+    assertEquals("Cannot read file.", toRead.length, totalRead);
+    return checkFile(toRead);
+  }
+
+  private boolean checkFile(byte[] fileToCheck) {
+    if (fileToCheck.length != rawData.length) {
+      return false;
+    }
+    for (int i = 0; i < fileToCheck.length; i++) {
+      if 

[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=644502&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644502
 ]

ASF GitHub Bot logged work on HDFS-16198:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 16:08
Start Date: 31/Aug/21 16:08
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359#discussion_r699469267



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockTokenWithShortCircuitRead.java
##
@@ -0,0 +1,233 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.blockmanagement;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;
+import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
+import org.apache.hadoop.hdfs.protocol.LocatedBlock;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
+import org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager;
+import org.apache.hadoop.hdfs.security.token.block.SecurityTestUtil;
+import org.apache.hadoop.hdfs.server.namenode.NameNode;
+import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocols;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShm;
+import 
org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.PerDatanodeVisitorInfo;
+import org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.Visitor;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache;
+import org.apache.hadoop.hdfs.shortcircuit.ShortCircuitShm.Slot;
+import org.apache.hadoop.net.unix.DomainSocket;
+import org.apache.hadoop.net.unix.TemporarySocketDirectory;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.log4j.Level;
+import org.junit.Assert;
+import org.junit.Test;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Random;
+
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_BLOCK_SIZE_KEY;
+import static org.apache.hadoop.hdfs.DFSConfigKeys.DFS_DOMAIN_SOCKET_PATH_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TestBlockTokenWithShortCircuitRead {
+
+  private static final int BLOCK_SIZE = 1024;
+  private static final int FILE_SIZE = 2 * BLOCK_SIZE;
+  private static final String FILE_TO_SHORT_CIRCUIT_READ = "/fileToSSR.dat";
+  private final byte[] rawData = new byte[FILE_SIZE];
+
+  {
+    GenericTestUtils.setLogLevel(DFSClient.LOG, Level.ALL);
+    Random r = new Random();
+    r.nextBytes(rawData);
+  }
+
+  private void createFile(FileSystem fs, Path filename) throws IOException {
+    FSDataOutputStream out = fs.create(filename);
+    out.write(rawData);
+    out.close();
+  }
+
+  // read a file using blockSeekTo()
+  private boolean checkFile1(FSDataInputStream in) {

Review comment:
   nit: can you rename it to something more meaningful?


[jira] [Work logged] (HDFS-16192) ViewDistributedFileSystem#rename wrongly using src in the place of dst.

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16192?focusedWorklogId=62&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-62
 ]

ASF GitHub Bot logged work on HDFS-16192:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:25
Start Date: 31/Aug/21 15:25
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on a change in pull request 
#3353:
URL: https://github.com/apache/hadoop/pull/3353#discussion_r698559465



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
##
@@ -89,4 +91,30 @@ public void testEmptyDelegationToken() throws IOException {
   }
 }
   }
+
+  @Test
+  public void testRenameWithOptions() throws IOException {
+    Configuration conf = getTestConfiguration();
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
+      URI defaultUri =
+          URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+      conf.set("fs.viewfs.mounttable." + defaultUri.getHost() + ".linkFallback",
+          defaultUri.toString());
+      conf.setLong(CommonConfigurationKeys.FS_TRASH_INTERVAL_KEY, 3);
+      try (ViewDistributedFileSystem fileSystem =
+          (ViewDistributedFileSystem) FileSystem.get(conf)) {
+        final Path testDir = new Path("/test");
+        final Path renameDir = new Path("/testRename");
+        fileSystem.mkdirs(testDir);
+        fileSystem.rename(testDir, renameDir, Options.Rename.TO_TRASH);
+        Assert.assertTrue(fileSystem.exists(renameDir));

Review comment:
   @jojochuang The root cause of this issue is not really trash itself; the 
issue is with the rename-with-options API, and it shows up irrespective of the 
options flag. I just passed the TO_TRASH flag because I started debugging 
through that API. In the trash case the target dir is formed by 
TrashPolicyDefault, whereas in this test I supply my own target. Rename's 
responsibility here is to rename to the given path, so I assert on the dir I 
supplied. The NN uses these flags to do some additional checks on the server 
side.
   
   ex: In FSDirRenameOp
   
   ```
   if (renameToTrash) {
     // if destination is the trash directory,
     // besides the permission check on "rename"
     // we need to enforce the check for "delete"
     // otherwise, it would expose a
     // security hole that stuff moved to trash
     // will be deleted by superuser
     fsd.checkPermission(pc, srcIIP, false, null, FsAction.WRITE, null,
         FsAction.ALL, true);
   } else {
     // Rename does not operate on link targets
     // Do not resolveLink when checking permissions of src and dst
     // Check write access to parent of src
     fsd.checkPermission(pc, srcIIP, false, null, FsAction.WRITE, null,
         null, false);
   }
   ```
   Let me know if you are still concerned about the flag; I can change it to 
the overwrite flag. We can also add a moveToTrash API. (Once we know the root 
cause, I thought my test should focus on that cause.) I tested trash manually 
and it worked.
   




-- 
This is 

[jira] [Work logged] (HDFS-15862) Rename file back to original in TestViewfsWithNfs3.testNfsRenameSingleNN

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15862?focusedWorklogId=644467&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644467
 ]

ASF GitHub Bot logged work on HDFS-15862:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:27
Start Date: 31/Aug/21 15:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2724:
URL: https://github.com/apache/hadoop/pull/2724#issuecomment-908942305


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 58s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2724 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3441aa40f3c3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75d6bb6fe30634f905bab8bdadf1eb0e882d97a9 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/testReport/ |
   | Max. process+thread count | 636 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: 
hadoop-hdfs-project/hadoop-hdfs-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   

[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=644427&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644427
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:23
Start Date: 31/Aug/21 15:23
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698849492



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId  Identifier of the nameservice.
+   * @param nnId  Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+      ActiveNamenodeResolver resolver, String nsId, String nnId,
+      String resolvedHost) {
+    super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
   We probably want a getNnName() to make this more readable.
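   As a rough sketch of that suggestion (the helper below is hypothetical, not 
the actual HDFS-16188 change), the name-building expression could be pulled out 
into a small method inside NamenodeHeartbeatService:
   
   ```
   // Hypothetical helper: builds the display name handed to super(...), so the
   // constructor reads as super(getNnName(nsId, nnId, resolvedHost)).
   private static String getNnName(String nsId, String nnId, String resolvedHost) {
     String name = NamenodeHeartbeatService.class.getSimpleName() + " " + nsId;
     if (nnId != null && !nnId.isEmpty()) {
       name += "-" + nnId;
     }
     return name + "-" + resolvedHost;
   }
   ```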

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
 Collection<String> nnIds = getNameNodeIds(conf, nsId);
 Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
 for (String nnId : emptyAsSingletonNull(nnIds)) {
-  String suffix = concatSuffixes(nsId, nnId);
-  String address = checkKeysAndProcess(defaultValue, suffix, conf, keys);
-  if (address != null) {
-    InetSocketAddress isa = NetUtils.createSocketAddr(address);
-    try {
-      // Datanode should just use FQDN
-      String[] resolvedHostNames = dnr
-          .getAllResolvedHostnameByDomainName(isa.getHostName(), true);
-      int port = isa.getPort();
-      for (String hostname : resolvedHostNames) {
-        InetSocketAddress inetSocketAddress = new InetSocketAddress(
-            hostname, port);
-        // Concat nn info with host info to make uniq ID
-        String concatId;
-        if (nnId == null || nnId.isEmpty()) {
-          concatId = String
-              .join("-", nsId, hostname, String.valueOf(port));
-        } else {
-          concatId = String
-              .join("-", nsId, nnId, hostname, String.valueOf(port));
-        }
-        ret.put(concatId, inetSocketAddress);
+  getResolvedAddressesForNnId(
+      conf, nsId, nnId, dnr, defaultValue, ret, keys);

Review comment:
   I think it is better to not have the "ret" as a parameter.
   We should return it and use addAll().
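   A sketch of the suggested shape (the helper signature is an assumption); 
since "ret" is a Map, the merging call is putAll() rather than addAll():
   
   ```
   // The helper builds and returns its own map instead of mutating a
   // caller-supplied "ret".
   Map<String, InetSocketAddress> resolved =
       getResolvedAddressesForNnId(conf, nsId, nnId, dnr, defaultValue, keys);
   ret.putAll(resolved);
   ```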

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterNamenodeHeartbeat.java
##
@@ -203,4 +210,64 @@ public void testHearbeat() throws InterruptedException, 
IOException {
 standby = normalNss.get(1);
 assertEquals(NAMENODES[1], standby.getNamenodeId());
   }
+
+  @Test
+  public void testNamenodeHeartbeatServiceNNResolution() {
+    String nsId = "test-ns";
+    String nnId = "nn";
+    String rpcPort = "1000";
+    String servicePort = "1001";
+    String lifelinePort = "1002";
+    String webAddressPort = "1003";
+    Configuration conf = generateNamenodeConfiguration(nsId, nnId,
+        rpcPort, servicePort, lifelinePort, webAddressPort);
+
+    Router testRouter = new Router();
+    testRouter.setConf(conf);
+
+    Collection<NamenodeHeartbeatService> heartbeatServices =
+        testRouter.createNamenodeHeartbeatServices();
+
+    assertEquals(2, heartbeatServices.size());
+
+    Iterator<NamenodeHeartbeatService> iterator = heartbeatServices.iterator();
+    NamenodeHeartbeatService service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host01.test:host01.test:1001",
+        service.getNamenodeDesc());
+
+    service = iterator.next();
+    service.init(conf);
+    assertEquals("test-ns-nn-host02.test:host02.test:1001",
+        service.getNamenodeDesc());
+
+  }
+
+  private Configuration generateNamenodeConfiguration(
+      String nsId, String nnId,
+      String rpcPort, String servicePort,

Review comment:
   Make all the ports ints.
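   Illustrative signature with int ports (assumed, not the committed form):
   
   ```
   private Configuration generateNamenodeConfiguration(
       String nsId, String nnId,
       int rpcPort, int servicePort,
       int lifelinePort, int webAddressPort) {
     // ... build the conf, using String.valueOf(port) where strings are needed ...
   }
   ```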

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
##
@@ -426,35 +426,42 @@ static String concatSuffixes(String... suffixes) {
 Collection<String> nnIds = getNameNodeIds(conf, nsId);
 Map<String, InetSocketAddress> ret = Maps.newLinkedHashMap();
 for (String nnId : emptyAsSingletonNull(nnIds)) {
-  String suffix = concatSuffixes(nsId, nnId);
-  String address = checkKeysAndProcess(defaultValue, 

[jira] [Work logged] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?focusedWorklogId=644429&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644429
 ]

ASF GitHub Bot logged work on HDFS-16197:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:23
Start Date: 31/Aug/21 15:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3357:
URL: https://github.com/apache/hadoop/pull/3357#issuecomment-909077338


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 52s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 125 unchanged 
- 2 fixed = 127 total (was 127)  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 262m 18s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 366m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3357 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 76ee140ebf16 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6536e303023742f18b43b23a6820ee3a27784d00 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/testReport/ |
   | Max. process+thread count | 2721 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Work logged] (HDFS-16192) ViewDistributedFileSystem#rename wrongly using src in the place of dst.

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16192?focusedWorklogId=644415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644415
 ]

ASF GitHub Bot logged work on HDFS-16192:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:21
Start Date: 31/Aug/21 15:21
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3353:
URL: https://github.com/apache/hadoop/pull/3353#discussion_r698896805



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java
##
@@ -89,4 +91,30 @@ public void testEmptyDelegationToken() throws IOException {
   }
 }
   }
+
+  @Test
+  public void testRenameWithOptions() throws IOException {
+    Configuration conf = getTestConfiguration();
+    MiniDFSCluster cluster = null;
+    try {
+      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(0).build();
+      URI defaultUri =
+          URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+      conf.set("fs.viewfs.mounttable." + defaultUri.getHost() + ".linkFallback",
+          defaultUri.toString());
+      conf.setLong(CommonConfigurationKeys.FS_TRASH_INTERVAL_KEY, 3);
+      try (ViewDistributedFileSystem fileSystem =
+          (ViewDistributedFileSystem) FileSystem.get(conf)) {
+        final Path testDir = new Path("/test");
+        final Path renameDir = new Path("/testRename");
+        fileSystem.mkdirs(testDir);
+        fileSystem.rename(testDir, renameDir, Options.Rename.TO_TRASH);
+        Assert.assertTrue(fileSystem.exists(renameDir));

Review comment:
   ok got it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644415)
Time Spent: 2.5h  (was: 2h 20m)

> ViewDistributedFileSystem#rename wrongly using src in the place of dst.
> ---
>
> Key: HDFS-16192
> URL: https://issues.apache.org/jira/browse/HDFS-16192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In ViewDistributedFileSystem, we mistakenly used the src path in the place of 
> the dst path when finding mount path info.
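
A minimal sketch of the mixup, with assumed local names (see the pull request 
for the actual change):

```
// buggy: mount path info for the destination was resolved from the source path
//   dstMountInfo = getMountPathInfo(src, conf);
// fixed: the destination path drives the destination lookup
//   dstMountInfo = getMountPathInfo(dst, conf);
```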



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644370
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:16
Start Date: 31/Aug/21 15:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909105949


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 31s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 31s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 39s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   5m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  16m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 53s |  |  the patch passed  |
   | -1 :x: |  javac  |  14m 53s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/results-compile-javac-root.txt)
 |  root generated 1 new + 1576 unchanged - 1 fixed = 1577 total (was 1577)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  |  root: The patch generated 
0 new + 568 unchanged - 1 fixed = 568 total (was 569)  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   5m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 45s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 208m 36s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 357m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 573c79676bd4 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 9516ddf5488d512bd55f539363b0befe575b893a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/testReport/ |
   | Max. process+thread count | 1948 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache 

[jira] [Work logged] (HDFS-15862) Rename file back to original in TestViewfsWithNfs3.testNfsRenameSingleNN

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15862?focusedWorklogId=644364&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644364
 ]

ASF GitHub Bot logged work on HDFS-15862:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:15
Start Date: 31/Aug/21 15:15
Worklog Time Spent: 10m 
  Work Description: lzx404243 commented on pull request #2724:
URL: https://github.com/apache/hadoop/pull/2724#issuecomment-908901777


   @aajisaka Thanks for the feedback! I've made the corresponding changes 
following your suggestion. Please let me know what you think.
   
   Thanks!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644364)
Time Spent: 50m  (was: 40m)

> Rename file back to original in TestViewfsWithNfs3.testNfsRenameSingleNN
> 
>
> Key: HDFS-15862
> URL: https://issues.apache.org/jira/browse/HDFS-15862
> Project: Hadoop HDFS
>  Issue Type: Test
>Reporter: Zhengxi Li
>Assignee: Zhengxi Li
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-15862.001.patch, HDFS-15862.002.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The 
> 'org.apache.hadoop.hdfs.nfs.nfs3.TestViewfsWithNfs3.testNfsRenameSingleNN' 
> test is not idempotent and fails if run twice in the same JVM, because it 
> pollutes state shared among tests. Cleaning up this state would keep other 
> tests from failing in the future due to the pollution left behind by this 
> one.
> Running {{TestViewfsWithNfs3.testNfsRenameSingleNN}} twice would result in 
> the second run failing with a NullPointer exception:
> {noformat}
> [ERROR] Errors:
> [ERROR]   TestViewfsWithNfs3.testNfsRenameSingleNN:317 NullPointer
> {noformat}
> The reason for this is that the {{/user1/renameSingleNN}} file is created in 
> {{setup()}}, but gets renamed in {{testNfsRenameSingleNN}}. When the 
> second run of {{testNfsRenameSingleNN}} tries to get info of the file by its 
> original name, it hits a NullPointerException since the file no longer exists.
>  
> Link to PR: https://github.com/apache/hadoop/pull/2724
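
A minimal sketch of the idempotency fix the title describes, assuming JUnit 4 
and an "hdfs" FileSystem handle from the test setup (the actual change is in 
the linked PR):

```
@Test
public void testNfsRenameSingleNN() throws IOException {
  Path src = new Path("/user1/renameSingleNN");
  Path dst = new Path("/user1/renameSingleNN.moved");   // assumed target name
  try {
    // ... exercise the NFS3 RENAME under test, which moves src to dst ...
  } finally {
    // Rename the file back so a second run in the same JVM still finds it
    // under the name created by setup().
    if (hdfs.exists(dst)) {
      hdfs.rename(dst, src);
    }
  }
}
```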



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16192) ViewDistributedFileSystem#rename wrongly using src in the place of dst.

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16192?focusedWorklogId=644342&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644342
 ]

ASF GitHub Bot logged work on HDFS-16192:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:13
Start Date: 31/Aug/21 15:13
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #3353:
URL: https://github.com/apache/hadoop/pull/3353


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644342)
Time Spent: 2h 20m  (was: 2h 10m)

> ViewDistributedFileSystem#rename wrongly using src in the place of dst.
> ---
>
> Key: HDFS-16192
> URL: https://issues.apache.org/jira/browse/HDFS-16192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In ViewDistributedFileSystem, we mistakenly used the src path in the place of 
> the dst path when finding mount path info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=644271&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644271
 ]

ASF GitHub Bot logged work on HDFS-16198:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:05
Start Date: 31/Aug/21 15:05
Worklog Time Spent: 10m 
  Work Description: EungsopYoo opened a new pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359


   
   
   ### Description of PR
   Fix leakage of the short-circuit read Slot when an InvalidToken exception is thrown
   
   ### How was this patch tested?
   A new test case is added
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644271)
Time Spent: 20m  (was: 10m)

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. It doesn't matter 
> because the failed reads will be retried. But it causes the leakage of 
> ShortCircuitShm.Slot objects. We found this problem in our secure HBase 
> clusters.
>  !screenshot-2.png! 
> The fix is trivial. Just free the slot when InvalidToken exception is thrown.
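
A minimal sketch of the shape of that fix; "cache", "slot" and 
"requestFileDescriptors" are assumed names modeled on the short-circuit read 
path in BlockReaderFactory, not the exact patch:

```
try {
  // The DataNode rejects the request when the block access token has expired.
  requestFileDescriptors(peer, slot);
} catch (SecretManager.InvalidToken e) {
  if (slot != null) {
    // Free the shared-memory slot instead of leaking it; the failed read is
    // retried later with a fresh token.
    cache.freeSlot(slot);
    slot = null;
  }
  throw e;
}
```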



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15452) Dynamically initialize the capacity of BlocksMap

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15452?focusedWorklogId=644279&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644279
 ]

ASF GitHub Bot logged work on HDFS-15452:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:06
Start Date: 31/Aug/21 15:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2128:
URL: https://github.com/apache/hadoop/pull/2128#issuecomment-908618472


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 318 unchanged 
- 0 fixed = 319 total (was 318)  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 241m 32s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 326m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2128/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2128 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 5e275cd9ecc9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 00e33b0d553258e49e85e2cbae6b55e02b11d2b8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 

[jira] [Work logged] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?focusedWorklogId=644248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644248
 ]

ASF GitHub Bot logged work on HDFS-16197:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 15:03
Start Date: 31/Aug/21 15:03
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #3357:
URL: https://github.com/apache/hadoop/pull/3357


   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644248)
Time Spent: 50m  (was: 40m)

> Simplify getting NNStorage in FSNamesystem
> --
>
> Key: HDFS-16197
> URL: https://issues.apache.org/jira/browse/HDFS-16197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In FSNamesystem, there are many places where NNStorage needs to be used 
> (according to preliminary statistics, 15 times), and in these places it is 
> obtained using "getFSImage().getStorage()". We should try to use a 
> simpler way.
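
To make the proposal concrete, here is a minimal sketch of such an accessor, assuming FSNamesystem already exposes getFSImage(); the call site shown in the comment is hypothetical:

    // One accessor in FSNamesystem so call sites stop chaining the two getters.
    public NNStorage getNNStorage() {
      return getFSImage().getStorage();
    }

    // A call site written today as
    //   getFSImage().getStorage().someOperation();
    // would then read
    //   getNNStorage().someOperation();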



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644205&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644205
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:59
Start Date: 31/Aug/21 14:59
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909234801






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644205)
Time Spent: 3h 10m  (was: 3h)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.
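
To illustrate the locking pattern being applied, here is a simplified, self-contained sketch using a plain ReentrantReadWriteLock; the class and lookup method are invented for illustration, and the real datanode code goes through its own lock wrappers:

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class VolumeMapSketch {
      private final ReadWriteLock lock = new ReentrantReadWriteLock();

      // Before: a read-only lookup takes the exclusive lock and blocks everyone.
      Object getExclusive(String bpid, long blockId) {
        lock.writeLock().lock();
        try {
          return lookup(bpid, blockId);
        } finally {
          lock.writeLock().unlock();
        }
      }

      // After: the same lookup takes the shared lock, so readers proceed
      // concurrently while writers still get exclusive access.
      Object getShared(String bpid, long blockId) {
        lock.readLock().lock();
        try {
          return lookup(bpid, blockId);
        } finally {
          lock.readLock().unlock();
        }
      }

      private Object lookup(String bpid, long blockId) {
        return null; // placeholder for the real volume map lookup
      }
    }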



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407412#comment-17407412
 ] 

Hadoop QA commented on HDFS-16198:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
0s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
17s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
15s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
42s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 26s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 38m 
13s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  8m 
31s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
10s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  8m 10s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/704/artifact/out/diff-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
 | {color:red} hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 654 unchanged 
- 0 fixed = 655 total (was 654) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  7m 12s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/704/artifact/out/diff-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
 | {color:red} 
hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with 
JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 632 
unchanged - 0 fixed = 633 total (was 632) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 32s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/704/artifact/out/diff-checkstyle-hadoop-hdfs-project.txt{color}
 | {color:orange} 

[jira] [Work logged] (HDFS-16192) ViewDistributedFileSystem#rename wrongly using src in the place of dst.

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16192?focusedWorklogId=644161&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644161
 ]

ASF GitHub Bot logged work on HDFS-16192:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:54
Start Date: 31/Aug/21 14:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3353:
URL: https://github.com/apache/hadoop/pull/3353#issuecomment-908877023


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   8m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 43s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 342m 31s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 575m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3353/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3353 |
   | JIRA Issue | HDFS-16192 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 9332e4d92718 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d3b400a949b12eb6b873c571a0f7f20ca4dcaf41 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3353/4/testReport/ |
   | Max. process+thread 

[jira] [Work logged] (HDFS-16192) ViewDistributedFileSystem#rename wrongly using src in the place of dst.

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16192?focusedWorklogId=644133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644133
 ]

ASF GitHub Bot logged work on HDFS-16192:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:51
Start Date: 31/Aug/21 14:51
Worklog Time Spent: 10m 
  Work Description: umamaheswararao commented on pull request #3353:
URL: https://github.com/apache/hadoop/pull/3353#issuecomment-908904407


   Thanks a lot @jojochuang for the reviews!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644133)
Time Spent: 2h  (was: 1h 50m)

> ViewDistributedFileSystem#rename wrongly using src in the place of dst.
> ---
>
> Key: HDFS-16192
> URL: https://issues.apache.org/jira/browse/HDFS-16192
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In ViewDistributedFileSystem, we mistakenly used the src path in place of 
> the dst path when finding the mount path info.
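
Schematically, the bug class looks like the sketch below; every name in it is a hypothetical stand-in, not the actual ViewDistributedFileSystem code:

    import java.io.IOException;

    class RenameSketch {
      static class Path { final String p; Path(String p) { this.p = p; } }
      static class MountPathInfo { final Path resolved; MountPathInfo(Path p) { this.resolved = p; } }

      boolean rename(Path src, Path dst) throws IOException {
        MountPathInfo srcInfo = resolveMountPath(src);
        // Bug: the destination is resolved from src; the fix is to pass dst here.
        MountPathInfo dstInfo = resolveMountPath(src);
        return doRename(srcInfo, dstInfo);
      }

      private MountPathInfo resolveMountPath(Path p) { return new MountPathInfo(p); }
      private boolean doRename(MountPathInfo s, MountPathInfo d) { return true; }
    }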



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=644135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644135
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:51
Start Date: 31/Aug/21 14:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-908343565






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644135)
Time Spent: 2h  (was: 1h 50m)

> RBF: Router to support resolving monitored namenodes with DNS
> -
>
> Key: HDFS-16188
> URL: https://issues.apache.org/jira/browse/HDFS-16188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We can use a DNS round-robin record to configure the list of monitored namenodes, 
> so we don't have to reconfigure everything when a namenode hostname is changed. For 
> example, in a containerized environment the hostnames of namenodes/observers can 
> change pretty often.
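
For illustration, expanding a round-robin record into the individual namenode addresses only needs the standard java.net API; the record name below is made up:

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class DnsResolveSketch {
      public static void main(String[] args) throws UnknownHostException {
        // Hypothetical round-robin record with one A record per namenode.
        String record = "nn.cluster.example.com";
        for (InetAddress addr : InetAddress.getAllByName(record)) {
          // Each resolved host could then get its own heartbeat/monitoring target.
          System.out.println(addr.getHostAddress());
        }
      }
    }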



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644129
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:51
Start Date: 31/Aug/21 14:51
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909239985


   Thanks @amahussein. It is OK for me to squash and check them in together. 
Just found that many checks failed, as reported by Yetus here. Would you mind giving 
it another check?
   Considering that branch-3.2 will be released soon, I think it is time to push 
this PR into branch-3.2 before the release happens. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644129)
Time Spent: 3h  (was: 2h 50m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?focusedWorklogId=644095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644095
 ]

ASF GitHub Bot logged work on HDFS-16197:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:47
Start Date: 31/Aug/21 14:47
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3357:
URL: https://github.com/apache/hadoop/pull/3357#issuecomment-909085552


   @aajisaka, @virajjasani, could you help review the code?
   Thank you very much.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644095)
Time Spent: 40m  (was: 0.5h)

> Simplify getting NNStorage in FSNamesystem
> --
>
> Key: HDFS-16197
> URL: https://issues.apache.org/jira/browse/HDFS-16197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In FSNamesystem, there are many places where NNStorage needs to be used 
> (according to preliminary statistics, there are 15 times), and these places 
> are obtained using "getFSImage().getStorage()". We should try to use a 
> simpler way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=644080&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644080
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 14:45
Start Date: 31/Aug/21 14:45
Worklog Time Spent: 10m 
  Work Description: LeonGao91 commented on a change in pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#discussion_r698897540



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##
@@ -113,47 +116,91 @@ public NamenodeHeartbeatService(
 
   }
 
+  /**
+   * Create a new Namenode status updater.
+   *
+   * @param resolver Namenode resolver service to handle NN registration.
+   * @param nsId  Identifier of the nameservice.
+   * @param nnId  Identifier of the namenode in HA.
+   * @param resolvedHost  resolvedHostname for this specific namenode.
+   */
+  public NamenodeHeartbeatService(
+  ActiveNamenodeResolver resolver, String nsId, String nnId, String 
resolvedHost) {
+super(NamenodeHeartbeatService.class.getSimpleName() +

Review comment:
   Sure, will make a static method for this, as it is calling super().
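
For context, a static helper is the usual way out here because Java requires the super() call to be the first statement of a constructor, so a non-trivial argument has to be computed by a (typically static) helper; the names below are invented for illustration:

    class Base {
      Base(String name) { /* ... */ }
    }

    class Sub extends Base {
      Sub(String nsId, String nnId, String host) {
        // super() must come first, so the display name is built by a
        // static helper rather than by statements before the call.
        super(buildName(nsId, nnId, host));
      }

      private static String buildName(String nsId, String nnId, String host) {
        return Sub.class.getSimpleName() + " " + nsId + "-" + nnId + " " + host;
      }
    }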

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
   String nsId, String nnId) {
+this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf  HDFS configuration.
+   * @param nsId  nsId of this nn.
+   * @param nnId  nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr  Provided address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
   Yeah, this will look cleaner. I will need to remove the final keyword 
from some vars, as the assignment is outside the constructor.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/NNHAServiceTarget.java
##
@@ -56,9 +56,25 @@
   private final String nnId;
   private final String nsId;
   private final boolean autoFailoverEnabled;
-  
+
   public NNHAServiceTarget(Configuration conf,
   String nsId, String nnId) {
+this(conf, nsId, nnId, null, null, null);
+  }
+
+  /**
+   * Create a NNHAServiceTarget for a namenode.
+   *
+   * @param conf  HDFS configuration.
+   * @param nsId  nsId of this nn.
+   * @param nnId  nnId of this nn.
+   * @param serviceAddr   Provided service address.
+   * @param addr  Provided address.
+   * @param lifelineAddr  Provided lifeline address.
+   */
+  public NNHAServiceTarget(Configuration conf,

Review comment:
   Added a simple test per comment.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644080)
Time Spent: 1h 50m  (was: 1h 40m)

> RBF: Router to support resolving monitored namenodes with DNS
> -
>
> Key: HDFS-16188
> URL: https://issues.apache.org/jira/browse/HDFS-16188
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We can use a DNS round-robin record to configure list of monitored namenodes, 
> so we don't have to reconfigure everything namenode hostname is changed. For 
> example, in containerized environment the hostname of namenode/observers can 
> change pretty often.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15979) Move within EZ fails and cannot remove nested EZs

2021-08-31 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407375#comment-17407375
 ] 

Ahmed Hussein commented on HDFS-15979:
--

Hi [~weichiu], are you okay with the changes in this Jira, or do you still have 
any concerns?

> Move within EZ fails and cannot remove nested EZs
> -
>
> Key: HDFS-15979
> URL: https://issues.apache.org/jira/browse/HDFS-15979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15979.001.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Moving between EZ directories should work fine if the EZ key for the 
> directories is identical. If the key name is identical then no 
> decrypt/re-encrypt is necessary.
> However, the rename operation checks more than the key name. It compares the 
> inode number (unique identifier) of the source and dest dirs, which will never 
> be the same for 2 dirs, resulting in the cited failure. Note it also 
> incorrectly compares the key version.
> A related issue is that if an ancestor of an EZ shares the same key (i.e. 
> /projects/foo and /projects/foo/bar/blah both use the same key), files also 
> cannot be moved from the child to a parent dir, plus the child EZ cannot be 
> removed even though it's now covered by the ancestor.
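
A rough sketch of the check this argues for, comparing zone key names rather than inode identity or key versions; the types below are illustrative only, not the actual encryption-zone code:

    class EzRenameSketch {
      static class ZoneInfo {
        long inodeId;
        String keyName;
      }

      // Illustrative only: allow the move when both zones use the same key name.
      static boolean renameAllowed(ZoneInfo srcZone, ZoneInfo dstZone) {
        if (srcZone == null || dstZone == null) {
          return srcZone == dstZone; // only fine when neither side is in a zone
        }
        // Comparing srcZone.inodeId == dstZone.inodeId rejects any move between
        // distinct zone directories; comparing key versions is also too strict.
        return srcZone.keyName.equals(dstZone.keyName);
      }
    }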



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644038&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644038
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 13:34
Start Date: 31/Aug/21 13:34
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909242831


   > Thanks @amahussein. It is OK for me to squash and check them in together. 
Just found that many checks failed, as reported by Yetus here. Would you mind giving 
it another check?
   > Considering that branch-3.2 will be released soon, I think it is time to push 
this PR into branch-3.2 before the release happens.
   
   I will rebase the code changes and push a fresh version.
   P.S.: the final merge "_should not_" squash the commits. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644038)
Time Spent: 2h 50m  (was: 2h 40m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644037&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644037
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 13:30
Start Date: 31/Aug/21 13:30
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909239985


   Thanks @amahussein. It is OK for me to squash and check them in together. 
Just found that many checks failed, as reported by Yetus here. Would you mind giving 
it another check?
   Considering that branch-3.2 will be released soon, I think it is time to push 
this PR into branch-3.2 before the release happens. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644037)
Time Spent: 2h 40m  (was: 2.5h)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=644035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-644035
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 13:24
Start Date: 31/Aug/21 13:24
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909234801


   > Thanks @amahussein for your work. I triggered Yetus and it seems to work 
well. The failed unit test `TestDirectoryScanner` looks unrelated to the changes. 
And I verified this feature with a pseudo cluster; it also seems to work well.
   > Just noticed that this PR includes several cherry-picked commits together. I 
am not sure whether they are traced gracefully when checked in.
   > @brahmareddybattula do you have any idea here? If not I would like to give my 
+1.
   
   Thanks @Hexiaoqiao for the review and the feedback. I appreciate it.
   
   I intentionally cherry-picked the commits related to the feature so that the 
PR would be complete. Otherwise, if we did separate PRs for each commit, 
branch-3.2 would be broken.
   I believe it should be okay to have multiple cherry-picks as long as the 
final merge would not squash them into one.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 644035)
Time Spent: 2.5h  (was: 2h 20m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) there are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16198:
--
Labels: pull-request-available  (was: )

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. It doesn't matter, 
> because the failed reads will be retried. But it causes the leakage of 
> ShortCircuitShm.Slot objects. We found this problem in our secure HBase 
> clusters.
>  !screenshot-2.png! 
> The fix is trivial. Just free the slot when the InvalidToken exception is thrown.
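
A sketch of that fix with hypothetical stand-in types, since the real change sits in the HDFS client's short-circuit read path:

    class SlotLeakFixSketch {
      static class Slot {}
      static class InvalidToken extends Exception {}
      interface Shm { Slot allocSlot(); void freeSlot(Slot s); }

      Object read(Shm shm) throws InvalidToken {
        Slot slot = shm.allocSlot();
        try {
          return doShortCircuitRead(slot);
        } catch (InvalidToken e) {
          // The fix: release the slot before the exception propagates;
          // otherwise every expired-token retry leaks one Slot.
          shm.freeSlot(slot);
          throw e;
        }
      }

      private Object doShortCircuitRead(Slot slot) throws InvalidToken {
        return new Object(); // placeholder for the real read path
      }
    }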



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?focusedWorklogId=643966&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643966
 ]

ASF GitHub Bot logged work on HDFS-16198:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 11:07
Start Date: 31/Aug/21 11:07
Worklog Time Spent: 10m 
  Work Description: EungsopYoo opened a new pull request #3359:
URL: https://github.com/apache/hadoop/pull/3359


   
   
   ### Description of PR
   Fix leakage of short circuit read Slot when InvalidToken exception
   
   ### How was this patch tested?
   A new test case is added
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 643966)
Remaining Estimate: 0h
Time Spent: 10m

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. It doesn't matter, 
> because the failed reads will be retried. But it causes the leakage of 
> ShortCircuitShm.Slot objects. We found this problem in our secure HBase 
> clusters.
>  !screenshot-2.png! 
> The fix is trivial. Just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=643947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643947
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 10:25
Start Date: 31/Aug/21 10:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3200:
URL: https://github.com/apache/hadoop/pull/3200#issuecomment-909105949


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 31s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 31s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 39s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   5m  3s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  16m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 53s |  |  the patch passed  |
   | -1 :x: |  javac  |  14m 53s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/results-compile-javac-root.txt)
 |  root generated 1 new + 1576 unchanged - 1 fixed = 1577 total (was 1577)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  |  root: The patch generated 
0 new + 568 unchanged - 1 fixed = 568 total (was 569)  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   5m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  15m 45s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 208m 36s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 357m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3200 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 573c79676bd4 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 9516ddf5488d512bd55f539363b0befe575b893a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/testReport/ |
   | Max. process+thread count | 1948 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3200/5/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache 

[jira] [Work logged] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?focusedWorklogId=643940&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643940
 ]

ASF GitHub Bot logged work on HDFS-16197:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 09:58
Start Date: 31/Aug/21 09:58
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3357:
URL: https://github.com/apache/hadoop/pull/3357#issuecomment-909085552


   @aajisaka, @virajjasani, could you help review the code?
   Thank you very much.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 643940)
Time Spent: 0.5h  (was: 20m)

> Simplify getting NNStorage in FSNamesystem
> --
>
> Key: HDFS-16197
> URL: https://issues.apache.org/jira/browse/HDFS-16197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In FSNamesystem, there are many places where NNStorage needs to be used 
> (according to preliminary statistics, 15 times), and in these places it is 
> obtained using "getFSImage().getStorage()". We should try to use a 
> simpler way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-16197:

Priority: Major  (was: Minor)

> Simplify getting NNStorage in FSNamesystem
> --
>
> Key: HDFS-16197
> URL: https://issues.apache.org/jira/browse/HDFS-16197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In FSNamesystem, there are many places where NNStorage needs to be used 
> (according to preliminary statistics, 15 times), and in these places it is 
> obtained using "getFSImage().getStorage()". We should try to use a 
> simpler way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17407209#comment-17407209
 ] 

JiangHua Zhu commented on HDFS-16197:
-

Add a method to obtain NNStorage more concisely, for example:

public NNStorage getNNStorage() {
  return getFSImage().getStorage();
}

> Simplify getting NNStorage in FSNamesystem
> --
>
> Key: HDFS-16197
> URL: https://issues.apache.org/jira/browse/HDFS-16197
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In FSNamesystem, there are many places where NNStorage needs to be used 
> (according to preliminary statistics, 15 times), and in these places it is 
> obtained using "getFSImage().getStorage()". We should try to use a 
> simpler way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=643938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643938
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 09:52
Start Date: 31/Aug/21 09:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-909081414


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  12m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  7s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 3 new + 15 unchanged - 1 fixed = 
18 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   2m 36s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 332m 26s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  37m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 511m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSHAAdmin |
   |   | hadoop.hdfs.tools.TestDFSZKFailoverController |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 

[jira] [Work logged] (HDFS-16197) Simplify getting NNStorage in FSNamesystem

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16197?focusedWorklogId=643937=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643937
 ]

ASF GitHub Bot logged work on HDFS-16197:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 09:46
Start Date: 31/Aug/21 09:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3357:
URL: https://github.com/apache/hadoop/pull/3357#issuecomment-909077338


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 52s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 125 unchanged 
- 2 fixed = 127 total (was 127)  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 262m 18s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 366m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3357 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 76ee140ebf16 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6536e303023742f18b43b23a6820ee3a27784d00 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3357/1/testReport/ |
   | Max. process+thread count | 2721 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17407204#comment-17407204
 ] 

Viraj Jasani commented on HDFS-16198:
-

Nice one [~Eungsop Yoo], could you please also create a GitHub PR?

FYI [~weichiu]

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.
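
A minimal sketch of the shape of that fix, assuming the client-side path that 
requests file descriptors for a short circuit read; requestShortCircuitSlot, 
requestFileDescriptors and shmCache are hypothetical stand-ins, not the exact 
names in the patch:

ShortCircuitShm.Slot slot = null;
try {
  slot = requestShortCircuitSlot(datanode, blockId);   // hypothetical helper
  return requestFileDescriptors(peer, slot);           // may throw InvalidToken
} catch (SecretManager.InvalidToken e) {
  // The failed read is retried anyway, but the slot allocated above would
  // otherwise stay registered in the shared-memory segment and leak.
  if (slot != null) {
    shmCache.freeSlot(slot);                           // hypothetical cache handle
  }
  throw e;
}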



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Status: Patch Available  (was: Open)

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Attachment: HDFS-16198.patch

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: HDFS-16198.patch, screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Description: 
In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With this 
configuration, a SecretManager.InvalidToken exception may be thrown if the access 
token expires when we do short circuit reads. That by itself is harmless because 
the failed reads will be retried, but it leaks ShortCircuitShm.Slot objects. We 
found this problem in our secure HBase clusters.
 !screenshot-2.png! 

The fix is trivial: just free the slot when the InvalidToken exception is thrown.

  was:
In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With this 
configuration, a SecretManager.InvalidToken exception may be thrown if the access 
token expires when we do short circuit reads. That by itself is harmless because 
the failed reads will be retried, but it leaks ShortCircuitShm.Slot objects. We 
found this problem in our secure HBase clusters.
 !screenshot-1.png! 

The fix is trivial: just free the slot when the InvalidToken exception is thrown.


> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Attachment: (was: screenshot-1.png)

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-2.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Attachment: screenshot-2.png

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: screenshot-2.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-1.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Description: 
In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With this 
configuration, a SecretManager.InvalidToken exception may be thrown if the access 
token expires when we do short circuit reads. That by itself is harmless because 
the failed reads will be retried, but it leaks ShortCircuitShm.Slot objects. We 
found this problem in our secure HBase clusters.
 !screenshot-1.png! 

The fix is trivial: just free the slot when the InvalidToken exception is thrown.

  was:
In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With this 
configuration, a SecretManager.InvalidToken exception may be thrown if the access 
token expires when we do short circuit reads. That by itself is harmless because 
the failed reads will be retried, but it leaks ShortCircuitShm.Slot objects. We 
found this problem in our secure HBase clusters.

The fix is trivial: just free the slot when the InvalidToken exception is thrown.


> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: screenshot-1.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
>  !screenshot-1.png! 
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eungsop Yoo updated HDFS-16198:
---
Attachment: screenshot-1.png

> Short circuit read leaks Slot objects when InvalidToken exception is thrown
> ---
>
> Key: HDFS-16198
> URL: https://issues.apache.org/jira/browse/HDFS-16198
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eungsop Yoo
>Priority: Major
> Attachments: screenshot-1.png
>
>
> In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With 
> this configuration, a SecretManager.InvalidToken exception may be thrown if the 
> access token expires when we do short circuit reads. That by itself is harmless 
> because the failed reads will be retried, but it leaks ShortCircuitShm.Slot 
> objects. We found this problem in our secure HBase clusters.
> The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16198) Short circuit read leaks Slot objects when InvalidToken exception is thrown

2021-08-31 Thread Eungsop Yoo (Jira)
Eungsop Yoo created HDFS-16198:
--

 Summary: Short circuit read leaks Slot objects when InvalidToken 
exception is thrown
 Key: HDFS-16198
 URL: https://issues.apache.org/jira/browse/HDFS-16198
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eungsop Yoo


In secure mode, 'dfs.block.access.token.enable' should be set to 'true'. With this 
configuration, a SecretManager.InvalidToken exception may be thrown if the access 
token expires when we do short circuit reads. That by itself is harmless because 
the failed reads will be retried, but it leaks ShortCircuitShm.Slot objects. We 
found this problem in our secure HBase clusters.

The fix is trivial: just free the slot when the InvalidToken exception is thrown.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15862) Rename file back to original in TestViewfsWithNfs3.testNfsRenameSingleNN

2021-08-31 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17407140#comment-17407140
 ] 

Hadoop QA commented on HDFS-15862:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
12s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 30m 
14s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 32s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 22m 
10s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
21s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m  
2s{color} | {color:green}{color} | 

[jira] [Work logged] (HDFS-15862) Rename file back to original in TestViewfsWithNfs3.testNfsRenameSingleNN

2021-08-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15862?focusedWorklogId=643880=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-643880
 ]

ASF GitHub Bot logged work on HDFS-15862:
-

Author: ASF GitHub Bot
Created on: 31/Aug/21 06:36
Start Date: 31/Aug/21 06:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2724:
URL: https://github.com/apache/hadoop/pull/2724#issuecomment-908942305


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 58s |  |  hadoop-hdfs-nfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2724 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3441aa40f3c3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75d6bb6fe30634f905bab8bdadf1eb0e882d97a9 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/testReport/ |
   | Max. process+thread count | 636 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-nfs U: 
hadoop-hdfs-project/hadoop-hdfs-nfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2724/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |