[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apache.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=797096&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797096
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 02:34
Start Date: 02/Aug/22 02:34
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4670:
URL: https://github.com/apache/hadoop/pull/4670#issuecomment-1201946584

   > We need to pay attention to checkstyle, and then see if the junit test 
problem is related to this change. Thank you.
   
   @slfan1989 Copy, sir, I will fix them later.




Issue Time Tracking
---

Worklog Id: (was: 797096)
Time Spent: 1h  (was: 50m)

> Remove redundant throw exceptions in org.apache.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While reading some of the HDFS NameNode classes, I found many redundant 
> throws clauses in the org.apache.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, declaring them is redundant; we can remove them to make the code 
> clearer, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  
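As a quick illustration of why the narrower declarations are removable, here is a standalone sketch with hypothetical names; java.nio's AccessDeniedException stands in for Hadoop's AccessControlException, since both extend IOException:

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Standalone sketch (hypothetical names). AccessDeniedException stands in
// for Hadoop's AccessControlException; both extend IOException.
class ThrowsDemo {

    // Declares only IOException, yet may throw a subclass of it.
    static void transition(boolean denied) throws IOException {
        if (denied) {
            throw new AccessDeniedException("not allowed");
        }
    }

    // Callers can still catch the specific subclass even though the
    // throws clause names only the superclass.
    static String callerView() {
        try {
            transition(true);
            return "no exception";
        } catch (AccessDeniedException e) {
            return "caught subclass";
        } catch (IOException e) {
            return "caught superclass";
        }
    }
}
```

So trimming the throws clause changes nothing for callers: any catch of the subclass still matches.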



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16704) Datanode returns an empty response instead of NPE for getVolumeInfo during restart

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16704?focusedWorklogId=797095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797095
 ]

ASF GitHub Bot logged work on HDFS-16704:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 02:31
Start Date: 02/Aug/22 02:31
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4661:
URL: https://github.com/apache/hadoop/pull/4661#issuecomment-1201945547

   @ayushtkn Thank you very much for helping me review it.
   
   Yes, you are right, we should return an empty value in this case. How about 
changing it to match `getDiskBalancerStatus`, for example:
   ```
   @Override // DataNodeMXBean
   public String getVolumeInfo() {
     if (data == null) {
       LOG.debug("Storage not yet initialized.");
       return ""; // note: different from JSON.toString(new HashMap())
     }
     return JSON.toString(data.getVolumeInfoMap());
   }
   ```
   
   Developers and SREs are very sensitive to NPEs, so I feel we shouldn't output 
one in this expected situation.
   
   > Extra Stuff: This log line just below is wrong, incorrect placeholder, can 
fix in a separate jira if interested. :-)
   Copy, sir. I will fix it in a separate Jira.




Issue Time Tracking
---

Worklog Id: (was: 797095)
Time Spent: 0.5h  (was: 20m)

> Datanode returns an empty response instead of NPE for getVolumeInfo during 
> restart
> -
>
> Key: HDFS-16704
> URL: https://issues.apache.org/jira/browse/HDFS-16704
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> While the datanode was starting, I found some NPEs in the logs:
> {code:java}
> Caused by: java.lang.NullPointerException: Storage not yet initialized
>     at 
> org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899)
>     at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:3533)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:72)
>     at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:276)
>     at 
> com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
>     at 
> com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
>  {code}
> This happens because the datanode's storage is not yet initialized when we try 
> to fetch datanode metrics; the related code is below:
> {code:java}
> @Override // DataNodeMXBean
> public String getVolumeInfo() {
>   Preconditions.checkNotNull(data, "Storage not yet initialized");
>   return JSON.toString(data.getVolumeInfoMap());
> } {code}
> The check itself is fine, but it would be more reasonable to return an empty 
> response instead of an NPE, because the InfoServer is started before 
> initBlockPool.
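A minimal sketch of the proposed null guard: a plain Map stands in for the DataNode's FsDatasetSpi, and plain string rendering stands in for JSON.toString(volumeInfoMap); the names here are illustrative only.

```java
import java.util.Map;

// Minimal sketch of the proposed null guard. `data` and the string
// rendering are simplified stand-ins for the DataNode's FsDatasetSpi
// and JSON.toString(volumeInfoMap); names here are illustrative only.
class VolumeInfoDemo {
    private Map<String, Object> data; // null until storage is initialized

    String getVolumeInfo() {
        if (data == null) {
            // Storage not yet initialized: report empty instead of throwing NPE.
            return "";
        }
        return data.toString();
    }
}
```

With the guard in place, a JMX query that races the storage initialization gets an empty string rather than a stack trace.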






[jira] [Work logged] (HDFS-16705) RBF: Support healthMonitor timeout configurable and cache NN and client proxy in NamenodeHeartbeatService

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16705?focusedWorklogId=797092&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797092
 ]

ASF GitHub Bot logged work on HDFS-16705:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 02:17
Start Date: 02/Aug/22 02:17
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on code in PR #4662:
URL: https://github.com/apache/hadoop/pull/4662#discussion_r935053891


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java:
##
@@ -96,6 +96,10 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   FEDERATION_ROUTER_PREFIX + "heartbeat.interval";
   public static final long DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT =
   TimeUnit.SECONDS.toMillis(5);
+  public static final String DFS_ROUTER_HEALTH_MONITOR_TIMEOUT_MS =

Review Comment:
   Copy, sir, I will fix it later.





Issue Time Tracking
---

Worklog Id: (was: 797092)
Time Spent: 3h 40m  (was: 3.5h)

> RBF: Support healthMonitor timeout configurable and cache NN and client proxy 
> in NamenodeHeartbeatService
> -
>
> Key: HDFS-16705
> URL: https://issues.apache.org/jira/browse/HDFS-16705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> While reading the RBF NamenodeHeartbeatService class, I found a few things we 
> can improve in NamenodeHeartbeatService:
>  * Cache the NameNode Protocol and Client Protocol proxies to avoid creating a 
> new proxy every time
>  * Support a configurable healthMonitorTimeout
>  * Reformat getNamenodeStatusReport to make it clearer






[jira] [Work logged] (HDFS-16705) RBF: Support healthMonitor timeout configurable and cache NN and client proxy in NamenodeHeartbeatService

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16705?focusedWorklogId=797090&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797090
 ]

ASF GitHub Bot logged work on HDFS-16705:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 01:52
Start Date: 02/Aug/22 01:52
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4662:
URL: https://github.com/apache/hadoop/pull/4662#discussion_r935044257


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RBFConfigKeys.java:
##
@@ -96,6 +96,10 @@ public class RBFConfigKeys extends 
CommonConfigurationKeysPublic {
   FEDERATION_ROUTER_PREFIX + "heartbeat.interval";
   public static final long DFS_ROUTER_HEARTBEAT_INTERVAL_MS_DEFAULT =
   TimeUnit.SECONDS.toMillis(5);
+  public static final String DFS_ROUTER_HEALTH_MONITOR_TIMEOUT_MS =

Review Comment:
   Let's do the getTimeDuration and cast it at the end.
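The suggestion above can be sketched as follows. `readTimeoutMs` is a hypothetical stand-in for `Configuration.getTimeDuration(key, default, TimeUnit.MILLISECONDS)` (the real getter also parses unit suffixes such as "30s"); the point is to read the value as a long in milliseconds and cast to the narrower type once, at the end:

```java
import java.util.concurrent.TimeUnit;

// Sketch of the suggestion: read the timeout in milliseconds and cast once
// at the end. `readTimeoutMs` is a hypothetical stand-in for
// Configuration.getTimeDuration(key, default, TimeUnit.MILLISECONDS);
// the real getter also parses unit suffixes such as "30s".
class TimeoutDemo {

    static long readTimeoutMs(String configured, long defaultMs) {
        return configured == null ? defaultMs : Long.parseLong(configured);
    }

    static int healthMonitorTimeout(String configured) {
        long ms = readTimeoutMs(configured, TimeUnit.SECONDS.toMillis(30));
        return (int) ms; // cast to the narrower type at the end
    }
}
```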





Issue Time Tracking
---

Worklog Id: (was: 797090)
Time Spent: 3.5h  (was: 3h 20m)

> RBF: Support healthMonitor timeout configurable and cache NN and client proxy 
> in NamenodeHeartbeatService
> -
>
> Key: HDFS-16705
> URL: https://issues.apache.org/jira/browse/HDFS-16705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> While reading the RBF NamenodeHeartbeatService class, I found a few things we 
> can improve in NamenodeHeartbeatService:
>  * Cache the NameNode Protocol and Client Protocol proxies to avoid creating a 
> new proxy every time
>  * Support a configurable healthMonitorTimeout
>  * Reformat getNamenodeStatusReport to make it clearer






[jira] [Work logged] (HDFS-16699) RBF: Router updates Observer NameNode state to Active on failover caused by SocketTimeoutException

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16699?focusedWorklogId=797089&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797089
 ]

ASF GitHub Bot logged work on HDFS-16699:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 01:49
Start Date: 02/Aug/22 01:49
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4663:
URL: https://github.com/apache/hadoop/pull/4663#discussion_r935043281


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -483,10 +483,11 @@ public Object invokeMethod(
 final Object proxy = client.getProxy();
 
 ret = invoke(nsId, 0, method, proxy, params);
-if (failover) {
+if (failover && 
namenode.getState().equals(FederationNamenodeServiceState.STANDBY)) {

Review Comment:
   ```
   FederationNamenodeServiceState.STANDBY.equals(namenode.getState())
   ```
   Is usually safer.
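The reviewer's point can be shown with a tiny sketch (the enum stands in for FederationNamenodeServiceState): constant-first equals is null-safe, while variable-first throws an NPE when the variable is null.

```java
// Tiny sketch; the enum stands in for FederationNamenodeServiceState.
// Constant-first equals is null-safe, variable-first is not.
class EqualsDemo {
    enum State { ACTIVE, STANDBY }

    static boolean isStandbySafe(State s) {
        return State.STANDBY.equals(s); // returns false when s is null
    }

    static boolean isStandbyUnsafe(State s) {
        return s.equals(State.STANDBY); // throws NPE when s is null
    }
}
```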



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -483,10 +483,11 @@ public Object invokeMethod(
 final Object proxy = client.getProxy();
 
 ret = invoke(nsId, 0, method, proxy, params);
-if (failover) {
+if (failover && 
namenode.getState().equals(FederationNamenodeServiceState.STANDBY)) {
   // Success on alternate server, update
   InetSocketAddress address = client.getAddress();
   namenodeResolver.updateActiveNamenode(nsId, address);
+  LOG.info("Update ActiveNameNode,nsId = {},rpcAddress = {}.", nsId, 
rpcAddress);

Review Comment:
   Fix spaces in the log





Issue Time Tracking
---

Worklog Id: (was: 797089)
Time Spent: 1h 10m  (was: 1h)

> RBF: Router updates Observer NameNode state to Active on failover caused by 
> SocketTimeoutException
> --
>
> Key: HDFS-16699
> URL: https://issues.apache.org/jira/browse/HDFS-16699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: ShuangQi Xia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We found that the router keeps printing logs indicating that an Observer 
> NameNode's state changed to Active. Here's the log:
> 2022-03-18 11:00:54,589 | INFO  | NamenodeHeartbeatService hacluster 11342-0 
> | NN registration state has changed: 
> test101:25019->hacluster:11342:test103:25000-ACTIVE -> 
> test102:25019->hacluster:11342::test103:25000-OBSERVER | 
> MembershipStoreImpl.java:170
> In the code, I found that when a router request fails for some reason, such as 
> a SocketTimeoutException, and fails over to an Observer NameNode, the router 
> updates that NameNode's state to Active:
> {code:java}
> if (failover) {
>   // Success on alternate server, update
>   InetSocketAddress address = client.getAddress();
>   namenodeResolver.updateActiveNamenode(nsId, address);
> }
> {code}
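The fix proposed in PR #4663 can be reduced to a condition check: promote the alternate namenode to ACTIVE only when its last known state was STANDBY, so a timeout-driven failover to an OBSERVER no longer relabels it. The enum and method below are simplified stand-ins for the RBF classes.

```java
// Condensed sketch of the fixed condition: promote the alternate namenode
// to ACTIVE only when its last known state was STANDBY, so a timeout-driven
// failover to an OBSERVER no longer relabels it.
// Enum and method are simplified stand-ins for the RBF classes.
class FailoverDemo {
    enum State { ACTIVE, STANDBY, OBSERVER }

    static boolean shouldUpdateActive(boolean failover, State lastKnown) {
        return failover && State.STANDBY.equals(lastKnown);
    }
}
```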






[jira] [Work logged] (HDFS-16705) RBF: Support healthMonitor timeout configurable and cache NN and client proxy in NamenodeHeartbeatService

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16705?focusedWorklogId=797087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797087
 ]

ASF GitHub Bot logged work on HDFS-16705:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 01:41
Start Date: 02/Aug/22 01:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4662:
URL: https://github.com/apache/hadoop/pull/4662#issuecomment-1201918556

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  31m 13s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 136m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4662/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4662 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux e7df68ce33e0 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b4e60dc7debe5070186e2e813ed4b6cc528e8cb3 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4662/3/testReport/ |
   | Max. process+thread count | 2523 (vs. ulimit of 

[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apache.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=797070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797070
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 00:41
Start Date: 02/Aug/22 00:41
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4670:
URL: https://github.com/apache/hadoop/pull/4670#issuecomment-1201880094

   @ZanderXu We need to pay attention to checkstyle, and then see if the junit 
test problem is related to this change. Thank you.




Issue Time Tracking
---

Worklog Id: (was: 797070)
Time Spent: 50m  (was: 40m)

> Remove redundant throw exceptions in org.apache.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While reading some of the HDFS NameNode classes, I found many redundant 
> throws clauses in the org.apache.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, declaring them is redundant; we can remove them to make the code 
> clearer, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  






[jira] [Work logged] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16702?focusedWorklogId=797069&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797069
 ]

ASF GitHub Bot logged work on HDFS-16702:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 00:41
Start Date: 02/Aug/22 00:41
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request, #4671:
URL: https://github.com/apache/hadoop/pull/4671

   ### Description of PR
   - MiniDFSCluster should report cause of exception in assertion error
   - Improve message of ExitException to include cause
   




Issue Time Tracking
---

Worklog Id: (was: 797069)
Remaining Estimate: 0h
Time Spent: 10m

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Viraj Jasani
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause of the AssertionError that it throws. The 
> current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace pointing to the location of the 
> check for an exit exception.
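One way to attach the cause, as the description suggests, is the two-argument AssertionError constructor available since Java 7. This is a minimal sketch, not the actual MiniDFSCluster code:

```java
// Minimal sketch, not the actual MiniDFSCluster code: the two-argument
// AssertionError constructor (Java 7+) attaches the exit exception as the
// cause, so test reports show the real failure, not just the check site.
class AssertCauseDemo {
    static AssertionError wrap(Throwable exitCause) {
        return new AssertionError("Test resulted in an unexpected exit", exitCause);
    }
}
```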






[jira] [Updated] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16702:
--
Labels: pull-request-available  (was: )

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause of the AssertionError that it throws. The 
> current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace pointing to the location of the 
> check for an exit exception.






[jira] [Work logged] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?focusedWorklogId=797067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797067
 ]

ASF GitHub Bot logged work on HDFS-16709:
-

Author: ASF GitHub Bot
Created on: 02/Aug/22 00:40
Start Date: 02/Aug/22 00:40
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4667:
URL: https://github.com/apache/hadoop/pull/4667#issuecomment-1201877435

   @ZanderXu 
   
   > I think we need it. Because after removing it, IDEA warns _Unchecked 
cast: `org.apache.hadoop.hdfs.server.namenode.FSEditLogOp` to T._
   
   Thanks for the explanation, I have understood your changes.
   LGTM.




Issue Time Tracking
---

Worklog Id: (was: 797067)
Time Spent: 50m  (was: 40m)

> Remove redundant cast in FSEditLogOp.class
> --
>
> Key: HDFS-16709
> URL: https://issues.apache.org/jira/browse/HDFS-16709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While reading some of the NameNode edit-log classes, I found many redundant 
> casts in FSEditLogOp.class that we should remove.
> Such as:
> {code:java}
> static UpdateBlocksOp getInstance(OpInstanceCache cache) {
>   return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
> } {code}
> Because cache.get() already casts the result to T, such as:
> @SuppressWarnings("unchecked")
> public  T get(FSEditLogOpCodes opCode) {
>   return useCache ? (T)CACHE.get().get(opCode) : (T)newInstance(opCode);
> } {code}
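A self-contained sketch of why the call-site cast is removable, using simplified stand-ins for FSEditLogOp's OpInstanceCache: the generic `get()` already returns `T`, inferred from the call site's target type, so the explicit `(UpdateBlocksOp)` cast adds nothing.

```java
import java.util.EnumMap;
import java.util.Map;

// Simplified stand-ins for FSEditLogOp's OpInstanceCache: the generic get()
// already returns T (inferred from the call site's target type), so the
// explicit (UpdateBlocksOp) cast at the call site is redundant.
class CacheDemo {
    enum OpCode { OP_UPDATE_BLOCKS }

    static class Op {}
    static class UpdateBlocksOp extends Op {}

    static final Map<OpCode, Op> CACHE = new EnumMap<>(OpCode.class);
    static {
        CACHE.put(OpCode.OP_UPDATE_BLOCKS, new UpdateBlocksOp());
    }

    @SuppressWarnings("unchecked")
    static <T extends Op> T get(OpCode code) {
        return (T) CACHE.get(code);
    }

    static UpdateBlocksOp getInstance() {
        return get(OpCode.OP_UPDATE_BLOCKS); // no explicit cast needed
    }
}
```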






[jira] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread Viraj Jasani (Jira)


[ https://issues.apache.org/jira/browse/HDFS-16702 ]


Viraj Jasani deleted comment on HDFS-16702:
-

was (Author: vjasani):
In fact, we can make a generic change to ExitException so that its instances 
always print the cause of the ExitException.

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Viraj Jasani
>Priority: Minor
>
> When MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause of the AssertionError that it throws. The 
> current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace pointing to the location of the 
> check for an exit exception.






[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=797062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797062
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:42
Start Date: 01/Aug/22 23:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4670:
URL: https://github.com/apache/hadoop/pull/4670#issuecomment-1201845862

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 50s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4670/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 7 new + 360 unchanged 
- 3 fixed = 367 total (was 363)  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 345m 45s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4670/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 466m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestCheckpoint |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4670/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4670 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 52ff54f33b77 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision 

[jira] [Resolved] (HDFS-16670) Improve Code With Lambda in EditLogTailer class

2022-08-01 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu resolved HDFS-16670.
-
Resolution: Duplicate

> Improve Code With Lambda in EditLogTailer class
> ---
>
> Key: HDFS-16670
> URL: https://issues.apache.org/jira/browse/HDFS-16670
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Improve Code With Lambda in EditLogTailer class



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16670) Improve Code With Lambda in EditLogTailer class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16670?focusedWorklogId=797059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797059
 ]

ASF GitHub Bot logged work on HDFS-16670:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:36
Start Date: 01/Aug/22 23:36
Worklog Time Spent: 10m 
  Work Description: ZanderXu closed pull request #4596: HDFS-16670. Improve 
Code With Lambda in EditLogTailer class
URL: https://github.com/apache/hadoop/pull/4596




Issue Time Tracking
---

Worklog Id: (was: 797059)
Time Spent: 1h 40m  (was: 1.5h)

> Improve Code With Lambda in EditLogTailer class
> ---
>
> Key: HDFS-16670
> URL: https://issues.apache.org/jira/browse/HDFS-16670
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Improve Code With Lambda in EditLogTailer class






[jira] [Work logged] (HDFS-16670) Improve Code With Lambda in EditLogTailer class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16670?focusedWorklogId=797058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797058
 ]

ASF GitHub Bot logged work on HDFS-16670:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:36
Start Date: 01/Aug/22 23:36
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4596:
URL: https://github.com/apache/hadoop/pull/4596#issuecomment-1201842552

   Thanks @slfan1989 for your review and nice suggestion. I will close this PR 
and address it in HDFS-16695.




Issue Time Tracking
---

Worklog Id: (was: 797058)
Time Spent: 1.5h  (was: 1h 20m)

> Improve Code With Lambda in EditLogTailer class
> ---
>
> Key: HDFS-16670
> URL: https://issues.apache.org/jira/browse/HDFS-16670
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Improve Code With Lambda in EditLogTailer class






[jira] [Work logged] (HDFS-16692) Add detailed scope info in NotEnoughReplicas Reason logs.

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16692?focusedWorklogId=797057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797057
 ]

ASF GitHub Bot logged work on HDFS-16692:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:34
Start Date: 01/Aug/22 23:34
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4630:
URL: https://github.com/apache/hadoop/pull/4630#issuecomment-1201841527

   @goiri Sir, can you help me review this patch? Thanks
   
   Adding detailed scope information to the log will help us locate the root 
cause easily.




Issue Time Tracking
---

Worklog Id: (was: 797057)
Time Spent: 1h 20m  (was: 1h 10m)

> Add detailed scope info in NotEnoughReplicas Reason logs.
> -
>
> Key: HDFS-16692
> URL: https://issues.apache.org/jira/browse/HDFS-16692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When we write some EC data from clients that are not in the HDFS cluster, there 
> is a lot of INFO log output, as below:
> {code:shell}
> 2022-07-26 15:50:40,973 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=17}
> 2022-07-26 15:50:40,974 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=18}
> 2022-07-26 15:50:40,974 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=17}
> 2022-07-26 15:50:40,975 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=18}
> 2022-07-26 15:50:40,975 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=17}
> 2022-07-26 15:50:40,976 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=18}
> 2022-07-26 15:50:40,976 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=18}
> 2022-07-26 15:50:40,977 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=18}
> 2022-07-26 15:50:40,977 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=19}
> 2022-07-26 15:50:40,977 INFO  blockmanagement.BlockPlacementPolicy 
> (BlockPlacementPolicyDefault.java:chooseRandom(912)) - Not enough replicas 
> was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1, TOO_MANY_NODES_ON_RACK=3}
> {code}
> I feel that we should add detailed scope info to this log to show the scope 
> from which we cannot select any good nodes.
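The improvement above can be sketched with a self-contained snippet. The reason map mirrors the INFO lines quoted in the description; the scope value and the exact message wording are hypothetical, not the actual BlockPlacementPolicyDefault change:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the proposed log improvement: the same reason map as in the
// INFO lines above, plus the scope that node selection failed in.
public class ScopeLogDemo {
    public static void main(String[] args) {
        Map<String, Integer> reasons = new LinkedHashMap<>();
        reasons.put("NO_REQUIRED_STORAGE_TYPE", 1);
        reasons.put("TOO_MANY_NODES_ON_RACK", 17);

        String scope = "/default-rack";  // hypothetical scope value

        // Current message: no hint of where the selection failed.
        System.out.println("Not enough replicas was chosen. Reason: " + reasons);
        // Proposed message: the scope pinpoints the rack / node group.
        System.out.println("Failed to choose from scope " + scope
            + ". Reason: " + reasons);
    }
}
```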






[jira] [Work logged] (HDFS-16703) Enable RPC Timeout for some protocols of NameNode.

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16703?focusedWorklogId=797055&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797055
 ]

ASF GitHub Bot logged work on HDFS-16703:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:30
Start Date: 01/Aug/22 23:30
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4660:
URL: https://github.com/apache/hadoop/pull/4660#issuecomment-1201838947

   @goiri Sir, can you help me review this patch?  Thanks
   In our prod environment, we encountered the RBF NamenodeHeartbeatService 
thread being blocked for a long time because one of the NameNode machines 
crashed. So I think this PR will also have an effect on RBF.




Issue Time Tracking
---

Worklog Id: (was: 797055)
Time Spent: 1h 20m  (was: 1h 10m)

> Enable RPC Timeout for some protocols of NameNode.
> --
>
> Key: HDFS-16703
> URL: https://issues.apache.org/jira/browse/HDFS-16703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When I read some code about the protocols, I found that only the 
> ClientNamenodeProtocolPB proxy is created with an RPC timeout; the other 
> protocolPB proxies are not, such as RefreshAuthorizationPolicyProtocolPB, 
> RefreshUserMappingsProtocolPB, RefreshCallQueueProtocolPB, 
> GetUserMappingsProtocolPB and NamenodeProtocolPB.
>  
> A proxy without an RPC timeout can be blocked for a long time if the NN 
> machine crashes or the network degrades while writing to or reading from the NN. 
>  
> So I feel that we should enable RPC timeout for all ProtocolPBs.
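The failure mode can be illustrated with a self-contained sketch using plain java.util.concurrent rather than the Hadoop RPC classes: a bounded wait stands in for what an rpcTimeout on the proxy would provide.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RpcTimeoutSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Simulates an RPC to a NameNode that never answers.
        Future<String> hungCall = pool.submit(() -> {
            Thread.sleep(60_000);
            return "blockReport";
        });
        try {
            // Bounded wait: the caller is unblocked after 100 ms instead of
            // hanging indefinitely, which is the behavior an rpcTimeout
            // gives an RPC proxy.
            hungCall.get(100, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("call timed out, caller can fail over");
        }
        pool.shutdownNow();
    }
}
```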






[jira] [Commented] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573963#comment-17573963
 ] 

Viraj Jasani commented on HDFS-16702:
-

In fact, we can make a generic change to ExitException so that its object 
always prints the cause for the ExitException.

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Viraj Jasani
>Priority: Minor
>
> When the MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause for the AssertionError that it throws.  
> The current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace to the location of the check for 
> an exit exception.
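A minimal self-contained sketch of the proposed fix, with class and method names that are illustrative rather than the actual MiniDFSCluster code: attach the triggering exception as the cause of the AssertionError.

```java
public class ExitCauseSketch {
    // Stand-in for the ExitException captured when a daemon calls exit.
    static class ExitException extends RuntimeException {
        ExitException(String msg) { super(msg); }
    }

    static void checkExitException(ExitException firstExit) {
        if (firstExit != null) {
            // Before: new AssertionError("Test resulted in an unexpected exit")
            // loses the trigger. After: pass it as the cause so the stack
            // trace shows what actually caused the exit.
            throw new AssertionError("Test resulted in an unexpected exit",
                firstExit);
        }
    }

    public static void main(String[] args) {
        try {
            checkExitException(new ExitException("NameNode exited with status 1"));
        } catch (AssertionError e) {
            System.out.println(e.getCause().getMessage());
        }
    }
}
```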






[jira] [Work logged] (HDFS-16705) RBF: Support healthMonitor timeout configurable and cache NN and client proxy in NamenodeHeartbeatService

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16705?focusedWorklogId=797054&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797054
 ]

ASF GitHub Bot logged work on HDFS-16705:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:24
Start Date: 01/Aug/22 23:24
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4662:
URL: https://github.com/apache/hadoop/pull/4662#issuecomment-1201834463

   @goiri @slfan1989 Sir, please help me review this patch, thanks.




Issue Time Tracking
---

Worklog Id: (was: 797054)
Time Spent: 3h 10m  (was: 3h)

> RBF: Support healthMonitor timeout configurable and cache NN and client proxy 
> in NamenodeHeartbeatService
> -
>
> Key: HDFS-16705
> URL: https://issues.apache.org/jira/browse/HDFS-16705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When I read NamenodeHeartbeatService.class of RBF, I felt there are some 
> things we can do for it:
>  * Cache the NameNode Protocol and Client Protocol proxies to avoid creating 
> a new proxy every time
>  * Support a healthMonitorTimeout configuration
>  * Format the code of getNamenodeStatusReport to make it clearer






[jira] [Commented] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573962#comment-17573962
 ] 

Viraj Jasani commented on HDFS-16702:
-

I did encounter this some time back and had a similar thought, but somehow missed 
creating a Jira. Let me take this up?

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Priority: Minor
>
> When the MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause for the AssertionError that it throws.  
> The current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace to the location of the check for 
> an exit exception.






[jira] [Assigned] (HDFS-16702) MiniDFSCluster should report cause of exception in assertion error

2022-08-01 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HDFS-16702:
---

Assignee: Viraj Jasani

> MiniDFSCluster should report cause of exception in assertion error
> --
>
> Key: HDFS-16702
> URL: https://issues.apache.org/jira/browse/HDFS-16702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
> Environment: Tests running in the Hadoop dev environment image.
>Reporter: Steve Vaughan
>Assignee: Viraj Jasani
>Priority: Minor
>
> When the MiniDFSCluster detects that an exception caused an exit, it should 
> include that exception as the cause for the AssertionError that it throws.  
> The current AssertionError simply reports the message "Test resulted in an 
> unexpected exit" and provides a stack trace to the location of the check for 
> an exit exception.






[jira] [Work logged] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?focusedWorklogId=797053&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797053
 ]

ASF GitHub Bot logged work on HDFS-16709:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:18
Start Date: 01/Aug/22 23:18
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4667:
URL: https://github.com/apache/hadoop/pull/4667#issuecomment-1201831182

   Thanks @slfan1989 for your review.
   
   > Can the unchecked flag be removed?
   
   I think we need it: after removing it, IDEA warns _Unchecked 
cast: `org.apache.hadoop.hdfs.server.namenode.FSEditLogOp` to T_.
   
   




Issue Time Tracking
---

Worklog Id: (was: 797053)
Time Spent: 40m  (was: 0.5h)

> Remove redundant cast in FSEditLogOp.class
> --
>
> Key: HDFS-16709
> URL: https://issues.apache.org/jira/browse/HDFS-16709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When I read some classes about NameNode edits, I found that there are many 
> redundant casts in FSEditLogOp.class, and I feel that we should remove them.
> Such as:
> {code:java}
> static UpdateBlocksOp getInstance(OpInstanceCache cache) {
>   return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
> } {code}
> Because cache.get() has already cast the result to T:
> {code:java}
> @SuppressWarnings("unchecked")
> public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
>   return useCache ? (T)CACHE.get().get(opCode) : (T)newInstance(opCode);
> } {code}
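A self-contained miniature of the pattern, using simplified stand-ins for FSEditLogOp and OpInstanceCache rather than the real classes, shows why target typing makes the call-site cast redundant:

```java
import java.util.EnumMap;
import java.util.Map;

// Minimal sketch: the generic get() already performs the (unchecked) cast
// to T, so a second cast at the call site is redundant.
public class CastDemo {
    enum OpCode { OP_UPDATE_BLOCKS }

    static class Op {}
    static class UpdateBlocksOp extends Op {}

    static class OpInstanceCache {
        private final Map<OpCode, Op> cache = new EnumMap<>(OpCode.class);
        OpInstanceCache() { cache.put(OpCode.OP_UPDATE_BLOCKS, new UpdateBlocksOp()); }

        @SuppressWarnings("unchecked")
        <T extends Op> T get(OpCode code) {
            // The unchecked cast to T happens here, once.
            return (T) cache.get(code);
        }
    }

    // After the cleanup: no "(UpdateBlocksOp)" cast at the call site;
    // the compiler infers T = UpdateBlocksOp from the return type.
    static UpdateBlocksOp getInstance(OpInstanceCache cache) {
        return cache.get(OpCode.OP_UPDATE_BLOCKS);
    }

    public static void main(String[] args) {
        UpdateBlocksOp op = getInstance(new OpInstanceCache());
        System.out.println(op.getClass().getSimpleName());
    }
}
```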






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797051&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797051
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 23:12
Start Date: 01/Aug/22 23:12
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201827298

   Thanks @goiri @slfan1989 for your review.
   
   - First, I will remove some code that has nothing to do with lambdas
   - Second, I will split this PR at the subPackage Level, such as: 
`org.apache.hadoop.hdfs.server.namenode`, 
`org.apache.hadoop.hdfs.server.namenode.ha`




Issue Time Tracking
---

Worklog Id: (was: 797051)
Time Spent: 1.5h  (was: 1h 20m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}
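The same transformation in a self-contained, runnable form, using a plain ExecutorService instead of the Guava ListenableFuture from the snippet above:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LambdaRefactorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService parallelExecutor = Executors.newSingleThreadExecutor();

        // Current logic: anonymous Callable implementation.
        Future<String> before = parallelExecutor.submit(new Callable<String>() {
            @Override
            public String call() {
                return "edits";
            }
        });

        // Improved: a lambda with identical semantics, since Callable is a
        // functional interface.
        Future<String> after = parallelExecutor.submit(() -> "edits");

        System.out.println(before.get() + " " + after.get());
        parallelExecutor.shutdown();
    }
}
```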






[jira] [Work logged] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?focusedWorklogId=797046&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797046
 ]

ASF GitHub Bot logged work on HDFS-16709:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 22:07
Start Date: 01/Aug/22 22:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4667:
URL: https://github.com/apache/hadoop/pull/4667#issuecomment-1201772807

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 45s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 416m 26s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4667/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 17s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 527m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4667/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4667 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux be082fe1e5d1 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6154108e9b0acbe7f86cb9b0e553d1d60f6722b4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 

[jira] [Work logged] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16708?focusedWorklogId=797039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797039
 ]

ASF GitHub Bot logged work on HDFS-16708:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 21:18
Start Date: 01/Aug/22 21:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4666:
URL: https://github.com/apache/hadoop/pull/4666#issuecomment-1201732038

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   6m 32s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   6m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 23s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  cc  |  25m 23s |  |  the patch passed  |
   | -1 :x: |  javac  |  25m 23s | 
[/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4666/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 2850 unchanged - 0 
fixed = 2851 total (was 2850)  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  cc  |  20m 59s |  |  the patch passed  |
   | -1 :x: |  javac  |  20m 59s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4666/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 2645 
unchanged - 0 fixed = 2646 total (was 2645)  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 26s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4666/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 25 new + 697 unchanged - 1 fixed = 722 total (was 
698)  |
   | +1 :green_heart: |  mvnsite  |   7m 47s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 39s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4666/1/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 1 new + 0 
unchanged 

[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797035
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:59
Start Date: 01/Aug/22 20:59
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201714219

   > > Personally, I think this optimization should still be valuable. I hope 
that the submitted PR is not at the class level, but at least at the module level.
   > 
   > I wouldn't go for one per class but maybe one per subpackage. I also think this PR 
is doing a couple of things other than adding lambdas. For example, 
adjusting iterations.
   
   I understand your suggestion: the lambda refactoring should be tuned along with the other code 
modifications, not just the lambda code.
   




Issue Time Tracking
---

Worklog Id: (was: 797035)
Time Spent: 1h 20m  (was: 1h 10m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797033&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797033
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:49
Start Date: 01/Aug/22 20:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201705853

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4668/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 17 new + 1047 
unchanged - 59 fixed = 1064 total (was 1106)  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   3m 25s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4668/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  22m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 240m 48s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 13s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 351m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Dead store to size in 
org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.getFeature(Class)
  At 
INodeWithAdditionalFields.java:org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.getFeature(Class)
  At INodeWithAdditionalFields.java:[line 347] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4668/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4668 |
   | 

[jira] [Work logged] (HDFS-16700) RBF: Record the real client IP carried by the Router in the NameNode log

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16700?focusedWorklogId=797029=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797029
 ]

ASF GitHub Bot logged work on HDFS-16700:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:47
Start Date: 01/Aug/22 20:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4659:
URL: https://github.com/apache/hadoop/pull/4659#issuecomment-1201703911

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 23s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  20m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 41s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  22m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   5m 59s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 38s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4659/4/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   5m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  0s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 428m 24s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4659/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  37m 20s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   2m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 731m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 

[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797025=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797025
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:30
Start Date: 01/Aug/22 20:30
Worklog Time Spent: 10m 
  Work Description: goiri commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201689352

   > Personally, I think this optimization is still valuable, but I hope the 
submitted PRs are scoped at least at the module level rather than per class.
   
   I wouldn't go for one per class but maybe subpackage.
   I also think this PR is doing a couple of things other than adding lambdas.
   For example, adjusting for iterations.
   




Issue Time Tracking
---

Worklog Id: (was: 797025)
Time Spent: 1h  (was: 50m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(
> new Callable<GetJournaledEditsResponseProto>() {
>   @Override
>   public GetJournaledEditsResponseProto call() throws IOException {
> return getProxy().getJournaledEdits(journalId, nameServiceId,
> fromTxnId, maxTransactions);
>   }
> });
>   }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
> journalId, nameServiceId, fromTxnId, maxTransactions));
>   }
> {code}






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797021=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797021
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:10
Start Date: 01/Aug/22 20:10
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201661795

   > This is a massive PR. I'm not sure it is worth it, though. I would prefer 
to scope it, too.
   
   Personally, I think this optimization is still valuable, but I hope the 
submitted PRs are scoped at least at the module level rather than per class.




Issue Time Tracking
---

Worklog Id: (was: 797021)
Time Spent: 50m  (was: 40m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(
> new Callable<GetJournaledEditsResponseProto>() {
>   @Override
>   public GetJournaledEditsResponseProto call() throws IOException {
> return getProxy().getJournaledEdits(journalId, nameServiceId,
> fromTxnId, maxTransactions);
>   }
> });
>   }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
> journalId, nameServiceId, fromTxnId, maxTransactions));
>   }
> {code}






[jira] [Work logged] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?focusedWorklogId=797017=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797017
 ]

ASF GitHub Bot logged work on HDFS-16709:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 20:03
Start Date: 01/Aug/22 20:03
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4667:
URL: https://github.com/apache/hadoop/pull/4667#issuecomment-1201654095

   Can the unchecked flag be removed?
   
   ```
   @SuppressWarnings("unchecked")
   public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
 return useCache ? (T)CACHE.get().get(opCode) : (T)newInstance(opCode);
   } 
   ```
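As a general Java matter (separate from the specifics of FSEditLogOp, which this sketch does not reproduce), the annotation usually cannot be dropped: the cast from the cached value to T is inherently unchecked, because T is erased at runtime and the compiler cannot prove the cached value really is a T. A small stand-alone illustration with hypothetical Op types:

```java
import java.util.HashMap;
import java.util.Map;

public class UncheckedCastDemo {
  static class Op {}                              // stand-in for FSEditLogOp
  static class UpdateBlocksOp extends Op {}       // stand-in for a concrete op

  private static final Map<String, Op> CACHE = new HashMap<>();
  static {
    CACHE.put("OP_UPDATE_BLOCKS", new UpdateBlocksOp());
  }

  // The (T) cast is unchecked: at runtime T is erased, so the compiler emits a
  // warning (not an error) unless it is suppressed. Removing the annotation
  // reintroduces the warning rather than fixing anything.
  @SuppressWarnings("unchecked")
  static <T extends Op> T get(String opCode) {
    return (T) CACHE.get(opCode);
  }

  public static void main(String[] args) {
    // Thanks to the generic return type, no caller-side cast is needed --
    // which is exactly why the casts at call sites are redundant.
    UpdateBlocksOp op = get("OP_UPDATE_BLOCKS");
    System.out.println(op != null);  // true
  }
}
```

So the reviewer's question generally has the answer "no": the suppression guards a genuine unchecked cast, while the redundant casts at the call sites can go.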




Issue Time Tracking
---

Worklog Id: (was: 797017)
Time Spent: 20m  (was: 10m)

> Remove redundant cast in FSEditLogOp.class
> --
>
> Key: HDFS-16709
> URL: https://issues.apache.org/jira/browse/HDFS-16709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When I read some classes related to the NameNode edit log, I found many 
> redundant casts in FSEditLogOp.class; I feel we should remove them.
> Such as:
> {code:java}
> static UpdateBlocksOp getInstance(OpInstanceCache cache) {
>   return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
> } {code}
> Because cache.get() already casts the result to T, such as:
> {code:java}
> @SuppressWarnings("unchecked")
> public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
>   return useCache ? (T)CACHE.get().get(opCode) : (T)newInstance(opCode);
> } {code}






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=797016=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797016
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 19:58
Start Date: 01/Aug/22 19:58
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201649379

   Personally, I think this optimization is still valuable, but I hope PRs are 
not submitted one class at a time.




Issue Time Tracking
---

Worklog Id: (was: 797016)
Time Spent: 40m  (was: 0.5h)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(
> new Callable<GetJournaledEditsResponseProto>() {
>   @Override
>   public GetJournaledEditsResponseProto call() throws IOException {
> return getProxy().getJournaledEdits(journalId, nameServiceId,
> fromTxnId, maxTransactions);
>   }
> });
>   }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
> journalId, nameServiceId, fromTxnId, maxTransactions));
>   }
> {code}






[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=797011=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-797011
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 19:47
Start Date: 01/Aug/22 19:47
Worklog Time Spent: 10m 
  Work Description: slfan1989 commented on PR #4670:
URL: https://github.com/apache/hadoop/pull/4670#issuecomment-1201641023

   LGTM.




Issue Time Tracking
---

Worklog Id: (was: 797011)
Time Spent: 0.5h  (was: 20m)

> Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When I read some classes in the HDFS NameNode, I found many redundant 
> throws declarations in the org.apahce.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, the narrower declarations are redundant; we can remove them to 
> make the code clearer, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  
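The redundancy argument quoted above can be demonstrated in isolation. In this sketch the two exception classes are local stand-ins declared to extend IOException (matching the subclass relationship the issue describes, not the real Hadoop classes), so a single `throws IOException` clause and a single catch of IOException cover both:

```java
import java.io.IOException;

public class RedundantThrowsDemo {
  // Stand-ins for Hadoop's exception types: both extend IOException,
  // mirroring the subclass relationship described in the issue.
  static class ServiceFailedException extends IOException {}
  static class AccessControlException extends IOException {}

  // Declaring the subclasses alongside IOException adds no information for the
  // compiler or callers: "throws IOException" already covers both of them.
  static void transition(boolean failService) throws IOException {
    if (failService) {
      throw new ServiceFailedException();
    }
    throw new AccessControlException();
  }

  public static void main(String[] args) {
    // A single catch of the declared supertype handles both concrete subtypes.
    String caught;
    try {
      transition(true);
      caught = "none";
    } catch (IOException e) {
      caught = e.getClass().getSimpleName();
    }
    System.out.println(caught);  // ServiceFailedException
  }
}
```

The trade-off, as with any such cleanup, is that the narrower declarations document which concrete exceptions callers may see; dropping them trades that hint for a shorter signature.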






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=796970=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796970
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 17:48
Start Date: 01/Aug/22 17:48
Worklog Time Spent: 10m 
  Work Description: goiri commented on PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#issuecomment-1201520370

   This is a massive PR.
   I'm not sure it is worth it, though.
   I would prefer to scope it, too.




Issue Time Tracking
---

Worklog Id: (was: 796970)
Time Spent: 0.5h  (was: 20m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(
> new Callable<GetJournaledEditsResponseProto>() {
>   @Override
>   public GetJournaledEditsResponseProto call() throws IOException {
> return getProxy().getJournaledEdits(journalId, nameServiceId,
> fromTxnId, maxTransactions);
>   }
> });
>   }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
> journalId, nameServiceId, fromTxnId, maxTransactions));
>   }
> {code}






[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=796969=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796969
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 17:48
Start Date: 01/Aug/22 17:48
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4668:
URL: https://github.com/apache/hadoop/pull/4668#discussion_r934771841


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java:
##
@@ -636,10 +636,7 @@ private boolean pathResolvesToId(final long zoneId, final 
String zonePath)
   INodesInPath iip = dir.getINodesInPath(zonePath, DirOp.READ_LINK);
   lastINode = iip.getLastINode();
 }
-if (lastINode == null || lastINode.getId() != zoneId) {
-  return false;
-}
-return true;
+return lastINode != null && lastINode.getId() == zoneId;

Review Comment:
   I'm not sure this is more readable than what we had.
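For reference, the two forms in the diff are behaviorally identical, so the question is purely one of readability. A quick exhaustive check over the relevant cases, with the INode lookup reduced to plain parameters (an assumption for the sake of a runnable sketch):

```java
public class BooleanSimplifyDemo {
  // Original style from the diff: explicit if/return false, then return true.
  static boolean verbose(Object lastINode, long nodeId, long zoneId) {
    if (lastINode == null || nodeId != zoneId) {
      return false;
    }
    return true;
  }

  // Simplified style from the patch: one boolean expression.
  static boolean compact(Object lastINode, long nodeId, long zoneId) {
    return lastINode != null && nodeId == zoneId;
  }

  public static void main(String[] args) {
    // Exhaustively compare over null/non-null nodes and matching/mismatching ids.
    boolean same = true;
    Object[] nodes = {null, new Object()};
    for (Object n : nodes) {
      for (long id : new long[] {1L, 2L}) {
        same &= verbose(n, id, 1L) == compact(n, id, 1L);
      }
    }
    System.out.println(same);  // true
  }
}
```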





Issue Time Tracking
---

Worklog Id: (was: 796969)
Time Spent: 20m  (was: 10m)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(
> new Callable<GetJournaledEditsResponseProto>() {
>   @Override
>   public GetJournaledEditsResponseProto call() throws IOException {
> return getProxy().getJournaledEdits(journalId, nameServiceId,
> fromTxnId, maxTransactions);
>   }
> });
>   }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>   long fromTxnId, int maxTransactions) {
> return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
> journalId, nameServiceId, fromTxnId, maxTransactions));
>   }
> {code}






[jira] [Work logged] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?focusedWorklogId=796968=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796968
 ]

ASF GitHub Bot logged work on HDFS-16707:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 17:44
Start Date: 01/Aug/22 17:44
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4665:
URL: https://github.com/apache/hadoop/pull/4665#discussion_r934770189


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -1580,7 +1580,7 @@ private void acquirePermit(final String nsId, final 
UserGroupInformation ugi,
 // Throw StandByException,
 // Clients could fail over and try another router.
 if (rpcMonitor != null) {
-  rpcMonitor.getRPCMetrics().incrProxyOpPermitRejected();

Review Comment:
   Should we keep the old method too?



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcMonitor.java:
##
@@ -74,7 +74,17 @@ void init(
* exception.
*/
   void proxyOpFailureCommunicate(String nsId);
+  
+  /**
+   * Rejected to proxy an operation to a Namenode.
+   */
+  void ProxyOpPermitRejected(String nsId);

Review Comment:
   lower case p?



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -1590,6 +1590,9 @@ private void acquirePermit(final String nsId, final 
UserGroupInformation ugi,
 " is overloaded for NS: " + nsId;
 throw new StandbyException(msg);
   }
+  if (rpcMonitor!= null) {

Review Comment:
   spacing
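To illustrate the kind of per-nameservice counter being reviewed (class and method names here are illustrative stand-ins, not the actual Router metrics API), a minimal thread-safe sketch using LongAdder, with the lower-case method name the reviewer suggests:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class PermitRejectedMetricsDemo {
  // One rejection counter per nameservice id; LongAdder is a good fit for
  // write-heavy counters incremented from many RPC handler threads.
  private static final Map<String, LongAdder> REJECTED = new ConcurrentHashMap<>();

  // Lower-case initial letter, per the review comment on ProxyOpPermitRejected.
  static void proxyOpPermitRejected(String nsId) {
    REJECTED.computeIfAbsent(nsId, k -> new LongAdder()).increment();
  }

  // Read side, e.g. for export to a /prom-style endpoint.
  static long rejectedCount(String nsId) {
    LongAdder adder = REJECTED.get(nsId);
    return adder == null ? 0L : adder.sum();
  }

  public static void main(String[] args) {
    proxyOpPermitRejected("ns0");
    proxyOpPermitRejected("ns0");
    proxyOpPermitRejected("ns1");
    System.out.println(rejectedCount("ns0") + "," + rejectedCount("ns1"));  // 2,1
  }
}
```

Keeping an aggregate counter alongside the per-nameservice ones (the reviewer's "keep the old method too") preserves existing dashboards while adding the finer-grained view.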





Issue Time Tracking
---

Worklog Id: (was: 796968)
Time Spent: 0.5h  (was: 20m)

> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at the /jmx endpoint, in JSON format, which is not very convenient.
> This patch exposes these metrics at the /prom endpoint for Prometheus.






[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=796967=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796967
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 17:42
Start Date: 01/Aug/22 17:42
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4670:
URL: https://github.com/apache/hadoop/pull/4670#discussion_r934768914


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CheckpointFaultInjector.java:
##
@@ -36,9 +36,9 @@ public static void set(CheckpointFaultInjector instance) {
 CheckpointFaultInjector.instance = instance;
   }
   public void beforeGetImageSetsHeaders() throws IOException {}
-  public void afterSecondaryCallsRollEditLog() throws IOException {}

Review Comment:
   What is the source for so many wrong definitions?





Issue Time Tracking
---

Worklog Id: (was: 796967)
Time Spent: 20m  (was: 10m)

> Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When I read some classes in the HDFS NameNode, I found many redundant 
> throws declarations in the org.apahce.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, the narrower declarations are redundant; we can remove them to 
> make the code clearer, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  






[jira] [Commented] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread Simbarashe Dzinamarira (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17573842#comment-17573842
 ] 

Simbarashe Dzinamarira commented on HDFS-13522:
---

[~xkrogen] I agree that ideally we should only send states for the accessed 
namespaces. The separation between the RPCClient and RPCServer in the router 
makes this non-trivial, but it's an optimization worth investigating further.

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.






[jira] [Updated] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16710:
--
Labels: pull-request-available  (was: )

> Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading some classes of the HDFS NameNode, I found many redundant throw 
> declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, they are redundant; we can remove them to make the code clearer:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?focusedWorklogId=796943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796943
 ]

ASF GitHub Bot logged work on HDFS-16710:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 15:54
Start Date: 01/Aug/22 15:54
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4670:
URL: https://github.com/apache/hadoop/pull/4670

   ### Description of PR
   When reading some classes of the HDFS NameNode, I found many redundant throw 
declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:
   ```
   public synchronized void transitionToObserver(StateChangeRequestInfo req)
       throws ServiceFailedException, AccessControlException, IOException {
     checkNNStartup();
     nn.checkHaStateChange(req);
     nn.transitionToObserver();
   }
   ```
   
   Because ServiceFailedException and AccessControlException are subclasses of 
IOException, they are redundant; we can remove them to make the code clearer:
   ```
   public synchronized void transitionToObserver(StateChangeRequestInfo req)
       throws IOException {
     checkNNStartup();
     nn.checkHaStateChange(req);
     nn.transitionToObserver();
   }
   ```
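   The subclass argument above can be sketched outside Hadoop. The following is 
a minimal stand-alone sketch, not Hadoop code: `AccessControlException` here is 
a local stand-in for the real class, and `transition`/`attempt` are hypothetical 
names. It shows that a method declaring only `IOException` may still throw the 
subclass, and callers may still catch the narrower type:

   ```java
import java.io.IOException;

public class ThrowsDemo {
    // Local stand-in for org.apache.hadoop.security.AccessControlException.
    static class AccessControlException extends IOException {
        AccessControlException(String msg) { super(msg); }
    }

    // Declares only the superclass; throwing the subclass is still legal.
    static void transition(boolean allowed) throws IOException {
        if (!allowed) {
            throw new AccessControlException("not permitted");
        }
    }

    // Callers can still catch the narrower type even though the method
    // only declares IOException.
    static String attempt() {
        try {
            transition(false);
            return "ok";
        } catch (AccessControlException e) {
            return "caught-subclass";
        } catch (IOException e) {
            return "caught-io";
        }
    }

    public static void main(String[] args) {
        System.out.println(attempt()); // prints "caught-subclass"
    }
}
   ```

   So removing the subclass names from the throws clause changes neither what 
the method may throw nor what callers can catch; it only shortens the 
declaration.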
   
   




Issue Time Tracking
---

Worklog Id: (was: 796943)
Remaining Estimate: 0h
Time Spent: 10m

> Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading some classes of the HDFS NameNode, I found many redundant throw 
> declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, they are redundant; we can remove them to make the code clearer:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16710:

Description: 
When reading some classes of the HDFS NameNode, I found many redundant throw 
declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws ServiceFailedException, AccessControlException, IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
Because ServiceFailedException and AccessControlException are subclasses of 
IOException, they are redundant; we can remove them to make the code clearer:
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
 

  was:
When reading some classes of the HDFS NameNode, I found many redundant throw 
declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:

 
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws ServiceFailedException, AccessControlException, IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
Because ServiceFailedException and AccessControlException are subclasses of 
IOException, they are redundant; we can remove them to make the code clearer:

 

 
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
 

 

 


> Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode 
> package
> ---
>
> Key: HDFS-16710
> URL: https://issues.apache.org/jira/browse/HDFS-16710
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>
> When reading some classes of the HDFS NameNode, I found many redundant throw 
> declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws ServiceFailedException, AccessControlException, IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
> Because ServiceFailedException and AccessControlException are subclasses of 
> IOException, they are redundant; we can remove them to make the code clearer:
> {code:java}
> public synchronized void transitionToObserver(StateChangeRequestInfo req)
> throws IOException {
>   checkNNStartup();
>   nn.checkHaStateChange(req);
>   nn.transitionToObserver();
> } {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16710) Remove redundant throw exceptions in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ZanderXu (Jira)
ZanderXu created HDFS-16710:
---

 Summary: Remove redundant throw exceptions in 
org.apahce.hadoop.hdfs.server.namenode package
 Key: HDFS-16710
 URL: https://issues.apache.org/jira/browse/HDFS-16710
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


When reading some classes of the HDFS NameNode, I found many redundant throw 
declarations in the org.apache.hadoop.hdfs.server.namenode package, such as:

 
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws ServiceFailedException, AccessControlException, IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
Because ServiceFailedException and AccessControlException are subclasses of 
IOException, they are redundant; we can remove them to make the code clearer:

 

 
{code:java}
public synchronized void transitionToObserver(StateChangeRequestInfo req)
throws IOException {
  checkNNStartup();
  nn.checkHaStateChange(req);
  nn.transitionToObserver();
} {code}
 

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?focusedWorklogId=796930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796930
 ]

ASF GitHub Bot logged work on HDFS-16695:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 14:56
Start Date: 01/Aug/22 14:56
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4668:
URL: https://github.com/apache/hadoop/pull/4668

   ### Description of PR
   Improve code with lambdas in the org.apache.hadoop.hdfs.server.namenode package.
   
   For example:
   Current logic:
   ```
   public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
       long fromTxnId, int maxTransactions) {
     return parallelExecutor.submit(
         new Callable<GetJournaledEditsResponseProto>() {
           @Override
           public GetJournaledEditsResponseProto call() throws IOException {
             return getProxy().getJournaledEdits(journalId, nameServiceId,
                 fromTxnId, maxTransactions);
           }
         });
   }
   ```
   
   Improved Code with Lambda:
   ```
   public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
       long fromTxnId, int maxTransactions) {
     return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
         journalId, nameServiceId, fromTxnId, maxTransactions));
   }
   ```
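   The refactoring above can be reproduced with plain JDK types. This is an 
illustrative sketch, not Hadoop code: `ExecutorService`/`Future` stand in for 
Guava's `ListeningExecutorService`/`ListenableFuture`, and the method names are 
hypothetical. The lambda form works because `submit()` lets the compiler infer 
`Callable<String>`:

   ```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LambdaDemo {
    // Daemon threads so the JVM can exit without an explicit shutdown.
    private static final ExecutorService parallelExecutor =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

    // Before: anonymous inner class implementing Callable.
    static Future<String> beforeRefactor(String journalId) {
        return parallelExecutor.submit(new Callable<String>() {
            @Override
            public String call() {
                return "edits-for-" + journalId;
            }
        });
    }

    // After: equivalent lambda; Callable<String> is inferred by submit().
    static Future<String> afterRefactor(String journalId) {
        return parallelExecutor.submit(() -> "edits-for-" + journalId);
    }

    // Small helper so callers need not handle checked exceptions.
    static String value(Future<String> f) {
        try {
            return f.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Both forms behave identically.
        System.out.println(value(beforeRefactor("jn1"))); // prints "edits-for-jn1"
        System.out.println(value(afterRefactor("jn1")));  // prints "edits-for-jn1"
    }
}
   ```

   The two forms compile to the same functional behavior; the lambda merely 
drops the boilerplate of the anonymous class and its `@Override` method.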




Issue Time Tracking
---

Worklog Id: (was: 796930)
Remaining Estimate: 0h
Time Spent: 10m

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve code with lambdas in the org.apache.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16695:
--
Labels: pull-request-available  (was: )

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Improve code with lambdas in the org.apache.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16695:

Summary: Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode 
package  (was: Improve Code with Lambda in hadoop-hdfs module)

> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>
> Improve Code with Lambda in hadoop-hdfs module. 
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16695) Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package

2022-08-01 Thread ZanderXu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZanderXu updated HDFS-16695:

Description: 
Improve code with lambdas in the org.apache.hadoop.hdfs.server.namenode package.

For example:
Current logic:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(
      new Callable<GetJournaledEditsResponseProto>() {
        @Override
        public GetJournaledEditsResponseProto call() throws IOException {
          return getProxy().getJournaledEdits(journalId, nameServiceId,
              fromTxnId, maxTransactions);
        }
      });
}
{code}
Improved Code with Lambda:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
      journalId, nameServiceId, fromTxnId, maxTransactions));
}
{code}

  was:
Improve Code with Lambda in hadoop-hdfs module. 

For example:
Current logic:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(
      new Callable<GetJournaledEditsResponseProto>() {
        @Override
        public GetJournaledEditsResponseProto call() throws IOException {
          return getProxy().getJournaledEdits(journalId, nameServiceId,
              fromTxnId, maxTransactions);
        }
      });
}
{code}

Improved Code with Lambda:
{code:java}
public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
    long fromTxnId, int maxTransactions) {
  return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
      journalId, nameServiceId, fromTxnId, maxTransactions));
}
{code}




> Improve Code with Lambda in org.apahce.hadoop.hdfs.server.namenode package
> --
>
> Key: HDFS-16695
> URL: https://issues.apache.org/jira/browse/HDFS-16695
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>
> Improve code with lambdas in the org.apache.hadoop.hdfs.server.namenode package.
> For example:
> Current logic:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(
>       new Callable<GetJournaledEditsResponseProto>() {
>         @Override
>         public GetJournaledEditsResponseProto call() throws IOException {
>           return getProxy().getJournaledEdits(journalId, nameServiceId,
>               fromTxnId, maxTransactions);
>         }
>       });
> }
> {code}
> Improved Code with Lambda:
> {code:java}
> public ListenableFuture<GetJournaledEditsResponseProto> getJournaledEdits(
>     long fromTxnId, int maxTransactions) {
>   return parallelExecutor.submit(() -> getProxy().getJournaledEdits(
>       journalId, nameServiceId, fromTxnId, maxTransactions));
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?focusedWorklogId=796889&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796889
 ]

ASF GitHub Bot logged work on HDFS-16707:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 13:49
Start Date: 01/Aug/22 13:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4665:
URL: https://github.com/apache/hadoop/pull/4665#issuecomment-1201232800

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4665/1/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 22s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4665/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m 27s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4665/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 2 new + 0 unchanged - 0 fixed 
= 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  30m 46s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 152m 41s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  The method name 
org.apache.hadoop.hdfs.server.federation.metrics.FederationRPCPerformanceMonitor.ProxyOpPermitAccepted(String)
 doesn't start with a lower case letter  At 
FederationRPCPerformanceMonitor.java:start with a lower case letter  At 
FederationRPCPerformanceMonitor.java:[lines 201-205] |
   |  |  The method name 

[jira] [Updated] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16709:
--
Labels: pull-request-available  (was: )

> Remove redundant cast in FSEditLogOp.class
> --
>
> Key: HDFS-16709
> URL: https://issues.apache.org/jira/browse/HDFS-16709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading some classes related to the NameNode edit log, I found many 
> redundant casts in FSEditLogOp.class, which we should remove.
> Such as:
> {code:java}
> static UpdateBlocksOp getInstance(OpInstanceCache cache) {
>   return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
> } {code}
> Because cache.get() already casts the result to T, such as:
> {code:java}
> @SuppressWarnings("unchecked")
> public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
>   return useCache ? (T) CACHE.get().get(opCode) : (T) newInstance(opCode);
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16709?focusedWorklogId=796878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796878
 ]

ASF GitHub Bot logged work on HDFS-16709:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 13:18
Start Date: 01/Aug/22 13:18
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4667:
URL: https://github.com/apache/hadoop/pull/4667

   ### Description of PR
   
   When reading some classes related to the NameNode edit log, I found many 
redundant casts in FSEditLogOp.class, which we should remove.
   
   Such as:
   ```
   static UpdateBlocksOp getInstance(OpInstanceCache cache) {
     return (UpdateBlocksOp) cache.get(OP_UPDATE_BLOCKS);
   }
   ``` 
   
   Because cache.get() already casts the result to T, we can remove the 
redundant cast.
   ```
   @SuppressWarnings("unchecked")
   public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
     return useCache ? (T) CACHE.get().get(opCode) : (T) newInstance(opCode);
   }
   ```
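   A minimal stand-alone sketch of why the call-site cast is redundant (toy 
types, not the real FSEditLogOp hierarchy; the names below are illustrative): 
the generic get() already performs the one unchecked cast, and the compiler 
infers T from the caller's declared return type.

   ```java
import java.util.EnumMap;
import java.util.Map;

public class CastDemo {
    enum OpCode { OP_ADD, OP_UPDATE_BLOCKS }

    static class Op {}
    static class UpdateBlocksOp extends Op {}

    static class OpInstanceCache {
        private final Map<OpCode, Op> cache = new EnumMap<>(OpCode.class);

        OpInstanceCache() {
            cache.put(OpCode.OP_UPDATE_BLOCKS, new UpdateBlocksOp());
        }

        // The single unchecked cast lives here, mirroring FSEditLogOp.
        @SuppressWarnings("unchecked")
        <T extends Op> T get(OpCode opCode) {
            return (T) cache.get(opCode);
        }
    }

    // No (UpdateBlocksOp) cast needed: T is inferred as UpdateBlocksOp
    // from the declared return type of this method.
    static UpdateBlocksOp getInstance(OpInstanceCache cache) {
        return cache.get(OpCode.OP_UPDATE_BLOCKS);
    }

    public static void main(String[] args) {
        Op op = getInstance(new OpInstanceCache());
        System.out.println(op.getClass().getSimpleName()); // prints UpdateBlocksOp
    }
}
   ```

   Since generic casts are erased at runtime, removing the redundant call-site 
cast cannot change behavior; it only removes noise.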
   
   




Issue Time Tracking
---

Worklog Id: (was: 796878)
Remaining Estimate: 0h
Time Spent: 10m

> Remove redundant cast in FSEditLogOp.class
> --
>
> Key: HDFS-16709
> URL: https://issues.apache.org/jira/browse/HDFS-16709
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reading some classes related to the NameNode edit log, I found many 
> redundant casts in FSEditLogOp.class, which we should remove.
> Such as:
> {code:java}
> static UpdateBlocksOp getInstance(OpInstanceCache cache) {
>   return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
> } {code}
> Because cache.get() already casts the result to T, such as:
> {code:java}
> @SuppressWarnings("unchecked")
> public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
>   return useCache ? (T) CACHE.get().get(opCode) : (T) newInstance(opCode);
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16709) Remove redundant cast in FSEditLogOp.class

2022-08-01 Thread ZanderXu (Jira)
ZanderXu created HDFS-16709:
---

 Summary: Remove redundant cast in FSEditLogOp.class
 Key: HDFS-16709
 URL: https://issues.apache.org/jira/browse/HDFS-16709
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


When reading some classes related to the NameNode edit log, I found many 
redundant casts in FSEditLogOp.class, which we should remove.

Such as:
{code:java}
static UpdateBlocksOp getInstance(OpInstanceCache cache) {
  return (UpdateBlocksOp)cache.get(OP_UPDATE_BLOCKS);
} {code}
Because cache.get() already casts the result to T, such as:
{code:java}
@SuppressWarnings("unchecked")
public <T extends FSEditLogOp> T get(FSEditLogOpCodes opCode) {
  return useCache ? (T) CACHE.get().get(opCode) : (T) newInstance(opCode);
} {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573722#comment-17573722
 ] 

zhengchenyu edited comment on HDFS-16708 at 8/1/22 11:58 AM:
-

[~xkrogen] [~xuzq_zander] [~simbadzina] 

Let's continue discussing Design A here. I think Design A is not implemented in 
any of HDFS-13522's PRs.

There is no need to propagate all namespaces' state IDs in Design A; we can 
propagate them on demand from the client. I think we need a complete 
implementation and a document, and then we can continue the discussion. I will 
submit a new draft PR with the complete implementation; the document is here 
([^HDFS-13522_proposal_zhengchenyu.pdf]). Can you give me some suggestions?

_Note: It is only a draft; the settings are a little complex, so maybe I need 
to simplify them._


was (Author: zhengchenyu):
[~xkrogen] [~xuzq_zander] [~simbadzina] 

Let's continue discussing Design A here. I think Design A is not implemented in 
any of HDFS-13522's PRs.

There is no need to propagate all namespaces' state IDs in Design A; we can 
propagate them on demand from the client. I think we need a complete 
implementation and a document, and then we can continue the discussion. I will 
submit a new draft PR with the complete implementation; the document is here 
([^HDFS-13522_proposal_zhengchenyu.pdf]). Can you give me some suggestions?

_Note: It is a draft; the settings are a little complex, so maybe I need to 
simplify them._

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573722#comment-17573722
 ] 

zhengchenyu edited comment on HDFS-16708 at 8/1/22 11:58 AM:
-

[~xkrogen] [~xuzq_zander] [~simbadzina] 

Let's continue discussing Design A here. I think Design A is not implemented in 
any of HDFS-13522's PRs.

There is no need to propagate all namespaces' state IDs in Design A; we can 
propagate them on demand from the client. I think we need a complete 
implementation and a document, and then we can continue the discussion. I will 
submit a new draft PR with the complete implementation; the document is here 
([^HDFS-13522_proposal_zhengchenyu.pdf]). Can you give me some suggestions?

_Note: It is a draft; the settings are a little complex, so maybe I need to 
simplify them._


was (Author: zhengchenyu):
[~xkrogen] [~xuzq_zander] [~simbadzina] 

Let's continue discussing Design A here. I think Design A is not implemented in 
any of HDFS-13522's PRs.

There is no need to propagate all namespaces' state IDs in Design A; we can 
propagate them on demand from the client. I think we need a complete 
implementation and a document, and then we can continue the discussion. I will 
submit a new draft PR with the complete implementation. Can you give me some 
suggestions?

_Note: It is a draft; the settings are a little complex, so maybe I need to 
simplify them._

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573722#comment-17573722
 ] 

zhengchenyu commented on HDFS-16708:


[~xkrogen] [~xuzq_zander] [~simbadzina] 

Let's continue discussing Design A here. I think Design A is not implemented in 
any of HDFS-13522's PRs.

There is no need to propagate every namespace's state id in Design A; we can 
propagate them on the client's demand. I think we need a complete implementation 
and a document before continuing the discussion. I will submit a new draft PR 
with the complete implementation. Can you give me some suggestions? 

_Note: It is a draft, the setting is a little complex, maybe I need to make it 
simple._ 
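
The on-demand propagation idea above can be sketched as follows. This is a 
hedged illustration only — the class and method names are invented, not 
Hadoop's actual RPC client code: the client remembers the last-seen state id 
per nameservice it has actually touched, and sends nothing for the rest.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: track the last-seen state id per nameservice and
 *  attach it to a request only for nameservices this client has talked to,
 *  instead of propagating every namespace's state id to every client. */
class ClientStateIds {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

    /** Record the state id carried back in an RPC response. */
    void update(String nameservice, long stateId) {
        // max() so a stale, reordered response never moves the id backwards
        lastSeen.merge(nameservice, stateId, Math::max);
    }

    /** State id to send with the next request, or -1 if this client has
     *  never touched the nameservice (nothing needs to be propagated). */
    long forRequest(String nameservice) {
        return lastSeen.getOrDefault(nameservice, -1L);
    }
}

public class StateIdDemo {
    public static void main(String[] args) {
        ClientStateIds ids = new ClientStateIds();
        ids.update("ns1", 100);
        ids.update("ns1", 90);                          // stale response, ignored
        System.out.println(ids.forRequest("ns1"));      // 100
        System.out.println(ids.forRequest("ns2"));      // -1: untouched namespace
    }
}
```

A router following this scheme would only need to resolve and forward state ids 
for the nameservices named in each client's request header.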

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16708?focusedWorklogId=796859&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796859
 ]

ASF GitHub Bot logged work on HDFS-16708:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 11:53
Start Date: 01/Aug/22 11:53
Worklog Time Spent: 10m 
  Work Description: zhengchenyu opened a new pull request, #4666:
URL: https://github.com/apache/hadoop/pull/4666

   https://issues.apache.org/jira/browse/HDFS-16708




Issue Time Tracking
---

Worklog Id: (was: 796859)
Remaining Estimate: 0h
Time Spent: 10m

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16708:
--
Labels: pull-request-available  (was: )

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated HDFS-13522:
---
Attachment: (was: HDFS-13522_proposal_zhengchenyu.pdf)

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated HDFS-16708:
---
Description: Implement the Design A described in HDFS-13522.

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>
> Implement the Design A described in HDFS-13522.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)
zhengchenyu created HDFS-16708:
--

 Summary: RBF: Support transmit state id from client in router.
 Key: HDFS-16708
 URL: https://issues.apache.org/jira/browse/HDFS-16708
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: zhengchenyu
Assignee: zhengchenyu
 Attachments: HDFS-13522_proposal_zhengchenyu.pdf





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16708) RBF: Support transmit state id from client in router.

2022-08-01 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated HDFS-16708:
---
Attachment: HDFS-13522_proposal_zhengchenyu.pdf

> RBF: Support transmit state id from client in router.
> -
>
> Key: HDFS-16708
> URL: https://issues.apache.org/jira/browse/HDFS-16708
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-13522_proposal_zhengchenyu.pdf
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ https://issues.apache.org/jira/browse/HDFS-13522 ]


zhengchenyu deleted comment on HDFS-13522:


was (Author: zhengchenyu):
[~xkrogen] [~xuzq_zander] [~simbadzina] As far as I know, Design A is not 
implemented in any PR.

For Design A, there is no need to propagate every namespace's state id; we can 
propagate them on the client's demand. I think we need a complete implementation 
and a document before continuing the discussion. I have a draft that combines 
Designs A and B. If anyone is interested in Design A, could you help review this 
draft: 
[https://github.com/zhengchenyu/hadoop/commit/a47ae882943f090836a801cf758761c5b970d813]

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread ZanderXu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573713#comment-17573713
 ] 

ZanderXu commented on HDFS-13522:
-

[~zhengchenyu] [~simbadzina] If we plan to focus on Design B first, we need to 
clarify what functions Design B needs to support and what needs to be done.

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated HDFS-13522:
---
Attachment: HDFS-13522_proposal_zhengchenyu.pdf

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573704#comment-17573704
 ] 

zhengchenyu edited comment on HDFS-13522 at 8/1/22 11:34 AM:
-

[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with [~xuzq_zander]'s suggestion to focus on Design B first and add 
Design A as a bonus item. It is not easy to review both Design A and Design B.

Could we only complete Design B in this issue? [~omalley] [~elgoiri] 
[~simbadzina] 


was (Author: zhengchenyu):
[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with [~xuzq_zander]'s suggestion to focus on Design B first and add 
Design A as a bonus item. It is not easy to review both Design A and Design B.

Could we only complete Design B in this issue? [~omalley] [~elgoiri] 

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573707#comment-17573707
 ] 

zhengchenyu edited comment on HDFS-13522 at 8/1/22 11:32 AM:
-

[~xkrogen] [~xuzq_zander] [~simbadzina] As far as I know, Design A is not 
implemented in any PR.

For Design A, there is no need to propagate every namespace's state id; we can 
propagate them on the client's demand. I think we need a complete implementation 
and a document before continuing the discussion. I have a draft that combines 
Designs A and B. If anyone is interested in Design A, could you help review this 
draft: 
[https://github.com/zhengchenyu/hadoop/commit/a47ae882943f090836a801cf758761c5b970d813]


was (Author: zhengchenyu):
[~xkrogen] [~xuzq_zander] [~simbadzina] As far as I know, Design A is not 
implemented in any PR.

For Design A, there is no need to propagate every namespace's state id; we can 
propagate them on the client's demand. I think we need a complete implementation 
and a document before continuing the discussion. I have a draft that combines 
Designs A and B. If anyone is interested in Design A, could you help review this 
draft: 
[https://github.com/zhengchenyu/hadoop/commit/a47ae882943f090836a801cf758761c5b970d813]

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengchenyu updated HDFS-13522:
---
Attachment: (was: HDFS-13522_proposal_zhengchenyu_v1.pdf)

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, RBF_ Observer support.pdf, Router+Observer RPC 
> clogging.png, ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573704#comment-17573704
 ] 

zhengchenyu edited comment on HDFS-13522 at 8/1/22 11:26 AM:
-

[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with [~xuzq_zander]'s suggestion to focus on Design B first and add 
Design A as a bonus item. It is not easy to review both Design A and Design B.

Could we only complete Design B in this issue? [~omalley] [~elgoiri] 


was (Author: zhengchenyu):
[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with [~xuzq_zander]'s suggestion to focus on Design B first and add 
Design A as a bonus item. It is not easy to review both Design A and Design B.

Could we only complete Design B in this issue?

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu_v1.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573707#comment-17573707
 ] 

zhengchenyu commented on HDFS-13522:


[~xkrogen] [~xuzq_zander] [~simbadzina] As far as I know, Design A is not 
implemented in any PR.

For Design A, there is no need to propagate every namespace's state id; we can 
propagate them on the client's demand. I think we need a complete implementation 
and a document before continuing the discussion. I have a draft that combines 
Designs A and B. If anyone is interested in Design A, could you help review this 
draft: 
[https://github.com/zhengchenyu/hadoop/commit/a47ae882943f090836a801cf758761c5b970d813]

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu_v1.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573704#comment-17573704
 ] 

zhengchenyu edited comment on HDFS-13522 at 8/1/22 11:24 AM:
-

[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with [~xuzq_zander]'s suggestion to focus on Design B first and add 
Design A as a bonus item. It is not easy to review both Design A and Design B.

Could we only complete Design B in this issue?


was (Author: zhengchenyu):
[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with your suggestion to focus on Design B first and add Design A as a 
bonus item. It is not easy to review both Design A and Design B.

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu_v1.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?focusedWorklogId=796853&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796853
 ]

ASF GitHub Bot logged work on HDFS-16707:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 11:15
Start Date: 01/Aug/22 11:15
Worklog Time Spent: 10m 
  Work Description: qijiale76 opened a new pull request, #4665:
URL: https://github.com/apache/hadoop/pull/4665

   ### Description of PR
   [HDFS-16302](https://issues.apache.org/jira/browse/HDFS-16302) introduced 
request records for each namespace, but they are only exposed at the /jmx 
endpoint, in JSON format, which is not very convenient.
   This patch exposes these metrics at the /prom endpoint for Prometheus.
   
   ### How was this patch tested?
   manual testing
   




Issue Time Tracking
---

Worklog Id: (was: 796853)
Remaining Estimate: 0h
Time Spent: 10m

> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at the /jmx endpoint, in JSON format, which is not very convenient.
> This patch exposes these metrics at the /prom endpoint for Prometheus.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16707:
--
Labels: pull-request-available  (was: )

> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at the /jmx endpoint, in JSON format, which is not very convenient.
> This patch exposes these metrics at the /prom endpoint for Prometheus.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13522) RBF: Support observer node from Router-Based Federation

2022-08-01 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17573704#comment-17573704
 ] 

zhengchenyu commented on HDFS-13522:


[~xuzq_zander] 

Hi, the use case for Design A is indeed very rare, but Design A also has 
advantages.
(1) More flexible
Clients could set their msync period themselves.
Example: in our cluster, on one nameservice, some users periodically check 
whether an HDFS file has been created; they may need high time precision, which 
means more frequent msync. (Though I am opposed to this usage.)

(2) Save msync calls
I think there is no need to call msync periodically for most Hive and MR 
applications. Design A will save more msync calls than Design B.

I agree with your suggestion to focus on Design B first and add Design A as a 
bonus item. It is not easy to review both Design A and Design B.
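
The client-chosen msync period in point (1) can be sketched minimally. This is 
purely illustrative — `MsyncPolicy` and its methods are invented names, not 
part of the HDFS client: the client only pays for an msync when its own 
configured period has elapsed, so each application tunes its staleness versus 
RPC-load trade-off independently.

```java
/** Hypothetical sketch of point (1): each client picks its own msync period,
 *  trading read staleness against msync RPC load. Not actual HDFS client code. */
class MsyncPolicy {
    private final long periodMs;            // client-chosen, e.g. from a conf key
    private long lastSyncMs = Long.MIN_VALUE;

    MsyncPolicy(long periodMs) { this.periodMs = periodMs; }

    /** Whether an msync should be issued before a read at time nowMs. */
    boolean shouldMsync(long nowMs) {
        // First read always syncs; afterwards only when the period has elapsed.
        if (lastSyncMs == Long.MIN_VALUE || nowMs - lastSyncMs >= periodMs) {
            lastSyncMs = nowMs;             // count this call as a sync
            return true;
        }
        return false;
    }
}

public class MsyncDemo {
    public static void main(String[] args) {
        MsyncPolicy precise = new MsyncPolicy(100);     // high precision, frequent msync
        MsyncPolicy relaxed = new MsyncPolicy(10_000);  // typical batch job
        System.out.println(precise.shouldMsync(0));     // true: first read
        System.out.println(precise.shouldMsync(150));   // true: 100ms period elapsed
        System.out.println(relaxed.shouldMsync(0));     // true: first read
        System.out.println(relaxed.shouldMsync(150));   // false: within 10s window
    }
}
```

A relaxed period (or an effectively infinite one) is what "saves msync" for 
batch workloads in point (2), while the polling user simply picks a small one.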

> RBF: Support observer node from Router-Based Federation
> ---
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Simbarashe Dzinamarira
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-13522.001.patch, HDFS-13522.002.patch, 
> HDFS-13522_WIP.patch, HDFS-13522_proposal_zhengchenyu_v1.pdf, RBF_ Observer 
> support.pdf, Router+Observer RPC clogging.png, 
> ShortTerm-Routers+Observer.png, 
> observer_reads_in_rbf_proposal_simbadzina_v1.pdf, 
> observer_reads_in_rbf_proposal_simbadzina_v2.pdf
>
>  Time Spent: 20h 50m
>  Remaining Estimate: 0h
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread Jiale Qi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiale Qi updated HDFS-16707:

Description: 
HDFS-16302 introduced request records for each namespace, but they are only 
exposed at the /jmx endpoint, in JSON format, which is not very convenient.

This patch exposes these metrics at the /prom endpoint for Prometheus.

  was:
HDFS-16302 introduced request records for each namespace, but they are only exposed 
at the /jmx endpoint in JSON format, which is not very convenient.

This patch exposes these metrics at the /prom endpoint for


> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at the /jmx endpoint in JSON format, which is not very convenient.
> This patch exposes these metrics at the /prom endpoint for Prometheus.






[jira] [Updated] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread Jiale Qi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiale Qi updated HDFS-16707:

Description: 
HDFS-16302 introduced request records for each namespace, but they are only exposed 
at the /jmx endpoint in JSON format, which is not very convenient.

This patch exposes these metrics at the /prom endpoint for

  was:
HDFS-16302 introduced request records for each namespace, but they are only exposed 
at /jmx in JSON format, which is not very convenient.

They are not exposed at /prom.


> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at the /jmx endpoint in JSON format, which is not very convenient.
> This patch exposes these metrics at the /prom endpoint for






[jira] [Updated] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread Jiale Qi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiale Qi updated HDFS-16707:

Description: 
HDFS-16302 introduced request records for each namespace, but they are only exposed 
at /jmx in JSON format, which is not very convenient.

They are not exposed at /prom.

  was:[HDFS-16302|https://issues.apache.org/jira/browse/HDFS-16302] introduced 
request records for each namespace, but they are only exposed at /jmx in JSON 
format, which is not very convenient. They are not exposed at /prom.


> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Priority: Minor
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at /jmx in JSON format, which is not very convenient.
> They are not exposed at /prom.






[jira] [Created] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread Jiale Qi (Jira)
Jiale Qi created HDFS-16707:
---

 Summary: RBF: Expose RouterRpcFairnessPolicyController related 
request record metrics for each nameservice to Prometheus
 Key: HDFS-16707
 URL: https://issues.apache.org/jira/browse/HDFS-16707
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Jiale Qi


[HDFS-16302|https://issues.apache.org/jira/browse/HDFS-16302] introduced request 
records for each namespace, but they are only exposed at /jmx in JSON format, 
which is not very convenient. They are not exposed at /prom.






[jira] [Assigned] (HDFS-16707) RBF: Expose RouterRpcFairnessPolicyController related request record metrics for each nameservice to Prometheus

2022-08-01 Thread Jiale Qi (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiale Qi reassigned HDFS-16707:
---

Assignee: Jiale Qi

> RBF: Expose RouterRpcFairnessPolicyController related request record metrics 
> for each nameservice to Prometheus
> ---
>
> Key: HDFS-16707
> URL: https://issues.apache.org/jira/browse/HDFS-16707
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jiale Qi
>Assignee: Jiale Qi
>Priority: Minor
>
> HDFS-16302 introduced request records for each namespace, but they are only 
> exposed at /jmx in JSON format, which is not very convenient.
> They are not exposed at /prom.






[jira] [Work logged] (HDFS-16699) RBF: Router Update Observer NameNode state to Active when failover because of socketTimeout Exception

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16699?focusedWorklogId=796834=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796834
 ]

ASF GitHub Bot logged work on HDFS-16699:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 09:43
Start Date: 01/Aug/22 09:43
Worklog Time Spent: 10m 
  Work Description: SanQiMax commented on PR #4663:
URL: https://github.com/apache/hadoop/pull/4663#issuecomment-1200967858

   > @SanQiMax Thank you very much for your contribution, but in JIRA, your 
description needs to be clearer, you can refer to @ZanderXu's Jira description, 
he wrote very well.
   
   The router obtains the full NameNode list (randomly sorted) before forwarding 
each read request. For a write request it obtains the Active, Standby, and 
Observer NameNode lists and tries them in turn until the request is processed 
successfully.
   From a theoretical analysis: suppose the router processes a read request, the 
first NameNode in the list is an observer, and that observer throws a 
ConnectTimeoutException back to the router. The router then tries the second 
NameNode in the list; if that attempt reaches another observer and returns 
successfully, the logic shown in the following figure is executed and that 
NameNode's state is set to ACTIVE. When a subsequent write request is processed, 
the router fetches the list of NameNodes in the ACTIVE state (which may now 
contain several entries) and may forward the request to a non-active NameNode.
   This misbehaviour is masked when the cluster has only active and standby 
NameNodes: even if a standby's state is set to ACTIVE and the request is sent to 
it, the standby will not process reads or writes, and the router then forwards 
the request to the real Active NameNode. Once observer NameNodes are added, 
however, an observer's state may be set to ACTIVE.




Issue Time Tracking
---

Worklog Id: (was: 796834)
Time Spent: 1h  (was: 50m)

> RBF: Router Update Observer NameNode state to Active when failover because of 
>  socketTimeout Exception
> --
>
> Key: HDFS-16699
> URL: https://issues.apache.org/jira/browse/HDFS-16699
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.1.1
>Reporter: ShuangQi Xia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We found that the router always prints logs indicating that the Observer 
> NameNode's state changed to active. Here's the log:
> 2022-03-18 11:00:54,589 | INFO  | NamenodeHeartbeatService hacluster 11342-0 
> | NN registration state has changed: 
> test101:25019->hacluster:11342:test103:25000-ACTIVE -> 
> test102:25019->hacluster:11342::test103:25000-OBSERVER | 
> MembershipStoreImpl.java:170
> In the code, I found that when a router request fails for some reason, like a 
> socket timeout exception, and fails over to the Observer NameNode, the router 
> will update its state to Active:
> {code:java}
> if (failover) {
>   // Success on alternate server, update
>   InetSocketAddress address = client.getAddress();
>   namenodeResolver.updateActiveNamenode(nsId, address);
> }
> {code}
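
A minimal, self-contained sketch of the guard this report implies (the `State` enum and method name below are illustrative, not the actual Hadoop API): skip the promotion when the namenode that answered the retry is an observer, since observers never serve writes.

```java
public class FailoverGuard {
    // Illustrative states, loosely mirroring FederationNamenodeServiceState.
    enum State { ACTIVE, STANDBY, OBSERVER }

    // Hypothetical fix: an observer that successfully answers a retried
    // read must not be recorded as ACTIVE; otherwise later write requests
    // may be routed to a namenode that cannot process them.
    static State updateOnFailover(State responderState) {
        return responderState == State.OBSERVER ? State.OBSERVER : State.ACTIVE;
    }

    public static void main(String[] args) {
        System.out.println(updateOnFailover(State.OBSERVER)); // stays OBSERVER
        System.out.println(updateOnFailover(State.STANDBY));  // promoted to ACTIVE
    }
}
```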






[jira] [Work logged] (HDFS-16700) RBF: Record the real client IP carried by the Router in the NameNode log

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16700?focusedWorklogId=796791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796791
 ]

ASF GitHub Bot logged work on HDFS-16700:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 07:08
Start Date: 01/Aug/22 07:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4659:
URL: https://github.com/apache/hadoop/pull/4659#issuecomment-1200801360

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 45s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 59s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 50s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 43s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4659/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 239 unchanged - 0 fixed = 240 total (was 
239)  |
   | +1 :green_heart: |  mvnsite  |   5m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 36s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4659/3/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  
hadoop-common-project_hadoop-common-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1
 with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   5m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   9m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 49s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 456m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4659/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  44m 16s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 778m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   
   
   | Subsystem | Report/Notes |
   

[jira] [Work logged] (HDFS-16672) Fix lease interval comparison in BlockReportLeaseManager

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16672?focusedWorklogId=796786=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796786
 ]

ASF GitHub Bot logged work on HDFS-16672:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 06:37
Start Date: 01/Aug/22 06:37
Worklog Time Spent: 10m 
  Work Description: cxzl25 commented on PR #4598:
URL: https://github.com/apache/hadoop/pull/4598#issuecomment-1200775196

   > I don't get this improvement. It seems that `System.nanoTime()` is read in 
a single thread and then compared. Do you meet any issue here? Thanks.
   
   `System.nanoTime()` may return a negative number, so the direct comparison 
here can be wrong.
   
   In `LeaseManager.Lease#expiredHardLimit()` we subtract the two nanoTime 
values and compare the difference. This is consistent with the usage of 
nanoTime recommended by the JDK.
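
The difference between the two forms can be shown with a self-contained example (synthetic values, not taken from BlockReportLeaseManager): `System.nanoTime()` readings are only meaningful as differences, and the subtraction survives numeric overflow while the direct `<` comparison does not.

```java
public class NanoTimeComparison {
    // Safe form: subtract the two readings first, then compare the elapsed
    // interval. Overflow cancels out in the subtraction.
    static boolean expired(long nowNs, long leaseTimeNs, long expiryNs) {
        return nowNs - leaseTimeNs >= expiryNs;
    }

    // Form from the report: leaseTimeNs + expiryNs may overflow and flip sign.
    static boolean expiredBuggy(long nowNs, long leaseTimeNs, long expiryNs) {
        return !(nowNs < leaseTimeNs + expiryNs);
    }

    public static void main(String[] args) {
        long lease = Long.MAX_VALUE - 5; // reading taken just before the wrap point
        long now = lease + 2;            // only 2 ns have elapsed
        long expiry = 10;
        System.out.println(expired(now, lease, expiry));      // false: not expired
        System.out.println(expiredBuggy(now, lease, expiry)); // true: premature expiry
    }
}
```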
   




Issue Time Tracking
---

Worklog Id: (was: 796786)
Time Spent: 50m  (was: 40m)

> Fix lease interval comparison in BlockReportLeaseManager
> 
>
> Key: HDFS-16672
> URL: https://issues.apache.org/jira/browse/HDFS-16672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> monotonicNowMs is generated by System.nanoTime(), direct comparison is not 
> recommended.
>  
> org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager#pruneIfExpired
> {code:java}
> if (monotonicNowMs < node.leaseTimeMs + leaseExpiryMs) {
>   return false;
> } {code}






[jira] [Work logged] (HDFS-16672) Fix lease interval comparison in BlockReportLeaseManager

2022-08-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16672?focusedWorklogId=796775=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796775
 ]

ASF GitHub Bot logged work on HDFS-16672:
-

Author: ASF GitHub Bot
Created on: 01/Aug/22 06:12
Start Date: 01/Aug/22 06:12
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on PR #4598:
URL: https://github.com/apache/hadoop/pull/4598#issuecomment-1200755820

   @cxzl25 Thanks for involving me here. I don't get this improvement. It seems 
that `System.nanoTime()` is read in a single thread and then compared. Do you 
meet any issue here? Thanks.




Issue Time Tracking
---

Worklog Id: (was: 796775)
Time Spent: 40m  (was: 0.5h)

> Fix lease interval comparison in BlockReportLeaseManager
> 
>
> Key: HDFS-16672
> URL: https://issues.apache.org/jira/browse/HDFS-16672
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> monotonicNowMs is generated by System.nanoTime(), direct comparison is not 
> recommended.
>  
> org.apache.hadoop.hdfs.server.blockmanagement.BlockReportLeaseManager#pruneIfExpired
> {code:java}
> if (monotonicNowMs < node.leaseTimeMs + leaseExpiryMs) {
>   return false;
> } {code}


