[jira] [Work logged] (HDFS-16648) Normalize the usage of debug logs in NameNode

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16648?focusedWorklogId=788841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788841 ]

ASF GitHub Bot logged work on HDFS-16648:
-

Author: ASF GitHub Bot
Created on: 08/Jul/22 05:11
Start Date: 08/Jul/22 05:11
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on code in PR #4529:
URL: https://github.com/apache/hadoop/pull/4529#discussion_r916468348


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2908,11 +2896,9 @@ public boolean processReport(final DatanodeID nodeID,
 
     if (blockLog.isDebugEnabled()) {
       for (Block b : invalidatedBlocks) {
-        if (blockLog.isDebugEnabled()) {
-          blockLog.debug("BLOCK* processReport 0x{} with lease ID 0x{}: {} on node {} size {} " +
-              "does not belong to any file.", strBlockReportId, fullBrLeaseId, b,
-              node, b.getNumBytes());
-        }
+        blockLog.debug("BLOCK* processReport 0x{} with lease ID 0x{}: {} on node {} size {} " +

Review Comment:
   The outer check is already there, so we can remove the redundant inner one:
   ```
   if (blockLog.isDebugEnabled()) {
     for (Block b : invalidatedBlocks) {
       blockLog.debug("BLOCK* processReport 0x{} with lease ID 0x{}: {} on node {} size {} " +
           "does not belong to any file.", strBlockReportId, fullBrLeaseId, b,
           node, b.getNumBytes());
     }
   }
   ```





Issue Time Tracking
---

Worklog Id: (was: 788841)
Time Spent: 1.5h  (was: 1h 20m)

> Normalize the usage of debug logs in NameNode
> -
>
> Key: HDFS-16648
> URL: https://issues.apache.org/jira/browse/HDFS-16648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There are many irregular debug logs in NameNode, such as:
> Error type 1: 
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Getting groups for user " + user);
> }
> {code}
> We can reformat it to:
> {code:java}
> LOG.debug("Getting groups for user {}. ", user);
> {code}
> Error type 2:
> {code:java}
> LOG.debug("*DIR* NameNode.renameSnapshot: Snapshot Path {}, " +
>     "snapshotOldName {}, snapshotNewName {}", snapshotRoot,
>     snapshotOldName, snapshotNewName);
> {code}
> We can reformat it to:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("*DIR* NameNode.renameSnapshot: Snapshot Path {}, " +
>       "snapshotOldName {}, snapshotNewName {}", snapshotRoot,
>       snapshotOldName, snapshotNewName);
> }
> {code}
> Error type 3:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("getAdditionalDatanode: src=" + src
>       + ", fileId=" + fileId
>       + ", blk=" + blk
>       + ", existings=" + Arrays.asList(existings)
>       + ", excludes=" + Arrays.asList(excludes)
>       + ", numAdditionalNodes=" + numAdditionalNodes
>       + ", clientName=" + clientName);
> }
> {code}
> We can reformat it to:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("getAdditionalDatanode: src={}, fileId={}, "
>       + "blk={}, existings={}, excludes={}, numAdditionalNodes={}, "
>       + "clientName={}.", src, fileId, blk, Arrays.asList(existings),
>       Arrays.asList(excludes), numAdditionalNodes, clientName);
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16648) Normalize the usage of debug logs in NameNode

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16648?focusedWorklogId=788834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788834 ]

ASF GitHub Bot logged work on HDFS-16648:
-

Author: ASF GitHub Bot
Created on: 08/Jul/22 04:45
Start Date: 08/Jul/22 04:45
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4529:
URL: https://github.com/apache/hadoop/pull/4529#discussion_r916458093


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java:
##
@@ -478,10 +478,7 @@ public void recoverUnfinalizedSegments() throws IOException {
     Map<AsyncLogger, NewEpochResponseProto> resps = createNewUniqueEpoch();
     LOG.info("Successfully started new epoch " + loggers.getEpoch());
 
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("newEpoch(" + loggers.getEpoch() + ") responses:\n" +
-          QuorumCall.mapToString(resps));
-    }
+    LOG.debug("newEpoch({}) responses:\n{}", loggers.getEpoch(), QuorumCall.mapToString(resps));

Review Comment:
   This particular one should stay within the isDebugEnabled() check to avoid the mapToString call when debug logging is disabled.
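   For reference, a sketch of the guarded form being asked for (keeping the patch's parameterized call inside the check):
   ```
   if (LOG.isDebugEnabled()) {
     LOG.debug("newEpoch({}) responses:\n{}", loggers.getEpoch(),
         QuorumCall.mapToString(resps));
   }
   ```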



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumJournalManager.java:
##
@@ -618,10 +612,8 @@ private void selectStreamingInputStreams(
     Map<AsyncLogger, RemoteEditLogManifest> resps =
         loggers.waitForWriteQuorum(q, selectInputStreamsTimeoutMs,
             "selectStreamingInputStreams");
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("selectStreamingInputStream manifests:\n {}",
-          Joiner.on("\n").withKeyValueSeparator(": ").join(resps));
-    }
+    LOG.debug("selectStreamingInputStream manifests:\n {}",
Review Comment:
   This probably should stay.
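   That is, something like the following sketch, keeping the guard around the Joiner call:
   ```
   if (LOG.isDebugEnabled()) {
     LOG.debug("selectStreamingInputStream manifests:\n {}",
         Joiner.on("\n").withKeyValueSeparator(": ").join(resps));
   }
   ```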



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -2908,11 +2896,9 @@ public boolean processReport(final DatanodeID nodeID,
 
     if (blockLog.isDebugEnabled()) {
       for (Block b : invalidatedBlocks) {
-        if (blockLog.isDebugEnabled()) {
-          blockLog.debug("BLOCK* processReport 0x{} with lease ID 0x{}: {} on node {} size {} " +
-              "does not belong to any file.", strBlockReportId, fullBrLeaseId, b,
-              node, b.getNumBytes());
-        }
+        blockLog.debug("BLOCK* processReport 0x{} with lease ID 0x{}: {} on node {} size {} " +

Review Comment:
   Don't we want to keep the check?





Issue Time Tracking
---

Worklog Id: (was: 788834)
Time Spent: 1h 20m  (was: 1h 10m)

> Normalize the usage of debug logs in NameNode
> -
>
> Key: HDFS-16648
> URL: https://issues.apache.org/jira/browse/HDFS-16648
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> There are many irregular debug logs in NameNode, such as:
> Error type 1: 
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Getting groups for user " + user);
> }
> {code}
> We can reformat it to:
> {code:java}
> LOG.debug("Getting groups for user {}. ", user);
> {code}
> Error type 2:
> {code:java}
> LOG.debug("*DIR* NameNode.renameSnapshot: Snapshot Path {}, " +
>     "snapshotOldName {}, snapshotNewName {}", snapshotRoot,
>     snapshotOldName, snapshotNewName);
> {code}
> We can reformat it to:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("*DIR* NameNode.renameSnapshot: Snapshot Path {}, " +
>       "snapshotOldName {}, snapshotNewName {}", snapshotRoot,
>       snapshotOldName, snapshotNewName);
> }
> {code}
> Error type 3:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("getAdditionalDatanode: src=" + src
>       + ", fileId=" + fileId
>       + ", blk=" + blk
>       + ", existings=" + Arrays.asList(existings)
>       + ", excludes=" + Arrays.asList(excludes)
>       + ", numAdditionalNodes=" + numAdditionalNodes
>       + ", clientName=" + clientName);
> }
> {code}
> We can reformat it to:
> {code:java}
> if (LOG.isDebugEnabled()) {
>   LOG.debug("getAdditionalDatanode: src={}, fileId={}, "
>       + "blk={}, existings={}, excludes={}, numAdditionalNodes={}, "
>       + "clientName={}.", src, fileId, blk, Arrays.asList(existings),
>       Arrays.asList(excludes), numAdditionalNodes, clientName);
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16466) Implement Linux permission flags on Windows

2022-07-07 Thread Gautham Banasandra (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gautham Banasandra resolved HDFS-16466.
---
Fix Version/s: 3.4.0
   Resolution: Fixed

Merged PR https://github.com/apache/hadoop/pull/4526 to trunk.

> Implement Linux permission flags on Windows
> ---
>
> Key: HDFS-16466
> URL: https://issues.apache.org/jira/browse/HDFS-16466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [statinfo.cc|https://github.com/apache/hadoop/blob/869317be0a1fdff23be5fc500dcd9ae4ecd7bc29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/statinfo.cc#L41-L49]
>  uses POSIX permission flags. These flags aren't available for Windows. We 
> need to implement the equivalent flags on Windows to make this cross platform 
> compatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16466) Implement Linux permission flags on Windows

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16466?focusedWorklogId=788831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788831 ]

ASF GitHub Bot logged work on HDFS-16466:
-

Author: ASF GitHub Bot
Created on: 08/Jul/22 03:59
Start Date: 08/Jul/22 03:59
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra merged PR #4526:
URL: https://github.com/apache/hadoop/pull/4526




Issue Time Tracking
---

Worklog Id: (was: 788831)
Time Spent: 3h  (was: 2h 50m)

> Implement Linux permission flags on Windows
> ---
>
> Key: HDFS-16466
> URL: https://issues.apache.org/jira/browse/HDFS-16466
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> [statinfo.cc|https://github.com/apache/hadoop/blob/869317be0a1fdff23be5fc500dcd9ae4ecd7bc29/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/common/statinfo.cc#L41-L49]
>  uses POSIX permission flags. These flags aren't available for Windows. We 
> need to implement the equivalent flags on Windows to make this cross platform 
> compatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16208) [FGL] Implement Delete API with FGL

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16208?focusedWorklogId=788800&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788800 ]

ASF GitHub Bot logged work on HDFS-16208:
-

Author: ASF GitHub Bot
Created on: 08/Jul/22 00:08
Start Date: 08/Jul/22 00:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #3376:
URL: https://github.com/apache/hadoop/pull/3376#issuecomment-1178389941

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  19m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
    _ fgl Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 21s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  23m 57s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/branch-mvninstall-root.txt) |  root in fgl failed.  |
   | -1 :x: |  compile  |  14m 43s | [/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/branch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) |  root in fgl failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |  12m 24s | [/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) |  root in fgl failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  checkstyle  |   4m 14s |  |  fgl passed  |
   | +1 :green_heart: |  mvnsite  |   3m  9s |  |  fgl passed  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  fgl passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  |  fgl passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   2m 50s | [/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html) |  hadoop-common-project/hadoop-common in fgl has 2 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 23s |  |  the patch passed  |
   | -1 :x: |  compile  |  14m 43s | [/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) |  root in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |  14m 43s | [/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/patch-compile-root-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt) |  root in the patch failed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |  12m 25s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) |  root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |  12m 25s | [/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3376/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) |  root in the patch failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 57s | 

[jira] [Resolved] (HDFS-14656) RBF: NPE in RBFMetrics

2022-07-07 Thread Ayush Saxena (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HDFS-14656.
-
Resolution: Duplicate

> RBF: NPE in RBFMetrics
> --
>
> Key: HDFS-14656
> URL: https://issues.apache.org/jira/browse/HDFS-14656
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics.getActiveNamenodeRegistrations(RBFMetrics.java:726)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics.getNameserviceAggregatedInt(RBFMetrics.java:688)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.RBFMetrics.getNumInMaintenanceDeadDataNodes(RBFMetrics.java:467)
>   at 
> org.apache.hadoop.hdfs.server.federation.metrics.NamenodeBeanMetrics.getNumInMaintenanceDeadDataNodes(NamenodeBeanMetrics.java:693)
>   at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at 
> com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
>   at 
> com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
>   at javax.management.StandardMBean.getAttribute(StandardMBean.java:372)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
>   ... 42 more
> 2019-07-16 19:35:35,228 [qtp1811922029-78] ERROR jmx.JMXJsonServlet 
> (JMXJsonServlet.java:writeAttribute(345)) - getting attribute 
> NumEnteringMaintenanceDataNodes of 
> Hadoop:service=NameNode,name=FSNamesystem-3 threw an exception
> javax.management.RuntimeMBeanException: java.lang.NullPointerException
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
>   at 
> org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
>   at 
> org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
>   at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
>   at 
> org.apache.hadoop.security.authentication.server.ProxyUserAuthenticationFilter.doFilter(ProxyUserAuthenticationFilter.java:104)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
>   at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:51)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:110)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>   at 
> 

[jira] [Resolved] (HDFS-13576) RBF: Add destination path length validation for add/update mount entry

2022-07-07 Thread Ayush Saxena (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-13576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HDFS-13576.
-
Resolution: Duplicate

> RBF: Add destination path length validation for add/update mount entry
> --
>
> Key: HDFS-13576
> URL: https://issues.apache.org/jira/browse/HDFS-13576
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Dibyendu Karmakar
>Assignee: Ayush Saxena
>Priority: Minor
>
> Currently there is no validation of the destination path length while 
> adding or updating a mount entry. But when later trying to create a directory 
> through this mount entry, 
> {noformat}
> RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$PathComponentTooLongException){noformat}
> is thrown, with an exception message such as 
> {noformat}
> "maximum path component name limit of ... directory / is 
> exceeded: limit=255 length=1817"{noformat}
>  
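> A hypothetical sketch of such a validation (names and structure assumed; not the committed fix):
> {code:java}
> private static final int MAX_COMPONENT_LENGTH = 255;
> 
> private static void verifyMaxComponentLength(String destination)
>     throws IOException {
>   for (String component : destination.split("/")) {
>     if (component.length() > MAX_COMPONENT_LENGTH) {
>       throw new IOException("Destination path component " + component
>           + " exceeds the limit: limit=" + MAX_COMPONENT_LENGTH
>           + " length=" + component.length());
>     }
>   }
> }
> {code}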



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12696) BlockPoolManager#startAll is called twice during DataNode startup

2022-07-07 Thread Ayush Saxena (Jira)


[ https://issues.apache.org/jira/browse/HDFS-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17563988#comment-17563988 ] 

Ayush Saxena commented on HDFS-12696:
-

[~nanda] Just observed you transitioned this to the Patch Available state 
yesterday. Unfortunately we no longer run Jenkins for patches.
And I have one more piece of bad news: I guess this got duplicated by and fixed in HDFS-15448.

> BlockPoolManager#startAll is called twice during DataNode startup
> -
>
> Key: HDFS-12696
> URL: https://issues.apache.org/jira/browse/HDFS-12696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-12696.000.patch
>
>
> As part of DataNode startup, {{BlockPoolManager#startAll}}, which starts all 
> {{BPServiceActor}} threads, is called twice.
> First, the {{Datanode}} constructor calls {{Datanode#startDataNode}}, which 
> invokes {{BlockPoolManager#refreshNamenodes}}, inside which we call {{startAll}}.
> Then, as part of {{Datanode#runDatanodeDaemon}}, we call 
> {{BlockPoolManager#startAll}} again.
> Since {{BPServiceActor}} checks whether {{bpThread}} is already running before 
> starting it again, the second call is ignored.
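> A minimal sketch of that guard (shape assumed from the description above; not necessarily the exact HDFS code):
> {code:java}
> void start() {
>   if ((bpThread != null) && (bpThread.isAlive())) {
>     // Already started by the first startAll(); the second call is a no-op.
>     return;
>   }
>   bpThread = new Thread(this);
>   bpThread.setDaemon(true);
>   bpThread.start();
> }
> {code}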



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16648) Normalize the usage of debug logs in NameNode

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16648?focusedWorklogId=788683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788683 ]

ASF GitHub Bot logged work on HDFS-16648:
-

Author: ASF GitHub Bot
Created on: 07/Jul/22 17:23
Start Date: 07/Jul/22 17:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4529:
URL: https://github.com/apache/hadoop/pull/4529#issuecomment-1177963649

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 37s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  21m 35s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  4s |  |  trunk passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  7s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 55s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 14s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 42s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 24s |  |  root: The patch generated 0 new + 515 unchanged - 1 fixed = 515 total (was 516)  |
   | +1 :green_heart: |  mvnsite  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 55s |  |  the patch passed with JDK Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 10s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 58s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 24s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 367m 47s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 34s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 625m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4529/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4529 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 860faa4db824 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 52f6c737a92067e6fda82b77bff964d0ca518197 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
[jira] [Commented] (HDFS-16652) Upgrade jquery datatable version references to v1.10.19

2022-07-07 Thread Steve Loughran (Jira)


[ https://issues.apache.org/jira/browse/HDFS-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17563887#comment-17563887 ] 

Steve Loughran commented on HDFS-16652:
---

could you submit this as a github PR?

> Upgrade jquery datatable version references to v1.10.19
> ---
>
> Key: HDFS-16652
> URL: https://issues.apache.org/jira/browse/HDFS-16652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: D M Murali Krishna Reddy
>Assignee: D M Murali Krishna Reddy
>Priority: Major
> Attachments: HDFS-16652.001.patch
>
>
> Upgrade jquery datatable version references in hdfs webapp to v1.10.19



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-13274) RBF: Extend RouterRpcClient to use multiple sockets

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-13274?focusedWorklogId=788493&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788493 ]

ASF GitHub Bot logged work on HDFS-13274:
-

Author: ASF GitHub Bot
Created on: 07/Jul/22 06:10
Start Date: 07/Jul/22 06:10
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on code in PR #4531:
URL: https://github.com/apache/hadoop/pull/4531#discussion_r915492822


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java:
##
@@ -1760,7 +1760,7 @@ UserGroupInformation getTicket() {
   return ticket;
 }
 
-private int getRpcTimeout() {

Review Comment:
   copy, I will do it.





Issue Time Tracking
---

Worklog Id: (was: 788493)
Time Spent: 40m  (was: 0.5h)

> RBF: Extend RouterRpcClient to use multiple sockets
> ---
>
> Key: HDFS-13274
> URL: https://issues.apache.org/jira/browse/HDFS-13274
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> HADOOP-13144 introduces the ability to create multiple connections for the 
> same user and use different sockets. The RouterRpcClient should use this 
> approach to get better throughput.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16283) RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16283?focusedWorklogId=788492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788492 ]

ASF GitHub Bot logged work on HDFS-16283:
-

Author: ASF GitHub Bot
Created on: 07/Jul/22 06:07
Start Date: 07/Jul/22 06:07
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on code in PR #4524:
URL: https://github.com/apache/hadoop/pull/4524#discussion_r915491438


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java:
##
@@ -1174,8 +1174,16 @@ public boolean mkdirs(String src, FsPermission masked, boolean createParent)
   }
 
   @Override // ClientProtocol
-  public void renewLease(String clientName) throws IOException {
+  public void renewLease(String clientName, List<String> namespaces)
+      throws IOException {
+    if (namespaces != null && namespaces.size() > 0) {
+      LOG.warn("namespaces({}) should be null or empty "
+          + "on NameNode side, please check it.", namespaces);
+      throw new IOException("namespaces(" + namespaces
+          + ") should be null or empty");
+    }
     checkNNStartup();
+    // just ignore nsIdentifies

Review Comment:
   copy, I will fix it.





Issue Time Tracking
---

Worklog Id: (was: 788492)
Time Spent: 5h 40m  (was: 5.5h)

> RBF: improve renewLease() to call only a specific NameNode rather than make 
> fan-out calls
> -
>
> Key: HDFS-16283
> URL: https://issues.apache.org/jira/browse/HDFS-16283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
> Attachments: RBF_ improve renewLease() to call only a specific 
> NameNode rather than make fan-out calls.pdf
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> Currently renewLease() against a router fans out to all the 
> NameNodes. Since the renewLease() call is so frequent, if one of the NameNodes 
> is slow, the router queues eventually get blocked by renewLease() calls, 
> causing router degradation. 
> We will make a change on the client side to keep track of the NameNode Id in 
> addition to the current fileId, so routers understand which NameNodes the 
> client is renewing its lease against.
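> A rough sketch of the client-side idea (names assumed; see the attached design PDF for the actual proposal):
> {code:java}
> // DFSClient side: track the namespace of each file held under lease, so
> // the router can renew against only the NameNodes that matter.
> private final Set<String> leasedNamespaces = ConcurrentHashMap.newKeySet();
> 
> void renewLease() throws IOException {
>   namenode.renewLease(clientName, new ArrayList<>(leasedNamespaces));
> }
> {code}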



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12213) Ozone: Corona: Support for online mode

2022-07-07 Thread Nandakumar (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nandakumar resolved HDFS-12213.
---
Resolution: Fixed

> Ozone: Corona: Support for online mode
> --
>
> Key: HDFS-12213
> URL: https://issues.apache.org/jira/browse/HDFS-12213
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Major
>  Labels: OzonePostMerge, tool
>
> This jira brings support for online mode in corona.
> In online mode, Common Crawl data from AWS will be used to populate ozone 
> with data. The default source is [CC-MAIN-2017-17/warc.paths.gz | 
> https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2017-17/warc.paths.gz] 
> (it contains the path to the actual data segment); the user can override this 
> using -source.
> The following values are derived from the URL of the Common Crawl data (a 
> sketch follows the list):
> * Domain will be used as Volume
> * URL will be used as Bucket
> * FileName will be used as Key
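> A hypothetical illustration of that mapping (not Corona's actual code; names assumed):
> {code:java}
> java.net.URI uri = java.net.URI.create(
>     "https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2017-17/warc.paths.gz");
> String volume = uri.getHost();                            // Domain -> Volume
> String path = uri.getPath();
> String bucket = path.substring(0, path.lastIndexOf('/')); // URL -> Bucket
> String key = path.substring(path.lastIndexOf('/') + 1);   // FileName -> Key
> {code}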



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12696) BlockPoolManager#startAll is called twice during DataNode startup

2022-07-07 Thread Nandakumar (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nandakumar updated HDFS-12696:
--
Status: Patch Available  (was: Open)

> BlockPoolManager#startAll is called twice during DataNode startup
> -
>
> Key: HDFS-12696
> URL: https://issues.apache.org/jira/browse/HDFS-12696
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-12696.000.patch
>
>
> As part of DataNode startup, {{BlockPoolManager#startAll}}, which starts all 
> {{BPServiceActor}} threads, is called twice.
> First, the {{Datanode}} constructor calls {{Datanode#startDataNode}}, which 
> invokes {{BlockPoolManager#refreshNamenodes}}, inside which we call {{startAll}}.
> Then, as part of {{Datanode#runDatanodeDaemon}}, we call 
> {{BlockPoolManager#startAll}} again.
> Since {{BPServiceActor}} checks whether {{bpThread}} is already running before 
> starting it again, the second call is ignored.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16283) RBF: improve renewLease() to call only a specific NameNode rather than make fan-out calls

2022-07-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-16283?focusedWorklogId=788490&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-788490 ]

ASF GitHub Bot logged work on HDFS-16283:
-

Author: ASF GitHub Bot
Created on: 07/Jul/22 06:02
Start Date: 07/Jul/22 06:02
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on code in PR #4524:
URL: https://github.com/apache/hadoop/pull/4524#discussion_r915487803


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java:
##
@@ -1174,8 +1174,16 @@ public boolean mkdirs(String src, FsPermission masked, boolean createParent)
   }
 
   @Override // ClientProtocol
-  public void renewLease(String clientName) throws IOException {
+  public void renewLease(String clientName, List<String> namespaces)
+      throws IOException {
+    if (namespaces != null && namespaces.size() > 0) {
+      LOG.warn("namespaces({}) should be null or empty "
+          + "on NameNode side, please check it.", namespaces);
+      throw new IOException("namespaces(" + namespaces
+          + ") should be null or empty");
+    }
     checkNNStartup();
+    // just ignore nsIdentifies

Review Comment:
   remove this line or change it to // Ignore the namespaces.





Issue Time Tracking
---

Worklog Id: (was: 788490)
Time Spent: 5.5h  (was: 5h 20m)

> RBF: improve renewLease() to call only a specific NameNode rather than make 
> fan-out calls
> -
>
> Key: HDFS-16283
> URL: https://issues.apache.org/jira/browse/HDFS-16283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>  Labels: pull-request-available
> Attachments: RBF_ improve renewLease() to call only a specific 
> NameNode rather than make fan-out calls.pdf
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Currently renewLease() against a router fans out to all the 
> NameNodes. Since the renewLease() call is so frequent, if one of the NameNodes 
> is slow, the router queues eventually get blocked by renewLease() calls, 
> causing router degradation. 
> We will make a change on the client side to keep track of the NameNode Id in 
> addition to the current fileId, so routers understand which NameNodes the 
> client is renewing its lease against.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org