[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=580484&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580484
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 10/Apr/21 05:12
Start Date: 10/Apr/21 05:12
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-817080125


   > Thanks @bshashikant for the fix. Can you extend a test covering the fix as
   > well?
   
   Added the test case.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580484)
Time Spent: 50m  (was: 40m)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}
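>
> For context, this Trash provisioning at startup only runs when ordered 
> snapshot deletion is turned on. Assuming the 3.4.0 key name (an assumption 
> on my part, not part of this patch), the switch looks like:
> {code:xml}
> <!-- Assumed key name for ordered snapshot deletion; the standby, being
>      read-only, cannot provision the Trash directories this requires. -->
> <property>
>   <name>dfs.namenode.snapshot.deletion.ordered</name>
>   <value>true</value>
> </property>
> {code}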



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=580483&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580483
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 10/Apr/21 05:12
Start Date: 10/Apr/21 05:12
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#discussion_r610998104



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -124,7 +124,13 @@
 import org.apache.hadoop.hdfs.server.namenode.metrics.ReplicatedBlocksMBean;
 import org.apache.hadoop.hdfs.server.protocol.SlowDiskReports;
 import org.apache.hadoop.ipc.ObserverRetryOnActiveException;
-import org.apache.hadoop.util.*;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.DataChecksum;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.VersionInfo;
+import org.apache.hadoop.util.ExitUtil;

Review comment:
   I guess it's better practice to avoid * imports. I would prefer to 
address them here.
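   
   For reference, the checker that flags these is typically Checkstyle's 
   AvoidStarImport module (a sketch only; Hadoop's actual checkstyle.xml may 
   configure this differently):
   
   ```xml
   <!-- Checkstyle rule that rejects wildcard imports such as
        "import org.apache.hadoop.util.*;" -->
   <module name="TreeWalker">
     <module name="AvoidStarImport"/>
   </module>
   ```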




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580483)
Time Spent: 40m  (was: 0.5h)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15959) Add support to digest based authentication in ZKDelegationTokenSecretManager

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15959?focusedWorklogId=580482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580482
 ]

ASF GitHub Bot logged work on HDFS-15959:
-

Author: ASF GitHub Bot
Created on: 10/Apr/21 05:04
Start Date: 10/Apr/21 05:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2888:
URL: https://github.com/apache/hadoop/pull/2888#issuecomment-817079199


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  20m  7s | 
[/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2888/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 1932 unchanged - 0 
fixed = 1933 total (was 1932)  |
   | +1 :green_heart: |  compile  |  18m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  javac  |  18m  7s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2888/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 1827 
unchanged - 0 fixed = 1828 total (was 1827)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  5s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2888/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 9 new + 12 
unchanged - 0 fixed = 21 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2888/1/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Work logged] (HDFS-15959) Add support to digest based authentication in ZKDelegationTokenSecretManager

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15959?focusedWorklogId=580463&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580463
 ]

ASF GitHub Bot logged work on HDFS-15959:
-

Author: ASF GitHub Bot
Created on: 10/Apr/21 02:04
Start Date: 10/Apr/21 02:04
Worklog Time Spent: 10m 
  Work Description: bolerio opened a new pull request #2888:
URL: https://github.com/apache/hadoop/pull/2888


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580463)
Remaining Estimate: 0h
Time Spent: 10m

> Add support to digest based authentication in ZKDelegationTokenSecretManager
> 
>
> Key: HDFS-15959
> URL: https://issues.apache.org/jira/browse/HDFS-15959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Borislav Iordanov
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15959) Add support to digest based authentication in ZKDelegationTokenSecretManager

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15959:
--
Labels: pull-request-available  (was: )

> Add support to digest based authentication in ZKDelegationTokenSecretManager
> 
>
> Key: HDFS-15959
> URL: https://issues.apache.org/jira/browse/HDFS-15959
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Borislav Iordanov
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15960) Router NamenodeHeartbeatService fails to authenticate with namenode in a kerberized envi

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15960?focusedWorklogId=580360&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580360
 ]

ASF GitHub Bot logged work on HDFS-15960:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 21:49
Start Date: 09/Apr/21 21:49
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2887:
URL: https://github.com/apache/hadoop/pull/2887#discussion_r610921207



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##
@@ -170,7 +172,20 @@ protected void serviceInit(Configuration configuration) 
throws Exception {
 
   @Override
   public void periodicInvoke() {
-    updateState();
+    try {
+      SecurityUtil.doAsCurrentUser(
+          new PrivilegedExceptionAction<Object>() {

Review comment:
   Can this be a lambda?
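   
   For illustration, PrivilegedExceptionAction has a single abstract run() 
   method, so the anonymous class above could collapse to a lambda along 
   these lines (a sketch only, not the committed code):
   
   ```java
   // Same behavior as the hunk above, with the anonymous
   // PrivilegedExceptionAction<Object> replaced by a lambda; the type
   // parameter is inferred as Object from "return null".
   try {
     SecurityUtil.doAsCurrentUser(() -> {
       updateState();
       return null;
     });
   } catch (IOException e) {
     // Placeholder handling; the real patch logs the namenode and message.
     LOG.error("Unexpected exception while communicating with the namenode", e);
   }
   ```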

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/NamenodeHeartbeatService.java
##
@@ -170,7 +172,20 @@ protected void serviceInit(Configuration configuration) 
throws Exception {
 
   @Override
   public void periodicInvoke() {
-    updateState();
+    try {
+      SecurityUtil.doAsCurrentUser(
+          new PrivilegedExceptionAction<Object>() {
+            @Override
+            public Object run() {
+              updateState();
+              return null;
+            }
+          });
+    } catch (IOException e) {
+      // Generic error that we don't know about
+      LOG.error("Unexpected exception while communicating with {}: {}",

Review comment:
   Can we have a unit test for this?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580360)
Time Spent: 0.5h  (was: 20m)

> Router NamenodeHeartbeatService fails to authenticate with namenode in a 
> kerberized envi
> 
>
> Key: HDFS-15960
> URL: https://issues.apache.org/jira/browse/HDFS-15960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We use hadoop.http.authentication.type = "kerberos", and when the 
> NamenodeHeartbeatService calls the namenode via JMX, it does not provide a 
> user security context, so the authentication token is not transmitted and 
> the call fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15423) RBF: WebHDFS create shouldn't choose DN from all sub-clusters

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15423?focusedWorklogId=580358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580358
 ]

ASF GitHub Bot logged work on HDFS-15423:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 21:48
Start Date: 09/Apr/21 21:48
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2605:
URL: https://github.com/apache/hadoop/pull/2605#issuecomment-816989551


   If @ayushtkn doesn't have further comments, I'll go ahead and merge this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580358)
Time Spent: 6h  (was: 5h 50m)

> RBF: WebHDFS create shouldn't choose DN from all sub-clusters
> -
>
> Key: HDFS-15423
> URL: https://issues.apache.org/jira/browse/HDFS-15423
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf, webhdfs
>Reporter: Chao Sun
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> In {{RouterWebHdfsMethods}}, for a {{CREATE}} call, {{chooseDatanode}} 
> first gets all DNs via {{getDatanodeReport}} and then randomly picks one 
> from the list via {{getRandomDatanode}}. This logic doesn't seem correct, 
> as it should pick a DN from the specific sub-cluster(s) of the input 
> {{path}}, as sketched below.
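>
> A hedged sketch of the intended behavior (the helper names below are 
> illustrative, not the actual {{RouterWebHdfsMethods}} API):
> {code:java}
> // Illustrative pseudocode only: resolve the path to its sub-cluster(s)
> // first and pick a datanode from those, instead of from a merged report
> // of every sub-cluster. resolvePath() and this getDatanodeReport overload
> // are hypothetical.
> List<RemoteLocation> locations = resolvePath(path);       // owners of 'path'
> DatanodeInfo[] candidates = getDatanodeReport(locations); // their DNs only
> DatanodeInfo chosen = getRandomDatanode(candidates);      // then randomize
> {code}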



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15878) Flaky test TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in Trunk

2021-04-09 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318339#comment-17318339
 ] 

Fengnan Li commented on HDFS-15878:
---

I think this will be fixed by 
[HDFS-15423|https://issues.apache.org/jira/browse/HDFS-15423]

> Flaky test 
> TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in 
> Trunk
> ---
>
> Key: HDFS-15878
> URL: https://issues.apache.org/jira/browse/HDFS-15878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Fengnan Li
>Priority: Major
>
> [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 
> 24.627 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.222 s  <<< ERROR!
> java.io.FileNotFoundException: File /test/testSyncable not found.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> /test/testSyncable not found.
>   at 
> 

[jira] [Updated] (HDFS-15878) Flaky test TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in Trunk

2021-04-09 Thread Fengnan Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-15878:
--
Component/s: rbf
 hdfs

> Flaky test 
> TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in 
> Trunk
> ---
>
> Key: HDFS-15878
> URL: https://issues.apache.org/jira/browse/HDFS-15878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, rbf
>Reporter: Renukaprasad C
>Assignee: Fengnan Li
>Priority: Major
>
> [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 
> 24.627 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.222 s  <<< ERROR!
> java.io.FileNotFoundException: File /test/testSyncable not found.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> /test/testSyncable not found.
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:90)
>   at 
> 

[jira] [Assigned] (HDFS-15878) Flaky test TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in Trunk

2021-04-09 Thread Fengnan Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li reassigned HDFS-15878:
-

Assignee: Fengnan Li

> Flaky test 
> TestRouterWebHDFSContractCreate>AbstractContractCreateTest#testSyncable in 
> Trunk
> ---
>
> Key: HDFS-15878
> URL: https://issues.apache.org/jira/browse/HDFS-15878
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Fengnan Li
>Priority: Major
>
> [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 2, Time elapsed: 
> 24.627 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate
> [ERROR] 
> testSyncable(org.apache.hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate)
>   Time elapsed: 0.222 s  <<< ERROR!
> java.io.FileNotFoundException: File /test/testSyncable not found.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:110)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toIOException(WebHdfsFileSystem.java:576)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$900(WebHdfsFileSystem.java:146)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.shouldRetry(WebHdfsFileSystem.java:892)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:858)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:652)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:690)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:686)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.getRedirectedUrl(WebHdfsFileSystem.java:2307)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$ReadRunner.<init>(WebHdfsFileSystem.java:2296)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem$WebHdfsInputStream.<init>(WebHdfsFileSystem.java:2176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.open(WebHdfsFileSystem.java:1610)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:975)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.validateSyncableSemantics(AbstractContractCreateTest.java:556)
>   at 
> org.apache.hadoop.fs.contract.AbstractContractCreateTest.testSyncable(AbstractContractCreateTest.java:459)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File 
> /test/testSyncable not found.
>   at 
> org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:90)
>   at 
> 

[jira] [Commented] (HDFS-15675) TestRouterRpcMultiDestination#testErasureCoding fails on trunk

2021-04-09 Thread Fengnan Li (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318315#comment-17318315
 ] 

Fengnan Li commented on HDFS-15675:
---

Is this still happening? If so, I would like to take it.

> TestRouterRpcMultiDestination#testErasureCoding fails on trunk
> --
>
> Key: HDFS-15675
> URL: https://issues.apache.org/jira/browse/HDFS-15675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> qbt report (Nov 8, 2020, 11:28 AM) shows failures in testErasureCoding



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15960) Router NamenodeHeartbeatService fails to authenticate with namenode in a kerberized envi

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15960?focusedWorklogId=580207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580207
 ]

ASF GitHub Bot logged work on HDFS-15960:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 17:56
Start Date: 09/Apr/21 17:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2887:
URL: https://github.com/apache/hadoop/pull/2887#issuecomment-816855180


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2887/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterAllResolver |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2887/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2887 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ed950788e3c0 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ed8da9f7a95c79e9c7c06aebca255a55aeb95244 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 

[jira] [Work logged] (HDFS-15960) Router NamenodeHeartbeatService fails to authenticate with namenode in a kerberized envi

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15960?focusedWorklogId=580117&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580117
 ]

ASF GitHub Bot logged work on HDFS-15960:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 16:21
Start Date: 09/Apr/21 16:21
Worklog Time Spent: 10m 
  Work Description: bolerio opened a new pull request #2887:
URL: https://github.com/apache/hadoop/pull/2887


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580117)
Remaining Estimate: 0h
Time Spent: 10m

> Router NamenodeHeartbeatService fails to authenticate with namenode in a 
> kerberized envi
> 
>
> Key: HDFS-15960
> URL: https://issues.apache.org/jira/browse/HDFS-15960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Borislav Iordanov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We use hadoop.http.authentication.type = "kerberos", and when the 
> NamenodeHeartbeatService calls the namenode via JMX, it does not provide a 
> user security context, so the authentication token is not transmitted and 
> the call fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15960) Router NamenodeHeartbeatService fails to authenticate with namenode in a kerberized envi

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15960:
--
Labels: pull-request-available  (was: )

> Router NamenodeHeartbeatService fails to authenticate with namenode in a 
> kerberized envi
> 
>
> Key: HDFS-15960
> URL: https://issues.apache.org/jira/browse/HDFS-15960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We use hadoop.http.authentication.type = "kerberos", and when the 
> NamenodeHeartbeatService calls the namenode via JMX, it does not provide a 
> user security context, so the authentication token is not transmitted and 
> the call fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318102#comment-17318102
 ] 

Íñigo Goiri commented on HDFS-15962:


Thanks [~gautham] for the patch.
Merged PR 2883.

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> strcasecmp isn't available in Visual C++. We need to make this cross-platform.
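>
> For illustration, the usual portability shim looks like this (a sketch, not 
> necessarily what the merged patch does; {{_stricmp}} is MSVC's equivalent of 
> POSIX {{strcasecmp}}):
> {code:cpp}
> // Portable case-insensitive compare: MSVC does not ship strcasecmp, but
> // _stricmp has the same semantics; POSIX systems get it from <strings.h>.
> #ifdef _MSC_VER
> #include <string.h>
> #define strcasecmp _stricmp
> #else
> #include <strings.h>
> #endif
> {code}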



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15962:
---
Affects Version/s: (was: 3.4.0)

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> strcasecmp isn't available in Visual C++. We need to make this cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15962:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> strcasecmp isn't available in Visual C++. We need to make this cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?focusedWorklogId=580103&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580103
 ]

ASF GitHub Bot logged work on HDFS-15962:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 16:01
Start Date: 09/Apr/21 16:01
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #2883:
URL: https://github.com/apache/hadoop/pull/2883


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580103)
Time Spent: 0.5h  (was: 20m)

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> strcasecmp isn't available in Visual C++. We need to make this cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-15962:
---
Status: Patch Available  (was: Open)

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> strcasecmp isn't available in Visual C++. We need to make this cross-platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14904) Add Option to let Balancer prefer highly utilized nodes in each iteration

2021-04-09 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17318060#comment-17318060
 ] 

Ahmed Hussein commented on HDFS-14904:
--

Hey [~LeonG] and [~jingzhao]! Thanks for the contribution.

I find this a very useful improvement.

Is it possible to commit this change to branch-3.3 too, please?

> Add Option to let Balancer prefer highly utilized nodes in each iteration
> -
>
> Key: HDFS-14904
> URL: https://issues.apache.org/jira/browse/HDFS-14904
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Normally the most important purpose of the HDFS balancer is to reduce the 
> top used nodes and prevent datanode usage from getting too high.
> Currently, the balancer picks source nodes almost randomly regardless of 
> usage, which makes it slow to bring down the top used datanodes when the 
> cluster has few underutilized nodes (consider expansion).
> We can add an option to prefer the top used nodes first in each iteration, 
> as suggested in HDFS-14894.
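>
> If this lands as a {{-sortTopNodes}} balancer flag (an assumed flag name on 
> my part; verify against the merged docs), the invocation would look like:
> {code}
> # Assumed flag: prefer the most utilized datanodes as sources each iteration.
> hdfs balancer -sortTopNodes -threshold 5
> {code}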



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) Fix NullPointException when start dfsrouter

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=580060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580060
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 14:41
Start Date: 09/Apr/21 14:41
Worklog Time Spent: 10m 
  Work Description: lamberken closed pull request #2284:
URL: https://github.com/apache/hadoop/pull/2284


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580060)
Time Spent: 1h  (was: 50m)

> Fix NullPointException when start dfsrouter
> ---
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE:
> {code:java}
> 2020-09-08 19:41:14,989 ERROR org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: Unexpected exception while communicating with null:null: java.net.UnknownHostException: null
> 2020-09-08 19:41:14,989 ERROR org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: Unexpected exception while communicating with null:null: java.net.UnknownHostException: null
> java.lang.IllegalArgumentException: java.net.UnknownHostException: null
>   at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>   at org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>   at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>   at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>   at java.base/java.lang.Thread.run(Thread.java:844)
> Caused by: java.net.UnknownHostException: null
>   ... 14 more
> {code}
>  
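> A hedged sketch of the RBF settings involved: the UnknownHostException: null 
> above is what the heartbeat service reports when it cannot resolve an RPC 
> address for a monitored namenode, e.g. when keys like these are missing or 
> inconsistent (my reading of the stack trace, not of the patch):
> {code:xml}
> <!-- Which namenode(s) the router monitors; the values are placeholders. -->
> <property>
>   <name>dfs.federation.router.monitor.namenode</name>
>   <value>ns1.nn1</value>
> </property>
> <!-- The RPC address the heartbeat service resolves for that namenode. -->
> <property>
>   <name>dfs.namenode.rpc-address.ns1.nn1</name>
>   <value>namenode1.example.com:8020</value>
> </property>
> {code}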



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) Fix NullPointException when start dfsrouter

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=580059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-580059
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 14:41
Start Date: 09/Apr/21 14:41
Worklog Time Spent: 10m 
  Work Description: lamberken commented on pull request #2284:
URL: https://github.com/apache/hadoop/pull/2284#issuecomment-816731174


   > @lamberken are you still working on this issue? If not can I take it on?
   > Thanks!
   
   Hi @fengnanli, I have closed the patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 580059)
Time Spent: 50m  (was: 40m)

> Fix NullPointException when start dfsrouter
> ---
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE:
> {code:java}
> 2020-09-08 19:41:14,989 ERROR org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: Unexpected exception while communicating with null:null: java.net.UnknownHostException: null
> 2020-09-08 19:41:14,989 ERROR org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: Unexpected exception while communicating with null:null: java.net.UnknownHostException: null
> java.lang.IllegalArgumentException: java.net.UnknownHostException: null
>   at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123)
>   at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>   at org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>   at org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>   at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>   at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>   at java.base/java.lang.Thread.run(Thread.java:844)
> Caused by: java.net.UnknownHostException: null
>   ... 14 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15815) if required storageType are unavailable, log the failed reason during choosing Datanode

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15815?focusedWorklogId=579947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579947
 ]

ASF GitHub Bot logged work on HDFS-15815:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 12:27
Start Date: 09/Apr/21 12:27
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882#issuecomment-816646538


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  22m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   4m  1s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  23m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   4m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 235m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2882/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 364m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestSafeModeWithStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2882/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2882 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3ba287a43482 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / e0f095449b960bd1835dfb8b02cb9a4163b38c08 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2882/1/testReport/ |
   | Max. process+thread count | 2083 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2882/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking

[jira] [Work logged] (HDFS-15961) standby namenode failed to start when ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=579932=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579932
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 11:54
Start Date: 09/Apr/21 11:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-816629423


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 163 unchanged - 2 
fixed = 163 total (was 165)  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 250m 56s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2881/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 336m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2881/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 72b359fb9c07 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 

[jira] [Work logged] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?focusedWorklogId=579918=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579918
 ]

ASF GitHub Bot logged work on HDFS-15962:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 11:30
Start Date: 09/Apr/21 11:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2883:
URL: https://github.com/apache/hadoop/pull/2883#issuecomment-816618010


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   2m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  59m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  cc  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  cc  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 124m 46s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 209m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2883/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2883 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 5bff810d20b1 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e15dbb0cf8677907aa18aeb83642ad38ac93205e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2883/1/testReport/ |
   | Max. process+thread count | 550 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2883/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579918)
Time Spent: 20m  (was: 10m)

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> 

[jira] [Commented] (HDFS-15815) if required storageTypes are unavailable, log the failure reason when choosing a Datanode

2021-04-09 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317920#comment-17317920
 ] 

Renukaprasad C commented on HDFS-15815:
---

Thanks [~hadoop_yangyun] for reporting this, and [~ayushtkn] [~hexiaoqiao] for the review.
Can we merge this to 3.3/3.2 & 3.1 as well?

>  if required storageTypes are unavailable, log the failure reason when 
> choosing a Datanode
> 
>
> Key: HDFS-15815
> URL: https://issues.apache.org/jira/browse/HDFS-15815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageTypes are unavailable, log the 
> failure reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.
>  
>  
>  
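
A minimal sketch of the intent, for illustration only: NO_REQUIRED_STORAGE_TYPE 
is the reason named in the description, while the enum, method, and logger 
below are assumptions rather than the actual patch:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class PlacementDebugSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PlacementDebugSketch.class);

  // illustrative reasons a datanode may be rejected during placement
  enum NodeNotChosenReason {
    NOT_IN_SERVICE, NODE_STALE, NO_REQUIRED_STORAGE_TYPE
  }

  static void logNodeIsNotChosen(String node, NodeNotChosenReason reason) {
    // surfaces the rejection reason instead of failing silently
    LOG.debug("Datanode {} is not chosen because {}", node, reason);
  }

  public static void main(String[] args) {
    // e.g. every remaining storage type on the node is unavailable
    logNodeIsNotChosen("127.0.0.1:9866",
        NodeNotChosenReason.NO_REQUIRED_STORAGE_TYPE);
  }
}
```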



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15865) Interrupt DataStreamer thread

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15865?focusedWorklogId=579903=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579903
 ]

ASF GitHub Bot logged work on HDFS-15865:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 11:07
Start Date: 09/Apr/21 11:07
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #2728:
URL: https://github.com/apache/hadoop/pull/2728#discussion_r610540938



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##
@@ -895,6 +895,8 @@ void waitForAckedSeqno(long seqno) throws IOException {
 try (TraceScope ignored = dfsClient.getTracer().
 newScope("waitForAckedSeqno")) {
   LOG.debug("{} waiting for ack for: {}", this, seqno);
+  int dnodes = nodes != null ? nodes.length : 3;
+  int writeTimeout = dfsClient.getDatanodeWriteTimeout(dnodes);

Review comment:
   This timeout is very long. For a 3 node pipeline, it will be 8 minutes + 
3 * 5 seconds (for the extension).
   
   I'm not sure I have a better suggestion for the timeout.
   
   One question - I believe we saw this problem in a hung HiveServer2 
process. Do we know how this problem causes the entire HS2 instance to get 
hung? I would have thought this issue would block the closing of a single file 
on HDFS, and that other files open within the same client could still progress 
as normal?
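   
   For reference, the arithmetic behind that figure, as a rough sketch. The 
constants mirror the usual client defaults (an 8 minute base write timeout 
plus a 5 second extension per datanode) but are assumptions here, not taken 
from the patch:
   
   ```java
   public class WriteTimeoutSketch {
     static final int WRITE_TIMEOUT = 8 * 60 * 1000;      // 8 minutes, in ms
     static final int WRITE_TIMEOUT_EXTENSION = 5 * 1000; // 5 seconds per node
   
     // mirrors the shape of DFSClient#getDatanodeWriteTimeout
     static int datanodeWriteTimeout(int numNodes) {
       return WRITE_TIMEOUT > 0
           ? WRITE_TIMEOUT + WRITE_TIMEOUT_EXTENSION * numNodes
           : 0;
     }
   
     public static void main(String[] args) {
       // 3-node pipeline: 480000 + 15000 = 495000 ms, i.e. 8 min 15 s
       System.out.println(datanodeWriteTimeout(3));
     }
   }
   ```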




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579903)
Time Spent: 1h 20m  (was: 1h 10m)

> Interrupt DataStreamer thread
> -
>
> Key: HDFS-15865
> URL: https://issues.apache.org/jira/browse/HDFS-15865
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Karthik Palanisamy
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have noticed HiveServer2 halt due to DataStreamer#waitForAckedSeqno. 
> I think we have to interrupt the DataStreamer if no packet ack arrives from 
> the datanodes. This likely happens with an infra/network issue.
> {code:java}
> "HiveServer2-Background-Pool: Thread-35977576" #35977576 prio=5 os_prio=0 
> cpu=797.65ms elapsed=3406.28s tid=0x7fc0c6c29800 nid=0x4198 in 
> Object.wait()  [0x7fc1079f3000]
>     java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at java.lang.Object.wait(java.base(at)11.0.5/Native Method)
>  - waiting on 
>  at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:886)
>  - waiting to re-lock in wait() <0x7fe6eda86ca0> (a 
> java.util.LinkedList){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15865) Interrupt DataStreamer thread

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15865?focusedWorklogId=579895=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579895
 ]

ASF GitHub Bot logged work on HDFS-15865:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 10:46
Start Date: 09/Apr/21 10:46
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #2728:
URL: https://github.com/apache/hadoop/pull/2728#discussion_r610529393



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##
@@ -905,6 +907,14 @@ void waitForAckedSeqno(long seqno) throws IOException {
 }
 try {
   dataQueue.wait(1000); // when we receive an ack, we notify on
+  long duration = Time.monotonicNow() - begin;
+  if (duration > writeTimeout) {
+LOG.error("No ack received, took {}ms (threshold={}ms). "
++ "File being written: {}, block: {}, "
++ "Write pipeline datanodes: {}.",
+duration, writeTimeout, src, block, nodes);
+throw new InterruptedIOException("No ack received. ");

Review comment:
   I think it would be good to log the duration waited and perhaps the 
timeout in the exception, eg:
   
   ```
   throw new InterruptedIOException("No ack received after " + duration / 1000 
+ "s and a timeout of " + writeTimeout / 1000 + "s");
   ```
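   
   Putting the hunk and this suggestion together, a self-contained sketch of 
the bounded-wait shape; `dataQueue` and the timeout follow the diff, while the 
BooleanSupplier is an assumed stand-in for the real acked-seqno check:
   
   ```java
   import java.io.InterruptedIOException;
   import java.util.function.BooleanSupplier;
   
   class AckWaitSketch {
     private final Object dataQueue = new Object();
   
     void waitForAcked(BooleanSupplier acked, long writeTimeoutMs)
         throws InterruptedIOException {
       long begin = System.currentTimeMillis(); // Time.monotonicNow() in HDFS
       synchronized (dataQueue) {
         try {
           while (!acked.getAsBoolean()) {
             dataQueue.wait(1000); // the response processor notifies per ack
             long duration = System.currentTimeMillis() - begin;
             if (duration > writeTimeoutMs) {
               throw new InterruptedIOException("No ack received after "
                   + duration / 1000 + "s and a timeout of "
                   + writeTimeoutMs / 1000 + "s");
             }
           }
         } catch (InterruptedException e) {
           Thread.currentThread().interrupt();
           throw new InterruptedIOException("Interrupted while waiting for ack");
         }
       }
     }
   }
   ```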




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579895)
Time Spent: 1h 10m  (was: 1h)

> Interrupt DataStreamer thread
> -
>
> Key: HDFS-15865
> URL: https://issues.apache.org/jira/browse/HDFS-15865
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Karthik Palanisamy
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have noticed HiveServer2 halt due to DataStreamer#waitForAckedSeqno. 
> I think we have to interrupt the DataStreamer if no packet ack arrives from 
> the datanodes. This likely happens with an infra/network issue.
> {code:java}
> "HiveServer2-Background-Pool: Thread-35977576" #35977576 prio=5 os_prio=0 
> cpu=797.65ms elapsed=3406.28s tid=0x7fc0c6c29800 nid=0x4198 in 
> Object.wait()  [0x7fc1079f3000]
>     java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at java.lang.Object.wait(java.base(at)11.0.5/Native Method)
>  - waiting on 
>  at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:886)
>  - waiting to re-lock in wait() <0x7fe6eda86ca0> (a 
> java.util.LinkedList){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-04-09 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15160:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

This is backported to branch-3.3 via the PR.

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it. This Jira switches read operations against the volume map to use the 
> readLock rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.
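
A generic sketch of the locking pattern being adopted, using plain JDK locks 
rather than the actual FsDatasetImpl fields (the class and names below are 
illustrative assumptions): read-mostly callers share the lock, while mutations 
still take it exclusively.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReplicaMapSketch {
  // fair, so a waiting writer is not starved by a stream of readers
  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock(true);
  private final Map<Long, String> replicas = new HashMap<>();

  // read path (block reports, deep copies): shared lock, runs concurrently
  String get(long blockId) {
    rwLock.readLock().lock();
    try {
      return replicas.get(blockId);
    } finally {
      rwLock.readLock().unlock();
    }
  }

  // write path (adding/removing replicas): exclusive lock
  void add(long blockId, String replica) {
    rwLock.writeLock().lock();
    try {
      replicas.put(blockId, replica);
    } finally {
      rwLock.writeLock().unlock();
    }
  }
}
```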



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=579863=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579863
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 09:26
Start Date: 09/Apr/21 09:26
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#discussion_r610480017



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -124,7 +124,13 @@
 import org.apache.hadoop.hdfs.server.namenode.metrics.ReplicatedBlocksMBean;
 import org.apache.hadoop.hdfs.server.protocol.SlowDiskReports;
 import org.apache.hadoop.ipc.ObserverRetryOnActiveException;
-import org.apache.hadoop.util.*;
+import org.apache.hadoop.util.Time;
+import org.apache.hadoop.util.Daemon;
+import org.apache.hadoop.util.DataChecksum;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.util.VersionInfo;
+import org.apache.hadoop.util.ExitUtil;

Review comment:
   nit:
   Avoid touching the imports; keep unrelated import churn out of this patch.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579863)
Time Spent: 20m  (was: 10m)

> standby namenode failed to start when ordered snapshot deletion is enabled 
> while having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-04-09 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15160:
-
Fix Version/s: 3.3.1

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it. This Jira switches read operations against the volume map to use the 
> readLock rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15160?focusedWorklogId=579862=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579862
 ]

ASF GitHub Bot logged work on HDFS-15160:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 09:25
Start Date: 09/Apr/21 09:25
Worklog Time Spent: 10m 
  Work Description: sodonnel merged pull request #2813:
URL: https://github.com/apache/hadoop/pull/2813


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579862)
Time Spent: 0.5h  (was: 20m)

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15160-branch-3.3-001.patch, HDFS-15160.001.patch, 
> HDFS-15160.002.patch, HDFS-15160.003.patch, HDFS-15160.004.patch, 
> HDFS-15160.005.patch, HDFS-15160.006.patch, HDFS-15160.007.patch, 
> HDFS-15160.008.patch, HDFS-15160.branch-3-3.001.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Now that we have HDFS-15150, we can start to move some DN operations to use 
> the read lock rather than the write lock to improve concurrency. The first 
> step is to make the changes to ReplicaMap, as many other methods make calls 
> to it. This Jira switches read operations against the volume map to use the 
> readLock rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (e.g. 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read-only fashion, so they can also be switched to using a readLock.
> Next are the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15759) EC: Verify EC reconstruction correctness on DataNode

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=579857=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579857
 ]

ASF GitHub Bot logged work on HDFS-15759:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 09:13
Start Date: 09/Apr/21 09:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2868:
URL: https://github.com/apache/hadoop/pull/2868#issuecomment-816543817


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 20s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 30s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 34s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   5m  8s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  15m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 16s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   5m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  15m 32s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  | 216m 46s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 351m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestBlockRecovery |
   |   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2868 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint xml |
   | uname | Linux 4cbf0596110f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 78e228f24d8356db22d8ff3013a64724e18a2372 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2868/2/testReport/ |
   | Max. process+thread count | 3088 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
   | 

[jira] [Updated] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15962:
--
Labels: pull-request-available  (was: )

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> strcasecmp isn't available on Visual C++. Need to make this cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15962?focusedWorklogId=579840=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579840
 ]

ASF GitHub Bot logged work on HDFS-15962:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 08:00
Start Date: 09/Apr/21 08:00
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra opened a new pull request #2883:
URL: https://github.com/apache/hadoop/pull/2883


   * strcasecmp isn't available on Visual C++.
 Need to make this cross platform.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579840)
Remaining Estimate: 0h
Time Spent: 10m

> Make strcasecmp cross platform
> --
>
> Key: HDFS-15962
> URL: https://issues.apache.org/jira/browse/HDFS-15962
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> strcasecmp isn't available on Visual C++. Need to make this cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15956) Provide utility class for FSNamesystem

2021-04-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15956:

Target Version/s:   (was: 3.3.1, 3.4.0)

> Provide utility class for FSNamesystem
> --
>
> Key: HDFS-15956
> URL: https://issues.apache.org/jira/browse/HDFS-15956
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> With ever-growing functionality, FSNamesystem has become very large (~9k 
> lines of code) over time; we should provide a utility class and refactor as 
> many basic utility functions into the new class as we can.
> With any further suggestions, we can create sub-tasks of this Jira and work 
> on them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15956) Provide utility class for FSNamesystem

2021-04-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved HDFS-15956.
-
Resolution: Won't Fix

> Provide utility class for FSNamesystem
> --
>
> Key: HDFS-15956
> URL: https://issues.apache.org/jira/browse/HDFS-15956
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> With ever-growing functionality, FSNamesystem has become very large (~9k 
> lines of code) over time; we should provide a utility class and refactor as 
> many basic utility functions into the new class as we can.
> With any further suggestions, we can create sub-tasks of this Jira and work 
> on them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15956) Provide utility class for FSNamesystem

2021-04-09 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HDFS-15956:
---

Assignee: (was: Viraj Jasani)

> Provide utility class for FSNamesystem
> --
>
> Key: HDFS-15956
> URL: https://issues.apache.org/jira/browse/HDFS-15956
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> With ever-growing functionality, FSNamesystem has become very large (~9k 
> lines of code) over time; we should provide a utility class and refactor as 
> many basic utility functions into the new class as we can.
> With any further suggestions, we can create sub-tasks of this Jira and work 
> on them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15956) Provide utility class for FSNamesystem

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15956?focusedWorklogId=579833=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579833
 ]

ASF GitHub Bot logged work on HDFS-15956:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 07:38
Start Date: 09/Apr/21 07:38
Worklog Time Spent: 10m 
  Work Description: virajjasani closed pull request #2876:
URL: https://github.com/apache/hadoop/pull/2876


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579833)
Time Spent: 1.5h  (was: 1h 20m)

> Provide utility class for FSNamesystem
> --
>
> Key: HDFS-15956
> URL: https://issues.apache.org/jira/browse/HDFS-15956
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> With ever-growing functionality, FSNamesystem has become very large (~9k 
> lines of code) over time; we should provide a utility class and refactor as 
> many basic utility functions into the new class as we can.
> With any further suggestions, we can create sub-tasks of this Jira and work 
> on them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15962) Make strcasecmp cross platform

2021-04-09 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HDFS-15962:
-

 Summary: Make strcasecmp cross platform
 Key: HDFS-15962
 URL: https://issues.apache.org/jira/browse/HDFS-15962
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs++
Affects Versions: 3.4.0
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


strcasecmp isn't available on Visual C++. Need to make this cross platform.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15957) The ignored IOException in the RPC response sent by FSEditLogAsync can cause the HDFS client to hang

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15957?focusedWorklogId=579803=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579803
 ]

ASF GitHub Bot logged work on HDFS-15957:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:49
Start Date: 09/Apr/21 06:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2878:
URL: https://github.com/apache/hadoop/pull/2878#issuecomment-816454504


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  31m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 382m  4s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2878/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 507m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | 

[jira] [Updated] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15790:
-
Target Version/s: 3.3.1, 3.4.0
Priority: Critical  (was: Major)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive 
> project.  This was not a good thing to do between minor versions with regard 
> to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated, so let us have both 
> protocols available at the same time.  In Hadoop 4.x, Protobuf 2 support can 
> be dropped.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=579791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579791
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:35
Start Date: 09/Apr/21 06:35
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#issuecomment-816447348


   Thank you @vinayakumarb. Mostly looks good to me.
   
   Hi @belugabehr, do you have any comments?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579791)
Time Spent: 1h 50m  (was: 1h 40m)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive 
> project.  This was not a good thing to do between minor versions with regard 
> to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements; they have 
> some differences.  Also, Protobuf 2 is not deprecated, so let us have both 
> protocols available at the same time.  In Hadoop 4.x, Protobuf 2 support can 
> be dropped.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15175) Multiple CloseOp shared block instance causes the standby namenode to crash when rolling editlog

2021-04-09 Thread Max Xie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317704#comment-17317704
 ] 

Max  Xie commented on HDFS-15175:
-

I think this case is related to async edit logging, too. I tried to add a test
for this case.

```java
@Test
public void testEditLogAsync() throws Exception {
  // start a cluster
  Configuration conf = getConf();
  conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_EDITS_ASYNC_LOGGING, true);
  MiniDFSCluster cluster = null;
  FileSystem fileSys = null;
  try {
    cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(NUM_DATA_NODES).build();
    cluster.waitActive();
    fileSys = cluster.getFileSystem();
    final FSNamesystem namesystem = cluster.getNamesystem();
    FSImage fsimage = namesystem.getFSImage();
    final FSEditLogAsync editLog = (FSEditLogAsync) fsimage.getEditLog();

    // prepare a file with one block
    int blocksPerFile = 1;
    short replication = 1;
    long blockSize = 2048;
    long blockGenStamp = 1;
    BlockInfo[] blocks = new BlockInfo[blocksPerFile];
    for (int iB = 0; iB < blocksPerFile; ++iB) {
      blocks[iB] =
          new BlockInfoContiguous(new Block(0, blockSize, blockGenStamp),
              replication);
    }
    INodeId inodeId = new INodeId();
    final INodeFile inode = new INodeFile(inodeId.nextValue(), null,
        new PermissionStatus("joeDoe", "people",
            new FsPermission((short) 0777)),
        0L, 0L, blocks, replication, blockSize);

    editLog.logCloseFile("/testfile", inode);

    // Simulate a truncateOp: it changes the block's numBytes to newBlockSize
    int newBlockSize = 1024;
    inode.getBlocks()[0].setNumBytes(newBlockSize);

    // Quickly read the CloseOp from FSEditLogAsync.editPendingQ.
    // If not read quickly, it may be consumed and the issue can't be reproduced.
    long closeOpBlockNumByte = ((CloseOp) editLog.getEditPendingQElementOp())
        .getBlocks()[0].getNumBytes();

    // closeOpBlockNumByte should equal blockSize, but it has already been
    // changed to newBlockSize because the CloseOp shares the block instance
    assertEquals(closeOpBlockNumByte, blockSize);

    editLog.close();
  } finally {
    if (fileSys != null) fileSys.close();
    if (cluster != null) cluster.shutdown();
  }
}

// add getEditPendingQElementOp for testing in FSEditLogAsync
@VisibleForTesting
public FSEditLogOp getEditPendingQElementOp() {
  return editPendingQ.element().op;
}
```

 

 

> Multiple CloseOp shared block instance causes the standby namenode to crash 
> when rolling editlog
> 
>
> Key: HDFS-15175
> URL: https://issues.apache.org/jira/browse/HDFS-15175
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Yicong Cai
>Assignee: Wan Chang
>Priority: Critical
>  Labels: NameNode
> Attachments: HDFS-15175-trunk.1.patch
>
>
>  
> {panel:title=Crash exception}
> 2020-02-16 09:24:46,426 [507844305] - ERROR [Edit log 
> tailer:FSEditLogLoader@245] - Encountered exception on operation CloseOp 
> [length=0, inodeId=0, path=..., replication=3, mtime=1581816138774, 
> atime=1581814760398, blockSize=536870912, blocks=[blk_5568434562_4495417845], 
> permissions=da_music:hdfs:rw-r-, aclEntries=null, clientName=, 
> clientMachine=, overwrite=false, storagePolicyId=0, opCode=OP_CLOSE, 
> txid=32625024993]
>  java.io.IOException: File is not under construction: ..
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:442)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:237)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:146)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:891)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:872)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:262)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:395)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:348)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:365)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:360)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:479)
>  at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:361)
> {panel}
>  
> {panel:title=Editlog}
> <RECORD>
>   <OPCODE>OP_REASSIGN_LEASE</OPCODE>
>   <DATA>
>     <TXID>32625021150</TXID>
>     <LEASEHOLDER>DFSClient_NONMAPREDUCE_-969060727_197760</LEASEHOLDER>
>     <PATH>..</PATH>
>     <NEWHOLDER>DFSClient_NONMAPREDUCE_1000868229_201260</NEWHOLDER>
>   </DATA>
> </RECORD>
> ..
> {panel}
>  

[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=579789=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579789
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:31
Start Date: 09/Apr/21 06:31
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r610378108



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
##
@@ -179,10 +281,41 @@ public void testProtoBufRpc2() throws Exception {
 MetricsRecordBuilder rpcDetailedMetrics = 
 getMetrics(server.getRpcDetailedMetrics().name());
 assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
+
+if (testWithLegacy) {
+  testProtobufLegacy();
+}
+  }
+
+  private void testProtobufLegacy()
+  throws IOException, com.google.protobuf.ServiceException {
+TestRpcService2Legacy client = getClientLegacy();
+
+// Test ping method
+client.ping2(null,
+    TestProtosLegacy.EmptyRequestProto.newBuilder().build());
+
+// Test echo method
+TestProtosLegacy.EchoResponseProto echoResponse = client.echo2(null,
+TestProtosLegacy.EchoRequestProto.newBuilder().setMessage("hello")
+.build());
+assertThat(echoResponse.getMessage()).isEqualTo("hello");
+
+// Ensure RPC metrics are updated
+MetricsRecordBuilder rpcMetrics =
+    getMetrics(server.getRpcMetrics().name());
+assertCounterGt("RpcQueueTimeNumOps", 0L, rpcMetrics);
+assertCounterGt("RpcProcessingTimeNumOps", 0L, rpcMetrics);
+
+MetricsRecordBuilder rpcDetailedMetrics =
+getMetrics(server.getRpcDetailedMetrics().name());
+assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
   }
 
   @Test (timeout=5000)
   public void testProtoBufRandomException() throws Exception {
+if (testWithLegacy) {
+  //No test with legacy
+  return;
+}

Review comment:
   `assumeFalse(testWithLegacy)` can be used instead.
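
   For reference, a minimal sketch of the suggested guard, assuming the class uses JUnit 4's `org.junit.Assume` (illustrative; the original test body is elided):
   
   ```
   import static org.junit.Assume.assumeFalse;
   
   @Test(timeout = 5000)
   public void testProtoBufRandomException() throws Exception {
     // Marks the test as skipped (rather than silently passing) when
     // running with the legacy protobuf engine.
     assumeFalse(testWithLegacy);
     // ... original test body ...
   }
   ```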




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579789)
Time Spent: 1h 40m  (was: 1.5h)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke parts of the Apache Hive 
> project. Making that change between minor versions was problematic for the 
> backwards compatibility of downstream projects.
> Additionally, the two frameworks are not drop-in replacements; they have 
> some differences. Protobuf 2 is also not deprecated, so let us have both 
> protocols available at the same time. Protobuf 2 support can then be 
> dropped in Hadoop 4.x.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15815) if required storageTypes are unavailable, log the failure reason when choosing a Datanode

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15815?focusedWorklogId=579788=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579788
 ]

ASF GitHub Bot logged work on HDFS-15815:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:22
Start Date: 09/Apr/21 06:22
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882#issuecomment-816441190


   Only a trivial missing symbol due to HDFS-15355. Replaced that with 
HdfsConstants.COLD_STORAGE_POLICY_ID.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579788)
Time Spent: 20m  (was: 10m)

>  if required storageTypes are unavailable, log the failure reason when 
> choosing a Datanode
> 
>
> Key: HDFS-15815
> URL: https://issues.apache.org/jira/browse/HDFS-15815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageTypes are unavailable, log the 
> failure reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.
>  
>  
>  
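
As context for the change described above, a hypothetical sketch of this kind of debug logging: the reason string NO_REQUIRED_STORAGE_TYPE comes from the issue, while the class, method, and field names here are illustrative stand-ins, not the committed patch.

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: record why datanode selection failed when no
// candidate offers the requested storage type.
class StorageChooserSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(StorageChooserSketch.class);

  enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

  StorageType choose(List<StorageType> available, StorageType required) {
    for (StorageType t : available) {
      if (t == required) {
        return t;
      }
    }
    LOG.debug("Datanode rejected: NO_REQUIRED_STORAGE_TYPE"
        + " (required {} is unavailable)", required);
    return null;
  }
}
{code}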



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15815) if required storageTypes are unavailable, log the failure reason when choosing a Datanode

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15815:
--
Labels: pull-request-available  (was: )

>  if required storageTypes are unavailable, log the failure reason when 
> choosing a Datanode
> 
>
> Key: HDFS-15815
> URL: https://issues.apache.org/jira/browse/HDFS-15815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageTypes are unavailable, log the 
> failure reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15815) if required storageTypes are unavailable, log the failure reason when choosing a Datanode

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15815?focusedWorklogId=579787=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579787
 ]

ASF GitHub Bot logged work on HDFS-15815:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:21
Start Date: 09/Apr/21 06:21
Worklog Time Spent: 10m 
  Work Description: jojochuang opened a new pull request #2882:
URL: https://github.com/apache/hadoop/pull/2882


   (cherry picked from commit e391844e8e414abf8c94f7bd4719053efa3b538a)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579787)
Remaining Estimate: 0h
Time Spent: 10m

>  if required storageTypes are unavailable, log the failure reason when 
> choosing a Datanode
> 
>
> Key: HDFS-15815
> URL: https://issues.apache.org/jira/browse/HDFS-15815
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: block placement
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HDFS-15815.001.patch, HDFS-15815.002.patch, 
> HDFS-15815.003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For better debugging, if the required storageTypes are unavailable, log the 
> failure reason "NO_REQUIRED_STORAGE_TYPE" when choosing a Datanode.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15961) standby namenode fails to start when ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=579785=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579785
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:15
Start Date: 09/Apr/21 06:15
Worklog Time Spent: 10m 
  Work Description: bshashikant opened a new pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881


   
   Please refer https://issues.apache.org/jira/browse/HDFS-15961.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579785)
Remaining Estimate: 0h
Time Spent: 10m

> standby namenode fails to start when ordered snapshot deletion is enabled 
> while having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15961) standby namenode fails to start when ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15961:
--
Labels: pull-request-available  (was: )

> standby namenode fails to start when ordered snapshot deletion is enabled 
> while having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15961) standby namenode fails to start when ordered snapshot deletion is enabled while having snapshottable directories

2021-04-09 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDFS-15961:
--

 Summary: standby namenode fails to start when ordered snapshot 
deletion is enabled while having snapshottable directories
 Key: HDFS-15961
 URL: https://issues.apache.org/jira/browse/HDFS-15961
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: snapshots
Affects Versions: 3.4.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 3.4.0


{code:java}
2021-04-08 12:07:25,398 INFO 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
status 1: Could not provision Trash directory for existing snapshottable 
directories. Exiting Namenode.
2021-04-08 12:07:55,596 INFO 
org.apache.ranger.audit.provider.AuditProviderFactory: ==> JVMShutdownHook.run()
2021-04-08 12:07:55,596 INFO 
org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
Signalling async audit cleanup to start.
{code}
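
The log above shows the standby exiting while provisioning Trash directories. As a hypothetical illustration of the failure mode (not the actual patch): a standby NameNode cannot perform namespace writes such as creating the .Trash directory, so startup-time provisioning would need to be skipped or deferred unless the NameNode is active. A minimal sketch of such a guard, with assumed names:

{code:java}
// Sketch only: provision .Trash inside snapshottable directories at
// startup only when this NameNode is active; a standby cannot mutate
// the namespace and would otherwise abort as in the log above.
boolean shouldProvisionSnapshotTrash(boolean orderedSnapshotDeletionEnabled,
    boolean isActive) {
  return orderedSnapshotDeletionEnabled && isActive;
}
{code}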



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15958) TestBPOfferService.testMissBlocksWhenReregister is flaky

2021-04-09 Thread Borislav Iordanov (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Borislav Iordanov resolved HDFS-15958.
--
Resolution: Duplicate

Just realized this was fixed already. I had encountered it in 3.3, debugged and 
prepared a patch for it.

> TestBPOfferService.testMissBlocksWhenReregister is flaky
> 
>
> Key: HDFS-15958
> URL: https://issues.apache.org/jira/browse/HDFS-15958
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Borislav Iordanov
>Priority: Minor
>
> This test fails relatively frequently due to a race condition.
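
As an aside on deflaking: a common remedy for this class of race is to poll for the expected state instead of sleeping for a fixed interval, for example with Hadoop's test utility GenericTestUtils.waitFor. A sketch under assumed names (the condition and variables are hypothetical, not the actual fix):

{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Poll every 100 ms and time out after 10 s, instead of racing a fixed
// Thread.sleep(); "pendingBlockCount()" and "expected" are stand-ins.
GenericTestUtils.waitFor(() -> pendingBlockCount() == expected,
    100, 10_000);
{code}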



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) Fix NullPointerException when starting dfsrouter

2021-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=579782=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-579782
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 09/Apr/21 06:07
Start Date: 09/Apr/21 06:07
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2284:
URL: https://github.com/apache/hadoop/pull/2284#issuecomment-816434207


   @lamberken are you still working on this issue? If not, can I take it on? 
Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 579782)
Time Spent: 40m  (was: 0.5h)

> Fix NullPointerException when starting dfsrouter
> ---
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null
> java.lang.IllegalArgumentException: java.net.UnknownHostException: null
>  at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  
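
The trace shows NameNodeProxies being handed an unresolved "null:null" address. A hypothetical sketch of the kind of defensive check the stack trace suggests (names are illustrative, not the actual RBF fix): validate the configured namenode RPC address before building a proxy, so a missing address is logged and skipped rather than surfacing as UnknownHostException: null.

{code:java}
import java.net.InetSocketAddress;

// Sketch only: return null for a missing/unresolvable "host:port" so the
// heartbeat loop can log and skip this cycle instead of throwing.
final class HeartbeatTargetCheck {
  static InetSocketAddress parseRpcAddress(String hostPort) {
    if (hostPort == null || hostPort.startsWith("null")) {
      return null;
    }
    String[] parts = hostPort.split(":", 2);
    if (parts.length != 2 || parts[1].isEmpty()) {
      return null;
    }
    return InetSocketAddress.createUnresolved(parts[0],
        Integer.parseInt(parts[1]));
  }
}
{code}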



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org