[jira] [Updated] (HDFS-15505) Fix NullPointerException when calling getAdditionalDatanode with a null extendedBlock parameter

2021-04-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15505:
-
Fix Version/s: (was: 3.4.0)
   (was: 3.3.1)

Removed "Fix version". It is set by a committer when the fix is merged.

> Fix NullPointerException when calling getAdditionalDatanode with a null 
> extendedBlock parameter
> -
>
> Key: HDFS-15505
> URL: https://issues.apache.org/jira/browse/HDFS-15505
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfsclient
>Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 3.0.3, 3.1.2, 
> 3.3.0, 3.2.1, 3.1.3
>Reporter: hang chen
>Priority: Major
>
> When the client calls the getAdditionalDatanode method, it initializes a 
> GetAdditionalDatanodeRequestProto and sends the RPC request to the 
> Router/NameNode. However, if getAdditionalDatanode is called with a null 
> extendedBlock parameter, the builder tries to set the request's blk field 
> from null, which causes a NullPointerException. The code is shown below.
> {code:java}
> GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
>  .newBuilder()
>  .setSrc(src)
>  .setFileId(fileId)
>  .setBlk(PBHelperClient.convert(blk))
>  .addAllExistings(PBHelperClient.convert(existings))
>  .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
>  .addAllExcludes(PBHelperClient.convert(excludes))
>  .setNumAdditionalNodes(numAdditionalNodes)
>  .setClientName(clientName)
>  .build();{code}
>  
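One possible direction, sketched purely as an illustration (this is not the committed fix; it reuses the variable names from the snippet above): validate the argument before building the request, so a null block surfaces as a descriptive error instead of a bare NullPointerException thrown from inside the protobuf translation layer.

{code:java}
// Illustrative sketch only, not the committed fix: reject a null block up
// front, instead of letting PBHelperClient.convert(blk) or the generated
// protobuf setter throw a bare NullPointerException.
if (blk == null) {
  throw new IllegalArgumentException(
      "getAdditionalDatanode: extendedBlock must not be null for " + src);
}
GetAdditionalDatanodeRequestProto req = GetAdditionalDatanodeRequestProto
    .newBuilder()
    .setSrc(src)
    .setFileId(fileId)
    .setBlk(PBHelperClient.convert(blk))
    .addAllExistings(PBHelperClient.convert(existings))
    .addAllExistingStorageUuids(Arrays.asList(existingStorageIDs))
    .addAllExcludes(PBHelperClient.convert(excludes))
    .setNumAdditionalNodes(numAdditionalNodes)
    .setClientName(clientName)
    .build();
{code}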






[jira] [Updated] (HDFS-15715) ReplicatorMonitor performance degrades when the storagePolicy of many files does not match their real datanode storage

2021-04-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15715:
-
Fix Version/s: (was: 3.3.1)

Removed "Fix version". It is set by a committer when the fix is merged.

> ReplicatorMonitor performance degrades when the storagePolicy of many files 
> does not match their real datanode storage 
> --
>
> Key: HDFS-15715
> URL: https://issues.apache.org/jira/browse/HDFS-15715
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3, 3.2.1
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
> Attachments: HDFS-15715.001.patch, HDFS-15715.002.patch, 
> HDFS-15715.002.patch.addendum, image-2021-03-26-12-17-45-500.png
>
>
> One of our NameNodes has 300M files and blocks. Normally this NameNode 
> should not be under heavy load, but we found that RPC processing time stayed 
> high and decommissioning was very slow.
>  
> Searching the metrics, I found that under-replicated blocks stayed high. 
> Then I took a jstack of the NameNode and found that 'InnerNode.getLoc' was a 
> hot spot. I suspected that chooseTarget could not find a target, resulting 
> in the performance degradation. Considering HDFS-10453, I guessed that some 
> logic triggers a scenario where chooseTarget cannot find a proper target.
> Then I enabled some debug logging. (Of course, I revised the code so that 
> only isGoodTarget logs at debug level, because enabling all of 
> BlockPlacementPolicy's debug logging is dangerous.) I found that "the rack 
> has too many chosen nodes" was being hit. Then I found logs like this:
> {code}
> 2020-12-04 12:13:56,345 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 0 to reach 3 (unavailableStorages=[], 
> storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
> creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For 
> more information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> 2020-12-04 12:14:03,843 WARN 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to 
> place enough replicas, still in need of 0 to reach 3 (unavailableStorages=[], 
> storagePolicy=BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], 
> creationFallbacks=[], replicationFallbacks=[]}, newBlock=false) For more 
> information, please enable DEBUG log level on 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
> {code} 
> Then, through some debugging and simulation, I found the reason and 
> reproduced the problem.
> The reason is that some developers use the COLD storage policy together with 
> the mover, but setting the storage policy and running the mover are 
> asynchronous operations. So some files' real datanode storages do not match 
> their storagePolicy.
> Let me simulate the process. Suppose /tmp/a is created with 2 replicas on 
> DISK, and the storage policy is then set to COLD. When some logic (for 
> example, decommission) triggers copying this block, chooseTarget uses 
> chooseStorageTypes to compute the storage types that are actually needed. 
> Here the size of the requiredStorageTypes list returned by chooseStorageTypes 
> is 3, but the size of result is 2: 3 means 3 ARCHIVE storages are needed, 
> while 2 means the block currently has 2 DISK storages. The policy then tries 
> to choose 3 targets. Choosing the first target succeeds, but when choosing 
> the second target, the variable 'counter' is 4, which is larger than 
> maxTargetPerRack (3) in isGoodTarget. So every datanode storage is skipped, 
> which results in the bad performance.
> I think chooseStorageTypes needs to take result into account: when an 
> existing replica does not meet the storage policy's demand, we should remove 
> it from result. 
> I changed it this way and verified the fix with a unit test, which solved 
> the problem.
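To make the counting concrete, here is a small self-contained simulation of the mismatch (plain Java with invented names; it is not HDFS code and not the patch): with a COLD policy and replication 3, all three required storages are ARCHIVE, so the block's two existing DISK replicas satisfy nothing, yet they still count toward per-rack limits such as maxTargetPerRack.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Self-contained illustration of the storage-type mismatch (invented names,
// not HDFS code).
public class StoragePolicyMismatchDemo {
  public static void main(String[] args) {
    // COLD policy with replication 3: three ARCHIVE storages are required.
    List<String> required = new ArrayList<>(
        Arrays.asList("ARCHIVE", "ARCHIVE", "ARCHIVE"));
    // The block's existing replicas are still sitting on DISK.
    List<String> existing = Arrays.asList("DISK", "DISK");

    // Proposed idea: an existing replica only satisfies the policy if its
    // storage type is still required; anything else must be dropped from the
    // "satisfied" result so it is not counted as a chosen node.
    List<String> satisfied = new ArrayList<>();
    for (String type : existing) {
      if (required.remove(type)) { // consumes one matching requirement
        satisfied.add(type);
      }
    }

    // required still holds three entries, so three new targets are needed,
    // and the two DISK replicas should not count toward maxTargetPerRack.
    System.out.println("targets still needed: " + required);       // 3 x ARCHIVE
    System.out.println("replicas satisfying policy: " + satisfied); // empty
  }
}
{code}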






[jira] [Comment Edited] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332916#comment-17332916
 ] 

Haiyang Hu edited comment on HDFS-15998 at 4/27/21, 4:12 AM:
-

{code:java}
// FSNamesystem.java
  public BatchedListEntries<OpenFileEntry> getFilesBlockingDecom(long prevId,
      String path) {
    ...
    for (DatanodeDescriptor dataNode :
        blockManager.getDatanodeManager().getDatanodes()) {
      for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) {
        INode ucFile = getFSDirectory().getInode(ucFileId);
        if (ucFile == null || ucFileId <= prevId ||
            openFileIds.contains(ucFileId)) {
          // probably got deleted or
          // part of previous batch or
          // already part of the current batch
          continue;
        }
        Preconditions.checkState(ucFile instanceof INodeFile);
        openFileIds.add(ucFileId);
        INodeFile inodeFile = ucFile.asFile();

        String fullPathName = inodeFile.getFullPathName();
        if (org.apache.commons.lang3.StringUtils.isEmpty(path)
            || fullPathName.startsWith(path)) {
          openFileEntries.add(new OpenFileEntry(inodeFile.getId(),
              inodeFile.getFullPathName(),
              inodeFile.getFileUnderConstructionFeature().getClientName(),
              // Here it throws NPE
              inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        }
{code}
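A possible guard, sketched purely as an illustration (this is not the committed fix): the file can be closed between the moment its id is recorded in the datanode's LeavingServiceStatus and the moment it is listed, so the under-construction feature should be fetched once and null-checked before it is dereferenced.

{code:java}
// Illustrative sketch only, not the committed fix: fetch the feature once
// and skip files that are no longer under construction, rather than calling
// getFileUnderConstructionFeature() twice and dereferencing a null result.
INodeFile inodeFile = ucFile.asFile();
FileUnderConstructionFeature uc = inodeFile.getFileUnderConstructionFeature();
if (uc == null) {
  // The file was closed in the meantime; it is no longer an open file.
  continue;
}
String fullPathName = inodeFile.getFullPathName();
if (org.apache.commons.lang3.StringUtils.isEmpty(path)
    || fullPathName.startsWith(path)) {
  openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
      uc.getClientName(), uc.getClientMachine()));
}
{code}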

I will upload a PR later.


was (Author: haiyang hu):
{code:java}
// LeaseManager.java
{code}

I will upload a PR later.

> Fix NullPointerException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Priority: Major
>
> Using the Hadoop 3.2.0 client to execute the following command occasionally 
> throws an NPE.
> hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
> -path /xxx
>  
> {quote}
>  org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   ...
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
>   at 
> org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
>   at 
> org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
>   at 
> org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
>   at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)

[jira] [Comment Edited] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332916#comment-17332916
 ] 

Haiyang Hu edited comment on HDFS-15998 at 4/27/21, 4:10 AM:
-

{code:java}
// LeaseManager.java
{code}

I will upload a PR later.


was (Author: haiyang hu):
{code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    ...
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}

I will upload a PR later.



[jira] [Comment Edited] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332916#comment-17332916
 ] 

Haiyang Hu edited comment on HDFS-15998 at 4/27/21, 4:09 AM:
-

{code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    ...
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}

I will upload a PR later.


was (Author: haiyang hu):
I will upload a PR later.


[jira] [Issue Comment Deleted] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Comment: was deleted

(was: {code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    ...
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}
)




[jira] [Commented] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332916#comment-17332916
 ] 

Haiyang Hu commented on HDFS-15998:
---

I will upload a PR later.




[jira] [Comment Edited] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332914#comment-17332914
 ] 

Haiyang Hu edited comment on HDFS-15998 at 4/27/21, 4:03 AM:
-

{code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    ...
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}



was (Author: haiyang hu):

{code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    SortedMap<Long, Lease> remainingLeases;
    synchronized (this) {
      remainingLeases = leasesById.tailMap(prevId, false);
    }
    Collection<Long> inodeIds = remainingLeases.keySet();
    final int numResponses = Math.min(
        this.fsnamesystem.getMaxListOpenFilesResponses(), inodeIds.size());
    final List<OpenFileEntry> openFileEntries =
        Lists.newArrayListWithExpectedSize(numResponses);

    int count = 0;
    String fullPathName = null;
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}



[jira] [Commented] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17332914#comment-17332914
 ] 

Haiyang Hu commented on HDFS-15998:
---


{code:java}
// LeaseManager.java
  public BatchedListEntries<OpenFileEntry> getUnderConstructionFiles(
      final long prevId, final String path) throws IOException {
    assert fsnamesystem.hasReadLock();
    SortedMap<Long, Lease> remainingLeases;
    synchronized (this) {
      remainingLeases = leasesById.tailMap(prevId, false);
    }
    Collection<Long> inodeIds = remainingLeases.keySet();
    final int numResponses = Math.min(
        this.fsnamesystem.getMaxListOpenFilesResponses(), inodeIds.size());
    final List<OpenFileEntry> openFileEntries =
        Lists.newArrayListWithExpectedSize(numResponses);

    int count = 0;
    String fullPathName = null;
    Iterator<Long> inodeIdIterator = inodeIds.iterator();
    while (inodeIdIterator.hasNext()) {
      Long inodeId = inodeIdIterator.next();
      final INodeFile inodeFile =
          fsnamesystem.getFSDirectory().getInode(inodeId).asFile();
      if (!inodeFile.isUnderConstruction()) {
        LOG.warn("The file {} is not under construction but has lease.",
            inodeFile.getFullPathName());
        continue;
      }

      fullPathName = inodeFile.getFullPathName();
      if (StringUtils.isEmpty(path) ||
          DFSUtil.isParentEntry(fullPathName, path)) {
        openFileEntries.add(new OpenFileEntry(inodeFile.getId(), fullPathName,
            inodeFile.getFileUnderConstructionFeature().getClientName(),
            // Here it throws NPE
            inodeFile.getFileUnderConstructionFeature().getClientMachine()));
        count++;
      }
{code}



[jira] [Updated] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
...
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
 List open files failed.
 listOpenFiles: java.lang.NullPointerException
{quote}


  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
...
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)

[jira] [Updated] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
...
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
 List open files failed.
 listOpenFiles: java.lang.NullPointerException
{quote}



  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking 
call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not 
retrying because try once and fail.
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
...
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 

[jira] [Updated] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking 
call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not 
retrying because try once and fail.
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
...
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
 List open files failed.
 listOpenFiles: java.lang.NullPointerException
{quote}



  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking 
call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not 
retrying because try once and fail.
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Updated] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
throws an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{quote}
 21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking 
call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not 
retrying because try once and fail.
 org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy9.listOpenFiles(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
at 
org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
at 
org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
at 
org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
 List open files failed.
 listOpenFiles: java.lang.NullPointerException
{quote}



  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
produces an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 



[jira] [Updated] (HDFS-15998) Fix NullPointerException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
produces an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 

  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
produces an NPE.

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission 
-path /xxx

 
{code:java}
// code placeholder
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)

  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
  at org.apache.hadoop.ipc.Client.call(Client.java:1457)
  at org.apache.hadoop.ipc.Client.call(Client.java:1367)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy9.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
  at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
  at org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
List open files failed.
listOpenFiles: java.lang.NullPointerException
{code}
 

 

 


> Fix NullPointException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Priority: Major
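
The NPE above originates in FSNamesystem.getFilesBlockingDecom (FSNamesystem.java:1917), which resolves the open-file ids reported by decommissioning datanodes back to inodes. A lookup like that can race with a concurrent close or delete, in which case the resolved inode is null. The following is a minimal, self-contained sketch of the defensive guard such a loop needs; the map and all names are hypothetical stand-ins, not the actual FSNamesystem code.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a defensive lookup. A file reported as open by a
// datanode may already be closed or deleted by the time its id is
// resolved, so the lookup can return null. The map stands in for the
// NameNode's inode table; everything here is illustrative.
public class OpenFileLookupSketch {
  public static void main(String[] args) {
    Map<Long, String> inodeById = new HashMap<>();
    inodeById.put(1L, "/data/app/part-0001");
    // File id 2 was deleted concurrently, so it is no longer in the map.
    List<Long> openFileIdsFromDatanode = Arrays.asList(1L, 2L);

    for (long fileId : openFileIdsFromDatanode) {
      String path = inodeById.get(fileId);
      if (path == null) {
        // Without this guard, dereferencing the result throws the
        // NullPointerException seen in the stack trace above.
        continue;
      }
      System.out.println("open file blocking decommission: " + path);
    }
  }
}
{code}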

[jira] [Updated] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
results in an NPE:

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission -path /xxx

 
{code:java}
// code placeholder
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)

  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
  at org.apache.hadoop.ipc.Client.call(Client.java:1457)
  at org.apache.hadoop.ipc.Client.call(Client.java:1367)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy9.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
  at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
  at org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
List open files failed.
listOpenFiles: java.lang.NullPointerException
{code}
 

 

 

  was:
Using the Hadoop 3.2.0 client to execute the following command occasionally 
results in an NPE:

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission -path /xxx

 

{quote}
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles 

[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=589546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589546
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 03:45
Start Date: 27/Apr/21 03:45
Worklog Time Spent: 10m 
  Work Description: bshashikant merged pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589546)
Time Spent: 3h 50m  (was: 3h 40m)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}
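
The log above shows the NameNode exiting because it could not provision Trash directories for snapshottable directories at startup; a standby cannot perform such namespace writes. One plausible shape for a fix is to skip or defer the provisioning step unless the node is active. The sketch below illustrates only that guard; HAState and provisionTrashRoots() are hypothetical stand-ins, not the actual Hadoop types or the merged patch.

{code:java}
// Hypothetical sketch: skip a write-only startup task on a non-active
// NameNode instead of aborting. All types and helpers are illustrative.
public class TrashProvisioningSketch {
  enum HAState { ACTIVE, STANDBY, OBSERVER }

  private final HAState state;

  TrashProvisioningSketch(HAState state) {
    this.state = state;
  }

  void startup() {
    if (state != HAState.ACTIVE) {
      // A standby cannot mutate the namespace; defer provisioning until
      // failover instead of exiting with status 1 as in the log above.
      System.out.println("skipping trash provisioning in state " + state);
      return;
    }
    provisionTrashRoots();
  }

  private void provisionTrashRoots() {
    System.out.println("provisioning .Trash under snapshottable roots");
  }

  public static void main(String[] args) {
    new TrashProvisioningSketch(HAState.STANDBY).startup();
  }
}
{code}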



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-26 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDFS-15961.

Resolution: Fixed

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=589545&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589545
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 03:45
Start Date: 27/Apr/21 03:45
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-827291426


   The failures are not related to the patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589545)
Time Spent: 3h 40m  (was: 3.5h)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=589544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589544
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 03:44
Start Date: 27/Apr/21 03:44
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-827291331


   Thanks @ayushtkn and @smengcl for the reviews. Committing this.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589544)
Time Spent: 3.5h  (was: 3h 20m)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HDFS-15998:
--
Description: 
Using the Hadoop 3.2.0 client to execute the following command occasionally 
results in an NPE:

hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission -path /xxx

 

{quote}
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
21/03/17 14:17:38 DEBUG retry.RetryInvocationHandler: Exception while invoking call #0 ClientNamenodeProtocolTranslatorPB.listOpenFiles over xxx:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): java.lang.NullPointerException
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFilesBlockingDecom(FSNamesystem.java:1917)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.listOpenFiles(FSNamesystem.java:1876)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.listOpenFiles(NameNodeRpcServer.java:1453)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.listOpenFiles(ClientNamenodeProtocolServerSideTranslatorPB.java:1894)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:886)
  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:828)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1903)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2717)

  at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
  at org.apache.hadoop.ipc.Client.call(Client.java:1457)
  at org.apache.hadoop.ipc.Client.call(Client.java:1367)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
  at com.sun.proxy.$Proxy9.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.listOpenFiles(ClientNamenodeProtocolTranslatorPB.java:1952)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
  at com.sun.proxy.$Proxy10.listOpenFiles(Unknown Source)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:89)
  at org.apache.hadoop.hdfs.protocol.OpenFilesIterator.makeRequest(OpenFilesIterator.java:35)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequest(BatchedRemoteIterator.java:77)
  at org.apache.hadoop.fs.BatchedRemoteIterator.makeRequestIfNeeded(BatchedRemoteIterator.java:85)
  at org.apache.hadoop.fs.BatchedRemoteIterator.hasNext(BatchedRemoteIterator.java:99)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.printOpenFiles(DFSAdmin.java:1006)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.listOpenFiles(DFSAdmin.java:994)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:2431)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
  at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2590)
List open files failed.
listOpenFiles: java.lang.NullPointerException
{quote}

 

> Fix NullPointException In listOpenFiles
> ---
>
> Key: HDFS-15998
> URL: https://issues.apache.org/jira/browse/HDFS-15998
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Haiyang Hu
>Priority: Major
>
> Using the Hadoop 3.2.0 client to execute the following command occasionally 
> results in an NPE:
> hdfs dfsadmin -Dfs.defaultFS=hdfs://xxx -listOpenFiles -blockingDecommission -path /xxx
>  
> 
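
For reference, the failing dfsadmin command can also be reproduced programmatically. The sketch below is the client-side equivalent of the command above; the listOpenFiles signature and the OpenFileEntry accessors are inferred from the stack trace in this report and may differ across Hadoop versions, so treat this as an illustration rather than a verified API reference.

{code:java}
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.OpenFileEntry;
import org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType;

// Client-side equivalent of "hdfs dfsadmin -listOpenFiles
// -blockingDecommission -path /xxx". Call shapes are inferred from the
// stack trace in this report; verify them against your Hadoop version.
public class ListOpenFilesSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://xxx"); // placeholder, as in the report
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      RemoteIterator<OpenFileEntry> it = dfs.listOpenFiles(
          EnumSet.of(OpenFilesType.BLOCKING_DECOMMISSION), "/xxx");
      // The NPE in this report surfaces from hasNext(), via the RPC.
      while (it.hasNext()) {
        OpenFileEntry e = it.next();
        System.out.println(e.getFilePath() + " held by " + e.getClientName());
      }
    }
  }
}
{code}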

[jira] [Created] (HDFS-15998) Fix NullPointException In listOpenFiles

2021-04-26 Thread Haiyang Hu (Jira)
Haiyang Hu created HDFS-15998:
-

 Summary: Fix NullPointException In listOpenFiles
 Key: HDFS-15998
 URL: https://issues.apache.org/jira/browse/HDFS-15998
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Haiyang Hu






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-15923:
-
Description: 
Renaming across subclusters in an RBF and Kerberos environment hits Kerberos 
authentication failures at two points:
 # saving the job object to the journal;
 # the precheck that fetches the src file status.

So we need to use the Router login UGI's doAs to create the DistcpProcedure 
and TrashProcedure and to submit the job.

Besides, we should check the user's permissions for the src and dst paths on 
the router side before doing the internal rename (HDFS-15973).
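
UserGroupInformation.doAs is the standard way to run code under a chosen Kerberos identity. A minimal sketch of wrapping the submit path in the router's login UGI, as proposed above, might look like the following; submitJob() is a hypothetical stand-in for building the procedures and calling the fedbalance scheduler that appears in the trace below.

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

// Minimal sketch of running the journal write and job submission under
// the router's login UGI. submitJob() is a hypothetical stand-in for
// creating the DistcpProcedure/TrashProcedure and submitting the job.
public class RouterRenameUgiSketch {
  public static void main(String[] args) throws Exception {
    UserGroupInformation routerLoginUgi = UserGroupInformation.getLoginUser();
    routerLoginUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      submitJob(); // runs with the router's Kerberos credentials
      return null;
    });
  }

  private static void submitJob() {
    // In the real patch this would call the fedbalance scheduler, whose
    // journal write is what fails with "GSS initiate failed" below.
    System.out.println("submitting rename job with login credentials");
  }
}
{code}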

First: Save Object to journal.
{code:java}
// code placeholder
2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
  at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
  at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
  at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
  at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
  at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
  at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
  at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
  at org.apache.hadoop.ipc.Client.call(Client.java:1452)
  at org.apache.hadoop.ipc.Client.call(Client.java:1405)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
  at com.sun.proxy.$Proxy11.create(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
  at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
  at com.sun.proxy.$Proxy12.create(Unknown Source)
  at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
  at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
  at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
  at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
  at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
  at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
  at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
  at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1105)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:994)
  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:982)
  at org.apache.hadoop.tools.fedbalance.procedure.BalanceJournalInfoHDFS.saveJob(BalanceJournalInfoHDFS.java:89)
  at org.apache.hadoop.tools.fedbalance.procedure.BalanceProcedureScheduler.submit(BalanceProcedureScheduler.java:134)
  at org.apache.hadoop.hdfs.server.federation.router.RouterFederationRename.routerFedRename(RouterFederationRename.java:105)
  at org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.rename(RouterClientProtocol.java:596)
  at 

[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332893#comment-17332893
 ] 

Jinglun commented on HDFS-15923:


LGTM. +1 on v002.

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.stack-trace, hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Renaming across subclusters in an RBF and Kerberos environment hits 
> Kerberos authentication failures at two points:
>  # saving the job object to the journal;
>  # the precheck that fetches the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs for the above methods. It worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
>   at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>   at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
>   at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
>   at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
>   at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
>   at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
>   at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1452)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1405)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
>   at com.sun.proxy.$Proxy11.create(Unknown Source)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>   at com.sun.proxy.$Proxy12.create(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
>   at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
>   at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
>   at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 

[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=589531&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589531
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 02:04
Start Date: 27/Apr/21 02:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958#issuecomment-827256700


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 179 unchanged 
- 0 fixed = 181 total (was 179)  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 477m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  9s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 571m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
 

[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=589528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589528
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 01:30
Start Date: 27/Apr/21 01:30
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#discussion_r620772544



##
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestMetricsBase.java
##
@@ -259,4 +259,15 @@ private MembershipState createRegistration(String ns, String nn,
     assertTrue(response.getResult());
     return record;
   }
+
+  // refresh namenode registration for new attributes
+  public boolean refreshNamenodeRegistration(NamenodeHeartbeatRequest request)
+      throws IOException {
+    boolean result = membershipStore.namenodeHeartbeat(request).getResult();
+    membershipStore.loadCache(true);
+    MembershipNamenodeResolver resolver =
+        (MembershipNamenodeResolver) router.getNamenodeResolver();
+    resolver.loadCache(true);
+    return result;

Review comment:
   The result is not used. Is it okay?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589528)
Time Spent: 2.5h  (was: 2h 20m)

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-26 at 5.25.12 PM.png, 
> image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The Long-typed fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!
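
Summing per-nameservice capacities into a Java long silently wraps once the total exceeds Long.MAX_VALUE (about 8 EiB); aggregating in BigInteger avoids that. A minimal sketch with made-up values, not the actual RBFMetrics code:

{code:java}
import java.math.BigInteger;

// Sketch of overflow-safe capacity aggregation. Two values near
// Long.MAX_VALUE wrap to a negative total with plain long arithmetic;
// BigInteger keeps the true sum. Values are made up for illustration.
public class CapacitySumSketch {
  public static void main(String[] args) {
    long[] perNameserviceCapacity = {Long.MAX_VALUE, Long.MAX_VALUE};

    long wrong = 0;
    BigInteger right = BigInteger.ZERO;
    for (long c : perNameserviceCapacity) {
      wrong += c; // silently overflows to a negative number
      right = right.add(BigInteger.valueOf(c));
    }
    System.out.println("long sum (overflowed): " + wrong);
    System.out.println("BigInteger sum:        " + right);
  }
}
{code}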



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-26 Thread Fengnan Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-15810:
--
Attachment: Screen Shot 2021-04-26 at 5.25.12 PM.png

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-26 at 5.25.12 PM.png, 
> image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The Long-typed fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=589508&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589508
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 00:27
Start Date: 27/Apr/21 00:27
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#issuecomment-827224009


   @aajisaka @goiri @cxorm mind giving it another look? The test errors are 
unrelated to this change. I also attached a screenshot in the Jira ticket 
showing the 2 * Long.MAX_VALUE capacity displayed as 16 ZB.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589508)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The Long-typed fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-26 Thread Zhe Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332809#comment-17332809
 ] 

Zhe Zhang commented on HDFS-7285:
-

[~iostream2...@163.com] Unfortunately I don't have permission to view the 
[link|https://img-ask.csdnimg.cn/upload/1619363340018.png] you just pasted; 
it shows "403 Forbidden". Could you find a way to upload the image to an 
accessible location, or describe the issue? Thanks

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4 
> blocks, with a storage overhead of only 40%. This makes EC a quite 
> attractive alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but has been removed since Hadoop 2.0 
> for maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and 
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be 
> used for cold files that are not intended to be appended anymore; 3) the 
> pure Java EC coding implementation is extremely slow in practical use. Due 
> to these, it might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design lays the EC feature on the storage 
> type support and is compatible with existing HDFS features like caching, 
> snapshots, encryption, and high availability. This design will also support 
> different EC coding schemes, implementations and policies for different 
> deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L 
> library), an implementation can greatly improve the performance of EC 
> encoding/decoding and make the EC solution even more attractive. We will 
> post the design document soon.
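
The overhead figures quoted above follow directly from the coding parameters: RS(10,4) stores 14 blocks for every 10 data blocks (40% extra), while 3-replica stores 3 copies of each block (200% extra). A quick arithmetic check:

{code:java}
// Quick arithmetic check of the overhead claim above: RS(k=10, m=4)
// writes k + m blocks per k data blocks, versus 3 copies per block
// with plain replication.
public class EcOverheadCheck {
  public static void main(String[] args) {
    int k = 10, m = 4;
    double ecOverhead = (double) m / k;       // 4/10 = 0.40 -> 40%
    double replicationOverhead = 3.0 - 1.0;   // 2.00 -> 200%
    System.out.printf("RS(%d,%d) overhead: %.0f%%%n", k, m, ecOverhead * 100);
    System.out.printf("3x replication overhead: %.0f%%%n",
        replicationOverhead * 100);
  }
}
{code}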



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=589422&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589422
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 19:46
Start Date: 26/Apr/21 19:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-827097992


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.federation.router.TestRouterWebHdfsMethods |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2954 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 16a1455df276 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c8b3b271edb72d5c354852254db7486c6a85ae1a |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/2/testReport/ |
   | Max. process+thread count | 2920 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=589416&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589416
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 19:41
Start Date: 26/Apr/21 19:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-827095197


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  26m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  4s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   4m 22s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 58s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m 18s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/6/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 
627 total (was 624)  |
   | +1 :green_heart: |  mvnsite  |   3m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   8m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 231m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 31s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  15m 37s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 412m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.tools.TestDFSZKFailoverController |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint |
   | uname | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=589406&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589406
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 19:25
Start Date: 26/Apr/21 19:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-827086492


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  2s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   5m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   5m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  9s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/7/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 243m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   6m 14s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 32s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 402m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=589405&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589405
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 19:24
Start Date: 26/Apr/21 19:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-827085786


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   5m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 17s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/6/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 243m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   6m 15s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  18m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 401m 29s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=589382&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589382
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 18:36
Start Date: 26/Apr/21 18:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-827058990


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   3m 42s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   7m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 
627 total (was 624)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   8m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 205m 22s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |   5m 16s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  16m 18s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 346m 55s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestDistributedFileSystem |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint |
   | uname | Linux 4ff9bebbe984 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 4bc7d779598148fd684f9bcd09396f6201d8323a |
   | Default 

[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=589372&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589372
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 18:12
Start Date: 26/Apr/21 18:12
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-827044852


   Thanks for the review, @Hexiaoqiao. I accidentally checked out this branch 
from another WIP branch. I have rebased it onto the latest trunk now.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589372)
Time Spent: 1h 50m  (was: 1h 40m)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null
> java.lang.IllegalArgumentException: java.net.UnknownHostException: null 
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)
> Caused by: java.net.UnknownHostException: null
>  ... 14 more
> {code}
>  
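
For illustration only: the trace above points at NamenodeHeartbeatService 
building a proxy to a "null:null" address when the namenode RPC address is 
missing from the configuration. Below is a minimal sketch of a guard; it is not 
the actual HDFS-15561 patch, and LOG, conf, getNnRpcAddress() and 
getNamenodeDesc() are hypothetical stand-ins for members of the surrounding 
class.

{code:java}
// Minimal sketch (assumed names, not the committed fix): skip this heartbeat
// round instead of letting SecurityUtil.buildTokenService throw on "null:null".
String rpcAddress = getNnRpcAddress();  // hypothetical accessor
if (rpcAddress == null || rpcAddress.contains("null")) {
  LOG.error("Namenode RPC address is not configured for {}; skipping heartbeat",
      getNamenodeDesc());  // hypothetical accessor
  return;
}
// With a resolvable address, the usual proxy creation can proceed safely.
NamenodeProtocol proxy = NameNodeProxies.createNonHAProxy(
    conf, NetUtils.createSocketAddr(rpcAddress), NamenodeProtocol.class,
    UserGroupInformation.getLoginUser(), false).getProxy();
{code}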



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?focusedWorklogId=589335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589335
 ]

ASF GitHub Bot logged work on HDFS-15923:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 16:40
Start Date: 26/Apr/21 16:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2819:
URL: https://github.com/apache/hadoop/pull/2819#issuecomment-826986045


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  25m 24s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 112m  6s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2819/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2819 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux 37b8ebb1141e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c6fe4ff608a1fca5a697e57c8f229aeed5fdf92 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 

[jira] [Updated] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-26 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15997:
--
Status: Patch Available  (was: Open)

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=589332&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589332
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 16:30
Start Date: 26/Apr/21 16:30
Worklog Time Spent: 10m 
  Work Description: smengcl opened a new pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958


   https://issues.apache.org/jira/browse/HDFS-15997


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589332)
Remaining Estimate: 0h
Time Spent: 10m

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15997:
--
Labels: pull-request-available  (was: )

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332571#comment-17332571
 ] 

Hadoop QA commented on HDFS-15923:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch appears to include 2 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 30s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 19m  
6s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green}{color} | {color:green} The patch has no ill-formed 
XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 57s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} the patch passed with JDK 

[jira] [Work logged] (HDFS-15961) standby namenode failed to start ordered snapshot deletion is enabled while having snapshottable directories

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15961?focusedWorklogId=589329&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589329
 ]

ASF GitHub Bot logged work on HDFS-15961:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 16:21
Start Date: 26/Apr/21 16:21
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #2881:
URL: https://github.com/apache/hadoop/pull/2881#issuecomment-826973263


   +1 on the latest change. Let's merge the fix to trunk before `dfsadmin 
-provisionSnapshotTrash -all` is implemented.
   
   I have filed [HDFS-15997](https://issues.apache.org/jira/browse/HDFS-15997) 
to implement `dfsadmin -provisionSnapshotTrash -all`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589329)
Time Spent: 3h 20m  (was: 3h 10m)

> standby namenode failed to start ordered snapshot deletion is enabled while 
> having snapshottable directories
> 
>
> Key: HDFS-15961
> URL: https://issues.apache.org/jira/browse/HDFS-15961
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: snapshots
>Affects Versions: 3.4.0
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-04-08 12:07:25,398 INFO 
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor: Adding new 
> storage ID DS-515dfb62-9975-4a2d-8384-d33ac8ff9cd1 for DN 172.27.121.195:9866
> 2021-04-08 12:07:55,581 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: Could not provision Trash directory for existing snapshottable 
> directories. Exiting Namenode.
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: ==> 
> JVMShutdownHook.run()
> 2021-04-08 12:07:55,596 INFO 
> org.apache.ranger.audit.provider.AuditProviderFactory: JVMShutdownHook: 
> Signalling async audit cleanup to start.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-26 Thread Siyao Meng (Jira)
Siyao Meng created HDFS-15997:
-

 Summary: Implement dfsadmin -provisionSnapshotTrash -all
 Key: HDFS-15997
 URL: https://issues.apache.org/jira/browse/HDFS-15997
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: dfsadmin
Reporter: Siyao Meng
Assignee: Siyao Meng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=589277=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589277
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 15:00
Start Date: 26/Apr/21 15:00
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-826905704


   @liuml07 I am done with the changes, if you would like to review further.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589277)
Time Spent: 4h  (was: 3h 50m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, the data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.
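
As a rough illustration of the requested behavior (not the committed HDFS-15982 
change), a trash-aware delete handler could reuse the same helper the CLI path 
uses; the skipTrash parameter is an assumption about how such a flag would be 
plumbed through the HTTP API:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashAwareDelete {
  /** Delete 'path', routing it through the trash unless skipTrash is set. */
  public static boolean delete(FileSystem fs, Path path, boolean recursive,
      boolean skipTrash, Configuration conf) throws IOException {
    if (!skipTrash) {
      // Same helper FsShell's "rm" relies on: moves the path into the
      // caller's .Trash directory when fs.trash.interval > 0.
      if (Trash.moveToAppropriateTrash(fs, path, conf)) {
        return true;
      }
    }
    // Trash disabled or explicitly skipped: fall back to a permanent delete.
    return fs.delete(path, recursive);
  }
}
{code}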



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332477#comment-17332477
 ] 

zhuobin zheng commented on HDFS-15923:
--

Sorry, my mistake. I have submitted a new patch: [^HDFS-15923.002.patch]

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.stack-trace, hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters with RBF in a Kerberos environment will 
> encounter the following two errors:
>  # Saving the object to the journal.
>  # The precheck trying to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs around the above methods, and it 
> worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> 
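
For illustration, the proxy-UGI approach described in the HDFS-15923 report 
above can be sketched as follows. This is a hedged sketch, not the actual 
patch; remoteUser is an assumed variable holding the calling user's name:

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

// Minimal sketch (assumed names, not the committed patch): run procedure
// creation and job submission under the caller's proxy UGI so the journal
// writes and the precheck authenticate as the caller, not as the router.
UserGroupInformation routerUgi = UserGroupInformation.getLoginUser();
UserGroupInformation proxyUgi =
    UserGroupInformation.createProxyUser(remoteUser, routerUgi);
proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
  // Build DistcpProcedure / TrashProcedure and submit the job here, inside
  // doAs, so their RPCs carry the proxied caller's credentials.
  return null;
});
{code}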

[jira] [Updated] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-15923:
-
Attachment: (was: HDFS-15923.002.patch)

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.stack-trace, hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters with RBF in a Kerberos environment will 
> encounter the following two errors:
>  # Saving the object to the journal.
>  # The precheck trying to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs around the above methods, and it 
> worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 

[jira] [Updated] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-15923:
-
Attachment: HDFS-15923.002.patch

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.002.patch, HDFS-15923.stack-trace, 
> hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters with RBF in a Kerberos environment will 
> encounter the following two errors:
>  # Saving the object to the journal.
>  # The precheck trying to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs around the above methods, and it 
> worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> 

[jira] [Updated] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-15923:
-
Status: Open  (was: Patch Available)

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.002.patch, HDFS-15923.stack-trace, 
> hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters with RBF in a Kerberos environment will 
> encounter the following two errors:
>  # Saving the object to the journal.
>  # The precheck trying to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs around the above methods, and it 
> worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> 

[jira] [Updated] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread zhuobin zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuobin zheng updated HDFS-15923:
-
Attachment: HDFS-15923.002.patch
Status: Patch Available  (was: Open)

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.002.patch, HDFS-15923.stack-trace, 
> hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters with RBF in a Kerberos environment will 
> encounter the following two errors:
>  # Saving the object to the journal.
>  # The precheck trying to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I use the proxy UGI's doAs around the above methods, and it 
> worked.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may grant the DistCp job excessive permissions.
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> 
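
A minimal sketch of the proxy-UGI approach described in HDFS-15923 above, assuming 
the standard Hadoop UserGroupInformation API; the helper calls inside doAs are 
hypothetical stand-ins for the patch's actual procedure creation and job submission:

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUgiRenameSketch {
  public static void main(String[] args) throws Exception {
    // The Router's own Kerberos login, which holds a valid TGT.
    UserGroupInformation routerUgi = UserGroupInformation.getLoginUser();
    // A proxy UGI for the end user on whose behalf the rename runs.
    UserGroupInformation proxyUgi =
        UserGroupInformation.createProxyUser("end-user", routerUgi);

    // Creating DistcpProcedure/TrashProcedure and submitting the job inside
    // doAs authenticates the resulting RPC connections as the proxy user,
    // avoiding the "No valid credentials provided" SaslException above.
    proxyUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // createDistcpProcedure(); createTrashProcedure(); submitJob();
      // (hypothetical helpers, not the patch's actual method names)
      return null;
    });
  }
}
{code}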

[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename accross sub clusters

2021-04-26 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332465#comment-17332465
 ] 

Jinglun commented on HDFS-15923:


Hi [~zhengzhuobinzzb], thanks for your explanation! Just some minor comments.

1. I think we can just remove the code comment at 
RouterFederationRename.java#L114.

2. The code comment at TestRouterFederationRenameInKerberosEnv.java#L129 could 
be removed too.

Other than that the patch looks good to me. Nice work!

> RBF:  Authentication failed when rename accross sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.stack-trace, 
> hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Renaming across sub-clusters in an RBF and Kerberos environment hits 
> the following two errors:
>  # Saving the object to the journal.
>  # The precheck that tries to get the src file status.
> So we need to use the proxy UGI's doAs to create the DistcpProcedure and 
> TrashProcedure and to submit the job.
> In the patch I wrap the methods above in the proxy UGI's doAs. It works.
> But there is another strange thing that this patch does not solve:
> the Router uses its own UGI to submit the DistCp job, not the user UGI or 
> the proxy UGI. This may give the DistCp job excessive permissions.
> First error: saving the object to the journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> 

[jira] [Commented] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-04-26 Thread Harunobu Daikoku (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332434#comment-17332434
 ] 

Harunobu Daikoku commented on HDFS-15900:
-

Hmm, the test is passing for me.
 Are you sure you're on branch-3.1 or did you make some changes there?
{code:none}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.895 s 
- in org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time:  01:33 min
[INFO] Finished at: 2021-04-26T13:45:32Z
[INFO] 
$ git log --oneline -1
71a9885c978 (HEAD -> branch-3.1, origin/branch-3.1) HDFS-15621. Datanode 
DirectoryScanner uses excessive memory (#2849). Contributed by Stephen O'Donnell
{code}

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: image.png
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on the dfsrouter is 
> unintentionally reset to empty, its initial value.
>  !image.png|height=250!
> As a result, concat operations through the dfsrouter fail with the 
> following error, because the block pool id cannot be resolved against the 
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
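
To illustrate the proposed fix, a hedged sketch of rebuilding the 
active-namespaces cache while skipping UNAVAILABLE registrations; the types are 
the real hadoop-hdfs-rbf classes, but the method and its placement are 
illustrative, not the committed change:

{code:java}
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamenodeServiceState;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.hdfs.server.federation.store.records.MembershipState;

class ActiveNamespacesSketch {
  // Illustrative rebuild of the active-namespaces cache: skip UNAVAILABLE
  // registrations so their empty initial blockpool id cannot poison the cache.
  static Set<FederationNamespaceInfo> buildActiveNamespaces(
      List<MembershipState> registrations) {
    Set<FederationNamespaceInfo> active = new TreeSet<>();
    for (MembershipState member : registrations) {
      if (member.getState() == FederationNamenodeServiceState.UNAVAILABLE) {
        continue;
      }
      active.add(new FederationNamespaceInfo(member.getBlockPoolId(),
          member.getClusterId(), member.getNameserviceId()));
    }
    return active;
  }
}
{code}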



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331801#comment-17331801
 ] 

Viraj Jasani edited comment on HDFS-15982 at 4/26/21, 12:44 PM:


{quote}[~vjasani] I think [~weichiu] is trying to say that, along with the 
webhdfs API, we also have to consider the behavior of the httpfs (server) 
delete API.
{quote}
Oh I see, my bad. Yes, this is also taken care of 
[here|https://github.com/apache/hadoop/pull/2927/files#diff-635868fc4dbae8e9368423032df25338bcde0e9bad63e0a11e5e328db7dcfbccR261].


was (Author: vjasani):
{quote}[~vjasani] I think [~weichiu] is trying to say that, along with the 
webhdfs API, we also have to consider the behavior of the httpfs (server) 
delete API.
{quote}
Oh I see, my bad. Yes, this is also taken care of 
[here|https://github.com/apache/hadoop/pull/2927/files#diff-b8d1575f2afb5b04a56f4a43eb8ed4387fbef8ebe51c4503052413a2887cf96bR747].

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval has elapsed. Currently, data is removed from the system directly. 
> [This behavior should be the same as the CLI command.]
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.
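
A minimal sketch of the requested behavior on the server side, using the stock 
Hadoop Trash API; the skipTrash flag here is the option being proposed, not an 
existing WebHDFS parameter:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

class TrashAwareDelete {
  // Sketch: route an HTTP-initiated delete through trash unless the caller
  // explicitly asks to skip it (the proposed "Skip Trash" option).
  static boolean delete(FileSystem fs, Configuration conf, Path path,
      boolean recursive, boolean skipTrash) throws Exception {
    if (!skipTrash) {
      // Same as "hdfs dfs -rm": move to the user's trash directory and let
      // the configured trash interval handle the final removal.
      return Trash.moveToAppropriateTrash(fs, path, conf);
    }
    return fs.delete(path, recursive);
  }
}
{code}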



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15621) Datanode DirectoryScanner uses excessive memory

2021-04-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell resolved HDFS-15621.
--
Resolution: Fixed

> Datanode DirectoryScanner uses excessive memory
> ---
>
> Key: HDFS-15621
> URL: https://issues.apache.org/jira/browse/HDFS-15621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: Screenshot 2020-10-09 at 14.11.36.png, Screenshot 
> 2020-10-09 at 15.20.56.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We generally work to a rule of 1GB of heap on a datanode per 1M blocks. For 
> nodes with a lot of blocks, this can mean a lot of heap.
> We recently captured a heap dump of a DN with about 22M blocks and found only 
> about 1.5GB was occupied by the ReplicaMap. Another 9GB of the heap is taken 
> by the DirectoryScanner ScanInfo objects. Most of this memory was allocated 
> to strings.
> Checking the strings in question, we can see two strings per ScanInfo, 
> looking like:
> {code}
> /current/BP-671271071-10.163.205.13-1552020401842/current/finalized/subdir28/subdir17/blk_1180438785_106716708.meta
> {code}
> I will upload a screenshot from MAT showing this.
> For the first string especially, the part 
> "/current/BP-671271071-10.163.205.13-1552020401842/current/finalized/" will 
> be the same for every block in the block pool, as the scanner is only 
> concerned with finalized blocks.
> We can probably also store just the subdir indexes "28" and "17" rather than 
> "subdir28/subdir17" and then construct the path when it is requested via the 
> getter.
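
A minimal sketch of that last idea, with hypothetical field and method names; 
the real ScanInfo change may differ in detail:

{code:java}
// Instead of storing the full "subdir28/subdir17/..." suffix per block,
// keep two small ints and rebuild the path on demand in the getter.
class ScanInfoSketch {
  private final String poolFinalizedDir; // shared per block pool, stored once
  private final int subdir1;             // e.g. 28
  private final int subdir2;             // e.g. 17
  private final long blockId;
  private final long genStamp;

  ScanInfoSketch(String poolFinalizedDir, int subdir1, int subdir2,
      long blockId, long genStamp) {
    this.poolFinalizedDir = poolFinalizedDir;
    this.subdir1 = subdir1;
    this.subdir2 = subdir2;
    this.blockId = blockId;
    this.genStamp = genStamp;
  }

  String getMetaFilePath() {
    // Rebuilt lazily, so the per-replica footprint is a few primitives
    // rather than two long strings.
    return poolFinalizedDir + "/subdir" + subdir1 + "/subdir" + subdir2
        + "/blk_" + blockId + "_" + genStamp + ".meta";
  }
}
{code}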



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15621) Datanode DirectoryScanner uses excessive memory

2021-04-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15621:
-
Fix Version/s: 3.2.3
   3.1.5
   3.4.0
   3.3.1

> Datanode DirectoryScanner uses excessive memory
> ---
>
> Key: HDFS-15621
> URL: https://issues.apache.org/jira/browse/HDFS-15621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: Screenshot 2020-10-09 at 14.11.36.png, Screenshot 
> 2020-10-09 at 15.20.56.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We generally work to a rule of 1GB of heap on a datanode per 1M blocks. For 
> nodes with a lot of blocks, this can mean a lot of heap.
> We recently captured a heap dump of a DN with about 22M blocks and found only 
> about 1.5GB was occupied by the ReplicaMap. Another 9GB of the heap is taken 
> by the DirectoryScanner ScanInfo objects. Most of this memory was allocated 
> to strings.
> Checking the strings in question, we can see two strings per ScanInfo, 
> looking like:
> {code}
> /current/BP-671271071-10.163.205.13-1552020401842/current/finalized/subdir28/subdir17/blk_1180438785_106716708.meta
> {code}
> I will upload a screenshot from MAT showing this.
> For the first string especially, the part 
> "/current/BP-671271071-10.163.205.13-1552020401842/current/finalized/" will 
> be the same for every block in the block pool, as the scanner is only 
> concerned with finalized blocks.
> We can probably also store just the subdir indexes "28" and "17" rather than 
> "subdir28/subdir17" and then construct the path when it is requested via the 
> getter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?focusedWorklogId=589137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589137
 ]

ASF GitHub Bot logged work on HDFS-15624:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 11:38
Start Date: 26/Apr/21 11:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2955:
URL: https://github.com/apache/hadoop/pull/2955#issuecomment-826765476


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 29s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  17m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 51s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2955/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 407 unchanged - 2 fixed = 408 total (was 
409)  |
   | +1 :green_heart: |  mvnsite  |   3m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 44s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 225m 17s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2955/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  18m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2955/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 456m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2955/1/artifact/out/Dockerfile

[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=589133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589133
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 11:10
Start Date: 26/Apr/21 11:10
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-826748724


   > Merged to trunk. Thanks for your contribution, @tomscut!
   
   Thanks @tasanuma for your review and merge.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589133)
Time Spent: 2h 20m  (was: 2h 10m)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-04-26 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332000#comment-17332000
 ] 

Bhavik Patel commented on HDFS-15900:
-

[~hdaikoku] thank you for the commit link, but there is a UT failure in the 
"TestRouterRPCClientRetries" class:
{code:java}
[INFO] Running 
org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries
[ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 47.144 
s <<< FAILURE! - in 
org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries
[ERROR] 
testRetryWhenOneNameServiceDown(org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries)
  Time elapsed: 13.178 s  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:633)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries.testRetryWhenOneNameServiceDown(TestRouterRPCClientRetries.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

{code}


Can you please check and update?

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: image.png
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on the dfsrouter is 
> unintentionally reset to empty, its initial value.
>  !image.png|height=250!
> As a result, concat operations through the dfsrouter fail with the 
> following error, because the block pool id cannot be resolved against the 
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-04-26 Thread Harunobu Daikoku (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331990#comment-17331990
 ] 

Harunobu Daikoku commented on HDFS-15900:
-

[~bpatel] I believe [~tasanuma] has cherry-picked the commit to branch-3.1, so 
it's already there:
 
[https://github.com/apache/hadoop/commit/c2cd0ab237df7ba32767ff4633822b171d2c2f0a]

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: image.png
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on the dfsrouter is 
> unintentionally reset to empty, its initial value.
>  !image.png|height=250!
> As a result, concat operations through the dfsrouter fail with the 
> following error, because the block pool id cannot be resolved against the 
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15621) Datanode DirectoryScanner uses excessive memory

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15621?focusedWorklogId=589107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589107
 ]

ASF GitHub Bot logged work on HDFS-15621:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 10:00
Start Date: 26/Apr/21 10:00
Worklog Time Spent: 10m 
  Work Description: sodonnel merged pull request #2849:
URL: https://github.com/apache/hadoop/pull/2849


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589107)
Time Spent: 1h 20m  (was: 1h 10m)

> Datanode DirectoryScanner uses excessive memory
> ---
>
> Key: HDFS-15621
> URL: https://issues.apache.org/jira/browse/HDFS-15621
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2020-10-09 at 14.11.36.png, Screenshot 
> 2020-10-09 at 15.20.56.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We generally work to a rule of 1GB of heap on a datanode per 1M blocks. For 
> nodes with a lot of blocks, this can mean a lot of heap.
> We recently captured a heap dump of a DN with about 22M blocks and found only 
> about 1.5GB was occupied by the ReplicaMap. Another 9GB of the heap is taken 
> by the DirectoryScanner ScanInfo objects. Most of this memory was allocated 
> to strings.
> Checking the strings in question, we can see two strings per ScanInfo, 
> looking like:
> {code}
> /current/BP-671271071-10.163.205.13-1552020401842/current/finalized/subdir28/subdir17/blk_1180438785_106716708.meta
> {code}
> I will upload a screenshot from MAT showing this.
> For the first string especially, the part 
> "/current/BP-671271071-10.163.205.13-1552020401842/current/finalized/" will 
> be the same for every block in the block pool, as the scanner is only 
> concerned with finalized blocks.
> We can probably also store just the subdir indexes "28" and "17" rather than 
> "subdir28/subdir17" and then construct the path when it is requested via the 
> getter.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-04-26 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331978#comment-17331978
 ] 

Bhavik Patel commented on HDFS-15900:
-

[~hdaikoku] are you going to raise a PR for the 3.1 branch?

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: image.png
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on the dfsrouter is 
> unintentionally reset to empty, its initial value.
>  !image.png|height=250!
> As a result, concat operations through the dfsrouter fail with the 
> following error, because the block pool id cannot be resolved against the 
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=589098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589098
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 09:51
Start Date: 26/Apr/21 09:51
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-826690416


   I'm planning to backport HDFS-15358 to branch-3.3. After that, I will 
cherry-pick this PR to branch-3.3.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589098)
Time Spent: 2h 10m  (was: 2h)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-15991.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=589089&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589089
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 09:39
Start Date: 26/Apr/21 09:39
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-826681115


   Merged to trunk. Thanks for your contribution, @tomscut!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589089)
Time Spent: 2h  (was: 1h 50m)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13916) Distcp SnapshotDiff to support WebHDFS

2021-04-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331948#comment-17331948
 ] 

Hadoop QA commented on HDFS-13916:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red}{color} | {color:red} HDFS-13916 does not apply to trunk. Rebase 
required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13916 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976595/HDFS-13916.006.patch |
| Console output | 
https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/587/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Distcp SnapshotDiff to support WebHDFS
> --
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.006.patch, 
> HDFS-13916.patch
>
>
> [~ljain] worked on HDFS-13052 to make DistCp's SnapshotDiff possible with 
> WebHdfsFileSystem. However, the patch does not modify the actual Java class 
> that is used when launching the command "hadoop distcp ..."
>  
> You can check the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check that 
> the file system is DFS.
> So I propose to change the class DistCpSync to take into account what was 
> committed by Lokesh Jain.
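
A hedged sketch of the proposed relaxation of that check; the method shape is 
simplified and the names are illustrative, only DistributedFileSystem and 
WebHdfsFileSystem are the real classes:

{code:java}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.web.WebHdfsFileSystem;

class PreSyncCheckSketch {
  // Simplified stand-in for DistCpSync#preSyncCheck's filesystem test.
  static void checkFilesystemSupport(FileSystem srcFs, FileSystem tgtFs) {
    if (!supportsSnapshotDiff(srcFs) || !supportsSnapshotDiff(tgtFs)) {
      throw new IllegalArgumentException(
          "The source and target file systems must support snapshot diff: "
              + "DistributedFileSystem or WebHdfsFileSystem");
    }
  }

  static boolean supportsSnapshotDiff(FileSystem fs) {
    return fs instanceof DistributedFileSystem
        || fs instanceof WebHdfsFileSystem;
  }
}
{code}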



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=589088&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589088
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 09:38
Start Date: 26/Apr/21 09:38
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589088)
Time Spent: 1h 50m  (was: 1h 40m)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15982:
---

===Bulk update===

I am planning to cut the branch for the Hadoop 3.3.1 release, and this jira 
currently targets 3.3.1. Please take the time to review the patch, or push it 
out of 3.3.1 if you think it can't be finished in the next few weeks.

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval has elapsed. Currently, data is removed from the system directly. 
> [This behavior should be the same as the CLI command.]
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=589087&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589087
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 09:37
Start Date: 26/Apr/21 09:37
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-826679528


   Thanks for updating the PR, @tomscut.
   
   > IMO, the NameNode UI does not need to show NS. What do you think?
   
   I agree with you.
   The latest commit looks pretty good to me. And the failed tests aren't 
related.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589087)
Time Spent: 1h 40m  (was: 1.5h)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15788) Correct the statement for pmem cache to reflect cache persistence support

2021-04-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-15788:
---

===Bulk update===

I am planning to cut the branch for the Hadoop 3.3.1 release, and this jira 
currently targets 3.3.1. Please take the time to review the patch, or push it 
out of 3.3.1 if you think it can't be finished in the next few weeks.

> Correct the statement for pmem cache to reflect cache persistence support
> -
>
> Key: HDFS-15788
> URL: https://issues.apache.org/jira/browse/HDFS-15788
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.4.0
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Minor
> Attachments: HDFS-15788-01.patch, HDFS-15788-02.patch
>
>
> Correct the statement for pmem cache to reflect cache persistence support.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13916) Distcp SnapshotDiff to support WebHDFS

2021-04-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13916:
---

===Bulk update===

I am planning to cut the branch for the Hadoop 3.3.1 release, and this jira 
currently targets 3.3.1. Please take the time to review the patch, or push it 
out of 3.3.1 if you think it can't be finished in the next few weeks.

> Distcp SnapshotDiff to support WebHDFS
> --
>
> Key: HDFS-13916
> URL: https://issues.apache.org/jira/browse/HDFS-13916
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp, webhdfs
>Affects Versions: 3.0.1, 3.1.1
>Reporter: Xun REN
>Assignee: Xun REN
>Priority: Major
>  Labels: easyfix, newbie, patch
> Attachments: HDFS-13916.002.patch, HDFS-13916.003.patch, 
> HDFS-13916.004.patch, HDFS-13916.005.patch, HDFS-13916.006.patch, 
> HDFS-13916.patch
>
>
> [~ljain] worked on HDFS-13052 to make DistCp's SnapshotDiff possible with 
> WebHdfsFileSystem. However, the patch does not modify the actual Java class 
> that is used when launching the command "hadoop distcp ..."
>  
> You can check the latest version here:
> [https://github.com/apache/hadoop/blob/branch-3.1.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java#L96-L100]
> In the method "preSyncCheck" of the class "DistCpSync", we still check that 
> the file system is DFS.
> So I propose to change the class DistCpSync to take into account what was 
> committed by Lokesh Jain.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331928#comment-17331928
 ] 

Hadoop QA commented on HDFS-15853:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
47s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  4m 
25s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-mvninstall-root.txt{color}
 | {color:red} root in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-compile-root-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt{color}
 | {color:red} root in trunk failed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt{color}
 | {color:red} root in trunk failed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
29s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
35s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt{color}
 | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
 | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 12s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
49s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 32m  
6s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:red}-1{color} | {color:red} spotbugs {color} | {color:red}  0m 
23s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt{color}
 | {color:red} hadoop-common in trunk failed. {color} |
| {color:red}-1{color} | {color:red} spotbugs {color} | {color:red}  0m 
20s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt{color}
 | {color:red} hadoop-hdfs-client in trunk failed. {color} |
| {color:red}-1{color} | {color:red} spotbugs {color} | {color:red}  0m 
27s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/586/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in trunk failed. {color} |
|| || || || {color:brown} 

[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=589055&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589055
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 08:36
Start Date: 26/Apr/21 08:36
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-826631859


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  21m 12s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   4m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m  5s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m 39s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 5 new + 243 unchanged - 5 fixed = 248 total (was 
248)  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 11s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   8m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m  8s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 33s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 187m 22s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 377m 14s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.ftp.TestFTPFileSystem |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2940 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0806822b7565 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / c4c0469cbef81be6268bbac9a840009619e2d6b0 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 

[jira] [Commented] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331880#comment-17331880
 ] 

Yang Yun commented on HDFS-15853:
-

Refactored to HDFS-15853.002.patch

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> Slow IO warning threshold time is different for different StorageType, add 
> option to adjust it according to StorageType.
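
To make the proposal concrete, here is a minimal sketch of a per-StorageType
lookup that falls back to the existing global threshold key (the suffixed key
names are hypothetical, not the actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.StorageType;

public class SlowIoThresholds {
  // Existing global key; 30s is its current default.
  private static final String BASE_KEY =
      "dfs.client.slow.io.warning.threshold.ms";
  private static final long DEFAULT_MS = 30000;

  // Hypothetical per-type override, e.g.
  // "dfs.client.slow.io.warning.threshold.ms.archive" for ARCHIVE.
  static long getThreshold(Configuration conf, StorageType type) {
    long base = conf.getLong(BASE_KEY, DEFAULT_MS);
    return conf.getLong(BASE_KEY + "." + type.name().toLowerCase(), base);
  }
}
{code}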



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Status: Open  (was: Patch Available)

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> Slow IO warning threshold time is different for different StorageType, add 
> option to adjust it according to StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15853) Add option to adjust slow IO warning threshold time for different StorageType on DFSClient

2021-04-26 Thread Yang Yun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Yun updated HDFS-15853:

Attachment: HDFS-15853.002.patch
Status: Patch Available  (was: Open)

> Add option to adjust slow IO warning threshold time for different StorageType 
> on DFSClient
> --
>
> Key: HDFS-15853
> URL: https://issues.apache.org/jira/browse/HDFS-15853
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Minor
> Attachments: HDFS-15853.001.patch, HDFS-15853.002.patch
>
>
> Slow IO warning threshold time is different for different StorageType, add 
> option to adjust it according to StorageType.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13904) ContentSummary does not always respect processing limit, resulting in long lock acquisitions

2021-04-26 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331878#comment-17331878
 ] 

Qi Zhu commented on HDFS-13904:
---

Is this still going on?

I met this problem also.

Thanks.

> ContentSummary does not always respect processing limit, resulting in long 
> lock acquisitions
> 
>
> Key: HDFS-13904
> URL: https://issues.apache.org/jira/browse/HDFS-13904
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
>
> HDFS-4995 added a config {{dfs.content-summary.limit}} which allows for an 
> administrator to set a limit on the number of entries processed during a 
> single acquisition of the {{FSNamesystemLock}} during the creation of a 
> content summary. This is useful to prevent very long (multiple seconds) 
> pauses on the NameNode when {{getContentSummary}} is called on large 
> directories.
> However, even on versions with HDFS-4995, we have seen warnings like:
> {code}
> INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem read 
> lock held for 9398 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:950)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.readUnlock(FSNamesystemLock.java:188)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.readUnlock(FSNamesystem.java:1486)
> org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.yield(ContentSummaryComputationContext.java:109)
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeDirectoryContentSummary(INodeDirectory.java:679)
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeContentSummary(INodeDirectory.java:642)
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeDirectoryContentSummary(INodeDirectory.java:656)
> {code}
> happen quite consistently when {{getContentSummary}} was called on a large 
> directory on a heavily-loaded NameNode. Such long pauses completely destroy 
> the performance of the NameNode. We have the limit set to its default of 
> 5000; if it was respected, clearly there would not be a 10-second pause.
> The current {{yield()}} code within {{ContentSummaryComputationContext}} 
> looks like:
> {code}
>   public boolean yield() {
> // Are we set up to do this?
> if (limitPerRun <= 0 || dir == null || fsn == null) {
>   return false;
> }
> // Have we reached the limit?
> long currentCount = counts.getFileCount() +
> counts.getSymlinkCount() +
> counts.getDirectoryCount() +
> counts.getSnapshotableDirectoryCount();
> if (currentCount <= nextCountLimit) {
>   return false;
> }
> // Update the next limit
> nextCountLimit = currentCount + limitPerRun;
> boolean hadDirReadLock = dir.hasReadLock();
> boolean hadDirWriteLock = dir.hasWriteLock();
> boolean hadFsnReadLock = fsn.hasReadLock();
> boolean hadFsnWriteLock = fsn.hasWriteLock();
> // sanity check.
> if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
> hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
> fsn.getReadHoldCount() != 1) {
>   // cannot relinquish
>   return false;
> }
> // unlock
> dir.readUnlock();
> fsn.readUnlock("contentSummary");
> try {
>   Thread.sleep(sleepMilliSec, sleepNanoSec);
> } catch (InterruptedException ie) {
> } finally {
>   // reacquire
>   fsn.readLock();
>   dir.readLock();
> }
> yieldCount++;
> return true;
>   }
> {code}
> We believe that this check in particular is the culprit:
> {code}
> if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
> hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
> fsn.getReadHoldCount() != 1) {
>   // cannot relinquish
>   return false;
> }
> {code}
> The content summary computation will only relinquish the lock if it is 
> currently the _only_ holder of the lock. Given the high volume of read 
> requests on a heavily loaded NameNode, especially when unfair locking is 
> enabled, it is likely there may be another holder of the read lock performing 
> some short-lived operation. By refusing to give up the lock in this case, the 
> content summary computation ends up never relinquishing the lock.
> We propose to simply remove the readHoldCount checks from this {{yield()}}. 
> This should alleviate the case described above by giving up the read lock and 
> allowing other short-lived operations to complete (while the content summary 
> thread sleeps) so that the lock can finally be given up completely. This has 
> the drawback that sometimes, the content summary may give up the lock 
> 
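
For reference, a minimal sketch of the yield() check with the readHoldCount
conditions removed, as proposed above (illustrative, not the committed fix):

{code:java}
// Keep the lock-mode sanity checks, but no longer require this thread to
// be the only reader; other concurrent readers should not prevent yielding.
boolean hadDirReadLock = dir.hasReadLock();
boolean hadDirWriteLock = dir.hasWriteLock();
boolean hadFsnReadLock = fsn.hasReadLock();
boolean hadFsnWriteLock = fsn.hasWriteLock();
if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
    hadFsnWriteLock) {
  // cannot relinquish
  return false;
}
{code}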

[jira] [Work logged] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?focusedWorklogId=588997&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588997
 ]

ASF GitHub Bot logged work on HDFS-15624:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:12
Start Date: 26/Apr/21 07:12
Worklog Time Spent: 10m 
  Work Description: jojochuang opened a new pull request #2955:
URL: https://github.com/apache/hadoop/pull/2955


   1. puts NVDIMM at the end of the storage type enum to ensure
compatibility.
   2. adds a check to make sure the software layout version is satisfied
   
   Co-authored-by: su xu 
   Co-authored-by: huangtianhua 
   Co-authored-by: YaYun-Wang <34060507+yayun-w...@users.noreply.github.com>
   
   Signed-off-by: Mingliang Liu 
   Signed-off-by: Ayush Saxena 
   Signed-off-by: Vinayakumar B 
   
   This is the same PR as (#2377) except the layout version was updated to 
avoid conflict with HDFS-15666.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588997)
Time Spent: 10h  (was: 9h 50m)

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage Type NVDIMM, changes the ordinal() of the enum 
> of StorageType. And, setting the quota by storageType depends on the 
> ordinal(), therefore, it may cause the setting of quota to be invalid after 
> upgrade.
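> 
> A minimal sketch (with a hypothetical enum, not the real StorageType) of why
> appending the new value matters: anything persisted via ordinal() keeps its
> meaning only if existing ordinals never shift.
> {code:java}
> enum V1   { RAM_DISK, SSD, DISK, ARCHIVE }          // ordinals 0..3
> enum Bad  { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE }  // SSD shifts from 1 to 2
> enum Good { RAM_DISK, SSD, DISK, ARCHIVE, NVDIMM }  // old ordinals intact
> 
> public class OrdinalDemo {
>   public static void main(String[] args) {
>     // A quota edit persisted as ordinal 1 under V1 meant SSD; replayed
>     // against the "Bad" ordering it would be decoded as NVDIMM.
>     System.out.println(V1.SSD.ordinal());   // 1
>     System.out.println(Bad.SSD.ordinal());  // 2
>     System.out.println(Good.SSD.ordinal()); // 1
>   }
> }
> {code}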



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=588962&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588962
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:07
Start Date: 26/Apr/21 07:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-826298516


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 231m 11s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2933/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  18m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2933/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 363m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2933/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2933 |
   | Optional Tests | dupname asflicense shadedclient codespell compile javac 
javadoc mvninstall mvnsite unit spotbugs checkstyle |
   | uname | Linux a4e3f5406bb7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 

[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=588950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588950
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:05
Start Date: 26/Apr/21 07:05
Worklog Time Spent: 10m 
  Work Description: fengnanli opened a new pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588950)
Time Spent: 1h 40m  (was: 1.5h)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE:
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  
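> 
> A minimal sketch of a fail-fast guard for this situation (illustrative, not
> the actual fix): validate the RPC address before building the NameNode proxy
> so a missing address produces a clear error instead of an
> UnknownHostException for host "null".
> {code:java}
> import java.net.InetSocketAddress;
> 
> public class RpcAddressCheck {
>   static InetSocketAddress checkRpcAddress(String host, int port) {
>     if (host == null || "null".equals(host)) {
>       throw new IllegalStateException(
>           "NameNode RPC address is not configured for this nameservice");
>     }
>     InetSocketAddress addr = new InetSocketAddress(host, port);
>     if (addr.isUnresolved()) {
>       throw new IllegalStateException(
>           "Cannot resolve NameNode host: " + host);
>     }
>     return addr;
>   }
> }
> {code}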



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15978) Solve DatanodeManager#getBlockRecoveryCommand() printing IOException

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15978?focusedWorklogId=588947&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588947
 ]

ASF GitHub Bot logged work on HDFS-15978:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:05
Start Date: 26/Apr/21 07:05
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao merged pull request #2913:
URL: https://github.com/apache/hadoop/pull/2913


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588947)
Time Spent: 1h 20m  (was: 1h 10m)

> Solve DatanodeManager#getBlockRecoveryCommand() printing IOException
> 
>
> Key: HDFS-15978
> URL: https://issues.apache.org/jira/browse/HDFS-15978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: screenshot-1.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When calling DatanodeManager#getBlockRecoveryCommand(), if an IOException 
> occurs, a space is missing when the BlockInfo information is printed, which 
> makes the log hard to read.
> BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
>if(uc == null) {
>  throw new IOException("Recovery block "+ b +
>  "where it is not under construction.");
>}
>  !screenshot-1.png! 
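> 
> The fix described is essentially a one-character change; a sketch of the
> corrected message (note the leading space before "where"):
> {code:java}
> BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
> if (uc == null) {
>   throw new IOException("Recovery block " + b +
>       " where it is not under construction.");
> }
> {code}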



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=588941&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588941
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:04
Start Date: 26/Apr/21 07:04
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-826081101


   Thanx @jianghuazhu for the changes.
   There are some checkstyle warnings reported by Jenkins.
   Can you address them:
   
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2831/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
   
   +1, once addressed 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588941)
Time Spent: 3h  (was: 2h 50m)

> Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be 
> configured
> --
>
> Key: HDFS-15920
> URL: https://issues.apache.org/jira/browse/HDFS-15920
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The current SafeModeMonitor#RECHECK_INTERVAL is hard-coded (=1000); this 
> value should be configurable, because the monitor holds a lock internally 
> and competes with other operations for it.
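> 
> A minimal sketch of what "configurable" could look like (the key name below
> is hypothetical, not the committed one):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class SafeModeRecheck {
>   static final String RECHECK_KEY =
>       "dfs.namenode.safemode.recheck.interval.ms";  // hypothetical key
>   static final long RECHECK_DEFAULT_MS = 1000;      // the old constant
> 
>   static long getRecheckIntervalMs(Configuration conf) {
>     return conf.getLong(RECHECK_KEY, RECHECK_DEFAULT_MS);
>   }
> }
> {code}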



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=588918&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588918
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 07:02
Start Date: 26/Apr/21 07:02
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-826308838


   @ayushtkn , thank you very much for your attention.
   I have submitted some new code and tried to solve these problems.
   Could you give me some suggestions again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588918)
Time Spent: 2h 50m  (was: 2h 40m)

> Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be 
> configured
> --
>
> Key: HDFS-15920
> URL: https://issues.apache.org/jira/browse/HDFS-15920
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The current SafeModeMonitor#RECHECK_INTERVAL is hard-coded (=1000); this 
> value should be configurable, because the monitor holds a lock internally 
> and competes with other operations for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=588861&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588861
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:55
Start Date: 26/Apr/21 06:55
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-826503495


   > Could rebase on branch-3.3 and force-push it to clean commit log?
   
   Hi @tasanuma . I'm sorry, I just saw the message today. I have rebased and 
committed it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588861)
Time Spent: 5h 20m  (was: 5h 10m)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When counting some metrics, we can use LongAdder instead of AtomicLong to 
> improve performance. The value read from a LongAdder is not an atomic 
> snapshot, but I think we can tolerate that.
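> 
> A minimal sketch of the trade-off described: LongAdder stripes contended
> increments across cells, so updates scale better than a single AtomicLong
> under many writer threads, while sum() is not an atomic snapshot.
> {code:java}
> import java.util.concurrent.atomic.AtomicLong;
> import java.util.concurrent.atomic.LongAdder;
> 
> public class CounterDemo {
>   private static final AtomicLong atomicCounter = new AtomicLong();
>   private static final LongAdder adderCounter = new LongAdder();
> 
>   public static void main(String[] args) {
>     atomicCounter.incrementAndGet(); // one CAS target; contended under load
>     adderCounter.increment();        // striped; cheap with many threads
>     System.out.println(atomicCounter.get() + " " + adderCounter.sum());
>   }
> }
> {code}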



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=588834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588834
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:52
Start Date: 26/Apr/21 06:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-826397946






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588834)
Time Spent: 3h 50m  (was: 3h 40m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval has elapsed. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.
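> 
> A minimal sketch of the idea (not the actual PR): route the server-side
> delete through the existing org.apache.hadoop.fs.Trash helper, the same path
> the CLI's "hdfs dfs -rm" takes, unless the caller asked to skip it. The
> skipTrash flag here is illustrative.
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.fs.Trash;
> 
> public class TrashAwareDelete {
>   static boolean delete(FileSystem fs, Path p, boolean recursive,
>       boolean skipTrash, Configuration conf) throws IOException {
>     if (!skipTrash && Trash.moveToAppropriateTrash(fs, p, conf)) {
>       return true; // moved under .Trash; reclaimable until the interval expires
>     }
>     return fs.delete(p, recursive);
>   }
> }
> {code}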



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=588840&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588840
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:52
Start Date: 26/Apr/21 06:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-826296992


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  13m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 34s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 12s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2954 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
   | uname | Linux ceadae4ebf44 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4d83dc44ee06d94d12dc0d648be906d7df2072fb |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/1/testReport/ |
   | Max. process+thread count | 3310 (vs. ulimit of 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=588827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588827
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:51
Start Date: 26/Apr/21 06:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-826427230


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  23m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   3m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m  8s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   7m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 5 new + 221 unchanged - 4 fixed = 
226 total (was 225)  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   7m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 207m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 14s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  15m  1s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 371m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient 

[jira] [Work logged] (HDFS-15978) Solve DatanodeManager#getBlockRecoveryCommand() printing IOException

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15978?focusedWorklogId=588817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588817
 ]

ASF GitHub Bot logged work on HDFS-15978:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:49
Start Date: 26/Apr/21 06:49
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #2913:
URL: https://github.com/apache/hadoop/pull/2913#issuecomment-826265146


   @Hexiaoqiao , could you give some suggestions?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588817)
Time Spent: 1h 10m  (was: 1h)

> Solve DatanodeManager#getBlockRecoveryCommand() printing IOException
> 
>
> Key: HDFS-15978
> URL: https://issues.apache.org/jira/browse/HDFS-15978
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: screenshot-1.png
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When calling DatanodeManager#getBlockRecoveryCommand(), if an IOException 
> occurs, a space is missing when the BlockInfo information is printed, which 
> makes the log hard to read.
> BlockUnderConstructionFeature uc = b.getUnderConstructionFeature();
>if(uc == null) {
>  throw new IOException("Recovery block "+ b +
>  "where it is not under construction.");
>}
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=588789&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588789
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:47
Start Date: 26/Apr/21 06:47
Worklog Time Spent: 10m 
  Work Description: vinayakumarb commented on a change in pull request 
#2767:
URL: https://github.com/apache/hadoop/pull/2767#discussion_r619602510



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestProtoBufRpc.java
##
@@ -179,10 +281,41 @@ public void testProtoBufRpc2() throws Exception {
 MetricsRecordBuilder rpcDetailedMetrics = 
 getMetrics(server.getRpcDetailedMetrics().name());
 assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
+
+if (testWithLegacy) {
+  testProtobufLegacy();
+}
+  }
+
+  private void testProtobufLegacy()
+  throws IOException, com.google.protobuf.ServiceException {
+TestRpcService2Legacy client = getClientLegacy();
+
+// Test ping method
+client.ping2(null, 
TestProtosLegacy.EmptyRequestProto.newBuilder().build());
+
+// Test echo method
+TestProtosLegacy.EchoResponseProto echoResponse = client.echo2(null,
+TestProtosLegacy.EchoRequestProto.newBuilder().setMessage("hello")
+.build());
+assertThat(echoResponse.getMessage()).isEqualTo("hello");
+
+// Ensure RPC metrics are updated
+MetricsRecordBuilder rpcMetrics = 
getMetrics(server.getRpcMetrics().name());
+assertCounterGt("RpcQueueTimeNumOps", 0L, rpcMetrics);
+assertCounterGt("RpcProcessingTimeNumOps", 0L, rpcMetrics);
+
+MetricsRecordBuilder rpcDetailedMetrics =
+getMetrics(server.getRpcDetailedMetrics().name());
+assertCounterGt("Echo2NumOps", 0L, rpcDetailedMetrics);
   }
 
   @Test (timeout=5000)
   public void testProtoBufRandomException() throws Exception {
+if (testWithLegacy) {
+  //No test with legacy
+  return;
+}

Review comment:
   Updated. Please check.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588789)
Time Spent: 2h 50m  (was: 2h 40m)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some stuff in Apache Hive 
> project.  This was not an awesome thing to do between minor versions in 
> regards to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements, they have 
> some differences.  Also, Protobuf 2 is not deprecated or anything so let us 
> have both protocols available at the same time.  In Hadoop 4.x Protobuf 2 
> support can be dropped.
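> 
> A minimal sketch of the coexistence idea (illustrative, not the patch):
> each protocol selects its own RPC engine, so stubs generated with protobuf
> 2.x and stubs using the shaded protobuf 3.x can be served side by side.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.ipc.ProtobufRpcEngine;   // com.google.protobuf (2.x)
> import org.apache.hadoop.ipc.ProtobufRpcEngine2;  // shaded protobuf (3.x)
> import org.apache.hadoop.ipc.RPC;
> 
> public class EngineSelection {
>   static void register(Configuration conf,
>       Class<?> legacyProtocol, Class<?> newProtocol) {
>     RPC.setProtocolEngine(conf, legacyProtocol, ProtobufRpcEngine.class);
>     RPC.setProtocolEngine(conf, newProtocol, ProtobufRpcEngine2.class);
>   }
> }
> {code}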



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15920) Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be configured

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15920?focusedWorklogId=588773&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588773
 ]

ASF GitHub Bot logged work on HDFS-15920:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:44
Start Date: 26/Apr/21 06:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2831:
URL: https://github.com/apache/hadoop/pull/2831#issuecomment-826142428






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588773)
Time Spent: 2h 40m  (was: 2.5h)

> Solve the problem that the value of SafeModeMonitor#RECHECK_INTERVAL can be 
> configured
> --
>
> Key: HDFS-15920
> URL: https://issues.apache.org/jira/browse/HDFS-15920
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The current SafeModeMonitor#RECHECK_INTERVAL is hard-coded (=1000); this 
> value should be configurable, because the monitor holds a lock internally 
> and competes with other operations for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist

2021-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=588760&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-588760
 ]

ASF GitHub Bot logged work on HDFS-15790:
-

Author: ASF GitHub Bot
Created on: 26/Apr/21 06:43
Start Date: 26/Apr/21 06:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2767:
URL: https://github.com/apache/hadoop/pull/2767#issuecomment-826069768






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 588760)
Time Spent: 2h 40m  (was: 2.5h)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> --
>
> Key: HDFS-15790
> URL: https://issues.apache.org/jira/browse/HDFS-15790
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some stuff in Apache Hive 
> project.  This was not an awesome thing to do between minor versions in 
> regards to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements, they have 
> some differences.  Also, Protobuf 2 is not deprecated or anything so let us 
> have both protocols available at the same time.  In Hadoop 4.x Protobuf 2 
> support can be dropped.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-26 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331801#comment-17331801
 ] 

Viraj Jasani commented on HDFS-15982:
-

{quote}[~vjasani] I think [~weichiu] is trying to tell like along with webhdfs 
API we have to also consider the behavior of httpfs(server) delete API.
{quote}
Oh I see, my bad. Yes, this is also taken care of 
[here|https://github.com/apache/hadoop/pull/2927/files#diff-b8d1575f2afb5b04a56f4a43eb8ed4387fbef8ebe51c4503052413a2887cf96bR747].

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval has elapsed. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-26 Thread Stone (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331776#comment-17331776
 ] 

Stone commented on HDFS-7285:
-

[~zhz]

I have no permission to upload an attachment. Please visit the CSDN page instead.

The URL of the image: [https://img-ask.csdnimg.cn/upload/1619363340018.png]

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrifice 
> of data reliability, comparing to the existing HDFS 3-replica approach. For 
> example, if we use a 10+4 Reed Solomon coding, we can allow loss of 4 blocks, 
> with storage overhead only being 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contribute packages in HDFS but had been removed since Hadoop 2.0 
> for maintain reason. The drawbacks are: 1) it is on top of HDFS and depends 
> on MapReduce to do encoding and decoding tasks; 2) it can only be used for 
> cold files that are intended not to be appended anymore; 3) the pure Java EC 
> coding implementation is extremely slow in practical use. Due to these, it 
> might not be a good idea to just bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, makes it self-contained and 
> independently maintained. This design lays the EC feature on the storage type 
> support and considers compatible with existing HDFS features like caching, 
> snapshot, encryption, high availability and etc. This design will also 
> support different EC coding schemes, implementations and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. Intel 
> ISA-L library), an implementation can greatly improve the performance of EC 
> encoding/decoding and makes the EC solution even more attractive. We will 
> post the design document soon. 
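> 
> To make the overhead comparison concrete: with RS(10,4), 10 data blocks 
> carry 4 parity blocks, so the overhead is 4/10 = 40% and any 4 of the 14 
> blocks in a group may be lost; 3-way replication stores 2 extra copies per 
> block (200% overhead) yet tolerates the loss of only 2 replicas.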



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org