[jira] [Work logged] (HDFS-15818) Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15818?focusedWorklogId=548930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548930
 ]

ASF GitHub Bot logged work on HDFS-15818:
-

Author: ASF GitHub Bot
Created on: 06/Feb/21 00:13
Start Date: 06/Feb/21 00:13
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2679:
URL: https://github.com/apache/hadoop/pull/2679#issuecomment-774355339


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  12m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 245m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2679/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 332m 38s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.TestEncryptionZones |
   |   | hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.TestInjectionForSimulatedStorage |
   |   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.namenode.TestCacheDirectivesWithViewDFS |
   |   | hadoop.hdfs.TestErasureCodingExerciseAPIs |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |

[jira] [Updated] (HDFS-15813) DataStreamer: keep sending heartbeat packets while streaming

2021-02-05 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HDFS-15813:
---
Fix Version/s: 3.2.3
   3.1.5
   3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~daryn] for the contribution, and [~kihwal] for the reviews.
I have committed this to trunk through branch-3.1.


> DataStreamer: keep sending heartbeat packets while streaming
> 
>
> Key: HDFS-15813
> URL: https://issues.apache.org/jira/browse/HDFS-15813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: HDFS-15813.001.patch, HDFS-15813.002.patch, 
> HDFS-15813.003.patch, HDFS-15813.004.patch
>
>
> In response to [HDFS-5032], [~daryn] made a change to our internal code to
> ensure that heartbeats continue during data streaming, even in the face of a
> slow disk.
> As [~kihwal] noted, the absence of heartbeats during flush will be fixed in a
> separate jira.  It doesn't look like this change was ever pushed back to
> Apache, so I am providing it here.
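For illustration, here is a minimal sketch of the technique being described, assuming a hypothetical streamer with a packet queue; the class and method names are illustrative only, not the actual DataStreamer code from the patch:

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

// Hedged sketch only: shows the "keep sending heartbeats while waiting"
// pattern, not the real org.apache.hadoop.hdfs.DataStreamer implementation.
class HeartbeatingStreamerSketch {
  private final Queue<byte[]> dataQueue = new ArrayDeque<>();
  private volatile boolean closed = false;
  private final long heartbeatIntervalMs = 30_000;

  // Called by the streamer thread while it waits for packets. Even if a slow
  // disk keeps the queue empty for a long time, a heartbeat packet goes out
  // whenever the interval elapses, so the pipeline never appears idle.
  void waitForWorkWithHeartbeats() throws InterruptedException {
    long lastHeartbeat = System.currentTimeMillis();
    synchronized (dataQueue) {
      while (dataQueue.isEmpty() && !closed) {
        long waited = System.currentTimeMillis() - lastHeartbeat;
        if (waited >= heartbeatIntervalMs) {
          sendHeartbeatPacket();                       // keep the pipeline alive
          lastHeartbeat = System.currentTimeMillis();
          waited = 0;
        }
        dataQueue.wait(heartbeatIntervalMs - waited);  // wake in time for next beat
      }
    }
  }

  private void sendHeartbeatPacket() {
    // In the real client this writes a special heartbeat packet
    // (seqno -1, no data) down the DataNode pipeline.
  }
}
{code}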






[jira] [Work logged] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15820?focusedWorklogId=548884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548884
 ]

ASF GitHub Bot logged work on HDFS-15820:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 22:33
Start Date: 05/Feb/21 22:33
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #2682:
URL: https://github.com/apache/hadoop/pull/2682#issuecomment-774324533


   Unrelated failures. All flaky tests passed locally for me. Will merge 
shortly.





Issue Time Tracking
---

Worklog Id: (was: 548884)
Time Spent: 1h 20m  (was: 1h 10m)

> Ensure snapshot root trash provisioning happens only post safe mode exit
> 
>
> Key: HDFS-15820
> URL: https://issues.apache.org/jira/browse/HDFS-15820
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, on namenode startup, snapshot trash root provisioning starts along
> with the trash emptier service, but the namenode might not be out of safe mode
> by then. This can fail the snapshot trash dir creation, thereby crashing the
> namenode. The idea here is to trigger snapshot trash provisioning only after
> safe mode exit.
> {code:java}
> 2021-02-04 11:23:47,323 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring 
> NN shutdown. Shutting down immediately.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /upgrade/.Trash. Name node is in safe mode.
> The reported blocks 0 needs additional 1383 blocks to reach the threshold 
> 0.9990 of total blocks 1385.
> The number of live datanodes 0 needs an additional 1 live datanodes to reach 
> the minimum number 1.
> Safe mode will be turned off automatically once the thresholds have been 
> reached. NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:967)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:936)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1740)
> 2021-02-04 11:23:47,334 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
> create directory /upgrade/.Trash. Name node is in safe mode.
> {code}
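For illustration, a hedged sketch of the proposed ordering; checkAndProvisionSnapshotTrashRoots() is the real method from the stack trace above, but the surrounding hooks and the flag are assumptions made for the sketch:

{code:java}
// Hedged sketch only, not the actual FSNamesystem change.
class SnapshotTrashProvisioningSketch {
  private volatile boolean inSafeMode = true;
  private final boolean snapshotTrashRootEnabled = true;

  // Before: provisioning ran unconditionally during startActiveServices(),
  // where the NameNode may still be in safe mode, so the mkdirs for the
  // .Trash directories threw SafeModeException and crashed the NameNode.
  void startActiveServices() {
    startTrashEmptier();
    // Deliberately no trash provisioning here any more.
  }

  // After: provisioning is triggered from the safe-mode-exit path, where
  // creating the .Trash directories is guaranteed to be allowed.
  void onSafeModeExit() {
    inSafeMode = false;
    if (snapshotTrashRootEnabled) {
      checkAndProvisionSnapshotTrashRoots();
    }
  }

  private void startTrashEmptier() { /* illustrative stub */ }
  private void checkAndProvisionSnapshotTrashRoots() { /* illustrative stub */ }
}
{code}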






[jira] [Work logged] (HDFS-15811) completeFile should log final file size

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15811?focusedWorklogId=548878&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548878
 ]

ASF GitHub Bot logged work on HDFS-15811:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 22:18
Start Date: 05/Feb/21 22:18
Worklog Time Spent: 10m 
  Work Description: zehaoc2 commented on a change in pull request #2670:
URL: https://github.com/apache/hadoop/pull/2670#discussion_r571280969



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -3146,23 +3148,30 @@ INodeFile checkLease(INodesInPath iip, String holder, long fileId)
   boolean completeFile(final String src, String holder,
       ExtendedBlock last, long fileId)
       throws IOException {
+    final String operationName = CMD_COMPLETE_FILE;
     boolean success = false;
+    FileStatus stat = null;
     checkOperation(OperationCategory.WRITE);
     final FSPermissionChecker pc = getPermissionChecker();
     FSPermissionChecker.setOperationType(null);
     writeLock();
     try {
       checkOperation(OperationCategory.WRITE);
       checkNameNodeSafeMode("Cannot complete file " + src);
-      success = FSDirWriteFileOp.completeFile(this, pc, src, holder, last,
+      INodesInPath iip = dir.resolvePath(pc, src, fileId);
+      success = FSDirWriteFileOp.completeFile(this, iip, src, holder, last,
           fileId);
+      if (success) {
+        stat = dir.getAuditFileInfo(iip);
+      }
     } finally {
-      writeUnlock("completeFile");
+      writeUnlock(operationName);

Review comment:
   Sorry, I should change this to "complete" instead of "close". I changed 
this because the audit log cmd names usually mimic the client API names rather 
than the RPC method names. For instance, the RPC method "startFile" is audit 
logged as cmd "create". 







Issue Time Tracking
---

Worklog Id: (was: 548878)
Time Spent: 40m  (was: 0.5h)

> completeFile should log final file size
> ---
>
> Key: HDFS-15811
> URL: https://issues.apache.org/jira/browse/HDFS-15811
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zehao Chen
>Assignee: Zehao Chen
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Jobs, particularly Hive queries by non-headless users, can create an
> excessive number of files (many hundreds of thousands). A single user's query
> can generate a sustained burst of 60-80% of all creates for tens of minutes
> or more and impact overall cluster performance. Adding the file size to the
> log line allows us to identify excessively tiny or large files.
>  
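As a rough illustration of the intended log line (the logger and format below are assumptions for the sketch; the committed change wires a FileStatus into the existing completeFile path instead):

{code:java}
import java.util.logging.Logger;

// Hedged sketch only: shows why the final length belongs in the log line.
class CompleteFileLogSketch {
  private static final Logger LOG = Logger.getLogger("FSNamesystem.audit");

  static void logCompleteFile(String src, String holder, long finalLen) {
    // With the size present, a sustained burst of tiny (or huge) files from
    // a single user's job can be spotted from the NameNode log alone.
    LOG.info("cmd=complete src=" + src + " holder=" + holder
        + " size=" + finalLen);
  }
}
{code}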






[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279994#comment-17279994
 ] 

Stephen O'Donnell commented on HDFS-15792:
--

If we commit a patch with checkstyle issues, then I think it causes all further 
patches on the branch to flag a checkstyle warning. Therefore we will need to 
figure out a way around this if we want the change on 2.10.

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15792-branch-2.10.001.patch, 
> HDFS-15792-branch-2.10.002.patch, HDFS-15792-branch-2.10.003.patch, 
> HDFS-15792.001.patch, HDFS-15792.002.patch, HDFS-15792.003.patch, 
> HDFS-15792.004.patch, HDFS-15792.005.patch, HDFS-15792.addendum.001.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading has failed with ClassCastException -
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios.
> The same issue has been reported against Java and closed as a usage issue -
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)

[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279790#comment-17279790
 ] 

Renukaprasad C commented on HDFS-15792:
---

Thanks [~hexiaoqiao],
I applied this patch locally too and tried to compile it locally (JDK 8); the
compilation failed with the below error.

[INFO] Checking unresolved references to org.codehaus.mojo.signature:java17:1.0
[ERROR] 
D:\Hadoop\Code\hadoop_OS\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\util\ReferenceCountMap.java:77:
 Undefined reference: java.util.concurrent.ConcurrentHashMap.KeySetView
[ERROR] 
D:\Hadoop\Code\hadoop_OS\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\util\ReferenceCountMap.java:77:
 Undefined reference: java.util.concurrent.ConcurrentHashMap.KeySetView 
java.util.concurrent.ConcurrentHashMap.keySet()

This is due to compatibility changes between JDK 7 and JDK 8.
ConcurrentHashMap.KeySetView was introduced in JDK 8, but when we compile the
code with javac.version=1.7 we get this problem.

Whereas if we compile on JDK 7, we don't get this error. I don't think we can
go with this approach. This is not even a JDK bug; rather, it is a usability
issue with our environment and scripts. So I don't see any other solution for
this except putting back HashMap and synchronizing the code ourselves.

Since this method is used only in tests, there is no harm in the changes done.
My opinion is to ignore the checkstyle issue and continue with this
workaround - HDFS-15792-branch-2.10.002.patch.

We can discuss if there are any other suggestions.

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15792-branch-2.10.001.patch, 
> HDFS-15792-branch-2.10.002.patch, HDFS-15792-branch-2.10.003.patch, 
> HDFS-15792.001.patch, HDFS-15792.002.patch, HDFS-15792.003.patch, 
> HDFS-15792.004.patch, HDFS-15792.005.patch, HDFS-15792.addendum.001.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading has failed with ClassCastException -
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios.
> The same issue has been reported against Java and closed as a usage issue -
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> 

[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279755#comment-17279755
 ] 

Stephen O'Donnell commented on HDFS-15792:
--

I think the changes for 2.10 look good. However I am not sure the build error 
can be ignored. Are you referring to this error in mvninstall?

{code}
[ERROR] 
/home/jenkins/jenkins-home/workspace/PreCommit-HDFS-Build/sourcedir/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ReferenceCountMap.java:77:
 Undefined reference: java.util.concurrent.ConcurrentHashMap.KeySetView
[ERROR] 
/home/jenkins/jenkins-home/workspace/PreCommit-HDFS-Build/sourcedir/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ReferenceCountMap.java:77:
 Undefined reference: java.util.concurrent.ConcurrentHashMap.KeySetView 
java.util.concurrent.ConcurrentHashMap.keySet()
{code}

It looks like the definition of ConcurrentHashMap.keySet() has changed between 
Java 7 and 8:

https://stackoverflow.com/a/47836829/88839

So that probably has something to do with the problem, but I am not sure what
the answer is!
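One commonly used workaround for this mismatch is to reference the map through the Map interface, so keySet() binds to Map.keySet() and returns Set in both Java 7 and Java 8. A sketch of that idea (offered for illustration, not necessarily the patch that was committed):

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class KeySetViewWorkaroundSketch {
  // Typed as Map, not ConcurrentHashMap: keySet() then resolves to
  // Map.keySet() returning Set<K>, which exists in the Java 7 signature,
  // instead of the JDK 8 covariant override that returns
  // ConcurrentHashMap.KeySetView (absent from Java 7).
  private final Map<String, Integer> references = new ConcurrentHashMap<>();

  Set<String> keys() {
    return references.keySet(); // compiles cleanly under -source 1.7 with JDK 8
  }
}
{code}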

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15792-branch-2.10.001.patch, 
> HDFS-15792-branch-2.10.002.patch, HDFS-15792-branch-2.10.003.patch, 
> HDFS-15792.001.patch, HDFS-15792.002.patch, HDFS-15792.003.patch, 
> HDFS-15792.004.patch, HDFS-15792.005.patch, HDFS-15792.addendum.001.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading has failed with ClassCastException -
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios.
> The same issue has been reported against Java and closed as a usage issue -
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to
> java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> 

[jira] [Commented] (HDFS-15308) TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently

2021-02-05 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279753#comment-17279753
 ] 

Jim Brennan commented on HDFS-15308:


Still seeing similar failures in pre-commit builds.  I filed [HDFS-15823] for 
those.

> TestReconstructStripedFile#testNNSendsErasureCodingTasks fails intermittently
> -
>
> Key: HDFS-15308
> URL: https://issues.apache.org/jira/browse/HDFS-15308
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.3.0
>Reporter: Toshihiko Uchida
>Assignee: Hemanth Boyina
>Priority: Major
>  Labels: flaky-test
> Fix For: 3.4.0
>
> Attachments: HDFS-15308.001.patch, HDFS-15308.002.patch
>
>
> In HDFS-14353, TestReconstructStripedFile.testNNSendsErasureCodingTasks 
> failed once due to pending reconstruction timeout as follows.
> {code}
> java.lang.AssertionError: Found 4 timeout pending reconstruction tasks
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:502)
>   at 
> org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:458)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> The error occurred on the following assertion.
> {code}
> // Make sure that all pending reconstruction tasks can be processed.
> while (ns.getPendingReconstructionBlocks() > 0) {
>   long timeoutPending = ns.getNumTimedOutPendingReconstructions();
>   assertTrue(String.format("Found %d timeout pending reconstruction tasks",
>   timeoutPending), timeoutPending == 0);
>   Thread.sleep(1000);
> }
> {code}
> The failure could not be reproduced in the reporter's docker environment 
> (start-build-environment.sh).






[jira] [Commented] (HDFS-15813) DataStreamer: keep sending heartbeat packets while streaming

2021-02-05 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279751#comment-17279751
 ] 

Jim Brennan commented on HDFS-15813:


Thanks for the review [~kihwal]!
Jira for {{TestUnderReplicatedBlocks#testSetRepIncWithUnderReplicatedBlocks}} : 
[HDFS-9243]
Jira for {{TestNameNodeMXBean.testDecommissioningNodes}}: [HDFS-15411]
I filed Jira [HDFS-15823] for {{TestReconstructStripedFileWithRandomECPolicy}}. 
 It looks like an attempt was made to fix this bug in [HDFS-15308], but it is 
still happening in some cases.


> DataStreamer: keep sending heartbeat packets while streaming
> 
>
> Key: HDFS-15813
> URL: https://issues.apache.org/jira/browse/HDFS-15813
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: HDFS-15813.001.patch, HDFS-15813.002.patch, 
> HDFS-15813.003.patch, HDFS-15813.004.patch
>
>
> In response to [HDFS-5032], [~daryn] made a change to our internal code to
> ensure that heartbeats continue during data streaming, even in the face of a
> slow disk.
> As [~kihwal] noted, the absence of heartbeats during flush will be fixed in a
> separate jira.  It doesn't look like this change was ever pushed back to
> Apache, so I am providing it here.






[jira] [Created] (HDFS-15823) TestReconstructStripedFileWithRandomECPolicy failures

2021-02-05 Thread Jim Brennan (Jira)
Jim Brennan created HDFS-15823:
--

 Summary: TestReconstructStripedFileWithRandomECPolicy failures
 Key: HDFS-15823
 URL: https://issues.apache.org/jira/browse/HDFS-15823
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.4.0
Reporter: Jim Brennan


Seeing this failure in recent pre-commit builds:
{noformat}
[ERROR] Tests run: 17, Failures: 1, Errors: 0, Skipped: 1, Time elapsed: 
172.162 s <<< FAILURE! - in 
org.apache.hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy
[ERROR] 
testNNSendsErasureCodingTasks(org.apache.hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy)
  Time elapsed: 33.458 s  <<< FAILURE!
java.lang.AssertionError: Found 3 timeout pending reconstruction tasks 
expected:<0> but was:<3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:507)
at 
org.apache.hadoop.hdfs.TestReconstructStripedFile.testNNSendsErasureCodingTasks(TestReconstructStripedFile.java:463)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{noformat}
This same bug was fixed in [HDFS-15308], but it still appears to be failing for 
TestReconstructStripedFileWithRandomECPolicy.







[jira] [Commented] (HDFS-15822) Client retry mechanism may invalid when use hedgedRead

2021-02-05 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279746#comment-17279746
 ] 

Kihwal Lee commented on HDFS-15822:
---

Hedged read has been known to be buggy. When there is an exception in one
datanode, it does not recover well.  Multiple jiras have been filed in the past
regarding its flaws, e.g. HDFS-10597, HDFS-12971 and HDFS-15407. See if your
patch addresses the issues described there. You can dupe this Jira if you think
your change covers it.

> Client retry mechanism may invalid when use hedgedRead
> --
>
> Key: HDFS-15822
> URL: https://issues.apache.org/jira/browse/HDFS-15822
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Major
> Attachments: HDFS-15822.001.patch
>
>
> Hedged read uses ignoreNodes to ensure that multiple requests fall on
> different nodes, but the ignoreNodes set is never cleared. So if all requests
> in the 1st round fail and the refetched locations are unchanged, the HDFS
> client will not request the nodes that are already in ignoreNodes. It just
> sleeps again and again until it reaches the retry limit, then throws an
> exception.
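A hedged sketch of the failure pattern and the obvious remedy (names are illustrative, not the actual DFSInputStream hedged-read code): once every located node is in ignoreNodes and refetching returns the same locations, nothing becomes eligible again unless the ignore list is cleared.

{code:java}
import java.util.List;
import java.util.Set;

class HedgedReadRetrySketch {
  // Returns a node to try, resetting the ignore list when a whole round has
  // failed, instead of sleeping until the retry budget is exhausted.
  static String pickNode(List<String> locatedNodes, Set<String> ignoredNodes,
      int maxRetries) throws InterruptedException {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      for (String node : locatedNodes) {
        if (!ignoredNodes.contains(node)) {
          return node;                       // a node not yet tried this round
        }
      }
      // Every located node failed in an earlier round. If the refetched
      // locations are unchanged, forget the earlier failures so the nodes
      // can be tried again, rather than sleeping the retry budget away.
      ignoredNodes.clear();
      Thread.sleep(1000L * (attempt + 1));   // simple backoff for the sketch
    }
    return null;                             // retries exhausted
  }
}
{code}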






[jira] [Work logged] (HDFS-15818) Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15818?focusedWorklogId=548685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548685
 ]

ASF GitHub Bot logged work on HDFS-15818:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 15:08
Start Date: 05/Feb/21 15:08
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on pull request #2679:
URL: https://github.com/apache/hadoop/pull/2679#issuecomment-774089865


   Thanks for fixing this issue. The changes look good to me. Can you fix the 
checkstyle warnings please?





Issue Time Tracking
---

Worklog Id: (was: 548685)
Time Spent: 40m  (was: 0.5h)

> Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig
> ---
>
> Key: HDFS-15818
> URL: https://issues.apache.org/jira/browse/HDFS-15818
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The current TestFsDatasetImpl.testReadLockCanBeDisabledByConfig is incorrect:
> 1) The test fails intermittently because the holder thread can acquire the lock first
> [https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2666/1/testReport/]
>  
> 2) The test passes regardless of the setting of 
> DFS_DATANODE_LOCK_READ_WRITE_ENABLED_KEY
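For reference, a minimal sketch of the property the test wants to pin down, using plain JDK locks as a stand-in for FsDatasetImpl's instrumented locks (an assumption for the sketch, not the committed test):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadLockConfigSketch {
  // With the read lock disabled, reads take the exclusive lock, so a second
  // reader must block while the first holds it; with it enabled, two readers
  // may hold the lock concurrently. Returns true iff the second reader blocked.
  static boolean secondReaderBlocks(boolean readLockEnabled) throws Exception {
    ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    final Lock readLock = readLockEnabled ? rw.readLock() : rw.writeLock();
    readLock.lock();                          // holder takes the lock first
    final boolean[] acquired = new boolean[1];
    final CountDownLatch done = new CountDownLatch(1);
    Thread second = new Thread(() -> {
      try {
        acquired[0] = readLock.tryLock(1, TimeUnit.SECONDS);
        if (acquired[0]) {
          readLock.unlock();
        }
      } catch (InterruptedException ignored) {
      } finally {
        done.countDown();
      }
    });
    second.start();
    done.await();
    readLock.unlock();
    return !acquired[0];
  }
}
{code}

Asserting secondReaderBlocks(false) and !secondReaderBlocks(true) both exercises the configuration key and avoids the original race, because the holder acquires the lock before the second thread starts.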






[jira] [Commented] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2021-02-05 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279740#comment-17279740
 ] 

Jim Brennan commented on HDFS-9243:
---

[~kihwal] analyzed this as well.  Including his comment here:
{noformat}
The test artificially invalidated a replica on a node, but before the test made 
further progress, the NN fixed the under-replication by having another node 
send the block to the same node. The test then went ahead and removed it from 
the NN's data structure (blocksmap) and called setReplication(). The NN picked 
two nodes, but one of them was the node that already has the block replica. It 
was only missing in NN's data structure. Again, this happened because the NN 
fixed the under-replication between the test deleting the replica and modifying 
the nn data structure. The replication failed with 
ReplicaAlreadyExistsException. This kind of inconsistency does not happen in 
real clusters, but even if it did, it would be fixed when the replication times 
out. The test is set to timeout before the default replication timeout, so it 
didn't have any chance to do that.
{noformat}

> TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
> -
>
> Key: HDFS-9243
> URL: https://issues.apache.org/jira/browse/HDFS-9243
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Wei-Chiu Chuang
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
> sometimes times out.
> This is happening on trunk, as can be observed in several recent Jenkins jobs
> (e.g. https://builds.apache.org/job/Hadoop-Hdfs-trunk/2423/
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2386/
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/2351/
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/472/).
> On my local Linux machine, this test case times out 6 out of 10 times. When
> it does not time out, this test takes about 20 seconds; otherwise it takes
> more than 60 seconds and then times out.
> I suspect it's a deadlock issue, as a deadlock had occurred in this test case
> in HDFS-5527 before.






[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279701#comment-17279701
 ] 

Hadoop QA commented on HDFS-15792:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 
36s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} branch-2.10 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
16s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Oracle Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Oracle Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} branch-2.10 passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
25s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green}{color} | {color:green} branch-2.10 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
51s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/461/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Oracle Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Oracle Corporation-1.7.0_95-b00 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green}{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other 

[jira] [Work logged] (HDFS-15817) Rename snapshots while marking them deleted

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15817?focusedWorklogId=548628&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548628
 ]

ASF GitHub Bot logged work on HDFS-15817:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 12:38
Start Date: 05/Feb/21 12:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2677:
URL: https://github.com/apache/hadoop/pull/2677#issuecomment-774008291


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 18s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 27s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 195m  3s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 288m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2677 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3db44fa108f6 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65857ea7c07 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/3/testReport/ |
   | Max. process+thread count | 3240 (vs. ulimit of 5500) |
   | 

[jira] [Updated] (HDFS-15822) Client retry mechanism may invalid when use hedgedRead

2021-02-05 Thread tianhang tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianhang tang updated HDFS-15822:
-
Attachment: HDFS-15822.001.patch

> Client retry mechanism may invalid when use hedgedRead
> --
>
> Key: HDFS-15822
> URL: https://issues.apache.org/jira/browse/HDFS-15822
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Major
> Attachments: HDFS-15822.001.patch
>
>
> Hedged read uses ignoreNodes to ensure that multiple requests fall on
> different nodes, but the ignoreNodes set is never cleared. So if all requests
> in the 1st round fail and the refetched locations are unchanged, the HDFS
> client will not request the nodes that are already in ignoreNodes. It just
> sleeps again and again until it reaches the retry limit, then throws an
> exception.






[jira] [Work logged] (HDFS-15817) Rename snapshots while marking them deleted

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15817?focusedWorklogId=548611&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548611
 ]

ASF GitHub Bot logged work on HDFS-15817:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 12:09
Start Date: 05/Feb/21 12:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2677:
URL: https://github.com/apache/hadoop/pull/2677#issuecomment-773994881


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   4m 20s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 13s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 49s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 56s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.  |
   | -1 :x: |  javac  |   0m 56s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.  |
   | -1 :x: |  compile  |   0m 56s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.  |
   | -1 :x: |  javac  |   0m 56s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 50s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/4/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | -1 :x: |  shadedclient  |   3m 21s |  |  patch has errors when building 
and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | -1 :x: |  findbugs  |   0m 43s | 

[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=548594&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548594
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 11:33
Start Date: 05/Feb/21 11:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#issuecomment-773978212


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |   1m 12s | 
[/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/9/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 14 new + 580 
unchanged - 14 fixed = 594 total (was 594)  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 821 unchanged - 1 
fixed = 821 total (was 822)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  12m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 207m 37s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 292m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.fs.viewfs.TestViewFsLinkFallback |
   |   | hadoop.fs.TestHDFSFileContextMainOperations |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2625 |
   

[jira] [Updated] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15792:
---
Attachment: HDFS-15792-branch-2.10.003.patch

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15792-branch-2.10.001.patch, 
> HDFS-15792-branch-2.10.002.patch, HDFS-15792-branch-2.10.003.patch, 
> HDFS-15792.001.patch, HDFS-15792.002.patch, HDFS-15792.003.patch, 
> HDFS-15792.004.patch, HDFS-15792.005.patch, HDFS-15792.addendum.001.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading failed with a ClassCastException - 
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios.
> The same issue has been reported against Java and closed as a usage issue - 
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading 
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
>   at 
> 

[jira] [Commented] (HDFS-15792) ClasscastException while loading FSImage

2021-02-05 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279626#comment-17279626
 ] 

Xiaoqiao He commented on HDFS-15792:


Hi [~prasad-acit], I uploaded 003 following your initial patch and triggered 
Yetus; let us see what it says.
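
For context on the failure mode (JDK-8173671): unsynchronized concurrent writes to a single java.util.HashMap can corrupt a hash bin while it is being treeified, which is exactly the HashMap$Node to HashMap$TreeNode cast failure in the trace below. A minimal sketch of the race, purely illustrative and not the HDFS patch itself (the class name and keys are made up); serializing the writes, e.g. a synchronized put or a concurrent map, removes it:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only, not the HDFS patch: two threads putting into one
// HashMap without synchronization, as the parallel FSImage loader threads do
// through ReferenceCountMap, may corrupt a bin during treeification and throw
// "HashMap$Node cannot be cast to HashMap$TreeNode".
public class HashMapRaceSketch {
  public static void main(String[] args) throws InterruptedException {
    Map<Integer, Integer> unsafe = new HashMap<>();          // racy
    Map<Integer, Integer> safe = new ConcurrentHashMap<>();  // one possible fix
    Runnable writer = () -> {
      for (int i = 0; i < 1_000_000; i++) {
        unsafe.put(i, i);  // may throw ClassCastException under contention
        safe.put(i, i);    // always well-defined
      }
    };
    Thread t1 = new Thread(writer);
    Thread t2 = new Thread(writer);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
  }
}
{code}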

> ClasscastException while loading FSImage
> 
>
> Key: HDFS-15792
> URL: https://issues.apache.org/jira/browse/HDFS-15792
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nn
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15792-branch-2.10.001.patch, 
> HDFS-15792-branch-2.10.002.patch, HDFS-15792-branch-2.10.003.patch, 
> HDFS-15792.001.patch, HDFS-15792.002.patch, HDFS-15792.003.patch, 
> HDFS-15792.004.patch, HDFS-15792.005.patch, HDFS-15792.addendum.001.patch, 
> image-2021-01-27-12-00-34-846.png
>
>
> FSImage loading failed with a ClassCastException - 
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to 
> java.util.HashMap$TreeNode.
> This is a usage issue with HashMap in concurrent scenarios.
> The same issue has been reported against Java and closed as a usage issue - 
> https://bugs.openjdk.java.net/browse/JDK-8173671
> 2020-12-28 11:36:26,127 | ERROR | main | An exception occurred when loading 
> INODE from fsiamge. | FSImageFormatProtobuf.java:442
> java.lang.ClassCastException: java.util.HashMap$Node cannot be cast to java.util.HashMap$TreeNode
>   at java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1835)
>   at java.util.HashMap$TreeNode.treeify(HashMap.java:1951)
>   at java.util.HashMap.treeifyBin(HashMap.java:772)
>   at java.util.HashMap.putVal(HashMap.java:644)
>   at java.util.HashMap.put(HashMap.java:612)
>   at 
> org.apache.hadoop.hdfs.util.ReferenceCountMap.put(ReferenceCountMap.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.AclStorage.addAclFeature(AclStorage.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields.addAclFeature(INodeWithAdditionalFields.java:349)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectory(FSImageFormatPBINode.java:225)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINode(FSImageFormatPBINode.java:406)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.readPBINodes(FSImageFormatPBINode.java:367)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:342)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader$2.call(FSImageFormatProtobuf.java:469)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-12-28 11:36:26,130 | ERROR | main | Failed to load image from 
> FSImageFile(file=/srv/BigData/namenode/current/fsimage_00198227480, 
> cpktTxId=00198227480) | FSImage.java:738
> java.io.IOException: java.lang.ClassCastException: java.util.HashMap$Node 
> cannot be cast to java.util.HashMap$TreeNode
>   at 
> org.apache.hadoop.io.MultipleIOException$Builder.add(MultipleIOException.java:68)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.runLoaderTasks(FSImageFormatProtobuf.java:444)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:360)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:263)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:227)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:971)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:955)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:820)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:733)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:331)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1113)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:730)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:648)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:710)
>   at 
> 

[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548370
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:14
Start Date: 05/Feb/21 10:14
Worklog Time Spent: 10m 
  Work Description: ferhui merged pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548370)
Time Spent: 1.5h  (was: 1h 20m)

> Fix a codestyle issue for TestQuotaByStorageType
> 
>
> Key: HDFS-15819
> URL: https://issues.apache.org/jira/browse/HDFS-15819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15820?focusedWorklogId=548443&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548443
 ]

ASF GitHub Bot logged work on HDFS-15820:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:22
Start Date: 05/Feb/21 10:22
Worklog Time Spent: 10m 
  Work Description: bshashikant opened a new pull request #2682:
URL: https://github.com/apache/hadoop/pull/2682


   Please see https://issues.apache.org/jira/browse/HDFS-15820.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548443)
Time Spent: 1h 10m  (was: 1h)

> Ensure snapshot root trash provisioning happens only post safe mode exit
> 
>
> Key: HDFS-15820
> URL: https://issues.apache.org/jira/browse/HDFS-15820
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Currently, on namenode startup, snapshot trash root provisioning starts 
> along with the trash emptier service, but the namenode might not be out of 
> safe mode by then. This can fail the snapshot trash dir creation, thereby 
> crashing the namenode. The idea here is to trigger snapshot trash 
> provisioning only after safe mode exit (a sketch of the intended ordering 
> follows the log excerpt below).
> {code:java}
> 2021-02-04 11:23:47,323 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring 
> NN shutdown. Shutting down immediately.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /upgrade/.Trash. Name node is in safe mode.
> The reported blocks 0 needs additional 1383 blocks to reach the threshold 
> 0.9990 of total blocks 1385.
> The number of live datanodes 0 needs an additional 1 live datanodes to reach 
> the minimum number 1.
> Safe mode will be turned off automatically once the thresholds have been 
> reached. NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:967)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:936)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1740)
> 2021-02-04 11:23:47,334 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
> create directory /upgrade/.Trash. Name node is in safe mode.
> {code}
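
A minimal sketch of the intended ordering, assuming only a namesystem handle exposing the two operations named in the stack trace above. The polling loop is purely illustrative; the actual change would hook provisioning into the post-safe-mode startup path rather than poll:

{code:java}
// Illustrative sketch, not the patch itself.
interface Namesystem {
  boolean isInSafeMode();
  void checkAndProvisionSnapshotTrashRoots();
}

final class TrashProvisioner {
  static void runAfterSafeModeExit(Namesystem fsn) throws InterruptedException {
    while (fsn.isInSafeMode()) {
      Thread.sleep(1000L);  // wait for the block/datanode thresholds
    }
    // Safe mode is off, so the mkdirs() inside provisioning can no longer
    // throw SafeModeException and crash the NameNode.
    fsn.checkAndProvisionSnapshotTrashRoots();
  }
}
{code}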



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15785) Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15785?focusedWorklogId=548430&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548430
 ]

ASF GitHub Bot logged work on HDFS-15785:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:20
Start Date: 05/Feb/21 10:20
Worklog Time Spent: 10m 
  Work Description: mithmatt commented on a change in pull request #2639:
URL: https://github.com/apache/hadoop/pull/2639#discussion_r570776436



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1557,6 +1557,17 @@
   public static final double
   DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT = 0.0;
 
+
+  public static final String
+  DFS_RESOLVE_NAMESERVICE_NEEDED =
+  "dfs.resolve.nameservice.needed";

Review comment:
   "dfs.nameservices.resolution.enabled" or 
"dfs.nameservices.resolution-enabled" may be a better property name
   
   

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1557,6 +1557,17 @@
   public static final double
   DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT = 0.0;
 
+
+  public static final String
+  DFS_RESOLVE_NAMESERVICE_NEEDED =
+  "dfs.resolve.nameservice.needed";
+  public static final boolean
+  DFS_RESOLVE_NAMESERVICE_NEEDED_DEFAULT = false;
+
+  public static final String
+  DFS_RESOLVER_IMPL =
+  "dfs.resolver.impl";

Review comment:
   May be "dfs.nameservices.resolver.impl" ?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548430)
Time Spent: 40m  (was: 0.5h)

> Datanode to support using DNS to resolve nameservices to IP addresses to get 
> list of namenodes
> --
>
> Key: HDFS-15785
> URL: https://issues.apache.org/jira/browse/HDFS-15785
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Now that HDFS supports observers, multiple standbys, and routers, the 
> namenode hosts change frequently in large deployments. We can consider 
> supporting https://issues.apache.org/jira/browse/HDFS-14118 on the datanode 
> to reduce the need to update the config frequently on all datanodes. In that 
> case, datanodes and clients can use the same set of config as well.
> Basically, we can resolve the DNS name and generate a namenode entry for 
> each IP behind it.
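
A minimal sketch of that resolution step; the class name, hostname, and port below are illustrative placeholders, not the patch's actual resolver API:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: treat every IP behind the nameservice DNS name as a
// distinct namenode.
public class NameserviceResolverSketch {
  static List<String> namenodeAddresses(String nameserviceDns, int rpcPort)
      throws UnknownHostException {
    List<String> addrs = new ArrayList<>();
    for (InetAddress ip : InetAddress.getAllByName(nameserviceDns)) {
      addrs.add(ip.getHostAddress() + ":" + rpcPort);  // one namenode per IP
    }
    return addrs;
  }

  public static void main(String[] args) throws UnknownHostException {
    // Adding or removing a namenode now only needs a DNS update, not a
    // config push to every datanode.
    System.out.println(namenodeAddresses("ns1.example.com", 8020));
  }
}
{code}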



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15817) Rename snapshots while marking them deleted

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15817?focusedWorklogId=548428&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548428
 ]

ASF GitHub Bot logged work on HDFS-15817:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:20
Start Date: 05/Feb/21 10:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2677:
URL: https://github.com/apache/hadoop/pull/2677#issuecomment-773621670


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  89m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 54s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 50s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   4m  9s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 209m 30s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 366m 46s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.tools.TestViewFileSystemOverloadSchemeWithDFSAdmin |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.namenode.snapshot.TestOrderedSnapshotDeletionGc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2677/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2677 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8829bc4f5317 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5f34271bb14 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=548335&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548335
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:11
Start Date: 05/Feb/21 10:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#issuecomment-773778344


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  14m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  5s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |   1m 11s | 
[/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/8/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 generated 14 new + 580 
unchanged - 14 fixed = 594 total (was 594)  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 820 unchanged - 1 
fixed = 820 total (was 821)  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  12m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  0s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 192m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 291m  0s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSClientExcludedNodes |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2625/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2625 |
   | Optional Tests | dupname asflicense compile javac javadoc 

[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548346
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:12
Start Date: 05/Feb/21 10:12
Worklog Time Spent: 10m 
  Work Description: maobaolong opened a new pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681


   https://issues.apache.org/jira/browse/HDFS-15819



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548346)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix a codestyle issue for TestQuotaByStorageType
> 
>
> Key: HDFS-15819
> URL: https://issues.apache.org/jira/browse/HDFS-15819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548409
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:18
Start Date: 05/Feb/21 10:18
Worklog Time Spent: 10m 
  Work Description: maobaolong commented on pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681#issuecomment-773863372


   @ferhui Thanks very much for merging this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548409)
Time Spent: 1h 40m  (was: 1.5h)

> Fix a codestyle issue for TestQuotaByStorageType
> 
>
> Key: HDFS-15819
> URL: https://issues.apache.org/jira/browse/HDFS-15819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15817) Rename snapshots while marking them deleted

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15817?focusedWorklogId=548375&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548375
 ]

ASF GitHub Bot logged work on HDFS-15817:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:14
Start Date: 05/Feb/21 10:14
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2677:
URL: https://github.com/apache/hadoop/pull/2677#issuecomment-773861987


   Appending a timestamp to the deleted snapshot name may lead to different 
names being generated before and after a namenode restart during edit log 
replay. Therefore, the idea here is to append just the snapshot id, which 
remains constant.
   
   @szetszwo , can you have a look.
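
A minimal sketch of the naming trade-off described above (illustrative only, not the actual patch): an id-based suffix is identical at the original deletion and at edit-log replay, while a wall-clock suffix taken at replay time would not be.

{code:java}
// Illustrative sketch; the method and the ".deleted." suffix format are
// made up, not the patch's actual naming scheme.
final class DeletedSnapshotNames {
  static String markDeletedName(String snapshotName, int snapshotId) {
    return snapshotName + ".deleted." + snapshotId;  // replay-stable suffix
  }

  // Anti-pattern for this use case: System.currentTimeMillis() evaluated at
  // edit-log replay differs from the value observed at the original
  // deletion, so the renamed snapshot would diverge after a restart.
}
{code}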



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548375)
Time Spent: 50m  (was: 40m)

> Rename snapshots while marking them deleted 
> 
>
> Key: HDFS-15817
> URL: https://issues.apache.org/jira/browse/HDFS-15817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> With the ordered snapshot feature turned on, a snapshot is just marked as 
> deleted but won't actually be deleted if it's not the oldest one. Since the 
> snapshot is only marked deleted, creating a new snapshot with the same name 
> as the one marked deleted will fail. To mitigate such problems, the idea 
> here is to rename the snapshot being marked deleted by appending the 
> deletion timestamp along with the snapshot id to it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548267&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548267
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:03
Start Date: 05/Feb/21 10:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681#issuecomment-773247145


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  12m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  8s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 44s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2681/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 313m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2681/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2681 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux c6784034c641 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15a1f7adfc0 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2681/1/testReport/ |
   | Max. process+thread count | 3062 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Work logged] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15820?focusedWorklogId=548251&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548251
 ]

ASF GitHub Bot logged work on HDFS-15820:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 10:00
Start Date: 05/Feb/21 10:00
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2682:
URL: https://github.com/apache/hadoop/pull/2682#discussion_r570461526



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -8531,25 +8527,37 @@ void checkAccess(String src, FsAction mode) throws 
IOException {
* Check if snapshot roots are created for all existing snapshottable
* directories. Create them if not.
*/
-  void checkAndProvisionSnapshotTrashRoots() throws IOException {
-SnapshottableDirectoryStatus[] dirStatusList = 
getSnapshottableDirListing();
-if (dirStatusList == null) {
-  return;
-}
-for (SnapshottableDirectoryStatus dirStatus : dirStatusList) {
-  String currDir = dirStatus.getFullPath().toString();
-  if (!currDir.endsWith(Path.SEPARATOR)) {
-currDir += Path.SEPARATOR;
-  }
-  String trashPath = currDir + FileSystem.TRASH_PREFIX;
-  HdfsFileStatus fileStatus = getFileInfo(trashPath, false, false, false);
-  if (fileStatus == null) {
-LOG.info("Trash doesn't exist for snapshottable directory {}. "
-+ "Creating trash at {}", currDir, trashPath);
-PermissionStatus permissionStatus = new 
PermissionStatus(getRemoteUser()
-.getShortUserName(), null, SHARED_TRASH_PERMISSION);
-mkdirs(trashPath, permissionStatus, false);
+  @Override
+  public void checkAndProvisionSnapshotTrashRoots() {
+if (isSnapshotTrashRootEnabled) {
+  try {
+SnapshottableDirectoryStatus[] dirStatusList =
+getSnapshottableDirListing();
+if (dirStatusList == null) {
+  return;
+}
+for (SnapshottableDirectoryStatus dirStatus : dirStatusList) {
+  String currDir = dirStatus.getFullPath().toString();
+  if (!currDir.endsWith(Path.SEPARATOR)) {
+currDir += Path.SEPARATOR;
+  }
+  String trashPath = currDir + FileSystem.TRASH_PREFIX;
+  HdfsFileStatus fileStatus = getFileInfo(trashPath, false, false, 
false);
+  if (fileStatus == null) {
+LOG.info("Trash doesn't exist for snapshottable directory {}. " + 
"Creating trash at {}", currDir, trashPath);
+PermissionStatus permissionStatus =
+new PermissionStatus(getRemoteUser().getShortUserName(), null,
+SHARED_TRASH_PERMISSION);
+mkdirs(trashPath, permissionStatus, false);
+  }
+}
+  } catch (IOException e) {
+final String msg =
+"Could not provision Trash directory for existing "
++ "snapshottable directories. Exiting Namenode.";
+ExitUtil.terminate(1, msg);

Review comment:
   Pro: Terminating the NN in this case is a sure way of uncovering an 
unexpected problem instead of hiding it in the logs.
   
   Con: I wonder if we really should terminate the NN when the Trash directory 
fails to be provisioned. We could just log a warning message?
   
   Either way, I'm fine with both. Just a thought.
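
A sketch of the two options side by side; provisionTrashRoots() and the failFast flag are hypothetical stand-ins for the mkdirs() loop and the exit call in the hunk above, not the patch's API:

{code:java}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch of the trade-off discussed in this review comment.
final class ProvisioningFailurePolicy {
  private static final Logger LOG =
      LoggerFactory.getLogger(ProvisioningFailurePolicy.class);

  static void provision(boolean failFast) {
    try {
      provisionTrashRoots();
    } catch (IOException e) {
      if (failFast) {
        // Option A (patch as posted): fail fast so a misconfiguration is
        // surfaced, e.g. ExitUtil.terminate(1, msg) in the real code.
        throw new RuntimeException("Trash provisioning failed", e);
      }
      // Option B (the suggestion above): degrade gracefully, keep the NN up.
      LOG.warn("Could not provision Trash for snapshottable directories", e);
    }
  }

  private static void provisionTrashRoots() throws IOException {
    // placeholder for the getSnapshottableDirListing()/mkdirs() loop
  }
}
{code}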

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
##
@@ -2524,7 +2524,7 @@ public void testNameNodeCreateSnapshotTrashRootOnStartup()
 MiniDFSCluster cluster =
 new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
 try {
-  final DistributedFileSystem dfs = cluster.getFileSystem();
+ DistributedFileSystem dfs = cluster.getFileSystem();

Review comment:
   nit: add one more space before this line for alignment.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548251)
Time Spent: 1h  (was: 50m)

> Ensure snapshot root trash provisioning happens only post safe mode exit
> 
>
> Key: HDFS-15820
> URL: https://issues.apache.org/jira/browse/HDFS-15820
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>

[jira] [Commented] (HDFS-15805) Hadoop prints sensitive Cookie information.

2021-02-05 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17279493#comment-17279493
 ] 

Renukaprasad C commented on HDFS-15805:
---

Thank you [~tasanuma]; sure, I will follow the same if required.

> Hadoop prints sensitive Cookie information.
> ---
>
> Key: HDFS-15805
> URL: https://issues.apache.org/jira/browse/HDFS-15805
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-15805.001.patch
>
>
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.AuthCookieHandler#setAuthCookie
>  - prints cookie information in the log. Any sensitive information in cookies 
> will be logged, which needs to be avoided.
> LOG.trace("Setting token value to {} ({})", authCookie, oldCookie);



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15818) Fix TestFsDatasetImpl.testReadLockCanBeDisabledByConfig

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15818?focusedWorklogId=548200&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548200
 ]

ASF GitHub Bot logged work on HDFS-15818:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 09:55
Start Date: 05/Feb/21 09:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2679:
URL: https://github.com/apache/hadoop/pull/2679#issuecomment-773148937


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 19s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 15s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2679/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 41 unchanged - 
0 fixed = 43 total (was 41)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~20.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  7s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 195m 42s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2679/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 284m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2679/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2679 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4af8bbd0fc69 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15a1f7adfc0 |
   | Default Java | Private 

[jira] [Work logged] (HDFS-15820) Ensure snapshot root trash provisioning happens only post safe mode exit

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15820?focusedWorklogId=548179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548179
 ]

ASF GitHub Bot logged work on HDFS-15820:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 09:53
Start Date: 05/Feb/21 09:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2682:
URL: https://github.com/apache/hadoop/pull/2682#issuecomment-773619671







This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548179)
Time Spent: 50m  (was: 40m)

> Ensure snapshot root trash provisioning happens only post safe mode exit
> 
>
> Key: HDFS-15820
> URL: https://issues.apache.org/jira/browse/HDFS-15820
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, on namenode startup, snapshot trash root provisioning starts 
> along with the trash emptier service, but the namenode might not be out of 
> safe mode by then. This can fail the snapshot trash dir creation, thereby 
> crashing the namenode. The idea here is to trigger snapshot trash 
> provisioning only after safe mode exit.
> {code:java}
> 2021-02-04 11:23:47,323 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Error encountered requiring 
> NN shutdown. Shutting down immediately.
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /upgrade/.Trash. Name node is in safe mode.
> The reported blocks 0 needs additional 1383 blocks to reach the threshold 
> 0.9990 of total blocks 1385.
> The number of live datanodes 0 needs an additional 1 live datanodes to reach 
> the minimum number 1.
> Safe mode will be turned off automatically once the thresholds have been 
> reached. NamenodeHostName:quasar-brabeg-5.quasar-brabeg.root.hwx.site
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1542)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1529)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3288)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAndProvisionSnapshotTrashRoots(FSNamesystem.java:8269)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1939)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:967)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:936)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1740)
> 2021-02-04 11:23:47,334 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot 
> create directory /upgrade/.Trash. Name node is in safe mode.
> {code}
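
A minimal sketch of the proposed ordering, assuming a helper thread inside the namenode: poll FSNamesystem#isInSafeMode() and run checkAndProvisionSnapshotTrashRoots (the method visible in the stack trace above) only once safe mode has been exited. The thread setup and polling interval are illustrative assumptions, not the actual patch.

{code:java}
// Hypothetical sketch: defer snapshot trash provisioning until safe mode exit.
private void provisionTrashRootsAfterSafeMode(final FSNamesystem fsn) {
  Thread t = new Thread(() -> {
    try {
      // Creating directories while in safe mode throws SafeModeException,
      // so wait until the namenode has left safe mode.
      while (fsn.isInSafeMode()) {
        Thread.sleep(1000);
      }
      fsn.checkAndProvisionSnapshotTrashRoots();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }, "SnapshotTrashRootProvisioner");
  t.setDaemon(true);
  t.start();
}
{code}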



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15683) Allow configuring DISK/ARCHIVE capacity for individual volumes

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15683?focusedWorklogId=548171=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548171
 ]

ASF GitHub Bot logged work on HDFS-15683:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 09:52
Start Date: 05/Feb/21 09:52
Worklog Time Spent: 10m 
  Work Description: Jing9 commented on a change in pull request #2625:
URL: https://github.com/apache/hadoop/pull/2625#discussion_r57126



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##
@@ -815,6 +815,25 @@ public IOException call() {
 String.format("FAILED TO ADD: %s: %s%n",
 volume, ioe.getMessage()));
 LOG.error("Failed to add volume: {}", volume, ioe);
+/**
+ * TODO: Some cases are not supported yet with
+ *   same-disk-tiering on. For example, when replacing a
+ *   storage directory on the same mount, we have to check
+ *   whether the same storage type already exists on the
+ *   mount; in that case we need to remove the existing
+ *   volume first, then add the new one.
+ *   Also, we will need to adjust the new capacity ratio
+ *   when refreshVolume runs in the future.
+ */
+if (ioe.getMessage()

Review comment:
   It may be better to check whether there is a conflict with the 
same-disk-tiering feature when we first load the refreshVolume configuration, 
i.e., we can do some verification on changedVolumes, as in the sketch below.
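
A rough sketch of that verification, assuming it runs over the changedVolumes list when the refreshVolumes configuration is first loaded; the method name, the one-directory-per-mount assumption, and the exact conflict rule are illustrative, not from the patch:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.hadoop.hdfs.server.datanode.StorageLocation;

// Fail fast at configuration-load time instead of matching on the
// IOException message after a volume add has already failed.
static void validateChangedVolumes(List<StorageLocation> changedVolumes)
    throws IOException {
  Set<String> seen = new HashSet<>();
  for (StorageLocation loc : changedVolumes) {
    // One mount must not carry two volumes of the same storage type
    // when same-disk-tiering is enabled.
    String key = loc.getUri().getPath() + ":" + loc.getStorageType();
    if (!seen.add(key)) {
      throw new IOException("Duplicate storage type " + loc.getStorageType()
          + " on mount " + loc.getUri().getPath()
          + "; remove the existing volume before adding the new one.");
    }
  }
}
```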





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548171)
Time Spent: 2h 50m  (was: 2h 40m)

> Allow configuring DISK/ARCHIVE capacity for individual volumes
> --
>
> Key: HDFS-15683
> URL: https://issues.apache.org/jira/browse/HDFS-15683
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This is a follow-up task for https://issues.apache.org/jira/browse/HDFS-15548
> When datanode disks are not uniform, we should allow admins to configure 
> capacity for individual volumes on top of the default one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548160=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548160
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 09:51
Start Date: 05/Feb/21 09:51
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681#issuecomment-773731300


   @maobaolong Thanks for the fix, merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548160)
Time Spent: 1h  (was: 50m)

> Fix a codestyle issue for TestQuotaByStorageType
> 
>
> Key: HDFS-15819
> URL: https://issues.apache.org/jira/browse/HDFS-15819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15761) Dead NORMAL DN shouldn't transit to DECOMMISSIONED immediately

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15761?focusedWorklogId=548113=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548113
 ]

ASF GitHub Bot logged work on HDFS-15761:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 09:46
Start Date: 05/Feb/21 09:46
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on a change in pull request #2588:
URL: https://github.com/apache/hadoop/pull/2588#discussion_r570072022



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
##
@@ -383,30 +383,70 @@ public void testDecommissionStatusAfterDNRestart() throws Exception {
 
   /**
* Verify the support for decommissioning a datanode that is already dead.
-   * Under this scenario the datanode should immediately be marked as
-   * DECOMMISSIONED
+   * Under this scenario the datanode should be marked as
+   * DECOMMISSION_IN_PROGRESS first. When pendingReplicationBlocksCount and
+   * underReplicatedBlocksCount are both 0, it becomes DECOMMISSIONED.
*/
   @Test(timeout=120000)
   public void testDecommissionDeadDN() throws Exception {
 Logger log = Logger.getLogger(DatanodeAdminManager.class);
 log.setLevel(Level.DEBUG);
-DatanodeID dnID = cluster.getDataNodes().get(0).getDatanodeId();
-String dnName = dnID.getXferAddr();
-DataNodeProperties stoppedDN = cluster.stopDataNode(0);
-DFSTestUtil.waitForDatanodeState(cluster, dnID.getDatanodeUuid(),
-false, 30000);
+
+DistributedFileSystem fileSystem = cluster.getFileSystem();
+
+// Create a file with one block. That block has one replica.
+Path f = new Path("decommission.dat");
+DFSTestUtil.createFile(fileSystem, f, fileSize, fileSize, fileSize,
+(short)1, seed);
+
+// Find the DN that owns the only replica.
+RemoteIterator<LocatedFileStatus> fileList =
+fileSystem.listLocatedStatus(f);
+BlockLocation[] blockLocations = fileList.next().getBlockLocations();
+String[] dnNames = blockLocations[0].getNames();

Review comment:
   As the target DN is one host, we may not need the String array and 
for-loop.
   ```java
   String dnName = blockLocations[0].getNames()[0];
   ```

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
##
@@ -383,30 +383,70 @@ public void testDecommissionStatusAfterDNRestart() throws Exception {
 
   /**
* Verify the support for decommissioning a datanode that is already dead.
-   * Under this scenario the datanode should immediately be marked as
-   * DECOMMISSIONED
+   * Under this scenario the datanode should be marked as
+   * DECOMMISSION_IN_PROGRESS first. When pendingReplicationBlocksCount and
+   * underReplicatedBlocksCount are both 0, it becomes DECOMMISSIONED.
*/
   @Test(timeout=120000)
   public void testDecommissionDeadDN() throws Exception {
 Logger log = Logger.getLogger(DatanodeAdminManager.class);
 log.setLevel(Level.DEBUG);
-DatanodeID dnID = cluster.getDataNodes().get(0).getDatanodeId();
-String dnName = dnID.getXferAddr();
-DataNodeProperties stoppedDN = cluster.stopDataNode(0);
-DFSTestUtil.waitForDatanodeState(cluster, dnID.getDatanodeUuid(),
-false, 30000);
+
+DistributedFileSystem fileSystem = cluster.getFileSystem();
+
+// Create a file with one block. That block has one replica.
+Path f = new Path("decommission.dat");
+DFSTestUtil.createFile(fileSystem, f, fileSize, fileSize, fileSize,
+(short)1, seed);
+
+// Find the DN that owns the only replica.
+RemoteIterator<LocatedFileStatus> fileList =
+fileSystem.listLocatedStatus(f);
+BlockLocation[] blockLocations = fileList.next().getBlockLocations();
+String[] dnNames = blockLocations[0].getNames();
+
+// Stopping the DN leaves 1 block under-replicated
+DataNodeProperties[] stoppedDNs = new DataNodeProperties[dnNames.length];
+for (int i = 0; i < dnNames.length; i++) {
+  stoppedDNs[i] = cluster.stopDataNode(dnNames[i]);
+}
+
 FSNamesystem fsn = cluster.getNamesystem();
 final DatanodeManager dm = fsn.getBlockManager().getDatanodeManager();
-DatanodeDescriptor dnDescriptor = dm.getDatanode(dnID);
-decommissionNode(dnName);
+final List<DatanodeDescriptor> dead = new ArrayList<>();
+while (true) {
+  dm.fetchDatanodes(null, dead, false);
+  if (dead.size() == 3) {

Review comment:
   Why wait for `dead.size()==3`? They all seem to be on the same host.
   
   Also, it would be better to use `GenericTestUtils.waitFor` instead of the 
`while(true)` loop, as in the sketch below.
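
For instance, a polling variant along these lines, assuming the test keeps the `dm` DatanodeManager from the diff and imports `org.apache.hadoop.test.GenericTestUtils` (the interval and timeout values are arbitrary):

```java
// Poll until all 3 datanode entries are reported dead instead of while(true).
GenericTestUtils.waitFor(() -> {
  final List<DatanodeDescriptor> deadNodes = new ArrayList<>();
  dm.fetchDatanodes(null, deadNodes, false);
  return deadNodes.size() == 3;
}, 500, 30000); // check every 500 ms, fail the test after 30 seconds
```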

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDecommissioningStatus.java
##
@@ -453,10 +493,10 @@ public void testDecommissionLosingData() throws Exception {
 

[jira] [Assigned] (HDFS-15822) Client retry mechanism may invalid when use hedgedRead

2021-02-05 Thread tianhang tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tianhang tang reassigned HDFS-15822:


Assignee: tianhang tang

> Client retry mechanism may invalid when use hedgedRead
> --
>
> Key: HDFS-15822
> URL: https://issues.apache.org/jira/browse/HDFS-15822
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: tianhang tang
>Assignee: tianhang tang
>Priority: Major
>
> Hedged read uses ignoreNodes to ensure that multiple requests fall on 
> different nodes, but ignoreNodes is never cleared. So if all requests in the 
> first round fail and the refetched locations are unchanged, the HDFS client 
> will not retry the nodes that are already in ignoreNodes; it just sleeps 
> repeatedly until the retry limit is reached and then throws an exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15822) Client retry mechanism may invalid when use hedgedRead

2021-02-05 Thread tianhang tang (Jira)
tianhang tang created HDFS-15822:


 Summary: Client retry mechanism may invalid when use hedgedRead
 Key: HDFS-15822
 URL: https://issues.apache.org/jira/browse/HDFS-15822
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: tianhang tang


Hedged read uses ignoreNodes to ensure that multiple requests fall on different 
nodes, but ignoreNodes is never cleared. So if all requests in the first round 
fail and the refetched locations are unchanged, the HDFS client will not retry 
the nodes that are already in ignoreNodes; it just sleeps repeatedly until the 
retry limit is reached and then throws an exception.
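
A self-contained sketch of the failure mode described above; the class and method names are invented for illustration, and this is not the actual DFSInputStream code:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class HedgedReadRetrySketch {
  // Returns the first location not in ignoreNodes, or null if all are ignored.
  static String chooseNode(List<String> locations, Set<String> ignoreNodes) {
    for (String node : locations) {
      if (!ignoreNodes.contains(node)) {
        return node;
      }
    }
    return null;
  }

  static void read(List<String> locations, int maxRetries) {
    Set<String> ignoreNodes = new HashSet<>();
    for (int retry = 0; retry < maxRetries; retry++) {
      String node = chooseNode(locations, ignoreNodes);
      if (node == null) {
        // The reported bug: ignoreNodes still holds every location, so each
        // remaining retry is wasted. Clearing it once the locations have been
        // refetched would make the same nodes eligible again:
        // ignoreNodes.clear();
        continue;
      }
      ignoreNodes.add(node);
      // ... issue the hedged read against 'node' here ...
    }
  }
}
{code}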



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15785) Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15785?focusedWorklogId=548070=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548070
 ]

ASF GitHub Bot logged work on HDFS-15785:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 07:48
Start Date: 05/Feb/21 07:48
Worklog Time Spent: 10m 
  Work Description: mithmatt commented on a change in pull request #2639:
URL: https://github.com/apache/hadoop/pull/2639#discussion_r570776436



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1557,6 +1557,17 @@
   public static final double
   DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT = 0.0;
 
+
+  public static final String
+  DFS_RESOLVE_NAMESERVICE_NEEDED =
+  "dfs.resolve.nameservice.needed";

Review comment:
   "dfs.nameservices.resolution.enabled" or 
"dfs.nameservices.resolution-enabled" may be a better property name
   
   

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
##
@@ -1557,6 +1557,17 @@
   public static final double
   DFS_DATANODE_RESERVE_FOR_ARCHIVE_DEFAULT_PERCENTAGE_DEFAULT = 0.0;
 
+
+  public static final String
+  DFS_RESOLVE_NAMESERVICE_NEEDED =
+  "dfs.resolve.nameservice.needed";
+  public static final boolean
+  DFS_RESOLVE_NAMESERVICE_NEEDED_DEFAULT = false;
+
+  public static final String
+  DFS_RESOLVER_IMPL =
+  "dfs.resolver.impl";

Review comment:
   May be "dfs.nameservices.resolver.impl" ?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548070)
Time Spent: 0.5h  (was: 20m)

> Datanode to support using DNS to resolve nameservices to IP addresses to get 
> list of namenodes
> --
>
> Key: HDFS-15785
> URL: https://issues.apache.org/jira/browse/HDFS-15785
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, as HDFS supports observers, multiple standbys, and routers, the 
> namenode hosts change frequently in large deployments. We can consider 
> supporting https://issues.apache.org/jira/browse/HDFS-14118 on the datanode to 
> reduce the need to update the config frequently on all datanodes. In that 
> case, datanodes and clients can use the same set of configs as well.
> Basically, we can resolve the DNS name and generate a namenode entry for each 
> IP address behind it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2021-02-05 Thread Hui Fei (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17279473#comment-17279473
 ] 

Hui Fei commented on HDFS-14703:


[~shv] Great feature, looking forward to it. Is it in progress?

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, 
> 002-partitioned-inodeMap-POC.tar.gz, NameNode Fine-Grained Locking.pdf, 
> NameNode Fine-Grained Locking.pdf
>
>
> We target enabling fine-grained locking by splitting the in-memory namespace 
> into multiple partitions, each with a separate lock. This is intended to 
> improve the performance of NameNode write operations.
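
As a generic illustration of the partitioning idea (plain lock striping, not the HDFS-14703 design itself), each partition can own an independent read-write lock selected by hashing the inode id:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class PartitionedNamespaceLock {
  private final ReentrantReadWriteLock[] locks;

  PartitionedNamespaceLock(int partitions) {
    locks = new ReentrantReadWriteLock[partitions];
    for (int i = 0; i < partitions; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  // Operations on inodes in different partitions no longer serialize
  // on a single namesystem lock.
  ReentrantReadWriteLock lockFor(long inodeId) {
    return locks[(int) Math.floorMod(inodeId, (long) locks.length)];
  }
}
{code}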



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15819) Fix a codestyle issue for TestQuotaByStorageType

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15819?focusedWorklogId=548075=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548075
 ]

ASF GitHub Bot logged work on HDFS-15819:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 07:56
Start Date: 05/Feb/21 07:56
Worklog Time Spent: 10m 
  Work Description: maobaolong commented on pull request #2681:
URL: https://github.com/apache/hadoop/pull/2681#issuecomment-773863372


   @ferhui Thanks very much for merging this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548075)
Time Spent: 50m  (was: 40m)

> Fix a codestyle issue for TestQuotaByStorageType
> 
>
> Key: HDFS-15819
> URL: https://issues.apache.org/jira/browse/HDFS-15819
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15817) Rename snapshots while marking them deleted

2021-02-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15817?focusedWorklogId=548074=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-548074
 ]

ASF GitHub Bot logged work on HDFS-15817:
-

Author: ASF GitHub Bot
Created on: 05/Feb/21 07:53
Start Date: 05/Feb/21 07:53
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2677:
URL: https://github.com/apache/hadoop/pull/2677#issuecomment-773861987


   Appending a timestamp to the deleted snapshot name may lead to different 
names being generated before and after a restart of the namenode during edit 
log replay. Therefore, the idea here is to just append the snapshot id, which 
will remain constant.
   
   @szetszwo , can you have a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 548074)
Time Spent: 40m  (was: 0.5h)

> Rename snapshots while marking them deleted 
> 
>
> Key: HDFS-15817
> URL: https://issues.apache.org/jira/browse/HDFS-15817
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> With the ordered snapshot feature turned on, a snapshot will just be marked as 
> deleted but won't actually be deleted if it is not the oldest one. Since the 
> snapshot is only marked deleted, creating a new snapshot with the same name as 
> the one marked deleted will fail. To mitigate such problems, the idea here is 
> to rename the snapshot being marked as deleted by appending the deletion 
> timestamp along with the snapshot id to it.
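
A minimal sketch of such a rename, assuming (per the PR comment above) that only the snapshot id is appended, since it stays constant across restarts and edit log replay; the exact name format is an illustrative assumption:

{code:java}
// Deterministic rename for a snapshot marked as deleted: the snapshot id is
// stable across namenode restarts, unlike a deletion timestamp.
static String deletedSnapshotName(String snapshotName, int snapshotId) {
  return snapshotName + ".deleted." + snapshotId;
}
{code}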



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org