[jira] [Work logged] (HDFS-16004) startLogSegment and journal in BackupNode lack Permission check.

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16004?focusedWorklogId=591358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591358
 ]

ASF GitHub Bot logged work on HDFS-16004:
-

Author: ASF GitHub Bot
Created on: 30/Apr/21 03:33
Start Date: 30/Apr/21 03:33
Worklog Time Spent: 10m 
  Work Description: lujiefsi opened a new pull request #2966:
URL: https://github.com/apache/hadoop/pull/2966


   I have some doubts about configuring secure HDFS. I know we have Service Level 
Authorization for protocols such as NamenodeProtocol, DatanodeProtocol, and so on.
   But after reading the code in HDFSPolicyProvider, I could not find such 
authorization for JournalProtocol. If we do have it, how can I configure it?

   Besides, even though NamenodeProtocol has Service Level Authorization, its 
methods still have a permission check. Take startCheckpoint in NameNodeRpcServer, 
which implements NamenodeProtocol, for example:

   public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
       throws IOException {
     String operationName = "startCheckpoint";
     checkNNStartup();
     namesystem.checkSuperuserPrivilege(operationName);
     ...

   I found that the methods in BackupNodeRpcServer, which implements 
JournalProtocol, lack such a permission check. See below:


   public void startLogSegment(JournalInfo journalInfo, long epoch,
       long txid) throws IOException {
     namesystem.checkOperation(OperationCategory.JOURNAL);
     verifyJournalRequest(journalInfo);
     getBNImage().namenodeStartedLogSegment(txid);
   }

   @Override
   public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
       int numTxns, byte[] records) throws IOException {
     namesystem.checkOperation(OperationCategory.JOURNAL);
     verifyJournalRequest(journalInfo);
     getBNImage().journal(firstTxId, numTxns, records);
   }

   Do we need to add a permission check for them?
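
   For illustration only, here is a minimal sketch of what such a check might look 
like, mirroring the startCheckpoint pattern above. This is just an assumption about 
the shape of a fix, not necessarily the change proposed in this pull request:

   @Override
   public void startLogSegment(JournalInfo journalInfo, long epoch,
       long txid) throws IOException {
     namesystem.checkOperation(OperationCategory.JOURNAL);
     // Hypothetical addition: the same superuser check used by
     // NameNodeRpcServer#startCheckpoint, assuming the BackupNode's
     // namesystem exposes checkSuperuserPrivilege(String).
     namesystem.checkSuperuserPrivilege("startLogSegment");
     verifyJournalRequest(journalInfo);
     getBNImage().namenodeStartedLogSegment(txid);
   }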

   Please point out my mistakes if I am wrong or have missed something.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591358)
Remaining Estimate: 0h
Time Spent: 10m

> startLogSegment and journal in BackupNode lack Permission check.
> 
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service Level 
> Authorization for protocols such as NamenodeProtocol, DatanodeProtocol, and so on.
> But after reading the code in HDFSPolicyProvider, I could not find such 
> authorization for JournalProtocol. If we do have it, how can I configure it?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its methods 
> still have a permission check. Take startCheckpoint in NameNodeRpcServer, which 
> implements NamenodeProtocol, for example:
>  
> public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>     throws IOException {
>   String operationName = "startCheckpoint";
>   checkNNStartup();
>   namesystem.checkSuperuserPrivilege(operationName);
>   ...
>  
> I found that the methods in BackupNodeRpcServer, which implements JournalProtocol, 
> lack such a permission check. See below:
>  
> public void startLogSegment(JournalInfo journalInfo, long epoch,
>     long txid) throws IOException {
>   namesystem.checkOperation(OperationCategory.JOURNAL);
>   verifyJournalRequest(journalInfo);
>   getBNImage().namenodeStartedLogSegment(txid);
> }
>  
> @Override
> public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>     int numTxns, byte[] records) throws IOException {
>   namesystem.checkOperation(OperationCategory.JOURNAL);
>   verifyJournalRequest(journalInfo);
>   getBNImage().journal(firstTxId, numTxns, records);
> }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or have missed something.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Updated] (HDFS-16004) startLogSegment and journal in BackupNode lack Permission check.

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16004:
--
Labels: pull-request-available  (was: )

> startLogSegment and journal in BackupNode lack Permission check.
> 
>
> Key: HDFS-16004
> URL: https://issues.apache.org/jira/browse/HDFS-16004
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lujie
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I have some doubts about configuring secure HDFS. I know we have Service Level 
> Authorization for protocols such as NamenodeProtocol, DatanodeProtocol, and so on.
> But after reading the code in HDFSPolicyProvider, I could not find such 
> authorization for JournalProtocol. If we do have it, how can I configure it?
>  
> Besides, even though NamenodeProtocol has Service Level Authorization, its methods 
> still have a permission check. Take startCheckpoint in NameNodeRpcServer, which 
> implements NamenodeProtocol, for example:
>  
> public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
>     throws IOException {
>   String operationName = "startCheckpoint";
>   checkNNStartup();
>   namesystem.checkSuperuserPrivilege(operationName);
>   ...
>  
> I found that the methods in BackupNodeRpcServer, which implements JournalProtocol, 
> lack such a permission check. See below:
>  
> public void startLogSegment(JournalInfo journalInfo, long epoch,
>     long txid) throws IOException {
>   namesystem.checkOperation(OperationCategory.JOURNAL);
>   verifyJournalRequest(journalInfo);
>   getBNImage().namenodeStartedLogSegment(txid);
> }
>  
> @Override
> public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
>     int numTxns, byte[] records) throws IOException {
>   namesystem.checkOperation(OperationCategory.JOURNAL);
>   verifyJournalRequest(journalInfo);
>   getBNImage().journal(firstTxId, numTxns, records);
> }
>  
> Do we need to add a permission check for them?
>  
> Please point out my mistakes if I am wrong or have missed something.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16004) startLogSegment and journal in BackupNode lack Permission check.

2021-04-29 Thread lujie (Jira)
lujie created HDFS-16004:


 Summary: startLogSegment and journal in BackupNode lack Permission 
check.
 Key: HDFS-16004
 URL: https://issues.apache.org/jira/browse/HDFS-16004
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: lujie


I have some doubts about configuring secure HDFS. I know we have Service Level 
Authorization for protocols such as NamenodeProtocol, DatanodeProtocol, and so on.
But after reading the code in HDFSPolicyProvider, I could not find such 
authorization for JournalProtocol. If we do have it, how can I configure it?
 
Besides, even though NamenodeProtocol has Service Level Authorization, its methods 
still have a permission check. Take startCheckpoint in NameNodeRpcServer, which 
implements NamenodeProtocol, for example:
 
public NamenodeCommand startCheckpoint(NamenodeRegistration registration)
    throws IOException {
  String operationName = "startCheckpoint";
  checkNNStartup();
  namesystem.checkSuperuserPrivilege(operationName);
  ...
 
I found that the methods in BackupNodeRpcServer, which implements JournalProtocol, 
lack such a permission check. See below:
 
public void startLogSegment(JournalInfo journalInfo, long epoch,
    long txid) throws IOException {
  namesystem.checkOperation(OperationCategory.JOURNAL);
  verifyJournalRequest(journalInfo);
  getBNImage().namenodeStartedLogSegment(txid);
}
 
@Override
public void journal(JournalInfo journalInfo, long epoch, long firstTxId,
    int numTxns, byte[] records) throws IOException {
  namesystem.checkOperation(OperationCategory.JOURNAL);
  verifyJournalRequest(journalInfo);
  getBNImage().journal(firstTxId, numTxns, records);
}
 
Do we need to add a permission check for them?
 
Please point out my mistakes if I am wrong or have missed something.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16000) HDFS : Rename performance optimization

2021-04-29 Thread zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17337014#comment-17337014
 ] 

zhu commented on HDFS-16000:


[~hexiaoqiao] Thank you for your comments and suggestions. This week I will 
resolve these warnings and add tests.

> HDFS : Rename performance optimization
> --
>
> Key: HDFS-16000
> URL: https://issues.apache.org/jira/browse/HDFS-16000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.4, 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: 20210428-143238.svg, 20210428-171635-lambda.svg, 
> HDFS-16000.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Renaming a large directory takes a long time. For example, moving a directory 
> with 10 million (1000W) entries takes about 40 seconds. When a large amount of 
> data is deleted to the trash, such a large-directory move happens when the trash 
> checkpoint runs. The user may also trigger a large-directory move directly, which 
> causes the NameNode to hold its lock for too long and be killed by ZKFC. The 
> flame graph shows that most of the time is spent creating EnumCounters objects.
> h3. I think the following two points can improve the efficiency of rename 
> execution
> h3. QuotaCount calculation optimization:
>  * Create one QuotaCounts object when computing a directory's quota count and 
> pass it to each subsequent calculation function as a parameter, so that a new 
> EnumCounters object is not created for every calculation (a toy sketch of this 
> idea follows below).
>  * In addition, the flame graph shows that updating QuotaCounts through a lambda 
> takes longer than a plain method call, so the plain method is used to update the 
> counts.
> h3. Rename logic optimization:
>  * Today, regardless of whether the source and target directories have quotas, 
> the quota count is calculated three times: first to check whether the moved 
> directory exceeds the target directory's quota, second to compute the moved 
> directory's quota so the source directory's usage can be updated, and third to 
> compute it again to update the target directory's usage.
>  * Some of these three calculations are unnecessary. For example, if no parent 
> directory of either the source or the target has a quota configured, there is no 
> need to compute the quotaCount at all. Even when both the source and the target 
> use quotas, three calculations are not needed: the first and the third compute 
> the same thing, so it only needs to be done once.
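
To make the first point concrete, here is a toy sketch of threading a single 
accumulator through a recursive count instead of allocating a counter object per 
subtree. The class and field names are hypothetical stand-ins for 
QuotaCounts/EnumCounters and the INode tree, not the actual HDFS code or the 
HDFS-16000 patch:

    public final class QuotaCountSketch {
      /** Hypothetical stand-in for QuotaCounts/EnumCounters. */
      static final class Counts {
        long nsCount;  // namespace (file/directory) count
        long dsCount;  // disk space consumed
      }

      /** Hypothetical stand-in for a directory tree node. */
      static final class Node {
        long fileSize;
        java.util.List<Node> children = new java.util.ArrayList<>();

        // Allocation-heavy style: a new Counts per subtree, merged upward.
        Counts computeAllocating() {
          Counts c = new Counts();
          c.nsCount = 1;
          c.dsCount = fileSize;
          for (Node child : children) {
            Counts sub = child.computeAllocating();
            c.nsCount += sub.nsCount;
            c.dsCount += sub.dsCount;
          }
          return c;
        }

        // Optimized style: the caller's accumulator is passed down as a
        // parameter, so the whole traversal allocates one Counts object.
        void computeInto(Counts acc) {
          acc.nsCount++;
          acc.dsCount += fileSize;
          for (Node child : children) {
            child.computeInto(acc);
          }
        }
      }
    }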



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591320&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591320
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 30/Apr/21 00:19
Start Date: 30/Apr/21 00:19
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829712363


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m  7s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   4m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 57s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  21m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  6s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   5m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 
627 total (was 624)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |  11m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 249m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m  7s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  15m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 411m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestReencryption |
   |   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
   |   | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
   |   | hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives |
   |   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
   |   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
   |   | hadoop.hdfs.server.namenode.TestNetworkTopologyServlet |
   |   | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | 
hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocksWithRandomECPolicy |
   |   | hadoop.cli.TestHDFSCLI |
   |   | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithHA |
   |   | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591323&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591323
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 30/Apr/21 00:41
Start Date: 30/Apr/21 00:41
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623513249



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
##
@@ -1527,34 +1535,49 @@ public Response delete(
   @QueryParam(RecursiveParam.NAME) @DefaultValue(RecursiveParam.DEFAULT)
   final RecursiveParam recursive,
   @QueryParam(SnapshotNameParam.NAME) 
@DefaultValue(SnapshotNameParam.DEFAULT)
-  final SnapshotNameParam snapshotName
+  final SnapshotNameParam snapshotName,
+  @QueryParam(DeleteSkipTrashParam.NAME)
+  @DefaultValue(DeleteSkipTrashParam.DEFAULT)
+  final DeleteSkipTrashParam skiptrash
   ) throws IOException, InterruptedException {
 
-init(ugi, delegation, username, doAsUser, path, op, recursive, 
snapshotName);
+init(ugi, delegation, username, doAsUser, path, op, recursive,
+snapshotName, skiptrash);
 
-return doAs(ugi, new PrivilegedExceptionAction<Response>() {
-  @Override
-  public Response run() throws IOException {
-  return delete(ugi, delegation, username, doAsUser,
-  path.getAbsolutePath(), op, recursive, snapshotName);
-  }
-});
+return doAs(ugi, () -> delete(
+path.getAbsolutePath(), op, recursive, snapshotName, skiptrash));
   }
 
   protected Response delete(
-  final UserGroupInformation ugi,
-  final DelegationParam delegation,
-  final UserParam username,
-  final DoAsParam doAsUser,
   final String fullpath,
   final DeleteOpParam op,
   final RecursiveParam recursive,
-  final SnapshotNameParam snapshotName
-  ) throws IOException {
+  final SnapshotNameParam snapshotName,
+  final DeleteSkipTrashParam skipTrash) throws IOException {
 final ClientProtocol cp = getRpcClientProtocol();
 
 switch(op.getValue()) {
 case DELETE: {
+  Configuration conf =
+  (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
+  long trashInterval =
+  conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+  if (trashInterval > 0 && !skipTrash.getValue()) {
+LOG.info("{} is {} , trying to archive {} instead of removing",
+FS_TRASH_INTERVAL_KEY, trashInterval, fullpath);
+org.apache.hadoop.fs.Path path =
+new org.apache.hadoop.fs.Path(fullpath);
+boolean movedToTrash = Trash.moveToAppropriateTrash(
+FileSystem.get(conf), path, conf);

Review comment:
   This could lead to an OOM. We should not create a FileSystem object inside the 
NameNode.
   See https://issues.apache.org/jira/browse/HDFS-15052 for a similar problem.
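
For context, a rough, self-contained illustration of why this is a concern, 
assuming the default FileSystem cache semantics (this is my own sketch, not the 
HDFS-15982 patch): FileSystem.get() hands out instances from a static cache keyed 
by URI and user, and those cached instances stay referenced by the JVM (here, the 
NameNode) until they are explicitly closed, whereas FileSystem.newInstance() gives 
the caller an uncached object it can close itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsCacheSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Cached: repeated get() calls made on behalf of distinct users keep
        // adding entries to the JVM-wide FileSystem cache; nothing evicts them.
        FileSystem cached = FileSystem.get(conf);
        System.out.println("cached instance: " + cached.getUri());

        // Uncached alternative: the caller owns the lifetime and closes it.
        try (FileSystem fresh = FileSystem.newInstance(conf)) {
          System.out.println("uncached instance: " + fresh.getUri());
        }
      }
    }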




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591323)
Time Spent: 10h 10m  (was: 10h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash interval 
> has elapsed. Currently, data is removed from the system directly; this behavior 
> should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=591312&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591312
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 23:57
Start Date: 29/Apr/21 23:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958#issuecomment-829704322


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 232m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 319m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2958/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2958 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ce7a005114d5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 16b8ad6aa7a83ad296d8b5cbfb4d90bde528af2b 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591315&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591315
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 30/Apr/21 00:02
Start Date: 30/Apr/21 00:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829705908


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 20s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 234m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |   5m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt)
 |  hadoop-hdfs-httpfs in the patch passed.  |
   | -1 :x: |  unit  |  18m 10s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 399m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591236
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 21:01
Start Date: 29/Apr/21 21:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829593545


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   4m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 11s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 236m 12s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m 30s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  18m 33s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 396m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591228&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591228
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 20:33
Start Date: 29/Apr/21 20:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829577178


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 18s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 22s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   7m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 42s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 618 unchanged - 6 fixed = 
627 total (was 624)  |
   | +1 :green_heart: |  mvnsite  |   2m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   7m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  9s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 222m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 58s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  17m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 365m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint |
   | uname | Linux 600c3befd35a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   

[jira] [Updated] (HDFS-15652) Make block size from NNThroughputBenchmark configurable

2021-04-29 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-15652:
---
Fix Version/s: 3.2.3
   2.10.2
   3.1.5
   3.3.1

Just back-ported this to branches 3.3, 3.2, 3.1, and 2.10. Updated Fix Versions.
Thanks [~ferhui] for contributing.

> Make block size from NNThroughputBenchmark configurable 
> 
>
> Key: HDFS-15652
> URL: https://issues.apache.org/jira/browse/HDFS-15652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: benchmarks
>Affects Versions: 3.3.0
>Reporter: Hui Fei
>Assignee: Hui Fei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> When testing NNThroughputBenchmark, I get the following error logs.
> {quote}
> 2020-10-26 20:51:25,781 ERROR namenode.NNThroughputBenchmark: StatsDaemon 43 
> failed: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
> size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 16 < 1048576
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2514)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2452)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.createOriginal(NameNodeRpcServer.java:824)
> at 
> org.apache.hadoop.hdfs.server.namenode.ProtectionManager.create(ProtectionManager.java:344)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:792)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:326)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:2002)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2985)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
> at org.apache.hadoop.ipc.Client.call(Client.java:1508)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy9.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:281)
> at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy10.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$CreateFileStats.executeOp(NNThroughputBenchmark.java:597)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$StatsDaemon.benchmarkOne(NNThroughputBenchmark.java:428)
> at 
> org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark$StatsDaemon.run(NNThroughputBenchmark.java:412)
> {quote}
> Because the NN has already started and is serving, we should make the block size 
> used by the benchmark client configurable; that would be convenient.
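
For reference, a simplified sketch of the server-side validation behind the error 
above (paraphrased from the exception message, not the exact FSNamesystem code): 
the block size of 16 used by the benchmark is rejected because it is below 
dfs.namenode.fs-limits.min-block-size, which defaults to 1048576.

    // Simplified sketch; "requestedBlockSize" stands for the block size the
    // benchmark client passes to create().
    static void checkBlockSize(org.apache.hadoop.conf.Configuration conf,
        long requestedBlockSize) throws java.io.IOException {
      long minBlockSize = conf.getLong(
          "dfs.namenode.fs-limits.min-block-size", 1048576L);
      if (requestedBlockSize < minBlockSize) {
        throw new java.io.IOException(
            "Specified block size is less than configured minimum value"
            + " (dfs.namenode.fs-limits.min-block-size): "
            + requestedBlockSize + " < " + minBlockSize);
      }
    }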



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591169&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591169
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:52
Start Date: 29/Apr/21 17:52
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623268777



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
 curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-  [&recursive=<true|false>]"
+  [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   Yes this would be good enough. Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591169)
Time Spent: 9h 20m  (was: 9h 10m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash interval 
> has elapsed. Currently, data is removed from the system directly; this behavior 
> should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591168
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:51
Start Date: 29/Apr/21 17:51
Worklog Time Spent: 10m 
  Work Description: mooons commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623263562



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
 curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-  [&recursive=<true|false>]"
+  [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   Looks good. Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591168)
Time Spent: 9h 10m  (was: 9h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash interval 
> has elapsed. Currently, data is removed from the system directly; this behavior 
> should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591159
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:44
Start Date: 29/Apr/21 17:44
Worklog Time Spent: 10m 
  Work Description: mooons commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623263562



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
 curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-  [&recursive=<true|false>]"
+  [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   Looks good. Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591159)
Time Spent: 9h  (was: 8h 50m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash interval 
> has elapsed. Currently, data is removed from the system directly; this behavior 
> should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591149
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:31
Start Date: 29/Apr/21 17:31
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623254234



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
 curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-  [&recursive=<true|false>]"
+  [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   Done. @smengcl does this look good?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591149)
Time Spent: 8h 50m  (was: 8h 40m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash interval 
> has elapsed. Currently, data is removed from the system directly; this behavior 
> should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, which 
> should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591140
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:20
Start Date: 29/Apr/21 17:20
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623247100



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
 curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-  [&recursive=<true|false>]"
+  [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   Unfortunately, making these values bold with `**` does not work, because this 
text is rendered inside a scrollable code block (insert-code mode).
   However, let me add a special note about the default values right below the 
curl command.
   Thanks for the suggestion.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591140)
Time Spent: 8h 40m  (was: 8.5h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Release Note: 
Incompatible change:

The default behaviour of the WebHDFS and HttpFS DELETE APIs is going to match 
the delete shell command. If the config "fs.trash.interval" is set to a value 
greater than 0, the DELETE API will by default try to move the given file to 
the .Trash dir (similar to the delete shell command's behaviour).
However, the DELETE API will also provide a skiptrash query param that can 
skip trash even if "fs.trash.interval" is set to a value greater than 0 
(similar to the skipTrash argument of the delete shell command).
The default value of the skiptrash query param will be false.

API change:
curl -i -X DELETE "http://host:port/webhdfs/v1/path?op=DELETE
[&recursive=true|false][&skiptrash=true|false]"

  was:
The default behaviour of the WebHDFS and HttpFS DELETE APIs is going to match 
the delete shell command. If the config "fs.trash.interval" is set to a value 
greater than 0, the DELETE API will by default try to move the given file to 
the .Trash dir (similar to the delete shell command's behaviour).
However, the DELETE API will also provide a skiptrash query param that can 
skip trash even if "fs.trash.interval" is set to a value greater than 0 
(similar to the skipTrash argument of the delete shell command).
The default value of the skiptrash query param will be false.

API change:
curl -i -X DELETE "http://host:port/webhdfs/v1/path?op=DELETE
[&recursive=true|false][&skiptrash=true|false]"
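
A minimal client-side sketch of the behaviour described in this release note, assuming 
an insecure cluster, a hypothetical NameNode HTTP endpoint at namenode.example.com:9870 
(9870 is assumed to be the default NameNode HTTP port in Hadoop 3.x), and a made-up 
path; it is illustrative only and not part of the patch:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

/** Illustrative WebHDFS DELETE calls showing the skiptrash query param. */
public class WebHdfsDeleteExample {

  public static void main(String[] args) throws IOException {
    // Hypothetical NameNode HTTP endpoint and target path.
    String base = "http://namenode.example.com:9870/webhdfs/v1";
    String path = "/tmp/old-data";

    // Default behaviour per the release note: skiptrash is false, so with
    // fs.trash.interval > 0 the file is first moved to the .Trash dir.
    delete(base + path + "?op=DELETE&recursive=true");

    // Explicitly bypass trash, mirroring `hdfs dfs -rm -r -skipTrash`.
    delete(base + path + "?op=DELETE&recursive=true&skiptrash=true");
  }

  private static void delete(String url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestMethod("DELETE");
    // Reading the response code is what actually sends the request.
    System.out.println(url + " -> HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
```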


> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591131=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591131
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 17:01
Start Date: 29/Apr/21 17:01
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623234638



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   Let's include the incompatible change note in the 3.3.1 and 3.4.0 release 
notes.
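
   The diff excerpt above stops at the DEFAULT constant. For context, the rest of a 
WebHDFS boolean query parameter usually follows the pattern of the existing 
BooleanParam subclasses such as RecursiveParam; the sketch below shows that pattern 
and may differ in detail from the actual patch:

```java
package org.apache.hadoop.hdfs.web.resources;

/**
 * Sketch of the usual BooleanParam-subclass boilerplate for skiptrash;
 * the real DeleteSkipTrashParam in the patch may differ in detail.
 */
public class DeleteSkipTrashParam extends BooleanParam {

  public static final String NAME = "skiptrash";
  public static final String DEFAULT = FALSE;

  private static final Domain DOMAIN = new Domain(NAME);

  /** Construct from an already-parsed boolean value. */
  public DeleteSkipTrashParam(final Boolean value) {
    super(DOMAIN, value);
  }

  /** Construct from the raw query-string value, e.g. "true" or "false". */
  public DeleteSkipTrashParam(final String str) {
    this(DOMAIN.parse(str));
  }

  @Override
  public String getName() {
    return NAME;
  }
}
```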




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591131)
Time Spent: 8.5h  (was: 8h 20m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591099=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591099
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 16:11
Start Date: 29/Apr/21 16:11
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623196324



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   I understand your concerns @smengcl.
   Here is @jojochuang's 
[comment](https://issues.apache.org/jira/browse/HDFS-15982?focusedCommentId=17331521=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17331521)
 from Jira, which you might want to look at:
   ```
   This is a big incompatible change. If we think this should be part of 3.4.0, 
risking our compatibility guarantee (which I think makes sense, given how many 
times I was involved in accidental data deletion), I think it can be part of 
3.3.1. We traditionally regard 3.3.0 as non-production ready, so making an 
incompat change in 3.3.1 probably is justifiable.
   
   
   ```
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591099)
Time Spent: 8h 20m  (was: 8h 10m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591094=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591094
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 16:10
Start Date: 29/Apr/21 16:10
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623196326



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
##
@@ -462,7 +462,7 @@ See also: [`destination`](#Destination), 
[FileSystem](../../api/org/apache/hadoo
 * Submit a HTTP DELETE request.
 
     curl -i -X DELETE "http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=DELETE
-                              [&recursive=<true|false>]"
+                              [&recursive=<true|false>][&skiptrash=<true|false>]"
 

Review comment:
   nit: if we can emphasize the default value of `recursive` (`false`) and 
`skiptrash` here in the doc, it would be great! Try bold font: 
`[&recursive=<true|**false**>]`




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591094)
Time Spent: 8h 10m  (was: 8h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591093=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591093
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 16:10
Start Date: 29/Apr/21 16:10
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623196324



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   I understand your concerns @smengcl.
   Here is @jojochuang's 
[comment](https://issues.apache.org/jira/browse/HDFS-15982?focusedCommentId=17331521=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17331521)
 from Jira, which you might want to look at:
   ```
   This is a big incompatible change. If we think this should be part of 3.4.0, 
risking our compatibility guarantee (which I think makes sense, given how many 
times I was involved in accidental data deletion), I think it can be part of 
3.3.1. We traditionally regard 3.3.0 as non-production ready, so making an 
incompat change in 3.3.1 probably is justifiable.
   ```
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591093)
Time Spent: 8h  (was: 7h 50m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16002) TestJournalNodeRespectsBindHostKeys#testHttpsBindHostKey very flaky

2021-04-29 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335618#comment-17335618
 ] 

Bilwa S T commented on HDFS-16002:
--

Hi [~weichiu],

Can you assign this issue to me?

> TestJournalNodeRespectsBindHostKeys#testHttpsBindHostKey very flaky
> ---
>
> Key: HDFS-16002
> URL: https://issues.apache.org/jira/browse/HDFS-16002
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> This test appears to be failing a lot lately. I suspect it has to do with the 
> new change to support reloading httpserver2 certificates, but I've not looked 
> into it.
> {noformat}
> Stacktrace
> java.lang.NullPointerException
>   at sun.nio.fs.UnixPath.normalizeAndCheck(UnixPath.java:77)
>   at sun.nio.fs.UnixPath.(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.nio.file.Paths.get(Paths.java:84)
>   at 
> org.apache.hadoop.http.HttpServer2$Builder.makeConfigurationChangeMonitor(HttpServer2.java:609)
>   at 
> org.apache.hadoop.http.HttpServer2$Builder.createHttpsChannelConnector(HttpServer2.java:592)
>   at 
> org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:518)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeHttpServer.start(JournalNodeHttpServer.java:81)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:238)
>   at 
> org.apache.hadoop.hdfs.qjournal.MiniJournalCluster.(MiniJournalCluster.java:120)
>   at 
> org.apache.hadoop.hdfs.qjournal.MiniJournalCluster.(MiniJournalCluster.java:47)
>   at 
> org.apache.hadoop.hdfs.qjournal.MiniJournalCluster$Builder.build(MiniJournalCluster.java:79)
>   at 
> org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys.testHttpsBindHostKey(TestJournalNodeRespectsBindHostKeys.java:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
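
The NPE above comes out of Paths.get() inside HttpServer2$Builder.makeConfigurationChangeMonitor 
and is what passing a null path string (for example, an unset HTTPS keystore location in the 
test) produces. A small, self-contained illustration of that failure mode and of the kind of 
null-guard a fix might add (purely illustrative; the variable names and the eventual fix are 
not taken from this issue):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/** Reproduces the Paths.get(null) failure mode seen in the stack trace. */
public class NullKeystorePathDemo {

  public static void main(String[] args) {
    // Simulates the suspected condition: the test configures no HTTPS
    // keystore location, so the certificate-reload monitor receives null.
    String keystoreLocation = null;

    try {
      Path watched = Paths.get(keystoreLocation);
      System.out.println("watching " + watched);
    } catch (NullPointerException e) {
      // Same failure as in the TestJournalNodeRespectsBindHostKeys stack
      // trace above: the NPE is thrown while normalizing the null path.
      System.out.println("NPE from Paths.get(null): " + e);
    }

    // The kind of guard a fix might add: only create the change monitor
    // when a keystore location is actually configured.
    if (keystoreLocation != null && !keystoreLocation.isEmpty()) {
      System.out.println("watching " + Paths.get(keystoreLocation));
    } else {
      System.out.println("no keystore configured; skipping change monitor");
    }
  }
}
```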



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15982:
--
Component/s: hdfs-client

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, hdfs-client, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15982:
--
Labels: pull-request-available  (was: incompatibleChange 
pull-request-available)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591084=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591084
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 15:56
Start Date: 29/Apr/21 15:56
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623184950



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   Ah, I just noticed the target version includes 3.3.1; backporting to 3.3.x 
might be a problem if this is an incompatible change.
   
   According to the [compatibility 
guideline](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#REST_APIs):
   > Each API has an API-specific version number. Any incompatible changes MUST 
increment the API version number.
   
   How about this:
   - Change `skiptrash=true` default to be compatible with WebHDFS v1, backport 
this to 3.3.1
   - Set `skiptrash=false` in a separate jira for 3.4.0, which will be an 
incompatible change
   
   Or:
   - Increment WebHDFS REST API version to v2 which has `skiptrash=false` as 
default for DELETE




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591084)
Time Spent: 7h 50m  (was: 7h 40m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591083=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591083
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 15:54
Start Date: 29/Apr/21 15:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829358556


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 48s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   5m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 19s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/8/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 239m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m 16s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 405m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
 

[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335598#comment-17335598
 ] 

Hadoop QA commented on HDFS-16003:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
48s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 39s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 27m 
38s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  4m 
13s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
28s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/594/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 
112 unchanged - 0 fixed = 114 total (was 112) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 14s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591081=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591081
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 15:50
Start Date: 29/Apr/21 15:50
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829354581


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  23m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 41s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 31s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   4m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 56s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 619 unchanged - 6 fixed = 
628 total (was 625)  |
   | +1 :green_heart: |  mvnsite  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   9m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 225m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  16m 27s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 395m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint |
   | uname | Linux 58188c9b6b03 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 
18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 4ee943682cebc28bbe37b40d29cced08eb7fd968 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/8/testReport/ |
   | Max. process+thread count | 2110 (vs. ulimit 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591078=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591078
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 15:47
Start Date: 29/Apr/21 15:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2925:
URL: https://github.com/apache/hadoop/pull/2925#issuecomment-829352574


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  24m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   4m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   3m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   3m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 58s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 619 unchanged - 6 fixed = 
628 total (was 625)  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  1s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   9m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 17s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 223m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   5m 51s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  16m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 394m 21s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2925/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2925 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint markdownlint |
   | uname | Linux 0e6778b1e820 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 
18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591071=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591071
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 15:37
Start Date: 29/Apr/21 15:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829344056


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  1s |  |  jshint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   4m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 9 new + 621 unchanged - 6 fixed = 
630 total (was 627)  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 231m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m 10s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | -1 :x: |  unit  |  18m 19s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2927/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 384m  4s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   

[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591021=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591021
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 14:29
Start Date: 29/Apr/21 14:29
Worklog Time Spent: 10m 
  Work Description: virajjasani edited a comment on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829285381


   @smengcl Here is the `hadoop-3.3` backport PR: #2925. I have kept it up to 
date with this PR while addressing review comments.
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591021)
Time Spent: 7h  (was: 6h 50m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash 
> interval elapses. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI command].
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591020=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591020
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 14:28
Start Date: 29/Apr/21 14:28
Worklog Time Spent: 10m 
  Work Description: virajjasani edited a comment on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829285381


   @smengcl Here is the `hadoop-3.3` backport PR: #2925. I have kept it up to 
date while addressing review comments.
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591020)
Time Spent: 6h 50m  (was: 6h 40m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591019=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591019
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 14:28
Start Date: 29/Apr/21 14:28
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-829285381


   @smengcl The hadoop-3.3 backport PR is #2925, and I have kept it up to date while addressing review comments.
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591019)
Time Spent: 6h 40m  (was: 6.5h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=591017=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-591017
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 14:20
Start Date: 29/Apr/21 14:20
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623098996



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
##
@@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws 
IOException {
   return toJSON(
   StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
 }
+// Same is the behavior with Delete shell command.
+// If moveToAppropriateTrash() returns false, file deletion
+// is attempted rather than throwing Error.
+LOG.error("Could not move {} to Trash, attempting removal", path);

Review comment:
   Sure, sounds good. Let me do it right away.
   Thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 591017)
Time Spent: 6.5h  (was: 6h 20m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log

2021-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335498#comment-17335498
 ] 

Hadoop QA commented on HDFS-15915:
--

| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 0m 45s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 1s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | | 0m 0s | test4tests | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests || ||
| -1 | mvninstall | 6m 34s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/595/artifact/out/branch-mvninstall-root.txt | root in trunk failed. |
| +1 | compile | 1m 17s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 | checkstyle | 1m 0s | | trunk passed |
| +1 | mvnsite | 1m 19s | | trunk passed |
| +1 | shadedclient | 15m 30s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| 0 | spotbugs | 20m 44s | | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 3m 3s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 1m 10s | | the patch passed |
| +1 | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 | javac | 1m 19s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/595/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 471 unchanged - 1 fixed = 472 total (was 472) |
| +1 | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 | javac | 1m 9s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/595/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt | hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 455 unchanged - 1 fixed = 456 total (was 456) |
| +1 | checkstyle | 0m 58s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 204 unchanged - 1 fixed = 204 total (was 205) |
| 

[jira] [Resolved] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-29 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-15561.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thanks [~fengnanli] for your work! And thanks [~lamberken] for your report!

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=590974=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590974
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:14
Start Date: 29/Apr/21 13:14
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao merged pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590974)
Time Spent: 3h  (was: 2h 50m)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590972=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590972
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:12
Start Date: 29/Apr/21 13:12
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623028802



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
##
@@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws 
IOException {
   return toJSON(
   StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
 }
+// Same is the behavior with Delete shell command.
+// If moveToAppropriateTrash() returns false, file deletion
+// is attempted rather than throwing Error.
+LOG.error("Could not move {} to Trash, attempting removal", path);

Review comment:
   Let's lower this log level to `debug` instead **if we decide to make skiptrash default to false**. `error` could generate a lot of noise if trash is not enabled here.
   
   When skiptrash defaults to true, I'm fine with `error`, but `warn` might still be better.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590972)
Time Spent: 6h 20m  (was: 6h 10m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h 20m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590971=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590971
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:10
Start Date: 29/Apr/21 13:10
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623036457



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   Alright, if the jira has the incompatible label I'm fine with 
skiptrash=false default. :)
   
   @jojochuang 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590971)
Time Spent: 6h 10m  (was: 6h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15982:
--
Labels: incompatibleChange pull-request-available  (was: 
pull-request-available)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: incompatibleChange, pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590970=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590970
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:04
Start Date: 29/Apr/21 13:04
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623031345



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java
##
@@ -1527,34 +1535,49 @@ public Response delete(
   @QueryParam(RecursiveParam.NAME) @DefaultValue(RecursiveParam.DEFAULT)
   final RecursiveParam recursive,
   @QueryParam(SnapshotNameParam.NAME) 
@DefaultValue(SnapshotNameParam.DEFAULT)
-  final SnapshotNameParam snapshotName
+  final SnapshotNameParam snapshotName,
+  @QueryParam(DeleteSkipTrashParam.NAME)
+  @DefaultValue(DeleteSkipTrashParam.DEFAULT)
+  final DeleteSkipTrashParam skiptrash
   ) throws IOException, InterruptedException {
 
-init(ugi, delegation, username, doAsUser, path, op, recursive, 
snapshotName);
+init(ugi, delegation, username, doAsUser, path, op, recursive,
+snapshotName, skiptrash);
 
-return doAs(ugi, new PrivilegedExceptionAction() {
-  @Override
-  public Response run() throws IOException {
-  return delete(ugi, delegation, username, doAsUser,
-  path.getAbsolutePath(), op, recursive, snapshotName);
-  }
-});
+return doAs(ugi, () -> delete(
+path.getAbsolutePath(), op, recursive, snapshotName, skiptrash));
   }
 
   protected Response delete(
-  final UserGroupInformation ugi,
-  final DelegationParam delegation,
-  final UserParam username,
-  final DoAsParam doAsUser,
   final String fullpath,
   final DeleteOpParam op,
   final RecursiveParam recursive,
-  final SnapshotNameParam snapshotName
-  ) throws IOException {
+  final SnapshotNameParam snapshotName,
+  final DeleteSkipTrashParam skipTrash) throws IOException {
 final ClientProtocol cp = getRpcClientProtocol();
 
 switch(op.getValue()) {
 case DELETE: {
+  Configuration conf =
+  (Configuration) context.getAttribute(JspHelper.CURRENT_CONF);
+  long trashInterval =
+  conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+  if (trashInterval > 0 && !skipTrash.getValue()) {
+LOG.info("{} is {} , trying to archive {} instead of removing",
+FS_TRASH_INTERVAL_KEY, trashInterval, fullpath);
+org.apache.hadoop.fs.Path path =
+new org.apache.hadoop.fs.Path(fullpath);
+boolean movedToTrash = Trash.moveToAppropriateTrash(
+FileSystem.get(conf), path, conf);
+if (movedToTrash) {
+  final String js = JsonUtil.toJsonString("boolean", true);
+  return Response.ok(js).type(MediaType.APPLICATION_JSON).build();
+}
+// Same is the behavior with Delete shell command.
+// If moveToAppropriateTrash() returns false, file deletion
+// is attempted rather than throwing Error.
+LOG.error("Could not move {} to Trash, attempting removal", fullpath);

Review comment:
   Same as above




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590970)
Time Spent: 6h  (was: 5h 50m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message 

[jira] [Commented] (HDFS-16000) HDFS : Rename performance optimization

2021-04-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335450#comment-17335450
 ] 

Xiaoqiao He commented on HDFS-16000:


[~zhuxiangyi] Thanks for your report and contribution. It is a good idea and improvement.
BTW, I just noticed that several unit tests failed and there are some checkstyle/javadoc warnings. Would you mind taking another look?
Also, it is enough to submit the patch either here or on GitHub only; no need to submit it in both places.
Thanks again.

> HDFS : Rename performance optimization
> --
>
> Key: HDFS-16000
> URL: https://issues.apache.org/jira/browse/HDFS-16000
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.1.4, 3.3.1
>Reporter: zhu
>Assignee: zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: 20210428-143238.svg, 20210428-171635-lambda.svg, 
> HDFS-16000.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Renaming (moving) a large directory takes a long time. For example, it takes 
> about 40 seconds to move a directory with roughly 10 million (1000W) entries. 
> Such a large move happens when a lot of data has been deleted to the trash and 
> the trash checkpoint is created, and a user may also actively trigger a move 
> of a large directory. Either way the NameNode holds its lock for too long and 
> can be killed by ZKFC. A flame graph shows that most of the time is spent 
> creating EnumCounters objects.
> h3. I think the following two points can optimize the efficiency of rename 
> execution
> h3. Quota count calculation optimization:
>  * Create one QuotaCounts object for the directory whose quota is being 
> calculated and pass it to each nested calculation through a parameter, so as 
> to avoid creating an EnumCounters object for every calculation (a minimal 
> sketch of this idea follows this description).
>  * In addition, the flame graph shows that modifying QuotaCounts through a 
> lambda takes longer than a plain method call, so the plain method is used to 
> update the QuotaCounts counters.
> h3. Rename logic optimization:
>  * Today the quota count is calculated three times for every rename, whatever 
> the source and target directories are: first to check whether the moved 
> directory exceeds the target directory quota, second to compute the moved 
> directory's quota so the source directory quota can be updated, and third to 
> compute it again so the target directory quota can be updated.
>  * Some of these three quota calculations are unnecessary. For example, if no 
> parent directory of the source or target has a quota configured, there is no 
> need to calculate the quotaCount at all. Even when both the source and target 
> directories use quotas, the quota does not need to be computed three times: 
> the first and third calculations are identical and only need to be done once.
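
A minimal sketch of the accumulator-passing idea from the first optimization above, using stand-in types rather than the real INode/QuotaCounts classes:

{code:java}
import java.util.ArrayList;
import java.util.List;

class QuotaCountSketch {
  static class Counts { long namespace; long storagespace; }

  static class Dir {
    long files;
    long bytes;
    List<Dir> children = new ArrayList<>();
  }

  // The caller creates a single Counts object that is reused for the whole
  // subtree, so no per-directory counter object is allocated.
  static void accumulate(Dir dir, Counts acc) {
    acc.namespace += dir.files;
    acc.storagespace += dir.bytes;
    for (Dir child : dir.children) {
      accumulate(child, acc);
    }
  }
}
{code}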



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590969=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590969
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:03
Start Date: 29/Apr/21 13:03
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623028802



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
##
@@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws 
IOException {
   return toJSON(
   StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
 }
+// Same is the behavior with Delete shell command.
+// If moveToAppropriateTrash() returns false, file deletion
+// is attempted rather than throwing Error.
+LOG.error("Could not move {} to Trash, attempting removal", path);

Review comment:
   Let's lower this log level to `debug` instead **if we decide to make skiptrash default to false**. `error` could generate a lot of noise if trash is not enabled here.
   
   When skiptrash defaults to true, I'm fine with error, but `warn` might be better.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590969)
Time Spent: 5h 50m  (was: 5h 40m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590967=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590967
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 13:00
Start Date: 29/Apr/21 13:00
Worklog Time Spent: 10m 
  Work Description: smengcl commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r623028802



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
##
@@ -756,6 +760,10 @@ public JSONObject execute(FileSystem fs) throws 
IOException {
   return toJSON(
   StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
 }
+// Same is the behavior with Delete shell command.
+// If moveToAppropriateTrash() returns false, file deletion
+// is attempted rather than throwing Error.
+LOG.error("Could not move {} to Trash, attempting removal", path);

Review comment:
   Let's lower this log level to `debug` instead.
   `error` could generate a lot of noise if trash is not enabled here.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590967)
Time Spent: 5h 40m  (was: 5.5h)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?focusedWorklogId=590893=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590893
 ]

ASF GitHub Bot logged work on HDFS-15997:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 08:57
Start Date: 29/Apr/21 08:57
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #2958:
URL: https://github.com/apache/hadoop/pull/2958#issuecomment-829061453


   @smengcl , please check the checkstyle and other failures if any.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590893)
Time Spent: 0.5h  (was: 20m)

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently dfsadmin -provisionSnapshotTrash only supports creating trash root 
> one by one.
> This jira adds -all argument to create trash root on ALL snapshottable dirs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=590892=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590892
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 08:51
Start Date: 29/Apr/21 08:51
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#discussion_r622855584



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
##
@@ -743,6 +748,15 @@ public FSDelete(String path, boolean recursive) {
  */
 @Override
 public JSONObject execute(FileSystem fs) throws IOException {
+  if (!skipTrash) {
+boolean movedToTrash = Trash.moveToAppropriateTrash(fs, path,
+fs.getConf());
+if (movedToTrash) {
+  HttpFSServerWebApp.getMetrics().incrOpsDelete();
+  return toJSON(
+  StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), true);
+}

Review comment:
   Sure thing. I put a comment on `NamenodeWebHdfsMethods` but somehow missed it here.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/DeleteSkipTrashParam.java
##
@@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web.resources;
+
+/**
+ * SkipTrash param to be used by DELETE query.
+ */
+public class DeleteSkipTrashParam extends BooleanParam {
+
+  public static final String NAME = "skiptrash";
+  public static final String DEFAULT = FALSE;

Review comment:
   Thanks for the suggestion. In fact, there was a similar discussion on the Jira as well, and so far the consensus has been to keep it `false` by default. Because of that, this is an incompatible change w.r.t. the default behaviour of the DELETE API.
   Hence, the decision was to mark the Jira as an incompatible change, and we can still go ahead with this new behaviour starting with the 3.3.1/3.4.0 releases.

   However, I am fine with changing this to `true` as well if that is where the majority would like to go.
   
   FYI @jojochuang 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590892)
Time Spent: 5.5h  (was: 5h 20m)

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval elapses. Currently, data is removed from the system directly; the 
> behavior should be the same as the CLI command.
> This can be helpful when a user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log

2021-04-29 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-15915:
---
Attachment: HDFS-15915-03.patch

> Race condition with async edits logging due to updating txId outside of the 
> namesystem log
> --
>
> Key: HDFS-15915
> URL: https://issues.apache.org/jira/browse/HDFS-15915
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Major
> Attachments: HDFS-15915-01.patch, HDFS-15915-02.patch, 
> HDFS-15915-03.patch, testMkdirsRace.patch
>
>
> {{FSEditLogAsync}} creates an {{FSEditLogOp}} and populates its fields inside 
> {{FSNamesystem.writeLock}}. But one essential field, the transaction id of the 
> edits op, remains unset until the operation is scheduled for syncing. At that 
> time {{beginTransaction()}} sets {{FSEditLogOp.txid}} and increments the global 
> transaction count. On a busy NameNode this event can fall outside the write 
> lock. This causes problems for Observer reads. It can also potentially reshuffle 
> transactions so that the Standby applies them in the wrong order.
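
A minimal sketch of the ordering hazard being described, with illustrative names rather than the real FSEditLogAsync types: the op is fully populated under the namesystem write lock, but its transaction id is only assigned later, when the op is queued for syncing, and on a busy NameNode that can happen after the lock has been released.

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class EditLogRaceSketch {
  static final class Op {
    long txid = -1;   // stays unset until beginTransaction() runs
    String payload;
  }

  private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
  private final AtomicLong lastTxId = new AtomicLong();

  Op logEdit(String payload) {
    fsLock.writeLock().lock();
    try {
      Op op = new Op();
      op.payload = payload;   // fields populated under the write lock...
      return op;              // ...but the txid is still unset here
    } finally {
      fsLock.writeLock().unlock();
    }
  }

  void beginTransaction(Op op) {
    // May run after the write lock has been released, so two ops can receive
    // ids in a different order than the one in which they were logged.
    op.txid = lastTxId.incrementAndGet();
  }
}
{code}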



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335217#comment-17335217
 ] 

lei w commented on HDFS-16003:
--

Ok, thank you so much for responding to my message.

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only 
> if the log level is debug, yet we always traverse the invalidatedBlocks list 
> without checking the log level. I suggest checking the log level first before 
> printing, which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15915) Race condition with async edits logging due to updating txId outside of the namesystem log

2021-04-29 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335215#comment-17335215
 ] 

Hadoop QA commented on HDFS-15915:
--

| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| 0 | reexec | 2m 32s | | Docker mode activated. |
|| || || || Prechecks || ||
| +1 | dupname | 0m 1s | | No case conflicting files found. |
| +1 | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 | | 0m 0s | test4tests | The patch appears to include 4 new or modified test files. |
|| || || || trunk Compile Tests || ||
| +1 | mvninstall | 29m 4s | | trunk passed |
| +1 | compile | 2m 6s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | compile | 1m 41s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 | checkstyle | 1m 24s | | trunk passed |
| +1 | mvnsite | 1m 59s | | trunk passed |
| +1 | shadedclient | 24m 22s | | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 36s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 | javadoc | 2m 20s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| 0 | spotbugs | 32m 57s | | Both FindBugs and SpotBugs are enabled, using SpotBugs. |
| +1 | spotbugs | 4m 40s | | trunk passed |
|| || || || Patch Compile Tests || ||
| +1 | mvninstall | 1m 33s | | the patch passed |
| +1 | compile | 1m 37s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| -1 | javac | 1m 37s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/593/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 1 new + 471 unchanged - 1 fixed = 472 total (was 472) |
| +1 | compile | 1m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 | javac | 1m 21s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/593/artifact/out/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08.txt | hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 1 new + 455 unchanged - 1 fixed = 456 total (was 456) |
| -0 | checkstyle | 1m 0s | https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/593/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 204 unchanged - 1 fixed = 205 

[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16003:
---
Assignee: lei w
  Status: Patch Available  (was: Open)

Added [~lei w] to the contributor list and triggered Jenkins manually.
BTW, it is better not to set the fix version before the patch is committed. Thanks.

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only 
> if the log level is debug, yet we always traverse the invalidatedBlocks list 
> without checking the log level. I suggest checking the log level first before 
> printing, which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335210#comment-17335210
 ] 

lei w commented on HDFS-16003:
--

In the actual production environment, the log level is generally info. Checking the log level first saves the time spent traversing the collection. If the log level is debug, we still traverse the collection and print the information, so we will not lose any trace information about specific blocks.
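
A minimal sketch of the guard being proposed, with illustrative names rather than the actual BlockManager fields: the potentially large list is only traversed when debug logging is enabled.

{code:java}
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class InvalidatedBlocksLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(InvalidatedBlocksLogging.class);

  static void logInvalidated(List<String> invalidatedBlocks) {
    // Check the level first so that at info level we never pay the cost of
    // walking the list and building the message.
    if (LOG.isDebugEnabled()) {
      StringBuilder sb = new StringBuilder();
      for (String block : invalidatedBlocks) {
        sb.append(' ').append(block);
      }
      LOG.debug("processReport: invalidated blocks:{}", sb);
    }
  }
}
{code}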

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only 
> if the log level is debug, yet we always traverse the invalidatedBlocks list 
> without checking the log level. I suggest checking the log level first before 
> printing, which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16003:
---
Fix Version/s: (was: 3.3.0)

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only 
> if the log level is debug, yet we always traverse the invalidatedBlocks list 
> without checking the log level. I suggest checking the log level first before 
> printing, which saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335201#comment-17335201
 ] 

Xiaoqiao He commented on HDFS-16003:


Thanks [~lei w] for your proposal. It makes sense to me, but I am concerned that
it may lose some trace information about specific blocks.

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only when
> the log level is debug, yet we always traverse the invalidatedBlocks list without
> checking the log level. I suggest checking the log level before printing, which
> saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16003:
-
Fix Version/s: 3.3.0
Affects Version/s: 3.3.0

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Affects Versions: 3.3.0
>Reporter: lei w
>Priority: Minor
> Fix For: 3.3.0
>
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only when
> the log level is debug, yet we always traverse the invalidatedBlocks list without
> checking the log level. I suggest checking the log level before printing, which
> saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16003:
-
Attachment: HDFS-16003.patch

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Priority: Minor
> Attachments: HDFS-16003.patch
>
>
> In the BlockManager#processReport() method, we print invalidated blocks only when
> the log level is debug, yet we always traverse the invalidatedBlocks list without
> checking the log level. I suggest checking the log level before printing, which
> saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16003:
-
Description: In the BlockManager#processReport() method, we print invalidated
blocks only when the log level is debug, yet we always traverse the
invalidatedBlocks list without checking the log level. I suggest checking the
log level before printing, which saves the traversal time when the log level is
info.  (was: In BlockManager#processReport( ) method, we will
print )

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Priority: Minor
>
> In the BlockManager#processReport() method, we print invalidated blocks only when
> the log level is debug, yet we always traverse the invalidatedBlocks list without
> checking the log level. I suggest checking the log level before printing, which
> saves the traversal time when the log level is info.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16003:
-
Description: In BlockManager#processReport( ) method, we will print 

> ProcessReport print invalidatedBlocks should judge debug level at first
> ---
>
> Key: HDFS-16003
> URL: https://issues.apache.org/jira/browse/HDFS-16003
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Priority: Minor
>
> In BlockManager#processReport( ) method, we will print 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16003) ProcessReport print invalidatedBlocks should judge debug level at first

2021-04-29 Thread lei w (Jira)
lei w created HDFS-16003:


 Summary: ProcessReport print invalidatedBlocks should judge debug 
level at first
 Key: HDFS-16003
 URL: https://issues.apache.org/jira/browse/HDFS-16003
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namanode
Reporter: lei w






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=590867=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590867
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 07:11
Start Date: 29/Apr/21 07:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-828994479


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 31s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2954 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 25436df36b00 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 10230955dab780cb961459ab66cdf9e40258c1bf |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/testReport/ |
   | Max. process+thread count | 2395 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2954/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   

[jira] [Work logged] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?focusedWorklogId=590860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590860
 ]

ASF GitHub Bot logged work on HDFS-15624:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 06:54
Start Date: 29/Apr/21 06:54
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2955:
URL: https://github.com/apache/hadoop/pull/2955


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590860)
Time Spent: 10.5h  (was: 10h 20m)

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type, NVDIMM, which changes the ordinal() values
> of the StorageType enum. Setting the quota by storage type depends on the
> ordinal(); therefore, quota settings may become invalid after an
> upgrade.
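
As an illustration of why persisting ordinal() is fragile, here is a self-contained sketch (the enums below are simplified stand-ins, not the actual HDFS StorageType or its edit-log format):

{noformat}
public class OrdinalSketch {
  // Simplified pre-upgrade order: quotas stored by position map 1 -> SSD.
  enum OldStorageType { RAM_DISK, SSD, DISK, ARCHIVE }

  // Inserting a new constant ahead of existing ones shifts every later ordinal.
  enum NewStorageType { RAM_DISK, NVDIMM, SSD, DISK, ARCHIVE }

  public static void main(String[] args) {
    // A quota persisted as "storage type ordinal 1" meant SSD before the upgrade...
    System.out.println(OldStorageType.values()[1]);    // SSD
    // ...but resolves to a different type afterwards, landing on the wrong type.
    System.out.println(NewStorageType.values()[1]);    // NVDIMM
    // Persisting the name (or a stable id) instead of the ordinal avoids the problem.
    System.out.println(NewStorageType.valueOf("SSD")); // SSD
  }
}
{noformat}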



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-29 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-15624.

Resolution: Fixed

The updated patch was committed. Thanks Ayush for the help!

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type, NVDIMM, which changes the ordinal() values
> of the StorageType enum. Setting the quota by storage type depends on the
> ordinal(); therefore, quota settings may become invalid after an
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15624) Fix the SetQuotaByStorageTypeOp problem after updating hadoop

2021-04-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15624?focusedWorklogId=590859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590859
 ]

ASF GitHub Bot logged work on HDFS-15624:
-

Author: ASF GitHub Bot
Created on: 29/Apr/21 06:54
Start Date: 29/Apr/21 06:54
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2955:
URL: https://github.com/apache/hadoop/pull/2955#issuecomment-828985536


   Failed tests do not reproduce locally. Merging the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590859)
Time Spent: 10h 20m  (was: 10h 10m)

>  Fix the SetQuotaByStorageTypeOp problem after updating hadoop 
> ---
>
> Key: HDFS-15624
> URL: https://issues.apache.org/jira/browse/HDFS-15624
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.4.0
>Reporter: YaYun Wang
>Assignee: huangtianhua
>Priority: Major
>  Labels: pull-request-available, release-blocker
> Fix For: 3.4.0
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> HDFS-15025 adds a new storage type, NVDIMM, which changes the ordinal() values
> of the StorageType enum. Setting the quota by storage type depends on the
> ordinal(); therefore, quota settings may become invalid after an
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-29 Thread Tsz-wo Sze (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335172#comment-17335172
 ] 

Tsz-wo Sze commented on HDFS-7285:
--

Below is https://img-ask.csdnimg.cn/upload/1619363340018.png :
 !1619363340018.png! 

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: 1619363340018.png, Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing
> data reliability, compared to the existing HDFS 3-replica approach. For
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4
> blocks with a storage overhead of only 40%. This makes EC a quite attractive
> alternative for big data storage, particularly for cold data.
> Facebook had a related open source project called HDFS-RAID. It used to be
> one of the contributed packages in HDFS but has been removed since Hadoop 2.0
> for maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be used
> for cold files that will not be appended anymore; 3) the pure Java EC coding
> implementation is extremely slow in practical use. For these reasons, it might
> not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that
> gets rid of any external dependencies, making it self-contained and
> independently maintained. This design layers the EC feature on top of the
> storage type support and is intended to be compatible with existing HDFS
> features such as caching, snapshots, encryption, and high availability. The
> design will also support different EC coding schemes, implementations, and
> policies for different deployment scenarios. By utilizing advanced libraries
> (e.g. the Intel ISA-L library), an implementation can greatly improve the
> performance of EC encoding/decoding and make the EC solution even more
> attractive. We will post the design document soon.
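
The overhead figures quoted above follow from simple arithmetic; the sketch below just spells out the comparison (plain Java, no HDFS APIs involved):

{noformat}
public class EcOverheadSketch {
  public static void main(String[] args) {
    // 3-way replication: 2 extra copies per unit of data -> 200% storage overhead.
    double replicationOverhead = (3.0 - 1.0) / 1.0;

    // RS(10,4): 10 data cells plus 4 parity cells; any 4 lost blocks are recoverable.
    int dataUnits = 10;
    int parityUnits = 4;
    double ecOverhead = (double) parityUnits / dataUnits;

    System.out.printf("replication overhead = %.0f%%%n", replicationOverhead * 100); // 200%
    System.out.printf("RS(10,4) overhead    = %.0f%%%n", ecOverhead * 100);          // 40%
  }
}
{noformat}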



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-29 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HDFS-7285:
-
Attachment: 1619363340018.png

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: 1619363340018.png, Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing
> data reliability, compared to the existing HDFS 3-replica approach. For
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4
> blocks with a storage overhead of only 40%. This makes EC a quite attractive
> alternative for big data storage, particularly for cold data.
> Facebook had a related open source project called HDFS-RAID. It used to be
> one of the contributed packages in HDFS but has been removed since Hadoop 2.0
> for maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be used
> for cold files that will not be appended anymore; 3) the pure Java EC coding
> implementation is extremely slow in practical use. For these reasons, it might
> not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that
> gets rid of any external dependencies, making it self-contained and
> independently maintained. This design layers the EC feature on top of the
> storage type support and is intended to be compatible with existing HDFS
> features such as caching, snapshots, encryption, and high availability. The
> design will also support different EC coding schemes, implementations, and
> policies for different deployment scenarios. By utilizing advanced libraries
> (e.g. the Intel ISA-L library), an implementation can greatly improve the
> performance of EC encoding/decoding and make the EC solution even more
> attractive. We will post the design document soon.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-29 Thread Stone (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335169#comment-17335169
 ] 

Stone commented on HDFS-7285:
-

[~zhz] https://img-ask.csdnimg.cn/upload/1619363340018.png

You can see this picture by copying the URL address into the browser. Don't open
this URL link directly.

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing
> data reliability, compared to the existing HDFS 3-replica approach. For
> example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of 4
> blocks with a storage overhead of only 40%. This makes EC a quite attractive
> alternative for big data storage, particularly for cold data.
> Facebook had a related open source project called HDFS-RAID. It used to be
> one of the contributed packages in HDFS but has been removed since Hadoop 2.0
> for maintenance reasons. The drawbacks are: 1) it sits on top of HDFS and
> depends on MapReduce to do encoding and decoding tasks; 2) it can only be used
> for cold files that will not be appended anymore; 3) the pure Java EC coding
> implementation is extremely slow in practical use. For these reasons, it might
> not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that
> gets rid of any external dependencies, making it self-contained and
> independently maintained. This design layers the EC feature on top of the
> storage type support and is intended to be compatible with existing HDFS
> features such as caching, snapshots, encryption, and high availability. The
> design will also support different EC coding schemes, implementations, and
> policies for different deployment scenarios. By utilizing advanced libraries
> (e.g. the Intel ISA-L library), an implementation can greatly improve the
> performance of EC encoding/decoding and make the EC solution even more
> attractive. We will post the design document soon.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16002) TestJournalNodeRespectsBindHostKeys#testHttpsBindHostKey very flaky

2021-04-29 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDFS-16002:
--

 Summary: TestJournalNodeRespectsBindHostKeys#testHttpsBindHostKey 
very flaky
 Key: HDFS-16002
 URL: https://issues.apache.org/jira/browse/HDFS-16002
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


This test appears to be failing a lot lately. I suspect it has to do with the
new change to support reloading HttpServer2 certificates, but I've not looked
into it.
{noformat}
Stacktrace
java.lang.NullPointerException
at sun.nio.fs.UnixPath.normalizeAndCheck(UnixPath.java:77)
at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
at java.nio.file.Paths.get(Paths.java:84)
at 
org.apache.hadoop.http.HttpServer2$Builder.makeConfigurationChangeMonitor(HttpServer2.java:609)
at 
org.apache.hadoop.http.HttpServer2$Builder.createHttpsChannelConnector(HttpServer2.java:592)
at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:518)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeHttpServer.start(JournalNodeHttpServer.java:81)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:238)
at 
org.apache.hadoop.hdfs.qjournal.MiniJournalCluster.<init>(MiniJournalCluster.java:120)
at 
org.apache.hadoop.hdfs.qjournal.MiniJournalCluster.<init>(MiniJournalCluster.java:47)
at 
org.apache.hadoop.hdfs.qjournal.MiniJournalCluster$Builder.build(MiniJournalCluster.java:79)
at 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys.testHttpsBindHostKey(TestJournalNodeRespectsBindHostKeys.java:180)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{noformat}
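
The trace suggests Paths.get was handed a null path inside makeConfigurationChangeMonitor, presumably because no keystore/truststore location is configured in this test. Below is a minimal sketch of the kind of guard that would avoid the NPE (a hypothetical helper, not the actual HttpServer2 code):

{noformat}
import java.nio.file.Path;
import java.nio.file.Paths;

class ConfigMonitorSketch {
  /**
   * Resolve the file to watch for certificate reloads, or null if no location
   * is configured. Paths.get(null) throws NullPointerException, matching the
   * failure in the stack trace above, so the location is checked first.
   */
  static Path resolveMonitoredFile(String configuredLocation) {
    if (configuredLocation == null || configuredLocation.isEmpty()) {
      return null; // nothing to monitor; the caller should skip the watcher
    }
    return Paths.get(configuredLocation);
  }
}
{noformat}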



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org