[jira] [Commented] (HDFS-14525) JspHelper ignores hadoop.http.authentication.type

2021-04-21 Thread Qi Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327104#comment-17327104
 ] 

Qi Zhu commented on HDFS-14525:
---

[~prabhujoseph] 

I also think this is needed.

We could add an option to make these two settings independent: 
hadoop.security.authentication is specific to RPC authentication, whereas 
hadoop.http.authentication.type is specific to HTTP authentication.

We want HTTP to be unauthenticated while keeping RPC authentication.

 

> JspHelper ignores hadoop.http.authentication.type
> -
>
> Key: HDFS-14525
> URL: https://issues.apache.org/jira/browse/HDFS-14525
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> On a secure cluster with hadoop.http.authentication.type set to simple and 
> hadoop.http.authentication.anonymous.allowed set to true, the WebHDFS REST API 
> fails when user.name is not set. It runs fine if user.name=ambari-qa is set.
> {code}
> [knox@pjosephdocker-1 ~]$ curl -sS -L -w '%{http_code}' -X GET -d '' -H 
> 'Content-Length: 0' --negotiate -u : 
> 'http://pjosephdocker-1.openstacklocal:50070/webhdfs/v1/services/sync/yarn-ats?op=GETFILESTATUS'
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Security enabled but 
> user not authenticated by filter"}}403[knox@pjosephdocker-1 ~]$ 
> {code}
> JspHelper#getUGI checks UserGroupInformation.isSecurityEnabled() instead of 
> conf.get(hadoop.http.authentication.type).equals("kerberos") to decide whether 
> HTTP is secure, which causes the issue.
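The check the issue proposes can be sketched as follows. This is an illustrative stand-alone sketch, not the actual JspHelper code; only the config key hadoop.http.authentication.type and its default value "simple" come from Hadoop's documentation, everything else (class and method names) is made up.

```java
import java.util.Map;

// Illustrative sketch: decide whether HTTP is Kerberos-secured from the
// HTTP-specific setting, instead of the cluster-wide RPC security flag
// (UserGroupInformation.isSecurityEnabled()) that JspHelper#getUGI consults.
class HttpAuthCheck {
    static final String HTTP_AUTH_TYPE = "hadoop.http.authentication.type";

    static boolean isHttpKerberized(Map<String, String> conf) {
        // Hadoop's documented default for this key is "simple".
        return "kerberos".equals(conf.getOrDefault(HTTP_AUTH_TYPE, "simple"));
    }
}
```

With a check of this shape, a cluster that is Kerberos-secured for RPC but runs the HTTP filter with simple auth and anonymous access allowed would not reject the anonymous WebHDFS request quoted above.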



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15974) RBF: Unable to display the datanode UI of the router

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15974?focusedWorklogId=587025&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587025
 ]

ASF GitHub Bot logged work on HDFS-15974:
-

Author: ASF GitHub Bot
Created on: 22/Apr/21 03:23
Start Date: 22/Apr/21 03:23
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2915:
URL: https://github.com/apache/hadoop/pull/2915#issuecomment-824509633


   @goiri Could you review it again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 587025)
Time Spent: 1h  (was: 50m)

> RBF: Unable to display the datanode UI of the router
> 
>
> Key: HDFS-15974
> URL: https://issues.apache.org/jira/browse/HDFS-15974
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf, ui
>Affects Versions: 3.4.0
>Reporter: zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15358-1.patch, image-2021-04-15-11-36-47-644.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Clicking the Datanodes tag on the Router UI does not respond.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS

2021-04-21 Thread Zhe Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327055#comment-17327055
 ] 

Zhe Zhang commented on HDFS-7285:
-

[~iostream2...@163.com]: the image cannot be viewed. Can you upload it again? 
Thanks

> Erasure Coding Support inside HDFS
> --
>
> Key: HDFS-7285
> URL: https://issues.apache.org/jira/browse/HDFS-7285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Weihua Jiang
>Assignee: Zhe Zhang
>Priority: Major
> Fix For: 3.0.0-alpha1
>
> Attachments: Compare-consolidated-20150824.diff, 
> Consolidated-20150707.patch, Consolidated-20150806.patch, 
> Consolidated-20150810.patch, ECAnalyzer.py, ECParser.py, 
> HDFS-7285-Consolidated-20150911.patch, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> HDFSErasureCodingSystemTestPlan-20150824.pdf, 
> HDFSErasureCodingSystemTestReport-20150826.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding we can tolerate the loss of 4 
> blocks, with a storage overhead of only 40%. This makes EC a quite attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open source project called HDFS-RAID. It used to be 
> one of the contributed packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce to run encoding and decoding tasks; 2) it can only be used for 
> cold files that will not be appended anymore; 3) its pure-Java EC coding 
> implementation is extremely slow in practical use. For these reasons, it 
> might not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of any external dependencies, making it self-contained and 
> independently maintained. This design layers the EC feature on the 
> storage-type support and aims to be compatible with existing HDFS features 
> such as caching, snapshots, encryption, and high availability. It will also 
> support different EC coding schemes, implementations, and policies for 
> different deployment scenarios. By utilizing advanced libraries (e.g. the 
> Intel ISA-L library), an implementation can greatly improve the performance 
> of EC encoding/decoding and make the EC solution even more attractive. We 
> will post the design document soon.
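The overhead arithmetic in the description can be checked in a couple of lines. The class and method names below are illustrative helpers, not Hadoop APIs: 3-replica storage keeps two extra copies per block (200% overhead), while a 10+4 Reed-Solomon layout stores 4 parity blocks per 10 data blocks (40%).

```java
// Quick check of the storage-overhead arithmetic: extra bytes stored per
// byte of user data, for replication vs. erasure coding.
class StorageOverhead {
    static double replication(int replicas) {
        return replicas - 1.0;                     // extra full copies
    }

    static double erasureCoding(int dataBlocks, int parityBlocks) {
        return (double) parityBlocks / dataBlocks; // parity per data block
    }
}
```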



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15988) Stabilise HDFS Pre-Commit

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15988?focusedWorklogId=587003&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-587003
 ]

ASF GitHub Bot logged work on HDFS-15988:
-

Author: ASF GitHub Bot
Created on: 22/Apr/21 02:14
Start Date: 22/Apr/21 02:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2860:
URL: https://github.com/apache/hadoop/pull/2860#issuecomment-824488319


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   4m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   4m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 14s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/17/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 177 unchanged - 1 fixed = 
178 total (was 178)  |
   | +1 :green_heart: |  hadolint  |   0m  3s |  |  No new issues.  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  2s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 26s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 233m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/17/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  18m 19s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 384m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/17/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Work started] (HDFS-15993) INodesInPath#toString() will throw AssertionError.

2021-04-21 Thread zhanghuazong (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15993 started by zhanghuazong.
---
> INodesInPath#toString() will throw AssertionError.
> --
>
> Key: HDFS-15993
> URL: https://issues.apache.org/jira/browse/HDFS-15993
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: zhanghuazong
>Assignee: zhanghuazong
>Priority: Major
>
> In the case of a snapshot, INodesInpath#toString() will throw an 
> AssertionError



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15993) INodesInPath#toString() will throw AssertionError.

2021-04-21 Thread zhanghuazong (Jira)
zhanghuazong created HDFS-15993:
---

 Summary: INodesInPath#toString() will throw AssertionError.
 Key: HDFS-15993
 URL: https://issues.apache.org/jira/browse/HDFS-15993
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: zhanghuazong
Assignee: zhanghuazong


In the case of a snapshot, INodesInpath#toString() will throw an AssertionError



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327041#comment-17327041
 ] 

Wei-Chiu Chuang commented on HDFS-15850:


We should get HADOOP-17079 to branch-3.3 too. I'll look into that one.

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.branch-3.3.001.patch, HDFS-15850.v1.patch, 
> HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks or actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 
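The shape of the proposed fix can be illustrated with a hypothetical enforcer interface. All names below are made up for illustration; the real AccessControlEnforcer API and the actual new method's signature are in the attached patches, not here.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: route superuser checks through the same pluggable
// enforcer that handles ordinary permission checks, so an external system
// such as Ranger sees them in its audit trail. Not the real Hadoop API.
interface AuditingEnforcer {
    void checkSuperUserPermission(String user, String operation);
}

class RecordingEnforcer implements AuditingEnforcer {
    final List<String> audit = new ArrayList<>();

    @Override
    public void checkSuperUserPermission(String user, String operation) {
        // A real enforcer would authorize and log externally; we just record.
        audit.add(user + ":" + operation);
    }
}
```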



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2021-04-21 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326996#comment-17326996
 ] 

Ctest commented on HDFS-15250:
--

Hello [~sodonnell]

Sorry, we didn't keep the stack trace of this issue.

All I remember is that we set `dfs.client.use.datanode.hostname` to true and 
set the hostname of the datanode incorrectly, which triggered the exception.

I think the system throws the correct exception here, but probably needs to 
handle it better.

 

> Setting `dfs.client.use.datanode.hostname` to true can crash the system 
> because of unhandled UnresolvedAddressException
> ---
>
> Key: HDFS-15250
> URL: https://issues.apache.org/jira/browse/HDFS-15250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15250-001.patch, HDFS-15250-002.patch
>
>
> *Problem:*
> `dfs.client.use.datanode.hostname` by default is set to false, which means 
> the client will use the IP address of the datanode to connect to the 
> datanode, rather than the hostname of the datanode.
> In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:
>  
> {code:java}
>  try {
>    Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
>    datanode);
>    LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
>    return new BlockReaderPeer(peer, false);
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  }
> {code}
>  
> If `dfs.client.use.datanode.hostname` is false, then it will try to connect 
> via IP address. If the IP address is illegal and the connection fails, 
> IOException will be thrown from `newConnectedPeer` and be handled.
> If `dfs.client.use.datanode.hostname` is true, then it will try to connect 
> via hostname. If the hostname cannot be resolved, UnresolvedAddressException 
> will be thrown from `newConnectedPeer`. However, UnresolvedAddressException 
> is not a subclass of IOException so `nextTcpPeer` doesn’t handle this 
> exception at all. This unhandled exception could crash the system.
>  
> *Solution:*
> Since the method already handles an illegal IP address, an unresolvable 
> hostname should be handled as well. One solution is to add the handling 
> logic in `nextTcpPeer`:
> {code:java}
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  } catch (UnresolvedAddressException e) {
>    ... // handling logic 
>  }{code}
> I am very happy to provide a patch to do this.
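One possible shape of that handling logic, sketched outside the real BlockReaderFactory (the class name and the connect() stand-in for remotePeerFactory.newConnectedPeer are illustrative): wrap the unchecked UnresolvedAddressException in an IOException, the checked type existing callers already handle.

```java
import java.io.IOException;
import java.nio.channels.UnresolvedAddressException;

// Sketch of the proposed fix: translate the unchecked exception thrown on an
// unresolvable hostname into the IOException path callers already retry on.
class PeerConnector {
    static String connect(boolean resolvable) throws IOException {
        try {
            if (!resolvable) {
                // Simulates SocketChannel.connect() on an unresolved hostname.
                throw new UnresolvedAddressException();
            }
            return "peer";
        } catch (UnresolvedAddressException e) {
            // One possible "handling logic": convert to the checked type so
            // the failure is handled like an illegal IP address.
            throw new IOException("Cannot resolve datanode hostname", e);
        }
    }
}
```

Translating the unchecked exception into an IOException lets the retry/fallback paths that already exist for connection failures treat a bad hostname the same way they treat a bad IP address.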



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-15850:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.branch-3.3.001.patch, HDFS-15850.v1.patch, 
> HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks or actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2021-04-21 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326902#comment-17326902
 ] 

Stephen O'Donnell commented on HDFS-15250:
--

I am reviewing some backports and came across this one. As a couple of people 
have stated, the change here does not seem to fix anything. Does anyone have a 
stack trace from an occurrence of this error, so we can see exactly where it 
fails?

> Setting `dfs.client.use.datanode.hostname` to true can crash the system 
> because of unhandled UnresolvedAddressException
> ---
>
> Key: HDFS-15250
> URL: https://issues.apache.org/jira/browse/HDFS-15250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15250-001.patch, HDFS-15250-002.patch
>
>
> *Problem:*
> `dfs.client.use.datanode.hostname` by default is set to false, which means 
> the client will use the IP address of the datanode to connect to the 
> datanode, rather than the hostname of the datanode.
> In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:
>  
> {code:java}
>  try {
>    Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
>    datanode);
>    LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
>    return new BlockReaderPeer(peer, false);
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  }
> {code}
>  
> If `dfs.client.use.datanode.hostname` is false, then it will try to connect 
> via IP address. If the IP address is illegal and the connection fails, 
> IOException will be thrown from `newConnectedPeer` and be handled.
> If `dfs.client.use.datanode.hostname` is true, then it will try to connect 
> via hostname. If the hostname cannot be resolved, UnresolvedAddressException 
> will be thrown from `newConnectedPeer`. However, UnresolvedAddressException 
> is not a subclass of IOException so `nextTcpPeer` doesn’t handle this 
> exception at all. This unhandled exception could crash the system.
>  
> *Solution:*
> Since the method already handles an illegal IP address, an unresolvable 
> hostname should be handled as well. One solution is to add the handling 
> logic in `nextTcpPeer`:
> {code:java}
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  } catch (UnresolvedAddressException e) {
>    ... // handling logic 
>  }{code}
> I am very happy to provide a patch to do this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15988) Stabilise HDFS Pre-Commit

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15988?focusedWorklogId=586851&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586851
 ]

ASF GitHub Bot logged work on HDFS-15988:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 19:52
Start Date: 21/Apr/21 19:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2860:
URL: https://github.com/apache/hadoop/pull/2860#issuecomment-824315947


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2860/17/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586851)
Time Spent: 1.5h  (was: 1h 20m)

> Stabilise HDFS Pre-Commit
> -
>
> Key: HDFS-15988
> URL: https://issues.apache.org/jira/browse/HDFS-15988
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Fix a couple of unit tests:
> TestRouterRpc
> TestRouterRpcMultiDest
> TestNestedSnapshots
> TestPersistBlocks
> TestDirectoryScanner
>  * Increase Maven OPTS, remove timeouts from a couple of tests, and add a 
> retry option for flaky tests to the build, so as to make the build a little 
> more stable



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326759#comment-17326759
 ] 

Hadoop QA commented on HDFS-15850:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 
52s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} branch-3.3 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 62m 
 2s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 56s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 23m 
31s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
12s{color} | {color:green}{color} | {color:green} branch-3.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green}{color} | {color:green} 
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 497 unchanged - 6 
fixed = 497 total (was 503) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 23s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
20s{color} | {color:green}{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}203m 31s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/580/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:red} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} The patch does not generate 
ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}344m 29s{color} | 
{color:black}{color} | {color:black}{color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Work logged] (HDFS-15865) Interrupt DataStreamer thread

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15865?focusedWorklogId=586735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586735
 ]

ASF GitHub Bot logged work on HDFS-15865:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 16:48
Start Date: 21/Apr/21 16:48
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #2728:
URL: https://github.com/apache/hadoop/pull/2728#discussion_r617717300



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##
@@ -895,6 +895,8 @@ void waitForAckedSeqno(long seqno) throws IOException {
 try (TraceScope ignored = dfsClient.getTracer().
 newScope("waitForAckedSeqno")) {
   LOG.debug("{} waiting for ack for: {}", this, seqno);
+  int dnodes = nodes != null ? nodes.length : 3;
+  int writeTimeout = dfsClient.getDatanodeWriteTimeout(dnodes);

Review comment:
   Thanks - I guess we can go ahead with this change, but even with it, the 
HS2 may well hang for 8+ minutes. It's hard to know for sure without knowing 
why this problem caused the whole instance to hang.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586735)
Time Spent: 1h 40m  (was: 1.5h)

> Interrupt DataStreamer thread
> -
>
> Key: HDFS-15865
> URL: https://issues.apache.org/jira/browse/HDFS-15865
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Karthik Palanisamy
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I have noticed HiveServer2 halt due to DataStreamer#waitForAckedSeqno. 
> I think we have to interrupt the DataStreamer if no packet ack arrives from 
> the datanodes. This likely happens with infra/network issues.
> {code:java}
> "HiveServer2-Background-Pool: Thread-35977576" #35977576 prio=5 os_prio=0 
> cpu=797.65ms elapsed=3406.28s tid=0x7fc0c6c29800 nid=0x4198 in 
> Object.wait()  [0x7fc1079f3000]
>     java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at java.lang.Object.wait(java.base(at)11.0.5/Native Method)
>  - waiting on 
>  at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:886)
>  - waiting to re-lock in wait() <0x7fe6eda86ca0> (a 
> java.util.LinkedList){code}
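One way to avoid the indefinite hang shown in the stack trace is to bound the wait. The toy below is a self-contained sketch, not the real DataStreamer code; the actual change under review derives its timeout from the datanode write timeout, and all names here are illustrative.

```java
// Toy version of waitForAckedSeqno with a deadline: instead of an unbounded
// Object.wait(), give up once no ack has arrived within timeoutMs, so the
// caller can abort or interrupt rather than block forever.
class AckWaiter {
    private long lastAckedSeqno = -1;

    synchronized boolean waitForAckedSeqno(long seqno, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (lastAckedSeqno < seqno) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) {
                return false;    // timed out; caller can abort the stream
            }
            wait(remaining);     // woken by ack(), or times out
        }
        return true;
    }

    synchronized void ack(long seqno) {
        lastAckedSeqno = Math.max(lastAckedSeqno, seqno);
        notifyAll();
    }
}
```

A caller that sees the timeout can then interrupt or tear down the streamer instead of leaving a HiveServer2 thread parked in Object.wait().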



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads

2021-04-21 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326715#comment-17326715
 ] 

Takanobu Asanuma commented on HDFS-15967:
-

[~bpatel] Thanks for submitting the patch. Some comments on 
[^HDFS-15967.001.patch]:
 * The method name {{transferReplicaForPipelineRecovery}} in the log message 
may be redundant.
{noformat}
2021-04-22 01:38:35,350 [DataXceiver for client 
DFSClient_NONMAPREDUCE_2066906768_1 at /127.0.0.1:56769 [TRANSFER_BLOCK 
BP-463639783-192.168.3.25-1619023112754:blk_1073741825_1001]] DEBUG 
datanode.DataNode (DataNode.java:transferReplicaForPipelineRecovery(3129)) - 
transferReplicaForPipelineRecovery: Replica is being written!
{noformat}

 * 
{code:java}
  LOG.warn("Parent directory check failed; replica {} is " +
-  "not backed by a local file" + info);
+  "not backed by a local file", info);
{code}
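
A minimal, dependency-free illustration of why the one-line change above matters: in the original code the format string's {{{}}} placeholder is never filled (no argument is passed) and {{info}} is glued onto the end by string concatenation; in the patched form {{info}} is passed as a parameter and substituted into the placeholder. The toy `format` below stands in for SLF4J's single-argument substitution; it is a sketch, not the SLF4J implementation.

```java
public class LogParam {
    /** Toy stand-in for SLF4J's single-argument "{}" substitution. */
    static String format(String pattern, Object arg) {
        int i = pattern.indexOf("{}");
        if (i < 0) {
            return pattern;  // no placeholder: the argument is not rendered
        }
        return pattern.substring(0, i) + arg + pattern.substring(i + 2);
    }

    public static void main(String[] args) {
        Object info = "ReplicaInfo[blk_1073741825]";
        // Before the patch: "{}" stays literal and info is concatenated
        // onto the end with no separator.
        String before = "Parent directory check failed; replica {} is "
            + "not backed by a local file" + info;
        // After the patch: info fills the placeholder where it belongs.
        String after = format(
            "Parent directory check failed; replica {} is not backed by a local file",
            info);
        System.out.println(before);
        System.out.println(after);
    }
}
```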

> Improve the log for Short Circuit Local Reads
> -
>
> Key: HDFS-15967
> URL: https://issues.apache.org/jira/browse/HDFS-15967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15967.001.patch
>
>
> Improve the log for Short Circuit Local Reads 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15979) Move within EZ fails and cannot remove nested EZs

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15979?focusedWorklogId=586675=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586675
 ]

ASF GitHub Bot logged work on HDFS-15979:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 15:30
Start Date: 21/Apr/21 15:30
Worklog Time Spent: 10m 
  Work Description: daryn-sharp commented on a change in pull request #2919:
URL: https://github.com/apache/hadoop/pull/2919#discussion_r617656673



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNestedEncryptionZones.java
##
@@ -210,6 +214,80 @@ public void testNestedEZWithRoot() throws Exception {
 "File not in trash : " + nestedTrashFile, fs.exists(nestedTrashFile));
   }
 
+  @Test(timeout = 6)
+  public void testRenameBetweenEncryptionZones() throws Exception {
+String key1 = TOP_EZ_KEY;
+String key2 = NESTED_EZ_KEY;
+Path top = new Path("/dir");
+Path ez1 = new Path(top, "ez1");
+Path ez2 = new Path(top, "ez2");
+Path ez3 = new Path(top, "ez3");
+Path p = new Path(ez1, "file");
+fs.mkdirs(ez1, FsPermission.getDirDefault());
+fs.mkdirs(ez2, FsPermission.getDirDefault());
+fs.mkdirs(ez3, FsPermission.getDirDefault());
+fs.createEncryptionZone(ez1, key1);
+fs.createEncryptionZone(ez2, key2);
+fs.createEncryptionZone(ez3, key1);
+fs.create(p).close();
+
+// cannot rename between 2 EZs with different keys.
+try {
+  fs.rename(p, new Path(ez2, "file"));
+} catch (RemoteException re) {
+  Assert.assertEquals(
+  p + " can't be moved from encryption zone " + ez1 +
+  " to encryption zone " + ez2 + ".",
+  re.getMessage().split("\n")[0]);
+}
+// can rename between 2 EZs with the same key.
+Assert.assertTrue(fs.rename(p, new Path(ez3, "file")));
+  }
+
+  @Test(timeout = 6)
+  public void testRemoveEncryptionZoneWithAncestorKey() throws Exception {
+removeEZDirUnderAncestor(TOP_EZ_KEY);
+  }
+
+  @Test(timeout = 6)
+  public void testRemoveEncryptionZoneWithNoAncestorKey() throws Exception {
+removeEZDirUnderAncestor(null);
+  }
+
+  private void removeEZDirUnderAncestor(String parentKey) throws Exception {

Review comment:
   As further clarification, the use case for removing a nested EZ that 
shares the same key is: a user wants to test EZ on a subtree of a large 
directory, so they request an EZ on /big-dir/I-want-to-test-EZ/.  Once 
satisfied that it works, they request the EZ to be moved up to /big-dir to 
cover the entire tree.
   
   The current impl won't allow the EZ xattr on /big-dir/I-want-to-test-EZ to 
be removed – even though it shares the same key with the EZ on /big-dir.  It 
also won't allow moving files from /big-dir/I-want-to-test-EZ to other places 
in /big-dir – even though they share the same key.
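
The comparison being argued for can be sketched as follows: allow a rename between two encryption zones when their key *names* match, rather than requiring the same zone root (inode) or the same key version. All names here ({{ZoneInfo}}, the two check methods) are illustrative, not HDFS APIs; they just model the strict check the issue reports versus the relaxed key-name check.

```java
import java.util.Objects;

public class EzRenameCheck {
    static final class ZoneInfo {
        final long rootInodeId;  // unique per zone; never equal across zones
        final String keyName;    // what actually determines re-encryption
        ZoneInfo(long rootInodeId, String keyName) {
            this.rootInodeId = rootInodeId;
            this.keyName = keyName;
        }
    }

    /** Strict check (roughly the reported behavior): same zone only. */
    static boolean canRenameStrict(ZoneInfo src, ZoneInfo dst) {
        return src.rootInodeId == dst.rootInodeId;
    }

    /** Relaxed check: zones with the same key name need no re-encryption. */
    static boolean canRenameByKeyName(ZoneInfo src, ZoneInfo dst) {
        return Objects.equals(src.keyName, dst.keyName);
    }

    public static void main(String[] args) {
        ZoneInfo ez1 = new ZoneInfo(1001, "key1");
        ZoneInfo ez3 = new ZoneInfo(1003, "key1");  // different zone, same key
        System.out.println(canRenameStrict(ez1, ez3));     // false: rejected
        System.out.println(canRenameByKeyName(ez1, ez3));  // true: proposed
    }
}
```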




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586675)
Time Spent: 1h  (was: 50m)

> Move within EZ fails and cannot remove nested EZs
> -
>
> Key: HDFS-15979
> URL: https://issues.apache.org/jira/browse/HDFS-15979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15979.001.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Moving between EZ directories should work fine if the EZ key for the 
> directories is identical. If the key name is identical, then no 
> decrypt/re-encrypt is necessary.
> However, the rename operation checks more than the key name. It compares the 
> inode numbers (unique identifiers) of the source and dest dirs, which will 
> never be the same for two dirs, resulting in the cited failure. Note it also 
> incorrectly compares the key version.
> A related issue is that if an ancestor of an EZ shares the same key (i.e. 
> /projects/foo and /projects/foo/bar/blah both use the same key), files also 
> cannot be moved from the child to a parent dir, and the child EZ cannot be 
> removed even though it is now covered by the ancestor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15979) Move within EZ fails and cannot remove nested EZs

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15979?focusedWorklogId=58=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-58
 ]

ASF GitHub Bot logged work on HDFS-15979:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 15:20
Start Date: 21/Apr/21 15:20
Worklog Time Spent: 10m 
  Work Description: daryn-sharp commented on a change in pull request #2919:
URL: https://github.com/apache/hadoop/pull/2919#discussion_r617648020



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNestedEncryptionZones.java
##
@@ -210,6 +214,80 @@ public void testNestedEZWithRoot() throws Exception {
 "File not in trash : " + nestedTrashFile, fs.exists(nestedTrashFile));
   }
 
+  @Test(timeout = 6)
+  public void testRenameBetweenEncryptionZones() throws Exception {
+String key1 = TOP_EZ_KEY;
+String key2 = NESTED_EZ_KEY;
+Path top = new Path("/dir");
+Path ez1 = new Path(top, "ez1");
+Path ez2 = new Path(top, "ez2");
+Path ez3 = new Path(top, "ez3");
+Path p = new Path(ez1, "file");
+fs.mkdirs(ez1, FsPermission.getDirDefault());
+fs.mkdirs(ez2, FsPermission.getDirDefault());
+fs.mkdirs(ez3, FsPermission.getDirDefault());
+fs.createEncryptionZone(ez1, key1);
+fs.createEncryptionZone(ez2, key2);
+fs.createEncryptionZone(ez3, key1);
+fs.create(p).close();
+
+// cannot rename between 2 EZs with different keys.
+try {
+  fs.rename(p, new Path(ez2, "file"));
+} catch (RemoteException re) {
+  Assert.assertEquals(
+  p + " can't be moved from encryption zone " + ez1 +
+  " to encryption zone " + ez2 + ".",
+  re.getMessage().split("\n")[0]);
+}
+// can rename between 2 EZs with the same key.
+Assert.assertTrue(fs.rename(p, new Path(ez3, "file")));
+  }
+
+  @Test(timeout = 6)
+  public void testRemoveEncryptionZoneWithAncestorKey() throws Exception {
+removeEZDirUnderAncestor(TOP_EZ_KEY);
+  }
+
+  @Test(timeout = 6)
+  public void testRemoveEncryptionZoneWithNoAncestorKey() throws Exception {
+removeEZDirUnderAncestor(null);
+  }
+
+  private void removeEZDirUnderAncestor(String parentKey) throws Exception {

Review comment:
   AFAIK, nested EZs have always been supported, or at least have been for a 
long time – otherwise TestNestedEncryptionZones would not already exist, right? :)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 58)
Time Spent: 50m  (was: 40m)

> Move within EZ fails and cannot remove nested EZs
> -
>
> Key: HDFS-15979
> URL: https://issues.apache.org/jira/browse/HDFS-15979
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, hdfs
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15979.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Moving between EZ directories should work fine if the EZ key for the 
> directories is identical. If the key name is identical, then no 
> decrypt/re-encrypt is necessary.
> However, the rename operation checks more than the key name. It compares the 
> inode numbers (unique identifiers) of the source and dest dirs, which will 
> never be the same for two dirs, resulting in the cited failure. Note it also 
> incorrectly compares the key version.
> A related issue is that if an ancestor of an EZ shares the same key (i.e. 
> /projects/foo and /projects/foo/bar/blah both use the same key), files also 
> cannot be moved from the child to a parent dir, and the child EZ cannot be 
> removed even though it is now covered by the ancestor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data on the Web UI must be saved to the trash

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=586615=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586615
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 14:30
Start Date: 21/Apr/21 14:30
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-824107665


   @liuml07 could you take a look if you have some free cycles?
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586615)
Time Spent: 1h 40m  (was: 1.5h)

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only removed after the trash interval 
> elapses. Currently, data is removed from the system directly. [This behavior 
> should be the same as the CLI command.]
>  
> This can be helpful when the user accidentally deletes data from the Web UI.
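
A sketch of the trash behavior the feature asks for: instead of a direct delete, the Web UI handler would rename the path into the user's trash checkpoint, mirroring what the CLI does via org.apache.hadoop.fs.Trash. The path layout below follows the documented convention /user/&lt;user&gt;/.Trash/Current/&lt;original-path&gt;; the class and method names are illustrative, not the actual handler.

```java
public class WebUiTrash {
    /** Where a deleted absolute path would land in the user's trash. */
    static String trashPath(String user, String absolutePath) {
        // HDFS trash preserves the original absolute path under Current/.
        return "/user/" + user + "/.Trash/Current" + absolutePath;
    }

    public static void main(String[] args) {
        System.out.println(trashPath("bhavik", "/data/reports/q1.csv"));
        // -> /user/bhavik/.Trash/Current/data/reports/q1.csv
    }
}
```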



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326576#comment-17326576
 ] 

Viraj Jasani commented on HDFS-15989:
-

Thank you for the review, [~tasanuma]. All backport PRs have QA results available.

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests; it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky; we should 
> also resolve it with this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=586554=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586554
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 13:14
Start Date: 21/Apr/21 13:14
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-824051303


   > @tomscut Thanks for working on this. It makes sense to me. Some comments,
   > 
   > * Could you add unit tests? We may want to edit TestNameNodeMXBean.
   > * Router WebUI uses the location information. Could you do the same for 
NameNode WebUI?
   > 
   > Router WebUI (federationhealth.html)
   > 
![image](https://user-images.githubusercontent.com/11712443/115539064-7d5f5480-a2d7-11eb-97a0-78adb80a2158.png)
   > 
   > NameNode WebUI (dfshealth.html)
   > 
![image](https://user-images.githubusercontent.com/11712443/115539120-90722480-a2d7-11eb-8150-258d0ff5ac91.png)
   
   Thanks @tasanuma for your review and comment. I'll refine that ASAP.
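
The change under review amounts to one more attribute per datanode in the JMX map that NameNodeMXBean exposes, which the WebUI pages can then render. The helper and the key name "location" below are illustrative assumptions, not the actual NameNode code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DatanodeInfoMXBean {
    /** Build the attribute map reported for one datanode. */
    static Map<String, Object> datanodeEntry(String host, long capacity,
                                             String networkLocation) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("infoAddr", host + ":9864");  // illustrative DN http port
        attrs.put("capacity", capacity);
        // The proposed addition: expose the rack so UIs can display it.
        attrs.put("location", networkLocation);
        return attrs;
    }

    public static void main(String[] args) {
        Map<String, Object> dn =
            datanodeEntry("dn1.example.com", 8L << 40, "/rack1");
        System.out.println(dn.get("location"));  // /rack1
    }
}
```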
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586554)
Time Spent: 50m  (was: 40m)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586535=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586535
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 12:43
Start Date: 21/Apr/21 12:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2944:
URL: https://github.com/apache/hadoop/pull/2944#issuecomment-824030016


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.1 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 53s |  |  branch-3.1 passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  branch-3.1 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  branch-3.1 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  branch-3.1 passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  branch-3.1 passed  |
   | -1 :x: |  spotbugs  |   3m 10s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in branch-3.1 has 4 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  17m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  
hadoop-hdfs-project_hadoop-hdfs generated 0 new + 523 unchanged - 1 fixed = 523 
total (was 524)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 45s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 149 unchanged 
- 49 fixed = 159 total (was 198)  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 191m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 272m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   |   | hadoop.fs.TestHdfsNativeCodeLoader |
   |   | hadoop.hdfs.server.datanode.TestBlockRecovery |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2944 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux efd6f13129f6 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.1 / f75d068a6242538841b6c4aab8ea00cbbe6812e8 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/testReport/ |
   | Max. process+thread count | 1893 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2944/1/console |
   | versions | 

[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586518=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586518
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 12:12
Start Date: 21/Apr/21 12:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2942:
URL: https://github.com/apache/hadoop/pull/2942#issuecomment-824011637


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 20s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  
hadoop-hdfs-project_hadoop-hdfs generated 0 new + 537 unchanged - 1 fixed = 537 
total (was 538)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 38s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 149 unchanged 
- 49 fixed = 159 total (was 198)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 48s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 195m 35s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 280m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestMaintenanceState |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2942 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b29dc9edf09f 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 568b5758418502359a9d20c6694b0706202d50f3 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/1/testReport/ |
   | Max. process+thread count | 3572 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2942/1/console |
   | versions | 

[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586505=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586505
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 11:44
Start Date: 21/Apr/21 11:44
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2943:
URL: https://github.com/apache/hadoop/pull/2943#issuecomment-823996210


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 26s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  14m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  |  
hadoop-hdfs-project_hadoop-hdfs generated 0 new + 550 unchanged - 1 fixed = 550 
total (was 551)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 38s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 149 unchanged 
- 49 fixed = 159 total (was 198)  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 166m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 245m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2943 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux caa78d50075c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / a25179b14acbb87ac99a83160294cc4f7873f82e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/1/testReport/ |
   | Max. process+thread count | 2691 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2943/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15850:
-
Status: Patch Available  (was: Reopened)

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.branch-3.3.001.patch, HDFS-15850.v1.patch, 
> HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks or actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such external 
> enforcers are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 
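
A sketch of the extension the issue describes: a callback on the access-control enforcer so an external authorizer (e.g. Ranger) can record every superuser check in its audit trail. The interface and method shapes below are illustrative; the real AccessControlEnforcer is a nested interface of org.apache.hadoop.hdfs.server.namenode.INodeAttributeProvider and its actual signature may differ.

```java
import java.util.ArrayList;
import java.util.List;

public class SuperuserAudit {
    interface Enforcer {
        // Hypothetical hook: called for every superuser privilege check,
        // pass or fail, so external audit reports are complete.
        void checkSuperUserPermission(String user, String operation,
                                      boolean allowed);
    }

    /** Example external enforcer that records checks, Ranger-style. */
    static final class RecordingEnforcer implements Enforcer {
        final List<String> audit = new ArrayList<>();
        @Override
        public void checkSuperUserPermission(String user, String operation,
                                             boolean allowed) {
            audit.add(user + " " + operation + " "
                + (allowed ? "ALLOWED" : "DENIED"));
        }
    }

    public static void main(String[] args) {
        RecordingEnforcer e = new RecordingEnforcer();
        e.checkSuperUserPermission("hdfs", "setBalancerBandwidth", true);
        e.checkSuperUserPermission("alice", "setBalancerBandwidth", false);
        System.out.println(e.audit);
    }
}
```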



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15850:
-
Attachment: HDFS-15850.branch-3.3.001.patch

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.branch-3.3.001.patch, HDFS-15850.v1.patch, 
> HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such enforcers are 
> incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reopened HDFS-15850:
--

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.v1.patch, HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such enforcers are 
> incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-04-21 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326442#comment-17326442
 ] 

Stephen O'Donnell commented on HDFS-15850:
--

We should backport this to branch-3.3. I tried to cherry-pick it, but there is 
one conflict because HDFS-15217 is not on branch-3.3, in 
FSNameSystem.truncate(...). There are some questions around the performance of 
HDFS-15217, so I'd rather not backport it to branch-3.3 at this stage; it is 
better to fix the conflict instead.

Then I hit the compile error below, because HADOOP-17079 is not backported to 
branch-3.3:

{code}
[ERROR] 
/Users/sodonnell/source/upstream_hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeAttributeProvider.java:[425,20]
 cannot find symbol
[ERROR]   symbol:   method getGroupsSet()
[ERROR]   location: variable callerUgi of type 
org.apache.hadoop.security.UserGroupInformation
{code}

It would be good to backport HADOOP-17079 too, but it caused some issues that 
are still in progress, so we cannot backport it either.

I fixed the conflicts and uploaded a branch-3.3 patch for this change. Could 
you please review, especially these areas:

INodeAttributeProvider:
{code}
default void checkSuperUserPermissionWithContext(
AuthorizationContext authzContext)
throws AccessControlException {
  UserGroupInformation callerUgi = authzContext.getCallerUgi();
  boolean isSuperUser =
  callerUgi.getShortUserName().equals(authzContext.getFsOwner()) ||
  callerUgi.getGroups().contains(authzContext.getSupergroup());
  // This line changed from getGroupsSet() to getGroups()
  if (!isSuperUser) {
throw new AccessControlException("Access denied for user " +
callerUgi.getShortUserName() + ". Superuser privilege is " +
"required for operation " + authzContext.getOperationName());
  }
}
{code}

FSNameSystem around the truncate method at line 2233.

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15850.v1.patch, HDFS-15850.v2.patch
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports provided by such enforcers are 
> incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15982) Deleted data on the Web UI must be saved to the trash

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=586498=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586498
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 11:20
Start Date: 21/Apr/21 11:20
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-823984010


   Thanks for the review @bhavikpatel9977. Could you please take a look 
@tasanuma @aajisaka ?
   Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586498)
Time Spent: 1.5h  (was: 1h 20m)

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash interval 
> elapses. Currently, the data is removed from the system directly. [This 
> behavior should be the same as the CLI command.]
>  
> This can be helpful when the user accidentally deletes data from the Web UI.
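As an illustration of the proposed behavior (plain JDK NIO here, not the actual WebHdfs handler or Hadoop's Trash API), a delete can be turned into a move into a trash directory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TrashDelete {
    // Move a path into a trash directory instead of deleting it outright;
    // a periodic sweeper would then purge entries older than the interval.
    static Path moveToTrash(Path file, Path trashRoot) throws IOException {
        Files.createDirectories(trashRoot);
        Path target = trashRoot.resolve(file.getFileName().toString());
        return Files.move(file, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path f = Files.createFile(dir.resolve("data.txt"));
        Path trashed = moveToTrash(f, dir.resolve(".Trash"));
        System.out.println(Files.exists(f));       // false: gone from original location
        System.out.println(Files.exists(trashed)); // true: recoverable from trash
    }
}
```

In the real fix, the equivalent would be routing the Web UI delete through Hadoop's existing trash support, matching what the CLI does when fs.trash.interval is enabled.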



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=586485=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586485
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:59
Start Date: 21/Apr/21 10:59
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-823972762


   Could you rebase on branch-3.3 and force-push it to clean up the commit log?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586485)
Time Spent: 5h  (was: 4h 50m)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> When counting some metrics, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.
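A minimal, self-contained sketch of the trade-off (plain JDK, no Hadoop code): LongAdder spreads increments across per-thread cells and only sums them on read, while AtomicLong makes every writer CAS the same word.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();    // per-thread cells, summed on read
        AtomicLong atomic = new AtomicLong(); // every writer CASes one word

        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    adder.increment();
                    atomic.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }

        // Once all writers have finished, sum() is exact; it is only a
        // non-atomic snapshot while increments are still in flight.
        System.out.println(adder.sum());  // 400000
        System.out.println(atomic.get()); // 400000
    }
}
```

This is why LongAdder suits write-heavy, read-rarely counters such as metrics: the "not an atomic snapshot" caveat only matters if readers need an exact value mid-update.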



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-15989.
-
Resolution: Fixed

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests; it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should 
> also resolve that with this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586481=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586481
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:47
Start Date: 21/Apr/21 10:47
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2923:
URL: https://github.com/apache/hadoop/pull/2923#issuecomment-823966167


   Merged. Thanks for your contribution, @virajjasani. Thanks for your review, 
@aajisaka.
   
   I will review the PRs for the other branches.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586481)
Time Spent: 5.5h  (was: 5h 20m)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests; it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should 
> also resolve that with this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15989:

Fix Version/s: 3.4.0

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests; it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should 
> also resolve that with this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586479=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586479
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:46
Start Date: 21/Apr/21 10:46
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #2923:
URL: https://github.com/apache/hadoop/pull/2923


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586479)
Time Spent: 5h 20m  (was: 5h 10m)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests; it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky. We should 
> also resolve that with this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15991) Add location into datanode info for NameNodeMXBean

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15991?focusedWorklogId=586474=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586474
 ]

ASF GitHub Bot logged work on HDFS-15991:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:43
Start Date: 21/Apr/21 10:43
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2933:
URL: https://github.com/apache/hadoop/pull/2933#issuecomment-823964136


   @tomscut Thanks for working on this. It makes sense to me. Some comments,
   
   * Could you add unit tests? We may want to edit TestNameNodeMXBean.
   
   * Router WebUI uses the location information. Could you do the same for 
NameNode WebUI?
   
   Router WebUI (federationhealth.html)
   
![image](https://user-images.githubusercontent.com/11712443/115539064-7d5f5480-a2d7-11eb-97a0-78adb80a2158.png)
   
   NameNode WebUI (dfshealth.html)
   
![image](https://user-images.githubusercontent.com/11712443/115539120-90722480-a2d7-11eb-8150-258d0ff5ac91.png)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586474)
Time Spent: 40m  (was: 0.5h)

> Add location into datanode info for NameNodeMXBean
> --
>
> Key: HDFS-15991
> URL: https://issues.apache.org/jira/browse/HDFS-15991
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Add location into datanode info for NameNodeMXBean.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15963) Unreleased volume references cause an infinite loop

2021-04-21 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326431#comment-17326431
 ] 

Xiaoqiao He commented on HDFS-15963:


Committed to branch-3.3 via https://github.com/apache/hadoop/pull/2941
Thanks [~zhangshuyan].

> Unreleased volume references cause an infinite loop
> ---
>
> Key: HDFS-15963
> URL: https://issues.apache.org/jira/browse/HDFS-15963
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15963.001.patch, HDFS-15963.002.patch, 
> HDFS-15963.003.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When BlockSender throws an exception because the meta-data cannot be found, 
> the volume reference obtained by the thread is not released, which causes the 
> thread trying to remove the volume to wait and fall into an infinite loop.
> {code:java}
> boolean checkVolumesRemoved() {
>   Iterator<FsVolumeImpl> it = volumesBeingRemoved.iterator();
>   while (it.hasNext()) {
> FsVolumeImpl volume = it.next();
> if (!volume.checkClosed()) {
>   return false;
> }
> it.remove();
>   }
>   return true;
> }
> boolean checkClosed() {
>   // always be true.
>   if (this.reference.getReferenceCount() > 0) {
> FsDatasetImpl.LOG.debug("The reference count for {} is {}, wait to be 0.",
> this, reference.getReferenceCount());
> return false;
>   }
>   return true;
> }
> {code}
> At the same time, because the thread has been holding checkDirsLock when 
> removing the volume, other threads trying to acquire the same lock will be 
> permanently blocked.
> Similar problems also occur in RamDiskAsyncLazyPersistService and 
> FsDatasetAsyncDiskService.
> This patch releases the three previously unreleased volume references.
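The fix pattern can be sketched with a toy reference counter (class and method names here are illustrative, not the real FsVolumeImpl): release the reference on the failure path, e.g. via try-with-resources, so the count can reach zero and the remover's loop can terminate.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolumeRefDemo {
    // Toy stand-in for a reference-counted volume; names are illustrative.
    static class VolumeRef implements AutoCloseable {
        final AtomicInteger refCount = new AtomicInteger();

        VolumeRef open() {
            refCount.incrementAndGet();
            return this;
        }

        @Override
        public void close() {
            refCount.decrementAndGet();
        }

        // The remover spins until this returns true, so a leaked
        // reference blocks volume removal forever.
        boolean checkClosed() {
            return refCount.get() == 0;
        }
    }

    public static void main(String[] args) {
        VolumeRef ref = new VolumeRef();
        // try-with-resources releases the reference even when the
        // read path throws (e.g. missing block meta-data).
        try (VolumeRef r = ref.open()) {
            throw new RuntimeException("meta-data not found");
        } catch (RuntimeException expected) {
            // Failure path: close() already ran and dropped the reference.
        }
        System.out.println(ref.checkClosed()); // true
    }
}
```

Without the release on the exception path, the count would stay above zero and checkClosed() would never return true, which is exactly the hang described above.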



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15963) Unreleased volume references cause an infinite loop

2021-04-21 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15963:
---
Fix Version/s: 3.3.1

> Unreleased volume references cause an infinite loop
> ---
>
> Key: HDFS-15963
> URL: https://issues.apache.org/jira/browse/HDFS-15963
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15963.001.patch, HDFS-15963.002.patch, 
> HDFS-15963.003.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When BlockSender throws an exception because the meta-data cannot be found, 
> the volume reference obtained by the thread is not released, which causes the 
> thread trying to remove the volume to wait and fall into an infinite loop.
> {code:java}
> boolean checkVolumesRemoved() {
>   Iterator<FsVolumeImpl> it = volumesBeingRemoved.iterator();
>   while (it.hasNext()) {
> FsVolumeImpl volume = it.next();
> if (!volume.checkClosed()) {
>   return false;
> }
> it.remove();
>   }
>   return true;
> }
> boolean checkClosed() {
>   // always be true.
>   if (this.reference.getReferenceCount() > 0) {
> FsDatasetImpl.LOG.debug("The reference count for {} is {}, wait to be 0.",
> this, reference.getReferenceCount());
> return false;
>   }
>   return true;
> }
> {code}
> At the same time, because the thread has been holding checkDirsLock when 
> removing the volume, other threads trying to acquire the same lock will be 
> permanently blocked.
> Similar problems also occur in RamDiskAsyncLazyPersistService and 
> FsDatasetAsyncDiskService.
> This patch releases the three previously unreleased volume references.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15963) Unreleased volume references cause an infinite loop

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15963?focusedWorklogId=586463=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586463
 ]

ASF GitHub Bot logged work on HDFS-15963:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:33
Start Date: 21/Apr/21 10:33
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao merged pull request #2941:
URL: https://github.com/apache/hadoop/pull/2941


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586463)
Time Spent: 4.5h  (was: 4h 20m)

> Unreleased volume references cause an infinite loop
> ---
>
> Key: HDFS-15963
> URL: https://issues.apache.org/jira/browse/HDFS-15963
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-15963.001.patch, HDFS-15963.002.patch, 
> HDFS-15963.003.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> When BlockSender throws an exception because the meta-data cannot be found, 
> the volume reference obtained by the thread is not released, which causes the 
> thread trying to remove the volume to wait and fall into an infinite loop.
> {code:java}
> boolean checkVolumesRemoved() {
>   Iterator<FsVolumeImpl> it = volumesBeingRemoved.iterator();
>   while (it.hasNext()) {
> FsVolumeImpl volume = it.next();
> if (!volume.checkClosed()) {
>   return false;
> }
> it.remove();
>   }
>   return true;
> }
> boolean checkClosed() {
>   // always be true.
>   if (this.reference.getReferenceCount() > 0) {
> FsDatasetImpl.LOG.debug("The reference count for {} is {}, wait to be 0.",
> this, reference.getReferenceCount());
> return false;
>   }
>   return true;
> }
> {code}
> At the same time, because the thread has been holding checkDirsLock when 
> removing the volume, other threads trying to acquire the same lock will be 
> permanently blocked.
> Similar problems also occur in RamDiskAsyncLazyPersistService and 
> FsDatasetAsyncDiskService.
> This patch releases the three previously unreleased volume references.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=586457=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586457
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:26
Start Date: 21/Apr/21 10:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-823954819


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 32s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  17m  9s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 45s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   4m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m  8s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   8m  2s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  19m  5s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   2m 36s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 5 new + 243 unchanged - 5 fixed = 248 total (was 
248)  |
   | +1 :green_heart: |  mvnsite  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   8m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 35s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 185m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 370m 55s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2940/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2940 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 5e9773e8c1f5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / c9772e37b331850419680bc45fa693b3d1c965ce |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 

[jira] [Work logged] (HDFS-15963) Unreleased volume references cause an infinite loop

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15963?focusedWorklogId=586442=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586442
 ]

ASF GitHub Bot logged work on HDFS-15963:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 10:09
Start Date: 21/Apr/21 10:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2941:
URL: https://github.com/apache/hadoop/pull/2941#issuecomment-823945251


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  9s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 12s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  19m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 240m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2941/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 329m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithValidator |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2941/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2941 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 950af898b8fb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / b2cd7e4fd7f0eef97555be6292a788f1428d892e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2941/1/testReport/ |
   | Max. process+thread count | 3191 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2941/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Updated] (HDFS-15968) Improve the log for The DecayRpcScheduler

2021-04-21 Thread Bhavik Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhavik Patel updated HDFS-15968:

Status: Patch Available  (was: Open)

> Improve the log for The DecayRpcScheduler 
> --
>
> Key: HDFS-15968
> URL: https://issues.apache.org/jira/browse/HDFS-15968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15968.001.patch
>
>
> Improve the log for the DecayRpcScheduler to make use of the SLF4J logger 
> factory
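The quoted request — moving DecayRpcScheduler onto the SLF4J logger factory — follows a standard pattern. A minimal sketch, assuming only `slf4j-api` on the classpath; the class name and log messages are illustrative, not the actual patch:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative class: shows the SLF4J factory pattern the Jira asks for,
// not the real DecayRpcScheduler change.
public class DecayRpcSchedulerLogSketch {
  // Obtain the logger through the SLF4J facade instead of a concrete backend.
  private static final Logger LOG =
      LoggerFactory.getLogger(DecayRpcSchedulerLogSketch.class);

  void decay(double decayFactor) {
    // Parameterized messages defer string construction until the level is enabled.
    LOG.debug("Decaying RPC counts with factor {}", decayFactor);
  }
}
```

The `{}` placeholders avoid string concatenation when the level is disabled, which matters on hot RPC paths.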



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15968) Improve the log for The DecayRpcScheduler

2021-04-21 Thread Bhavik Patel (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bhavik Patel updated HDFS-15968:

Status: Open  (was: Patch Available)

> Improve the log for The DecayRpcScheduler 
> --
>
> Key: HDFS-15968
> URL: https://issues.apache.org/jira/browse/HDFS-15968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15968.001.patch
>
>
> Improve the log for the DecayRpcScheduler to make use of the SLF4J logger 
> factory






[jira] [Commented] (HDFS-15967) Improve the log for Short Circuit Local Reads

2021-04-21 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17326370#comment-17326370
 ] 

Bhavik Patel commented on HDFS-15967:
-

[~tasanuma] [~hemanthboyina] Can you please review?

> Improve the log for Short Circuit Local Reads
> -
>
> Key: HDFS-15967
> URL: https://issues.apache.org/jira/browse/HDFS-15967
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15967.001.patch
>
>
> Improve the log for Short Circuit Local Reads 






[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586403&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586403
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 08:11
Start Date: 21/Apr/21 08:11
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2923:
URL: https://github.com/apache/hadoop/pull/2923#issuecomment-823870485


   Created all backport PRs: #2942 #2943 #2944 




Issue Time Tracking
---

Worklog Id: (was: 586403)
Time Spent: 5h 10m  (was: 5h)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests, so it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky; we should 
> also resolve it with this Jira.






[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586402&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586402
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 08:10
Start Date: 21/Apr/21 08:10
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2944:
URL: https://github.com/apache/hadoop/pull/2944


   




Issue Time Tracking
---

Worklog Id: (was: 586402)
Time Spent: 5h  (was: 4h 50m)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests, so it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky; we should 
> also resolve it with this Jira.






[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586379&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586379
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 07:38
Start Date: 21/Apr/21 07:38
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2943:
URL: https://github.com/apache/hadoop/pull/2943


   




Issue Time Tracking
---

Worklog Id: (was: 586379)
Time Spent: 4h 50m  (was: 4h 40m)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests, so it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky; we should 
> also resolve it with this Jira.






[jira] [Work logged] (HDFS-15989) Split TestBalancer into two classes

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15989?focusedWorklogId=586374&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586374
 ]

ASF GitHub Bot logged work on HDFS-15989:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 07:29
Start Date: 21/Apr/21 07:29
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2942:
URL: https://github.com/apache/hadoop/pull/2942


   




Issue Time Tracking
---

Worklog Id: (was: 586374)
Time Spent: 4h 40m  (was: 4.5h)

> Split TestBalancer into two classes
> ---
>
> Key: HDFS-15989
> URL: https://issues.apache.org/jira/browse/HDFS-15989
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> TestBalancer has accumulated many tests, so it would be good to split it into 
> two classes. Moreover, TestBalancer#testMaxIterationTime is flaky; we should 
> also resolve it with this Jira.






[jira] [Work logged] (HDFS-15982) Deleted data on the Web UI must be saved to the trash

2021-04-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?focusedWorklogId=586355=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-586355
 ]

ASF GitHub Bot logged work on HDFS-15982:
-

Author: ASF GitHub Bot
Created on: 21/Apr/21 06:27
Start Date: 21/Apr/21 06:27
Worklog Time Spent: 10m 
  Work Description: bhavikpatel9977 commented on pull request #2927:
URL: https://github.com/apache/hadoop/pull/2927#issuecomment-823811467


   LGTM (Verified on my local cluster)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 586355)
Time Spent: 1h 20m  (was: 1h 10m)

> Deleted data on the Web UI must be saved to the trash 
> --
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and removed only after the trash interval 
> elapses. Currently, data is removed from the system directly [this behavior 
> should match the CLI command].
>  
> This can be helpful when the user accidentally deletes data from the Web UI.
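The behavior described above — trash first, hard delete only as a fallback — matches what the CLI `rm` path does through Hadoop's `Trash` API. A minimal sketch, assuming a hypothetical server-side handler (`deleteWithTrash` is an illustrative name, not the actual patch):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

// Illustrative handler: sketches the trash-aware delete the Jira asks for.
public class WebUiDeleteSketch {
  static boolean deleteWithTrash(FileSystem fs, Path path, Configuration conf)
      throws IOException {
    // Move the path to the appropriate trash directory first; this returns
    // false when trash is disabled (fs.trash.interval = 0).
    if (Trash.moveToAppropriateTrash(fs, path, conf)) {
      return true;
    }
    // Fall back to a direct recursive delete only when trash is off.
    return fs.delete(path, true);
  }
}
```

Routing the Web UI delete through the same `Trash` helper as `FsShell` keeps the two code paths consistent, so the trash-interval expiry applies uniformly.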


