[jira] [Comment Edited] (HDFS-15968) Improve the log for The DecayRpcScheduler

2021-04-27 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17334463#comment-17334463
 ] 

Bhavik Patel edited comment on HDFS-15968 at 4/28/21, 5:43 AM:
---

[~aajisaka] I have updated the Jira status a couple of times, but the Jenkins job 
is not getting triggered. Can you please help me with this? 

Thanks.


was (Author: bpatel):
[~aajisaka] I have updated the Jira status a couple of times, but the Jenkins job 
is not getting triggered. Can you please help me with this? 

> Improve the log for The DecayRpcScheduler 
> --
>
> Key: HDFS-15968
> URL: https://issues.apache.org/jira/browse/HDFS-15968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15968.001.patch
>
>
> Improve the log for the DecayRpcScheduler to make use of the SLF4J logger 
> factory.
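
As a hedged illustration of the proposed change, switching the class to the SLF4J
LoggerFactory typically looks like the sketch below; the class and message names are
illustrative and not taken from the actual patch.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DecayRpcSchedulerLoggingSketch {
  // Obtain the logger through the SLF4J factory rather than a concrete logging backend.
  private static final Logger LOG =
      LoggerFactory.getLogger(DecayRpcSchedulerLoggingSketch.class);

  void recordDecay(String caller, double priority) {
    // Parameterized messages avoid building the string when the level is disabled.
    LOG.debug("Decayed priority for {} is now {}", caller, priority);
  }
}
{code}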



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15969) DFSClient prints token information in a string format

2021-04-27 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17334465#comment-17334465
 ] 

Bhavik Patel commented on HDFS-15969:
-

[~tasanuma] can you please review. Thank you

> DFSClient prints token information in a string format 
> ---
>
> Key: HDFS-15969
> URL: https://issues.apache.org/jira/browse/HDFS-15969
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15969.001.patch
>
>
> DFSClient prints token information in a string format. As this is sensitive 
> information, it must be moved to debug level, or could even be exempted from 
> debug level.
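
A minimal sketch of keeping the token string out of normal logs, assuming an SLF4J
logger named LOG; the class and method names are illustrative, not from the actual patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TokenLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(TokenLoggingSketch.class);

  void logToken(Object token) {
    // Guarding with isDebugEnabled() keeps the sensitive token string out of
    // INFO-level logs and avoids building the message unless debug is on.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Using delegation token: {}", token);
    }
  }
}
{code}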



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15968) Improve the log for The DecayRpcScheduler

2021-04-27 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17334463#comment-17334463
 ] 

Bhavik Patel commented on HDFS-15968:
-

[~aajisaka] I have updated the Jira status a couple of times, but the Jenkins job 
is not getting triggered. Can you please help me with this? 

> Improve the log for The DecayRpcScheduler 
> --
>
> Key: HDFS-15968
> URL: https://issues.apache.org/jira/browse/HDFS-15968
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bhavik Patel
>Assignee: Bhavik Patel
>Priority: Minor
> Attachments: HDFS-15968.001.patch
>
>
> Improve the log for the DecayRpcScheduler to make use of the SLF4J logger 
> factory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=590102=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590102
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 28/Apr/21 01:39
Start Date: 28/Apr/21 01:39
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-828075377


   > The failed tests are not related. 
(TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight fails 
with/without this PR. I will cherry-pick 
[HDFS-15378](https://issues.apache.org/jira/browse/HDFS-15378).)
   
   I also found this problem, but I haven't had time to analyze why. Thanks 
@tasanuma for your help. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590102)
Time Spent: 6h 10m  (was: 6h)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> When counting some indicators, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.
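
For context, a minimal sketch of the AtomicLong-vs-LongAdder trade-off described above
(counter names are illustrative; this is not code from the patch):

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterSketch {
  private final AtomicLong atomicCounter = new AtomicLong();
  private final LongAdder adderCounter = new LongAdder();

  void onEvent() {
    // AtomicLong funnels every update through a single CAS-contended cell.
    atomicCounter.incrementAndGet();
    // LongAdder stripes updates across cells, so heavy concurrent increments scale better.
    adderCounter.increment();
  }

  long read() {
    // sum() folds the cells together; it is not an atomic snapshot,
    // which is the tolerated trade-off mentioned in the description.
    return adderCounter.sum();
  }
}
{code}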



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=590029=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590029
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 21:11
Start Date: 27/Apr/21 21:11
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#issuecomment-827933648


   @goiri  Updated the PR title to align with the JIRA title.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590029)
Time Spent: 3h 10m  (was: 3h)

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-26 at 5.25.12 PM.png, 
> image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The Long type fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!
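
For context, summing many per-namespace capacities into a long can silently overflow; one
common guard is to accumulate in a BigInteger, as in the hedged sketch below (names are
illustrative and this is not necessarily the approach taken in the PR):

{code:java}
import java.math.BigInteger;
import java.util.List;

public class CapacityAggregationSketch {
  // Accumulating in BigInteger avoids silent long overflow during the sum.
  static BigInteger totalCapacity(List<Long> capacities) {
    BigInteger total = BigInteger.ZERO;
    for (long c : capacities) {
      total = total.add(BigInteger.valueOf(c));
    }
    return total;
  }
}
{code}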



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=590028=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590028
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 21:07
Start Date: 27/Apr/21 21:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#issuecomment-827931441


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  23m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 56s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  21m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  19m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  22m  2s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 251m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2910/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2910 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
   | uname | Linux 7d1a7396c0f0 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 666eb74cdd23ee4c0cffbd771d4d11ccd79e3920 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2910/4/testReport/ |
   | Max. process+thread count | 1993 (vs. ulimit 

[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=590010=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-590010
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 20:15
Start Date: 27/Apr/21 20:15
Worklog Time Spent: 10m 
  Work Description: goiri commented on pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#issuecomment-827900081


   Can we align the name of the JIRA and the PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 590010)
Time Spent: 2h 50m  (was: 2h 40m)

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-26 at 5.25.12 PM.png, 
> image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> The Long type fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15923) RBF: Authentication failed when rename across sub clusters

2021-04-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17333525#comment-17333525
 ] 

Íñigo Goiri commented on HDFS-15923:


I'm not a fan at all of the Thread.sleep(100);
Can we wait for the values to be ready properly?
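
For reference, Hadoop's test utilities provide a condition-based wait that is commonly used
instead of a fixed sleep; a minimal sketch, where the readiness check is a hypothetical
stand-in for whatever the test actually needs to observe:

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;
import org.apache.hadoop.test.GenericTestUtils;

public class WaitInsteadOfSleepSketch {
  // Poll the condition every 100 ms and give up after 10 s,
  // instead of a blind Thread.sleep(100).
  static void waitUntilReady(BooleanSupplier isStateReady)
      throws TimeoutException, InterruptedException {
    GenericTestUtils.waitFor(() -> isStateReady.getAsBoolean(), 100, 10_000);
  }
}
{code}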

> RBF:  Authentication failed when rename across sub clusters
> 
>
> Key: HDFS-15923
> URL: https://issues.apache.org/jira/browse/HDFS-15923
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: zhuobin zheng
>Assignee: zhuobin zheng
>Priority: Major
>  Labels: RBF, pull-request-available, rename
> Attachments: HDFS-15923.001.patch, HDFS-15923.002.patch, 
> HDFS-15923.stack-trace, hdfs-15923-fix-security-issue.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Renaming across subclusters with RBF in a Kerberos environment will encounter 
> the following two errors:
>  # Save Object to journal.
>  # Precheck tries to get the src file status.
> So, we need to use the Router login UGI (doAs) to create the DistcpProcedure and 
> TrashProcedure and submit the Job.
>  
> Besides, we should check the user's permissions for the src and dst paths on the 
> router side before doing the rename internally. (HDFS-15973)
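
(As a hedged sketch only, not the patch itself: submitting the procedures under the router's
login UGI would look roughly like the following, with the procedure construction elided. The
quoted description continues below with the stack traces of the two failures.)

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class RouterDoAsSketch {
  // Run the submission as the router's own login (Kerberos) identity, so the
  // journal write and the pre-check are not attempted with the caller's credentials.
  static void submitAsRouterLogin(Runnable submitJob)
      throws IOException, InterruptedException {
    UserGroupInformation loginUgi = UserGroupInformation.getLoginUser();
    loginUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      submitJob.run();
      return null;
    });
  }
}
{code}
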
> First: Save Object to journal.
> {code:java}
> // code placeholder
> 2021-03-23 14:01:16,233 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
> at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:408)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:622)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:413)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:822)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:818)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:818)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:413)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1636)
> at org.apache.hadoop.ipc.Client.call(Client.java:1452)
> at org.apache.hadoop.ipc.Client.call(Client.java:1405)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
> at com.sun.proxy.$Proxy11.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:376)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy12.create(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:277)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1240)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1219)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1201)
> at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1139)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:533)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> 

[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=589916=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589916
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 17:57
Start Date: 27/Apr/21 17:57
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-827800452


   > > @fengnanli the unit test TestRouterWebHdfsMethods failed. Would you mind 
checking it or triggering Yetus again?
   > 
   > Yes, I checked this test locally and it works fine. Even for the first 
commit it succeeded.
   
   I further checked and found that this sometimes fails in the local env as 
well. It looks like a flaky test, but it is unrelated to this patch. I am 
working on fixing it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589916)
Time Spent: 2.5h  (was: 2h 20m)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE.
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15997:
--
Description: 
Currently dfsadmin -provisionSnapshotTrash only supports creating trash roots 
one by one.

This jira adds an -all argument to create trash roots on ALL snapshottable dirs.

  was:Currently 


> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently dfsadmin -provisionSnapshotTrash only supports creating trash roots 
> one by one.
> This jira adds an -all argument to create trash roots on ALL snapshottable dirs.
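
A rough sketch of how an -all option could iterate the snapshottable directories.
getSnapshottableDirListing() is an existing DistributedFileSystem API, while
provisionTrashForPath() is a hypothetical stand-in for the per-path provisioning the
command already performs:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;

public class ProvisionAllSketch {
  static void provisionAll(DistributedFileSystem dfs) throws IOException {
    // List every snapshottable directory known to the namenode.
    SnapshottableDirectoryStatus[] dirs = dfs.getSnapshottableDirListing();
    if (dirs == null) {
      return; // no snapshottable directories
    }
    for (SnapshottableDirectoryStatus status : dirs) {
      Path dir = status.getFullPath();
      provisionTrashForPath(dfs, dir);
    }
  }

  private static void provisionTrashForPath(DistributedFileSystem dfs, Path dir) {
    // Placeholder for the per-directory provisioning the existing command already does.
  }
}
{code}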



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15997) Implement dfsadmin -provisionSnapshotTrash -all

2021-04-27 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-15997:
--
Description: Currently 

> Implement dfsadmin -provisionSnapshotTrash -all
> ---
>
> Key: HDFS-15997
> URL: https://issues.apache.org/jira/browse/HDFS-15997
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsadmin
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=589870=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589870
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 16:56
Start Date: 27/Apr/21 16:56
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-827760811


   > @fengnanli the unit test TestRouterWebHdfsMethods failed. Would you mind 
checking it or triggering Yetus again?
   
   Yes, I checked this test locally and it works fine. Even for the first 
commit it succeeded. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589870)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> When starting dfsrouter, it throws an NPE.
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15810) RBF: RBFMetrics's TotalCapacity out of bounds

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15810?focusedWorklogId=589869=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589869
 ]

ASF GitHub Bot logged work on HDFS-15810:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 16:55
Start Date: 27/Apr/21 16:55
Worklog Time Spent: 10m 
  Work Description: fengnanli commented on a change in pull request #2910:
URL: https://github.com/apache/hadoop/pull/2910#discussion_r621412490



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestMetricsBase.java
##
@@ -259,4 +259,15 @@ private MembershipState createRegistration(String ns, 
String nn,
 assertTrue(response.getResult());
 return record;
   }
+
+  // refresh namenode registration for new attributes
+  public boolean refreshNamenodeRegistration(NamenodeHeartbeatRequest request)
+      throws IOException {
+    boolean result = membershipStore.namenodeHeartbeat(request).getResult();
+    membershipStore.loadCache(true);
+    MembershipNamenodeResolver resolver =
+        (MembershipNamenodeResolver) router.getNamenodeResolver();
+    resolver.loadCache(true);
+    return result;

Review comment:
   I think the validation is still useful, so I kept the returned result. 
I also added an assert on the result. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589869)
Time Spent: 2h 40m  (was: 2.5h)

> RBF: RBFMetrics's TotalCapacity out of bounds
> -
>
> Key: HDFS-15810
> URL: https://issues.apache.org/jira/browse/HDFS-15810
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiaoxing Wei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-26 at 5.25.12 PM.png, 
> image-2021-02-02-10-59-17-113.png
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> The Long type fields TotalCapacity, UsedCapacity and RemainingCapacity in 
> RBFMetrics may go out of bounds.
> !image-2021-02-02-10-59-17-113.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15999) TestCommitBlockSynchronization incorrect tests

2021-04-27 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HDFS-15999:


 Summary: TestCommitBlockSynchronization incorrect tests
 Key: HDFS-15999
 URL: https://issues.apache.org/jira/browse/HDFS-15999
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Reporter: Ahmed Hussein


{{TestCommitBlockSynchronization}} assumes any exception is the expected 
failure, while assuming that no exception must mean success, without any state 
verification.
The test also creates mocks that do no input verification, return values 
independent of the actual state even after the state should have changed, and 
return invalid values such as a block list containing a null.
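
A hedged illustration of the kind of tightening the description calls for, using Mockito
stubbing and verification; the mocked type and method names are placeholders rather than
the actual test code:

{code:java}
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class MockVerificationSketch {
  // Hypothetical collaborator standing in for the namesystem pieces the test mocks.
  interface BlockManagerFacade {
    boolean commitBlock(long blockId, long genStamp);
  }

  void exampleExpectations() {
    BlockManagerFacade bm = mock(BlockManagerFacade.class);

    // Stub with explicit arguments instead of answering the same value for any input.
    when(bm.commitBlock(eq(42L), eq(1001L))).thenReturn(true);

    // ... exercise the code under test here ...
    bm.commitBlock(42L, 1001L);

    // Verify the interaction actually happened with the expected arguments,
    // rather than treating "no exception" as success.
    verify(bm).commitBlock(42L, 1001L);
  }
}
{code}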



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15378) TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on trunk

2021-04-27 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15378:

Fix Version/s: 3.2.3
   3.1.5
   3.3.1

> TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on 
> trunk
> -
>
> Key: HDFS-15378
> URL: https://issues.apache.org/jira/browse/HDFS-15378
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: HDFS-15378.001.patch
>
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29377/#showFailuresLink]
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29368/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15378) TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on trunk

2021-04-27 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17333289#comment-17333289
 ] 

Takanobu Asanuma commented on HDFS-15378:
-

It also failed in the lower branches. Cherry-picked to branch-3.3, branch-3.2, 
branch-3.1.

> TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight is failing on 
> trunk
> -
>
> Key: HDFS-15378
> URL: https://issues.apache.org/jira/browse/HDFS-15378
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hemanth Boyina
>Assignee: Hemanth Boyina
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15378.001.patch
>
>
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29377/#showFailuresLink]
> [https://builds.apache.org/job/PreCommit-HDFS-Build/29368/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=589746=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589746
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 13:40
Start Date: 27/Apr/21 13:40
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-827614596


   Merged. Thanks for your work, @tomscut!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589746)
Time Spent: 6h  (was: 5h 50m)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> When counting some indicators, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-27 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-15975:

Fix Version/s: 3.3.1

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> When counting some indicators, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=589744=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589744
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 13:39
Start Date: 27/Apr/21 13:39
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589744)
Time Spent: 5h 50m  (was: 5h 40m)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> When counting some indicators, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15975) Use LongAdder instead of AtomicLong

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15975?focusedWorklogId=589741=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589741
 ]

ASF GitHub Bot logged work on HDFS-15975:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 13:38
Start Date: 27/Apr/21 13:38
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2940:
URL: https://github.com/apache/hadoop/pull/2940#issuecomment-827613630


   The failed tests are not related. 
(TestReconstructStripedFile#testErasureCodingWorkerXmitsWeight fails 
with/without this PR. I will cherry-pick HDFS-15378.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589741)
Time Spent: 5h 40m  (was: 5.5h)

> Use LongAdder instead of AtomicLong
> ---
>
> Key: HDFS-15975
> URL: https://issues.apache.org/jira/browse/HDFS-15975
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> When counting some indicators, we can use LongAdder instead of AtomicLong to 
> improve performance. The long value is not an atomic snapshot in LongAdder, 
> but I think we can tolerate that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Bhavik Patel (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17333176#comment-17333176
 ] 

Bhavik Patel commented on HDFS-15982:
-

[~brahma] [~chaosun] [~aceric] [~daryn] can you please review this PR? 

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete data from the Web UI, it should first be moved to the 
> configured/default Trash directory and only be removed after the trash 
> interval has elapsed. Currently, data is removed from the system directly 
> [this behavior should be the same as the CLI cmd].
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly, we should provide a "Skip Trash" option in the HTTP API as well, 
> which should be accessible through the Web UI.
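
For reference, the CLI's delete-to-trash behavior mentioned above goes through
org.apache.hadoop.fs.Trash; a hedged sketch of reusing it for a server-side delete handler
(not the actual PR) might look like:

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class DeleteToTrashSketch {
  // Move to trash unless the caller asked to skip it; when the move does not
  // happen (e.g. trash is disabled), fall back to a plain delete.
  static boolean delete(FileSystem fs, Path path, boolean skipTrash,
      boolean recursive, Configuration conf) throws IOException {
    if (!skipTrash && Trash.moveToAppropriateTrash(fs, path, conf)) {
      return true;
    }
    return fs.delete(path, recursive);
  }
}
{code}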



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
When executing a rename via distcp, we can see that one step disables write by 
removing the permission of src; see DistCpProcedure#disableWrite.

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
  // Save and cancel permission.
  FileStatus status = srcFs.getFileStatus(src);
  fPerm = status.getPermission();
  // TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl init,
  // need a more reasonable way to handle this
  // acl = srcFs.getAclStatus(src);
  srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
  updateStage(Stage.FINAL_DISTCP);
}
{code}
But when finishDistcp completes and restoring is executed, it sets the 
previously stored permission of src on the dest path; see 
DistCpProcedure#restorePermission.
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
  // restore permission.
  if (acl != null) {
    dstFs.modifyAclEntries(dst, acl.getEntries());
  }
  if (fPerm != null) {
    dstFs.setPermission(dst, fPerm);
  }
}
 {code}
I think the permissions of both the src and the dst need to be modified, 
because after restorePermission, when we execute the TrashProcedure, we switch 
to a custom account in order to move the src to the correct trash dir; when 
using a non-admin account, the TrashProcedure fails to clean src due to a 
permission problem.
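
A hedged sketch of the proposed change, not the committed fix: the fields srcFs, dstFs, src,
dst, acl and fPerm are assumed to be the ones DistCpProcedure already holds, and the only
difference from the quoted method is that the saved permission is restored on src as well.

{code:java}
void restorePermission() throws IOException {
  // restore permission on dst (existing behavior).
  if (acl != null) {
    dstFs.modifyAclEntries(dst, acl.getEntries());
  }
  if (fPerm != null) {
    dstFs.setPermission(dst, fPerm);
    // Also restore the saved permission on src, which disableWrite() set to 000,
    // so the TrashProcedure can move it to the trash without admin privileges.
    srcFs.setPermission(src, fPerm);
  }
}
{code}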

  was:
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to clean 
src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write  by removing 
> the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
>  {code}
> i think both the src and dst permission need to be modify, because after 
> restorePermission, when we  execute the TrashProcedure , in order to move the 
> src to the correct trash dir , we switch to a custom account to execute 
> trashProcedure 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to clean 
src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to clean 
src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write  by removing 
> the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
>  {code}
> i think both the src and dst permission need to be modify, because after 
> restorePermission, we need to execute the TrashProcedure , in order to move 
> the src to the correct trash dir , we switch to a custom 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to clean 
src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write  by removing 
> the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> dstFs.removeAcl(dst);
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
>  {code}
> i think both the src and dst permission need to be modify, because after 
> restorePermission, we need to execute the TrashProcedure , in order to move 
> the 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write  by removing 
> the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> dstFs.removeAcl(dst);
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
>  {code}
> i think both the src and 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable write  by removing 
the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable write of src  by 
removing the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write  by removing 
> the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable write of src  by 
removing the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable the write  by 
removing the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable write of src  by 
> removing the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need 

[jira] [Updated] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread leizhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

leizhang updated HDFS-15996:

Description: 
when execute rename distcp ,  we can see one step disable the write  by 
removing the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed to 
remove the src  due to permission problem

  was:
when execute rename distcp ,  we can see one step disable the write  by 
removing the permission of src , see DistCpProcedure#disableWrite

 
{code:java}
protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
// Save and cancel permission.
FileStatus status = srcFs.getFileStatus(src);
fPerm = status.getPermission();
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//acl = srcFs.getAclStatus(src);
srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
updateStage(Stage.FINAL_DISTCP);
}
{code}
but when  finishDistcp and execute restoring,  it set the previous stored 
permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
{code:java}
/**
 * Enable write by restoring the x permission.
 */
void restorePermission() throws IOException {
// restore permission.
//TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
init,need a more reasonable way to handle this
//dstFs.removeAcl(dst);
if (acl != null) {
dstFs.modifyAclEntries(dst, acl.getEntries());
}
if (fPerm != null) {
dstFs.setPermission(dst, fPerm);
}
}
 {code}
i think both the src and dst permission need to be modify, because after 
restorePermission, we need to execute the TrashProcedure , in order to move the 
src to the correct trash dir , we switch to a custom account to execute 
trashProcedure , when use non-admin account, the trashProcedure failed due to 
permission problem


> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable the write  by 
> removing the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to 

[jira] [Commented] (HDFS-15996) RBF: federation-rename by distcp should restore the permission of both src and dst when execute DistCpProcedure#restorePermission

2021-04-27 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17333145#comment-17333145
 ] 

Viraj Jasani commented on HDFS-15996:
-

{quote}i think both the src and dst permission need to be modify
{quote}
+1 to this idea.

> RBF: federation-rename by distcp  should restore the permission of both src 
> and dst when execute DistCpProcedure#restorePermission
> --
>
> Key: HDFS-15996
> URL: https://issues.apache.org/jira/browse/HDFS-15996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Reporter: leizhang
>Priority: Major
>
> when execute rename distcp ,  we can see one step disable the write  by 
> removing the permission of src , see DistCpProcedure#disableWrite
>  
> {code:java}
> protected void disableWrite(FedBalanceContext fbcontext) throws IOException {
> // Save and cancel permission.
> FileStatus status = srcFs.getFileStatus(src);
> fPerm = status.getPermission();
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //acl = srcFs.getAclStatus(src);
> srcFs.setPermission(src, FsPermission.createImmutable((short) 0));
> updateStage(Stage.FINAL_DISTCP);
> }
> {code}
> but when  finishDistcp and execute restoring,  it set the previous stored 
> permission of  src  to the  dest path , see  DistCpProcedure#restorePermission
> {code:java}
> /**
>  * Enable write by restoring the x permission.
>  */
> void restorePermission() throws IOException {
> // restore permission.
> //TODO our cluster set the dfs.namenode.acls.enabled to false so skip acl 
> init,need a more reasonable way to handle this
> //dstFs.removeAcl(dst);
> if (acl != null) {
> dstFs.modifyAclEntries(dst, acl.getEntries());
> }
> if (fPerm != null) {
> dstFs.setPermission(dst, fPerm);
> }
> }
>  {code}
> i think both the src and dst permission need to be modify, because after 
> restorePermission, we need to execute the TrashProcedure , in order to move 
> the src to the correct trash dir , we switch to a custom account to execute 
> trashProcedure , when use non-admin account, the trashProcedure failed due to 
> permission problem



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15561) RBF: Fix NullPointException when start dfsrouter

2021-04-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15561?focusedWorklogId=589618&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-589618
 ]

ASF GitHub Bot logged work on HDFS-15561:
-

Author: ASF GitHub Bot
Created on: 27/Apr/21 08:46
Start Date: 27/Apr/21 08:46
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #2954:
URL: https://github.com/apache/hadoop/pull/2954#issuecomment-827433056


   @fengnanli the unit test TestRouterWebHdfsMethods failed. Would you mind checking it 
or triggering Yetus again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 589618)
Time Spent: 2h 10m  (was: 2h)

> RBF: Fix NullPointException when start dfsrouter
> 
>
> Key: HDFS-15561
> URL: https://issues.apache.org/jira/browse/HDFS-15561
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Xie Lei
>Assignee: Fengnan Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> when start dfsrouter, it throw NPE
> {code:java}
> 2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: null2020-09-08 19:41:14,989 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Unexpected exception while communicating with null:null: 
> java.net.UnknownHostException: nulljava.lang.IllegalArgumentException: 
> java.net.UnknownHostException: null at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:447)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:171)
>  at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:123) 
> at 
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) 
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:248)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:205)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>  at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>  at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
>  at 
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:300)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
>  at java.base/java.lang.Thread.run(Thread.java:844)Caused by: 
> java.net.UnknownHostException: null ... 14 more
> {code}
>  
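
For context, the stack trace above fails while building a proxy for the literal address 
"null:null", i.e. no usable RPC address was resolved for the namenode. Below is a purely 
hypothetical guard illustrating the kind of check such a fix needs; the names 
(rpcAddress, log, nsId) are placeholders and this is not the actual patch from the 
linked pull request:
{code:java}
// Hypothetical illustration only, not the HDFS-15561 patch: validate the
// namenode RPC address before creating a proxy, so an unconfigured address
// does not surface later as "java.net.UnknownHostException: null".
private static boolean hasUsableRpcAddress(String rpcAddress,
    org.slf4j.Logger log, String nsId) {
  if (rpcAddress == null || rpcAddress.contains("null")) {
    log.warn("Namenode RPC address is unavailable for nameservice {}; "
        + "skipping this heartbeat", nsId);
    return false;
  }
  return true;
}
{code}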



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Release Note: 
The default behaviour of the webhdfs and httpfs DELETE APIs is going to be similar to 
the Delete shell command. If the config "fs.trash.interval" is set to a value greater 
than 0, the DELETE API will by default try to move the given file to the .Trash dir 
(similar to the Delete shell command's behaviour).
However, the DELETE API will also have a skiptrash query param available that can 
skip the trash even if the config "fs.trash.interval" is set to a value greater than 0 
(similar to the skipTrash argument of the Delete shell command).
The default value of the skiptrash query param will be false.

API change:
curl -i -X DELETE "http://host:port/webhdfs/v1/path?op=DELETE[&recursive=true|false][&skiptrash=true|false]"

  was:
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is set to value greater than 
0, DELETE API will by-default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://host:port/webhdfs/v1/path?op=DELETE[&recursive=true|false][&skiptrash=true|false]"
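
To make the API shape above concrete, here is a hedged, self-contained Java sketch of 
issuing such a DELETE request; the host, port, user and path are placeholder values 
rather than anything from this issue, and on a real or secured cluster the request 
would also need the usual authentication parameters:
{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsDeleteExample {
  public static void main(String[] args) throws Exception {
    // skiptrash=false (the default): when fs.trash.interval > 0 the file is
    // moved to the caller's .Trash dir; skiptrash=true bypasses the trash.
    URL url = new URL("http://nn1.example.com:9870/webhdfs/v1/user/alice/data.txt"
        + "?op=DELETE&recursive=false&skiptrash=false");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("DELETE");
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}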


> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17332992#comment-17332992
 ] 

Viraj Jasani commented on HDFS-15982:
-

[~aajisaka] [~ayushtkn] [~liuml07] [~tasanuma] Sorry for the wider ping. Since the 
3.3.1 RC cut is going to happen very soon, could you please help review the PRs at 
your convenience:
 # trunk [PR|https://github.com/apache/hadoop/pull/2927]
 # branch-3.3 backport [PR|https://github.com/apache/hadoop/pull/2925] (the trunk 
PR applies cleanly to branch-3.3)

Thanks

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Release Note: 
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is set to value greater than 
0, DELETE API will by-default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://host:port/webhdfs/v1/path?op=DELETE[&recursive=true|false][&skiptrash=true|false]"

  was:
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is set to value greater than 
0, DELETE API will by-default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://<host>:<port>/webhdfs/v1/<path>?op=DELETE[&recursive=<true|false>][&skiptrash=<true|false>]"


> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Release Note: 
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is set to value greater than 
0, DELETE API will by-default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://<host>:<port>/webhdfs/v1/<path>?op=DELETE[&recursive=<true|false>][&skiptrash=<true|false>]"

  was:
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is to value greater than 0, 
DELETE API will by default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://<host>:<port>/webhdfs/v1/<path>?op=DELETE[&recursive=<true|false>][&skiptrash=<true|false>]"


> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Release Note: 
Webhdfs and httpfs DELETE API's default behaviour is going to be similar to 
Delete shell command. If config fs.trash.interval is to value greater than 0, 
DELETE API will by default try to move given file to .Trash dir (similar to 
Delete shell command's behaviour).
However, DELETE API will also have skiptrash query param available that can 
skip trash even if config fs.trash.interval is to value greater than 0 (similar 
to skipTrash argument of Delete shell command).
Default value of skiptrash query param will be false.

API change:
curl -i -X DELETE "http://<host>:<port>/webhdfs/v1/<path>?op=DELETE[&recursive=<true|false>][&skiptrash=<true|false>]"

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15982) Deleted data using HTTP API should be saved to the trash

2021-04-27 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15982:

Component/s: httpfs

> Deleted data using HTTP API should be saved to the trash
> 
>
> Key: HDFS-15982
> URL: https://issues.apache.org/jira/browse/HDFS-15982
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs, httpfs, webhdfs
>Reporter: Bhavik Patel
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screenshot 2021-04-23 at 4.19.42 PM.png, Screenshot 
> 2021-04-23 at 4.36.57 PM.png
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> If we delete the data from the Web UI then it should be first moved to 
> configured/default Trash directory and after the trash interval time, it 
> should be removed. currently, data directly removed from the system[This 
> behavior should be the same as CLI cmd]
> This can be helpful when the user accidentally deletes data from the Web UI.
> Similarly we should provide "Skip Trash" option in HTTP API as well which 
> should be accessible through Web UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org