[ 
https://issues.apache.org/jira/browse/HDFS-16570?focusedWorklogId=770643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-770643
 ]

ASF GitHub Bot logged work on HDFS-16570:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 16/May/22 02:50
            Start Date: 16/May/22 02:50
    Worklog Time Spent: 10m 
      Work Description: zhangxiping1 commented on code in PR #4269:
URL: https://github.com/apache/hadoop/pull/4269#discussion_r873287952


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:
##########
@@ -1896,6 +1897,9 @@ public FederationRPCMetrics getRPCMetrics() {
   boolean isPathAll(final String path) {
     if (subclusterResolver instanceof MountTableResolver) {
       try {
+        if (isTrashPath(path)) {
+          return true;
+        }

Review Comment:
   I can make two changes in the isPathAll function:
   1. Process the trash path: remove the trash prefix, then check the 
remaining path against the mount table.
   2. Check whether the path is a trash path.
   If we delete, mkdir, or ls on trash data and resolve returns multiple 
RemoteLocations, then we should operate on all of those RemoteLocations, so 
I am going to choose the second option. But the first would certainly work 
too.
   If you think there is something wrong with this, feel free to talk me out 
of it, thank you.
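
For concreteness, a minimal sketch of the two options; the regex and the 
subtractTrashPrefix helper are illustrative assumptions, not code from this 
PR:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TrashPathSketch {
  // Assumed shape: /user/<user>/.Trash/<checkpoint>/<original path>.
  private static final Pattern TRASH_PATTERN =
      Pattern.compile("^/user/[^/]+/\\.Trash/[^/]+(/.*)?$");

  // Option 2: detect a trash path, so isPathAll can return true for it.
  static boolean isTrashPath(String path) {
    return TRASH_PATTERN.matcher(path).matches();
  }

  // Option 1 (hypothetical helper): strip the trash prefix so the rest of
  // the path can be checked against the mount table like a normal path.
  static String subtractTrashPrefix(String path) {
    Matcher m = TRASH_PATTERN.matcher(path);
    return (m.matches() && m.group(1) != null) ? m.group(1) : path;
  }

  public static void main(String[] args) {
    String p = "/user/zhangxiping/.Trash/Current/home/data/test";
    System.out.println(isTrashPath(p));          // true
    System.out.println(subtractTrashPrefix(p));  // /home/data/test
  }
}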





Issue Time Tracking
-------------------

    Worklog Id:     (was: 770643)
    Time Spent: 1.5h  (was: 1h 20m)

> RBF: The router using MultipleDestinationMountTableResolver fails to remove 
> data from multiple subclusters under the mount point
> ------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16570
>                 URL: https://issues.apache.org/jira/browse/HDFS-16570
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rbf
>            Reporter: Xiping Zhang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Please look at the following example:
> hadoop> hdfs dfsrouteradmin -add /home/data ns0,ns1 /home/data -order RANDOM
> Successfully added mount point /home/data
> hadoop> hdfs dfsrouteradmin -ls
> Mount Table Entries:
> Source        Destinations                     Owner        Group           Mode       Quota/Usage
> /home/data    ns0->/home/data,ns1->/home/data  zhangxiping  Administrators  rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> hadoop> hdfs dfs -touch hdfs://ns0/home/data/test/fileNs0.txt
> hadoop> hdfs dfs -touch hdfs://ns1/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -ls hdfs://ns0/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns0/home/data/test/fileNs0.txt
> hadoop> hdfs dfs -ls hdfs://ns1/home/data/test/fileNs1.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://ns1/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -ls hdfs://127.0.0.1:40250/home/data/test
> Found 2 items
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs0.txt
> -rw-r--r--   3 zhangxiping supergroup          0 2022-05-06 18:01 hdfs://127.0.0.1:40250/home/data/test/fileNs1.txt
> hadoop> hdfs dfs -rm -r hdfs://127.0.0.1:40250/home/data/test
> rm: Failed to move to trash: hdfs://127.0.0.1:40250/home/data/test: rename destination parent /user/zhangxiping/.Trash/Current/home/data/test not found.
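
For context on the error above: moving a path to trash derives the rename 
destination by appending the original path under /user/<user>/.Trash/Current, 
and the rename needs that destination's parent directories to exist in 
whichever subcluster the rename resolves to. A minimal sketch of that 
derivation (not the Hadoop trash implementation itself, though 
Path.mergePaths is the real org.apache.hadoop.fs.Path API):

import org.apache.hadoop.fs.Path;

public class TrashDestinationSketch {
  public static void main(String[] args) {
    // Checkpoint directory of the user's trash root.
    Path trashCurrent = new Path("/user/zhangxiping/.Trash/Current");
    Path original = new Path("/home/data/test");
    // mergePaths concatenates the two paths verbatim.
    Path trashDest = Path.mergePaths(trashCurrent, original);
    System.out.println(trashDest);
    // -> /user/zhangxiping/.Trash/Current/home/data/test
    // With a RANDOM multi-destination mount, the directories leading to
    // this destination may have been created in only one subcluster, so a
    // rename resolved to the other subcluster fails with "rename
    // destination parent ... not found".
  }
}

This is why the PR makes isPathAll return true for trash paths: mkdirs and 
rename then run against every RemoteLocation of the mount point.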


