xloya commented on issue #5564:
URL: https://github.com/apache/gravitino/issues/5564#issuecomment-2475565494

   > > Besides, moving to trash on `rm` is the default behavior of the Hadoop delete shell command. If you do not want the data moved to trash, you can add the `-skipTrash` flag, e.g. `hadoop dfs -rm -skipTrash gvfs://fileset/{catalog}/{schema}/{fileset_name}/sub_dir`.
   > > (two screenshot attachments omitted)
   > 
   > @xloya To ensure data security, the `-skipTrash` option has been forcibly disabled, so deleted data first goes to the trash to prevent accidental deletion. How can a fileset be compatible with the `hadoop fs -rm` command? Thanks.
   
   Hi, if you cannot use the `-skipTrash` option, I don't think there is a way to delete directly from the shell unless you modify the Hadoop source code for the `rm` command.
   Another option is to delete through the Hadoop FileSystem API in Java / Scala code. I think this deletes the directory or file directly instead of moving it to the trash:
   ```java
   import java.io.IOException;

   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class Test {
       public static void main(String[] args) throws IOException {
           Path filesetPath =
               new Path("gvfs://fileset/{your_catalog}/{your_schema}/{your_fileset}/sub_path");
           try (FileSystem fs = filesetPath.getFileSystem(new Configuration())) {
               // FileSystem#delete bypasses the trash entirely; the second
               // argument enables recursive deletion for directories.
               fs.delete(filesetPath, true);
           }
       }
   }
   ```
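   Conversely, if you do want trash semantics from application code (matching the shell's default), Hadoop exposes a `Trash` helper class. A rough sketch, assuming the same placeholder gvfs path as above and that trash is enabled via `fs.trash.interval` (I have not verified this against a GVFS-backed fileset):

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.Trash;

   public class TrashDeleteSketch {
       public static void main(String[] args) throws Exception {
           Configuration conf = new Configuration();
           // fs.trash.interval must be > 0 (minutes to keep trashed files),
           // otherwise moveToAppropriateTrash falls through to a plain delete.
           conf.setLong("fs.trash.interval", 1440);
           Path p = new Path("gvfs://fileset/{your_catalog}/{your_schema}/{your_fileset}/sub_path");
           try (FileSystem fs = p.getFileSystem(conf)) {
               // Moves the path into the user's trash directory instead of
               // deleting it; returns true if the move succeeded.
               boolean moved = Trash.moveToAppropriateTrash(fs, p, conf);
               System.out.println("moved to trash: " + moved);
           }
       }
   }
   ```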

