baitian77 commented on issue #5564:
URL: https://github.com/apache/gravitino/issues/5564#issuecomment-2475634280
> > > Besides, the `rm` command deleting to trash is the default behavior of the Hadoop delete shell command. If you do not need to move data to trash, you can add the `-skipTrash` parameter, e.g. `hadoop dfs -rm -skipTrash gvfs://fileset/{catalog}/{schema}/{fileset_name}/sub_dir`.


> >
> >
> > @xloya To ensure data security, the `-skipTrash` option has been forcibly disabled, so deleted data first goes to the trash to prevent accidental deletion. How can fileset be made compatible with the `hadoop fs -rm` command? Thanks.
>
> Hi, if you cannot use the `-skipTrash` option, I think there may be no way to delete directly unless you modify the Hadoop source code for the `rm` command. Another option is to use the Hadoop FileSystem API in Java / Scala code. I think this will delete the directory or file directly instead of moving it to trash:
>
> ```java
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class Test {
>   public static void main(String[] args) throws IOException {
>     Path filesetPath =
>         new Path("gvfs://fileset/{your_catalog}/{your_schema}/{your_fileset}/sub_path");
>     try (FileSystem fs = filesetPath.getFileSystem(new Configuration())) {
>       // The second argument enables recursive deletion; FileSystem#delete
>       // removes the path directly rather than moving it to trash.
>       fs.delete(filesetPath, true);
>     }
>   }
> }
> ```
Sure, thank you for your response.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]