jordepic commented on code in PR #14501:
URL: https://github.com/apache/iceberg/pull/14501#discussion_r2538855525
##########
core/src/main/java/org/apache/iceberg/hadoop/HadoopFileIO.java:
##########
@@ -211,6 +213,27 @@ private ExecutorService executorService() {
return executorService;
}
+  private void deletePath(FileSystem fs, Path toDelete, boolean recursive)
+      throws IOException {
+    if (!shouldUseTrashIfConfigured()) {
+      fs.delete(toDelete, recursive);
+      return;
+    }
+    Trash trash = new Trash(fs, getConf());
+    if (!trash.isEnabled()) {
Review Comment:
> I could see this going both ways in that if you configure the core-site.xml and didn't realize there was a second property to set, you would lose data.

To be fair, this is the status quo.

> You also potentially have the issue where deleted data goes to the trash configured for whoever deleted it (so two separate deletes could end up in completely different locations).

I suppose that would be the case for HDFS trash in general - even when writing Parquet files without Iceberg, each writer might potentially use a different trash location based on its own configuration.

I think these are just limitations of the existing system, but let me know if you disagree.
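
For readers following the `trash.isEnabled()` discussion: Hadoop's client-side trash is governed by `fs.trash.interval` in `core-site.xml`, which is what `Trash.isEnabled()` effectively checks (the trash is disabled when the interval is 0, the default; on HDFS, a server-side setting, if present, generally takes precedence over the client's). A minimal, illustrative sketch of enabling it (the value is an assumption, not from this PR):

```xml
<!-- core-site.xml (illustrative): a positive fs.trash.interval enables the
     trash; 0 (the default) disables it, making Trash.isEnabled() false. -->
<property>
  <name>fs.trash.interval</name>
  <!-- minutes to retain trashed files before permanent deletion; 1440 = 1 day -->
  <value>1440</value>
</property>
```

This is the "second property" failure mode discussed above: a user may enable trash here but miss the separate Iceberg-side setting (or vice versa), in which case deletes bypass the trash.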
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]