szehon-ho opened a new issue #3582: URL: https://github.com/apache/iceberg/issues/3582
For ExpireSnapshotAction and RemoveOrphanFile, if there are a lot of files to remove, the Spark jobs run very slowly and are prone to timeouts. I assume the current design of deleting on the driver side is meant to avoid throttling or DDoS'ing the filesystem, but I wonder if this could be made more flexible:

- Could we add a configuration to perform deletes on the Spark executor side instead?
- Could we add a batch deleteObjects() interface to FileIO to take advantage of faster delete methods native to different storages, for example AWS Multi-Object Delete (up to 1,000 keys per request)? (See the sketch below.)
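To make the second question concrete, here is a minimal sketch of what a batch-delete hook in FileIO backed by S3 Multi-Object Delete might look like. This is not an existing Iceberg API: the `SupportsBatchDelete` interface, the `deleteFiles()` method, and the `S3BatchDeleteIO` class are hypothetical names for illustration, and the sketch assumes the AWS SDK v2 `S3Client` with plain object keys (no scheme or bucket parsing).

```java
import java.util.List;
import java.util.stream.Collectors;

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.Delete;
import software.amazon.awssdk.services.s3.model.DeleteObjectsRequest;
import software.amazon.awssdk.services.s3.model.ObjectIdentifier;

/** Hypothetical mix-in for FileIO implementations that can delete in batches. */
interface SupportsBatchDelete {
  /** Deletes a collection of paths, batching as needed for the underlying storage. */
  void deleteFiles(List<String> paths);
}

/** Hypothetical S3-backed implementation using Multi-Object Delete. */
class S3BatchDeleteIO implements SupportsBatchDelete {
  private static final int MAX_KEYS_PER_REQUEST = 1000; // S3 Multi-Object Delete limit

  private final S3Client s3;
  private final String bucket;

  S3BatchDeleteIO(S3Client s3, String bucket) {
    this.s3 = s3;
    this.bucket = bucket;
  }

  @Override
  public void deleteFiles(List<String> keys) {
    // Split the input into chunks of at most 1,000 keys, the Multi-Object Delete limit.
    for (int start = 0; start < keys.size(); start += MAX_KEYS_PER_REQUEST) {
      List<ObjectIdentifier> batch =
          keys.subList(start, Math.min(start + MAX_KEYS_PER_REQUEST, keys.size())).stream()
              .map(key -> ObjectIdentifier.builder().key(key).build())
              .collect(Collectors.toList());

      // One request removes the whole batch instead of issuing one DELETE per object.
      s3.deleteObjects(
          DeleteObjectsRequest.builder()
              .bucket(bucket)
              .delete(Delete.builder().objects(batch).build())
              .build());
    }
  }
}
```

With something like this in place, an executor-side delete configuration could map partitions of file paths onto `deleteFiles()`, so each task issues a handful of batch requests rather than the driver deleting files one at a time.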
