steveloughran commented on issue #14951:
URL: https://github.com/apache/iceberg/issues/14951#issuecomment-3946861699

   So if it is happening during delete, you could raise the size of each batch delete from its default of 250 up to the S3 maximum of 1000, e.g.
   ```
   s3.delete.batch-size 500
   ```
   
   That halves the number of HTTP requests needed.
   But as each deleted object still counts against S3's 3,500 write-requests-per-second-per-prefix allocation, large bulk deletes can have adverse consequences; see 
https://github.com/apache/hadoop/commit/56dee667707926f3796c7757be1a133a362f05c9
 for the details. 
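   To make the saving concrete, here is a quick back-of-the-envelope sketch (the file count is purely illustrative): each batch is one `DeleteObjects` call, so the request count is the ceiling of files divided by batch size.
   ```python
   import math

   files_to_delete = 10_000   # illustrative number of data files to remove
   default_batch = 250        # Iceberg's default s3.delete.batch-size
   larger_batch = 500         # the raised value from the example above

   # Each batch becomes one S3 DeleteObjects HTTP request.
   default_requests = math.ceil(files_to_delete / default_batch)  # 40 requests
   larger_requests = math.ceil(files_to_delete / larger_batch)    # 20 requests
   ```
   Note the per-prefix rate limit is counted per object, not per request, so batching reduces connection overhead but not the write-capacity consumed.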


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

