TSFenwick commented on PR #14642:
URL: https://github.com/apache/druid/pull/14642#issuecomment-1647196793

   @jasonk000 in my hypothetical best world for this, the deep storage 
extension would determine, or at least hint at, the optimal batch size. S3's 
multi-object delete accepts at most 1000 objects per request, so for S3 a 
1000-segment limit seems a bit small: the overhead of spinning up a task to 
handle the delete outweighs the work of deleting only a modest number of 
segments.
   Storage systems that don't support batch delete should fall back to a 
sensible default.
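
   For illustration, a minimal sketch of how a caller might chunk segment 
keys to respect S3's 1000-object multi-object-delete limit. The class and 
method names here are hypothetical, not Druid's actual API; only the 
1000-key-per-request limit comes from S3's DeleteObjects documentation:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDeleteSketch {
    // S3's DeleteObjects API accepts at most 1000 keys per request.
    static final int S3_MAX_BATCH = 1000;

    // Split the full key list into chunks no larger than maxBatch, so each
    // chunk can be submitted as one multi-object delete request. (Hypothetical
    // helper for illustration; a storage extension could expose maxBatch as
    // its batch-size hint.)
    static List<List<String>> partition(List<String> keys, int maxBatch) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += maxBatch) {
            batches.add(keys.subList(i, Math.min(i + maxBatch, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) {
            keys.add("segment-" + i);
        }
        // 2500 keys split at the S3 limit -> chunks of 1000, 1000, 500.
        List<List<String>> batches = partition(keys, S3_MAX_BATCH);
        System.out.println(batches.size() + " batches, last has "
                + batches.get(batches.size() - 1).size() + " keys");
    }
}
```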


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

