deniskuzZ commented on PR #6432:
URL: https://github.com/apache/iceberg/pull/6432#issuecomment-1519993615

   > @rbalamohan, do you have the same position files that are read over and 
over again for different data files in a combined scan task? Or is it mostly 
unique delete files per each data file?
   > 
   > I am a bit concerned about using a thread pool on executors. Let me take a 
look with fresh eyes tomorrow. I wonder whether we can cache the result of 
reading a particular delete file (like bitmap or whatever is constructed) and 
reuse that when the same delete file must be read for different data files 
instead of parallelizing reading deletes.
   
   @aokolnychyi, if you have time, please take a look at 
https://github.com/apache/iceberg/pull/6527, which introduces caching for 
positional deletes.
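   The caching idea quoted above could be sketched roughly as follows. This is 
a minimal, hypothetical illustration (the class and method names are invented, 
not Iceberg APIs): the bitmap parsed from a positional delete file is memoized 
by file path, so a delete file shared by many data files in a combined scan 
task is read only once instead of being re-read (or read in parallel) per data 
file.

   ```java
   import java.util.BitSet;
   import java.util.List;
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.atomic.AtomicInteger;

   // Hypothetical sketch of caching parsed positional deletes, keyed by
   // delete file path. Not Iceberg code; names are illustrative only.
   public class DeleteFileCacheSketch {
       // Cached bitmaps of deleted positions, one entry per delete file.
       private final Map<String, BitSet> cache = new ConcurrentHashMap<>();
       // Counts actual "file reads" so reuse is observable.
       final AtomicInteger reads = new AtomicInteger();

       // Stand-in for reading and parsing a positional delete file.
       private BitSet readDeleteFile(String path, List<Long> positions) {
           reads.incrementAndGet();
           BitSet bitmap = new BitSet();
           positions.forEach(p -> bitmap.set(p.intValue()));
           return bitmap;
       }

       // Reuses the cached bitmap when the same delete file must be
       // applied to a different data file in the scan task.
       public BitSet deletedPositions(String path, List<Long> positions) {
           return cache.computeIfAbsent(path, p -> readDeleteFile(p, positions));
       }

       public static void main(String[] args) {
           DeleteFileCacheSketch sketch = new DeleteFileCacheSketch();
           List<Long> positions = List.of(3L, 7L);
           // Same delete file consulted for two different data files:
           // only one real read happens.
           BitSet a = sketch.deletedPositions("s3://bucket/delete-001.parquet", positions);
           BitSet b = sketch.deletedPositions("s3://bucket/delete-001.parquet", positions);
           System.out.println(sketch.reads.get());
           System.out.println(a.get(3) && b.get(7));
       }
   }
   ```

   A real implementation would also need eviction (the cache is per-executor 
and delete files can be large) and thread-safe sharing of the bitmap, but 
`computeIfAbsent` already guarantees a single parse per key under concurrency.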


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
