kbendick edited a comment on pull request #2867:
URL: https://github.com/apache/iceberg/pull/2867#issuecomment-886470356


   > This add one feature that flink write iceberg auto compact small files.
   
   Possibly I'm missing something, but I don't see any accounting for files 
that are already at or near the optimal size. It's late and my eyes may 
deceive me, but this appears to compact all files toward the target file size 
in bytes, regardless of their existing size. In some cases, the cost of 
opening and rewriting a file outweighs the value of the rewrite, and it's 
better to leave the data as is. Can we account for this like we do in some 
other places? Or is that functionality hidden elsewhere and I'm just missing 
it?
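   To illustrate the kind of check I mean, here is a minimal sketch (not the actual Iceberg API; the class, method names, and the `minRewriteRatio` threshold are hypothetical) of selecting only files well below the target size as rewrite candidates, so near-target files are left alone:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hypothetical illustration only: pick rewrite candidates by size,
   // skipping files already close to the target size.
   class CompactionCandidateFilter {
       // Returns the sizes of files worth rewriting: those smaller than
       // minRewriteRatio * targetSizeBytes. Files at or above that bound
       // are considered "close enough" and left as-is.
       static List<Long> selectForRewrite(
               List<Long> fileSizes, long targetSizeBytes, double minRewriteRatio) {
           List<Long> candidates = new ArrayList<>();
           long threshold = (long) (targetSizeBytes * minRewriteRatio);
           for (long size : fileSizes) {
               if (size < threshold) {
                   candidates.add(size);
               }
           }
           return candidates;
       }
   }
   ```

   With a 128 MB target and a 0.75 ratio, a 120 MB file would be skipped while a 10 MB file would still be compacted, avoiding rewrites that cost more than they save.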
   
   This would be a good topic to consider discussing in the mentioned GitHub 
issue :) 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


