lirui-apache commented on PR #4596: URL: https://github.com/apache/iceberg/pull/4596#issuecomment-1138691153
@rdblue Thanks for the advice. Actually it's not a small-file problem: each partition has over 200 billion records. We do have optimizations to make sure each query only scans a small portion, but that doesn't help if we hit OOM during the planning phase. I also noticed the latest code supports planning files with separate pools. So with separate pools + limited manifest entries + limited pool size, I think we can bring the memory usage under control. Personally I still prefer the blocking-queue solution, though, which seems easier to implement and more reliable.
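To illustrate the blocking-queue idea being proposed, here is a minimal, self-contained sketch (not Iceberg's actual planner code; the `ScanTask` record and all names are hypothetical): a single planning thread produces tasks into a bounded queue, so the producer blocks when the queue is full instead of buffering every manifest entry in memory, and the consumer drains tasks as they arrive.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of bounded-memory planning via a blocking queue.
// The queue capacity caps how many planned tasks can sit in memory at once.
public class BoundedPlanningSketch {

  // Hypothetical stand-in for a planned unit of work (e.g. one data file to scan).
  record ScanTask(String filePath) {}

  public static void main(String[] args) throws InterruptedException {
    // At most 1000 tasks are buffered at any time; put() blocks when full.
    BlockingQueue<ScanTask> queue = new ArrayBlockingQueue<>(1000);
    ScanTask poison = new ScanTask("<end-of-planning>");

    ExecutorService planner = Executors.newSingleThreadExecutor();
    planner.submit(() -> {
      try {
        // Pretend each manifest entry becomes one task.
        for (int i = 0; i < 1_000_000; i++) {
          queue.put(new ScanTask("data/file-" + i + ".parquet"));
        }
        queue.put(poison); // signal that planning is done
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    // Consumer drains tasks as they are produced, keeping memory usage flat.
    long count = 0;
    for (ScanTask task = queue.take(); task != poison; task = queue.take()) {
      count++; // in a real job, hand the task to a reader here
    }
    System.out.println("consumed " + count + " planned tasks");
    planner.shutdown();
  }
}
```

The key point is that planner memory is bounded by the queue capacity rather than by the total number of files in the partition, which is what makes this approach attractive when a partition has hundreds of billions of records.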
