pp-eyushin opened a new issue, #14377: URL: https://github.com/apache/iceberg/issues/14377
### Feature Request / Improvement

The current Spark integration does not appear to honor the `write.parquet.row-group-size-bytes` table property. As a workaround, we make an extra call to the [rewrite_data_files](https://iceberg.apache.org/docs/latest/spark-procedures/#rewrite_data_files) procedure to align the underlying Parquet file layout after Spark completes its writes, which is sub-optimal. This property is crucial when we want to limit parallelism and reduce the amount of data fed to a single Spark task reading the table, in order to avoid OutOfMemory errors.

### Query engine

Spark

### Willingness to contribute

- [x] I can contribute this improvement/feature independently
- [ ] I would be willing to contribute this improvement/feature with guidance from the Iceberg community
- [ ] I cannot contribute this improvement/feature at this time

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at: [email protected]
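The workaround described in the issue can be sketched as two Spark SQL statements: set the table property, then run `rewrite_data_files` so existing files are rewritten with the requested row-group size. This is an illustrative sketch only — the catalog name (`spark_catalog`), table name (`db.tbl`), and the 32 MiB value are placeholders, not details from the issue:

```sql
-- Set the desired Parquet row-group size on the Iceberg table
-- (33554432 bytes = 32 MiB; placeholder value).
ALTER TABLE spark_catalog.db.tbl
SET TBLPROPERTIES ('write.parquet.row-group-size-bytes' = '33554432');

-- Rewrite existing data files so they pick up the new row-group layout.
CALL spark_catalog.system.rewrite_data_files(table => 'db.tbl');
```

As the issue notes, this only realigns the layout after the fact; the request is for Spark writes to respect the property directly, so the extra compaction pass becomes unnecessary.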
