jpugliesi commented on issue #2796: URL: https://github.com/apache/iceberg/issues/2796#issuecomment-1169237845
For the record, I was able to resolve this (much in line with @otayel's suggestion) by:

1. Reducing the number of Spark executors (to 1, in my case)
2. Configuring the AWS SDK S3 stream buffer size:

```
spark.driver.extraJavaOptions   -Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=512m
spark.executor.extraJavaOptions -Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=512m
```
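For reference, a minimal sketch of how the same settings could be passed at submit time instead of through `spark-defaults.conf`; the executor count and the 512m buffer size are just the values from the comment above, and the trailing `...` stands in for the rest of your application arguments:

```
spark-submit \
  --conf spark.executor.instances=1 \
  --conf spark.driver.extraJavaOptions=-Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=512m \
  --conf spark.executor.extraJavaOptions=-Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=512m \
  ...
```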
