bchittari commented on issue #11985:
URL: https://github.com/apache/hudi/issues/11985#issuecomment-2369013603

   @ad1happy2go, thanks for taking a look at my issue.
   
   I have already set "spark.hadoop.fs.s3a.connection.maximum" to 9000 
(earlier I used 1000, when the job was working fine up to 18th September).
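   For reference, this is roughly how the setting is passed to the job; this is an illustrative sketch of a Spark configuration fragment (for example via the `--conf` job parameter in Glue), not the exact job definition:
   
   ```shell
   # Raise the S3A client's connection pool limit (value matches the one above)
   --conf spark.hadoop.fs.s3a.connection.maximum=9000
   ```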
   
   We are using AWS Glue 4.0, which by default uses Hudi 0.12.1. Is this a 
known issue in Hudi 0.12.1?
   Upgrading to 0.12.3 is not easy for us, as it would require full regression 
testing of the upgrade.
   
   Are there any workarounds for this issue that do not require upgrading Hudi 
to 0.12.3 or later?
   
   Regards, Balki


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
