Jason-liujc commented on issue #7653:
URL: https://github.com/apache/hudi/issues/7653#issuecomment-1766886802

   Can't speak to what the official guidance from Hudi is at the moment (it seems 
they will roll out the non-blocking concurrent write feature in version 1.0+).
   
   We had to increase `yarn.resourcemanager.am.max-attempts` and 
`spark.yarn.maxAppAttempts` (the Spark-specific config) to allow more retries, 
and we reorganized our tables to reduce concurrent writes. No other lock 
provider was an option for us, since we run different jobs from different 
clusters.
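   A minimal sketch of the retry configuration described above. The values here are illustrative, not the ones we used; note that `yarn.resourcemanager.am.max-attempts` is a cluster-side YARN setting (in `yarn-site.xml`) that caps the per-application `spark.yarn.maxAppAttempts`, so both must be raised for the retries to take effect:

```shell
# Cluster-side cap in yarn-site.xml (requires ResourceManager restart):
#   <property>
#     <name>yarn.resourcemanager.am.max-attempts</name>
#     <value>5</value>
#   </property>

# Per-application retry count, which must be <= the YARN cap above.
# The job name and value of 5 are illustrative assumptions.
spark-submit \
  --master yarn \
  --conf spark.yarn.maxAppAttempts=5 \
  my_hudi_writer_job.py
```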


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.