Jason-liujc commented on issue #7653:
URL: https://github.com/apache/hudi/issues/7653#issuecomment-1752131875

   The main thing we did was change our Hudi table structure to avoid 
concurrent writes to the same partition as much as possible (batching 
workloads together, sequencing jobs, etc.).
   
   For us, the DynamoDB lock provider wasn't able to do any write retries, 
so it just failed the Spark job. We increased the YARN and Spark retry 
settings so failed jobs are automatically retried from the cluster side.
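   As a rough sketch of the setup described above (the table name, region, 
and retry counts are placeholders, not values from this thread), the 
relevant Hudi lock-provider and cluster-side retry settings look something 
like this:

   ```properties
   # Hudi optimistic concurrency control with the DynamoDB lock provider
   hoodie.write.concurrency.mode=optimistic_concurrency_control
   hoodie.write.lock.provider=org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider
   hoodie.write.lock.dynamodb.table=my-hudi-locks        # placeholder table name
   hoodie.write.lock.dynamodb.region=us-east-1           # placeholder region

   # Cluster-side retries (Spark on YARN), since the lock provider
   # itself does not retry the failed write
   spark.yarn.maxAppAttempts=3
   ```

   Note these only re-run the whole application on failure; the lock 
provider still fails the individual write when it cannot acquire the lock.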

