luffyd commented on issue #1872:
URL: https://github.com/apache/hudi/issues/1872#issuecomment-677887900


   @yuhadooper 
   I added logs around Hudi's `createMarkerFile` calls; that call rate was not very high, so some other part of Hudi was probably consuming the S3 request limits.
   Another thing: adding these S3 retry configurations at Spark context creation time does not seem to work.
   
   I added them at cluster creation time instead and did not see the FAULT errors again, though the retries could add latency to the processing.
   
   ```
   {
       "Classification": "emrfs-site",
       "Properties": {
           "fs.s3.maxRetries": "50",
           "fs.s3.sleepTimeSeconds": "600"
       }
   }
   ```
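   As a minimal sketch of how that classification can be applied at cluster creation time: the snippet below writes the same `emrfs-site` entry into a JSON file in the shape EMR expects (a list of configuration objects), which could then be passed via something like `aws emr create-cluster --configurations file://emrfs.json`. The file name `emrfs.json` and the CLI invocation are illustrative assumptions, not from the original comment; the property values are the ones above.

   ```python
   import json

   # Assumed file name for illustration; EMR's --configurations option takes a
   # JSON list of {"Classification", "Properties"} objects like this one.
   emrfs_retry_config = [
       {
           "Classification": "emrfs-site",
           "Properties": {
               # Values from the comment above; tune for your own workload.
               "fs.s3.maxRetries": "50",
               "fs.s3.sleepTimeSeconds": "600",
           },
       }
   ]

   with open("emrfs.json", "w") as f:
       json.dump(emrfs_retry_config, f, indent=4)
   ```

   Note that these settings only take effect for clusters launched with them; they cannot be injected into an already-running Spark context, which matches what I observed.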
   
   
   What error are you noticing? Can you share it?


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
