dongjoon-hyun edited a comment on pull request #34089:
URL: https://github.com/apache/spark/pull/34089#issuecomment-1035848643


   It seems that you are using the term `breaking change` in a broader sense.
   
   When Apache Spark changes a default configuration, we document it in the 
migration guide, and we can add this change there as well. In addition, we can 
override the default from the Spark side, as we did for Hadoop with 
`spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version`.
   > I see the conclusion in the analysis, "We don't understand the behavior of 
acks=all and acks=1 across different workloads and across the entire latency 
spectrum. We should leave the default as is.", yet the default has changed.
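
   For illustration, a sketch of the Spark-side override mentioned above (not 
part of this PR): Spark forwards any `spark.hadoop.*` property into the Hadoop 
`Configuration`, so the committer algorithm can be pinned regardless of the 
Hadoop default. The Kafka sink similarly forwards `kafka.`-prefixed options to 
the underlying producer, so a setting like `acks` could be pinned explicitly. 
The application name and bootstrap address below are placeholders.

   ```shell
   # Pin the Hadoop committer algorithm from the Spark side, independent of
   # the Hadoop default (spark.hadoop.* is forwarded to the Hadoop Configuration).
   spark-submit \
     --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=1 \
     my_job.py   # placeholder application

   # Likewise, the Kafka sink passes "kafka."-prefixed options to the producer,
   # so acks could be set explicitly in the writer, e.g.:
   #   df.writeStream.format("kafka")
   #     .option("kafka.bootstrap.servers", "host:9092")  # placeholder
   #     .option("kafka.acks", "all")
   ```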
   
   Anything else you want to add?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


