dongjoon-hyun commented on pull request #29895:
URL: https://github.com/apache/spark/pull/29895#issuecomment-700821012


   Hi, @steveloughran and @tgravescs . 
   
   Whatever happens in the future, they cannot change history (Apache Hadoop 3.2.0 is already released). For now, Apache Spark 3.1 is stuck on Apache Hadoop 3.2.0 due to the Guava issue. That's why we need to do this right now on the Spark side.
   
   For the following, @steveloughran , as I wrote in the PR description, this PR does not override an explicit user-given config. It only sets `v1` when there is no explicit setting.
   > V2 is used in places where people have hit the scale limits with v1, and 
they are happy with the risk of failures. 
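   As a concrete (hypothetical) illustration of what is preserved: a user who has hit the v1 scale limits can still opt in to v2 explicitly, and this PR leaves that choice untouched. The config key is the standard Hadoop committer setting passed through Spark's `spark.hadoop.*` prefix; the class and jar names below are placeholders.
   
   ```shell
   # Explicitly requesting the v2 committer algorithm still works;
   # the PR only supplies v1 as the default when this is absent.
   # (org.example.MyJob / myjob.jar are hypothetical placeholders.)
   spark-submit \
     --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2 \
     --class org.example.MyJob \
     myjob.jar
   ```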
   
   Eventually, I believe we can use `hadoop-client-runtime` only, in order to remove the Guava dependency (#29843) and consume @steveloughran 's new Hadoop release in the future.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


