dbtsai commented on pull request #28788:
URL: https://github.com/apache/spark/pull/28788#issuecomment-642296071


   @tgravescs I totally understand this could be a breaking change in Spark 3.1. That being said, it's a positive breaking change as long as we document the behavior well.
   
   Many engineers run into issues where their applications pick up two Hadoop versions, and problems arise when the Hadoop version on the cluster is very different from the standard Hadoop jars bundled with Spark. In that case, transitive dependencies such as Guava and Jackson can differ significantly, resulting in runtime errors.
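   
   A quick way to spot such a version split (a minimal diagnostic sketch; the jar names matched here are illustrative, not from this PR) is to compare the Guava/Jackson jars bundled with Spark against those on the cluster's Hadoop classpath:
   
   ```bash
   # List the Guava/Jackson jars that ship inside the Spark distribution.
   ls "$SPARK_HOME/jars" | grep -E 'guava|jackson'
   
   # List the Guava/Jackson jars the cluster's Hadoop classpath provides
   # (--glob expands wildcard entries so individual jars are visible).
   hadoop classpath --glob | tr ':' '\n' | grep -E 'guava|jackson'
   ```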
   
   As you mentioned, if you are using jars that are on the cluster's classpath but not in Spark's `with-hadoop` distribution, you should ship them together with your application. Alternatively, you can use the `no-hadoop` distribution, in which case the entire Hadoop classpath is provided by the cluster.
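   
   For concreteness, a sketch of the two options (the jar and class names are hypothetical; `SPARK_DIST_CLASSPATH` is the documented hook for the Hadoop-free build):
   
   ```bash
   # Option 1: with-hadoop distribution -- ship cluster-only jars
   # together with the application (cluster-only-lib.jar is hypothetical).
   spark-submit --jars /path/to/cluster-only-lib.jar \
     --class com.example.App app.jar
   
   # Option 2: no-hadoop distribution -- let the cluster supply the entire
   # Hadoop classpath, typically set in conf/spark-env.sh.
   export SPARK_DIST_CLASSPATH=$(hadoop classpath)
   ```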

